blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 288 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 684 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 147 values | src_encoding stringclasses 25 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 128 12.7k | extension stringclasses 142 values | content stringlengths 128 8.19k | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6733dbf2e59a135a057a56d9eee35b59737f3702 | e1950865f000adc926f228d84131e20b244b48f6 | /python/Array/Difference_largest&smallest_value.py | 8abca0f2e09d3273617df883d8b580c0038b9e95 | [] | no_license | manastole03/Programming-practice | c73859b13392a6a1036f557fa975225672fb1e91 | 2889dc94068b8d778f6b0cf516982d7104fa2318 | refs/heads/master | 2022-12-06T07:48:47.237014 | 2020-08-29T18:22:59 | 2020-08-29T18:22:59 | 281,708,273 | 3 | 1 | null | null | null | null | UTF-8 | Python | false | false | 315 | py | array=[]
n=int(input('How many elements you want in array: '))
for i in range(n):
f= int(input('Enter no: '))
array.append(f)
print('Entered array: ',array)
if len(array)>=1:
max1=max(array)
min1 = min(array)
diff=max1-min1
print('The difference of largest & smallest value from array: ',diff)
| [
"noreply@github.com"
] | manastole03.noreply@github.com |
4e1634773b2a14ec7748b2f1e814115299f79cb7 | 744594f30c5e283f6252909fc68102dd7bc61091 | /2020/24/2020_day_24_1.py | 245b430c35de9e6f604111ab201b953f313c1c1d | [
"MIT"
] | permissive | vScourge/Advent_of_Code | 84f40c76e5dc13977876eea6dbea7d05637de686 | 36e4f428129502ddc93c3f8ba7950aed0a7314bb | refs/heads/master | 2022-12-20T22:12:28.646102 | 2022-12-15T22:16:28 | 2022-12-15T22:16:28 | 160,765,438 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,010 | py | """
--- Day 24: Lobby Layout ---
Your raft makes it to the tropical island; it turns out that the small crab was an excellent navigator.
You make your way to the resort.
As you enter the lobby, you discover a small problem: the floor is being renovated. You can't even reach the
check-in desk until they've finished installing the new tile floor.
The tiles are all hexagonal; they need to be arranged in a hex grid with a very specific color pattern.
Not in the mood to wait, you offer to help figure out the pattern.
The tiles are all white on one side and black on the other. They start with the white side facing up.
The lobby is large enough to fit whatever pattern might need to appear there.
A member of the renovation crew gives you a list of the tiles that need to be flipped over (your puzzle input).
Each line in the list identifies a single tile that needs to be flipped by giving a series of steps starting
from a reference tile in the very center of the room. (Every line starts from the same reference tile.)
Because the tiles are hexagonal, every tile has six neighbors: east, southeast, southwest, west, northwest, and northeast.
These directions are given in your list, respectively, as e, se, sw, w, nw, and ne. A tile is identified by a series of
these directions with no delimiters; for example, esenee identifies the tile you land on if you start at the reference
tile and then move one tile east, one tile southeast, one tile northeast, and one tile east.
Each time a tile is identified, it flips from white to black or from black to white. Tiles might be flipped more than once.
For example, a line like esew flips a tile immediately adjacent to the reference tile, and a line like nwwswee flips the reference tile itself.
Here is a larger example:
sesenwnenenewseeswwswswwnenewsewsw
neeenesenwnwwswnenewnwwsewnenwseswesw
seswneswswsenwwnwse
nwnwneseeswswnenewneswwnewseswneseene
swweswneswnenwsewnwneneseenw
eesenwseswswnenwswnwnwsewwnwsene
sewnenenenesenwsewnenwwwse
wenwwweseeeweswwwnwwe
wsweesenenewnwwnwsenewsenwwsesesenwne
neeswseenwwswnwswswnw
nenwswwsewswnenenewsenwsenwnesesenew
enewnwewneswsewnwswenweswnenwsenwsw
sweneswneswneneenwnewenewwneswswnese
swwesenesewenwneswnwwneseswwne
enesenwswwswneneswsenwnewswseenwsese
wnwnesenesenenwwnenwsewesewsesesew
nenewswnwewswnenesenwnesewesw
eneswnwswnwsenenwnwnwwseeswneewsenese
neswnwewnwnwseenwseesewsenwsweewe
wseweeenwnesenwwwswnew
In the above example, 10 tiles are flipped once (to black), and 5 more are flipped twice (to black, then back to white). After all of these instructions have been followed, a total of 10 tiles are black.
Go through the renovation crew's list and determine which tiles they need to flip. After all of the instructions have been followed, how many tiles are left with the black side up?
--- Part Two ---
The tile floor in the lobby is meant to be a living art exhibit. Every day, the tiles are all flipped according to the following rules:
Any black tile with zero or more than 2 black tiles immediately adjacent to it is flipped to white.
Any white tile with exactly 2 black tiles immediately adjacent to it is flipped to black.
Here, tiles immediately adjacent means the six tiles directly touching the tile in question.
The rules are applied simultaneously to every tile; put another way, it is first determined which tiles need to be flipped, then they are all flipped at the same time.
In the above example, the number of black tiles that are facing up after the given number of days has passed is as follows:
Day 1: 15
Day 2: 12
Day 3: 25
Day 4: 14
Day 5: 23
Day 6: 28
Day 7: 41
Day 8: 37
Day 9: 49
Day 10: 37
Day 20: 132
Day 30: 259
Day 40: 406
Day 50: 566
Day 60: 788
Day 70: 1106
Day 80: 1373
Day 90: 1844
Day 100: 2208
After executing this process a total of 100 times, there would be 2208 black tiles facing up.
How many tiles will be black after 100 days?
"""
### IMPORTS ###
import collections
import math
import numpy
import time
### CONSTANTS ###
INPUT_FILENAME = 'input.txt'
NE = 0
E = 1
SE = 2
SW = 3
W = 4
NW = 5
### FUNCTIONS ###
def parse_input( ):
lines = open( INPUT_FILENAME, 'r' ).read( ).splitlines( )
paths = [ ]
l = 0
for line in lines:
moves = [ ]
i = 0
while i < len( line ):
c = line[ i ]
if c == 'e':
moves.append( E )
i += 1
elif c == 'w':
moves.append( W )
i += 1
elif c == 'n':
if line[ i+1 ] == 'e':
moves.append( NE )
else:
moves.append( NW )
i += 2
else:
if line[ i+1 ] == 'e':
moves.append( SE )
else:
moves.append( SW )
i += 2
paths.append( moves )
l += 1
return paths
def move_on_grid( pos, direction ):
if direction == NE:
pos = ( pos[ 0 ], pos[ 1 ] - 1 )
elif direction == E:
pos = ( pos[ 0 ] + 1, pos[ 1 ] )
elif direction == SE:
pos = ( pos[ 0 ] + 1, pos[ 1 ] + 1 )
elif direction == SW:
pos = ( pos[ 0 ], pos[ 1 ] + 1 )
elif direction == W:
pos = ( pos[ 0 ] - 1, pos[ 1 ] )
elif direction == NW:
pos = ( pos[ 0 ] - 1, pos[ 1 ] - 1 )
return pos
def main( paths ):
"""
"""
grid = { (0,0): True } # True = white
pos = ( 0, 0 )
for path in paths:
print( '\nPath =', path )
for direction in path:
pos = move_on_grid( pos, direction )
if pos not in grid:
grid[ pos ] = True
print( 'dir {0}, pos {1} = {2}'.format( direction, pos, grid[ pos ] ) )
# Flip this tile
grid[ pos ] = not grid[ pos ]
# Reset to reference tile
pos = ( 0, 0 )
return list( grid.values( ) ).count( False )
# test answer = 67384529
### CLASSES ###
#class Point( ):
#def __init__( self, x, y ):
#self.x = x
#self.y = y
#def __repr__( self ):
#return '<Point ({0}, {1})>'.format( self.x, self.y )
### MAIN ###
if __name__ == "__main__":
time_start = time.perf_counter( )
paths = parse_input( )
answer = main( paths )
print( 'answer =', answer )
print( 'done in {0:.4f} secs'.format( time.perf_counter( ) - time_start ) )
# 30033 not right
| [
"adam.pletcher@gmail.com"
] | adam.pletcher@gmail.com |
06fa9bef1f87a28a6e7fa0953a214270b2aaadfe | 2845f06c6be4262e9a5e56ebf407d824543f42cc | /tests/test_roles_pages_database.py | 6275f1d40dd5252fb381f743fe2eef31f78d63e0 | [
"CC0-1.0"
] | permissive | silky/WALKOFF | 42c315b35aadf42dc5f31074b7b6eff441338f61 | d4f4afad47e8c57b71647175978650520c061f87 | refs/heads/master | 2021-08-31T15:10:11.347414 | 2017-12-20T15:22:59 | 2017-12-20T15:22:59 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,415 | py | import unittest
from server.database import db, Role, ResourcePermission, default_resources
class TestRoles(unittest.TestCase):
@classmethod
def setUpClass(cls):
import server.flaskserver
cls.context = server.flaskserver.app.test_request_context()
cls.context.push()
db.create_all()
def tearDown(self):
db.session.rollback()
for role in [role for role in Role.query.all() if role.name != 'admin']:
db.session.delete(role)
for resource in [resource for resource in ResourcePermission.query.all() if
resource.resource not in default_resources]:
db.session.delete(resource)
db.session.commit()
def assertRoleConstructionIsCorrect(self, role, name, description='', resources=None):
self.assertEqual(role.name, name)
self.assertEqual(role.description, description)
expected_resources = set(resources) if resources is not None else set()
self.assertSetEqual({resource.resource for resource in role.resources}, expected_resources)
def test_resources_init(self):
resource = ResourcePermission(resource='/test/resource')
self.assertEqual(resource.resource, '/test/resource')
def test_resources_as_json(self):
resource = ResourcePermission(resource='/test/resource')
self.assertDictEqual(resource.as_json(), {'resource': '/test/resource'})
def test_role_init_default(self):
role = Role(name='test')
self.assertRoleConstructionIsCorrect(role, 'test')
def test_role_init_with_description(self):
role = Role(name='test', description='desc')
self.assertRoleConstructionIsCorrect(role, 'test', description='desc')
def test_role_init_with_resources_none_in_db(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', resources=resources)
db.session.add(role)
self.assertRoleConstructionIsCorrect(role, 'test', resources=resources)
self.assertSetEqual({resource.resource for resource in ResourcePermission.query.all()},
set(default_resources) | set(resources))
def test_role_init_with_some_in_db(self):
resources = ['resource1', 'resource2', 'resource3']
db.session.add(ResourcePermission('resource1'))
role = Role(name='test', resources=resources)
db.session.add(role)
self.assertRoleConstructionIsCorrect(role, 'test', resources=resources)
self.assertSetEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
for resource in (resource for resource in ResourcePermission.query.all() if resource.resource in resources):
self.assertListEqual([role.name for role in resource.roles], ['test'])
def test_set_resources_to_role_no_resources_to_add(self):
role = Role(name='test')
role.set_resources([])
self.assertListEqual(role.resources, [])
def test_set_resources_to_role_with_no_resources_and_no_resources_in_db(self):
role = Role(name='test')
resources = ['resource1', 'resource2']
role.set_resources(resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
def test_set_resources_to_role_with_no_resources_and_resources_in_db(self):
role = Role(name='test')
db.session.add(ResourcePermission('resource1'))
resources = ['resource1', 'resource2']
role.set_resources(resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
def test_set_resources_to_role_with_existing_resources_with_overlap(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', resources=resources)
new_resources = ['resource3', 'resource4', 'resource5']
role.set_resources(new_resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(new_resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(new_resources) | set(default_resources))
def test_set_resources_to_role_shared_resources(self):
resources1 = ['resource1', 'resource2', 'resource3', 'resource4']
overlap_resources = ['resource3', 'resource4']
resources2 = ['resource3', 'resource4', 'resource5', 'resource6']
role1 = Role(name='test1', resources=resources1)
db.session.add(role1)
role2 = Role(name='test2', resources=resources2)
db.session.add(role2)
db.session.commit()
self.assertSetEqual({resource.resource for resource in role1.resources}, set(resources1))
self.assertSetEqual({resource.resource for resource in role2.resources}, set(resources2))
def assert_resources_have_correct_roles(resources, roles):
for resource in resources:
resource = ResourcePermission.query.filter_by(resource=resource).first()
self.assertSetEqual({role.name for role in resource.roles}, roles)
assert_resources_have_correct_roles(['resource1', 'resource2'], {'test1'})
assert_resources_have_correct_roles(overlap_resources, {'test1', 'test2'})
assert_resources_have_correct_roles(['resource5', 'resource6'], {'test2'})
def test_resource_as_json_with_multiple_roles(self):
resources1 = ['resource1', 'resource2', 'resource3', 'resource4']
overlap_resources = ['resource3', 'resource4']
resources2 = ['resource3', 'resource4', 'resource5', 'resource6']
role1 = Role(name='test1', resources=resources1)
db.session.add(role1)
role2 = Role(name='test2', resources=resources2)
db.session.add(role2)
db.session.commit()
def assert_resource_json_is_correct(resources, roles):
for resource in resources:
resource_json = ResourcePermission.query.filter_by(resource=resource).first().as_json(with_roles=True)
self.assertEqual(resource_json['resource'], resource)
self.assertSetEqual(set(resource_json['roles']), roles)
assert_resource_json_is_correct(['resource1', 'resource2'], {'test1'})
assert_resource_json_is_correct(overlap_resources, {'test1', 'test2'})
assert_resource_json_is_correct(['resource5', 'resource6'], {'test2'})
def test_role_as_json(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', description='desc', resources=resources)
role_json = role.as_json()
self.assertSetEqual(set(role_json.keys()), {'name', 'description', 'resources', 'id'})
self.assertEqual(role_json['name'], 'test')
self.assertEqual(role_json['description'], 'desc')
self.assertSetEqual(set(role_json['resources']), set(resources))
| [
"Tervala_Justin@bah.com"
] | Tervala_Justin@bah.com |
aabcf429ef53a8ac43a20aa0dedcc5f2bbefab71 | 27e890f900bd4bfb2e66f4eab85bc381cf4d5d3f | /tests/unit/modules/remote_management/oneview/test_oneview_ethernet_network_info.py | 566e168bfb951fa3ff89894ff981cfced17feebf | [] | no_license | coll-test/notstdlib.moveitallout | eb33a560070bbded5032385d0aea2f3cf60e690b | 0987f099b783c6cf977db9233e1c3d9efcbcb3c7 | refs/heads/master | 2020-12-19T22:28:33.369557 | 2020-01-23T18:51:26 | 2020-01-23T18:51:26 | 235,865,139 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,645 | py | # Copyright (c) 2016-2017 Hewlett Packard Enterprise Development LP
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from ansible_collections.notstdlib.moveitallout.tests.unit.compat import unittest
from .oneview_module_loader import EthernetNetworkInfoModule
from .hpe_test_utils import FactsParamsTestCase
ERROR_MSG = 'Fake message error'
PARAMS_GET_ALL = dict(
config='config.json',
name=None
)
PARAMS_GET_BY_NAME = dict(
config='config.json',
name="Test Ethernet Network",
options=[]
)
PARAMS_GET_BY_NAME_WITH_OPTIONS = dict(
config='config.json',
name="Test Ethernet Network",
options=['associatedProfiles', 'associatedUplinkGroups']
)
PRESENT_ENETS = [{
"name": "Test Ethernet Network",
"uri": "/rest/ethernet-networks/d34dcf5e-0d8e-441c-b00d-e1dd6a067188"
}]
ENET_ASSOCIATED_UPLINK_GROUP_URIS = [
"/rest/uplink-sets/c6bf9af9-48e7-4236-b08a-77684dc258a5",
"/rest/uplink-sets/e2f0031b-52bd-4223-9ac1-d91cb519d548"
]
ENET_ASSOCIATED_PROFILE_URIS = [
"/rest/server-profiles/83e2e117-59dc-4e33-9f24-462af951cbbe",
"/rest/server-profiles/57d3af2a-b6d2-4446-8645-f38dd808ea4d"
]
ENET_ASSOCIATED_UPLINK_GROUPS = [dict(uri=ENET_ASSOCIATED_UPLINK_GROUP_URIS[0], name='Uplink Set 1'),
dict(uri=ENET_ASSOCIATED_UPLINK_GROUP_URIS[1], name='Uplink Set 2')]
ENET_ASSOCIATED_PROFILES = [dict(uri=ENET_ASSOCIATED_PROFILE_URIS[0], name='Server Profile 1'),
dict(uri=ENET_ASSOCIATED_PROFILE_URIS[1], name='Server Profile 2')]
class EthernetNetworkInfoSpec(unittest.TestCase,
FactsParamsTestCase
):
def setUp(self):
self.configure_mocks(self, EthernetNetworkInfoModule)
self.ethernet_networks = self.mock_ov_client.ethernet_networks
FactsParamsTestCase.configure_client_mock(self, self.ethernet_networks)
def test_should_get_all_enets(self):
self.ethernet_networks.get_all.return_value = PRESENT_ENETS
self.mock_ansible_module.params = PARAMS_GET_ALL
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=(PRESENT_ENETS)
)
def test_should_get_enet_by_name(self):
self.ethernet_networks.get_by.return_value = PRESENT_ENETS
self.mock_ansible_module.params = PARAMS_GET_BY_NAME
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=(PRESENT_ENETS)
)
def test_should_get_enet_by_name_with_options(self):
self.ethernet_networks.get_by.return_value = PRESENT_ENETS
self.ethernet_networks.get_associated_profiles.return_value = ENET_ASSOCIATED_PROFILE_URIS
self.ethernet_networks.get_associated_uplink_groups.return_value = ENET_ASSOCIATED_UPLINK_GROUP_URIS
self.mock_ov_client.server_profiles.get.side_effect = ENET_ASSOCIATED_PROFILES
self.mock_ov_client.uplink_sets.get.side_effect = ENET_ASSOCIATED_UPLINK_GROUPS
self.mock_ansible_module.params = PARAMS_GET_BY_NAME_WITH_OPTIONS
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=PRESENT_ENETS,
enet_associated_profiles=ENET_ASSOCIATED_PROFILES,
enet_associated_uplink_groups=ENET_ASSOCIATED_UPLINK_GROUPS
)
if __name__ == '__main__':
unittest.main()
| [
"wk@sydorenko.org.ua"
] | wk@sydorenko.org.ua |
d05e46ed8437f7d527414745aa1d21a565f77bcf | 2c4763aa544344a3a615f9a65d1ded7d0f59ae50 | /playground/maxjobs2/compute/look_busy.py | 1efd78e04fb1aacdaf55a0b72015541ac14e3022 | [] | no_license | afeldman/waf | 572bf95d6b11571bbb2941ba0fe463402b1e39f3 | 4c489b38fe1520ec1bc0fa7e1521f7129c20f8b6 | refs/heads/master | 2021-05-09T18:18:16.598191 | 2019-03-05T06:33:42 | 2019-03-05T06:33:42 | 58,713,085 | 0 | 0 | null | 2016-05-13T07:34:33 | 2016-05-13T07:34:33 | null | UTF-8 | Python | false | false | 161 | py | #! /usr/bin/env python
import sys, time
loops = int(sys.argv[1])
if not loops:
time.sleep(1)
else:
for i in range(loops):
time.sleep(1)
| [
"anton.feldmann@outlook.de"
] | anton.feldmann@outlook.de |
58b4b646201129910d97928a3ebbcb3ee03945ce | e04dbc32247accf073e3089ed4013427ad182c7c | /ABC116/ABC116Canother.py | b53f770c6c899ccae48911c0efb0f9a92ef13287 | [] | no_license | twobooks/atcoder_training | 9deb237aed7d9de573c1134a858e96243fb73ca0 | aa81799ec87cc9c9d76de85c55e99ad5fa7676b5 | refs/heads/master | 2021-10-28T06:33:19.459975 | 2021-10-20T14:16:57 | 2021-10-20T14:16:57 | 233,233,854 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 959 | py | # from math import factorial,sqrt,ceil,gcd
# from itertools import permutations as permus
# from collections import deque,Counter
# import re
# from functools import lru_cache # 簡単メモ化 @lru_cache(maxsize=1000)
# from decimal import Decimal, getcontext
# # getcontext().prec = 1000
# # eps = Decimal(10) ** (-100)
# import numpy as np
# import networkx as nx
# from scipy.sparse.csgraph import shortest_path, dijkstra, floyd_warshall, bellman_ford, johnson
# from scipy.sparse import csr_matrix
# from scipy.special import comb
# slist = "abcdefghijklmnopqrstuvwxyz"
N = int(input())
arrA = list(map(int,input().split()))
pre = 0
ans = 0
for i in arrA:
if pre<i:
ans += i - pre
pre = i
print(ans)
# print(*ans) # unpackして出力。間にスペースが入る
# for row in board:
# print(*row,sep="") #unpackして間にスペース入れずに出力する
# print("{:.10f}".format(ans))
# print("{:0=10d}".format(ans))
| [
"twobookscom@gmail.com"
] | twobookscom@gmail.com |
281bea9f30ded0b2fb8d1e0706f225679bb9f704 | 5ca5a7120c3c147b3ae86c2271c60c82745997ea | /my_python/my_selenium/base/test_13_download.py | fd85ceb34ea10aa5513974884abf4560202e7507 | [] | no_license | JR1QQ4/auto_test | 6b9ea7bd317fd4338ac0964ffd4042b293640af3 | 264b991b4dad72986e2aeb1a30812baf74e42bc6 | refs/heads/main | 2023-03-21T01:32:29.192030 | 2021-03-16T14:07:11 | 2021-03-16T14:07:11 | 321,591,405 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 455 | py | #!/usr/bin/python
# -*- coding:utf-8 -*-
import os
from selenium import webdriver
options = webdriver.ChromeOptions()
prefs = {'profile.default_content_settings.popups': 0,
'download.default_directory': os.getcwd()}
options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://pypi.org/project/selenium/#files")
driver.find_element_by_partial_link_text("selenium-3.141.0.tar.gz").click()
| [
"chenjunrenyx@163.com"
] | chenjunrenyx@163.com |
3f1d5036eb1e0ed7004a294759da99858e649cde | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/458/usersdata/323/109641/submittedfiles/programa.py | ad68b922ad244e7ce1ace15c98e16edce333698a | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 607 | py | # -*- coding: utf-8 -*-
import numpy as np
soma=0
pesos=[]
while True:
n=int(input('Digite a dimensão do tabuleiro(n>=3): '))
if n>=3:
break
a=np.zeros((n,n))
for i in range(0,n,1):
for j in range(0,n,1):
a[i,j]= int(input('Digite o elemento da matriz: '))
somalin=0
somacol=0
for i in range(0,n,1):
for e in range(0,n,1):
for j in range(0,n,1):
somalin= somalin+ a[i,j]
somacol=somacol+a[j,e]
soma= somalin+somacol - 2*a[i,e]
pesos.append(soma)
somalin=0
somacol=0
soma=0
print(max(pesos))
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
c019752c841e55e8463b38905d9be4765ca6ac17 | 860628a9330d18d5c803c24eb314bd3756411c32 | /tweet-sentiment-extraction/src/conf/model_config_roberta.py | 0c483b4a10fbccb760230e605508629ed3980f27 | [] | no_license | yphacker/kaggle | fb24bdcc88d55c2a9cee347fcac48f13cb30ca45 | fd3c1c2d5ddf53233560ba4bbd68a2c5c17213ad | refs/heads/master | 2021-07-11T00:22:04.472777 | 2020-06-10T17:01:51 | 2020-06-10T17:01:51 | 145,675,378 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 533 | py | # coding=utf-8
# author=yphacker
import os
from conf import config
from tokenizers import ByteLevelBPETokenizer
model_name = 'roberta'
# pretrain_model_name = 'roberta-base'
pretrain_model_name = 'roberta-large'
pretrain_model_path = os.path.join(config.input_path, pretrain_model_name)
tokenizer = ByteLevelBPETokenizer(
vocab_file='{}/vocab.json'.format(pretrain_model_path),
merges_file='{}/merges.txt'.format(pretrain_model_path),
lowercase=True,
add_prefix_space=True
)
learning_rate = 5e-5
adjust_lr_num = 0
| [
"yphacker@163.com"
] | yphacker@163.com |
00edb4322460d93f2d3711152adb763567f1e9d3 | ad69e42fbe0cdb27406497855885ad7fcdad55aa | /lib/ViDE/Shell/InstallTools.py | f58a12f3aa13c051b39a1d892257bbef56bfd256 | [] | no_license | pombredanne/ViDE | 880b921a8240bbc9ac136f98bf4a12662684a5d1 | 0ec9860a9ebccc7bcc62e8fb39d6ebbc196c1360 | refs/heads/master | 2021-01-22T17:15:20.518689 | 2013-03-23T22:10:04 | 2013-03-23T22:10:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,769 | py | from Misc import InteractiveCommandLineProgram as ICLP
from ViDE import Log
from ViDE.Core.Action import CompoundException
from ViDE.Core.ExecutionReport import ExecutionReport
from ViDE.Context import Context
class InstallTools( ICLP.Command ):
def __init__( self, program ):
ICLP.Command.__init__( self, program )
self.jobs = 1
self.addOption( [ "j", "jobs" ], "jobs", ICLP.StoreArgument( "JOBS" ), "use JOBS parallel jobs" )
self.keepGoing = False
self.addOption( [ "k", "keep-going" ], "keepGoing", ICLP.StoreConstant( True ), "keep going in case of failure" )
self.dryRun = False
self.addOption( [ "n", "dry-run" ], "dryRun", ICLP.StoreConstant( True ), "print commands instead of executing them" )
self.downloadOnly = False
self.addOption( [ "d", "dl-only" ], "downloadOnly", ICLP.StoreConstant( True ), "only do the part of installation which needs internet access" )
def execute( self, args ):
context = Context( self.program )
if self.downloadOnly:
artifact = context.toolset.getFetchArtifact()
else:
artifact = context.toolset.getInstallArtifact()
action = artifact.getProductionAction()
if self.dryRun:
print "\n".join( action.preview() )
else:
try:
action.execute( self.keepGoing, self.jobs )
except CompoundException, e:
Log.error( "installation failed", e )
finally:
report = ExecutionReport( action, 800 )
report.drawTo( "installation-report.png" )
artifact.getGraph().drawTo( "installation-artifacts.png" )
action.getGraph().drawTo( "installation-actions.png" )
| [
"vincent@vincent-jacques.net"
] | vincent@vincent-jacques.net |
2b321f3e9f564f12d0040086f7d63bdaa06e9062 | b24e45267a8d01b7d3584d062ac9441b01fd7b35 | /Usuario/.history/serializers_20191103100253.py | a533fa8f2dba6f7033a69b0426c8326793d51d3a | [] | no_license | slalbertojesus/merixo-rest | 1707b198f31293ced38930a31ab524c0f9a6696c | 5c12790fd5bc7ec457baad07260ca26a8641785d | refs/heads/master | 2022-12-10T18:56:36.346159 | 2020-05-02T00:42:39 | 2020-05-02T00:42:39 | 212,175,889 | 0 | 0 | null | 2022-12-08T07:00:07 | 2019-10-01T18:56:45 | Python | UTF-8 | Python | false | false | 1,033 | py | from rest_framework import serializers
from django.db import models
from .models import Usuario
class UsuarioSerializer(serializers.ModelSerializer):
passwordConfirm = serializers.CharField(style={'input_type':'password'}, write_only=True)
class Meta:
model = Usuario
fields = ['usuario', 'password', 'nombre', 'correo', 'passwordConfirm', 'rol']
extra_kwargs = {
'password':{'write_only': True}
}
def save(self):
usuario = Usuario(
usuario = self.validated_data['usuario'],
correo = self.validated_data['correo'],
rol = "usuario"
)
password = self.validated_data['password']
passwordConfirm = self.validated_data['passwordConfirm']
if password != passwordConfirm:
raise serializers.ValidationError({'password': 'Las contraseñas no son iguales'})
usuario.password = password
usuario.save()
return usuario
| [
"slalbertojesus@gmail.com"
] | slalbertojesus@gmail.com |
caa393008241d3cd3ea683c33ff81cd65a23c46b | 207e93eca4bc1a9bb66d94820a52f51e3626db22 | /1024文章/test.py | 29bf11b6bac61b5a445e0c5f257659bd1630c59d | [] | no_license | 99tian/Python-spider | 055d19ab72b3cd2f0ed3da187920e52edddf8c24 | 8718f75fbe95ad37105212a437279de107995fb8 | refs/heads/master | 2020-06-26T07:19:36.827530 | 2019-07-29T11:36:22 | 2019-07-29T11:36:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,114 | py | import requests
import re
new_url = "https://1024.fil6.tk/htm_data/1907/16/3586699.html"
header = {
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36",
"cookie": "__cfduid=d991240ded8f691413d0a5238e78525ee1563844869; UM_distinctid=16c1c6ba0a3d2-0247f6cbe42a12-c343162-100200-16c1c6ba0a462d; PHPSESSID=cjkptobgiir2bmui06ttr75gi6; 227c9_lastvisit=0%091563848591%09%2Findex.php%3Fu%3D481320%26vcencode%3D5793155999; CNZZDATA950900=cnzz_eid%3D589544733-1563841323-%26ntime%3D1563848242"
}
print(new_url)
res = requests.get(new_url, headers=header)
# print(res.content.decode('gbk', 'ignore'))
# res_html = res.text()
# with open("1.html", 'w') as f:
# f.write(res.content.decode('gbk', 'ignore'))
res_html = res.content.decode('gbk', 'ignore')
res_jpg = re.findall(r"data-src='(.*?)'", res_html)
print(res_jpg)
number = 0
for i in res_jpg:
print(i)
number += 1
res_jpg = requests.get(i, headers=header)
save_path = "{}.jpg".format(number)
with open(save_path, "wb") as f:
f.write(res_jpg.content) | [
"15670339118@qq.com"
] | 15670339118@qq.com |
35c350e7307608d6ea6d0e070989b6efc28cdd32 | 8bd1cc4e963766d34f635d870e2ad792afdc6ae1 | /0x01-python-if_else_loops_functions/12-fizzbuzz.py | 26d488fac778767d9b279079df2e61f5a24ca59b | [] | no_license | nildiert/holbertonschool-higher_level_programming | 71e6747163a04510b056b898503213c90419e5dc | 5e9ea8cd2f2d0ed01e65e6866d4df751f7771099 | refs/heads/master | 2021-07-06T06:21:01.143189 | 2020-09-22T17:45:26 | 2020-09-22T17:45:26 | 184,036,414 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 309 | py | #!/usr/bin/python3
def fizzbuzz():
for i in range(1, 101):
if (i % 3 != 0 and i % 5 != 0):
print("{:d}".format(i), end=' ')
else:
print("{}".format("Fizz" if ((i % 3) == 0) else ""), end='')
print("{}".format("Buzz" if ((i % 5) == 0) else ""), end=' ')
| [
"niljordan23@gmail.com"
] | niljordan23@gmail.com |
6bffacdc5e48bcf1112d0cf9725bcc9795592413 | 933833285c122fea63e2397717469a9701139e7b | /app/api/patients/actions/put_patient.py | 822af27482bdc004b7e6811b65c32ce5212583b3 | [] | no_license | borgishmorg/mis-backend | ea4371b7ae5ebb46b7cd0c9eb26640c47dc67656 | f4dbcbfb8c9af717e54eadab24a1dfee3e537cee | refs/heads/master | 2023-05-31T05:02:18.947337 | 2021-06-11T22:55:01 | 2021-06-11T22:55:01 | 361,906,170 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 711 | py | from fastapi import Depends, Path, HTTPException, status
from app.dependencies import token_payload, TokenPayload, Permission
from ..controller import PatientsController, PatientDoesNotExistException
from ..schemas import Patient, PatientIn
def put_patient(
patient_in: PatientIn,
id: int = Path(...),
patients: PatientsController = Depends(),
token_payload: TokenPayload = Depends(token_payload(permissions=[Permission.PATIENTS_EDIT]))
) -> Patient:
try:
return patients.update_patient(id, patient_in)
except PatientDoesNotExistException as exception:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exception)
)
| [
"43957407+borgishmorg@users.noreply.github.com"
] | 43957407+borgishmorg@users.noreply.github.com |
83940fd7a2aaea529631f8864801e929ecc5f18f | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/adjectives/_narrower.py | dd7f198f8464d92004e1bced39ed083f41cb636d | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 250 | py |
from xai.brain.wordbase.adjectives._narrow import _NARROW
#calss header
class _NARROWER(_NARROW, ):
def __init__(self,):
_NARROW.__init__(self)
self.name = "NARROWER"
self.specie = 'adjectives'
self.basic = "narrow"
self.jsondata = {}
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
1324f492677704a221972cd8a2be003c924f9c79 | 608bc4314c5d91744c0731b91882e124fd44fb9a | /protomol-test/MDLTests/data/XYZTest.py | 19bd5d247903a2a89a0b393778ccf410bdf2e151 | [] | no_license | kuangchen/ProtoMolAddon | bfd1a4f10e7d732b8ed22d38bfa3c7d1f0b228c0 | 78c96b72204e301d36f8cbe03397f2a02377279f | refs/heads/master | 2021-01-10T19:55:40.467574 | 2015-06-09T21:18:51 | 2015-06-09T21:18:51 | 19,328,104 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 530 | py | # USING THE NEW STRUCTURE
from MDL import *
# PHYSICAL
phys = Physical()
io = IO()
io.readPDBPos(phys, "data/alanine.pdb")
io.readPSF(phys, "data/alanine.psf")
io.readPAR(phys, "data/alanine.par")
phys.bc = "Vacuum"
phys.cellsize = 5
phys.exclude = "scaled1-4"
phys.temperature = 300
phys.seed = 1234
# FORCES
forces = Forces()
ff = forces.makeForceField(phys, "charmm")
# EXECUTE
prop = Propagator(phys, forces, io)
gamma = prop.propagate("Leapfrog", steps=20, dt=0.5, forcefield=ff)
io.writeXYZPos(phys, 'data/XYZTest.xyz')
| [
"kuangchen@ucla.edu"
] | kuangchen@ucla.edu |
feb6ea6de93053f94bd6847ba66ec91a7becadda | 86a03fa5371909a8ae8a5df02414753ab7826136 | /polls/views.py | 2283ef7746d3a557add5e2e8105a6f2efeec0f32 | [] | no_license | lnvc/backup | dc6fc1503f7b691f7533e086afba49243cc18a27 | aec426cafcf6966bf333e5d7ea3cb59ae71c23c5 | refs/heads/master | 2020-03-23T22:08:44.042727 | 2018-07-24T12:41:12 | 2018-07-24T12:41:12 | 142,156,078 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,986 | py | from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render
from django.urls import reverse
from django.views import generic
from django.utils import timezone
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status, viewsets, generics
from . serializers import QuestionSerializer, PersonSerializer, SubjectSerializer, QuestionSubjectSerializer, TestSerializer, TestQuestionSerializer, StudentSerializer, QuestionMakerSerializer, SchoolSerializer, AttendSchoolSerializer, CoachSerializer, CoachStudentSerializer, ParentSerializer, ParentStudentSerializer, TeamSerializer, StudentTeamSerializer, TestSubmissionSerializer, SubmissionSerializer
from rest_framework.parsers import MultiPartParser, FormParser, JSONParser
from .models import Question, Person, Subject, Question_Subject, Test, Test_Question, Student, Question_Maker, School, Attend_School, Coach, Coach_Student, Parent, Parent_Student, Team, Student_Team, Test_Submission, Submission
# def index(request):
# latest_question_list = Question.objects.order_by('-pub_date')[:5]
# context = {'latest_question_list': latest_question_list }
# return render(request, 'polls/index.html', context)
# def detail(request, question_id):
# question = get_object_or_404(Question, pk=question_id)
# return render(request, 'polls/detail.html', {'question': question})
# def results(request, question_id):
# result = get_object_or_404(Question, pk=question_id)
# return render(request, 'polls/results.html', {'result': result})
class PersonViewSet(viewsets.ModelViewSet):
queryset=Person.objects.all()
serializer_class=PersonSerializer
class QuestionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Question.objects.all()
serializer_class = QuestionSerializer
class SubjectViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Subject.objects.all()
serializer_class = SubjectSerializer
class QuestionSubjectViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Question_Subject.objects.all()
serializer_class = QuestionSubjectSerializer
class TestViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test.objects.all()
serializer_class = TestSerializer
class TestQuestionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test_Question.objects.all()
serializer_class = TestQuestionSerializer
class StudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Student.objects.all()
serializer_class = StudentSerializer
class QuestionMakerViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Question_Maker.objects.all()
serializer_class = QuestionMakerSerializer
class SchoolViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=School.objects.all()
serializer_class = SchoolSerializer
class AttendSchoolViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Attend_School.objects.all()
serializer_class = AttendSchoolSerializer
class CoachViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Coach.objects.all()
serializer_class = CoachSerializer
class CoachStudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Coach_Student.objects.all()
serializer_class = CoachStudentSerializer
class ParentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Parent.objects.all()
serializer_class = ParentSerializer
class ParentStudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Parent_Student.objects.all()
serializer_class = ParentStudentSerializer
class TeamViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Team.objects.all()
serializer_class = TeamSerializer
class StudentTeamViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Student_Team.objects.all()
serializer_class = StudentTeamSerializer
class TestSubmissionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test_Submission.objects.all()
serializer_class = TestSubmissionSerializer
class SubmissionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Submission.objects.all()
serializer_class = SubmissionSerializer
class QuestionList(APIView):
def get(self,request):
q = Question.objects.all()
        serializer = QuestionSerializer(q, many=True)
return Response(serializer.data)
def post(self,request):
serializer = QuestionSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status = status.HTTP_400_BAD_REQUEST)
class IndexView(generic.ListView):
template_name='polls/index.html'
context_object_name='latest_question_list'
    def get_queryset(self):
        """Return the last five published questions
        (not including those set to be published in
        the future)."""
        return Question.objects.filter(pub_date__lte=timezone.now()).order_by('-pub_date')[:5]
class DetailView(generic.DetailView):
model=Question
template_name='polls/detail.html'
def get_queryset(self):
return Question.objects.filter(pub_date__lte=timezone.now())
class ResultsView(generic.DetailView):
model=Question
template_name='polls/results.html'
| [
"you@example.com"
] | you@example.com |
d95520c50cf58eb54fb6a5be256649eab0022be7 | deb826125ca2f3959d30598aedd5919eea76e2d7 | /probabilistic_automata/distributions.py | 96b7cb728e312035c9fdbd8b45cad4d33791ea10 | [
"MIT"
] | permissive | pangzhan27/probabilistic_automata | 2fc77bc81381a303b48a5e2f98a5dac32dd689a0 | ac04e3c142688b1f66bea086c2779061d0c5d9b7 | refs/heads/master | 2023-03-17T13:20:05.148389 | 2020-10-27T05:31:39 | 2020-10-27T05:31:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,204 | py | from __future__ import annotations
import random
from typing import Callable, Mapping, Set, Union
from itertools import product
import attr
from dfa.dfa import Letter, State
Action = Letter
@attr.s(frozen=True, auto_attribs=True)
class ExplicitDistribution:
"""Object representing a discrete distribution over environment actions."""
_dist: Mapping[Action, float]
def sample(self) -> Action:
"""Sample an envionment action."""
actions, weights = zip(*self._dist.items())
return random.choices(actions, weights)[0]
def __call__(self, action):
"""Evaluates the probability of an action."""
return self._dist.get(action, 0)
def items(self):
"""Sequence of Action, Probability pairs defining the distribution."""
return self._dist.items()
@attr.s(frozen=True, auto_attribs=True)
class ProductDistribution:
"""Object representing the product distribution of left and right."""
left: Distribution
right: Distribution
def sample(self) -> Action:
"""Sample an envionment action."""
return (self.left.sample(), self.right.sample())
def __call__(self, action):
"""Evaluates the probability of an action."""
left_a, right_a = action
return self.left(left_a), self.right(right_a)
def items(self):
"""Sequence of Action, Probability pairs defining the distribution."""
prod = product(self.left.items(), self.right.items())
for (a1, p1), (a2, p2) in prod:
yield (a1, a2), p1 * p2
Distribution = Union[ProductDistribution, ExplicitDistribution]
EnvDist = Callable[[State, Action], Distribution]
def prod_dist(left: EnvDist, right: EnvDist) -> EnvDist:
return lambda s, a: ProductDistribution(
left=left(s[0], a[0]),
right=right(s[1], a[1]),
)
def uniform(actions: Set[Action]) -> EnvDist:
"""
Encodes an environment that selects actions uniformly at random,
i.e., maps all state/action combinations to a Uniform distribution
of the input (environment) actions.
"""
size = len(actions)
dist = ExplicitDistribution({a: 1/size for a in actions})
return lambda *_: dist
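# As a quick illustration of the sampling mechanics that ExplicitDistribution
# wraps, here is a standalone sketch using only the standard library — the
# `sample` helper below is illustrative, not part of this module's API:

```python
import random
from collections import Counter

# Standalone sketch of ExplicitDistribution-style weighted sampling:
# a {action: probability} mapping drives random.choices.
def sample(dist):
    actions, weights = zip(*dist.items())
    return random.choices(actions, weights)[0]

random.seed(0)
dist = {"left": 0.25, "right": 0.75}
counts = Counter(sample(dist) for _ in range(10000))
# "right" should come up roughly three times as often as "left"
print(counts["right"] > counts["left"])
```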
| [
"mvc@linux.com"
] | mvc@linux.com |
650ce6e69bd38870e944308884c26a7866b15c9e | 50afc0db7ccfc6c80e1d3877fc61fb67a2ba6eb7 | /challenge3(yeartocentury)/Isaac.py | ebec690b0b53c6daeb10860c12db594d25ff2bf0 | [
"MIT"
] | permissive | banana-galaxy/challenges | 792caa05e7b8aa10aad8e04369fc06aaf05ff398 | 8655c14828607535a677e2bb18689681ee6312fa | refs/heads/master | 2022-12-26T23:58:12.660152 | 2020-10-06T13:38:04 | 2020-10-06T13:38:04 | 268,851,516 | 11 | 8 | MIT | 2020-09-22T21:21:30 | 2020-06-02T16:24:41 | Python | UTF-8 | Python | false | false | 163 | py | def yearToCentury(year):
tmp = year/100
if not tmp.is_integer():
century = int(tmp) + 1
else:
century = int(tmp)
return century | [
"cawasp@gmail.com"
] | cawasp@gmail.com |
9bf65574b384aa67fe6a8e1a0886f8d1d618b557 | c6588d0e7d361dba019743cacfde83f65fbf26b8 | /x12/5030/270005030.py | 5f04838af1dae128686436547fdfee1244dd5f23 | [] | no_license | djfurman/bots-grammars | 64d3b3a3cd3bd95d625a82204c3d89db6934947c | a88a02355aa4ca900a7b527b16a1b0f78fbc220c | refs/heads/master | 2021-01-12T06:59:53.488468 | 2016-12-19T18:37:57 | 2016-12-19T18:37:57 | 76,887,027 | 0 | 0 | null | 2016-12-19T18:30:43 | 2016-12-19T18:30:43 | null | UTF-8 | Python | false | false | 1,326 | py | from bots.botsconfig import *
from records005030 import recorddefs
syntax = {
'version' : '00403', #version of ISA to send
'functionalgroup' : 'HS',
}
structure = [
{ID: 'ST', MIN: 1, MAX: 1, LEVEL: [
{ID: 'BHT', MIN: 1, MAX: 1},
{ID: 'HL', MIN: 1, MAX: 99999, LEVEL: [
{ID: 'TRN', MIN: 0, MAX: 9},
{ID: 'NM1', MIN: 1, MAX: 99999, LEVEL: [
{ID: 'REF', MIN: 0, MAX: 9},
{ID: 'N2', MIN: 0, MAX: 1},
{ID: 'N3', MIN: 0, MAX: 1},
{ID: 'N4', MIN: 0, MAX: 1},
{ID: 'PER', MIN: 0, MAX: 3},
{ID: 'PRV', MIN: 0, MAX: 1},
{ID: 'DMG', MIN: 0, MAX: 1},
{ID: 'INS', MIN: 0, MAX: 1},
{ID: 'HI', MIN: 0, MAX: 1},
{ID: 'DTP', MIN: 0, MAX: 9},
{ID: 'MPI', MIN: 0, MAX: 9},
{ID: 'EQ', MIN: 0, MAX: 99, LEVEL: [
{ID: 'AMT', MIN: 0, MAX: 2},
{ID: 'VEH', MIN: 0, MAX: 1},
{ID: 'PDR', MIN: 0, MAX: 1},
{ID: 'PDP', MIN: 0, MAX: 1},
{ID: 'III', MIN: 0, MAX: 10},
{ID: 'REF', MIN: 0, MAX: 1},
{ID: 'DTP', MIN: 0, MAX: 9},
]},
]},
]},
{ID: 'SE', MIN: 1, MAX: 1},
]}
]
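# A minimal standalone sketch (not part of bots) of how a nested segment
# structure like the one above can be walked; plain dict keys stand in for
# the bots ID/LEVEL constants purely for illustration:

```python
# Walk a nested segment structure and collect segment IDs in document order.
def walk(segments, out=None):
    if out is None:
        out = []
    for seg in segments:
        out.append(seg['id'])
        walk(seg.get('level', []), out)
    return out

structure = [
    {'id': 'ST', 'level': [
        {'id': 'BHT'},
        {'id': 'HL', 'level': [{'id': 'TRN'}, {'id': 'NM1'}]},
        {'id': 'SE'},
    ]},
]
print(walk(structure))  # ['ST', 'BHT', 'HL', 'TRN', 'NM1', 'SE']
```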
| [
"jason.capriotti@gmail.com"
] | jason.capriotti@gmail.com |
ac339d72416eae3e5bbcc4fd5d4f226aa27e9c3d | 951a84f6fafa763ba74dc0ad6847aaf90f76023c | /P2/ZS103_2.py | ab42cb20c9766cacd43b87fbb837e2183593b473 | [] | no_license | SakuraGo/leetcodepython3 | 37258531f1994336151f8b5c8aec5139f1ba79f8 | 8cedddb997f4fb6048b53384ac014d933b6967ac | refs/heads/master | 2020-09-27T15:55:28.353433 | 2020-02-15T12:00:02 | 2020-02-15T12:00:02 | 226,550,406 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,780 | py | # 909. 蛇梯棋
# 909. Snakes and Ladders
# On an N x N board, the squares are numbered from 1 to N*N starting at the
# bottom-left corner of the board, alternating direction on every row
# (boustrophedon style) — for example, a 6 x 6 board can be numbered that way.
# The player starts from square 1 (always in the last row, first column).
#
# Each move from square x consists of the following:
#
# You choose a destination square S numbered x+1, x+2, x+3, x+4, x+5 or x+6,
# as long as that number is <= N*N.
# If S holds a snake or ladder, you move to that snake's or ladder's
# destination; otherwise you move to S.
# The square at row r, column c holds a snake or ladder if board[r][c] != -1,
# and the destination of that snake or ladder is board[r][c].
#
# Note that you can ride a snake or ladder at most once per move: even if the
# destination is the start of another snake or ladder, you do not move again.
#
# Return the least number of moves needed to reach square N*N, or -1 if it is
# impossible.
from typing import List


class Solution:
    def num2pos(self, num: int, aa: int):
        # Square number (1-based) -> (col, row), with row 0 at the bottom.
        i = (num - 1) // aa
        if i % 2 == 0:
            j = (num - 1) % aa
        else:
            j = aa - 1 - (num - 1) % aa
        return j, i

    def pos2num(self, j: int, i: int, aa: int):
        # (col, row) -> square number; the inverse of num2pos.
        if i % 2 == 0:
            num = aa * i + j + 1
        else:
            num = aa * i + 1 + (aa - 1 - j)
        return num

    def snakesAndLadders(self, board: List[List[int]]) -> int:
        board = board[::-1]  # flip so that row 0 is the bottom row
        steps = [1, 2, 3, 4, 5, 6]
        aa = len(board)
        visited = [[0 for j in range(aa)] for i in range(aa)]
        que = []
        que.append((0, 0, 0))  # store (col, row, move count)
        visited[0][0] = 1
        while len(que) > 0:
            j, i, stepCnt = que.pop(0)
            oldPoint = self.pos2num(j, i, aa)
            for juli in steps:
                newPoint = oldPoint + juli
                if newPoint == aa * aa:  # reached the final square
                    return stepCnt + 1
                if newPoint > aa * aa:
                    continue
                newJ, newI = self.num2pos(newPoint, aa)
                if board[newI][newJ] != -1:
                    if board[newI][newJ] == aa * aa:  # snake/ladder ends on the final square
                        return stepCnt + 1
                    newJ, newI = self.num2pos(board[newI][newJ], aa)
                if visited[newI][newJ] == 0:
                    que.append((newJ, newI, stepCnt + 1))
                    visited[newI][newJ] = 1
        return -1
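# The boustrophedon numbering that num2pos/pos2num implement can be checked in
# isolation; the standalone helpers below (illustrative, separate from the
# class above) round-trip every square of a 6 x 6 board:

```python
def num_to_pos(num, n):
    """Square number (1-based) -> (col, row), with row 0 at the bottom."""
    row = (num - 1) // n
    col = (num - 1) % n if row % 2 == 0 else n - 1 - (num - 1) % n
    return col, row


def pos_to_num(col, row, n):
    """(col, row) -> square number; inverse of num_to_pos."""
    offset = col if row % 2 == 0 else n - 1 - col
    return row * n + offset + 1


# Round-trip every square of a 6 x 6 board
assert all(pos_to_num(*num_to_pos(k, 6), 6) == k for k in range(1, 37))
print("round trip ok")
```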
res = Solution().snakesAndLadders([[-1,-1,-1],[-1,9,8],[-1,8,9]])
print(res) | [
"452681917@qq.com"
] | 452681917@qq.com |
dc64082107082a4f160588b00be0abeb5d516633 | 32eeb97dff5b1bf18cf5be2926b70bb322e5c1bd | /benchmark/uhabits/testcase/firstcases/testcase7_014.py | 2f2ec0fff145e0b56af49965d81293bb7e30f162 | [] | no_license | Prefest2018/Prefest | c374d0441d714fb90fca40226fe2875b41cf37fc | ac236987512889e822ea6686c5d2e5b66b295648 | refs/heads/master | 2021-12-09T19:36:24.554864 | 2021-12-06T12:46:14 | 2021-12-06T12:46:14 | 173,225,161 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,784 | py | #coding=utf-8
import os
import subprocess
import time
import traceback
from appium import webdriver
from appium.webdriver.common.touch_action import TouchAction
from selenium.common.exceptions import NoSuchElementException, WebDriverException
desired_caps = {
'platformName' : 'Android',
'deviceName' : 'Android Emulator',
'platformVersion' : '4.4',
'appPackage' : 'org.isoron.uhabits',
'appActivity' : 'org.isoron.uhabits.activities.habits.list.ListHabitsActivity',
'resetKeyboard' : True,
'androidCoverage' : 'org.isoron.uhabits/org.isoron.uhabits.JacocoInstrumentation',
'noReset' : True
}
def command(cmd, timeout=5):
p = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=True)
time.sleep(timeout)
p.terminate()
return
def getElememt(driver, str) :
for i in range(0, 5, 1):
try:
element = driver.find_element_by_android_uiautomator(str)
except NoSuchElementException:
time.sleep(1)
else:
return element
os.popen("adb shell input tap 50 50")
element = driver.find_element_by_android_uiautomator(str)
return element
def getElememtBack(driver, str1, str2) :
for i in range(0, 2, 1):
try:
element = driver.find_element_by_android_uiautomator(str1)
except NoSuchElementException:
time.sleep(1)
else:
return element
for i in range(0, 5, 1):
try:
element = driver.find_element_by_android_uiautomator(str2)
except NoSuchElementException:
time.sleep(1)
else:
return element
os.popen("adb shell input tap 50 50")
element = driver.find_element_by_android_uiautomator(str2)
return element
def swipe(driver, startxper, startyper, endxper, endyper) :
size = driver.get_window_size()
width = size["width"]
height = size["height"]
try:
driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
end_y=int(height * endyper), duration=2000)
except WebDriverException:
time.sleep(1)
driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
end_y=int(height * endyper), duration=2000)
return
# testcase014
try :
starttime = time.time()
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
driver.press_keycode(4)
except Exception, e:
print 'FAIL'
print 'str(e):\t\t', str(e)
print 'repr(e):\t', repr(e)
print traceback.format_exc()
else:
print 'OK'
finally:
cpackage = driver.current_package
endtime = time.time()
print 'consumed time:', str(endtime - starttime), 's'
command("adb shell am broadcast -a com.example.pkg.END_EMMA --es name \"7_014\"")
jacocotime = time.time()
print 'jacoco time:', str(jacocotime - endtime), 's'
driver.quit()
if (cpackage != 'org.isoron.uhabits'):
cpackage = "adb shell am force-stop " + cpackage
os.popen(cpackage) | [
"prefest2018@gmail.com"
] | prefest2018@gmail.com |
8d949557ef10722cedbae45f3fd2a9d282ab3f43 | e588da296dd6ec3bedee9d24444dfca6e8780aef | /zip.py | 28af78f6e23bf9a9807804c1dec5fd9415d28c26 | [] | no_license | sujith1919/TCS-Python | 98eac61a02500a0e8f3139e431c98a509828c867 | c988cf078616540fe7f56e3ebdfd964aebd14519 | refs/heads/master | 2023-03-02T09:03:10.052633 | 2021-02-02T16:40:18 | 2021-02-02T16:40:18 | 335,355,862 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 298 | py | import zipfile
# Create zip file
f = zipfile.ZipFile('test.zip', 'w')
# add some files
f.write('file1.txt')
# add file as a new name
f.write('file2.txt', 'file-two.txt')
# add content from program (string)
f.writestr('file3.txt', 'Hello how are you')
# flush and close
f.close() | [
"jayarajan.sujith@oracle.com"
] | jayarajan.sujith@oracle.com |
a759ee12bac0d12f6a9906296363dc9ddf2ce2e0 | 9e988c0dfbea15cd23a3de860cb0c88c3dcdbd97 | /sdBs/AllRun/sbss_1219+551a/sdB_sbss_1219+551a_coadd.py | cda740b4da8b1b88c24b94f699a32d9f50cfa24e | [] | no_license | tboudreaux/SummerSTScICode | 73b2e5839b10c0bf733808f4316d34be91c5a3bd | 4dd1ffbb09e0a599257d21872f9d62b5420028b0 | refs/heads/master | 2021-01-20T18:07:44.723496 | 2016-08-08T16:49:53 | 2016-08-08T16:49:53 | 65,221,159 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 442 | py | from gPhoton.gMap import gMap
def main():
gMap(band="NUV", skypos=[185.475792,54.846639], skyrange=[0.0333333333333,0.0333333333333], stepsz = 30., cntfile="/data2/fleming/GPHOTON_OUTPUT/LIGHTCURVES/sdBs/sdB_sbss_1219+551a/sdB_sbss_1219+551a_movie_count.fits", cntcoaddfile="/data2/fleming/GPHOTON_OUTPUT/LIGHTCURVES/sdB/sdB_sbss_1219+551a/sdB_sbss_1219+551a_count_coadd.fits", overwrite=True, verbose=3)
if __name__ == "__main__":
main()
| [
"thomas@boudreauxmail.com"
] | thomas@boudreauxmail.com |
81a2c8da318194cf868c51bd344f3defcc47b246 | 9d611e18ef40e96ed852f2b7cf7842dc3de33e18 | /examples/django_demo/mysite/settings.py | 8b59f1fc45b7f0e842aa521c47d8b8fad7bd049f | [] | no_license | dantezhu/xstat | 03cc39923d3d33a75e2a38801ee5f7eabf3b33cf | cc92e1d860abdaf6e4304127edfe595e3fcbf308 | refs/heads/master | 2023-04-13T15:23:02.398594 | 2023-04-08T17:21:26 | 2023-04-08T17:21:26 | 34,090,855 | 1 | 2 | null | 2018-06-02T07:50:37 | 2015-04-17T01:52:58 | Python | UTF-8 | Python | false | false | 2,073 | py | """
Django settings for mysite project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'qwjbg&!+mkd&i-p%2^!vb!m4@^=*2(tp4k2b3e$h_x%no@6^h2'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'test_stat',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'xstat.DjangoStat',
)
ROOT_URLCONF = 'mysite.urls'
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/'
# stat
XSTAT_TITLE = 'dante.test'
XSTAT_HOST = '127.0.0.1'
| [
"dantezhu@qq.com"
] | dantezhu@qq.com |
8ebed524c2137dfd56be352f2713160bd5d0e472 | fb7c95127adc8ecd137568da5658f8c2b748b09b | /pwncat/modules/agnostic/enumerate/__init__.py | 58cf45f67d3bddbf85708c4195b3d45e2dbb75b9 | [] | no_license | Aman-Dhimann/pwncat | 37fcd3d0ab9acb668efb73024bc8dfc8c2c0d150 | f74510afb6c1d0f880462034adf78359145a89b4 | refs/heads/master | 2023-05-24T15:12:24.603790 | 2021-06-12T21:45:39 | 2021-06-12T21:45:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 134 | py | #!/usr/bin/env python3
# Alias `run enumerate` to `run enumerate.gather`
from pwncat.modules.agnostic.enumerate.gather import Module
| [
"caleb.stewart94@gmail.com"
] | caleb.stewart94@gmail.com |
f5940373af05f87a0a7e2d86d85bd2a415853b19 | 466f1d55748b6082e78b26c89ef75e1fd9555f66 | /test/unit_test/test_sample.py | 223b20fa7635aa9ca0f88413fb761b7aea639cac | [] | no_license | kiccho1101/kaggle_disaster_tweets_gokart | 6f9c07c1225767b2c4ec9c05764646ada6e43192 | 389f9582ba1b1208b5eb4e758eff1b967794cc34 | refs/heads/master | 2022-04-21T08:23:44.279540 | 2020-04-01T13:39:57 | 2020-04-01T13:39:57 | 247,387,879 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 508 | py | from logging import getLogger
import unittest
from unittest.mock import MagicMock
from kaggle_disaster_tweets_gokart.model.sample import Sample
logger = getLogger(__name__)
class TestSample(unittest.TestCase):
    def setUp(self):
        self.output_data = None
def test_run(self):
task = Sample()
task.dump = MagicMock(side_effect=self._dump)
task.run()
self.assertEqual(self.output_data, "sample output")
def _dump(self, data):
self.output_data = data
| [
"youodf11khp@gmail.com"
] | youodf11khp@gmail.com |
8c07d5f47490fcaa49882b14eec700f4432e1f5b | cf3549c5200e78dd81095cd3e05b3015d6bc2290 | /spiderman/misc/logger.py | 25050f75ba990aecda5cc123944f902caee9e470 | [
"Apache-2.0"
] | permissive | zzcv/python | e0c56a363188b8a3dcc030b10a7bd4aa1fc426b2 | 69ac0cabb7154816b1df415c0cc32966d6335718 | refs/heads/master | 2020-09-14T12:57:08.046356 | 2019-11-18T11:54:54 | 2019-11-18T11:54:54 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,089 | py | #/usr/bin/env python
#coding=utf8
"""
# Author: kellanfan
# Created Time : Tue 28 Aug 2018 10:55:09 PM CST
# File Name: logger.py
# Description:
"""
import logging
import yaml
import threading
from logging.handlers import RotatingFileHandler
class Logger(object):
def __init__(self, config_file):
'''init logger'''
self.logger = logging.getLogger('Logger')
config = self.__getconfig(config_file)
mythread=threading.Lock()
mythread.acquire()
self.log_file_path = config.get('log_file_path')
self.maxBytes = eval(config.get('maxBytes'))
self.backupCount = int(config.get('backupCount'))
self.outputConsole_level = int(config.get('outputConsole_level'))
self.outputFile_level = int(config.get('outputFile_level'))
self.outputConsole = int(config.get('outputConsole'))
self.outputFile = int(config.get('outputFile'))
self.formatter = logging.Formatter('%(asctime)s %(levelname)s -%(thread)d- %(filename)s : %(message)s')
mythread.release()
def __call__(self):
return self.outputLog()
def outputLog(self):
        '''Attach the configured handlers and return the logger.'''
if self.outputConsole == 1:
console_handler = logging.StreamHandler()
console_handler.setFormatter(self.formatter)
self.logger.setLevel(self.outputConsole_level)
self.logger.addHandler(console_handler)
else:
pass
if self.outputFile == 1:
file_handler = RotatingFileHandler(self.log_file_path, maxBytes=self.maxBytes, backupCount=self.backupCount)
file_handler.setFormatter(self.formatter)
self.logger.setLevel(self.outputFile_level)
self.logger.addHandler(file_handler)
else:
pass
return self.logger
def __getconfig(self,config_file):
with open(config_file) as f:
            configs = yaml.safe_load(f.read())
return configs
if __name__ == '__main__':
mylog = Logger('logger.yml')
aa = mylog()
aa.error('aaa')
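# The demo above needs a logger.yml on disk; as a standalone sketch of the
# same handler/formatter pattern configured directly in code (stdlib only,
# names below are illustrative):

```python
import io
import logging

# Same pattern as Logger.outputLog: one formatter shared by a handler,
# attached to a named logger. A StringIO stream makes the output inspectable.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))

log = logging.getLogger('sketch')
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info('hello')
print(stream.getvalue().strip())  # INFO hello
```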
| [
"icyfk1989@163.com"
] | icyfk1989@163.com |
55d0e7eaad9e1b13ec5d11db16c9b9034bcd39e5 | 35b6013c1943f37d1428afd2663c8aba0a02628d | /functions/v2/response_streaming/main_test.py | 9f850a1718585ed2fceea23f8e35d73d9493e9f9 | [
"Apache-2.0"
] | permissive | GoogleCloudPlatform/python-docs-samples | d2a251805fbeab15d76ed995cf200727f63f887d | 44e819e713c3885e38c99c16dc73b7d7478acfe8 | refs/heads/main | 2023-08-28T12:52:01.712293 | 2023-08-28T11:18:28 | 2023-08-28T11:18:28 | 35,065,876 | 7,035 | 7,593 | Apache-2.0 | 2023-09-14T20:20:56 | 2015-05-04T23:26:13 | Jupyter Notebook | UTF-8 | Python | false | false | 968 | py | # Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import flask
import pytest
import main
# Create a fake "app" for generating test request contexts.
@pytest.fixture(scope="module")
def app() -> flask.Flask:
return flask.Flask(__name__)
def test_main(app):
with app.test_request_context():
response = main.stream_big_query_output(flask.request)
assert response.is_streamed
assert response.status_code == 200
| [
"noreply@github.com"
] | GoogleCloudPlatform.noreply@github.com |
914f45793a8304544515d3639215795706f93065 | 3c6b0521eb788dc5e54e46370373e37eab4a164b | /predictive_engagement/pytorch_src/create_utt_embed.py | 9d6db792c1991cb86869136008c3cec28720be98 | [
"MIT"
] | permissive | y12uc231/DialEvalMetrics | 7402f883390b94854f5d5ae142f700a697d7a21c | f27d717cfb02b08ffd774e60faa6b319a766ae77 | refs/heads/main | 2023-09-02T21:56:07.232363 | 2021-11-08T21:25:24 | 2021-11-08T21:25:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,888 | py |
import argparse
from bert_serving.client import BertClient
import csv
import os
import pickle
#In order to create utterance embeddings, you need to first start BertServer (follow https://github.com/hanxiao/bert-as-service) with following command:
#bert-serving-start -model_dir /tmp/english_L-12_H-768_A-12/ -num_worker=4 -max_seq_len=128 -pooling_strategy=REDUCE_MEAN
#model_dir is the directory that pretrained Bert model has been downloaded
def make_Bert_embeddings(data_dir, fname, f_queries_embed, f_replies_embed, type):
'''Create embedding file for all queries and replies in the given files
Param:
data_dir: the directory of data
fname: name of the input file containing queries, replies, engagement_score
f_queries_embed: name of the output file containing the queries bert embeddings
f_replies_embed: name of the output file containing the replies bert embeddings
type: indicate train/valid/test set
'''
csv_file = open(data_dir + fname)
csv_reader = csv.reader(csv_file, delimiter=',')
foutput_q = os.path.join(data_dir + f_queries_embed)
foutput_r = os.path.join(data_dir + f_replies_embed)
queries,replies = [],[]
next(csv_reader)
for row in csv_reader:
queries.append(row[1].split('\n')[0])
replies.append(row[2].split('\n')[0])
if os.path.exists(foutput_q) and os.path.exists(foutput_r) :
print('Bert embedding files for utterances exist!')
return
else:
print("Bert embedding files for utterances do not exist")
queries_vectors = {}
replies_vectors = {}
bc = BertClient()
has_empty = False
fwq = open(foutput_q, 'wb')
for idx, q in enumerate(queries):
print(str(idx)+'query {}'.format(type))
if q not in queries_vectors.keys() and q !='':
queries_vectors[q] = bc.encode([q])
if q not in queries_vectors.keys() and q =='':
queries_vectors[q] = bc.encode(['[PAD]'])
has_empty=True
if has_empty == False:
queries_vectors[''] = bc.encode(['[PAD]'])
pickle.dump(queries_vectors, fwq)
fwr = open(foutput_r, 'wb')
has_empty = False
for idx, r in enumerate(replies):
print(str(idx)+'reply {}'.format(type))
if r not in replies_vectors.keys() and r !='':
replies_vectors[r] = bc.encode([r])
if r not in replies_vectors.keys() and r =='':
replies_vectors[r] = bc.encode(['[PAD]'])
has_empty = True
if has_empty == False:
replies_vectors[''] = bc.encode(['[PAD]'])
pickle.dump(replies_vectors, fwr)
def load_Bert_embeddings(data_dir, f_queries_embed, f_replies_embed):
'''Load embeddings of queries and replies
Param:
data_dir: the directory of data
f_queries_embed: name of the input file containing the queries bert embeddings
f_replies_embed: name of the input file containing the replies bert embeddings
'''
print('Loading Bert embeddings of sentences')
queries_vectors = {}
replies_vectors = {}
print('query embedding')
fwq = open(data_dir + f_queries_embed, 'rb')
dict_queries = pickle.load(fwq)
for query, embeds in dict_queries.items():
queries_vectors[query] = embeds[0]
print('len of embeddings is '+str(len(queries_vectors)))
print('reply embedding')
fwr = open(data_dir + f_replies_embed, 'rb')
dict_replies = pickle.load(fwr)
for reply, embeds in dict_replies.items():
replies_vectors[reply] = embeds[0]
print('len of embeddings is '+str(len(replies_vectors)))
return queries_vectors, replies_vectors
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Parameters for engagement classification')
parser.add_argument('--data', type=str)
args = parser.parse_args()
data_dir = './../data/'
pooling = 'mean'
ifname= 'ConvAI_utts'
dd_ifname = 'DD_finetune'
ofname = ''
#make_Bert_embeddings(data_dir, ifname+'_train.csv', ifname+'_train_queries_embed_'+pooling, ifname+'_train_replies_embed_'+pooling, 'train')
#make_Bert_embeddings(data_dir, ifname+'_valid.csv', ifname+'_valid_queries_embed_'+pooling, ifname+'_valid_replies_embed_'+pooling, 'valid')
#make_Bert_embeddings(data_dir, ifname+'_test.csv', ifname+'_test_queries_embed_'+pooling, ifname+'_test_replies_embed_'+pooling, 'test')
#make_Bert_embeddings(data_dir, 'humanAMT_engscores_utt.csv', 'humanAMT_queries_embed_'+pooling, 'humanAMT_replies_embed_'+pooling, 'testAMT')
#make_Bert_embeddings(data_dir, dd_ifname+'_train.csv', dd_ifname+'_queries_train_embed_'+pooling, dd_ifname+'_replies_train_embed_'+pooling, 'train')
#make_Bert_embeddings(data_dir, dd_ifname+'_valid.csv', dd_ifname+'_queries_valid_embed_'+pooling, dd_ifname+'_replies_valid_embed_'+pooling, 'valid')
#make_Bert_embeddings(data_dir, 'DD_queries_generated_replies.csv', 'DD_queries_embed_'+pooling, 'DD_generated_replies_embed_'+pooling, 'test')
make_Bert_embeddings(data_dir, args.data, f'{args.data}_queries_embed_'+pooling, f'{args.data}_replies_embed_'+pooling, 'test') | [
"yitingye@cs.cmu.edu"
] | yitingye@cs.cmu.edu |
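The two helpers above write embeddings as a pickled `{sentence: embedding}` dict (with a `'[PAD]'` vector under the empty-string key) and read back only `embeds[0]` per key. A minimal stdlib sketch of that round trip, with toy lists standing in for the `bc.encode` BERT vectors:

```python
import io
import pickle

def save_embeddings(fobj, vectors):
    # Persist the {sentence: embeddings} mapping, as make_Bert_embeddings
    # does with pickle.dump(replies_vectors, fwr).
    pickle.dump(vectors, fobj)

def load_embeddings(fobj):
    # Mirror load_Bert_embeddings: keep only the first vector per key.
    loaded = pickle.load(fobj)
    return {key: embeds[0] for key, embeds in loaded.items()}

buf = io.BytesIO()
save_embeddings(buf, {'hello there': [[0.1, 0.2]], '': [[0.0, 0.0]]})
buf.seek(0)
vectors = load_embeddings(buf)
print(vectors['hello there'])  # -> [0.1, 0.2]
```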
d506e2c4c46364a2576d48fc2c2cc52ef154142b | ace30d0a4b1452171123c46eb0f917e106a70225 | /filesystems/vnx_rootfs_lxc_ubuntu64-16.04-v025-openstack-compute/rootfs/usr/lib/python2.7/dist-packages/openstackclient/tests/functional/base.py | 857432962124a08b874a970806fe3f3b60643f25 | [
"Python-2.0"
] | permissive | juancarlosdiaztorres/Ansible-OpenStack | e98aa8c1c59b0c0040c05df292964520dd796f71 | c01951b33e278de9e769c2d0609c0be61d2cb26b | refs/heads/master | 2022-11-21T18:08:21.948330 | 2018-10-15T11:39:20 | 2018-10-15T11:39:20 | 152,568,204 | 0 | 3 | null | 2022-11-19T17:38:49 | 2018-10-11T09:45:48 | Python | UTF-8 | Python | false | false | 4,414 | py | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import re
import shlex
import subprocess
import testtools
from tempest.lib.cli import output_parser
from tempest.lib import exceptions
COMMON_DIR = os.path.dirname(os.path.abspath(__file__))
FUNCTIONAL_DIR = os.path.normpath(os.path.join(COMMON_DIR, '..'))
ROOT_DIR = os.path.normpath(os.path.join(FUNCTIONAL_DIR, '..'))
EXAMPLE_DIR = os.path.join(ROOT_DIR, 'examples')
def execute(cmd, fail_ok=False, merge_stderr=False):
"""Executes specified command for the given action."""
cmdlist = shlex.split(cmd)
result = ''
result_err = ''
stdout = subprocess.PIPE
stderr = subprocess.STDOUT if merge_stderr else subprocess.PIPE
proc = subprocess.Popen(cmdlist, stdout=stdout, stderr=stderr)
result, result_err = proc.communicate()
result = result.decode('utf-8')
if not fail_ok and proc.returncode != 0:
raise exceptions.CommandFailed(proc.returncode, cmd, result,
result_err)
return result
class TestCase(testtools.TestCase):
    delimiter_line = re.compile(r'^\+\-[\+\-]+\-\+$')
@classmethod
def openstack(cls, cmd, fail_ok=False):
"""Executes openstackclient command for the given action."""
return execute('openstack ' + cmd, fail_ok=fail_ok)
@classmethod
def get_openstack_configuration_value(cls, configuration):
opts = cls.get_opts([configuration])
return cls.openstack('configuration show ' + opts)
@classmethod
def get_openstack_extention_names(cls):
opts = cls.get_opts(['Name'])
return cls.openstack('extension list ' + opts)
@classmethod
def get_opts(cls, fields, output_format='value'):
return ' -f {0} {1}'.format(output_format,
' '.join(['-c ' + it for it in fields]))
@classmethod
def assertOutput(cls, expected, actual):
if expected != actual:
raise Exception(expected + ' != ' + actual)
@classmethod
def assertInOutput(cls, expected, actual):
if expected not in actual:
raise Exception(expected + ' not in ' + actual)
@classmethod
def assertsOutputNotNone(cls, observed):
if observed is None:
raise Exception('No output observed')
def assert_table_structure(self, items, field_names):
"""Verify that all items have keys listed in field_names."""
for item in items:
for field in field_names:
self.assertIn(field, item)
def assert_show_fields(self, show_output, field_names):
"""Verify that all items have keys listed in field_names."""
# field_names = ['name', 'description']
# show_output = [{'name': 'fc2b98d8faed4126b9e371eda045ade2'},
# {'description': 'description-821397086'}]
# this next line creates a flattened list of all 'keys' (like 'name',
# and 'description' out of the output
all_headers = [item for sublist in show_output for item in sublist]
for field_name in field_names:
self.assertIn(field_name, all_headers)
def parse_show_as_object(self, raw_output):
"""Return a dict with values parsed from cli output."""
items = self.parse_show(raw_output)
o = {}
for item in items:
o.update(item)
return o
def parse_show(self, raw_output):
"""Return list of dicts with item values parsed from cli output."""
items = []
table_ = output_parser.table(raw_output)
for row in table_['values']:
item = {}
item[row[0]] = row[1]
items.append(item)
return items
def parse_listing(self, raw_output):
"""Return list of dicts with basic item parsed from cli output."""
return output_parser.listing(raw_output)
| [
"jcdiaztorres96@gmail.com"
] | jcdiaztorres96@gmail.com |
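`execute()` above wraps `shlex.split` plus `subprocess.Popen` and raises unless `fail_ok` is set. The same shape without the tempest dependency (a plain `RuntimeError` stands in for `exceptions.CommandFailed`):

```python
import shlex
import subprocess
import sys

def execute(cmd, fail_ok=False, merge_stderr=False):
    # Split the command like a shell would, capture stdout, and fail
    # loudly on a non-zero exit unless the caller opted out.
    cmdlist = shlex.split(cmd)
    stderr = subprocess.STDOUT if merge_stderr else subprocess.PIPE
    proc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=stderr)
    result, result_err = proc.communicate()
    result = result.decode('utf-8')
    if not fail_ok and proc.returncode != 0:
        raise RuntimeError((proc.returncode, cmd, result, result_err))
    return result

# Quote the interpreter path in case it contains spaces.
out = execute('"{}" -c "print(42)"'.format(sys.executable))
print(out.strip())  # -> 42
```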
0c8551b52d52cd8af6da7aa30601f5ab2777e761 | 7041c85dffb757c3e7063118730363f32ebb9b8a | /코테대비/20190227/색종이 배치.py | 4b6cbe8beff587fecc1ed3ff818e29dd5e36fc7d | [] | no_license | woonji913/til | efae551baff56f3ca16169b93185a65f4d81cd7a | a05efc68f88f535c26cb4d4a396a1e9cd6bf0248 | refs/heads/master | 2021-06-06T23:17:54.504620 | 2019-06-19T04:29:18 | 2019-06-19T04:29:18 | 163,778,844 | 1 | 0 | null | 2021-05-08T16:27:17 | 2019-01-02T01:08:19 | HTML | UTF-8 | Python | false | false | 562 | py | x1, y1, dx1, dy1 = map(int, input().split())
x2, y2, dx2, dy2 = map(int, input().split())
# if x1+dx1 == x2 and y1+dy1 == y2:
# print('1')
# elif x1+dx1 > x2 or y1+dy1 > y2:
# print('2')
# elif x1+dx1 > x2 and y1+dy1 > y2:
# print('3')
# elif x1+dx1 < x2 and y1+dy1 < y2:
# print('4')
x = set(range(x1, x1+dx1+1)) & set(range(x2, x2+dx2+1))
y = set(range(y1, y1+dy1+1)) & set(range(y2, y2+dy2+1))
if len(x) == 1 and len(y) == 1:
print(1)
elif len(x) == 1 or len(y) == 1:
print(2)
elif len(x) and len(y):
print(3)
else:
print(4) | [
"johnnyboy0913@gmail.com"
] | johnnyboy0913@gmail.com |
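The solution above classifies how two axis-aligned rectangles touch by intersecting their integer coordinate ranges as sets. The same branch logic as a reusable function (rectangles given as `(x, y, dx, dy)`; like the original, it assumes the pieces meet in at least one axis whenever only a single coordinate is shared):

```python
def contact_type(r1, r2):
    # 1 = single corner point, 2 = shared edge segment,
    # 3 = overlapping area, 4 = no contact at all.
    x1, y1, dx1, dy1 = r1
    x2, y2, dx2, dy2 = r2
    xs = set(range(x1, x1 + dx1 + 1)) & set(range(x2, x2 + dx2 + 1))
    ys = set(range(y1, y1 + dy1 + 1)) & set(range(y2, y2 + dy2 + 1))
    if len(xs) == 1 and len(ys) == 1:
        return 1
    elif len(xs) == 1 or len(ys) == 1:
        return 2
    elif xs and ys:
        return 3
    return 4

print(contact_type((0, 0, 2, 2), (2, 2, 2, 2)))  # -> 1 (corner touch)
print(contact_type((0, 0, 2, 2), (1, 1, 2, 2)))  # -> 3 (area overlap)
```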
28b7be7be7b0d1fc78f01bddcaab4e7accb7b6cf | a17ab912e585db05931830be9b35943c31c5db4a | /Algo1.py | 4f007729e8cf958b62733bdf56b43ec72007a921 | [] | no_license | syuuhei-yama/Python_Algorithm | ef15b65d25ab458f7ddb784573f26a2de2718409 | 1f6b39ada0261dd02d2808d157ddbaaa8e3a1e24 | refs/heads/master | 2023-03-29T08:21:35.550924 | 2021-04-09T10:43:45 | 2021-04-09T10:43:45 | 327,771,890 | 0 | 0 | null | 2021-01-08T02:28:47 | 2021-01-08T01:56:34 | Python | UTF-8 | Python | false | false | 1,058 | py | #最大値
# a = [1, 3, 10, 2, 8]
# max = a[0]
#
# for i in range(1,len(a)):
#
# if (max < a[i]): # to find the minimum, change this to min
# max = a[i]
#
# print(max)
# Swapping two values
# a = 10
# b = 20
#
# t = a
# a = b
# b = t
#
# print("a=",a,",b=",b)
# Search algorithm
# a = [10,3,1,4,2]
#
# search_s = 4
# findID = -1
#
# for i in range(len(a)):
# if a[i] == search_s:
# findID = i
# break
# print("found ID =", findID)
# Perfect numbers
# Q = int(input())
#
# for i in range(Q):
# n = int(input())
# num = []
#
# for i in range(1, n):
# if n % i == 0:
# num.append(i)
# nums = sum(num)
#
# if n == nums:
# print('perfect')
# elif n - nums == 1:
# print('nearly')
# else:
# print('neither')
# Competitive programming
# ls = list(map(int,input().split()))
# ls = [s for s in input()]
# print(ls)
# n = int(input())
# l = [int(input()) for _ in range(n)]
# a, b, x = map(int, input().split())
# print('YES' if a <= x <= b+a else 'NO')
| [
"syuuhei0615@icloud.com"
] | syuuhei0615@icloud.com |
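The last commented-out block sketches a perfect/nearly-perfect classification: the sum of proper divisors equals `n` ("perfect") or falls short by exactly one ("nearly"). A runnable version of that snippet, taking the number as an argument instead of reading stdin:

```python
def classify(n):
    # Sum of proper divisors (every i < n that divides n).
    divisor_sum = sum(i for i in range(1, n) if n % i == 0)
    if divisor_sum == n:
        return 'perfect'
    elif n - divisor_sum == 1:
        return 'nearly'
    return 'neither'

print(classify(6))   # -> perfect (1 + 2 + 3 == 6)
print(classify(8))   # -> nearly  (8 - (1 + 2 + 4) == 1)
print(classify(12))  # -> neither
```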
2437f835bb3468bf06a61053f5ee3b1fa1ae36c9 | 08a2a4550d725c1f7ed6fb1d3bfc9abc35de5e1e | /tencentcloud/ocr/v20181119/errorcodes.py | f1172d3098f29b6dc4a40b523dccded1ab7120ee | [
"Apache-2.0"
] | permissive | wearetvxq/tencentcloud-sdk-python | 8fac40c7ea756ec222d3f41b2321da0c731bf496 | cf5170fc83130744b8b631000efacd1b7ba03262 | refs/heads/master | 2023-07-16T15:11:54.409444 | 2021-08-23T09:27:56 | 2021-08-23T09:27:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,671 | py | # -*- coding: utf8 -*-
# Copyright (c) 2017-2021 THL A29 Limited, a Tencent company. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Account is in arrears.
FAILEDOPERATION_ARREARSERROR = 'FailedOperation.ArrearsError'
# Daily request limit reached.
FAILEDOPERATION_COUNTLIMITERROR = 'FailedOperation.CountLimitError'
# Detection failed.
FAILEDOPERATION_DETECTFAILED = 'FailedOperation.DetectFailed'
# File download failed.
FAILEDOPERATION_DOWNLOADERROR = 'FailedOperation.DownLoadError'
# The image content is empty.
FAILEDOPERATION_EMPTYIMAGEERROR = 'FailedOperation.EmptyImageError'
# Engine recognition timed out.
FAILEDOPERATION_ENGINERECOGNIZETIMEOUT = 'FailedOperation.EngineRecognizeTimeout'
# The ID card information is invalid (e.g. the ID number or name field fails validation).
FAILEDOPERATION_IDCARDINFOILLEGAL = 'FailedOperation.IdCardInfoIllegal'
# The image is blurry.
FAILEDOPERATION_IMAGEBLUR = 'FailedOperation.ImageBlur'
# Image decoding failed.
FAILEDOPERATION_IMAGEDECODEFAILED = 'FailedOperation.ImageDecodeFailed'
# No business card was detected in the photo.
FAILEDOPERATION_IMAGENOBUSINESSCARD = 'FailedOperation.ImageNoBusinessCard'
# No ID card was detected in the image.
FAILEDOPERATION_IMAGENOIDCARD = 'FailedOperation.ImageNoIdCard'
# No text was detected in the image.
FAILEDOPERATION_IMAGENOTEXT = 'FailedOperation.ImageNoText'
# The image is too large; see the notes on image size limits in the output parameters.
FAILEDOPERATION_IMAGESIZETOOLARGE = 'FailedOperation.ImageSizeTooLarge'
# The invoice data is inconsistent.
FAILEDOPERATION_INVOICEMISMATCH = 'FailedOperation.InvoiceMismatch'
# The specified Language is not supported.
FAILEDOPERATION_LANGUAGENOTSUPPORT = 'FailedOperation.LanguageNotSupport'
# Multiple cards are present in the photo.
FAILEDOPERATION_MULTICARDERROR = 'FailedOperation.MultiCardError'
# Not a Hong Kong identity card.
FAILEDOPERATION_NOHKIDCARD = 'FailedOperation.NoHKIDCard'
# Not a passport.
FAILEDOPERATION_NOPASSPORT = 'FailedOperation.NoPassport'
# OCR recognition failed.
FAILEDOPERATION_OCRFAILED = 'FailedOperation.OcrFailed'
# No record found for the query.
FAILEDOPERATION_QUERYNORECORD = 'FailedOperation.QueryNoRecord'
# Unknown error.
FAILEDOPERATION_UNKNOWERROR = 'FailedOperation.UnKnowError'
# The service has not been activated.
FAILEDOPERATION_UNOPENERROR = 'FailedOperation.UnOpenError'
# Internal error.
INTERNALERROR = 'InternalError'
# Config is not in valid JSON format.
INVALIDPARAMETER_CONFIGFORMATERROR = 'InvalidParameter.ConfigFormatError'
# Image decoding failed.
INVALIDPARAMETER_ENGINEIMAGEDECODEFAILED = 'InvalidParameter.EngineImageDecodeFailed'
# Invalid GTIN.
INVALIDPARAMETER_INVALIDGTINERROR = 'InvalidParameter.InvalidGTINError'
# Incorrect parameter value.
INVALIDPARAMETERVALUE_INVALIDPARAMETERVALUELIMIT = 'InvalidParameterValue.InvalidParameterValueLimit'
# The file content is too large.
LIMITEXCEEDED_TOOLARGEFILEERROR = 'LimitExceeded.TooLargeFileError'
# The invoice does not exist.
RESOURCENOTFOUND_NOINVOICE = 'ResourceNotFound.NoInvoice'
# Same-day invoice queries are not supported.
RESOURCENOTFOUND_NOTSUPPORTCURRENTINVOICEQUERY = 'ResourceNotFound.NotSupportCurrentInvoiceQuery'
# Abnormal billing status.
RESOURCESSOLDOUT_CHARGESTATUSEXCEPTION = 'ResourcesSoldOut.ChargeStatusException'
| [
"tencentcloudapi@tenent.com"
] | tencentcloudapi@tenent.com |
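Client code typically compares the `code` attribute of an SDK exception against these constants to decide how to react. A stand-alone sketch of that pattern — the constant is inlined and a stand-in exception class replaces the SDK's own exception type (the handler logic here is illustrative, not part of the SDK):

```python
FAILEDOPERATION_IMAGEBLUR = 'FailedOperation.ImageBlur'

class StubSdkError(Exception):
    # Minimal stand-in for an SDK exception carrying an error code.
    def __init__(self, code, message):
        super(StubSdkError, self).__init__(message)
        self.code = code

def handle(err):
    # A blurry image is a caller problem, not a service outage, so it
    # is reported as "retry with a better image" rather than a failure.
    if err.code == FAILEDOPERATION_IMAGEBLUR:
        return 'retry-with-better-image'
    return 'fail'

print(handle(StubSdkError(FAILEDOPERATION_IMAGEBLUR, 'blurry')))  # -> retry-with-better-image
```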
275b76afbe63a3413985b5472a69d50bf3e62d67 | a7f442bc306d1a8366a3e30db50af0c2c90e9091 | /blockchain-env/Lib/site-packages/Cryptodome/Signature/DSS.pyi | 8860aa173625e356921ff29bb410f001ef975c4a | [] | no_license | Patreva/Python-flask-react-blockchain | cbdce3e0f55d4ba68be6ecfba35620585894bbbc | 474a9795820d8a4b5a370d400d55b52580055a2e | refs/heads/main | 2023-03-29T01:18:53.985398 | 2021-04-06T08:01:24 | 2021-04-06T08:01:24 | 318,560,922 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,129 | pyi | from typing import Union, Optional, Callable
from typing_extensions import Protocol
from Cryptodome.PublicKey.DSA import DsaKey
from Cryptodome.PublicKey.ECC import EccKey
class Hash(Protocol):
def digest(self) -> bytes: ...
__all__ = ['new']
class DssSigScheme:
def __init__(self, key: Union[DsaKey, EccKey], encoding: str, order: int) -> None: ...
def can_sign(self) -> bool: ...
def sign(self, msg_hash: Hash) -> bytes: ...
def verify(self, msg_hash: Hash, signature: bytes) -> bool: ...
class DeterministicDsaSigScheme(DssSigScheme):
def __init__(self, key, encoding, order, private_key) -> None: ...
class FipsDsaSigScheme(DssSigScheme):
def __init__(self, key: DsaKey, encoding: str, order: int, randfunc: Callable) -> None: ...
class FipsEcDsaSigScheme(DssSigScheme):
def __init__(self, key: EccKey, encoding: str, order: int, randfunc: Callable) -> None: ...
def new(key: Union[DsaKey, EccKey], mode: str, encoding: Optional[str]='binary', randfunc: Optional[Callable]=None) -> Union[DeterministicDsaSigScheme, FipsDsaSigScheme, FipsEcDsaSigScheme]: ...
| [
"patrickwahome74@gmail.com"
] | patrickwahome74@gmail.com |
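The stub types `msg_hash` as a `Hash` Protocol, so any object exposing a matching `digest()` method satisfies it structurally — no inheritance required. A runnable illustration using the stdlib `typing.Protocol` (the stub itself imports it from `typing_extensions` for older Pythons):

```python
from typing import Protocol

class Hash(Protocol):
    def digest(self) -> bytes: ...

class FakeHash:
    # Never declared as a Hash subclass; the matching digest()
    # signature alone makes it acceptable to a type checker.
    def digest(self) -> bytes:
        return b'\x00' * 4

def hexdigest(msg_hash: Hash) -> str:
    return msg_hash.digest().hex()

print(hexdigest(FakeHash()))  # -> 00000000
```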
d11b7ecf23ce5efea923c536b66dba11bd9dbde5 | 52f4426d2776871cc7f119de258249f674064f78 | /baekjoon/brute_force/16637.py | ab3dbfea614db4c9828f52df66a085568f73d432 | [] | no_license | namhyun-gu/algorithm | 8ad98d336366351e715465643dcdd9f04eeb0ad2 | d99c44f9825576c16aaca731888e0c32f2ae6e96 | refs/heads/master | 2023-06-06T02:28:16.514422 | 2021-07-02T10:34:03 | 2021-07-02T10:34:03 | 288,646,740 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,168 | py | # region Input redirection
import io
import sys
example = """
5
8*3+5
"""
sys.stdin = io.StringIO(example.strip())
# endregion
#
# ⛔ DO NOT COPY ABOVE CONTENTS
#
import sys
def calculate(op1, operator, op2):
if operator == "+":
return op1 + op2
elif operator == "-":
return op1 - op2
else:
return op1 * op2
def dfs(value, index=0):
global answer
if index >= len(operator):
answer = max(answer, value)
return
no_bracket_val = calculate(value, operator[index], operand[index + 1])
dfs(no_bracket_val, index + 1)
if index + 1 < len(operator):
in_bracket = calculate(
operand[index + 1], operator[index + 1], operand[index + 2]
)
bracket_val = calculate(value, operator[index], in_bracket)
dfs(bracket_val, index + 2)
if __name__ == "__main__":
input = sys.stdin.readline
N = input()
answer = -sys.maxsize
operand = []
operator = []
for idx, ch in enumerate(input().rstrip()):
if idx % 2:
operator.append(ch)
else:
operand.append(int(ch))
dfs(operand[0])
print(answer) | [
"mnhan0403@gmail.com"
] | mnhan0403@gmail.com |
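The same depth-first search as above — at each operator, either apply it directly or bracket the following pair first — repackaged to take the expression as an argument instead of reading stdin, which makes it easy to test without the input-redirection scaffolding:

```python
import sys

def max_with_brackets(expr):
    # Digits sit at even indices, operators at odd indices.
    operand = [int(ch) for ch in expr[::2]]
    operator = list(expr[1::2])

    def calc(op1, op, op2):
        if op == '+':
            return op1 + op2
        if op == '-':
            return op1 - op2
        return op1 * op2

    best = -sys.maxsize

    def dfs(value, index):
        nonlocal best
        if index >= len(operator):
            best = max(best, value)
            return
        # Option 1: apply the next operator with no brackets.
        dfs(calc(value, operator[index], operand[index + 1]), index + 1)
        # Option 2: bracket the following operand pair first.
        if index + 1 < len(operator):
            in_bracket = calc(operand[index + 1], operator[index + 1],
                              operand[index + 2])
            dfs(calc(value, operator[index], in_bracket), index + 2)

    dfs(operand[0], 0)
    return best

print(max_with_brackets('8*3+5'))  # -> 64, via 8*(3+5)
print(max_with_brackets('1+2*3'))  # -> 9, via left-to-right (1+2)*3
```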
a29aa2f427cebc2eaf7d63646e7b5923666c047e | 6b6a18001a3a0931bbe8b5185179223b7bd2879a | /python_selenium/SL_UW/src/SL_UM/testsuite_03.py | 8f115ce2f48dec6270948f9e63daa4667f324eae | [] | no_license | hufengping/percolata | cfa02fcf445983415b99c8ec77a08b3f0b270015 | b643e89b48c97a9be3b5509120f325455643b7af | refs/heads/master | 2020-04-06T07:04:22.076039 | 2016-04-25T08:02:40 | 2016-04-25T08:02:40 | 42,929,818 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,606 | py | # -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
from selenium.webdriver.common.action_chains import ActionChains
import test_unittest, time, re,os,string
from public import Autotest,db2file
from testsuite_01 import *
import xml.dom.minidom
# Open the XML document
xmlpath = os.path.split(os.path.realpath(__file__))[0] # get the current path
xmlpath2 = xmlpath.split('src')[0]
dom = xml.dom.minidom.parse(xmlpath2+'testdata\\config.xml')
# Get the document element object
root = dom.documentElement
# Get the current time
now = time.strftime("%Y-%m-%d_%H_%M_%S")
tdata= time.strftime("%Y-%m-%d")
class TestCase_01(test_unittest.TestCase):
u'''Simulated scan'''
@classmethod
def setUpClass(cls):  # runs once before any test case
# check whether the barcode was obtained correctly
if barcode == 0:
print "Failed to obtain the barcode; aborting the test!"
print "Please check whether the barcode production environment is accessible."
quit(cls)
# Firefox default download settings
fp = webdriver.FirefoxProfile()
fp.set_preference("browser.download.folderList",2)
fp.set_preference("browser.download.manager.showWhenStarting",False)
fp.set_preference("browser.download.dir", xmlpath2+'log')
fp.set_preference("browser.helperApps.neverAsk.saveToDisk",
"application/octet-stream")  # MIME type of files to download
cls.driver = webdriver.Firefox(firefox_profile=fp)
cls.driver.implicitly_wait(30)
logins = root.getElementsByTagName('url_vscan')
cls.base_url = logins[0].firstChild.data
#print self.base_url
cls.verificationErrors = []
cls.accept_next_alert = True
def setUp(self):  # runs before each test case
pass
def test_01_login(self):
u'''Log in'''
driver = self.driver
driver.get(self.base_url)
driver.maximize_window()
logins = root.getElementsByTagName('vuser')
#get the username and password attribute values of the vuser tag
username=logins[0].getAttribute("username")
password=logins[0].getAttribute("password")
prompt_info = logins[0].firstChild.data
#log in
Autotest.login(self,username,password)
#fetch the prompt text and assert on it
text = driver.find_element_by_xpath("/html/body/div[2]/div/div[3]/div/div[2]/div/table/tbody/tr/td").text
try:
self.assertEqual(text,prompt_info,U'Login verification failed; please check the network or the login details!')
except AssertionError,e:
print e
print ' See the screenshot file '+now+'.png'
driver.get_screenshot_as_file(xmlpath2+"log\\"+now+U'login-verification-failed.png')#capture the current page if the element above is not found.
def test_02_mnsm(self):
u'''Simulated scan'''
driver = self.driver
#select the menu
driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[1]/div/div/div/div[2]/div/div/div/div/table/tbody[2]/tr/td/div/nobr/table/tbody/tr/td[2]').click()
driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[1]/div/div/div/div[2]/div/div/div/div/table/tbody[2]/tr[2]/td/div/nobr').click()
#switch into the iframe
driver.switch_to_frame(driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[2]/div/div[1]/div/iframe'))
#fill in the barcode
driver.find_element_by_id('barCode').send_keys(barcode)
#delete the flag characters: the last two
driver.find_element_by_id('barCode').send_keys(Keys.BACK_SPACE)
driver.find_element_by_id('barCode').send_keys(Keys.BACK_SPACE)
#tick the application form checkbox (individual marketing channel)
driver.find_element_by_xpath("//input[@value='1111']").click()
#click submit
driver.find_element_by_id('loginform').click()
try:
text = driver.find_element_by_xpath('/html/body/div/font').text
self.assertEqual(text,"操作成功!",U'Virtual scan operation failed!')  # "操作成功!" ("operation succeeded") is the literal success text shown by the page under test
except AssertionError,e:
print e
print ' Virtual scan failed '+now+'.png'
driver.get_screenshot_as_file(xmlpath2+"log\\"+now+U'virtual-scan-failed.png')
def is_element_present(self, how, what):
try: self.driver.find_element(by=how, value=what)
except NoSuchElementException, e: return False
return True
def is_alert_present(self):
try:
self.driver.switch_to_alert()
except NoAlertPresentException, e:
return False
return True
def close_alert_and_get_its_text(self):
try:
alert = self.driver.switch_to_alert()
alert_text = alert.text
if self.accept_next_alert:
alert.accept()
else:
alert.dismiss()
return alert_text
finally: self.accept_next_alert = True
def tearDown(self):  # runs after each test case
self.assertEqual([], self.verificationErrors)
@classmethod  # runs once after all test cases
def tearDownClass(cls):
cls.driver.quit()
if __name__ == "__main__":
test_unittest.main()
| [
"fengping.hu@percolata.com"
] | fengping.hu@percolata.com |
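The module-level setup above pulls credentials and expected text out of `config.xml` with `minidom`. The same access pattern in isolation, parsing an inline document instead of a file (the element names mirror the test's; the values are made up):

```python
import xml.dom.minidom

CONFIG = '''<config>
  <vuser username="alice" password="s3cret">Welcome, Alice</vuser>
  <url_vscan>http://example.test/vscan</url_vscan>
</config>'''

dom = xml.dom.minidom.parseString(CONFIG)
root = dom.documentElement

logins = root.getElementsByTagName('vuser')
username = logins[0].getAttribute('username')   # attribute value
prompt_info = logins[0].firstChild.data         # text content of the element
base_url = root.getElementsByTagName('url_vscan')[0].firstChild.data

print(username)     # -> alice
print(prompt_info)  # -> Welcome, Alice
print(base_url)     # -> http://example.test/vscan
```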
2a466abd81f68c9562f93904ea12db543b2386bd | 34ed92a9593746ccbcb1a02630be1370e8524f98 | /lib/pints/pints/toy/__init__.py | 62a438a380de519170d87bfec4dd2216d11adb7a | [
"LicenseRef-scancode-unknown-license-reference",
"BSD-3-Clause"
] | permissive | HOLL95/Cytochrome_SV | 87b7a680ed59681230f79e1de617621680ea0fa0 | d02b3469f3ee5a4c85d756053bc87651093abea1 | refs/heads/master | 2022-08-01T05:58:16.161510 | 2021-02-01T16:09:31 | 2021-02-01T16:09:31 | 249,424,867 | 0 | 0 | null | 2022-06-22T04:09:11 | 2020-03-23T12:29:29 | Jupyter Notebook | UTF-8 | Python | false | false | 2,282 | py | #
# Root of the toy module.
# Provides a number of toy models and logpdfs for tests of Pints' functions.
#
# This file is part of PINTS.
# Copyright (c) 2017-2019, University of Oxford.
# For licensing information, see the LICENSE file distributed with the PINTS
# software package.
#
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
from ._toy_classes import ToyLogPDF, ToyModel, ToyODEModel # noqa
from ._annulus import AnnulusLogPDF # noqa
from ._beeler_reuter_model import ActionPotentialModel # noqa
from ._cone import ConeLogPDF # noqa
from ._constant_model import ConstantModel # noqa
from ._fitzhugh_nagumo_model import FitzhughNagumoModel # noqa
from ._gaussian import GaussianLogPDF # noqa
from ._german_credit import GermanCreditLogPDF # noqa
from ._german_credit_hierarchical import GermanCreditHierarchicalLogPDF # noqa
from ._goodwin_oscillator_model import GoodwinOscillatorModel # noqa
from ._hes1_michaelis_menten import Hes1Model # noqa
from ._hh_ik_model import HodgkinHuxleyIKModel # noqa
from ._high_dimensional_gaussian import HighDimensionalGaussianLogPDF # noqa
from ._logistic_model import LogisticModel # noqa
from ._lotka_volterra_model import LotkaVolterraModel # noqa
from ._multimodal_gaussian import MultimodalGaussianLogPDF # noqa
from ._neals_funnel import NealsFunnelLogPDF # noqa
from ._parabola import ParabolicError # noqa
from ._repressilator_model import RepressilatorModel # noqa
from ._rosenbrock import RosenbrockError, RosenbrockLogPDF # noqa
from ._sho_model import SimpleHarmonicOscillatorModel # noqa
from ._simple_egg_box import SimpleEggBoxLogPDF # noqa
from ._sir_model import SIRModel # noqa
from ._twisted_gaussian_banana import TwistedGaussianLogPDF # noqa
from ._stochastic_degradation_model import StochasticDegradationModel # noqa
| [
"henney@localhost.localdomain"
] | henney@localhost.localdomain |
03bd8be91946ecefcef85ec3815b2aac64eb4c10 | a5a99f646e371b45974a6fb6ccc06b0a674818f2 | /Geometry/HGCalCommonData/python/testHGCalDD4hepV17ShiftReco_cff.py | b91b5bcc3b5a765e452e85505b5af6ee752a7b5d | [
"Apache-2.0"
] | permissive | cms-sw/cmssw | 4ecd2c1105d59c66d385551230542c6615b9ab58 | 19c178740257eb48367778593da55dcad08b7a4f | refs/heads/master | 2023-08-23T21:57:42.491143 | 2023-08-22T20:22:40 | 2023-08-22T20:22:40 | 10,969,551 | 1,006 | 3,696 | Apache-2.0 | 2023-09-14T19:14:28 | 2013-06-26T14:09:07 | C++ | UTF-8 | Python | false | false | 3,329 | py | import FWCore.ParameterSet.Config as cms
# This config came from a copy of 2 files from Configuration/Geometry/python
from Configuration.Geometry.GeometryDD4hep_cff import *
DDDetectorESProducer.confGeomXMLFiles = cms.FileInPath("Geometry/HGCalCommonData/data/dd4hep/testHGCalV17Shift.xml")
from Geometry.TrackerNumberingBuilder.trackerNumberingGeometry_cff import *
from SLHCUpgradeSimulations.Geometry.fakePhase2OuterTrackerConditions_cff import *
from Geometry.EcalCommonData.ecalSimulationParameters_cff import *
from Geometry.HcalCommonData.hcalDDDSimConstants_cff import *
from Geometry.HGCalCommonData.hgcalParametersInitialization_cfi import *
from Geometry.HGCalCommonData.hgcalNumberingInitialization_cfi import *
from Geometry.MuonNumbering.muonGeometryConstants_cff import *
from Geometry.MuonNumbering.muonOffsetESProducer_cff import *
from Geometry.MTDNumberingBuilder.mtdNumberingGeometry_cff import *
# tracker
from Geometry.CommonTopologies.globalTrackingGeometry_cfi import *
from RecoTracker.GeometryESProducer.TrackerRecoGeometryESProducer_cfi import *
from Geometry.TrackerGeometryBuilder.trackerParameters_cff import *
from Geometry.TrackerNumberingBuilder.trackerTopology_cfi import *
from Geometry.TrackerGeometryBuilder.idealForDigiTrackerGeometry_cff import *
trackerGeometry.applyAlignment = False
# calo
from Geometry.CaloEventSetup.HGCalTopology_cfi import *
from Geometry.HGCalGeometry.HGCalGeometryESProducer_cfi import *
from Geometry.CaloEventSetup.CaloTopology_cfi import *
from Geometry.CaloEventSetup.CaloGeometryBuilder_cfi import *
CaloGeometryBuilder = cms.ESProducer("CaloGeometryBuilder",
SelectedCalos = cms.vstring("HCAL",
"ZDC",
"EcalBarrel",
"TOWER",
"HGCalEESensitive",
"HGCalHESiliconSensitive",
"HGCalHEScintillatorSensitive"
)
)
from Geometry.EcalAlgo.EcalBarrelGeometry_cfi import *
from Geometry.HcalEventSetup.HcalGeometry_cfi import *
from Geometry.HcalEventSetup.CaloTowerGeometry_cfi import *
from Geometry.HcalEventSetup.CaloTowerTopology_cfi import *
from Geometry.HcalCommonData.hcalDDDRecConstants_cfi import *
from Geometry.HcalEventSetup.hcalTopologyIdeal_cfi import *
from Geometry.CaloEventSetup.EcalTrigTowerConstituents_cfi import *
from Geometry.EcalMapping.EcalMapping_cfi import *
from Geometry.EcalMapping.EcalMappingRecord_cfi import *
# muon
from Geometry.MuonNumbering.muonNumberingInitialization_cfi import *
from RecoMuon.DetLayers.muonDetLayerGeometry_cfi import *
from Geometry.GEMGeometryBuilder.gemGeometry_cff import *
from Geometry.CSCGeometryBuilder.idealForDigiCscGeometry_cff import *
from Geometry.DTGeometryBuilder.idealForDigiDtGeometry_cff import *
# forward
from Geometry.ForwardGeometry.ForwardGeometry_cfi import *
# timing
from RecoMTD.DetLayers.mtdDetLayerGeometry_cfi import *
from Geometry.MTDGeometryBuilder.mtdParameters_cff import *
from Geometry.MTDNumberingBuilder.mtdNumberingGeometry_cff import *
from Geometry.MTDNumberingBuilder.mtdTopology_cfi import *
from Geometry.MTDGeometryBuilder.mtdGeometry_cfi import *
from Geometry.MTDGeometryBuilder.idealForDigiMTDGeometry_cff import *
mtdGeometry.applyAlignment = False
| [
"sunanda.banerjee@cern.ch"
] | sunanda.banerjee@cern.ch |
26bd13736c8d11c30c34f742b5b9c47f85ab65ca | b3c47795e8b6d95ae5521dcbbb920ab71851a92f | /Leetcode/Algorithm/python/2000/01128-Number of Equivalent Domino Pairs.py | c27c95103c1b64e55092e2f04f8949fd3efbaa18 | [
"LicenseRef-scancode-warranty-disclaimer"
] | no_license | Wizmann/ACM-ICPC | 6afecd0fd09918c53a2a84c4d22c244de0065710 | 7c30454c49485a794dcc4d1c09daf2f755f9ecc1 | refs/heads/master | 2023-07-15T02:46:21.372860 | 2023-07-09T15:30:27 | 2023-07-09T15:30:27 | 3,009,276 | 51 | 23 | null | null | null | null | UTF-8 | Python | false | false | 369 | py | from collections import defaultdict
class Solution(object):
def numEquivDominoPairs(self, dominoes):
d = defaultdict(int)
for (a, b) in dominoes:
key = (min(a, b), max(a, b))
d[key] += 1
res = 0
for key, value in d.items():
res += value * (value - 1) // 2  # floor division keeps the count an int on Python 3 as well
return res
| [
"noreply@github.com"
] | Wizmann.noreply@github.com |
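The solution buckets each domino under an orientation-insensitive `(min, max)` key, then adds C(n, 2) pairs per bucket of n equivalent dominoes. The same idea as a stand-alone function:

```python
from collections import defaultdict

def num_equiv_domino_pairs(dominoes):
    counts = defaultdict(int)
    for a, b in dominoes:
        # Orientation-insensitive key: [1, 2] and [2, 1] collide.
        counts[(min(a, b), max(a, b))] += 1
    # Each bucket of n equivalent dominoes contributes n*(n-1)//2 pairs.
    return sum(n * (n - 1) // 2 for n in counts.values())

print(num_equiv_domino_pairs([[1, 2], [2, 1], [3, 4], [5, 6]]))          # -> 1
print(num_equiv_domino_pairs([[1, 2], [1, 2], [1, 1], [1, 2], [2, 2]]))  # -> 3
```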
d0c7a838d3ac45a39f6e97b95ffc799933ba0a3b | 3414c15e7333e2702818cd81de387a4def13a011 | /discord/message.py | 8981de2cb265cca36b37a28be84535a4595f322c | [
"MIT"
] | permissive | maxpowa/discord.py | b191f50de3ce34a48dcacb9802ee334f84841ae0 | 740b9a95c2a80caac59dd8f0ec6ea0cefa6b731c | refs/heads/async | 2020-12-28T19:57:08.739397 | 2015-12-24T13:24:23 | 2015-12-24T13:40:00 | 54,578,528 | 0 | 1 | null | 2016-03-23T17:08:59 | 2016-03-23T17:08:59 | null | UTF-8 | Python | false | false | 7,336 | py | # -*- coding: utf-8 -*-
"""
The MIT License (MIT)
Copyright (c) 2015 Rapptz
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
from . import utils
from .user import User
from .member import Member
from .object import Object
import re
class Message:
"""Represents a message from Discord.
There should be no need to create one of these manually.
Attributes
-----------
edited_timestamp : Optional[datetime.datetime]
A naive UTC datetime object containing the edited time of the message.
timestamp : datetime.datetime
A naive UTC datetime object containing the time the message was created.
tts : bool
Specifies if the message was done with text-to-speech.
author
A :class:`Member` that sent the message. If :attr:`channel` is a
private channel, then it is a :class:`User` instead.
content : str
The actual contents of the message.
embeds : list
A list of embedded objects. The elements are objects that meet oEmbed's specification_.
.. _specification: http://oembed.com/
channel
The :class:`Channel` that the message was sent from.
Could be a :class:`PrivateChannel` if it's a private message.
In :issue:`very rare cases <21>` this could be a :class:`Object` instead.
For the sake of convenience, this :class:`Object` instance has an attribute ``is_private`` set to ``True``.
server : Optional[:class:`Server`]
The server that the message belongs to. If not applicable (i.e. a PM) then it's None instead.
mention_everyone : bool
Specifies if the message mentions everyone.
.. note::
This does not check if the ``@everyone`` text is in the message itself.
Rather this boolean indicates if the ``@everyone`` text is in the message
**and** it did end up mentioning everyone.
mentions : list
A list of :class:`Member` that were mentioned. If the message is in a private message
then the list is always empty.
.. warning::
The order of the mentions list is not in any particular order so you should
not rely on it. This is a discord limitation, not one with the library.
channel_mentions : list
A list of :class:`Channel` that were mentioned. If the message is in a private message
then the list is always empty.
id : str
The message ID.
attachments : list
A list of attachments given to a message.
"""
def __init__(self, **kwargs):
# at the moment, the timestamps seem to be naive so they have no time zone and operate on UTC time.
# we can use this to our advantage to use strptime instead of a complicated parsing routine.
# example timestamp: 2015-08-21T12:03:45.782000+00:00
# sometimes the .%f modifier is missing
self.edited_timestamp = utils.parse_time(kwargs.get('edited_timestamp'))
self.timestamp = utils.parse_time(kwargs.get('timestamp'))
self.tts = kwargs.get('tts')
self.content = kwargs.get('content')
self.mention_everyone = kwargs.get('mention_everyone')
self.embeds = kwargs.get('embeds')
self.id = kwargs.get('id')
self.channel = kwargs.get('channel')
self.author = User(**kwargs.get('author', {}))
self.attachments = kwargs.get('attachments')
self._handle_upgrades(kwargs.get('channel_id'))
self._handle_mentions(kwargs.get('mentions', []))
def _handle_mentions(self, mentions):
self.mentions = []
self.channel_mentions = []
if getattr(self.channel, 'is_private', True):
return
if self.channel is not None:
for mention in mentions:
id_search = mention.get('id')
member = utils.find(lambda m: m.id == id_search, self.server.members)
if member is not None:
self.mentions.append(member)
if self.server is not None:
for mention in self.raw_channel_mentions:
channel = utils.find(lambda m: m.id == mention, self.server.channels)
if channel is not None:
self.channel_mentions.append(channel)
@utils.cached_property
def raw_mentions(self):
"""A property that returns an array of user IDs matched with
the syntax of <@user_id> in the message content.
This allows you receive the user IDs of mentioned users
even in a private message context.
"""
return re.findall(r'<@([0-9]+)>', self.content)
@utils.cached_property
def raw_channel_mentions(self):
"""A property that returns an array of channel IDs matched with
the syntax of <#channel_id> in the message content.
This allows you receive the channel IDs of mentioned users
even in a private message context.
"""
return re.findall(r'<#([0-9]+)>', self.content)
@utils.cached_property
def clean_content(self):
"""A property that returns the content in a "cleaned up"
manner. This basically means that mentions are transformed
into the way the client shows it. e.g. ``<#id>`` will transform
into ``#name``.
"""
transformations = {
re.escape('<#{0.id}>'.format(channel)): '#' + channel.name
for channel in self.channel_mentions
}
mention_transforms = {
re.escape('<@{0.id}>'.format(member)): '@' + member.name
for member in self.mentions
}
transformations.update(mention_transforms)
def repl(obj):
return transformations.get(re.escape(obj.group(0)), '')
pattern = re.compile('|'.join(transformations.keys()))
return pattern.sub(repl, self.content)
def _handle_upgrades(self, channel_id):
self.server = None
if self.channel is None:
if channel_id is not None:
self.channel = Object(id=channel_id)
self.channel.is_private = True
return
if not self.channel.is_private:
self.server = self.channel.server
found = utils.find(lambda m: m.id == self.author.id, self.server.members)
if found is not None:
self.author = found
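The `clean_content` property above folds all mention replacements into one regex pass by joining the escaped keys into a single alternation pattern. A standalone sketch of that multi-replacement technique (the IDs and display names here are made up):

```python
import re

# Hypothetical mention tokens mapped to display names:
transformations = {
    re.escape('<@123>'): '@alice',
    re.escape('<#456>'): '#general',
}

def clean(content):
    # One compiled pattern matches every key; the replacement function
    # looks the matched token back up in the transformation dict.
    pattern = re.compile('|'.join(transformations.keys()))
    return pattern.sub(lambda m: transformations.get(re.escape(m.group(0)), ''), content)

print(clean('hi <@123>, welcome to <#456>'))  # hi @alice, welcome to #general
```

Doing the substitution in a single pass avoids one replacement's output being re-scanned by a later replacement, which is why the original builds the pattern from all keys at once.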
| [
"rapptz@gmail.com"
] | rapptz@gmail.com |
e202015319e99b6a80ad917d5e2b370f2dd271a5 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/nouns/_morphine.py | 943506387fc8ca7d5796f182ab71f36d81361708 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 375 | py |
# class header
class _MORPHINE():
    def __init__(self):
        self.name = "MORPHINE"
        self.definitions = [u'a drug made from opium, used to stop people from feeling pain or to make people feel calmer']
        self.parents = []
        self.children = []
        self.properties = []
        self.jsondata = {}
        self.specie = 'nouns'

    def run(self, obj1=[], obj2=[]):
        return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
dd2ffa10dfcbccf5bba097a474bc3960747cc90a | e17483ba000de9c6135e26ae6c09d9aa33004574 | /ipynbs/python性能优化/使用Cython优化python程序的性能/使用cython加速python源码/静态化python模块/logistic_A.py | fd14d103b55e5a2a96dda6457df38711fa0c5c6f | [
"Apache-2.0"
] | permissive | HAOzj/TutorialForPython | 27ae50c6b9fb3289ae7f67b8106d3d4996d145a7 | df7a6db94b77f4861b11966399f5359d00911a16 | refs/heads/master | 2020-03-17T09:19:45.199165 | 2018-04-02T13:33:27 | 2018-04-02T13:33:27 | 133,470,105 | 1 | 0 | null | 2018-05-15T06:35:01 | 2018-05-15T06:35:01 | null | UTF-8 | Python | false | false | 252 | py | #cython: language_level=3
import cython
from math import exp
if cython.compiled:
print("Yep, I'm compiled.")
else:
print("Just a lowly interpreted script.")
@cython.boundscheck(False)
@cython.ccall
def logistic(x):
return 1/(1+exp(-x)) | [
"hsz1273327@gmail.com"
] | hsz1273327@gmail.com |
d45c8e5024e20d5a3c8cfd01ad650dea3b3917cc | c08cfe3c8feb5b04314557481e1f635cd20750cd | /write_idea.py | 87c6e69ed46acc09c49be0427a27da1c187f448f | [] | no_license | steinbachr/write-idea | 80c2d52d5f581174583902254b2177c2a058e016 | 2110f92872913db5f385efa828ab244a31945027 | refs/heads/master | 2016-09-06T13:49:07.931996 | 2014-10-01T16:34:25 | 2014-10-01T16:34:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,738 | py | from bs4 import BeautifulSoup
import csv
import random
import requests
#in the future, we can have this passed in to the program from command line
MAX_TO_CHOOSE = 3
MAX_ATTEMPTS_TO_GET_DEFINITIONS = 3
def get_random_words(max_to_choose):
"""
get one or more random words from the dictionary
:return: list of words of at most size max_to_choose as chosen from the dictionary
"""
all_words = []
with open('dictionary.csv', 'rb') as dictionary:
for word in dictionary:
all_words.append(word.strip().replace("\n", ""))
to_choose = random.randint(1, max_to_choose)
return random.sample(all_words, to_choose)
def get_definition_from_merriam(word):
"""
for the given word, get its definition from merriam webster
:param word: str the word to get the definition for
:return: str the definition of the word
"""
api_key = 'c2b1a784-7bd2-4efe-b2c9-b328ce42ed4e'
api_url = 'http://www.dictionaryapi.com/api/v1/references/collegiate/xml/{word}?key={key}'.format(word=word, key=api_key)
resp = requests.get(api_url)
definition = None
try:
soup = BeautifulSoup(resp.content)
definition = soup.dt.find(text=True).strip(":")
except Exception:
pass
return definition
print "\n\nWords Of The Day:"
print ">>>>>>>>>>>>>>>>>>>>>>>>>>>"
definitions = {}
i = 0
while len(definitions) < 1 and i < MAX_ATTEMPTS_TO_GET_DEFINITIONS:
for word in get_random_words(MAX_TO_CHOOSE):
definition = get_definition_from_merriam(word)
if definition:
definitions[word] = definition
i += 1
print "\n".join(["{word}: {defn}".format(word=word, defn=definition) for word, definition in definitions.items()])
| [
"steinbach.rj@gmail.com"
] | steinbach.rj@gmail.com |
33ee4f9be70261fa3f9375fad072eb4c621512dc | 36620131b411892abf1072694c3ac39b0da6d75e | /object_detection/models/tta.py | 0a110ef2584abf9300f527600d7f242c8c53c785 | [] | no_license | zhangyahui520/object-detection | 82f3e61fb0f6a39881b9ed3a750478a095023eff | b83b55a05e911d5132c79e3f0029a449e99f948d | refs/heads/master | 2022-12-23T17:39:19.281547 | 2020-09-21T08:40:53 | 2020-09-21T08:40:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,001 | py | import torch
from functools import partial
from torch import nn
from typing import Tuple, List, Callable
from object_detection.entities import (
YoloBoxes,
Confidences,
ImageBatch,
yolo_hflip,
yolo_vflip,
)
class HFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(3,))
self.to_boxes = to_boxes
self.box_transform = yolo_hflip
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch]
return box_batch, conf_batch
class VHFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(2, 3))
self.to_boxes = to_boxes
self.box_transform = lambda x: yolo_vflip(yolo_hflip(x))
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch] # type:ignore
return box_batch, conf_batch
class VFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(2,))
self.to_boxes = to_boxes
self.box_transform = yolo_vflip
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch]
return box_batch, conf_batch
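Why running the model on a flipped image and then flipping the predicted boxes back recovers boxes for the original image: the box transform is its own inverse. A stdlib sketch, assuming `yolo_hflip` maps a normalized `(cx, cy, w, h)` box to `(1 - cx, cy, w, h)` (the real transform lives in `object_detection.entities` and is not shown in this file):

```python
def hflip_box(box):
    # Assumed behaviour of yolo_hflip on one normalized (cx, cy, w, h) box.
    cx, cy, w, h = box
    return (1.0 - cx, cy, w, h)

# Values chosen to be exactly representable in floating point.
box = (0.25, 0.5, 0.125, 0.25)
assert hflip_box(hflip_box(box)) == box  # applying the flip twice is the identity
```

The same involution argument applies to the vertical flip, and the VH variant composes the two.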
| [
"yao.ntno@gmail.com"
] | yao.ntno@gmail.com |
019b805bb5bfb35d449a227190521a6eeb6c52fb | 3f42c5e33e58921754000b41db0156d0def70cf3 | /Snakefile | f5d32c6189ffed86ddbee3452a9118303866f682 | [] | no_license | SilasK/oldSRA_download | ab2536708f513e583ab012b747688211ca302779 | 96b121b53a8008b78d6d7f7b42c6ff13ef427ab9 | refs/heads/master | 2022-02-09T00:32:41.101141 | 2019-06-06T12:16:58 | 2019-06-06T12:16:58 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,930 |
import pandas as pd
SRR_list = pd.read_csv(config['url_table'],sep='\t',index_col=0).index
if 'outdir' in config:
outdir = config['outdir']
else:
outdir= config['url_table'].replace("_info.tab.txt",'')
rule paired:
input:
expand("{outdir}/{SRR}_{direction}.fastq.gz",SRR=SRR_list,outdir=outdir,direction=['R1','R2']),
#expand("{outdir}/{SRR}.msh",SRR=SRR_list,outdir=outdir)
rule single:
input:
expand("{outdir}/{SRR}.fastq.gz",SRR=SRR_list,outdir=outdir),
rule download_SRR_single:
output:
"{outdir}/{SRR}.fastq.gz",
wildcard_constraints:
SRR="[A-Z0-9]+"
params:
outdir=outdir
threads:
4
conda:
"envs/download.yaml"
shell:
"parallel-fastq-dump --sra-id {wildcards.SRR} --threads {threads} --gzip --outdir {params.outdir}"
rule download_SRR_paired:
output:
"{outdir}/{SRR}_1.fastq.gz",
"{outdir}/{SRR}_2.fastq.gz"
params:
outdir=outdir
#wildcard_constraints:
# SRR="SRR[A-Z0-9]+"
threads:
4
conda:
"envs/download.yaml"
shell:
"parallel-fastq-dump --sra-id {wildcards.SRR} --threads {threads} --gzip --split-files --outdir {params.outdir}"
localrules: rename_SRR
rule rename_SRR:
input:
"{outdir}/{SRR}_1.fastq.gz",
"{outdir}/{SRR}_2.fastq.gz"
output:
"{outdir}/{SRR}_R1.fastq.gz",
"{outdir}/{SRR}_R2.fastq.gz"
threads:
1
shell:
"mv {input[0]} {output[0]}; "
"mv {input[1]} {output[1]}"
rule sketch_reads:
input:
"{folder}/{sample}_R1.fastq.gz",
"{folder}/{sample}_R2.fastq.gz"
output:
"{folder}/{sample}.msh"
params:
prefix= lambda wc, output: os.path.splitext(output[0])[0]
conda:
"envs/mash.yaml"
threads:
4
shell:
"mash sketch -p {threads} -o {params.prefix} {input}"
| [
"silas.kieser@gmail.com"
] | silas.kieser@gmail.com | |
f70aa53f367b064e21d312d73f67784a25bcfafa | 08bd0c20e99bac54760441de061bb74818837575 | /0x0A-python-inheritance/1-my_list.py | 5f43596d29f2a834cffdc0ceb90333663c1b3464 | [] | no_license | MachinEmmus/holbertonschool-higher_level_programming | 961874eb51e06bc823911e943573123ead0483e5 | 2b7ebe4dc2005db1fe0ca4c330c0bd00897bb157 | refs/heads/master | 2021-08-08T19:43:48.933704 | 2020-10-09T22:22:17 | 2020-10-09T22:22:17 | 226,937,346 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 168 | py | #!/usr/bin/python3
class MyList(list):
"""Class MyList hereda of List"""
def print_sorted(self):
"""Print a sorted list"""
print(sorted(self))
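A quick usage sketch (the class is restated so the snippet runs on its own): `print_sorted` prints a sorted copy and leaves the list itself unchanged.

```python
class MyList(list):
    """Class MyList inherits from list"""
    def print_sorted(self):
        """Print a sorted copy of the list"""
        print(sorted(self))

ml = MyList([3, 1, 2])
ml.print_sorted()  # prints [1, 2, 3]
print(ml)          # prints [3, 1, 2]
```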
| [
"emonsalvep38@gmail.com"
] | emonsalvep38@gmail.com |
8e6b88adc39dca84b244727245ff6e71b2828f61 | 0ce040a6ed4bc4ef131da9cb3df5672d995438fc | /apps/auth_ext/templatetags/url_login_tags.py | 5d696407431609c12da1fec012dcb3f912841cfc | [
"MIT"
] | permissive | frol/Fling-receiver | 59b553983b345312a96c11aec7e71e1c83ab334f | e4312ce4bd522ec0edfbfe7c325ca59e8012581a | refs/heads/master | 2016-09-05T16:10:28.270931 | 2014-02-10T07:52:51 | 2014-02-10T07:52:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 341 | py | from django.conf import settings
if 'coffin' in settings.INSTALLED_APPS:
from coffin.template import Library
else:
from django.template import Library
from auth_ext_15.models import UrlLoginToken
register = Library()
@register.simple_tag
def url_login_token(user):
return "url_login_token=%s" % UrlLoginToken.get_token(user)
| [
"frolvlad@gmail.com"
] | frolvlad@gmail.com |
22fcaae921154451a61d72f1c2467f394839b037 | 09e57dd1374713f06b70d7b37a580130d9bbab0d | /data/p2DJ/New/program/qiskit/class/startQiskit_Class29.py | fa2c3be3d8e26646725aac76f06a2446786f107b | [
"BSD-3-Clause"
] | permissive | UCLA-SEAL/QDiff | ad53650034897abb5941e74539e3aee8edb600ab | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | refs/heads/main | 2023-08-05T04:52:24.961998 | 2021-09-19T02:56:16 | 2021-09-19T02:56:16 | 405,159,939 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,963 | py | # qubit number=2
# total number=6
import cirq
import qiskit
from qiskit import IBMQ
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import BasicAer, execute, transpile
from pprint import pprint
from qiskit.test.mock import FakeVigo
from math import log2,floor, sqrt, pi
import numpy as np
import networkx as nx
def build_oracle(n: int, f) -> QuantumCircuit:
# implement the oracle O_f^\pm
# NOTE: use U1 gate (P gate) with \lambda = 180 ==> CZ gate
# or multi_control_Z_gate (issue #127)
controls = QuantumRegister(n, "ofc")
target = QuantumRegister(1, "oft")
oracle = QuantumCircuit(controls, target, name="Of")
for i in range(2 ** n):
rep = np.binary_repr(i, n)
if f(rep) == "1":
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
oracle.mct(controls, target[0], None, mode='noancilla')
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.barrier()
# oracle.draw('mpl', filename='circuit/deutsch-oracle.png')
return oracle
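The loop in `build_oracle` flips every control qubit whose bit is 0, so the multi-controlled gate fires exactly when the register equals that input. A stdlib sketch of which controls get X gates, with `format` standing in for `np.binary_repr` (no qiskit needed):

```python
def x_gate_positions(n, f):
    """For each bitstring with f(bits) == '1', list the control qubits that
    get X gates (the zero bits), mirroring the loop in build_oracle."""
    table = {}
    for i in range(2 ** n):
        bits = format(i, '0{}b'.format(n))  # stdlib stand-in for np.binary_repr(i, n)
        if f(bits) == "1":
            table[bits] = [j for j in range(n) if bits[j] == "0"]
    return table

print(x_gate_positions(2, lambda rep: rep[-1]))  # {'01': [0], '11': []}
```

For `f = rep[-1]` the oracle acts on inputs 01 and 11; input 01 needs an X on control 0 before and after the multi-controlled gate, while 11 needs none.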
def make_circuit(n:int,f) -> QuantumCircuit:
# circuit begin
input_qubit = QuantumRegister(n, "qc")
target = QuantumRegister(1, "qt")
prog = QuantumCircuit(input_qubit, target)
# inverse last one (can be omitted if using O_f^\pm)
prog.x(target)
# apply H to get superposition
for i in range(n):
prog.h(input_qubit[i])
prog.h(input_qubit[1]) # number=1
prog.h(target)
prog.barrier()
# apply oracle O_f
oracle = build_oracle(n, f)
prog.append(
oracle.to_gate(),
[input_qubit[i] for i in range(n)] + [target])
# apply H back (QFT on Z_2^n)
for i in range(n):
prog.h(input_qubit[i])
prog.barrier()
# measure
prog.y(input_qubit[1]) # number=2
prog.y(input_qubit[1]) # number=3
prog.cx(input_qubit[1],input_qubit[0]) # number=4
prog.cx(input_qubit[1],input_qubit[0]) # number=5
# circuit end
return prog
if __name__ == '__main__':
n = 2
f = lambda rep: rep[-1]
# f = lambda rep: "1" if rep[0:2] == "01" or rep[0:2] == "10" else "0"
# f = lambda rep: "0"
prog = make_circuit(n, f)
sample_shot =2800
backend = BasicAer.get_backend('statevector_simulator')
circuit1 = transpile(prog,FakeVigo())
circuit1.x(qubit=3)
circuit1.x(qubit=3)
prog = circuit1
info = execute(prog, backend=backend).result().get_statevector()
qubits = round(log2(len(info)))
info = {
np.binary_repr(i, qubits): round((info[i]*(info[i].conjugate())).real,3)
for i in range(2 ** qubits)
}
writefile = open("../data/startQiskit_Class29.csv","w")
print(info,file=writefile)
print("results end", file=writefile)
print(circuit1.depth(),file=writefile)
print(circuit1,file=writefile)
writefile.close()
| [
"wangjiyuan123@yeah.net"
] | wangjiyuan123@yeah.net |
dea529b9c44b8eae30d6e13b1b93487a76945c7d | ea4e3ac0966fe7b69f42eaa5a32980caa2248957 | /download/unzip/pyobjc/pyobjc-14/pyobjc/stable/PyOpenGL-2.0.2.01/src/shadow/WGL.ARB.buffer_region.0100.py | e5a49cc1445d9af64e1e0388088e5cd8a8402c82 | [] | no_license | hyl946/opensource_apple | 36b49deda8b2f241437ed45113d624ad45aa6d5f | e0f41fa0d9d535d57bfe56a264b4b27b8f93d86a | refs/heads/master | 2023-02-26T16:27:25.343636 | 2020-03-29T08:50:45 | 2020-03-29T08:50:45 | 249,169,732 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,838 | py | # This file was created automatically by SWIG.
# Don't modify this file, modify the SWIG interface instead.
# This file is compatible with both classic and new-style classes.
import _buffer_region
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "this"):
if isinstance(value, class_type):
self.__dict__[name] = value.this
if hasattr(value,"thisown"): self.__dict__["thisown"] = value.thisown
del value.thisown
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static) or hasattr(self,name) or (name == "thisown"):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError,name
import types
try:
_object = types.ObjectType
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
del types
__version__ = _buffer_region.__version__
__date__ = _buffer_region.__date__
__api_version__ = _buffer_region.__api_version__
__author__ = _buffer_region.__author__
__doc__ = _buffer_region.__doc__
wglInitBufferRegionARB = _buffer_region.wglInitBufferRegionARB
__info = _buffer_region.__info
wglCreateBufferRegionARB = _buffer_region.wglCreateBufferRegionARB
wglDeleteBufferRegionARB = _buffer_region.wglDeleteBufferRegionARB
wglSaveBufferRegionARB = _buffer_region.wglSaveBufferRegionARB
wglRestoreBufferRegionARB = _buffer_region.wglRestoreBufferRegionARB
| [
"hyl946@163.com"
] | hyl946@163.com |
8012ec6497a89fc0ec225279447a91a004faebf2 | 07a9d52a91df135c82660c601811e3f623fe440c | /timereporter/commands/command.py | ecc3bff895ea285dbf057c436d9fa73495511e68 | [] | no_license | Godsmith/timereporter | ea9e622db880721bbf82f17e3e004434878ccd51 | 384ad973ea913a77593f665c97b337b54bc09b4a | refs/heads/master | 2021-07-05T09:14:28.327572 | 2021-06-15T21:31:52 | 2021-06-15T21:31:52 | 102,777,809 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,649 | py | import datetime
from typing import Union, Dict, Tuple
from datetime import date
from typing import List
from timereporter.calendar import Calendar
from timereporter.mydatetime import timedelta
from timereporter.views.view import View
from timereporter.views.console_week_view import ConsoleWeekView
class Command:
TIMEDELTA = timedelta(weeks=1)
WRITE_TO_DISK = True
def __init__(self, calendar: Calendar, date_: date, args: Union[list, str]):
self.calendar = calendar
self.date = date_
self.args = args
# TODO: use the new argument splitter method instead
if isinstance(self.args, str):
self.args = self.args.split()
if "last" in self.args:
self.date -= self.TIMEDELTA
elif "next" in self.args:
self.date += self.TIMEDELTA
self.options = self._parse_options()
def _parse_options(self) -> Dict[str, str]:
options = {}
new_args = []
assert isinstance(self.args, list)
for arg in self.args:
if arg.startswith("--"):
name = arg.split("=")[0]
value = arg.split("=")[1] if "=" in arg else True
options[name] = value
if name not in self.valid_options():
raise UnexpectedOptionError(name)
else:
new_args.append(arg)
self.args = new_args
return options
@classmethod
def can_handle(cls, args) -> bool:
args = [arg for arg in args if not arg.startswith("--")]
args = [arg for arg in args if arg not in ("last", "next")]
return cls._can_handle(args)
@classmethod
def _can_handle(cls, args: List[str]) -> bool:
raise NotImplementedError
def valid_options(self) -> List[str]:
return []
def execute(
self, created_at: datetime.datetime = datetime.datetime.now()
) -> Tuple[Calendar, View]:
return self.new_calendar(created_at), self.view()
def view(self) -> View:
return ConsoleWeekView(self.date)
def new_calendar(self, created_at: datetime.datetime) -> Calendar:
return self.calendar
class CommandError(Exception):
pass
class UnexpectedOptionError(CommandError):
"""Raised when there is an option not expected by the command."""
def __init__(self, option: Union[str, list]):
suffix = ""
if isinstance(option, list):
option = ", ".join(option)
suffix = "s"
# TODO: this should print the help for the command instead
super().__init__(f"Error: unexpected option{suffix}: {option}")
| [
"filip.lange@gmail.com"
] | filip.lange@gmail.com |
538ac952035739975c914adbf70b6b475a3e7114 | 05b42178aaefd7efdb2fb19fdea8e58056d8d4bd | /geeksforgeeks/graph/bfs/recursive/test.py | 5b140e1541e1d4ea37276028ba914fdf1ee91a6a | [] | no_license | chrisjdavie/interview_practice | 43ca3df25fb0538d685a59ac752a6a4b269c44e9 | 2d47d583ed9c838a802b4aa4cefe649c77f5dd7f | refs/heads/master | 2023-08-16T18:22:46.492623 | 2023-08-16T16:04:01 | 2023-08-16T16:04:01 | 247,268,317 | 0 | 0 | null | 2020-03-14T17:35:12 | 2020-03-14T12:01:43 | Python | UTF-8 | Python | false | false | 1,403 | py | from unittest import TestCase
from run import initialise_graph, bfs
class TestExamples(TestCase):
def test_raises(self):
N = 201
edges = [(0, i+1) for i in range(N)]
graph = initialise_graph(edges)
with self.assertRaises(ValueError):
bfs(graph, N)
def test_example_1(self):
num_nodes = 5
edges = [(0, 1), (0, 2), (0, 3), (2, 4)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2, 3, 4])
def test_example_2(self):
num_nodes = 3
edges = [(0, 1), (0, 2)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2])
def test_circular(self):
# it's a graph not a tree!
num_nodes = 2
edges = [(0, 1), (1, 0)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1])
def test_circular_not_root(self):
# it's a graph not a tree!
num_nodes = 2
edges = [(0, 1), (1, 2), (2, 1)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2])
def test_double_link(self):
# it's a graph not a tree!
num_nodes = 2
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2, 3])
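`run.py` is not included in this chunk; a hypothetical sketch of an interface consistent with these tests (the 200-node cap in `bfs` is inferred from `test_raises` and may differ from the real module):

```python
from collections import deque

MAX_NODES = 200  # hypothetical cap, inferred from test_raises above

def initialise_graph(edges):
    """Build a directed adjacency mapping from (u, v) edge pairs."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    return graph

def bfs(graph, num_nodes):
    """Breadth-first visit order from node 0; a seen set handles cycles."""
    if num_nodes > MAX_NODES:
        raise ValueError("too many nodes")
    order, seen, queue = [], {0}, deque([0])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order
```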
| [
"cjdavie@googlemail.com"
] | cjdavie@googlemail.com |
ad3a856cb3f801a06193c47798147f8d62bf9219 | 6cecdc007a3aafe0c0d0160053811a1197aca519 | /apps/reports/templatetags/blacklist_tags.py | aeaa55dde3ba57c9a67cbc0f798f30964de013e5 | [] | no_license | commtrack/temp-aquatest | 91d678c927cc4b2dce6f709afe7faf2768b58157 | 3b10d179552b1e9d6a0e4ad5e91a92a05dba19c7 | refs/heads/master | 2016-08-04T18:06:47.582196 | 2010-09-29T13:20:13 | 2010-09-29T13:20:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,007 | py | from datetime import timedelta
from django import template
from hq.utils import build_url as build_url_util
register = template.Library()
@register.simple_tag
def build_device_url(domain, device_id):
"""Builds the link on the report when you click on the device
to get a filtered view of the metadata."""
return build_url_util("/reports/%s/custom/metadata?filter_deviceid=%s" %\
(domain.id, device_id))
@register.simple_tag
def build_count_url(domain, device_id, date):
"""Builds the link on the report when you click on the device
to get a filtered view of the metadata."""
# this is pretty ugly, but one way to get the URL's working in email
day_after_date = date + timedelta(days=1)
return build_url_util\
("/reports/%s/custom/metadata?filter_deviceid=%s&filter_timeend__gte=%s&filter_timeend__lte=%s" \
% (domain.id, device_id, date, day_after_date))
| [
"allen.machary@gmail.com"
] | allen.machary@gmail.com |
1d0cb7bf6626f99b695aa23d6257d11309fdb68b | 260817cebb942bf825e1f68f4566240f097419d5 | /day21/7.把模块当成脚本来使用.py | cf7d75246dd9335c79d04127f427d6c0f5b74a18 | [] | no_license | microease/old-boys-python-15 | ff55d961192a0b31aa8fd33a548f161497b12785 | 7e9c5f74201db9ea26409fb9cfe78277f93e360e | refs/heads/master | 2020-05-19T10:32:21.022217 | 2019-11-25T17:00:54 | 2019-11-25T17:00:54 | 184,972,964 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 876 | py | # by luffycity.com
# Ways to execute a .py file:
# run it in cmd / in python : execute the file directly - running the file as a script
# import the file
# both are .py files
# run the file directly and the file is a script
# import the file and the file is a module
# import re
# import time
#
# import my_module
# import calculate
#
# ret = calculate.main('1*2+3')
# print(ret)
# When a .py file
# is used as a script : it provides a feature on its own and can handle interaction by itself
# is used as a module : the feature can be imported and called, but the file does not interact on its own
# The __name__ variable in a file
# when the file is executed as a script, __name__ == '__main__'
# when the file is imported as a module, __name__ == 'the name of the module'
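A self-contained sketch of the guard described above (`main` here is a simple stand-in for `calculate.main`):

```python
def main(expression):
    """Stand-in for calculate.main: handle a simple 'a+b' expression."""
    left, right = expression.split('+')
    return int(left) + int(right)

if __name__ == '__main__':
    # This branch runs only when the file is executed as a script;
    # importing the file as a module skips it.
    print(main('1+2'))  # prints 3
```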
import calculate
print(calculate.main('1+2')) | [
"microease@163.com"
] | microease@163.com |
9551710717c579f61e12f178765ee0e9b2e661a6 | 0376a3528032dc8637123eb4307fac53fe33c631 | /openstack/_hacking.py | 94952630c61025d4208e5f250312451351a2910a | [
"Apache-2.0"
] | permissive | FrontSide/openstacksdk | d5a7461721baf67d3590d2611538620b15939999 | 9fc0fdaed3114f06c7fc90ce8cf338c5ae01df2f | refs/heads/master | 2021-02-10T21:14:37.463978 | 2020-02-26T17:23:45 | 2020-02-26T17:23:45 | 244,419,992 | 0 | 0 | Apache-2.0 | 2020-03-02T16:32:40 | 2020-03-02T16:32:39 | null | UTF-8 | Python | false | false | 1,401 | py | # Copyright (c) 2019, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
"""
Guidelines for writing new hacking checks
- Use only for openstacksdk specific tests. OpenStack general tests
should be submitted to the common 'hacking' module.
- Pick numbers in the range O3xx. Find the current test with
the highest allocated number and then pick the next value.
- Keep the test method code in the source file ordered based
on the O3xx value.
- List the new rule in the top level HACKING.rst file
- Add test cases for each new rule to nova/tests/unit/test_hacking.py
"""
SETUPCLASS_RE = re.compile(r"def setUpClass\(")
def assert_no_setupclass(logical_line):
"""Check for use of setUpClass
O300
"""
if SETUPCLASS_RE.match(logical_line):
yield (0, "O300: setUpClass not allowed")
def factory(register):
register(assert_no_setupclass)
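A sketch of what an additional check following the guidelines above might look like (O301 is a hypothetical rule number, not a real openstacksdk rule; a real check would also be registered in `factory` and listed in the top level HACKING.rst):

```python
import re

MUTABLE_DEFAULT_RE = re.compile(r"def \w+\(.*=(\[\]|\{\})")

def assert_no_mutable_defaults(logical_line):
    """Check for use of mutable default arguments

    O301
    """
    if MUTABLE_DEFAULT_RE.search(logical_line):
        yield (0, "O301: mutable default argument not allowed")

print(list(assert_no_mutable_defaults("def f(x=[]):")))
# [(0, 'O301: mutable default argument not allowed')]
print(list(assert_no_mutable_defaults("def f(x=None):")))
# []
```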
| [
"mordred@inaugust.com"
] | mordred@inaugust.com |
4b983a60a39a1f46e6e9b4ca7b66a505cf8aaedf | 7698a74a06e10dd5e1f27e6bd9f9b2a5cda1c5fb | /zzz.scripts_from_reed/getposesfast.py | b472fb639bae241dfbfb02f717d2a318ea0f5f08 | [] | no_license | kingbo2008/teb_scripts_programs | ef20b24fe8982046397d3659b68f0ad70e9b6b8b | 5fd9d60c28ceb5c7827f1bd94b1b8fdecf74944e | refs/heads/master | 2023-02-11T00:57:59.347144 | 2021-01-07T17:42:11 | 2021-01-07T17:42:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,897 | py |
# This script is written by Reed Stein and Trent Balius in April, 2017.
# This a fast get poses script
import os
import sys
import gzip
def get_zinc_names_and_chunks_from_extract_all_sort_uniq(filename,N):
print "running function: get_zinc_names_and_chunks_from_extract_all_sort_uniq"
fh = open(filename,'r')
zinc_dic = {}
count = 0
for line in fh:
splitline = line.split()
zincid = splitline[2]
chunk = splitline[0]
zinc_dic[zincid] = chunk
#print chunk, zincid
count = count + 1
if count > N:
break
return zinc_dic
def process_gz(filename, lig_dict, chunk, zinclist):
print "running function: process_gz"
#this function collects the name, grids, and total energy of the best scoring pose of each ligand
lig_list = []
ligandFile = gzip.open(filename,'rb')
lig_line = []
line_count = 0
for line in ligandFile:
if len(lig_line) == 3:
lig_list.append(lig_line)
lig_line = []
line1 = line.strip().split()
if len(line1) > 1:
if line1[1] == "Name:":
lig_line.append(line_count)
lig_line.append(line1[2])
elif line1[1] == "Total":
lig_line.append(line1[3])
line_count += 1
for comp in lig_list:
line_num = comp[0]
name = comp[1]
tot_e = float(comp[2])
if not (name in zinclist): #skip any ligand not in list (dictionary)
#print name
continue
if not (name in lig_dict):
lig_dict[name] = [line_num, tot_e, chunk]
#print name, line_num, tot_e
elif name in lig_dict and lig_dict[name][1] > tot_e:
lig_dict[name][0] = line_num
lig_dict[name][1] = tot_e
lig_dict[name][2] = chunk
#print "update", name, line_num, tot_e
return lig_dict
def write_out_poses(lig_dict, lig_dir):
print "running function: write_out_poses"
line_number_dict = {}
for name in lig_dict:
line_num = lig_dict[name][0]
#chunk_num = lig_dict[name][2].split("chunk")[-1]
dirname = lig_dict[name][2]
#if chunk_num in line_number_dict:
if dirname in line_number_dict:
line_number_dict[dirname].append(line_num)
else:
line_number_dict[dirname] = [line_num]
output = open("poses.mol2",'w')
for dirname_l in line_number_dict:
#print dirname_l
line_number_list = line_number_dict[dirname_l]
#print line_number_dict[dirname_l]
db2_gz_file = lig_dir+dirname_l+"/test.mol2.gz"
ligandFile = gzip.open(db2_gz_file,'rb')
new_line_count = 0
name_found = False
header_flag = False
for new_line in ligandFile:
splitline = new_line.strip().split()
      if new_line_count in line_number_list: # line is in the list so set flag to True for writing out
#print(new_line)
if len(splitline) > 1:
if splitline[1] == "Name:":
name_found = True
#output.write(new_line)
      else: # otherwise, see if you have reached a new pose; if it is not in the list then stop writing.
if (len(splitline) > 1):
if splitline[1] == "Name:" and name_found and new_line_count not in line_number_list:
name_found = False
if new_line[0] == "#" and not header_flag:
#print new_line
header_flag = True
header = ''
header=header+new_line
new_line_count += 1
continue
if new_line[0] == "#":
header=header+new_line
new_line_count += 1
continue
if name_found:
        if header_flag: # line 91. if the header flag is true and it is no longer in the header and it is a pose you want to write, write the header first.
output.write(header)
output.write(new_line)
if new_line[0] != "#": # if line does not start with a # symbol then set header flag to false. setting to false must go after line 91.
header_flag = False
new_line_count += 1
ligandFile.close()
output.close()
def main():
if (len(sys.argv) != 4):
print "Give this script 3 inputs:"
print "(1) path to where docking is located. "
print "(2) path to where the extract all file is. "
print "(3) number of molecules (poses) to get. "
exit()
docking_dir = sys.argv[1]
#extractname = sys.argv[2]
extractfile = sys.argv[2]
number_of_poses = int(sys.argv[3])
print "docking_dir: "+docking_dir
#print "extractname: "+extractname
#extractfile = docking_dir+extractname
print "extract file path: "+extractfile
print "number_of_poses: "+str(number_of_poses)
#os.chdir(lig_dir)
#if os.path.isfile(lig_dir+"poses.mol2"):
#if os.path.isfile(docking_dir+"poses.mol2"):
if os.path.isfile("poses.mol2"):
print "poses.mol2 already exists. Quitting."
sys.exit()
#extractfile = docking_dir+"extract_all.sort.uniq.txt"
if not os.path.isfile(extractfile):
print "there needs to be an extract_all.sort.uniq.txt. "
exit()
zinc_dic = get_zinc_names_and_chunks_from_extract_all_sort_uniq(extractfile,number_of_poses)
#print zinc_dic.keys()
#chunk_list = [name for name in os.listdir(".") if os.path.isdir(name) and name[0:5] == "chunk"]
chunk_dic = {}
chunk_list = []
for key in zinc_dic:
if not (zinc_dic[key] in chunk_dic):
chunk_list.append(zinc_dic[key])
chunk_dic[zinc_dic[key]]=0
chunk_dic[zinc_dic[key]] = chunk_dic[zinc_dic[key]] + 1
lig_dict = {}
for chunk in chunk_list:
print chunk, chunk_dic[chunk]
gz_file = docking_dir+chunk+"/test.mol2.gz"
lig_dict = process_gz(gz_file, lig_dict, chunk, zinc_dic)
write_out_poses(lig_dict, docking_dir)
main()
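The header scanning that `process_gz` performs can be exercised without a real gzip file; a sketch on an in-memory fragment (the ZINC IDs and energies are made up, and real test.mol2.gz headers carry more fields):

```python
import io

def index_poses(stream):
    """Collect (line_number, name, total_energy) triples the same way
    process_gz scans "Name:" and "Total Energy" header lines."""
    poses, current = [], []
    for line_no, line in enumerate(stream):
        tokens = line.split()
        if len(tokens) > 1:
            if tokens[1] == "Name:":
                current = [line_no, tokens[2]]
            elif tokens[1] == "Total" and current:
                current.append(float(tokens[3]))
                poses.append(tuple(current))
                current = []
    return poses

mol2 = io.StringIO(
    "##########                 Name: ZINC000001\n"
    "##########       Total Energy: -42.17\n"
    "@<TRIPOS>MOLECULE\n"
    "##########                 Name: ZINC000002\n"
    "##########       Total Energy: -17.80\n"
)
print(index_poses(mol2))  # [(0, 'ZINC000001', -42.17), (3, 'ZINC000002', -17.8)]
```

Keeping the line number of each "Name:" header is what later lets `write_out_poses` pull only the best-scoring pose of each ligand back out of the big mol2 file.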
| [
"tbalius@gimel.cluster.ucsf.bkslab.org"
] | tbalius@gimel.cluster.ucsf.bkslab.org |
767687887e67bf1d760a55cc1e95fe314e57b094 | ae7ba9c83692cfcb39e95483d84610715930fe9e | /bmw9t/nltk/ch_four/15.py | b4241dc542b556abf74ca5ecc1f5e328b4674211 | [] | no_license | xenron/sandbox-github-clone | 364721769ea0784fb82827b07196eaa32190126b | 5eccdd8631f8bad78eb88bb89144972dbabc109c | refs/heads/master | 2022-05-01T21:18:43.101664 | 2016-09-12T12:38:32 | 2016-09-12T12:38:32 | 65,951,766 | 5 | 7 | null | null | null | null | UTF-8 | Python | false | false | 692 | py | # ◑ Write a program that takes a sentence expressed as a single string, splits it and counts up the words. Get it to print out each word and the word's frequency, one per line, in alphabetical order.
from nltk import *
def program(sent):
"""answers the question."""
# splits a sentence
words = sent.split(' ')
# gets the length of the sentence and the frequencies with which they appear.
length = len(words)
fd = FreqDist(words)
# stores the frequencies and sorts them alphabetically
frequencies = sorted(fd.most_common(length))
#prints them out one per line.
for frequency in frequencies:
print(frequency)
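For comparison, the same counting can be done without nltk, using only the standard library:

```python
from collections import Counter

def word_frequencies(sent):
    """Alphabetically sorted (word, count) pairs via collections.Counter."""
    return sorted(Counter(sent.split(' ')).items())

for pair in word_frequencies('this is my sentence yo what up up'):
    print(pair)  # ('is', 1) is printed first, ('yo', 1) last
```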
sentence = 'this is my sentence yo what up up'
program(sentence) | [
"xenron@outlook.com"
] | xenron@outlook.com |
a71afb8ba73ddffb652d248309425b73e03defb7 | e98a1e360e947a0f91edc3cb603d915a3630cfbc | /stack_medium/0113_verify_preorder_serialization_of_a_binary_tree.py | bfacc96b15fa81dbd938e4770b24a1533f6619ba | [] | no_license | myungwooko/algorithm | 3a6a05cf7efa469aa911fe04871ef368ab98bb65 | 673e51199a2d07198894a283479d459bef0272c5 | refs/heads/master | 2021-07-04T01:17:41.787653 | 2020-12-25T00:59:33 | 2020-12-25T00:59:33 | 213,865,632 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,842 | py | """
331. Verify Preorder Serialization of a Binary Tree
Medium
One way to serialize a binary tree is to use pre-order traversal. When we encounter a non-null node, we record the node's value. If it is a null node, we record using a sentinel value such as #.
_9_
/ \
3 2
/ \ / \
4 1 # 6
/ \ / \ / \
# # # # # #
For example, the above binary tree can be serialized to the string "9,3,4,#,#,1,#,#,2,#,6,#,#", where # represents a null node.
Given a string of comma separated values, verify whether it is a correct preorder traversal serialization of a binary tree. Find an algorithm without reconstructing the tree.
Each comma separated value in the string must be either an integer or a character '#' representing null pointer.
You may assume that the input format is always valid, for example it could never contain two consecutive commas such as "1,,3".
Example 1:
Input: "9,3,4,#,#,1,#,#,2,#,6,#,#"
Output: true
Example 2:
Input: "1,#"
Output: false
Example 3:
Input: "9,#,#,1"
Output: false
- The sequence must end entirely with '#' (null) markers.
"""
class Solution(object):
    # every valid subtree must eventually close with '#' (null) markers
def isValidSerialization(self, preorder: str) -> bool:
stack = []
for c in preorder.split(','):
stack.append(c)
print(stack, stack[:-2])
while stack[-2:] == ['#', '#']:
stack.pop()
stack.pop()
if not stack: return False
                # pop the parent value (it may itself be '#'; either way its subtree is done)
                stack.pop()
                # collapse the completed subtree to a single '#' so the parent can close too
                stack.append('#')
print(stack)
return stack == ['#']
preorder = "9,3,4,#,#,1,#,#,2,#,6,#,#"
s = Solution()
test = s.isValidSerialization(preorder)
print(test)
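The same check can also be done in O(1) extra space by counting available "slots" instead of keeping a stack — a sketch of that alternative (not part of the original file): each token consumes one slot, and each non-null node opens two slots for its children.

```python
def is_valid_serialization_slots(preorder: str) -> bool:
    # Start with one slot for the root.
    slots = 1
    for token in preorder.split(','):
        if slots == 0:
            # No place left for this token: the tree closed too early.
            return False
        # Every token (value or '#') fills one slot ...
        slots -= 1
        # ... and every non-null node opens two slots for its children.
        if token != '#':
            slots += 2
    # A valid serialization fills every slot exactly.
    return slots == 0

print(is_valid_serialization_slots("9,3,4,#,#,1,#,#,2,#,6,#,#"))  # True
print(is_valid_serialization_slots("1,#"))                         # False
```

This mirrors the stack solution above: pushing `'#'` after collapsing a subtree corresponds to returning a slot to the parent.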
| [
"myungwoo.ko@gmail.com"
] | myungwoo.ko@gmail.com |
730c29e0650bb728a7b7a31f71b116856885c5c5 | 0b491c106daeafe21e0e4e57ea3a7fd25072c902 | /pyspedas/examples/basic/ex_analysis.py | 7bf8a422548ad0d9ca497f6d618dbf9d1db4abb9 | [
"MIT"
] | permissive | jibarnum/pyspedas | 491f7e7bac2485f52a8c2b0b841d85e4a1ce41ff | a956911ed8f2a11e30c527c92d2bda1342bea8e3 | refs/heads/master | 2020-06-18T21:43:06.885769 | 2019-10-08T17:19:43 | 2019-10-08T17:19:43 | 196,460,801 | 0 | 0 | MIT | 2019-07-11T20:26:58 | 2019-07-11T20:26:58 | null | UTF-8 | Python | false | false | 706 | py | # -*- coding: utf-8 -*-
"""
File:
ex_analysis.py
Description:
Basic example using analysis functions.
Downloads THEMIS data and plots it.
"""
import pyspedas
import pytplot
def ex_analysis():
# Print the installed version of pyspedas
pyspedas.version()
# Delete any existing pytplot variables
pytplot.del_data()
# Download THEMIS state data for 2015-12-31
pyspedas.load_data('themis', '2015-12-31', ['tha'], 'state', 'l1')
# Use some analysis functions on tplot variables
pyspedas.subtract_average('tha_pos')
pyspedas.subtract_median('tha_pos')
# Plot
pytplot.tplot(["tha_pos", "tha_pos-d", "tha_pos-m"])
# Run the example code
# ex_analysis()
| [
"egrimes@igpp.ucla.edu"
] | egrimes@igpp.ucla.edu |
327521fba8a42d166df3e832b2af503df40dc25f | 6c92b2faa4d8c328ab855429843f08f4f220a75a | /collective/azindexpage/testing.py | e60cb8f4f6d588913286868a4d2cc593cc7bfaa1 | [] | no_license | collective/collective.azindexpage | d6c5644d95889e806582b003dd43149dc6110fb4 | 2e04bd8c018acf94488deee3bb3d35355ca392a8 | refs/heads/master | 2023-08-12T04:11:54.703896 | 2014-12-11T10:33:50 | 2014-12-11T10:33:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 508 | py | from plone.app.testing import *
import collective.azindexpage
FIXTURE = PloneWithPackageLayer(
zcml_filename="configure.zcml",
zcml_package=collective.azindexpage,
additional_z2_products=[],
gs_profile_id='collective.azindexpage:default',
name="collective.azindexpage:FIXTURE"
)
INTEGRATION = IntegrationTesting(
bases=(FIXTURE,),
name="collective.azindexpage:Integration"
)
FUNCTIONAL = FunctionalTesting(
bases=(FIXTURE,),
name="collective.azindexpage:Functional"
)
| [
"toutpt@gmail.com"
] | toutpt@gmail.com |
d0f4aa8a11338fea334ca1061586eee7e025352f | e82b761f53d6a3ae023ee65a219eea38e66946a0 | /All_In_One/addons/io_scene_osi/__init__.py | 39b9e606dd18de1318ae77941cac5cff3c0055e8 | [] | no_license | 2434325680/Learnbgame | f3a050c28df588cbb3b14e1067a58221252e2e40 | 7b796d30dfd22b7706a93e4419ed913d18d29a44 | refs/heads/master | 2023-08-22T23:59:55.711050 | 2021-10-17T07:26:07 | 2021-10-17T07:26:07 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,486 | py | bl_info = {
"name": "Super Mario Odyssey Stage Importer",
"description": "Import Super Mario Odyssey Stages",
"author": "Philippus229, bymlv2 (v1.0.3) by leoetlino, SARCExtract (v0.5) by aboood40091, Sorted Containers (v2.0.4), PyYAML (v3.10)",
"version": (0, 4, 0),
"blender": (2, 79, 0),
"location": "File > Import-Export",
"warning": "This add-on is under development.",
"wiki_url": "https://github.com/Philippus229/io_scene_osi/wiki",
"tracker_url": "https://github.com/Philippus229/io_scene_osi/issues",
"category": "Learnbgame",
}
# Reload the package modules when reloading add-ons in Blender with F8.
if "bpy" in locals():
import importlib
if "addon" in locals():
importlib.reload(addon)
if "importing" in locals():
importlib.reload(importing)
if "byml" in locals():
importlib.reload(byml)
if "sortedcontainers" in locals():
importlib.reload(sortedcontainers)
import bpy
from . import addon
from . import importing
from . import byml
from . import sortedcontainers
def register():
bpy.utils.register_module(__name__)
# Addon
bpy.types.UILayout.osi_colbox = addon.osi_colbox
# Importing
bpy.types.INFO_MT_file_import.append(importing.ImportOperator.menu_func)
def unregister():
bpy.utils.unregister_module(__name__)
# Addon
del bpy.types.UILayout.osi_colbox
# Importing
bpy.types.INFO_MT_file_import.remove(importing.ImportOperator.menu_func)
| [
"root@localhost.localdomain"
] | root@localhost.localdomain |
2ec9aef834da051f7fa5fcf27a06b2c59b2a3aef | 8c186a62d1f60099d0677e2c1233af31f1dca19a | /client/watchman_subscriber.py | a2737ee79af7a08744080468bfecb0cdb2dd1323 | [
"MIT"
] | permissive | darrynza/pyre-check | 3f172625d9b2484190cc0b67805fabb1b8ba00ff | cb94f27b3db824446abf21bbb19d0cef516841ec | refs/heads/master | 2020-04-29T04:19:23.518722 | 2019-03-15T15:00:38 | 2019-03-15T15:03:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,009 | py | # Copyright (c) 2019-present, Facebook, Inc.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# pyre-strict
import functools
import logging
import os
import signal
import sys
from typing import Any, Dict, List, NamedTuple
from .filesystem import AnalysisDirectory, acquire_lock, remove_if_exists
LOG = logging.getLogger(__name__) # type: logging.Logger
Subscription = NamedTuple(
"Subscription", [("root", str), ("name", str), ("subscription", Dict[str, Any])]
)
class WatchmanSubscriber(object):
def __init__(self, analysis_directory: AnalysisDirectory) -> None:
self._base_path = os.path.join(
analysis_directory.get_root(), ".pyre", self._name
) # type: str
self._alive = True # type: bool
@property
def _name(self) -> str:
"""
A name to identify the subscriber. Used as the directory and file names
for the log, lock, and pid files.
"""
raise NotImplementedError
@property
def _subscriptions(self) -> List[Subscription]:
"""
List of subscriptions
"""
raise NotImplementedError
def _handle_response(self, response: Dict[str, Any]) -> None:
"""
Callback invoked when a message is received from watchman
"""
raise NotImplementedError
@property
@functools.lru_cache(1)
def _watchman_client(self) -> "pywatchman.client": # noqa
try:
import pywatchman # noqa
return pywatchman.client(timeout=3600.0)
except ImportError as exception:
LOG.info("Not starting %s due to %s", self._name, str(exception))
sys.exit(1)
def _subscribe_to_watchman(self, subscription: Subscription) -> None:
self._watchman_client.query(
"subscribe", subscription.root, subscription.name, subscription.subscription
)
def _run(self) -> None:
try:
os.makedirs(self._base_path)
except OSError:
pass
lock_path = os.path.join(self._base_path, "{}.lock".format(self._name))
pid_path = os.path.join(self._base_path, "{}.pid".format(self._name))
def cleanup() -> None:
LOG.info("Cleaning up lock and pid files before exiting.")
remove_if_exists(pid_path)
remove_if_exists(lock_path)
# pyre-ignore: missing annotations on underscored parameters, fixed on master
def interrupt_handler(_signal_number=None, _frame=None) -> None:
LOG.info("Interrupt signal received.")
cleanup()
sys.exit(0)
signal.signal(signal.SIGINT, interrupt_handler)
# Die silently if unable to acquire the lock.
with acquire_lock(lock_path, blocking=False):
file_handler = logging.FileHandler(
os.path.join(self._base_path, "%s.log" % self._name)
)
file_handler.setFormatter(
logging.Formatter("%(asctime)s %(levelname)s %(message)s")
)
LOG.addHandler(file_handler)
with open(pid_path, "w+") as pid_file:
pid_file.write(str(os.getpid()))
for subscription in self._subscriptions:
self._subscribe_to_watchman(subscription)
connection = self._watchman_client.recvConn
if not connection:
LOG.error("Connection to Watchman for %s not found", self._name)
sys.exit(1)
while self._alive:
# This call is blocking, which prevents this loop from burning CPU.
response = connection.receive()
try:
if response["is_fresh_instance"]:
LOG.info(
"Ignoring initial watchman message for %s", response["root"]
)
else:
self._handle_response(response)
except KeyError:
pass
cleanup()
def daemonize(self) -> None:
"""We double-fork here to detach the daemon process from the parent.
If we were to just fork the child as a daemon, we'd have to worry about the
parent process exiting zombifying the daemon."""
if os.fork() == 0:
pid = os.fork()
if pid == 0:
try:
# Closing the sys.stdout and stderr file descriptors here causes
# the program to crash when attempting to log.
os.close(sys.stdout.fileno())
os.close(sys.stderr.fileno())
self._run()
sys.exit(0)
except Exception as exception:
LOG.info("Not running %s due to %s", self._name, str(exception))
sys.exit(1)
else:
sys.exit(0)
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
4ac826e55018348c35f3b8ae04750caedaea7eda | 3fba33f91e1f50077dc2cce663b7de0f70a17a51 | /wlhub/users/admin.py | c7ecddd088d8bedf93021a3b00d05411aba80b0f | [] | no_license | azinit/wlhub | 59be2e9f555fa6655965d13580fd05963dc414b6 | 616761ef39f4cdb82d032f737bf50c66a9e935d1 | refs/heads/master | 2022-12-22T12:26:33.907642 | 2020-09-13T21:45:33 | 2020-09-13T21:45:33 | 295,242,617 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 897 | py | from django.contrib import admin
from django.contrib.auth import get_user_model
from core.mixins import ListLinksMixin
from .models import UserSurvey
@admin.register(get_user_model())
class UserAdmin(ListLinksMixin, admin.ModelAdmin):
fields = (
'is_superuser',
'first_name',
'last_name',
'email',
'username',
'password',
"thumb",
)
list_display = (
'username',
'first_name',
'last_name',
'email'
)
@admin.register(UserSurvey)
class UserSurveyAdmin(admin.ModelAdmin):
list_display = (
"__str__",
"rate",
"is_student",
"is_employee",
"is_employer",
"is_manager",
"is_freelancer",
)
list_filter = (
"is_student",
"is_employee",
"is_employer",
"is_manager",
"is_freelancer",
)
| [
"martis.azin@gmail.com"
] | martis.azin@gmail.com |
fd0f9258d690798c7c0594cd8a7cfa3ae6b4ee15 | 31e8b777b8b6da1ef8d172d2c7b5271a892e7dc9 | /frappe/desk/doctype/list_filter/list_filter.py | d2b01d301e11a116ba0b6ac48913556a110227d7 | [
"MIT"
] | permissive | Anurag810/frappe | a4d2f6f3a14cc600cced7146a02303cd1cb347f0 | 620cad18d60f090f5f9c13a5eefb56e86615de06 | refs/heads/develop | 2021-09-28T03:57:02.456172 | 2021-09-07T06:05:46 | 2021-09-07T06:05:46 | 157,325,015 | 5 | 0 | MIT | 2019-09-11T09:20:20 | 2018-11-13T05:25:01 | Python | UTF-8 | Python | false | false | 210 | py | # -*- coding: utf-8 -*-
# Copyright (c) 2018, Frappe Technologies and contributors
# License: MIT. See LICENSE
import frappe, json
from frappe.model.document import Document
class ListFilter(Document):
pass
| [
"rmehta@gmail.com"
] | rmehta@gmail.com |
438efbe8fed719db08deba9e3c76ce17b6fd093e | ee4db47ccecd23559b3b6f3fce1822c9e5982a56 | /Machine Learning/NaiveBSklearn.py | a217b9b46706d1e953309d5bcdbb082786eea1c2 | [] | no_license | meoclark/Data-Science-DropBox | d51e5da75569626affc89fdcca1975bed15422fd | 5f365cedc8d0a780abeb4e595cd0d90113a75d9d | refs/heads/master | 2022-10-30T08:43:22.502408 | 2020-06-16T19:45:05 | 2020-06-16T19:45:05 | 265,558,242 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 479 | py | from reviews import counter, training_counts
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
review = "This crib was perfect great excellent amazing great"
review_counts = counter.transform([review])
classifier = MultinomialNB()
training_labels = [0] * 1000 + [1] * 1000
classifier.fit(training_counts,training_labels)
pred = classifier.predict(review_counts)
print(pred)
print(classifier.predict_proba(review_counts)) | [
"oluchukwuegbo@gmail.com"
] | oluchukwuegbo@gmail.com |
59795eb9686bbaede2bedb7720af166564990330 | ca8627ac06c984aeb8ecd2e51c7a0493c794e3e4 | /azure-mgmt-cdn/azure/mgmt/cdn/models/sku.py | dd68ff0c3fa89c1cb00846dffac40e5d25557979 | [
"MIT"
] | permissive | matthchr/azure-sdk-for-python | ac7208b4403dc4e1348b48a1be9542081a807e40 | 8c0dc461a406e7e2142a655077903216be6d8b16 | refs/heads/master | 2021-01-11T14:16:23.020229 | 2017-03-31T20:39:15 | 2017-03-31T20:39:15 | 81,271,912 | 1 | 1 | null | 2017-02-08T01:07:09 | 2017-02-08T01:07:09 | null | UTF-8 | Python | false | false | 1,023 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class Sku(Model):
"""The pricing tier (defines a CDN provider, feature list and rate) of the CDN
profile.
:param name: Name of the pricing tier. Possible values include:
'Standard_Verizon', 'Premium_Verizon', 'Custom_Verizon',
'Standard_Akamai', 'Standard_ChinaCdn'
:type name: str or :class:`SkuName <azure.mgmt.cdn.models.SkuName>`
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
}
def __init__(self, name=None):
self.name = name
| [
"lmazuel@microsoft.com"
] | lmazuel@microsoft.com |
a0fb0edae27fc0c39cec9d69877252bc45c2f27e | bc441bb06b8948288f110af63feda4e798f30225 | /alert_service_sdk/model/tuna_service/requirement_instance_pb2.py | e96b38d97aac1d8eb40e480e7762e79325189dbb | [
"Apache-2.0"
] | permissive | easyopsapis/easyops-api-python | 23204f8846a332c30f5f3ff627bf220940137b6b | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | refs/heads/master | 2020-06-26T23:38:27.308803 | 2020-06-16T07:25:41 | 2020-06-16T07:25:41 | 199,773,131 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | true | 8,098 | py | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: requirement_instance.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from alert_service_sdk.model.topboard import issue_pb2 as alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='requirement_instance.proto',
package='tuna_service',
syntax='proto3',
serialized_options=_b('ZFgo.easyops.local/contracts/protorepo-models/easyops/model/tuna_service'),
serialized_pb=_b('\n\x1arequirement_instance.proto\x12\x0ctuna_service\x1a,alert_service_sdk/model/topboard/issue.proto\x1a\x1cgoogle/protobuf/struct.proto\"\x99\x02\n\x13RequirementInstance\x12\x12\n\ninstanceId\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x10\n\x08sequence\x18\x03 \x01(\t\x12\r\n\x05given\x18\x04 \x01(\t\x12\x0c\n\x04when\x18\x05 \x01(\t\x12\x0c\n\x04then\x18\x06 \x01(\t\x12\x0c\n\x04type\x18\x07 \x01(\t\x12\x17\n\x0f\x64\x61taDescription\x18\x08 \x01(\t\x12\x0c\n\x04\x64\x61ta\x18\t \x01(\t\x12\x0b\n\x03tag\x18\n \x01(\t\x12\x15\n\rinterfaceName\x18\x0b \x01(\t\x12*\n\tcontracts\x18\x0c \x03(\x0b\x32\x17.google.protobuf.Struct\x12\x1e\n\x05ISSUE\x18\r \x03(\x0b\x32\x0f.topboard.IssueBHZFgo.easyops.local/contracts/protorepo-models/easyops/model/tuna_serviceb\x06proto3')
,
dependencies=[alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,])
_REQUIREMENTINSTANCE = _descriptor.Descriptor(
name='RequirementInstance',
full_name='tuna_service.RequirementInstance',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='instanceId', full_name='tuna_service.RequirementInstance.instanceId', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='tuna_service.RequirementInstance.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sequence', full_name='tuna_service.RequirementInstance.sequence', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='given', full_name='tuna_service.RequirementInstance.given', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='when', full_name='tuna_service.RequirementInstance.when', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='then', full_name='tuna_service.RequirementInstance.then', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='tuna_service.RequirementInstance.type', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='dataDescription', full_name='tuna_service.RequirementInstance.dataDescription', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='data', full_name='tuna_service.RequirementInstance.data', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tag', full_name='tuna_service.RequirementInstance.tag', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='interfaceName', full_name='tuna_service.RequirementInstance.interfaceName', index=10,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='contracts', full_name='tuna_service.RequirementInstance.contracts', index=11,
number=12, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='ISSUE', full_name='tuna_service.RequirementInstance.ISSUE', index=12,
number=13, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=121,
serialized_end=402,
)
_REQUIREMENTINSTANCE.fields_by_name['contracts'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT
_REQUIREMENTINSTANCE.fields_by_name['ISSUE'].message_type = alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2._ISSUE
DESCRIPTOR.message_types_by_name['RequirementInstance'] = _REQUIREMENTINSTANCE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
RequirementInstance = _reflection.GeneratedProtocolMessageType('RequirementInstance', (_message.Message,), {
'DESCRIPTOR' : _REQUIREMENTINSTANCE,
'__module__' : 'requirement_instance_pb2'
# @@protoc_insertion_point(class_scope:tuna_service.RequirementInstance)
})
_sym_db.RegisterMessage(RequirementInstance)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| [
"service@easyops.cn"
] | service@easyops.cn |
ec872a5755917412c16200e8f53d7cfc56006833 | 1003740abd789902bff88bd86d573252f4fe9d23 | /eventex/core/admin.py | c3eaa933d5c4420cceed77ae4007c7305ebcb480 | [] | no_license | hpfn/wttd-2017 | e73ca9c65fc6dcee78045c7edee0fc42768fbfb7 | c9b284bbd644dcc543a8fd9a11254548441a31bd | refs/heads/master | 2023-08-28T16:12:19.191071 | 2019-01-04T18:17:31 | 2019-01-13T12:42:15 | 91,342,897 | 0 | 1 | null | 2023-09-06T21:55:07 | 2017-05-15T13:46:36 | Python | UTF-8 | Python | false | false | 1,366 | py | from django.contrib import admin
from eventex.core.models import Speaker, Contact, Talk, Course
class ContactInLine(admin.TabularInline):
model = Contact
extra = 1
class SpeakerModelAdmin(admin.ModelAdmin):
inlines = [ContactInLine]
prepopulated_fields = {'slug': ('name',)}
list_display = ['name', 'photo_img', 'website_link',
'email', 'phone']
def website_link(self, obj):
return '<a href="{0}">{0}</a>'.format(obj.website)
website_link.allow_tags = True
website_link.short_description = 'website'
def photo_img(self, obj):
return '<img width="32px" src={} />'.format(obj.photo)
photo_img.allow_tags = True
photo_img.short_description = 'foto'
def email(self, obj):
# return Contact.emails.filter(speaker=obj).first()
return obj.contact_set.emails().first()
email.short_description = 'e-mail'
def phone(self, obj):
# return Contact.phones.filter(speaker=obj).first()
return obj.contact_set.phones().first()
phone.short_description = 'telefone'
class TalkModelAdmin(admin.ModelAdmin):
def get_queryset(self, request):
qs = super().get_queryset(request)
return qs.filter(course=None)
admin.site.register(Speaker, SpeakerModelAdmin)
admin.site.register(Talk, TalkModelAdmin)
admin.site.register(Course)
| [
"hpfn@debian.org"
] | hpfn@debian.org |
ca7fe4e826cfb4c5139c30a91811719f01d3ccd7 | ffef4697f09fb321a04f2b3aad98b688f4669fb5 | /tests/ut/python/pipeline/parse/test_create_obj.py | a702f37e0bfbf65e1248e54fe478471796d1b85d | [
"Apache-2.0",
"AGPL-3.0-only",
"BSD-3-Clause-Open-MPI",
"MPL-1.1",
"LicenseRef-scancode-proprietary-license",
"LicenseRef-scancode-unknown-license-reference",
"Unlicense",
"MPL-2.0",
"LGPL-2.1-only",
"GPL-2.0-only",
"Libpng",
"BSL-1.0",
"MIT",
"MPL-2.0-no-copyleft-exception",
"IJG",
"Z... | permissive | Ewenwan/mindspore | 02a0f1fd660fa5fec819024f6feffe300af38c9c | 4575fc3ae8e967252d679542719b66e49eaee42b | refs/heads/master | 2021-05-19T03:38:27.923178 | 2020-03-31T05:49:10 | 2020-03-31T05:49:10 | 251,512,047 | 1 | 0 | Apache-2.0 | 2020-03-31T05:48:21 | 2020-03-31T05:48:20 | null | UTF-8 | Python | false | false | 3,876 | py | # Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
@File : test_create_obj.py
@Author:
@Date : 2019-06-26
@Desc : test create object instance on parse function, eg: 'construct'
Support class : nn.Cell ops.Primitive
Support parameter: type is define on function 'ValuePtrToPyData'
(int,float,string,bool,tensor)
"""
import logging
import numpy as np
import mindspore.nn as nn
from mindspore.ops import operations as P
from mindspore.common.api import ms_function
from mindspore.common.tensor import Tensor
from ...ut_filter import non_graph_engine
log = logging.getLogger("test")
log.setLevel(level=logging.ERROR)
class Net(nn.Cell):
""" Net definition """
def __init__(self):
super(Net, self).__init__()
self.softmax = nn.Softmax(0)
self.axis = 0
def construct(self, x):
x = nn.Softmax(self.axis)(x)
return x
# Test: creat CELL OR Primitive instance on construct
@non_graph_engine
def test_create_cell_object_on_construct():
""" test_create_cell_object_on_construct """
log.debug("begin test_create_object_on_construct")
np1 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input_me = Tensor(np1)
net = Net()
output = net(input_me)
out_me1 = output.asnumpy()
print(np1)
print(out_me1)
log.debug("finished test_create_object_on_construct")
# Test: creat CELL OR Primitive instance on construct
class Net1(nn.Cell):
""" Net1 definition """
def __init__(self):
super(Net1, self).__init__()
self.add = P.TensorAdd()
@ms_function
def construct(self, x, y):
add = P.TensorAdd()
result = add(x, y)
return result
@non_graph_engine
def test_create_primitive_object_on_construct():
""" test_create_primitive_object_on_construct """
log.debug("begin test_create_object_on_construct")
x = Tensor(np.array([[1, 2, 3], [1, 2, 3]], np.float32))
y = Tensor(np.array([[2, 3, 4], [1, 1, 2]], np.float32))
net = Net1()
net.construct(x, y)
log.debug("finished test_create_object_on_construct")
# Test: creat CELL OR Primitive instance on construct use many parameter
class NetM(nn.Cell):
""" NetM definition """
def __init__(self, name, axis):
super(NetM, self).__init__()
# self.relu = nn.ReLU()
self.name = name
self.axis = axis
self.softmax = nn.Softmax(self.axis)
def construct(self, x):
x = self.softmax(x)
return x
class NetC(nn.Cell):
""" NetC definition """
def __init__(self, tensor):
super(NetC, self).__init__()
self.tensor = tensor
def construct(self, x):
x = NetM("test", 1)(x)
return x
# Test: creat CELL OR Primitive instance on construct
@non_graph_engine
def test_create_cell_object_on_construct_use_many_parameter():
""" test_create_cell_object_on_construct_use_many_parameter """
log.debug("begin test_create_object_on_construct")
np1 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input_me = Tensor(np1)
net = NetC(input_me)
output = net(input_me)
out_me1 = output.asnumpy()
print(np1)
print(out_me1)
log.debug("finished test_create_object_on_construct")
| [
"leon.wanghui@huawei.com"
] | leon.wanghui@huawei.com |
7dbb78d80fe115de113c038eef35d2eb2e41c5e9 | 1e1f7d3687b71e69efa958d5bbda2573178f2acd | /accounts/doctype/tds_control/tds_control.py | 69588d88ceb891f49dca1843969485b1b0c98525 | [] | no_license | ravidey/erpnext | 680a31e2a6b957fd3f3ddc5fd6b383d8ea50f515 | bb4b9bfa1551226a1d58fcef0cfe8150c423f49d | refs/heads/master | 2021-01-17T22:07:36.049581 | 2011-06-10T07:32:01 | 2011-06-10T07:32:01 | 1,869,316 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,401 | py | # Please edit this list and import only required elements
import webnotes
from webnotes.utils import add_days, add_months, add_years, cint, cstr, date_diff, default_fields, flt, fmt_money, formatdate, generate_hash, getTraceback, get_defaults, get_first_day, get_last_day, getdate, has_common, month_name, now, nowdate, replace_newlines, sendmail, set_default, str_esc_quote, user_format, validate_email_add
from webnotes.model import db_exists
from webnotes.model.doc import Document, addchild, removechild, getchildren, make_autoname, SuperDocType
from webnotes.model.doclist import getlist, copy_doclist
from webnotes.model.code import get_obj, get_server_obj, run_server_obj, updatedb, check_syntax
from webnotes import session, form, is_testing, msgprint, errprint
set = webnotes.conn.set
sql = webnotes.conn.sql
get_value = webnotes.conn.get_value
in_transaction = webnotes.conn.in_transaction
convert_to_lists = webnotes.conn.convert_to_lists
# -----------------------------------------------------------------------------------------
class DocType:
def __init__(self, doc, doclist=[]):
self.doc = doc
self.doclist = doclist
# ============TDS==================
# Stop payable voucher on which tds is applicable is made before posting date of the
# voucher in which tds was applicable for 1st time
def validate_first_entry(self,obj):
if obj.doc.doctype == 'Payable Voucher':
supp_acc = obj.doc.credit_to
elif obj.doc.doctype == 'Journal Voucher':
supp_acc = obj.doc.supplier_account
if obj.doc.ded_amount:
# first pv
first_pv = sql("select posting_date from `tabPayable Voucher` where credit_to = '%s' and docstatus = 1 and tds_category = '%s' and fiscal_year = '%s' and tds_applicable = 'Yes' and (ded_amount != 0 or ded_amount is not null) order by posting_date asc limit 1"%(supp_acc, obj.doc.tds_category, obj.doc.fiscal_year))
first_pv_date = first_pv and first_pv[0][0] or ''
# first jv
first_jv = sql("select posting_date from `tabJournal Voucher` where supplier_account = '%s'and docstatus = 1 and tds_category = '%s' and fiscal_year = '%s' and tds_applicable = 'Yes' and (ded_amount != 0 or ded_amount is not null) order by posting_date asc limit 1"%(supp_acc, obj.doc.tds_category, obj.doc.fiscal_year))
first_jv_date = first_jv and first_jv[0][0] or ''
#first tds voucher date
first_tds_date = ''
if first_pv_date and first_jv_date:
first_tds_date = first_pv_date < first_jv_date and first_pv_date or first_jv_date
elif first_pv_date:
first_tds_date = first_pv_date
elif first_jv_date:
first_tds_date = first_jv_date
if first_tds_date and getdate(obj.doc.posting_date) < first_tds_date:
				msgprint("The first TDS voucher for this category has already been made. A payable voucher cannot be created before the posting date of that first TDS voucher.")
raise Exception
# TDS function definition
#---------------------------
def get_tds_amount(self, obj):
# Validate if posting date b4 first tds entry for this category
self.validate_first_entry(obj)
# get current amount and supplier head
if obj.doc.doctype == 'Payable Voucher':
supplier_account = obj.doc.credit_to
total_amount=flt(obj.doc.grand_total)
for d in getlist(obj.doclist,'advance_allocation_details'):
if flt(d.tds_amount)!=0:
total_amount -= flt(d.allocated_amount)
elif obj.doc.doctype == 'Journal Voucher':
supplier_account = obj.doc.supplier_account
total_amount = obj.doc.total_debit
if obj.doc.tds_category:
# get total billed
total_billed = 0
pv = sql("select sum(ifnull(grand_total,0)), sum(ifnull(ded_amount,0)) from `tabPayable Voucher` where tds_category = %s and credit_to = %s and fiscal_year = %s and docstatus = 1 and name != %s and is_opening != 'Yes'", (obj.doc.tds_category, supplier_account, obj.doc.fiscal_year, obj.doc.name))
jv = sql("select sum(ifnull(total_debit,0)), sum(ifnull(ded_amount,0)) from `tabJournal Voucher` where tds_category = %s and supplier_account = %s and fiscal_year = %s and docstatus = 1 and name != %s and is_opening != 'Yes'", (obj.doc.tds_category, supplier_account, obj.doc.fiscal_year, obj.doc.name))
tds_in_pv = pv and pv[0][1] or 0
tds_in_jv = jv and jv[0][1] or 0
total_billed += flt(pv and pv[0][0] or 0)+flt(jv and jv[0][0] or 0)+flt(total_amount)
# get slab
slab = sql("SELECT * FROM `tabTDS Rate Detail` t1, `tabTDS Rate Chart` t2 WHERE t1.category = '%s' AND t1.parent=t2.name and t2.applicable_from <= '%s' ORDER BY t2.applicable_from DESC LIMIT 1" % (obj.doc.tds_category, obj.doc.posting_date), as_dict = 1)
if slab and flt(slab[0]['slab_from']) <= total_billed:
if flt(tds_in_pv) <= 0 and flt(tds_in_jv) <= 0:
total_amount = total_billed
slab = slab[0]
# special tds rate
special_tds = sql("select special_tds_rate, special_tds_limit, special_tds_rate_applicable from `tabTDS Detail` where parent = '%s' and tds_category = '%s'"% (supplier_account,obj.doc.tds_category))
# get_pan_number
pan_no = sql("select pan_number from `tabAccount` where name = '%s'" % supplier_account)
pan_no = pan_no and cstr(pan_no[0][0]) or ''
if not pan_no and flt(slab.get('rate_without_pan')):
msgprint("As there is no PAN number mentioned in the account head: %s, TDS amount will be calculated at rate %s%%" % (supplier_account, cstr(slab['rate_without_pan'])))
tds_rate = flt(slab.get('rate_without_pan'))
elif special_tds and special_tds[0][2]=='Yes' and (flt(special_tds[0][1])==0 or flt(special_tds[0][1]) >= flt(total_amount)):
tds_rate = flt(special_tds[0][0])
else:
tds_rate=flt(slab['rate'])
# calculate tds amount
if flt(slab['rate']):
ac = sql("SELECT account_head FROM `tabTDS Category Account` where parent=%s and company=%s", (obj.doc.tds_category,obj.doc.company))
if ac:
obj.doc.tax_code = ac[0][0]
obj.doc.rate = tds_rate
obj.doc.ded_amount = round(flt(tds_rate) * flt(total_amount) / 100)
else:
msgprint("TDS Account not selected in TDS Category %s" % (obj.doc.tds_category))
raise Exception
| [
"pdvyas@erpnext.com"
] | pdvyas@erpnext.com |
f3cbd8df8e6fd91388f44a5728142de7a9fa8cbd | 2652fd6261631794535589427a384693365a585e | /trunk/workspace/Squish/src/TestScript/UI/suite_UI_51/tst_UI_51_Router_1841_NVRAM_Save_Erase/test.py | 378fcb31dd1415d8743e1473b7159fa68a9cef1d | [] | no_license | ptqatester1/ptqa | 88c652380167f64a953bfd7a65041e7d8ac48c90 | 5b5997ea459e9aac17db8da2041e2af331927104 | refs/heads/master | 2021-01-21T19:06:49.275364 | 2017-06-19T03:15:00 | 2017-06-19T03:15:00 | 92,115,462 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,253 | py | from API.ComponentBox import ComponentBoxConst
from API.Device.Router.Router import Router
from API.Utility.Util import Util
#Function initialization
util = Util()
#Device initialization
router0 = Router(ComponentBoxConst.DeviceModel.ROUTER_1841, 100, 200, "Router0")
def main():
util.init()
createTopology()
util.clickOnSimulation()
util.clickOnRealtime()
checkPoint1()
checkPoint2()
def createTopology():
router0.create()
def checkPoint1():
router0.select()
router0.clickConfigTab()
router0.config.settings.saveButton()
router0.clickCliTab()
router0.cli.textCheckPoint("Router#copy running-config startup-config")
router0.cli.textCheckPoint("Destination filename \[startup-config\]?")
router0.cli.textCheckPoint("Building configuration...")
router0.cli.textCheckPoint("\[OK\]")
def checkPoint2():
router0.clickConfigTab()
router0.config.settings.eraseButton()
router0.config.settings.popups.eraseStartupConfigNoButton()
router0.clickCliTab()
router0.cli.textCheckPoint("Erase of nvram: complete", 0)
router0.clickConfigTab()
router0.config.settings.eraseNvram()
router0.clickCliTab()
router0.cli.textCheckPoint("Erase of nvram: complete") | [
"ptqatester1@gmail.com"
] | ptqatester1@gmail.com |
5bdf2f8e51db469e4af4c9ca2a139d967f5f99fc | c6f93ccf29f978a7834a01c25e636364adeaa4ea | /setup.py | 86142ec0f903179b1486fcbb97500086415833d8 | [
"MIT"
] | permissive | aviv-julienjehannet/jschon | d099709831fbee740c8cb5466c5964cec8d669fa | c8a0ddbb8202d9e80e8c4e959ec8bfd28297eec1 | refs/heads/main | 2023-04-16T05:51:41.170699 | 2021-04-23T13:33:01 | 2021-04-23T13:33:01 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,399 | py | import pathlib
from setuptools import setup, find_packages
HERE = pathlib.Path(__file__).parent.resolve()
README = (HERE / 'README.md').read_text(encoding='utf-8')
setup(
name='jschon',
version='0.2.0',
description='A pythonic, extensible JSON Schema implementation.',
long_description=README,
long_description_content_type='text/markdown',
url='https://github.com/marksparkza/jschon',
author='Mark Jacobson',
author_email='mark@saeon.ac.za',
license='MIT',
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
packages=find_packages(exclude=['tests']),
include_package_data=True,
python_requires='~=3.8',
install_requires=['rfc3986'],
extras_require={
'test': ['tox'],
'dev': [
'pytest',
'coverage',
'hypothesis',
'pytest-benchmark',
]
},
)
| [
"52427991+marksparkza@users.noreply.github.com"
] | 52427991+marksparkza@users.noreply.github.com |
1e11232be19b72a2cad2d2b0d8b3a87e17d8fa2c | ce819ddd76427722d967e06190fc24ac98758009 | /PyQT_MySQL/Study_PyQT5/22/chapter22_2.py | f6b7f01c1136095b3696243a8837986f3bf7f369 | [] | no_license | huilizhou/Deeplearning_Python_DEMO | cb4164d21899757a4061836571b389dad0e63094 | 0a2898122b47b3e0196966a2fc61468afa99f67b | refs/heads/master | 2021-08-16T10:28:51.992892 | 2020-04-04T08:26:07 | 2020-04-04T08:26:07 | 148,308,575 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,165 | py | import sys
from PyQt5.QtWidgets import QApplication, QWidget, QInputDialog, QLineEdit, QTextEdit, QPushButton, \
QGridLayout
class Demo(QWidget):
def __init__(self):
super(Demo, self).__init__()
self.name_btn = QPushButton('Name', self)
self.gender_btn = QPushButton('Gender', self)
self.age_btn = QPushButton('Age', self)
self.score_btn = QPushButton('Score', self)
self.info_btn = QPushButton('Info', self)
self.name_btn.clicked.connect(
lambda: self.open_dialog_func(self.name_btn))
self.gender_btn.clicked.connect(
lambda: self.open_dialog_func(self.gender_btn))
self.age_btn.clicked.connect(
lambda: self.open_dialog_func(self.age_btn))
self.score_btn.clicked.connect(
lambda: self.open_dialog_func(self.score_btn))
self.info_btn.clicked.connect(
lambda: self.open_dialog_func(self.info_btn))
self.name_line = QLineEdit(self)
self.gender_line = QLineEdit(self)
self.age_line = QLineEdit(self)
self.score_line = QLineEdit(self)
self.info_textedit = QTextEdit(self)
self.g_layout = QGridLayout()
self.g_layout.addWidget(self.name_btn, 0, 0, 1, 1)
self.g_layout.addWidget(self.name_line, 0, 1, 1, 1)
self.g_layout.addWidget(self.gender_btn, 1, 0, 1, 1)
self.g_layout.addWidget(self.gender_line, 1, 1, 1, 1)
self.g_layout.addWidget(self.age_btn, 2, 0, 1, 1)
self.g_layout.addWidget(self.age_line, 2, 1, 1, 1)
self.g_layout.addWidget(self.score_btn, 3, 0, 1, 1)
self.g_layout.addWidget(self.score_line, 3, 1, 1, 1)
self.g_layout.addWidget(self.info_btn, 4, 0, 1, 1)
self.g_layout.addWidget(self.info_textedit, 4, 1, 1, 1)
self.setLayout(self.g_layout)
def open_dialog_func(self, btn):
if btn == self.name_btn: # 1
name, ok = QInputDialog.getText(
self, 'Name Input', 'Please enter the name:')
if ok:
self.name_line.setText(name)
elif btn == self.gender_btn: # 2
gender_list = ['Female', 'Male']
gender, ok = QInputDialog.getItem(
self, 'Gender Input', 'Please choose the gender:', gender_list, 0, False)
if ok:
self.gender_line.setText(gender)
elif btn == self.age_btn:
age, ok = QInputDialog.getInt(
self, 'Age Input', 'Please select the age:')
if ok:
self.age_line.setText(str(age))
elif btn == self.score_btn:
score, ok = QInputDialog.getDouble(
self, 'Score Input', 'Please select the score:')
if ok:
self.score_line.setText(str(score))
else:
info, ok = QInputDialog.getMultiLineText(
self, 'Info Input', 'Please enter the info:')
if ok:
self.info_textedit.setText(info)
if __name__ == '__main__':
app = QApplication(sys.argv)
demo = Demo()
demo.show()
sys.exit(app.exec_())
| [
"2540278344@qq.com"
] | 2540278344@qq.com |
efaed80c4684aaf515c0b2e38b52d25235d134be | 942ec6d53f40ff43f36594bb607dc7a86f0e6370 | /rasa_core/interpreter.py | 947655e9ce2ad753cfffade9378668eb4bd18f6e | [
"Apache-2.0",
"MIT",
"BSD-3-Clause"
] | permissive | hydercps/rasa_core | 319be5a0646bbc050538aa5ef016ea84183cf0b4 | e0a9db623cbe0bfa2ffd232a1c05a80441dd6ab7 | refs/heads/master | 2021-05-15T13:06:23.482657 | 2017-10-26T11:58:41 | 2017-10-26T11:58:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,169 | py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import logging
import re
import os
import requests
from builtins import str
logger = logging.getLogger(__name__)
class NaturalLanguageInterpreter(object):
def parse(self, text):
raise NotImplementedError(
"Interpreter needs to be able to parse "
"messages into structured output.")
@staticmethod
def create(obj):
if isinstance(obj, NaturalLanguageInterpreter):
return obj
if isinstance(obj, str):
return RasaNLUInterpreter(model_directory=obj)
else:
return RegexInterpreter() # default interpreter
class RegexInterpreter(NaturalLanguageInterpreter):
@staticmethod
def extract_intent_and_entities(user_input):
        value_assign_rx = r'\s*(.+)\s*=\s*(.+)\s*'
        structed_message_rx = r'^_([^\[]+)(\[(.+)\])?'
m = re.search(structed_message_rx, user_input)
if m is not None:
intent = m.group(1).lower()
offset = m.start(3)
entities_str = m.group(3)
entities = []
if entities_str is not None:
for entity_str in entities_str.split(','):
for match in re.finditer(value_assign_rx, entity_str):
start = match.start(2) + offset
end = match.end(0) + offset
entity = {
"entity": match.group(1),
"start": start,
"end": end,
"value": match.group(2)}
entities.append(entity)
return intent, entities
else:
return None, []
def parse(self, text):
intent, entities = self.extract_intent_and_entities(text)
return {
'text': text,
'intent': {
'name': intent,
'confidence': 1.0,
},
'intent_ranking': [{
'name': intent,
'confidence': 1.0,
}],
'entities': entities,
}
class RasaNLUHttpInterpreter(NaturalLanguageInterpreter):
def __init__(self, model_name, token, server):
self.model_name = model_name
self.token = token
self.server = server
def parse(self, text):
"""Parses a text message.
Returns a default value if the parsing of the text failed."""
default_return = {"intent": {"name": "", "confidence": 0.0},
"entities": [], "text": ""}
result = self._rasa_http_parse(text)
return result if result is not None else default_return
def _rasa_http_parse(self, text):
"""Send a text message to a running rasa NLU http server.
Returns `None` on failure."""
if not self.server:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"No rasa NLU server specified!".format(text))
return None
params = {
"token": self.token,
"model": self.model_name,
"q": text
}
url = "{}/parse".format(self.server)
try:
result = requests.get(url, params=params)
if result.status_code == 200:
return result.json()
else:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"Error: {}".format(text, result.text))
return None
except Exception as e:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"Error: {}".format(text, e))
return None
class RasaNLUInterpreter(NaturalLanguageInterpreter):
def __init__(self, model_directory, config_file=None, lazy_init=False):
from rasa_nlu.model import Interpreter
from rasa_nlu.model import Metadata
from rasa_nlu.config import RasaNLUConfig
self.metadata = Metadata.load(model_directory)
self.lazy_init = lazy_init
self.config_file = config_file
if not lazy_init:
self.interpreter = Interpreter.load(self.metadata,
RasaNLUConfig(config_file,
os.environ))
else:
self.interpreter = None
def parse(self, text):
"""Parses a text message.
Returns a default value if the parsing of the text failed."""
if self.lazy_init and self.interpreter is None:
from rasa_nlu.model import Interpreter
from rasa_nlu.config import RasaNLUConfig
self.interpreter = Interpreter.load(self.metadata,
RasaNLUConfig(self.config_file,
os.environ))
return self.interpreter.parse(text)
| [
"tom.bocklisch@scalableminds.com"
] | tom.bocklisch@scalableminds.com |
e6bba0955a52988e23383a991c56e01477c61b16 | 201c9dc696159ea684e654fe7f3e3a3b8026cbf0 | /admaren/asgi.py | 2b6a15c77be3db4ace683c797fe32e96ace86a20 | [] | no_license | AthifSaheer/admaren-machine-test | 8a44652bf09f31c54da96db0b9cb9654b7544514 | 2547e4aed9ca23e30bc33151d77b7efd1db6a45c | refs/heads/main | 2023-08-04T03:02:41.844039 | 2021-10-02T06:51:23 | 2021-10-02T06:51:23 | 412,713,951 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 391 | py | """
ASGI config for admaren project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.2/howto/deployment/asgi/
"""
import os
from django.core.asgi import get_asgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'admaren.settings')
application = get_asgi_application()
| [
"liteboook@gmail.com"
] | liteboook@gmail.com |
31ed82385a1930c9ec4ea90def4ed19f98bd1449 | cccc9fa74b16cc4a2ae37dfb2449d6dc1ce215cd | /image_comparison/experiments/experiment2/run_experiment2_atlas_variance.py | 74c5750cd05bcb3469bdb973bab2ad3a4cb44bdd | [] | no_license | nagyistge/brainmeta | 611daf90d77432fa72a79b30fa4b895a60647536 | 105cffebcc0bf1c246ed11b67f3da2fff4a05f99 | refs/heads/master | 2021-05-30T06:44:40.095517 | 2016-01-11T23:21:50 | 2016-01-11T23:21:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,659 | py | #!/usr/bin/python
# This batch script will prepare and submit jobs for running on a SLURM cluster
import os
import time
import pandas
import numpy as np
import nibabel as nib
# Input file
basedir = "/scratch/users/vsochat/DATA/BRAINMETA/experiment1"
outdirectory = "%s/atlas_permutations_spearman" %(basedir)
input_file = "%s/openfmri_labels.tsv" %(basedir)
standard = "%s/standard/MNI152_T1_2mm_brain_mask.nii.gz" %(basedir)
# This will be a gold standard correlation data file
gs_file = "%s/gs_comparisons.tsv" %(basedir)
# Range of sampling percentages from 0.05 to 1.0
percentages = np.divide(range(1,101),100.0)
# We will sample some number of regions each time, from 1 through 116, to represent different mask sizes
# This will simulate different levels of coverage!
for percent_sample in percentages:
outfile = "%s/sampled_%s.pkl" %(outdirectory,percent_sample)
if not os.path.exists(outfile):
filey = ".job/coverage_%s.job" %(percent_sample)
filey = open(filey,"w")
filey.writelines("#!/bin/bash\n")
filey.writelines("#SBATCH --job-name=coverage_%s\n" %(percent_sample))
filey.writelines("#SBATCH --output=.out/coverage_%s.out\n" %(percent_sample))
filey.writelines("#SBATCH --error=.out/coverage_%s.err\n" %(percent_sample))
filey.writelines("#SBATCH --time=2-00:00\n")
filey.writelines("#SBATCH --mem=64000\n")
filey.writelines("python /home/vsochat/SCRIPT/python/brainmeta/image_comparison/experiments/experiment2_atlas_variance.py %s %s %s %s %s" %(percent_sample,input_file,outfile,standard,gs_file))
filey.close()
os.system("sbatch -p russpold " + ".job/coverage_%s.job" %(percent_sample))
| [
"vsochat@stanford.edu"
] | vsochat@stanford.edu |
0c0f12cec808a46fc7c2dbc0073bbc8c45ac9ffd | 62226afe584a0d7f8d52fc38ca416b19ffafcb7a | /hwtLib/examples/axi/simpleAxiRegs_test.py | 6fb518c70de16523bfc4cfeffee8ecc47957fc42 | [
"MIT"
] | permissive | Nic30/hwtLib | d08a08bdd0bf764971c4aa319ff03d4df8778395 | 4c1d54c7b15929032ad2ba984bf48b45f3549c49 | refs/heads/master | 2023-05-25T16:57:25.232026 | 2023-05-12T20:39:01 | 2023-05-12T20:39:01 | 63,018,738 | 36 | 8 | MIT | 2021-04-06T17:56:14 | 2016-07-10T21:13:00 | Python | UTF-8 | Python | false | false | 1,694 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import unittest
from hwt.simulator.simTestCase import SimTestCase
from hwtLib.examples.axi.simpleAxiRegs import SimpleAxiRegs
from pyMathBitPrecise.bit_utils import mask
from hwtSimApi.constants import CLK_PERIOD
allMask = mask(32 // 8)
class SimpleAxiRegsTC(SimTestCase):
@classmethod
def setUpClass(cls):
cls.u = SimpleAxiRegs()
cls.compileSim(cls.u)
def test_nop(self):
u = self.u
self.runSim(25 * CLK_PERIOD)
self.assertEmpty(u.axi._ag.r.data)
self.assertEmpty(u.axi._ag.b.data)
def test_falseWrite(self):
u = self.u
axi = u.axi._ag
axi.w.data += [(11, allMask), (37, allMask)]
self.runSim(25 * CLK_PERIOD)
self.assertEqual(len(axi.w.data), 2 - 1)
self.assertEmpty(u.axi._ag.r.data)
self.assertEmpty(u.axi._ag.b.data)
def test_write(self):
u = self.u
axi = u.axi._ag
axi.aw.data += [(0, 0), (4, 0)]
axi.w.data += [(11, allMask), (37, allMask)]
self.runSim(25 * CLK_PERIOD)
self.assertEmpty(axi.aw.data)
self.assertEmpty(axi.w.data)
self.assertEmpty(u.axi._ag.r.data)
self.assertEqual(len(u.axi._ag.b.data), 2)
model = self.rtl_simulator.model.io
self.assertValSequenceEqual(
[model.reg0.val, model.reg1.val],
[11, 37])
if __name__ == "__main__":
testLoader = unittest.TestLoader()
# suite = unittest.TestSuite([SimpleAxiRegsTC("test_write")])
suite = testLoader.loadTestsFromTestCase(SimpleAxiRegsTC)
runner = unittest.TextTestRunner(verbosity=3)
runner.run(suite)
| [
"nic30@seznam.cz"
] | nic30@seznam.cz |
4cea90c357cb1e8caed9ed683cb52f6f3bd5744f | c383d6adebdfc35e96fa88809111f79f7ebee819 | /interview/search/sequential_search.py | 8be59d3d4a3b577ea5030d69c480a9da5ca7390d | [] | no_license | aryabartar/learning | 5eb9f32673d01dcf181c73436dd3fecbf777d555 | f3ff3e4548922d9aa0f700e65fa949ab0108653c | refs/heads/master | 2020-05-16T06:03:11.974609 | 2019-08-26T14:10:42 | 2019-08-26T14:10:42 | 182,834,405 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 274 | py | def find_unordered(elem, array):
for i in array:
if elem == i:
return True
return False
def find_ordered(elem, array):
for i in array:
        if i > elem:  # items are assumed sorted ascending, so elem cannot appear later
            break
if elem == i:
return True
return False
| [
"bartararya@gmail.com"
] | bartararya@gmail.com |
50029d742fdf872199ac05d382e8a46edf30c565 | d1e4f29e583ee964d63bc48554eaa73d67d58eb2 | /analytics/migrations/0012_add_on_delete.py | 24b2d5421292422475152c04fa1d2adc982982f8 | [
"LicenseRef-scancode-free-unknown",
"Apache-2.0"
] | permissive | hygolei/zulip | 299f636f9238f50b0d2746f1c371748f182f1f4e | 39fe66ab0824bc439929debeb9883c3046c6ed70 | refs/heads/master | 2023-07-11T22:50:27.434398 | 2021-08-09T10:07:35 | 2021-08-09T10:07:35 | 375,401,165 | 1 | 1 | Apache-2.0 | 2021-08-09T10:07:36 | 2021-06-09T15:20:09 | Python | UTF-8 | Python | false | false | 1,300 | py | # Generated by Django 1.11.6 on 2018-01-29 08:14
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0011_clear_analytics_tables"),
]
operations = [
migrations.AlterField(
model_name="installationcount",
name="anomaly",
field=models.ForeignKey(
null=True, on_delete=django.db.models.deletion.SET_NULL, to="analytics.Anomaly"
),
),
migrations.AlterField(
model_name="realmcount",
name="anomaly",
field=models.ForeignKey(
null=True, on_delete=django.db.models.deletion.SET_NULL, to="analytics.Anomaly"
),
),
migrations.AlterField(
model_name="streamcount",
name="anomaly",
field=models.ForeignKey(
null=True, on_delete=django.db.models.deletion.SET_NULL, to="analytics.Anomaly"
),
),
migrations.AlterField(
model_name="usercount",
name="anomaly",
field=models.ForeignKey(
null=True, on_delete=django.db.models.deletion.SET_NULL, to="analytics.Anomaly"
),
),
]
| [
"tabbott@zulipchat.com"
] | tabbott@zulipchat.com |
16223a5f2a1c413d52701ed8ee134cfd53475775 | 7a2d2cfbe99a13920e55e462bd40627e34d18f23 | /tests/openbb_terminal/portfolio/portfolio_optimization/conftest.py | 7237236327ef8a8f4b91f9b8ec13ffb3523f7ebf | [
"MIT"
] | permissive | conrad-strughold/GamestonkTerminal | b9ada627929dbc1be379f19c69b34e24764efcff | c9aa674d979a7c7fd7f251410ceaa1c8a4ef2e6e | refs/heads/main | 2023-06-24T02:59:45.096493 | 2023-05-16T15:15:20 | 2023-05-16T15:15:20 | 342,313,838 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 203 | py | import pytest
from _pytest.nodes import Node
def pytest_runtest_setup(item: Node):
if not item.config.getoption("--optimization"):
pytest.skip(msg="Runs only with option : --optimization")
| [
"noreply@github.com"
] | conrad-strughold.noreply@github.com |
8d50dc2df16514bc977ef796e90045f9ebe1b83b | be24b5f37823125b2b901c0029175bfb2f25fb0e | /tests/homework/test_homework6.py | 4c1972342f82f74bfdb69cc577b4e437d09f7552 | [
"MIT"
] | permissive | acc-cosc-1336/cosc-1336-spring-2018-Miguelh1997 | 1bd75c51e72431037a46a1b3079d7695c41920ce | ac4b0405c4070758d0fc07458d4dca8a8a0313de | refs/heads/master | 2021-05-11T09:11:41.887630 | 2018-05-12T03:11:38 | 2018-05-12T03:11:38 | 118,070,058 | 0 | 1 | MIT | 2018-05-12T03:16:17 | 2018-01-19T03:13:02 | Python | UTF-8 | Python | false | false | 1,725 | py | import unittest
from src.homework.homework6 import (get_point_mutation, get_dna_complement, transcribe_dna_into_rna, get_gc_content)
#write import statement for homework 6 file
class TestHomework6(unittest.TestCase):
def test_sample(self):
self.assertEqual(1,1)
#create a test case for function find_motif_in_dna with arguments GATATATGCATATACTT and ATAT
#the result should be 2 4 10 (three different integers)
#create a test case for function get_point_mutations with arguments GAGCCTACTAACGGGAT and CATCGTAATGACGGCCT
#the result should be 7
def test_get_point_mutation_GAGCCTACTAACGGGAT(self):
self.assertEqual(7, get_point_mutation('GAGCCTACTAACGGGAT','CATCGTAATGACGGCCT'))
#create a test case for function get_dna_complement with argument AAAACCCGGT the result should be ACCGGGTTTT
def test_get_dna_complement_AAAACCCGGT(self):
self.assertEqual('ACCGGGTTTT',get_dna_complement('AAAACCCGGT'))
#create a test case for function transcribe_dna_to_rna with argument GATGGAACTTGACTACGTAAATT
#the result should be GAUGGAACUUGACUACGUAAAUU
def test_transcribe_dna_into_rna_GATGGAACTTGACTACGTAAATT(self):
self.assertEqual('GAUGGAACUUGACUACGUAAAUU',transcribe_dna_into_rna('GATGGAACTTGACTACGTAAATT'))
#create a test case for function get_gc_content with arguments
#CCACCCTCGTGGTATGGCTAGGCATTCAGGAACCGGAGAACGCTTCAGACCAGCCCGGACTGGGAACCTGCGGGCAGTAGGTGGAAT
#the result should be 60.919540
def test_get_gc_content(self):
self.assertEqual('60.919540',get_gc_content('CCACCCTCGTGGTATGGCTAGGCATTCAGGAACCGGAGAACGCTTCAGACCAGCCCGGACTGGGAACCTGCGGGCAGTAGGTGGAAT'))
if __name__ == '__main__':
unittest.main(verbosity=2)
| [
"noreply@github.com"
] | acc-cosc-1336.noreply@github.com |
8e3381accfc766a875a987cabcb997c6987cb556 | 0e5f7fbea53b56ddeb0905c687aff43ae67034a8 | /src/resource/script/helper/cafm_api/RequestCheckData.py | 1dfdf090a7eceda36ead35ad6b1aba6fac678a09 | [] | no_license | arkanmgerges/cafm.identity | 359cdae2df84cec099828719202b773212549d6a | 55d36c068e26e13ee5bae5c033e2e17784c63feb | refs/heads/main | 2023-08-28T18:55:17.103664 | 2021-07-27T18:50:36 | 2021-07-27T18:50:36 | 370,453,892 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 397 | py | """
@author: Arkan M. Gerges<arkan.m.gerges@gmail.com>
"""
class RequestCheckData:
def __init__(self, requestId, checkForId=False, resultIdName=None, ignoreIfExists=False, returnResult=True):
self.requestId = requestId
self.checkForId=checkForId
self.resultIdName=resultIdName
self.ignoreIfExists=ignoreIfExists
self.returnResult=returnResult | [
"arkan.m.gerges@gmail.com"
] | arkan.m.gerges@gmail.com |
e5d5959f54521aae879a71ae8ee0fa751ca5f922 | a08d885cb9150d7e84f5ffbf0c9734893105a898 | /2022/Day 12/hill_climbing_algorithm_test.py | 8af50ef4df758b9a1ccd29b85665b167104ab870 | [] | no_license | vhsw/Advent-of-Code | ab422c389340a1caf2ec17c5db4981add6433fbe | 3c1dac27667472202ab15098c48efaac19348edf | refs/heads/master | 2022-12-29T03:56:59.648395 | 2022-12-26T11:01:45 | 2022-12-26T11:01:45 | 162,491,163 | 0 | 0 | null | 2022-05-10T08:43:32 | 2018-12-19T21:10:26 | Python | UTF-8 | Python | false | false | 355 | py | """Day 12: tests"""
from hill_climbing_algorithm import DATA, part1, part2
EXAMPLE = """
Sabqponm
abcryxxl
accszExk
acctuvwj
abdefghi
""".strip()
def test_part1():
"""Part 1 test"""
assert part1(EXAMPLE) == 31
assert part1(DATA) == 408
def test_part2():
"""Part 2 test"""
assert part2(EXAMPLE) == 29
assert part2(DATA) == 399
| [
"nevermind1025@gmail.com"
] | nevermind1025@gmail.com |
316d3967e9d92294530800e1eb8ba2b5054e610d | 6b2a8dd202fdce77c971c412717e305e1caaac51 | /solutions_5646553574277120_0/Python/krukovna/solve.py | e02a7c8f9c1b3e907cf93bbe302a8c63e63dcc89 | [] | no_license | alexandraback/datacollection | 0bc67a9ace00abbc843f4912562f3a064992e0e9 | 076a7bc7693f3abf07bfdbdac838cb4ef65ccfcf | refs/heads/master | 2021-01-24T18:27:24.417992 | 2017-05-23T09:23:38 | 2017-05-23T09:23:38 | 84,313,442 | 2 | 4 | null | null | null | null | UTF-8 | Python | false | false | 813 | py | import sys
class Solved(Exception):
pass
def check(value, items):
s = value
for i in range(0, len(items)):
if s >= items[i]:
s -= items[i]
return s == 0
def solve(top, items):
a = 0
items = list(reversed(items))
for i in range(1, top+1):
if not check(i, items):
items = list(reversed(sorted([i] + items)))
a += 1
raise Solved(a)
if __name__ == '__main__':
for i in range(int(sys.stdin.readline())):
data = list(map(int, sys.stdin.readline().strip().split(' ')))
amounts = list(sorted(map(int, sys.stdin.readline().strip().split(' '))))
try:
solve(data[2], amounts)
except Solved as e:
print('Case #{}: {}'.format(i+1, e))
| [
"eewestman@gmail.com"
] | eewestman@gmail.com |
f740cf330191dc26a3bd03d4333a03e49014095a | 5e915a39fe966811424df0574f6670d252f895c8 | /micropython/p4_temperatura_f.py | 56ba1eda6cf14c319ef6a799437c06ba9f51ead4 | [
"MIT"
] | permissive | monkmakes/micro_bit_kit_es | c9c2f77f722f2a8a7e2657164d700b6fc758ce92 | be2a76f0ad45a70bef66c7ba548b2578ab35ede8 | refs/heads/master | 2022-11-21T19:09:11.445210 | 2020-07-22T16:07:42 | 2020-07-22T16:07:42 | 281,723,041 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 175 | py | # P4 Temperatura F
from microbit import *
while True:
lectura = pin1.read_analog()
temperatura_f = round(lectura * 0.135 +1)
display.scroll(str(temperatura_f))
| [
"evilgeniusauthor@gmail.com"
] | evilgeniusauthor@gmail.com |
1d2dbd370c5088150b15093ffa636dc0ae89bcf1 | 025d7484c52b204bc286dfb9d17fc08e8e03604e | /base_branch_company/__init__.py | db33f366ec546e53aff32c5c46331dd48ef9a9c8 | [] | no_license | gotorishab/stpi | 3e2d2393a3b64f313c688bfcb4855052ea5e62b4 | a548e923f80e124ea5f90f4559ec727193c70528 | refs/heads/master | 2021-07-05T16:19:39.932782 | 2021-04-30T03:58:05 | 2021-04-30T03:58:05 | 236,436,956 | 0 | 6 | null | null | null | null | UTF-8 | Python | false | false | 142 | py | # -*- coding: utf-8 -*-
# Part of odoo. See LICENSE file for full copyright and licensing details.
from . import models
from . import wizard
| [
"gotorishab@gmail.com"
] | gotorishab@gmail.com |
99d3e82d29dc3df93ee1a712c799d4279cf6595d | e2e08d7c97398a42e6554f913ee27340226994d9 | /pyautoTest-master(ICF-7.5.0)/test_case/scg/scg_Administrator/test_c139428.py | 61510e984e6bcc959e07dc4e89da92d75977f1af | [] | no_license | lizhuoya1111/Automated_testing_practice | 88e7be512e831d279324ad710946232377fb4c01 | b3a532d33ddeb8d01fff315bcd59b451befdef23 | refs/heads/master | 2022-12-04T08:19:29.806445 | 2020-08-14T03:51:20 | 2020-08-14T03:51:20 | 287,426,498 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,400 | py | import pytest
import time
import sys
from os.path import dirname, abspath
sys.path.insert(0, dirname(dirname(abspath(__file__))))
from page_obj.scg.scg_def_sys import *
from page_obj.scg.scg_def import *
from page_obj.scg.scg_button import *
from page_obj.scg.scg_def_log import *
from page_obj.common.rail import *
from page_obj.scg.scg_dev import *
from page_obj.scg.scg_def_ifname_OEM import *
test_id = 139428
def test_c139428(browser):
try:
login_web(browser, url=dev1)
        # # switch to the default frame
        # browser.switch_to.default_content()
        # browser.switch_to.frame("lefttree")
        # # click "System"
        # browser.find_element_by_xpath(系统).click()
        # if not browser.find_element_by_xpath('//*[@id="menu"]/div[1]/div/ul/li[2]/ul').is_displayed():
        #     # if it is not visible, click the plus sign to expand the element
        #     browser.find_element_by_xpath(系统管理).click()
        # # click "Administrator"
        # browser.find_element_by_xpath(管理员).click()
        # # switch back to the default frame
        # browser.switch_to.default_content()
        # # switch to the content frame
        # browser.switch_to.frame("content")
into_fun(browser, 管理员)
time.sleep(5)
browser.find_element_by_xpath('//*[@id="tabs"]/li[2]/a/span').click()
time.sleep(5)
browser.find_element_by_xpath('//*[@id="button_area"]/div/input').click()
time.sleep(3)
browser.find_element_by_xpath('//*[@id="profilename"]').send_keys("@#¥%&")
browser.find_element_by_xpath('//*[@id="description"]').send_keys("admin_profile")
browser.find_element_by_xpath('//*[@id="configsystem_0"]').click()
browser.find_element_by_xpath('//*[@id="reportsystem_0"]').click()
        # click the Save button
browser.find_element_by_xpath('//*[@id="container"]/div/form/div[2]/div[2]/div/input[2]').click()
        # read the text of the alert box
time.sleep(2)
alert = browser.switch_to_alert()
print(alert.text)
web_info = alert.text
        # accept the alert
browser.switch_to_alert().accept()
try:
assert "name输入错误" in web_info
rail_pass(test_run_id, test_id)
except:
rail_fail(test_run_id, test_id)
assert "name输入错误" in web_info
except Exception as err:
        # if any step above raised an error, reboot the device to restore the configuration
print(err)
reload(hostip=dev1)
rail_fail(test_run_id, test_id)
assert False
if __name__ == '__main__':
pytest.main(["-v", "-s", "test_c" + str(test_id) + ".py"])
| [
"15501866985@163.com"
] | 15501866985@163.com |
ec63a34cd757f9cabca23c6fcc9fb1e4d474b126 | 68a294455c03ada90e9ab80867c33b73672152f9 | /apps/producto/models.py | 11fc3eb1f9f9db601bd1427326252d430784871f | [] | no_license | chrisstianandres/citas | f7e89aa9481ee6aa260bd28cae44091a2c6db900 | 21f7f90ec958cabd71aa41c852877f0657677ade | refs/heads/master | 2023-08-28T07:24:35.187428 | 2021-11-20T00:03:08 | 2021-11-20T00:03:08 | 347,208,630 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,103 | py | import os
from datetime import datetime
from io import BytesIO
import qrcode
from PIL import Image, ImageDraw
from django.core.files import File
from django.db import models
from django.forms import model_to_dict
from apps.categoria.models import Categoria
from apps.presentacion.models import Presentacion
from citas.settings import STATIC_URL, MEDIA_URL, BASE_DIR, SECRET_KEY_ENCRIPT, MEDIA_ROOT
class Producto(models.Model):
categoria = models.ForeignKey(Categoria, on_delete=models.PROTECT, null=True, blank=True)
presentacion = models.ForeignKey(Presentacion, on_delete=models.PROTECT, null=True, blank=True)
nombre = models.CharField(max_length=100)
descripcion = models.CharField(max_length=200)
imagen = models.ImageField(upload_to='productos', blank=True, null=True)
qr = models.ImageField(upload_to='productos/qr', blank=True, null=True)
def __str__(self):
return '{}'.format(self.nombre)
def get_image(self):
if self.imagen:
return '{}{}'.format(MEDIA_URL, self.imagen)
else:
return '{}{}'.format(MEDIA_URL, 'productos/no_disponible.jpg')
def get_qr(self):
if self.qr:
return '{}{}'.format(MEDIA_URL, self.qr)
def get_qr_2(self):
if self.qr:
return '{}{}'.format(MEDIA_ROOT, self.qr)
# def save(self, *args, **kwargs):
#
# super().save(*args, *kwargs)
def toJSON(self):
item = model_to_dict(self)
item['presentacion'] = self.presentacion.toJSON()
item['categoria'] = self.categoria.toJSON()
item['imagen'] = self.get_image()
item['qr'] = self.get_qr()
item['tipo'] = 'Producto'
return item
class Meta:
db_table = 'producto'
verbose_name = 'producto'
verbose_name_plural = 'productos'
ordering = ['-id']
class envio_stock_dia(models.Model):
    fecha = models.DateField(default=datetime.now, unique=True)  # pass the callable, not datetime.now(), so the date is evaluated on each save rather than once at import time
enviado = models.BooleanField(default=True)
def __str__(self):
return '{}'.format(self.fecha.strftime('%Y-%m-%d'))
| [
"chrisstianandres@gmail.com"
] | chrisstianandres@gmail.com |
1192cded4effd3395252540e02dbd727c5dfe410 | 24cee07743790afde5040c38ef95bb940451e2f6 | /cci/LinkedList/2_4.py | e7b6c3f28f9924dec5761d25f28d55ff0ff9c01a | [] | no_license | tinaba96/coding | fe903fb8740d115cf5a7f4ff5af73c7d16b9bce1 | d999bf5620e52fabce4e564c73b9f186e493b070 | refs/heads/master | 2023-09-01T02:24:33.476364 | 2023-08-30T15:01:47 | 2023-08-30T15:01:47 | 227,594,153 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 710 | py | from LinkedList import LinkedList
def partition(ll,x):
current = ll.tail = ll.head
print('current:', current)
print('llinit:', ll)
print('lltail:', ll.tail)
print('currentnext:', current.next)
while current:
nextNode = current.next
print('nextNode:' , nextNode)
current.next = None
print('ll:',ll)
if current.value <= x:
print('head1:', ll.head)
current.next = ll.head
ll.head = current
print('head2:', ll.head)
else:
ll.tail.next = current
ll.tail = current
current = nextNode
if ll.tail.next is not None:
ll.tail.next = None
ll = LinkedList()
ll.generate(10, 0, 99)
print(ll)
partition(ll, ll.head.value)
print(ll)
| [
"tinaba178.96@gmail.com"
] | tinaba178.96@gmail.com |
8bfed1015851b962c2225cfa88aca33414e65fbe | 7b35ddab50851b774bffbc633fc6d4fd4faa1efa | /simplifytour/core/views.py | a7dc604ef44572d89b7585c7c2449762739c408f | [] | no_license | Tushant/simplifytour | e585607efef9937f4a32165a526c38cbc192a562 | 2cb9f70b8cf27fd4beddf251966fdc214a1dcd85 | refs/heads/master | 2020-07-22T06:46:52.480528 | 2019-09-08T12:05:15 | 2019-09-08T12:05:15 | 207,106,399 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,555 | py | import os
import mimetypes
try:
from urllib.parse import urljoin, urlparse
except ImportError:
from urlparse import urljoin, urlparse
from json import dumps
from django.contrib.admin.views.decorators import staff_member_required
from django.http import (HttpResponse, HttpResponseNotFound)
from django.utils.translation import ugettext_lazy as _
from django.contrib.staticfiles import finders
from simplifytour.core.models import Displayable
from simplifytour.conf import settings
@staff_member_required
def static_proxy(request):
"""
Serves TinyMCE plugins inside the inline popups and the uploadify
SWF, as these are normally static files, and will break with
cross-domain JavaScript errors if ``STATIC_URL`` is an external
host. URL for the file is passed in via querystring in the inline
popup plugin template, and we then attempt to pull out the relative
path to the file, so that we can serve it locally via Django.
"""
normalize = lambda u: ("//" + u.split("://")[-1]) if "://" in u else u
url = normalize(request.GET["u"])
host = "//" + request.get_host()
static_url = normalize(settings.STATIC_URL)
for prefix in (host, static_url, "/"):
if url.startswith(prefix):
url = url.replace(prefix, "", 1)
response = ""
(content_type, encoding) = mimetypes.guess_type(url)
if content_type is None:
content_type = "application/octet-stream"
path = finders.find(url)
if path:
if isinstance(path, (list, tuple)):
path = path[0]
if url.endswith(".htm"):
# Inject <base href="{{ STATIC_URL }}"> into TinyMCE
# plugins, since the path static files in these won't be
# on the same domain.
static_url = settings.STATIC_URL + os.path.split(url)[0] + "/"
if not urlparse(static_url).scheme:
static_url = urljoin(host, static_url)
base_tag = "<base href='%s'>" % static_url
with open(path, "r") as f:
response = f.read().replace("<head>", "<head>" + base_tag)
else:
try:
with open(path, "rb") as f:
response = f.read()
except IOError:
return HttpResponseNotFound()
return HttpResponse(response, content_type=content_type)
def displayable_links_js(request):
"""
Renders a list of url/title pairs for all ``Displayable`` subclass
instances into JSON that's used to populate a list of links in
TinyMCE.
"""
links = []
if "simplifytour.pages" in settings.INSTALLED_APPS:
from simplifytour.pages.models import Page
is_page = lambda obj: isinstance(obj, Page)
else:
is_page = lambda obj: False
# For each item's title, we use its model's verbose_name, but in the
# case of Page subclasses, we just use "Page", and then sort the items
# by whether they're a Page subclass or not, then by their URL.
for url, obj in Displayable.objects.url_map(for_user=request.user).items():
title = getattr(obj, "titles", obj.title)
real = hasattr(obj, "id")
page = is_page(obj)
if real:
verbose_name = _("Page") if page else obj._meta.verbose_name
title = "%s: %s" % (verbose_name, title)
links.append((not page and real, {"title": str(title), "value": url}))
sorted_links = sorted(links, key=lambda link: (link[0], link[1]['value']))
return HttpResponse(dumps([link[1] for link in sorted_links])) | [
"programmertushant@gmail.com"
] | programmertushant@gmail.com |
d0c90f25fddfaf49151612cb7ab6bc5f675ce960 | 7879c47da4cfa94ad676dc4f0a5aea308b6a05b9 | /banners/migrations/0019_auto_20190409_1629.py | df151200ad21b7b95dd60e9267ba70b2115debc2 | [] | no_license | SoloTodo/solotodo_core | 9bc51fb276a22d25d3d894552a20f07403eb1555 | 72d8e21512b8a358335c347c3cc9b39fc8789c9b | refs/heads/develop | 2023-08-13T04:21:03.957429 | 2023-08-10T16:14:44 | 2023-08-10T16:14:44 | 96,940,737 | 15 | 5 | null | 2023-07-25T15:46:18 | 2017-07-11T21:59:06 | Python | UTF-8 | Python | false | false | 408 | py | # Generated by Django 2.0.3 on 2019-04-09 16:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('banners', '0018_auto_20190313_1137'),
]
operations = [
migrations.AlterField(
model_name='bannerupdate',
name='status_message',
field=models.TextField(blank=True, null=True),
),
]
| [
"vkhemlan@gmail.com"
] | vkhemlan@gmail.com |
8e365f7fb7dbf06d1ec12db9b886d675b708e32a | a3d6556180e74af7b555f8d47d3fea55b94bcbda | /third_party/blink/web_tests/external/wpt/webdriver/tests/classic/get_window_rect/get.py | f7592a30e067030f3c6433bc2419db06c0db8da8 | [
"LGPL-2.0-or-later",
"LicenseRef-scancode-warranty-disclaimer",
"LGPL-2.1-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"LGPL-2.0-only",
"BSD-2-Clause",
"LicenseRef-scancode-other-copyleft",
"BSD-3-Clause",
"MIT",
"Apache-2.0"
] | permissive | chromium/chromium | aaa9eda10115b50b0616d2f1aed5ef35d1d779d6 | a401d6cf4f7bf0e2d2e964c512ebb923c3d8832c | refs/heads/main | 2023-08-24T00:35:12.585945 | 2023-08-23T22:01:11 | 2023-08-23T22:01:11 | 120,360,765 | 17,408 | 7,102 | BSD-3-Clause | 2023-09-10T23:44:27 | 2018-02-05T20:55:32 | null | UTF-8 | Python | false | false | 833 | py | from tests.support.asserts import assert_error, assert_success
def get_window_rect(session):
return session.transport.send(
"GET", "session/{session_id}/window/rect".format(**vars(session)))
def test_no_top_browsing_context(session, closed_window):
response = get_window_rect(session)
assert_error(response, "no such window")
def test_no_browsing_context(session, closed_frame):
response = get_window_rect(session)
assert_success(response)
def test_payload(session):
expected = session.execute_script("""return {
x: window.screenX,
y: window.screenY,
width: window.outerWidth,
height: window.outerHeight
}""")
response = get_window_rect(session)
value = assert_success(response)
assert isinstance(value, dict)
assert value == expected
| [
"commit-bot@chromium.org"
] | commit-bot@chromium.org |
b78201c41112819c6d5c05a0df40bc262974948d | 66727e413dc0899502eb22d9798c11c07ce5bcda | /tools/utilities/pythonlibs/audio/play_audio.py | 2054c201a6309ad74f5e7239b233959abea6cfc9 | [
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | yunqu/ELL | 83e9f01d9be1dbcfc5b3814929797e5cf0b44159 | bfeb0239ee8c90953a7210fca1087241749a52d4 | refs/heads/master | 2020-05-04T04:09:38.810236 | 2019-04-01T23:05:25 | 2019-04-01T23:05:25 | 178,960,204 | 1 | 0 | null | 2019-04-01T23:01:56 | 2019-04-01T23:01:56 | null | UTF-8 | Python | false | false | 1,658 | py | #!/usr/bin/env python3
###################################################################################################
#
# Project: Embedded Learning Library (ELL)
# File: play_audio.py
# Authors: Chris Lovett
#
# Requires: Python 3.x
#
###################################################################################################
import argparse
import wav_reader
import speaker
# this is a test script to show how to use WavReader and Speaker classes.
arg_parser = argparse.ArgumentParser(description="Play an audio file after resampling it")
arg_parser.add_argument("filename", help="wav file to play ")
arg_parser.add_argument("--sample_rate", "-s", help="Audio sample rate to use", default=16000, type=int)
arg_parser.add_argument("--channels", "-c", help="Audio channels to use", default=1, type=int)
args = arg_parser.parse_args()
# First tell the WavReader what sample rate and channels we want the audio converted to
reader = wav_reader.WavReader(args.sample_rate, args.channels)
# Create a speaker object which we will give to the WavReader. The WavReader will pass
# the re-sampled audio to the Speaker so you can hear what it sounds like
speaker = speaker.Speaker()
# open the reader asking for 256-sample chunks of audio, converted to floating point between -1 and 1.
reader.open(args.filename, 256, speaker)
print("wav file contains sample rate {} and {} channels".format(reader.actual_rate, reader.actual_channels))
# pump the reader until it returns None. In a real app you would assign the results of read() to
# a variable so you can process the audio chunks returned.
while reader.read() is not None:
pass
| [
"clovett@microsoft.com"
] | clovett@microsoft.com |
d6c9aa19d252414fe4a3ac029740b73baa7788ed | 625f2f86f2b2e07cb35204d9b3232427bf462a09 | /official/HIRun2017PP/QCDPhoton_pThat-30_TuneCP5_5p02TeV_pythia8/crabConfig_FOREST.py | 71cf587527010f861fb8c2574246a542d877d2c4 | [] | no_license | ttrk/production | abb84c423a076fd9966276b7ed4350936c755e0b | f8a64c9c38de215802799365f0f7a99e1ee78276 | refs/heads/master | 2023-02-08T23:48:56.355141 | 2023-01-26T08:46:22 | 2023-01-26T08:46:22 | 52,877,406 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,710 | py | from WMCore.Configuration import Configuration
config = Configuration()
config.section_("General")
config.General.requestName = "QCDPhoton_pThat-30_TuneCP5_5p02TeV_pythia8_FOREST"
config.General.transferLogs = False
config.section_("JobType")
config.JobType.pluginName = "Analysis"
config.JobType.psetName = "runForestAOD_pp_MC_94X.py"
#config.JobType.maxMemoryMB = 2500 # request high memory machines.
#config.JobType.maxJobRuntimeMin = 2750 # request longer runtime, ~48 hours.
## software : CMSSW_9_4_10
## forest_CMSSW_9_4_10
# https://github.com/CmsHI/cmssw/commit/a46919490e0f037a901b12e85e40e2444d7230af
## runForestAOD_pp_MC_94X.py commit + ggHi.doEffectiveAreas + enable ggHiNtuplizerGED doRecHits and doPhoERegression + activate l1object + HiGenParticleAna.etaMax = 5, ptMin = 0.4
# https://github.com/CmsHI/cmssw/commit/a46919490e0f037a901b12e85e40e2444d7230af
# dataset summary on DAS
# Number of blocks: 11 Number of events: 926276 Number of files: 28 Number of lumis: 17451 sum(file_size): 68949676101 (68.9GB)
config.section_("Data")
config.Data.inputDataset = "/QCDPhoton_pThat-30_TuneCP5_5p02TeV_pythia8/RunIIpp5Spring18DR-94X_mc2017_realistic_forppRef5TeV_v1-v1/AODSIM"
config.Data.inputDBS = "global"
config.Data.splitting = "FileBased"
config.Data.unitsPerJob = 1
config.Data.totalUnits = -1
config.Data.publication = False
config.Data.outputDatasetTag = "RunIIpp5Spring18DR-94X_mc2017_realistic_forppRef5TeV_v1-v1-FOREST"
config.Data.outLFNDirBase = "/store/user/katatar/official/HIRun2017PP/"
config.section_("Site")
config.Site.storageSite = "T2_US_MIT"
#config.Site.whitelist = ["T2_US_MIT"]
#config.section_("Debug")
#config.Debug.extraJDL = ["+CMS_ALLOW_OVERFLOW=False"]
| [
"tatark@mit.edu"
] | tatark@mit.edu |
00e411ddf0f13b487f205308a2467da5f9032f51 | 40de6d687cc0131eebde6edcd8b1ab640d2ca727 | /Web/API/old/1.py | a421ae21ffa1d3237082c846727a21b62120ebe2 | [] | no_license | Larionov0/DimaKindruk_Lessons | ad9bf6a4b8534de11fd445434481042ae3863cec | 2fb38b2d65df84ad8909541c82bf7bef96deb24e | refs/heads/master | 2023-06-05T11:42:28.503979 | 2021-06-24T17:08:33 | 2021-06-24T17:08:33 | 338,129,996 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,380 | py | import requests
import json
def print_structure(struct):
print(json.dumps(struct, indent=4))
def main_menu():
city = 'Kyiv'
while True:
        print('--= Weather now =--')
        print(f"City: {city}")
        print('1 - get the weather')
        print('2 - change the city')
        print('0 - exit')
        choice = input('Your choice: ')
if choice == '1':
url = f'http://api.openweathermap.org/data/2.5/weather' \
f'?q={city}' \
f'&appid=cb5c7fc26a28e83605cff4b8efb1b85f' \
f'&units=metric'
try:
dct = requests.get(url).json()
                text = '---= Weather =---\n' \
                       f'Main: {dct["weather"][0]["main"]}\n' \
                       f'Temperature: {dct["main"]["temp"]}\n' \
                       f'Feels like: {dct["main"]["feels_like"]}\n' \
                       f'Wind speed: {dct["wind"]["speed"]}'
print(text)
except json.decoder.JSONDecodeError:
                print('Something is wrong with the city')
elif choice == '2':
            city = input('New city: ')
elif choice == '0':
break
main_menu()
| [
"larionov1001@gmail.com"
] | larionov1001@gmail.com |
a59ec9983b0fe83019a84fcdb7d3102b3379d6b6 | d7dc62a713617ebe10bb3ce228494637eca9ab7c | /scripts/dataset_summary.py | c6e53de833f547610e82a24a85108693127f3c03 | [
"MIT"
] | permissive | marcofavorito/google-hashcode-2020 | 8416bbdff0a09724065c6742ba8d7ae659bdd829 | 5e44b155eb4a7c6ed4202dd264bcc4d36ac953f2 | refs/heads/master | 2022-04-04T11:13:04.576572 | 2020-02-20T21:51:43 | 2020-02-20T21:51:43 | 241,432,548 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,753 | py | import argparse
from hashcode20.helpers import Input
import numpy as np
parser = argparse.ArgumentParser("hashcode20", description="CLI util for Google Hash Code 2019. "
"It assumes the input provided in stdin.")
parser.add_argument("--in", dest="in_file", type=str, default=None, help="provide an input data file.")
args = parser.parse_args()
def _score_book_list(book_ids, score):
return sum(map(lambda book_id: score[book_id], book_ids))
def print_stats(data, label):
print("Avg {}: {}".format(label, np.mean(data)))
print("Std {}: {}".format(label, np.std(data)))
print("Max {}: {}".format(label, np.max(data)))
print("Min {}: {}".format(label, np.min(data)))
print("00th {}: {}".format(label, np.percentile(data, 0) ))
print("25th {}: {}".format(label, np.percentile(data, 25) ))
print("50th {}: {}".format(label, np.percentile(data, 50) ))
print("75th {}: {}".format(label, np.percentile(data, 75) ))
print("100th {}: {}".format(label, np.percentile(data, 100)))
print("-"*50)
if __name__ == '__main__':
input_ = Input.read(args.in_file) # type: Input
print("# Libraries: {}".format(len(input_.libraries)))
print("# Book: {}".format(input_.nb_books))
print("# Days: {}".format(input_.nb_days))
print_stats(input_.scores, "Book score")
print_stats([len(l.books) for l in input_.libraries], "Books per Library")
print_stats([_score_book_list(l.books, input_.scores) for l in input_.libraries], "Score per Library")
print_stats(list(map(lambda l: l.ship_book_rate, input_.libraries)), "Shipping rate")
print_stats(list(map(lambda l: l.nb_signup_days, input_.libraries)), "signup day period")
| [
"marco.favorito@gmail.com"
] | marco.favorito@gmail.com |
0617e588eccff156ad691170642df8ed9583d1f0 | 1abcd4686acf314a044a533d2a541e83da835af7 | /backjoon_level_python/12025.py | 3031c47f8b001bd4d4ae1511ee00366b4039bde1 | [] | no_license | HoYoung1/backjoon-Level | 166061b2801514b697c9ec9013db883929bec77e | f8e49c8d2552f6d62be5fb904c3d6548065c7cb2 | refs/heads/master | 2022-05-01T05:17:11.305204 | 2022-04-30T06:01:45 | 2022-04-30T06:01:45 | 145,084,813 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 951 | py | def solve(password, k):
answer = ''
bit = bin(k-1)[2:]
bit = ''.join(reversed(bit))
# print('bit', bit)
count = 0
password = password.replace('6', '1').replace('7','2')
for s in password[::-1]:
if count < len(bit):
if s == '1':
if bit[count] == '1':
answer += '6'
else:
answer += s
count += 1
elif s == '2':
if bit[count] == '1':
answer += '7'
else:
answer += s
count += 1
else:
answer += s
else:
answer += s
# print(count)
# print(''.join(reversed(answer)))
if count == len(bit):
return ''.join(reversed(answer))
else:
return -1
if __name__ == '__main__':
password = input()
k = int(input())
print(solve(password, k)) | [
"empire1641@gmail.com"
] | empire1641@gmail.com |
2a63406ee420d62bf9d5c58274c937ec531958df | d922b02070c11c19ba6104daa3a1544e27a06e40 | /HW_4_6/venv/Scripts/pip3.8-script.py | 41620d84fc4a4267b4ade6a9be2cd60c46989b15 | [] | no_license | viharivnv/DSA | 2ca393a8e304ee7b4d540ff435e832d94ee4b2a7 | 777c7281999ad99a0359c44291dddaa868a2525c | refs/heads/master | 2022-10-15T15:26:59.045698 | 2020-06-17T15:55:33 | 2020-06-17T15:55:33 | 273,020,116 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 416 | py | #!C:\Users\vihar\PycharmProjects\HW_4_6\venv\Scripts\python.exe
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==19.0.3','console_scripts','pip3.8'
__requires__ = 'pip==19.0.3'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pip==19.0.3', 'console_scripts', 'pip3.8')()
)
| [
"52350934+viharivnv@users.noreply.github.com"
] | 52350934+viharivnv@users.noreply.github.com |
d800850a9e2c4d86bb5615c1930155a165048d9b | 0a79b1804588be9a1f504b0d8b2425d39debb272 | /barriers/models/history/__init__.py | a329ef511e8981b7fe916bb2053b47d2bb4b1ab7 | [
"MIT"
] | permissive | cad106uk/market-access-python-frontend | 9d44d455e1c7d5f20991fbad18d1aa9172696cf9 | f9d5143e2330613385b8617f7134acbe01f196f7 | refs/heads/master | 2023-03-05T18:37:40.481455 | 2021-01-18T10:28:00 | 2021-01-18T10:28:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,247 | py | from .assessments.economic import EconomicAssessmentHistoryItem
from .assessments.economic_impact import EconomicImpactAssessmentHistoryItem
from .assessments.resolvability import ResolvabilityAssessmentHistoryItem
from .assessments.strategic import StrategicAssessmentHistoryItem
from .barriers import BarrierHistoryItem
from .notes import NoteHistoryItem
from .public_barriers import PublicBarrierHistoryItem
from .public_barrier_notes import PublicBarrierNoteHistoryItem
from .team_members import TeamMemberHistoryItem
from .utils import PolymorphicBase
from .wto import WTOHistoryItem
class HistoryItem(PolymorphicBase):
"""
Polymorphic wrapper for HistoryItem classes
Delegates to the correct subclass based on the value of data["model"]
That class then delegates to a subclass based on data["field"]
"""
key = "model"
subclasses = (
BarrierHistoryItem,
EconomicAssessmentHistoryItem,
EconomicImpactAssessmentHistoryItem,
NoteHistoryItem,
PublicBarrierHistoryItem,
PublicBarrierNoteHistoryItem,
ResolvabilityAssessmentHistoryItem,
StrategicAssessmentHistoryItem,
TeamMemberHistoryItem,
WTOHistoryItem,
)
class_lookup = {}
| [
"noreply@github.com"
] | cad106uk.noreply@github.com |
5d8989700260fd1d8dafa0e88e688ae38b405076 | 22b348a0d10519cb1f1da5e886fdf2d3c167cf5a | /myweb/test/_paste/_routes/demo_2.py | 67efaf264fa01ff34cd2c23b7403abf5e51bb3ce | [] | no_license | liuluyang/openstack_mogan_study | dab0a8f918ffd17e0a747715998e81304672b75b | 8624f765da7f5aa0c210f0fa945fc50cf8a67b9e | refs/heads/master | 2021-01-19T17:03:15.370323 | 2018-04-12T09:50:38 | 2018-04-12T09:50:38 | 101,040,396 | 1 | 1 | null | 2017-11-01T02:17:31 | 2017-08-22T08:30:22 | Python | UTF-8 | Python | false | false | 314 | py | from routes import Mapper
map = Mapper()
print map
print type(map)
map.connect(None, '/error/{action}/{id}', controller='error')
result = map.match('/error/lixin/200')
print result
map.connect(None, '/error/{action:index|lixin}/{id:\d+}', controller='error')
result = map.match('/error/lixin/200')
print result | [
"1120773382@qq.com"
] | 1120773382@qq.com |