blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 777 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 149 values | src_encoding stringclasses 26 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 3 10.2M | extension stringclasses 188 values | content stringlengths 3 10.2M | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9c3898401073d0b579095b6d8def453376e220b0 | 85c7be8904ce443eb7666aa338f4a03aec73a8d8 | /test/run_test.py | a8de9a716a8134201df89fc2940c6c560f0133a5 | [
"MIT"
] | permissive | naz947/nlpaug | 6c9fc8223e7933775b628ede220d8119dd9240ac | ee508e88a2dbe66ca7b05eb2491c20ca49e3aef7 | refs/heads/master | 2020-05-20T13:16:35.618003 | 2019-05-02T02:50:16 | 2019-05-02T02:50:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 376 | py | import unittest
if __name__ == '__main__':
test_dirs = [
# 'test/augmenter/char/',
# 'test/augmenter/word/',
'test/augmenter/spectrogram/',
'test/flow/'
]
runner = unittest.TextTestRunner()
for test_dir in test_dirs:
loader = unittest.TestLoader()
suite = loader.discover(test_dir)
runner.run(suite)
| [
"makcedward@gmail.com"
] | makcedward@gmail.com |
6733dbf2e59a135a057a56d9eee35b59737f3702 | e1950865f000adc926f228d84131e20b244b48f6 | /python/Array/Difference_largest&smallest_value.py | 8abca0f2e09d3273617df883d8b580c0038b9e95 | [] | no_license | manastole03/Programming-practice | c73859b13392a6a1036f557fa975225672fb1e91 | 2889dc94068b8d778f6b0cf516982d7104fa2318 | refs/heads/master | 2022-12-06T07:48:47.237014 | 2020-08-29T18:22:59 | 2020-08-29T18:22:59 | 281,708,273 | 3 | 1 | null | null | null | null | UTF-8 | Python | false | false | 315 | py | array=[]
n=int(input('How many elements u want in array: '))
for i in range(n):
f= int(input('Enter no: '))
array.append(f)
print('Entered array: ',array)
if len(array)>=1:
max1=max(array)
min1 = min(array)
diff=max1-min1
print('The difference of largest &smallest value from array: ',diff)
| [
"noreply@github.com"
] | manastole03.noreply@github.com |
4e1634773b2a14ec7748b2f1e814115299f79cb7 | 744594f30c5e283f6252909fc68102dd7bc61091 | /2020/24/2020_day_24_1.py | 245b430c35de9e6f604111ab201b953f313c1c1d | [
"MIT"
] | permissive | vScourge/Advent_of_Code | 84f40c76e5dc13977876eea6dbea7d05637de686 | 36e4f428129502ddc93c3f8ba7950aed0a7314bb | refs/heads/master | 2022-12-20T22:12:28.646102 | 2022-12-15T22:16:28 | 2022-12-15T22:16:28 | 160,765,438 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,010 | py | """
--- Day 24: Lobby Layout ---
Your raft makes it to the tropical island; it turns out that the small crab was an excellent navigator.
You make your way to the resort.
As you enter the lobby, you discover a small problem: the floor is being renovated. You can't even reach the
check-in desk until they've finished installing the new tile floor.
The tiles are all hexagonal; they need to be arranged in a hex grid with a very specific color pattern.
Not in the mood to wait, you offer to help figure out the pattern.
The tiles are all white on one side and black on the other. They start with the white side facing up.
The lobby is large enough to fit whatever pattern might need to appear there.
A member of the renovation crew gives you a list of the tiles that need to be flipped over (your puzzle input).
Each line in the list identifies a single tile that needs to be flipped by giving a series of steps starting
from a reference tile in the very center of the room. (Every line starts from the same reference tile.)
Because the tiles are hexagonal, every tile has six neighbors: east, southeast, southwest, west, northwest, and northeast.
These directions are given in your list, respectively, as e, se, sw, w, nw, and ne. A tile is identified by a series of
these directions with no delimiters; for example, esenee identifies the tile you land on if you start at the reference
tile and then move one tile east, one tile southeast, one tile northeast, and one tile east.
Each time a tile is identified, it flips from white to black or from black to white. Tiles might be flipped more than once.
For example, a line like esew flips a tile immediately adjacent to the reference tile, and a line like nwwswee flips the reference tile itself.
Here is a larger example:
sesenwnenenewseeswwswswwnenewsewsw
neeenesenwnwwswnenewnwwsewnenwseswesw
seswneswswsenwwnwse
nwnwneseeswswnenewneswwnewseswneseene
swweswneswnenwsewnwneneseenw
eesenwseswswnenwswnwnwsewwnwsene
sewnenenenesenwsewnenwwwse
wenwwweseeeweswwwnwwe
wsweesenenewnwwnwsenewsenwwsesesenwne
neeswseenwwswnwswswnw
nenwswwsewswnenenewsenwsenwnesesenew
enewnwewneswsewnwswenweswnenwsenwsw
sweneswneswneneenwnewenewwneswswnese
swwesenesewenwneswnwwneseswwne
enesenwswwswneneswsenwnewswseenwsese
wnwnesenesenenwwnenwsewesewsesesew
nenewswnwewswnenesenwnesewesw
eneswnwswnwsenenwnwnwwseeswneewsenese
neswnwewnwnwseenwseesewsenwsweewe
wseweeenwnesenwwwswnew
In the above example, 10 tiles are flipped once (to black), and 5 more are flipped twice (to black, then back to white). After all of these instructions have been followed, a total of 10 tiles are black.
Go through the renovation crew's list and determine which tiles they need to flip. After all of the instructions have been followed, how many tiles are left with the black side up?
--- Part Two ---
The tile floor in the lobby is meant to be a living art exhibit. Every day, the tiles are all flipped according to the following rules:
Any black tile with zero or more than 2 black tiles immediately adjacent to it is flipped to white.
Any white tile with exactly 2 black tiles immediately adjacent to it is flipped to black.
Here, tiles immediately adjacent means the six tiles directly touching the tile in question.
The rules are applied simultaneously to every tile; put another way, it is first determined which tiles need to be flipped, then they are all flipped at the same time.
In the above example, the number of black tiles that are facing up after the given number of days has passed is as follows:
Day 1: 15
Day 2: 12
Day 3: 25
Day 4: 14
Day 5: 23
Day 6: 28
Day 7: 41
Day 8: 37
Day 9: 49
Day 10: 37
Day 20: 132
Day 30: 259
Day 40: 406
Day 50: 566
Day 60: 788
Day 70: 1106
Day 80: 1373
Day 90: 1844
Day 100: 2208
After executing this process a total of 100 times, there would be 2208 black tiles facing up.
How many tiles will be black after 100 days?
"""
### IMPORTS ###
import collections
import math
import numpy
import time
### CONSTANTS ###
INPUT_FILENAME = 'input.txt'
NE = 0
E = 1
SE = 2
SW = 3
W = 4
NW = 5
### FUNCTIONS ###
def parse_input( ):
lines = open( INPUT_FILENAME, 'r' ).read( ).splitlines( )
paths = [ ]
l = 0
for line in lines:
moves = [ ]
i = 0
while i < len( line ):
c = line[ i ]
if c == 'e':
moves.append( E )
i += 1
elif c == 'w':
moves.append( W )
i += 1
elif c == 'n':
if line[ i+1 ] == 'e':
moves.append( NE )
else:
moves.append( NW )
i += 2
else:
if line[ i+1 ] == 'e':
moves.append( SE )
else:
moves.append( SW )
i += 2
paths.append( moves )
l += 1
return paths
def move_on_grid( pos, direction ):
if direction == NE:
pos = ( pos[ 0 ], pos[ 1 ] - 1 )
elif direction == E:
pos = ( pos[ 0 ] + 1, pos[ 1 ] )
elif direction == SE:
pos = ( pos[ 0 ] + 1, pos[ 1 ] + 1 )
elif direction == SW:
pos = ( pos[ 0 ], pos[ 1 ] + 1 )
elif direction == W:
pos = ( pos[ 0 ] - 1, pos[ 1 ] )
elif direction == NW:
pos = ( pos[ 0 ] - 1, pos[ 1 ] - 1 )
return pos
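The (col, row) offsets used by move_on_grid can be checked against the puzzle's own examples quoted in the docstring: `esew` should land on a tile adjacent to the reference tile, and `nwwswee` should return to the reference tile itself. A minimal standalone sketch (the `OFFSETS` and `walk` names are illustrative, not part of this file):

```python
# Same (col, row) offsets as move_on_grid above, keyed by direction name.
OFFSETS = {
    'e': (1, 0), 'w': (-1, 0),
    'se': (1, 1), 'sw': (0, 1),
    'ne': (0, -1), 'nw': (-1, -1),
}

def walk(moves, start=(0, 0)):
    """Follow a sequence of direction names and return the final tile."""
    x, y = start
    for m in moves:
        dx, dy = OFFSETS[m]
        x, y = x + dx, y + dy
    return (x, y)

# 'nwwswee' ends back on the reference tile, as the puzzle text states.
assert walk(['nw', 'w', 'sw', 'e', 'e']) == (0, 0)
```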
def main( paths ):
"""
"""
grid = { (0,0): True } # True = white
pos = ( 0, 0 )
for path in paths:
print( '\nPath =', path )
for direction in path:
pos = move_on_grid( pos, direction )
if pos not in grid:
grid[ pos ] = True
print( 'dir {0}, pos {1} = {2}'.format( direction, pos, grid[ pos ] ) )
# Flip this tile
grid[ pos ] = not grid[ pos ]
# Reset to reference tile
pos = ( 0, 0 )
return list( grid.values( ) ).count( False )
# test answer = 10 (the worked example in the docstring leaves 10 black tiles)
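The Part Two rules quoted in the docstring (every tile flipped simultaneously based on its black-neighbor count) are not implemented in this part-1 file. A minimal sketch of one day-step, assuming the same (col, row) offsets as move_on_grid and representing the floor as the set of black-tile coordinates (the `NEIGHBORS` and `day_step` names are illustrative):

```python
from collections import Counter

# Same six (col, row) neighbor offsets as move_on_grid.
NEIGHBORS = [(0, -1), (1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1)]

def day_step(black):
    """Apply one day of the Part Two rules to a set of black-tile coords."""
    # Count how many black neighbors every tile adjacent to a black tile has.
    counts = Counter((x + dx, y + dy) for (x, y) in black for dx, dy in NEIGHBORS)
    # Black tiles with 0 or more than 2 black neighbors flip to white,
    # i.e. a black tile survives only with 1 or 2 black neighbors.
    stays_black = {t for t in black if counts.get(t, 0) in (1, 2)}
    # White tiles with exactly 2 black neighbors flip to black.
    turns_black = {t for t, c in counts.items() if c == 2 and t not in black}
    return stays_black | turns_black
```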
### CLASSES ###
#class Point( ):
#def __init__( self, x, y ):
#self.x = x
#self.y = y
#def __repr__( self ):
#return '<Point ({0}, {1})>'.format( self.x, self.y )
### MAIN ###
if __name__ == "__main__":
time_start = time.perf_counter( )
paths = parse_input( )
answer = main( paths )
print( 'answer =', answer )
print( 'done in {0:.4f} secs'.format( time.perf_counter( ) - time_start ) )
# 30033 not right
| [
"adam.pletcher@gmail.com"
] | adam.pletcher@gmail.com |
24bd168a6115316ace8b48edc0afa61f7aac6c80 | e3365bc8fa7da2753c248c2b8a5c5e16aef84d9f | /indices/nnural.py | b7c94e1d4c00f113af7a8b0a0ae5eb3e11df83b9 | [] | no_license | psdh/WhatsintheVector | e8aabacc054a88b4cb25303548980af9a10c12a8 | a24168d068d9c69dc7a0fd13f606c080ae82e2a6 | refs/heads/master | 2021-01-25T10:34:22.651619 | 2015-09-23T11:54:06 | 2015-09-23T11:54:06 | 42,749,205 | 2 | 3 | null | 2015-09-23T11:54:07 | 2015-09-18T22:06:38 | Python | UTF-8 | Python | false | false | 161 | py | ii = [('AubePRP2.py', 1), ('CarlTFR.py', 1), ('CoolWHM.py', 3), ('GilmCRS.py', 1), ('WestJIT2.py', 1), ('CoolWHM3.py', 2), ('StorJCC.py', 1), ('JacoWHI2.py', 1)] | [
"varunwachaspati@gmail.com"
] | varunwachaspati@gmail.com |
06fa9bef1f87a28a6e7fa0953a214270b2aaadfe | 2845f06c6be4262e9a5e56ebf407d824543f42cc | /tests/test_roles_pages_database.py | 6275f1d40dd5252fb381f743fe2eef31f78d63e0 | [
"CC0-1.0"
] | permissive | silky/WALKOFF | 42c315b35aadf42dc5f31074b7b6eff441338f61 | d4f4afad47e8c57b71647175978650520c061f87 | refs/heads/master | 2021-08-31T15:10:11.347414 | 2017-12-20T15:22:59 | 2017-12-20T15:22:59 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,415 | py | import unittest
from server.database import db, Role, ResourcePermission, default_resources
class TestRoles(unittest.TestCase):
@classmethod
def setUpClass(cls):
import server.flaskserver
cls.context = server.flaskserver.app.test_request_context()
cls.context.push()
db.create_all()
def tearDown(self):
db.session.rollback()
for role in [role for role in Role.query.all() if role.name != 'admin']:
db.session.delete(role)
for resource in [resource for resource in ResourcePermission.query.all() if
resource.resource not in default_resources]:
db.session.delete(resource)
db.session.commit()
def assertRoleConstructionIsCorrect(self, role, name, description='', resources=None):
self.assertEqual(role.name, name)
self.assertEqual(role.description, description)
expected_resources = set(resources) if resources is not None else set()
self.assertSetEqual({resource.resource for resource in role.resources}, expected_resources)
def test_resources_init(self):
resource = ResourcePermission(resource='/test/resource')
self.assertEqual(resource.resource, '/test/resource')
def test_resources_as_json(self):
resource = ResourcePermission(resource='/test/resource')
self.assertDictEqual(resource.as_json(), {'resource': '/test/resource'})
def test_role_init_default(self):
role = Role(name='test')
self.assertRoleConstructionIsCorrect(role, 'test')
def test_role_init_with_description(self):
role = Role(name='test', description='desc')
self.assertRoleConstructionIsCorrect(role, 'test', description='desc')
def test_role_init_with_resources_none_in_db(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', resources=resources)
db.session.add(role)
self.assertRoleConstructionIsCorrect(role, 'test', resources=resources)
self.assertSetEqual({resource.resource for resource in ResourcePermission.query.all()},
set(default_resources) | set(resources))
def test_role_init_with_some_in_db(self):
resources = ['resource1', 'resource2', 'resource3']
db.session.add(ResourcePermission('resource1'))
role = Role(name='test', resources=resources)
db.session.add(role)
self.assertRoleConstructionIsCorrect(role, 'test', resources=resources)
self.assertSetEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
for resource in (resource for resource in ResourcePermission.query.all() if resource.resource in resources):
self.assertListEqual([role.name for role in resource.roles], ['test'])
def test_set_resources_to_role_no_resources_to_add(self):
role = Role(name='test')
role.set_resources([])
self.assertListEqual(role.resources, [])
def test_set_resources_to_role_with_no_resources_and_no_resources_in_db(self):
role = Role(name='test')
resources = ['resource1', 'resource2']
role.set_resources(resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
def test_set_resources_to_role_with_no_resources_and_resources_in_db(self):
role = Role(name='test')
db.session.add(ResourcePermission('resource1'))
resources = ['resource1', 'resource2']
role.set_resources(resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(resources) | set(default_resources))
def test_set_resources_to_role_with_existing_resources_with_overlap(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', resources=resources)
new_resources = ['resource3', 'resource4', 'resource5']
role.set_resources(new_resources)
db.session.add(role)
self.assertSetEqual({resource.resource for resource in role.resources}, set(new_resources))
self.assertEqual({resource.resource for resource in ResourcePermission.query.all()},
set(new_resources) | set(default_resources))
def test_set_resources_to_role_shared_resources(self):
resources1 = ['resource1', 'resource2', 'resource3', 'resource4']
overlap_resources = ['resource3', 'resource4']
resources2 = ['resource3', 'resource4', 'resource5', 'resource6']
role1 = Role(name='test1', resources=resources1)
db.session.add(role1)
role2 = Role(name='test2', resources=resources2)
db.session.add(role2)
db.session.commit()
self.assertSetEqual({resource.resource for resource in role1.resources}, set(resources1))
self.assertSetEqual({resource.resource for resource in role2.resources}, set(resources2))
def assert_resources_have_correct_roles(resources, roles):
for resource in resources:
resource = ResourcePermission.query.filter_by(resource=resource).first()
self.assertSetEqual({role.name for role in resource.roles}, roles)
assert_resources_have_correct_roles(['resource1', 'resource2'], {'test1'})
assert_resources_have_correct_roles(overlap_resources, {'test1', 'test2'})
assert_resources_have_correct_roles(['resource5', 'resource6'], {'test2'})
def test_resource_as_json_with_multiple_roles(self):
resources1 = ['resource1', 'resource2', 'resource3', 'resource4']
overlap_resources = ['resource3', 'resource4']
resources2 = ['resource3', 'resource4', 'resource5', 'resource6']
role1 = Role(name='test1', resources=resources1)
db.session.add(role1)
role2 = Role(name='test2', resources=resources2)
db.session.add(role2)
db.session.commit()
def assert_resource_json_is_correct(resources, roles):
for resource in resources:
resource_json = ResourcePermission.query.filter_by(resource=resource).first().as_json(with_roles=True)
self.assertEqual(resource_json['resource'], resource)
self.assertSetEqual(set(resource_json['roles']), roles)
assert_resource_json_is_correct(['resource1', 'resource2'], {'test1'})
assert_resource_json_is_correct(overlap_resources, {'test1', 'test2'})
assert_resource_json_is_correct(['resource5', 'resource6'], {'test2'})
def test_role_as_json(self):
resources = ['resource1', 'resource2', 'resource3']
role = Role(name='test', description='desc', resources=resources)
role_json = role.as_json()
self.assertSetEqual(set(role_json.keys()), {'name', 'description', 'resources', 'id'})
self.assertEqual(role_json['name'], 'test')
self.assertEqual(role_json['description'], 'desc')
self.assertSetEqual(set(role_json['resources']), set(resources))
| [
"Tervala_Justin@bah.com"
] | Tervala_Justin@bah.com |
aabcf429ef53a8ac43a20aa0dedcc5f2bbefab71 | 27e890f900bd4bfb2e66f4eab85bc381cf4d5d3f | /tests/unit/modules/remote_management/oneview/test_oneview_ethernet_network_info.py | 566e168bfb951fa3ff89894ff981cfced17feebf | [] | no_license | coll-test/notstdlib.moveitallout | eb33a560070bbded5032385d0aea2f3cf60e690b | 0987f099b783c6cf977db9233e1c3d9efcbcb3c7 | refs/heads/master | 2020-12-19T22:28:33.369557 | 2020-01-23T18:51:26 | 2020-01-23T18:51:26 | 235,865,139 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,645 | py | # Copyright (c) 2016-2017 Hewlett Packard Enterprise Development LP
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from ansible_collections.notstdlib.moveitallout.tests.unit.compat import unittest
from .oneview_module_loader import EthernetNetworkInfoModule
from .hpe_test_utils import FactsParamsTestCase
ERROR_MSG = 'Fake message error'
PARAMS_GET_ALL = dict(
config='config.json',
name=None
)
PARAMS_GET_BY_NAME = dict(
config='config.json',
name="Test Ethernet Network",
options=[]
)
PARAMS_GET_BY_NAME_WITH_OPTIONS = dict(
config='config.json',
name="Test Ethernet Network",
options=['associatedProfiles', 'associatedUplinkGroups']
)
PRESENT_ENETS = [{
"name": "Test Ethernet Network",
"uri": "/rest/ethernet-networks/d34dcf5e-0d8e-441c-b00d-e1dd6a067188"
}]
ENET_ASSOCIATED_UPLINK_GROUP_URIS = [
"/rest/uplink-sets/c6bf9af9-48e7-4236-b08a-77684dc258a5",
"/rest/uplink-sets/e2f0031b-52bd-4223-9ac1-d91cb519d548"
]
ENET_ASSOCIATED_PROFILE_URIS = [
"/rest/server-profiles/83e2e117-59dc-4e33-9f24-462af951cbbe",
"/rest/server-profiles/57d3af2a-b6d2-4446-8645-f38dd808ea4d"
]
ENET_ASSOCIATED_UPLINK_GROUPS = [dict(uri=ENET_ASSOCIATED_UPLINK_GROUP_URIS[0], name='Uplink Set 1'),
dict(uri=ENET_ASSOCIATED_UPLINK_GROUP_URIS[1], name='Uplink Set 2')]
ENET_ASSOCIATED_PROFILES = [dict(uri=ENET_ASSOCIATED_PROFILE_URIS[0], name='Server Profile 1'),
dict(uri=ENET_ASSOCIATED_PROFILE_URIS[1], name='Server Profile 2')]
class EthernetNetworkInfoSpec(unittest.TestCase,
FactsParamsTestCase
):
def setUp(self):
self.configure_mocks(self, EthernetNetworkInfoModule)
self.ethernet_networks = self.mock_ov_client.ethernet_networks
FactsParamsTestCase.configure_client_mock(self, self.ethernet_networks)
def test_should_get_all_enets(self):
self.ethernet_networks.get_all.return_value = PRESENT_ENETS
self.mock_ansible_module.params = PARAMS_GET_ALL
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=(PRESENT_ENETS)
)
def test_should_get_enet_by_name(self):
self.ethernet_networks.get_by.return_value = PRESENT_ENETS
self.mock_ansible_module.params = PARAMS_GET_BY_NAME
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=(PRESENT_ENETS)
)
def test_should_get_enet_by_name_with_options(self):
self.ethernet_networks.get_by.return_value = PRESENT_ENETS
self.ethernet_networks.get_associated_profiles.return_value = ENET_ASSOCIATED_PROFILE_URIS
self.ethernet_networks.get_associated_uplink_groups.return_value = ENET_ASSOCIATED_UPLINK_GROUP_URIS
self.mock_ov_client.server_profiles.get.side_effect = ENET_ASSOCIATED_PROFILES
self.mock_ov_client.uplink_sets.get.side_effect = ENET_ASSOCIATED_UPLINK_GROUPS
self.mock_ansible_module.params = PARAMS_GET_BY_NAME_WITH_OPTIONS
EthernetNetworkInfoModule().run()
self.mock_ansible_module.exit_json.assert_called_once_with(
changed=False,
ethernet_networks=PRESENT_ENETS,
enet_associated_profiles=ENET_ASSOCIATED_PROFILES,
enet_associated_uplink_groups=ENET_ASSOCIATED_UPLINK_GROUPS
)
if __name__ == '__main__':
unittest.main()
| [
"wk@sydorenko.org.ua"
] | wk@sydorenko.org.ua |
d05e46ed8437f7d527414745aa1d21a565f77bcf | 2c4763aa544344a3a615f9a65d1ded7d0f59ae50 | /playground/maxjobs2/compute/look_busy.py | 1efd78e04fb1aacdaf55a0b72015541ac14e3022 | [] | no_license | afeldman/waf | 572bf95d6b11571bbb2941ba0fe463402b1e39f3 | 4c489b38fe1520ec1bc0fa7e1521f7129c20f8b6 | refs/heads/master | 2021-05-09T18:18:16.598191 | 2019-03-05T06:33:42 | 2019-03-05T06:33:42 | 58,713,085 | 0 | 0 | null | 2016-05-13T07:34:33 | 2016-05-13T07:34:33 | null | UTF-8 | Python | false | false | 161 | py | #! /usr/bin/env python
import sys, time
loops = int(sys.argv[1])
if not loops:
time.sleep(1)
pass
else:
for i in range(loops):
time.sleep(1)
| [
"anton.feldmann@outlook.de"
] | anton.feldmann@outlook.de |
58b4b646201129910d97928a3ebbcb3ee03945ce | e04dbc32247accf073e3089ed4013427ad182c7c | /ABC116/ABC116Canother.py | b53f770c6c899ccae48911c0efb0f9a92ef13287 | [] | no_license | twobooks/atcoder_training | 9deb237aed7d9de573c1134a858e96243fb73ca0 | aa81799ec87cc9c9d76de85c55e99ad5fa7676b5 | refs/heads/master | 2021-10-28T06:33:19.459975 | 2021-10-20T14:16:57 | 2021-10-20T14:16:57 | 233,233,854 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 959 | py | # from math import factorial,sqrt,ceil,gcd
# from itertools import permutations as permus
# from collections import deque,Counter
# import re
# from functools import lru_cache # simple memoization: @lru_cache(maxsize=1000)
# from decimal import Decimal, getcontext
# # getcontext().prec = 1000
# # eps = Decimal(10) ** (-100)
# import numpy as np
# import networkx as nx
# from scipy.sparse.csgraph import shortest_path, dijkstra, floyd_warshall, bellman_ford, johnson
# from scipy.sparse import csr_matrix
# from scipy.special import comb
# slist = "abcdefghijklmnopqrstuvwxyz"
N = int(input())
arrA = list(map(int,input().split()))
pre = 0
ans = 0
for i in arrA:
if pre<i:
ans += i - pre
pre = i
print(ans)
# print(*ans) # unpack and print; elements are separated by spaces
# for row in board:
#     print(*row,sep="") # unpack and print with no spaces in between
# print("{:.10f}".format(ans))
# print("{:0=10d}".format(ans))
| [
"twobookscom@gmail.com"
] | twobookscom@gmail.com |
281bea9f30ded0b2fb8d1e0706f225679bb9f704 | 5ca5a7120c3c147b3ae86c2271c60c82745997ea | /my_python/my_selenium/base/test_13_download.py | fd85ceb34ea10aa5513974884abf4560202e7507 | [] | no_license | JR1QQ4/auto_test | 6b9ea7bd317fd4338ac0964ffd4042b293640af3 | 264b991b4dad72986e2aeb1a30812baf74e42bc6 | refs/heads/main | 2023-03-21T01:32:29.192030 | 2021-03-16T14:07:11 | 2021-03-16T14:07:11 | 321,591,405 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 455 | py | #!/usr/bin/python
# -*- coding:utf-8 -*-
import os
from selenium import webdriver
options = webdriver.ChromeOptions()
prefs = {'profile.default_content_settings.popups': 0,
'download.default_directory': os.getcwd()}
options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://pypi.org/project/selenium/#files")
driver.find_element_by_partial_link_text("selenium-3.141.0.tar.gz").click()
| [
"chenjunrenyx@163.com"
] | chenjunrenyx@163.com |
3f1d5036eb1e0ed7004a294759da99858e649cde | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/458/usersdata/323/109641/submittedfiles/programa.py | ad68b922ad244e7ce1ace15c98e16edce333698a | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 607 | py | # -*- coding: utf-8 -*-
import numpy as np
soma=0
pesos=[]
while True:
    n=int(input('Enter the board dimension (n>=3): '))
if n>=3:
break
a=np.zeros((n,n))
for i in range(0,n,1):
for j in range(0,n,1):
        a[i,j]= int(input('Enter the matrix element: '))
somalin=0
somacol=0
for i in range(0,n,1):
for e in range(0,n,1):
for j in range(0,n,1):
somalin= somalin+ a[i,j]
somacol=somacol+a[j,e]
soma= somalin+somacol - 2*a[i,e]
pesos.append(soma)
somalin=0
somacol=0
soma=0
print(max(pesos)) | [
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
c019752c841e55e8463b38905d9be4765ca6ac17 | 860628a9330d18d5c803c24eb314bd3756411c32 | /tweet-sentiment-extraction/src/conf/model_config_roberta.py | 0c483b4a10fbccb760230e605508629ed3980f27 | [] | no_license | yphacker/kaggle | fb24bdcc88d55c2a9cee347fcac48f13cb30ca45 | fd3c1c2d5ddf53233560ba4bbd68a2c5c17213ad | refs/heads/master | 2021-07-11T00:22:04.472777 | 2020-06-10T17:01:51 | 2020-06-10T17:01:51 | 145,675,378 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 533 | py | # coding=utf-8
# author=yphacker
import os
from conf import config
from tokenizers import ByteLevelBPETokenizer
model_name = 'roberta'
# pretrain_model_name = 'roberta-base'
pretrain_model_name = 'roberta-large'
pretrain_model_path = os.path.join(config.input_path, pretrain_model_name)
tokenizer = ByteLevelBPETokenizer(
vocab_file='{}/vocab.json'.format(pretrain_model_path),
merges_file='{}/merges.txt'.format(pretrain_model_path),
lowercase=True,
add_prefix_space=True
)
learning_rate = 5e-5
adjust_lr_num = 0
| [
"yphacker@163.com"
] | yphacker@163.com |
00edb4322460d93f2d3711152adb763567f1e9d3 | ad69e42fbe0cdb27406497855885ad7fcdad55aa | /lib/ViDE/Shell/InstallTools.py | f58a12f3aa13c051b39a1d892257bbef56bfd256 | [] | no_license | pombredanne/ViDE | 880b921a8240bbc9ac136f98bf4a12662684a5d1 | 0ec9860a9ebccc7bcc62e8fb39d6ebbc196c1360 | refs/heads/master | 2021-01-22T17:15:20.518689 | 2013-03-23T22:10:04 | 2013-03-23T22:10:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,769 | py | from Misc import InteractiveCommandLineProgram as ICLP
from ViDE import Log
from ViDE.Core.Action import CompoundException
from ViDE.Core.ExecutionReport import ExecutionReport
from ViDE.Context import Context
class InstallTools( ICLP.Command ):
def __init__( self, program ):
ICLP.Command.__init__( self, program )
self.jobs = 1
self.addOption( [ "j", "jobs" ], "jobs", ICLP.StoreArgument( "JOBS" ), "use JOBS parallel jobs" )
self.keepGoing = False
self.addOption( [ "k", "keep-going" ], "keepGoing", ICLP.StoreConstant( True ), "keep going in case of failure" )
self.dryRun = False
self.addOption( [ "n", "dry-run" ], "dryRun", ICLP.StoreConstant( True ), "print commands instead of executing them" )
self.downloadOnly = False
self.addOption( [ "d", "dl-only" ], "downloadOnly", ICLP.StoreConstant( True ), "only do the part of installation which needs internet access" )
def execute( self, args ):
context = Context( self.program )
if self.downloadOnly:
artifact = context.toolset.getFetchArtifact()
else:
artifact = context.toolset.getInstallArtifact()
action = artifact.getProductionAction()
if self.dryRun:
print "\n".join( action.preview() )
else:
try:
action.execute( self.keepGoing, self.jobs )
except CompoundException, e:
Log.error( "installation failed", e )
finally:
report = ExecutionReport( action, 800 )
report.drawTo( "installation-report.png" )
artifact.getGraph().drawTo( "installation-artifacts.png" )
action.getGraph().drawTo( "installation-actions.png" )
| [
"vincent@vincent-jacques.net"
] | vincent@vincent-jacques.net |
e3dcbf63e2b1ef8d9c2004697f7e2841b4f570ac | 139780fb3effe57428c21764a799e18bc454dee9 | /app/app.py | a1acbe5e64f0f9594989036648a2d81f46cf2eb5 | [
"Apache-2.0"
] | permissive | naqintosh/dl | e239d8cb2fc59cb6d0bea37b40aa4d4c7da4c6f2 | 3615b78b5fd7a7808a49bb0b9124b41842100a2e | refs/heads/master | 2022-08-19T05:02:28.229105 | 2020-05-30T20:26:45 | 2020-05-30T20:26:45 | 256,092,284 | 0 | 0 | Apache-2.0 | 2020-05-30T21:36:12 | 2020-04-16T02:47:06 | Python | UTF-8 | Python | false | false | 10,815 | py | import io
import inspect
from importlib.util import spec_from_file_location, module_from_spec
import os
import sys
from contextlib import redirect_stdout
from flask import Flask
from flask import request
from flask import jsonify
import core.simulate
import slot.a
import slot.d
import slot.w
from core.afflic import AFFLICT_LIST
from conf import coability_dict, skillshare
app = Flask(__name__)
# Helpers
ROOT_DIR = os.getenv('ROOT_DIR', '..')
ADV_DIR = 'adv'
MEANS_ADV = {
'addis': 'addis.py.means.py',
'sazanka': 'sazanka.py.means.py',
'victor': 'victor.py.means.py'
}
NORMAL_ADV = ['halloween_lowen']
MASS_SIM_ADV = []
with open(os.path.join(ROOT_DIR, 'chara_quick.txt')) as f:
for l in f:
NORMAL_ADV.append(l.strip().replace('.py', ''))
with open(os.path.join(ROOT_DIR, 'chara_slow.txt')) as f:
for l in f:
MASS_SIM_ADV.append(l.strip().replace('.py', ''))
SPECIAL_ADV = {
'chelsea_rollfs': {
'fn': 'chelsea.py.rollfs.py',
'nc': ['wp', 'coab']
},
'gala_luca_maxstacks': {
'fn': 'gala_luca.py.maxstacks.py',
'nc': []
},
'veronica_1hp': {
'fn': 'veronica.py.1hp.py',
'nc': []
},
'natalie_1hp': {
'fn': 'natalie.py.1hp.py',
'nc': []
},
'valentines_addis_1hp': {
'fn': 'valentines_addis.py.1hp.py',
'nc': []
},
'bellina_1hp': {
'fn': 'bellina.py.1hp.py',
'nc': []
},
'fjorm_stack': {
'fn': 'fjorm.py.x4.py',
'nc': ['acl']
}
}
def get_adv_module(adv_name):
if adv_name in SPECIAL_ADV or adv_name in MEANS_ADV:
if adv_name in MEANS_ADV:
adv_file = MEANS_ADV[adv_name]
else:
adv_file = SPECIAL_ADV[adv_name]['fn']
fn = os.path.join(ROOT_DIR, ADV_DIR, adv_file)
spec = spec_from_file_location(adv_name, fn)
module = module_from_spec(spec)
sys.modules[adv_name] = module
spec.loader.exec_module(module)
return module.module()
else:
return getattr(
__import__('adv.{}'.format(adv_name.lower())),
adv_name.lower()
).module()
ADV_MODULES = {}
for adv in NORMAL_ADV+MASS_SIM_ADV:
module = get_adv_module(adv)
name = module.__name__
ADV_MODULES[name.lower()] = module
for name, info in SPECIAL_ADV.items():
module = get_adv_module(name)
ADV_MODULES[name.lower()] = module
def is_amulet(obj):
return (inspect.isclass(obj) and issubclass(obj, slot.a.Amulet)
and obj.__module__ != 'slot.a'
and obj.__module__ != 'slot')
def is_dragon(obj):
return (inspect.isclass(obj) and issubclass(obj, slot.d.DragonBase)
and obj.__module__ != 'slot.d'
and obj.__module__ != 'slot')
def is_weapon(obj):
return (inspect.isclass(obj) and issubclass(obj, slot.d.WeaponBase)
and obj.__module__ != 'slot.w'
and obj.__module__ != 'slot')
def list_members(module, predicate, element=None):
members = inspect.getmembers(module, predicate)
member_list = []
for m in members:
_, c = m
if element is not None:
if issubclass(c, slot.d.WeaponBase) and element not in getattr(c, 'ele'):
continue
if c.__qualname__ not in member_list:
member_list.append(c.__qualname__)
return member_list
def set_teamdps_res(result, logs, real_d, suffix=''):
result['extra' + suffix] = {}
if logs.team_buff > 0:
result['extra' + suffix]['team_buff'] = '+{}%'.format(round(logs.team_buff / real_d * 100))
for tension, count in logs.team_tension.items():
if count > 0:
result['extra' + suffix]['team_{}'.format(tension)] = '{} stacks'.format(round(count))
return result
def run_adv_test(adv_name, wp1=None, wp2=None, dra=None, wep=None, acl=None, conf=None, cond=None, teamdps=None, t=180, log=-2, mass=0):
adv_module = ADV_MODULES[adv_name.lower()]
def acl_injection(self):
if acl is not None:
self.conf['acl'] = acl
adv_module.acl_backdoor = acl_injection
if conf is None:
conf = {}
conf['slots.forced'] = True
if wp1 is not None and wp2 is not None:
conf['slots.a'] = getattr(slot.a, wp1)() + getattr(slot.a, wp2)()
if dra is not None:
conf['slots.d'] = getattr(slot.d, dra)()
if wep is not None:
conf['slots.w'] = getattr(slot.w, wep)()
result = {}
fn = io.StringIO()
try:
run_res = core.simulate.test(adv_module, conf, t, log, mass, output=fn, team_dps=teamdps, cond=cond)
result['test_output'] = fn.getvalue()
except Exception as e:
result['error'] = str(e)
return result
result['logs'] = {}
adv = run_res[0][0]
fn = io.StringIO()
adv.logs.write_logs(output=fn, log_filter=[str(type(adv.slots.d).__name__), str(type(adv).__name__)])
result['logs']['dragon'] = fn.getvalue()
fn = io.StringIO()
core.simulate.act_sum(adv.logs.act_seq, fn)
result['logs']['action'] = fn.getvalue()
result['logs']['summation'] = '\n'.join(['{}: {}'.format(k, v) for k, v in adv.logs.counts.items() if v])
fn = io.StringIO()
adv.logs.write_logs(output=fn)
result['logs']['timeline'] = fn.getvalue()
result = set_teamdps_res(result, adv.logs, run_res[0][1])
if adv.condition.exist():
result['condition'] = dict(adv.condition)
adv_2 = run_res[1][0]
result = set_teamdps_res(result, adv_2.logs, run_res[0][1], '_no_cond')
return result
# API
@app.route('/simc_adv_test', methods=['POST'])
def simc_adv_test():
if not request.method == 'POST':
return 'Wrong request method.'
params = request.get_json(silent=True)
adv_name = 'euden' if not 'adv' in params or params['adv'] is None else params['adv'].lower()
wp1 = params['wp1'] if 'wp1' in params else None
wp2 = params['wp2'] if 'wp2' in params else None
dra = params['dra'] if 'dra' in params else None
wep = params['wep'] if 'wep' in params else None
# ex = params['ex'] if 'ex' in params else ''
acl = params['acl'] if 'acl' in params else None
cond = params['condition'] if 'condition' in params and params['condition'] != {} else None
teamdps = None if not 'teamdps' in params else abs(float(params['teamdps']))
t = 180 if not 't' in params else abs(float(params['t']))
log = -2
mass = 25 if adv_name in MASS_SIM_ADV and adv_name not in MEANS_ADV else 0
coab = None if 'coab' not in params else params['coab']
share = None if 'share' not in params else params['share']
# latency = 0 if 'latency' not in params else abs(float(params['latency']))
print(params, flush=True)
if adv_name in SPECIAL_ADV:
not_customizable = SPECIAL_ADV[adv_name]['nc']
if 'wp' in not_customizable:
wp1 = None
wp2 = None
if 'acl' in not_customizable:
acl = None
if 'coab' in not_customizable:
coab = None
conf = {}
if 'missile' in params:
missile = abs(float(params['missile']))
if missile > 0:
conf['missile_iv'] = {'fs': missile, 'x1': missile, 'x2': missile, 'x3': missile, 'x4': missile, 'x5': missile}
if coab is not None:
conf['coabs'] = coab
if share is not None:
conf['skill_share'] = share
for afflic in AFFLICT_LIST:
try:
conf['afflict_res.'+afflic] = min(abs(int(params['afflict_res'][afflic])), 100)
except:
pass
try:
if params['sim_afflict_type'] in ['burn', 'paralysis', 'poison', 'frostbite']:
conf['sim_afflict.efficiency'] = abs(float(params['sim_afflict_time'])) / 100
conf['sim_afflict.type'] = params['sim_afflict_type']
except:
pass
try:
conf['sim_buffbot.buff'] = min(max(int(params['sim_buff_str']), -1000), 1000)/100
except:
pass
try:
conf['sim_buffbot.debuff'] = min(max(int(params['sim_buff_def']), -50), 50)/100
except:
pass
result = run_adv_test(adv_name, wp1, wp2, dra, wep, acl, conf, cond, teamdps, t=t, log=log, mass=mass)
return jsonify(result)
@app.route('/simc_adv_slotlist', methods=['GET', 'POST'])
def get_adv_slotlist():
result = {}
result['adv'] = {}
if request.method == 'GET':
result['adv']['name'] = request.args.get('adv', default=None)
elif request.method == 'POST':
params = request.get_json(silent=True)
result['adv']['name'] = params['adv'].lower() if 'adv' in params else None
else:
return 'Wrong request method.'
adv_ele = None
dragon_module = slot.d
weap_module = slot.w
if result['adv']['name'] is not None:
adv_instance = ADV_MODULES[result['adv']['name'].lower()]()
adv_ele = adv_instance.slots.c.ele.lower()
result['adv']['fullname'] = adv_instance.__class__.__name__
result['adv']['ele'] = adv_ele
dragon_module = getattr(slot.d, adv_ele)
result['adv']['wt'] = adv_instance.slots.c.wt.lower()
weap_module = getattr(slot.w, result['adv']['wt'])
result['coab'] = coability_dict(adv_ele)
result['adv']['pref_dra'] = type(adv_instance.slots.d).__qualname__
result['adv']['pref_wep'] = type(adv_instance.slots.w).__qualname__
result['adv']['pref_wp'] = {
'wp1': type(adv_instance.slots.a).__qualname__,
'wp2': type(adv_instance.slots.a.a2).__qualname__
}
result['adv']['pref_coab'] = adv_instance.coab
result['adv']['pref_share'] = adv_instance.share
result['adv']['acl'] = adv_instance.conf.acl
if 'afflict_res' in adv_instance.conf:
res_conf = adv_instance.conf.afflict_res
res_dict = {}
for afflic in AFFLICT_LIST:
if afflic in res_conf:
res_dict[afflic] = res_conf[afflic]
if len(res_dict.keys()) > 0:
result['adv']['afflict_res'] = res_dict
if result['adv']['name'] in SPECIAL_ADV:
result['adv']['no_config'] = SPECIAL_ADV[result['adv']['name']]['nc']
# result['amulets'] = list_members(slot.a, is_amulet)
result['dragons'] = list_members(dragon_module, is_dragon, element=adv_ele)
result['weapons'] = list_members(weap_module, is_weapon, element=adv_ele)
return jsonify(result)
@app.route('/simc_adv_wp_list', methods=['GET', 'POST'])
def get_adv_wp_list():
if not (request.method == 'GET' or request.method == 'POST'):
return 'Wrong request method.'
result = {}
result['amulets'] = list_members(slot.a, is_amulet)
result['adv'] = list(ADV_MODULES.keys())
result['skillshare'] = dict(sorted(skillshare.items()))
    return jsonify(result)


# === igor35hh/PythonTraining :: /orderlunch/project/foodshop/wsgi.py (no license) ===
"""
WSGI config for foodshop project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "foodshop.settings")
application = get_wsgi_application()


# === cbroms/telematicEnvironment :: /whispers/apps.py (MIT) ===
from django.apps import AppConfig

class WhispersConfig(AppConfig):
name = 'whispers'


# === slalbertojesus/merixo-rest :: /Usuario/.history/serializers_20191103100253.py (no license) ===
from rest_framework import serializers
from django.db import models
from .models import Usuario
class UsuarioSerializer(serializers.ModelSerializer):
    passwordConfirm = serializers.CharField(style={'input_type': 'password'}, write_only=True)
class Meta:
model = Usuario
fields = ['usuario', 'password', 'nombre', 'correo', 'passwordConfirm', 'rol']
extra_kwargs = {
'password':{'write_only': True}
}
def save(self):
usuario = Usuario(
usuario = self.validated_data['usuario'],
correo = self.validated_data['correo'],
rol = "usuario"
)
        password = self.validated_data['password']
        passwordConfirm = self.validated_data['passwordConfirm']
if password != passwordConfirm:
raise serializers.ValidationError({'password': 'Las contraseñas no son iguales'})
usuario.password = password
usuario.save()
return usuario
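The mismatch guard in `save()` above can be exercised in isolation; a minimal pure-Python sketch (the `check_password_pair` helper and its `ValueError` are illustrative stand-ins for `serializers.ValidationError`, with no Django REST Framework dependency):

```python
def check_password_pair(password, password_confirm):
    # Mirrors the guard in UsuarioSerializer.save(): reject mismatched entries.
    # ValueError stands in for serializers.ValidationError to stay dependency-free.
    if password != password_confirm:
        raise ValueError('Las contraseñas no son iguales')
    return password

ok = check_password_pair('s3cret', 's3cret')
```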


# === 99tian/Python-spider :: /1024文章/test.py (no license) ===
import requests
import re
new_url = "https://1024.fil6.tk/htm_data/1907/16/3586699.html"
header = {
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36",
"cookie": "__cfduid=d991240ded8f691413d0a5238e78525ee1563844869; UM_distinctid=16c1c6ba0a3d2-0247f6cbe42a12-c343162-100200-16c1c6ba0a462d; PHPSESSID=cjkptobgiir2bmui06ttr75gi6; 227c9_lastvisit=0%091563848591%09%2Findex.php%3Fu%3D481320%26vcencode%3D5793155999; CNZZDATA950900=cnzz_eid%3D589544733-1563841323-%26ntime%3D1563848242"
}
print(new_url)
res = requests.get(new_url, headers=header)
# print(res.content.decode('gbk', 'ignore'))
# res_html = res.text()
# with open("1.html", 'w') as f:
# f.write(res.content.decode('gbk', 'ignore'))
res_html = res.content.decode('gbk', 'ignore')
res_jpg = re.findall(r"data-src='(.*?)'", res_html)
print(res_jpg)
number = 0
for i in res_jpg:
print(i)
number += 1
res_jpg = requests.get(i, headers=header)
save_path = "{}.jpg".format(number)
with open(save_path, "wb") as f:
        f.write(res_jpg.content)


# === nildiert/holbertonschool-higher_level_programming :: /0x01-python-if_else_loops_functions/12-fizzbuzz.py (no license) ===
#!/usr/bin/python3
def fizzbuzz():
for i in range(1, 101):
if (i % 3 != 0 and i % 5 != 0):
print("{:d}".format(i), end=' ')
else:
print("{}".format("Fizz" if ((i % 3) == 0) else ""), end='')
print("{}".format("Buzz" if ((i % 5) == 0) else ""), end=' ')
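The printing loop above can be verified in isolation; a sketch that collects the same tokens into a list instead of printing them (the `fizzbuzz_tokens` helper is illustrative, not part of the repository):

```python
def fizzbuzz_tokens(n=100):
    # Same branching as 12-fizzbuzz.py, but returning tokens instead of printing
    out = []
    for i in range(1, n + 1):
        if i % 3 == 0 and i % 5 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

tokens = fizzbuzz_tokens(15)
```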


# === borgishmorg/mis-backend :: /app/api/patients/actions/put_patient.py (no license) ===
from fastapi import Depends, Path, HTTPException, status
from app.dependencies import token_payload, TokenPayload, Permission
from ..controller import PatientsController, PatientDoesNotExistException
from ..schemas import Patient, PatientIn
def put_patient(
patient_in: PatientIn,
id: int = Path(...),
patients: PatientsController = Depends(),
token_payload: TokenPayload = Depends(token_payload(permissions=[Permission.PATIENTS_EDIT]))
) -> Patient:
try:
return patients.update_patient(id, patient_in)
except PatientDoesNotExistException as exception:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=str(exception)
)


# === cash2one/xai :: /xai/brain/wordbase/adjectives/_narrower.py (MIT) ===
from xai.brain.wordbase.adjectives._narrow import _NARROW
# class header
class _NARROWER(_NARROW, ):
def __init__(self,):
_NARROW.__init__(self)
self.name = "NARROWER"
self.specie = 'adjectives'
self.basic = "narrow"
self.jsondata = {}


# === exalearn/electrolyte-design :: /colmena/xtb-active-learning-debug/run.py (no license) ===
from threading import Event, Semaphore
from typing import List
from functools import partial, update_wrapper
from pathlib import Path
from datetime import datetime
from queue import Queue, Empty
import argparse
import logging
import hashlib
import json
import sys
import os
import rdkit
import numpy as np
import pandas as pd
import tensorflow as tf
from rdkit import Chem
from colmena.task_server import ParslTaskServer
from colmena.models import Result
from colmena.redis.queue import ClientQueues, make_queue_pairs
from colmena.thinker import BaseThinker, agent, result_processor, event_responder, task_submitter
from colmena.thinker.resources import ResourceCounter
from config import local_config
from moldesign.score.mpnn import evaluate_mpnn, update_mpnn, retrain_mpnn, MPNNMessage, custom_objects
from moldesign.store.models import MoleculeData
from moldesign.store.recipes import apply_recipes
from moldesign.utils import get_platform_info
from sim import run_simulation
# Disable all GPUs for the planning process
tf.config.set_visible_devices([], 'GPU')
visible_devices = tf.config.get_visible_devices()
for device in visible_devices:
assert device.device_type != 'GPU'
class Thinker(BaseThinker):
"""ML-enhanced optimization loop for molecular design"""
def __init__(self, queues: ClientQueues,
num_workers: int,
database: List[MoleculeData],
search_space: List[str],
n_to_evaluate: int,
n_complete_before_retrain: int,
retrain_from_initial: bool,
mpnns: List[tf.keras.Model],
inference_chunk_size: int,
output_dir: str,
beta: float,
random_seed: int,
output_property: str):
"""
Args:
queues: Queues used to communicate with the method server
database: Link to the MongoDB instance used to store results
search_space: List of InChI strings which define the search space
n_complete_before_retrain: Number of simulations to complete before retraining
retrain_from_initial: Whether to update the model or retrain it from initial weights
mpnns: List of MPNNs to use for selecting samples
output_dir: Where to write the output files
beta: Amount to weight uncertainty in the activation function
random_seed: Random seed for the model (re)trainings
output_property: Name of the property to be computed
"""
super().__init__(queues, ResourceCounter(num_workers, ['training', 'inference', 'simulation']), daemon=True)
# Configuration for the run
self.inference_chunk_size = inference_chunk_size
self.n_complete_before_retrain = n_complete_before_retrain
self.retrain_from_initial = retrain_from_initial
self.mpnns = mpnns.copy()
self.output_dir = Path(output_dir)
self.beta = beta
self.nodes_per_qc = 1
self.random_seed = random_seed
self.output_property = output_property
# Get the initial database
self.database = database
self.logger.info(f'Populated an initial database of {len(self.database)} entries')
# Get the target database size
self.n_to_evaluate = n_to_evaluate
self.target_size = n_to_evaluate + len(self.database)
# List the molecules that have already been searched
self.already_searched = set([d.identifier['inchi'] for d in self.database])
# Prepare search space
self.mols = search_space.copy()
self.inference_chunks = np.array_split(self.mols, max(len(self.mols) // self.inference_chunk_size, 1))
self.logger.info(f'Split {len(self.mols)} molecules into {len(self.inference_chunks)} chunks for inference')
# Inter-thread communication stuff
self.start_inference = Event() # Mark that inference should start
self.inference_finished = Event() # Mark that inference has finished
self.inference_slots = Semaphore(value=0) # Number of slots available for inference tasks
self.start_training = Event() # Mark that retraining should start
self.all_training_started = Event() # Mark that training has begun
self.task_queue = Queue() # Holds a list of tasks for inference
self.max_ml_workers = 1
self.inference_batch = 0
# Allocate maximum number of ML workers to inference, remainder to simulation
self.rec.reallocate(None, 'inference', self.max_ml_workers)
self.rec.acquire('inference', self.max_ml_workers)
if num_workers > self.max_ml_workers:
self.rec.reallocate(None, 'simulation', num_workers - self.max_ml_workers)
# Mark the number of inference slots we have available
for _ in range(self.rec.allocated_slots('inference') * 2):
self.inference_slots.release()
@agent
def allocator(self):
"""Allocates resources to different tasks"""
# Start off by running inference loop
self.start_inference.set()
# Wait until the inference tasks event to finish
self.inference_finished.wait()
self.inference_finished.clear()
self.logger.info(f'Finished first round of inference')
while not self.done.is_set():
# Reallocate all resources from inference to QC tasks
self.rec.reallocate('inference', 'simulation', self.rec.allocated_slots('inference'))
self.inference_slots = Semaphore(value=0) # Reset the number of inference slots to zero
# Wait until QC tasks complete
retrain_size = len(self.database) + self.n_complete_before_retrain
self.logger.info(f'Waiting until database reaches {retrain_size}. Current size: {len(self.database)}')
while len(self.database) < retrain_size:
if self.done.wait(15):
return
# Start the training process
self.logger.info('Triggered retraining process. Beginning to allocate nodes from simulation to training')
self.start_training.set()
# Gather nodes for training until either training finishes or we have 1 node per model.
n_allocated = 0
self.all_training_started.clear()
while not (self.all_training_started.is_set() or n_allocated >= self.max_ml_workers
or self.done.is_set()):
if self.rec.reallocate('simulation', 'training', self.nodes_per_qc,
cancel_if=self.all_training_started):
n_allocated += self.nodes_per_qc
self.logger.info('All training tasks have been submitted.'
' Waiting for them to finish before deallocating to inference')
while not self.all_training_started.wait(timeout=15):
if self.done.is_set(): # Exit if we have finished
return
self.all_training_started.clear()
# Allocate initial nodes for inference
self.logger.info('Waiting for training tasks to complete.')
n_to_reallocate = self.rec.allocated_slots('training')
self.rec.reallocate('training', 'inference', n_to_reallocate)
for _ in range(n_to_reallocate * 2):
self.inference_slots.release()
# Trigger inference
self.logger.info('Beginning inference process. Will gradually scavenge nodes from simulation tasks')
self.start_inference.set()
while not (self.inference_finished.is_set() or self.done.is_set() or
self.rec.allocated_slots("inference") == self.max_ml_workers):
# Request a block of nodes for inference
acq_success = self.rec.reallocate('simulation', 'inference', self.nodes_per_qc,
cancel_if=self.inference_finished)
self.rec.acquire('inference', self.nodes_per_qc)
# Make them available to the task submission thread
if acq_success:
self.logger.info(f'Allocated {self.nodes_per_qc} more nodes to inference')
for _ in range(self.nodes_per_qc * 2): # 1 execution slot per worker
self.inference_slots.release()
while not self.inference_finished.wait(timeout=15):
if self.done.is_set():
return
self.inference_finished.clear()
self.rec.release('inference', self.rec.allocated_slots('inference'))
self.logger.info(f'Completed inference and task selection')
@task_submitter(task_type='simulation', n_slots=1)
def launch_qc(self):
# Submit the next task
inchi, info = self.task_queue.get()
mol = Chem.MolFromInchi(inchi)
smiles = Chem.MolToSmiles(mol)
self.logger.info(f'Submitted {smiles} to simulate with NWChem')
self.already_searched.add(inchi)
self.queues.send_inputs(smiles, task_info=info,
method='run_simulation', keep_inputs=True,
topic='simulate')
@result_processor(topic='simulate')
def record_qc(self, result: Result):
# Get basic task information
smiles, = result.args
# Release nodes for use by other processes
self.rec.release("simulation", self.nodes_per_qc)
# If successful, add to the database
if result.success:
# Store the data in a molecule data object
data = MoleculeData.from_identifier(smiles=smiles)
opt_records, hess_records = result.value
for r in opt_records:
data.add_geometry(r)
for r in hess_records:
data.add_single_point(r)
apply_recipes(data) # Compute the IP
# Add to database
with open(self.output_dir.joinpath('moldata-records.json'), 'a') as fp:
print(json.dumps([datetime.now().timestamp(), data.json()]), file=fp)
self.database.append(data)
# If the database is complete, set "done"
if len(self.database) >= self.target_size:
self.logger.info(f'Database has reached target size of {len(self.database)}. Exiting')
self.done.set()
# Write to disk
with open(self.output_dir.joinpath('qcfractal-records.json'), 'a') as fp:
for r in opt_records + hess_records:
print(r.json(), file=fp)
self.logger.info(f'Added complete calculation for {smiles} to database.')
else:
self.logger.info(f'Computations failed for {smiles}. Check JSON file for stacktrace')
# Write out the result to disk
with open(self.output_dir.joinpath('simulation-results.json'), 'a') as fp:
print(result.json(exclude={'value'}), file=fp)
@event_responder(event_name='start_training')
def train_models(self):
"""Train machine learning models"""
self.start_training.clear()
self.logger.info('Started retraining')
for mid, model in enumerate(self.mpnns):
# Wait until we have nodes
if not self.rec.acquire('training', 1, cancel_if=self.done):
# If unsuccessful, exit because we are finished
return
# Make the database
train_data = dict(
(d.identifier['smiles'], d.oxidation_potential[self.output_property])
for d in self.database
if self.output_property in d.oxidation_potential
)
# Make the MPNN message
if self.retrain_from_initial:
self.queues.send_inputs(model.get_config(), train_data, method='retrain_mpnn', topic='train',
task_info={'model_id': mid}, # , 'molecules': list(train_data.keys())},
keep_inputs=False,
input_kwargs={'random_state': mid + self.random_seed})
else:
model_msg = MPNNMessage(model)
self.queues.send_inputs(model_msg, train_data, method='update_mpnn', topic='train',
task_info={'model_id': mid}, #'molecules': list(train_data.keys())},
keep_inputs=False,
input_kwargs={'random_state': mid + self.random_seed})
self.logger.info(f'Submitted model {mid} to train with {len(train_data)} entries')
self.all_training_started.set()
@result_processor(topic='train')
def update_weights(self, result: Result):
"""Process the results of the saved model"""
self.rec.release('training', 1)
# Save results to disk
with open(self.output_dir.joinpath('training-results.json'), 'a') as fp:
print(result.json(exclude={'inputs', 'value'}), file=fp)
# Make sure the run completed
model_id = result.task_info['model_id']
if not result.success:
self.logger.warning(f'Training failed for {model_id}')
return
# Update weights
weights, history = result.value
self.mpnns[model_id].set_weights(weights)
# Print out some status info
self.logger.info(f'Model {model_id} finished training.')
with open(self.output_dir.joinpath('training-history.json'), 'a') as fp:
print(repr(history), file=fp)
@event_responder(event_name='start_inference')
def launch_inference(self):
"""Submit inference tasks for the yet-unlabelled samples"""
self.logger.info('Beginning to submit inference tasks')
# Make a folder for the models
model_folder = self.output_dir.joinpath('models')
model_folder.mkdir(exist_ok=True)
# Submit the chunks to the workflow engine
for mid, model in enumerate(self.mpnns):
# Save the current model to disk
model_path = model_folder.joinpath(f'model-{mid}-{self.inference_batch}.h5')
model.save(model_path)
# Read the model in
for cid, chunk in enumerate(self.inference_chunks):
self.inference_slots.acquire() # Wait to get a slot
self.queues.send_inputs([str(model_path)], chunk['smiles'].tolist(),
topic='infer', method='evaluate_mpnn',
keep_inputs=False,
task_info={'chunk_id': cid, 'chunk_size': len(chunk), 'model_id': mid})
self.logger.info(f'Submitted chunk {cid+1}/{len(self.inference_chunks)} for model {mid+1}/{len(self.mpnns)}')
self.logger.info('Finished submitting molecules for inference')
@event_responder(event_name='start_inference')
def record_inference(self):
"""Re-prioritize the machine learning tasks"""
self.start_inference.clear()
# Make arrays that will hold the output results from each run
y_pred = [np.zeros((len(x), len(self.mpnns)), dtype=np.float32) for x in self.inference_chunks]
# Collect the inference runs
n_tasks = len(self.inference_chunks) * len(self.mpnns)
for i in range(n_tasks):
# Wait for a result
result = self.queues.get_result(topic='infer')
self.logger.info(f'Received inference task {i + 1}/{n_tasks}')
# Free up resources to submit another
self.inference_slots.release()
# Save the inference information to disk
with open(self.output_dir.joinpath('inference-records.json'), 'a') as fp:
print(result.json(exclude={'value'}), file=fp)
# Determine the data
if not result.success:
raise ValueError('Result failed! Check the JSON')
# Store the outputs
chunk_id = result.task_info.get('chunk_id')
model_id = result.task_info.get('model_id')
y_pred[chunk_id][:, model_id] = np.squeeze(result.value)
self.logger.info('All inference tasks are complete')
# Compute the mean and std for each prediction
y_pred = np.concatenate(y_pred, axis=0)
self._select_molecules(y_pred)
# Free up resources
self.rec.release('inference', self.rec.allocated_slots('inference'))
# Mark that inference is complete
self.inference_finished.set()
self.inference_batch += 1
def _select_molecules(self, y_pred):
"""Select a list of molecules given the predictions from each model
Adds them to the task queue
Args:
y_pred: List of predictions for each molecule in self.search_space
"""
# Compute the average and std of predictions
y_mean = y_pred.mean(axis=1)
y_std = y_pred.std(axis=1)
# Rank compounds according to the upper confidence bound
molecules = self.mols['inchi'].values
ucb = y_mean + self.beta * y_std
sort_ids = np.argsort(ucb)
best_list = list(zip(molecules[sort_ids].tolist(),
y_mean[sort_ids], y_std[sort_ids], ucb[sort_ids]))
# Make a temporary copy as a List, which means we can easily judge its size
task_queue = []
while len(task_queue) < self.n_to_evaluate * 2:
# Pick a molecule
mol, mean, std, ucb = best_list.pop()
# Add it to list if not in database or not already in queue
if mol not in self.already_searched and mol not in task_queue:
# Note: converting to float b/c np.float32 is not JSON serializable
task_queue.append((mol, {'mean': float(mean), 'std': float(std), 'ucb': float(ucb),
'batch': self.inference_batch}))
# Clear out the current queue
while not self.task_queue.empty():
try:
self.task_queue.get(False)
except Empty:
continue
# Put new tasks in the queue
for i in task_queue:
self.task_queue.put(i)
self.logger.info('Updated task list')
if __name__ == '__main__':
# User inputs
parser = argparse.ArgumentParser()
parser.add_argument("--redishost", default="127.0.0.1",
help="Address at which the redis server can be reached")
parser.add_argument("--redisport", default="6379",
help="Port on which redis is available")
parser.add_argument('--mpnn-config-directory', help='Directory containing the MPNN-related JSON files',
required=True)
parser.add_argument('--mpnn-model-files', nargs="+", help='Path to the MPNN h5 files', required=True)
parser.add_argument('--init-dataset', default='init_dataset.json', help='Path to the initial dataset.')
parser.add_argument('--retrain-from-scratch', action='store_true', help='Whether to retrain models from scratch')
parser.add_argument('--search-space', help='Path to molecules to be screened', required=True)
parser.add_argument("--search-size", default=1000, type=int,
help="Number of new molecules to evaluate during this search")
parser.add_argument('--retrain-frequency', default=50, type=int,
help="Number of completed computations that will trigger a retraining")
parser.add_argument("--molecules-per-ml-task", default=200000, type=int,
help="Number molecules per inference task")
    parser.add_argument("--ml-prefetch", default=1, help="Number of ML tasks to prefetch on each node", type=int)
parser.add_argument("--beta", default=1, help="Degree of exploration for active learning. "
                        "This is the beta from the UCB acquisition function", type=float)
parser.add_argument("--learning-rate", default=3e-4, help="Initial Learning rate for re-training the models", type=float)
parser.add_argument('--num-epochs', default=512, type=int, help='Maximum number of epochs for the model training')
parser.add_argument('--batch-size', default=64, type=int, help='Batch size for model training')
parser.add_argument('--random-seed', default=0, type=int, help='Random seed for model (re)trainings')
parser.add_argument('--dilation-factor', default=1, type=float,
help='Factor by which to artificially increase simulation time')
parser.add_argument('--num-workers', default=1, type=int, help='Number of workers')
parser.add_argument('--train-timeout', default=None, type=float, help='Timeout for training operation (s)')
parser.add_argument('--train-patience', default=None, type=int, help='Patience for training operation (epochs)')
parser.add_argument('--solvent', default=None, choices=[None, 'acetonitrile'],
help='Whether to compute solvation energy in a solvent')
# Parse the arguments
args = parser.parse_args()
run_params = args.__dict__
# Load in the models, initial dataset, agent and search space
models = [tf.keras.models.load_model(path, custom_objects=custom_objects) for path in args.mpnn_model_files]
with open(os.path.join(args.mpnn_config_directory, 'atom_types.json')) as fp:
atom_types = json.load(fp)
with open(os.path.join(args.mpnn_config_directory, 'bond_types.json')) as fp:
bond_types = json.load(fp)
# Load in the initial dataset
with open(args.init_dataset) as fp:
database = [MoleculeData.parse_raw(line.strip()) for line in fp]
# Load in the search space
def _only_known_elements(inchi: str):
mol = rdkit.Chem.MolFromInchi(inchi)
if mol is None:
return False
return all(
e.GetAtomicNum() in atom_types for e in mol.GetAtoms()
)
full_search = pd.read_csv(args.search_space, delim_whitespace=True)
search_space = full_search[full_search['inchi'].apply(_only_known_elements)]
# Create an output directory with the time and run parameters
start_time = datetime.utcnow()
params_hash = hashlib.sha256(json.dumps(run_params).encode()).hexdigest()[:6]
out_dir = os.path.join('runs', f'ensemble-{start_time.strftime("%d%b%y-%H%M%S")}-{params_hash}')
os.makedirs(out_dir, exist_ok=False)
# Save the run parameters to disk
with open(os.path.join(out_dir, 'run_params.json'), 'w') as fp:
json.dump(run_params, fp, indent=2)
with open(os.path.join(out_dir, 'environment.json'), 'w') as fp:
json.dump(dict(os.environ), fp, indent=2)
# Save the platform information to disk
host_info = get_platform_info()
with open(os.path.join(out_dir, 'host_info.json'), 'w') as fp:
json.dump(host_info, fp, indent=2)
# Set up the logging
handlers = [logging.FileHandler(os.path.join(out_dir, 'runtime.log')),
logging.StreamHandler(sys.stdout)]
class ParslFilter(logging.Filter):
"""Filter out Parsl debug logs"""
def filter(self, record):
return not (record.levelno == logging.DEBUG and '/parsl/' in record.pathname)
for h in handlers:
h.addFilter(ParslFilter())
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO, handlers=handlers)
# Write the configuration
# ML nodes: N for updating models, 1 for MolDQN, 1 for inference runs
config = local_config(os.path.join(out_dir, 'run-info'), args.num_workers)
# Save Parsl configuration
with open(os.path.join(out_dir, 'parsl_config.txt'), 'w') as fp:
print(str(config), file=fp)
# Connect to the redis server
client_queues, server_queues = make_queue_pairs(args.redishost, args.redisport,
serialization_method="pickle",
topics=['simulate', 'infer', 'train'],
keep_inputs=False)
# Apply wrappers to functions to affix static settings
# Update wrapper changes the __name__ field, which is used by the Method Server
my_evaluate_mpnn = partial(evaluate_mpnn, atom_types=atom_types, bond_types=bond_types,
batch_size=32, cache=False)
my_evaluate_mpnn = update_wrapper(my_evaluate_mpnn, evaluate_mpnn)
my_update_mpnn = partial(update_mpnn, num_epochs=args.num_epochs, atom_types=atom_types, bond_types=bond_types,
learning_rate=args.learning_rate, bootstrap=True, batch_size=args.batch_size,
patience=args.train_patience, timeout=args.train_timeout)
my_update_mpnn = update_wrapper(my_update_mpnn, update_mpnn)
my_retrain_mpnn = partial(retrain_mpnn, num_epochs=args.num_epochs, atom_types=atom_types, bond_types=bond_types,
learning_rate=args.learning_rate, bootstrap=True, batch_size=args.batch_size,
patience=args.train_patience, timeout=args.train_timeout)
my_retrain_mpnn = update_wrapper(my_retrain_mpnn, retrain_mpnn)
my_run_simulation = partial(run_simulation, dilation_factor=args.dilation_factor, solvent=args.solvent)
my_run_simulation = update_wrapper(my_run_simulation, run_simulation)
# Get the name of the output property
output_prop = {
None: 'xtb-vacuum',
'acetonitrile': 'xtb-acn'
}[args.solvent]
# Create the method server and task generator
inf_cfg = {'executors': ['ml-worker']}
tra_cfg = {'executors': ['ml-worker']}
dft_cfg = {'executors': ['qc-worker']}
doer = ParslTaskServer([(my_evaluate_mpnn, inf_cfg), (my_run_simulation, dft_cfg),
(my_update_mpnn, tra_cfg), (my_retrain_mpnn, tra_cfg)],
server_queues, config)
# Configure the "thinker" application
thinker = Thinker(client_queues,
args.num_workers,
database,
search_space,
args.search_size,
args.retrain_frequency,
args.retrain_from_scratch,
models,
args.molecules_per_ml_task,
out_dir,
args.beta,
args.random_seed,
output_prop)
logging.info('Created the task server and task generator')
try:
# Launch the servers
# The method server is a Thread, so that it can access the Parsl DFK
# The task generator is a Thread, so that all debugging methods get cast to screen
doer.start()
thinker.start()
logging.info(f'Running on {os.getpid()}')
logging.info('Launched the servers')
# Wait for the task generator to complete
thinker.join()
logging.info('Task generator has completed')
finally:
client_queues.send_kill_signal()
# Wait for the method server to complete
doer.join()
| [
"ward.logan.t@gmail.com"
] | ward.logan.t@gmail.com |
68e5a38491a77f0069d982c1c17594d41e32c283 | ddf2e85b8e8fda8cbaf92fc79a53abdb962c8bde | /tests/violated/2225_regression/0.py | 1ec803ab695356b4701513cb8e2f5bb9266fef43 | [
"Apache-2.0"
] | permissive | p4gauntlet/toz3 | 359bd20bdc8fe2b7ccf3564a90988823d94df078 | 0fddd9e21ac7b80e4a0bf8a4e6b1bdcc01308724 | refs/heads/master | 2023-05-11T17:23:10.972917 | 2023-05-09T16:02:56 | 2023-05-09T16:02:56 | 329,900,719 | 4 | 0 | Apache-2.0 | 2023-02-22T23:28:49 | 2021-01-15T12:03:53 | Python | UTF-8 | Python | false | false | 21,670 | py | from p4z3 import *
def p4_program(prog_state):
prog_state.declare_global(
Enum( "error", ["NoError", "PacketTooShort", "NoMatch", "StackOutOfBounds", "HeaderTooShort", "ParserTimeout", "ParserInvalidArgument", ])
)
prog_state.declare_global(
P4Extern("packet_in", type_params=[], methods=[P4Declaration("extract", P4Method("extract", type_params=(None, [
"T",]), params=[
P4Parameter("out", "hdr", "T", None),])), P4Declaration("extract", P4Method("extract", type_params=(None, [
"T",]), params=[
P4Parameter("out", "variableSizeHeader", "T", None),
P4Parameter("in", "variableFieldSizeInBits", z3.BitVecSort(32), None),])), P4Declaration("lookahead", P4Method("lookahead", type_params=("T", [
"T",]), params=[])), P4Declaration("advance", P4Method("advance", type_params=(None, []), params=[
P4Parameter("in", "sizeInBits", z3.BitVecSort(32), None),])), P4Declaration("length", P4Method("length", type_params=(z3.BitVecSort(32), []), params=[])), ])
)
prog_state.declare_global(
P4Extern("packet_out", type_params=[], methods=[P4Declaration("emit", P4Method("emit", type_params=(None, [
"T",]), params=[
P4Parameter("in", "hdr", "T", None),])), ])
)
prog_state.declare_global(
P4Declaration("verify", P4Method("verify", type_params=(None, []), params=[
P4Parameter("in", "check", z3.BoolSort(), None),
P4Parameter("in", "toSignal", "error", None),]))
)
prog_state.declare_global(
P4Declaration("NoAction", P4Action("NoAction", params=[], body=BlockStatement([]
) ))
)
prog_state.declare_global(
P4Declaration("match_kind", ["exact", "ternary", "lpm", ])
)
prog_state.declare_global(
P4Declaration("match_kind", ["range", "optional", "selector", ])
)
prog_state.declare_global(
ValueDeclaration("__v1model_version", 20180101, z3_type=z3.BitVecSort(32))
)
prog_state.declare_global(
StructType("standard_metadata_t", prog_state, fields=[("ingress_port", z3.BitVecSort(9)), ("egress_spec", z3.BitVecSort(9)), ("egress_port", z3.BitVecSort(9)), ("instance_type", z3.BitVecSort(32)), ("packet_length", z3.BitVecSort(32)), ("enq_timestamp", z3.BitVecSort(32)), ("enq_qdepth", z3.BitVecSort(19)), ("deq_timedelta", z3.BitVecSort(32)), ("deq_qdepth", z3.BitVecSort(19)), ("ingress_global_timestamp", z3.BitVecSort(48)), ("egress_global_timestamp", z3.BitVecSort(48)), ("mcast_grp", z3.BitVecSort(16)), ("egress_rid", z3.BitVecSort(16)), ("checksum_error", z3.BitVecSort(1)), ("parser_error", "error"), ("priority", z3.BitVecSort(3)), ], type_params=[])
)
prog_state.declare_global(
Enum( "CounterType", ["packets", "bytes", "packets_and_bytes", ])
)
prog_state.declare_global(
Enum( "MeterType", ["packets", "bytes", ])
)
prog_state.declare_global(
P4Extern("counter", type_params=[], methods=[P4Declaration("counter", P4Method("counter", type_params=(None, []), params=[
P4Parameter("none", "size", z3.BitVecSort(32), None),
P4Parameter("none", "type", "CounterType", None),])), P4Declaration("count", P4Method("count", type_params=(None, []), params=[
P4Parameter("in", "index", z3.BitVecSort(32), None),])), ])
)
prog_state.declare_global(
P4Extern("direct_counter", type_params=[], methods=[P4Declaration("direct_counter", P4Method("direct_counter", type_params=(None, []), params=[
P4Parameter("none", "type", "CounterType", None),])), P4Declaration("count", P4Method("count", type_params=(None, []), params=[])), ])
)
prog_state.declare_global(
P4Extern("meter", type_params=[], methods=[P4Declaration("meter", P4Method("meter", type_params=(None, []), params=[
P4Parameter("none", "size", z3.BitVecSort(32), None),
P4Parameter("none", "type", "MeterType", None),])), P4Declaration("execute_meter", P4Method("execute_meter", type_params=(None, [
"T",]), params=[
P4Parameter("in", "index", z3.BitVecSort(32), None),
P4Parameter("out", "result", "T", None),])), ])
)
prog_state.declare_global(
P4Extern("direct_meter", type_params=[
"T",], methods=[P4Declaration("direct_meter", P4Method("direct_meter", type_params=(None, []), params=[
P4Parameter("none", "type", "MeterType", None),])), P4Declaration("read", P4Method("read", type_params=(None, []), params=[
P4Parameter("out", "result", "T", None),])), ])
)
prog_state.declare_global(
P4Extern("register", type_params=[
"T",], methods=[P4Declaration("register", P4Method("register", type_params=(None, []), params=[
P4Parameter("none", "size", z3.BitVecSort(32), None),])), P4Declaration("read", P4Method("read", type_params=(None, []), params=[
P4Parameter("out", "result", "T", None),
P4Parameter("in", "index", z3.BitVecSort(32), None),])), P4Declaration("write", P4Method("write", type_params=(None, []), params=[
P4Parameter("in", "index", z3.BitVecSort(32), None),
P4Parameter("in", "value", "T", None),])), ])
)
prog_state.declare_global(
P4Extern("action_profile", type_params=[], methods=[P4Declaration("action_profile", P4Method("action_profile", type_params=(None, []), params=[
P4Parameter("none", "size", z3.BitVecSort(32), None),])), ])
)
prog_state.declare_global(
P4Declaration("random", P4Method("random", type_params=(None, [
"T",]), params=[
P4Parameter("out", "result", "T", None),
P4Parameter("in", "lo", "T", None),
P4Parameter("in", "hi", "T", None),]))
)
prog_state.declare_global(
P4Declaration("digest", P4Method("digest", type_params=(None, [
"T",]), params=[
P4Parameter("in", "receiver", z3.BitVecSort(32), None),
P4Parameter("in", "data", "T", None),]))
)
prog_state.declare_global(
Enum( "HashAlgorithm", ["crc32", "crc32_custom", "crc16", "crc16_custom", "random", "identity", "csum16", "xor16", ])
)
prog_state.declare_global(
P4Declaration("mark_to_drop", P4Method("mark_to_drop", type_params=(None, []), params=[]))
)
prog_state.declare_global(
P4Declaration("mark_to_drop", P4Method("mark_to_drop", type_params=(None, []), params=[
P4Parameter("inout", "standard_metadata", "standard_metadata_t", None),]))
)
prog_state.declare_global(
P4Declaration("hash", P4Method("hash", type_params=(None, [
"O",
"T",
"D",
"M",]), params=[
P4Parameter("out", "result", "O", None),
P4Parameter("in", "algo", "HashAlgorithm", None),
P4Parameter("in", "base", "T", None),
P4Parameter("in", "data", "D", None),
P4Parameter("in", "max", "M", None),]))
)
prog_state.declare_global(
P4Extern("action_selector", type_params=[], methods=[P4Declaration("action_selector", P4Method("action_selector", type_params=(None, []), params=[
P4Parameter("none", "algorithm", "HashAlgorithm", None),
P4Parameter("none", "size", z3.BitVecSort(32), None),
P4Parameter("none", "outputWidth", z3.BitVecSort(32), None),])), ])
)
prog_state.declare_global(
Enum( "CloneType", ["I2E", "E2E", ])
)
prog_state.declare_global(
P4Extern("Checksum16", type_params=[], methods=[P4Declaration("Checksum16", P4Method("Checksum16", type_params=(None, []), params=[])), P4Declaration("get", P4Method("get", type_params=(z3.BitVecSort(16), [
"D",]), params=[
P4Parameter("in", "data", "D", None),])), ])
)
prog_state.declare_global(
P4Declaration("verify_checksum", P4Method("verify_checksum", type_params=(None, [
"T",
"O",]), params=[
P4Parameter("in", "condition", z3.BoolSort(), None),
P4Parameter("in", "data", "T", None),
P4Parameter("in", "checksum", "O", None),
P4Parameter("none", "algo", "HashAlgorithm", None),]))
)
prog_state.declare_global(
P4Declaration("update_checksum", P4Method("update_checksum", type_params=(None, [
"T",
"O",]), params=[
P4Parameter("in", "condition", z3.BoolSort(), None),
P4Parameter("in", "data", "T", None),
P4Parameter("inout", "checksum", "O", None),
P4Parameter("none", "algo", "HashAlgorithm", None),]))
)
prog_state.declare_global(
P4Declaration("verify_checksum_with_payload", P4Method("verify_checksum_with_payload", type_params=(None, [
"T",
"O",]), params=[
P4Parameter("in", "condition", z3.BoolSort(), None),
P4Parameter("in", "data", "T", None),
P4Parameter("in", "checksum", "O", None),
P4Parameter("none", "algo", "HashAlgorithm", None),]))
)
prog_state.declare_global(
P4Declaration("update_checksum_with_payload", P4Method("update_checksum_with_payload", type_params=(None, [
"T",
"O",]), params=[
P4Parameter("in", "condition", z3.BoolSort(), None),
P4Parameter("in", "data", "T", None),
P4Parameter("inout", "checksum", "O", None),
P4Parameter("none", "algo", "HashAlgorithm", None),]))
)
prog_state.declare_global(
P4Declaration("resubmit", P4Method("resubmit", type_params=(None, [
"T",]), params=[
P4Parameter("in", "data", "T", None),]))
)
prog_state.declare_global(
P4Declaration("recirculate", P4Method("recirculate", type_params=(None, [
"T",]), params=[
P4Parameter("in", "data", "T", None),]))
)
prog_state.declare_global(
P4Declaration("clone", P4Method("clone", type_params=(None, []), params=[
P4Parameter("in", "type", "CloneType", None),
P4Parameter("in", "session", z3.BitVecSort(32), None),]))
)
prog_state.declare_global(
P4Declaration("clone3", P4Method("clone3", type_params=(None, [
"T",]), params=[
P4Parameter("in", "type", "CloneType", None),
P4Parameter("in", "session", z3.BitVecSort(32), None),
P4Parameter("in", "data", "T", None),]))
)
prog_state.declare_global(
P4Declaration("truncate", P4Method("truncate", type_params=(None, []), params=[
P4Parameter("in", "length", z3.BitVecSort(32), None),]))
)
prog_state.declare_global(
P4Declaration("assert", P4Method("assert", type_params=(None, []), params=[
P4Parameter("in", "check", z3.BoolSort(), None),]))
)
prog_state.declare_global(
P4Declaration("assume", P4Method("assume", type_params=(None, []), params=[
P4Parameter("in", "check", z3.BoolSort(), None),]))
)
prog_state.declare_global(
P4Declaration("log_msg", P4Method("log_msg", type_params=(None, []), params=[
P4Parameter("none", "msg", z3.StringSort(), None),]))
)
prog_state.declare_global(
P4Declaration("log_msg", P4Method("log_msg", type_params=(None, [
"T",]), params=[
P4Parameter("none", "msg", z3.StringSort(), None),
P4Parameter("in", "data", "T", None),]))
)
prog_state.declare_global(
ControlDeclaration(P4ParserType("Parser", params=[
P4Parameter("none", "b", "packet_in", None),
P4Parameter("out", "parsedHdr", "H", None),
P4Parameter("inout", "meta", "M", None),
P4Parameter("inout", "standard_metadata", "standard_metadata_t", None),], type_params=[
"H",
"M",]))
)
prog_state.declare_global(
ControlDeclaration(P4ControlType("VerifyChecksum", params=[
P4Parameter("inout", "hdr", "H", None),
P4Parameter("inout", "meta", "M", None),], type_params=[
"H",
"M",]))
)
prog_state.declare_global(
ControlDeclaration(P4ControlType("Ingress", params=[
P4Parameter("inout", "hdr", "H", None),
P4Parameter("inout", "meta", "M", None),
P4Parameter("inout", "standard_metadata", "standard_metadata_t", None),], type_params=[
"H",
"M",]))
)
prog_state.declare_global(
ControlDeclaration(P4ControlType("Egress", params=[
P4Parameter("inout", "hdr", "H", None),
P4Parameter("inout", "meta", "M", None),
P4Parameter("inout", "standard_metadata", "standard_metadata_t", None),], type_params=[
"H",
"M",]))
)
prog_state.declare_global(
ControlDeclaration(P4ControlType("ComputeChecksum", params=[
P4Parameter("inout", "hdr", "H", None),
P4Parameter("inout", "meta", "M", None),], type_params=[
"H",
"M",]))
)
prog_state.declare_global(
ControlDeclaration(P4ControlType("Deparser", params=[
P4Parameter("none", "b", "packet_out", None),
P4Parameter("in", "hdr", "H", None),], type_params=[
"H",]))
)
prog_state.declare_global(
ControlDeclaration(P4Package("V1Switch", params=[
P4Parameter("none", "p", TypeSpecializer("Parser", "H", "M", ), None),
P4Parameter("none", "vr", TypeSpecializer("VerifyChecksum", "H", "M", ), None),
P4Parameter("none", "ig", TypeSpecializer("Ingress", "H", "M", ), None),
P4Parameter("none", "eg", TypeSpecializer("Egress", "H", "M", ), None),
P4Parameter("none", "ck", TypeSpecializer("ComputeChecksum", "H", "M", ), None),
P4Parameter("none", "dep", TypeSpecializer("Deparser", "H", ), None),],type_params=[
"H",
"M",]))
)
prog_state.declare_global(
HeaderType("ethernet_t", prog_state, fields=[("dst_addr", z3.BitVecSort(48)), ("src_addr", z3.BitVecSort(48)), ("eth_type", z3.BitVecSort(16)), ], type_params=[])
)
prog_state.declare_global(
HeaderType("H", prog_state, fields=[("a", z3.BitVecSort(8)), ("b", z3.BitVecSort(8)), ("c", z3.BitVecSort(8)), ("d", z3.BitVecSort(8)), ("e", z3.BitVecSort(8)), ("f", z3.BitVecSort(8)), ("g", z3.BitVecSort(8)), ("h", z3.BitVecSort(8)), ("i", z3.BitVecSort(8)), ("j", z3.BitVecSort(8)), ("k", z3.BitVecSort(8)), ("l", z3.BitVecSort(8)), ("m", z3.BitVecSort(8)), ], type_params=[])
)
prog_state.declare_global(
HeaderType("B", prog_state, fields=[("a", z3.BitVecSort(8)), ("b", z3.BitVecSort(8)), ("c", z3.BitVecSort(8)), ("d", z3.BitVecSort(8)), ], type_params=[])
)
prog_state.declare_global(
StructType("Headers", prog_state, fields=[("eth_hdr", "ethernet_t"), ("h", "H"), ("b", "B"), ], type_params=[])
)
prog_state.declare_global(
StructType("Meta", prog_state, fields=[], type_params=[])
)
prog_state.declare_global(
P4Declaration("function_with_side_effect", P4Function("function_with_side_effect", return_type=z3.BitVecSort(8), params=[
P4Parameter("inout", "val", z3.BitVecSort(8), None),], body=BlockStatement([
AssignmentStatement("val", z3.BitVecVal(1, 8)),
P4Return(z3.BitVecVal(2, 8)),]
) ) )
)
prog_state.declare_global(
P4Declaration("bool_with_side_effect", P4Function("bool_with_side_effect", return_type=z3.BoolSort(), params=[
P4Parameter("inout", "val", z3.BitVecSort(8), None),], body=BlockStatement([
AssignmentStatement("val", z3.BitVecVal(1, 8)),
P4Return(z3.BoolVal(True)),]
) ) )
)
prog_state.declare_global(
ControlDeclaration(P4Parser(
name="p",
type_params=[],
params=[
P4Parameter("none", "pkt", "packet_in", None),
P4Parameter("out", "hdr", "Headers", None),
P4Parameter("inout", "m", "Meta", None),
P4Parameter("inout", "sm", "standard_metadata_t", None),],
const_params=[],
local_decls=[],
body=ParserTree([
ParserState(name="start", select="accept",
components=[
MethodCallStmt(MethodCallExpr(P4Member("pkt", "extract"), ["ethernet_t", ], P4Member("hdr", "eth_hdr"), )),
MethodCallStmt(MethodCallExpr(P4Member("pkt", "extract"), ["H", ], P4Member("hdr", "h"), )),
MethodCallStmt(MethodCallExpr(P4Member("pkt", "extract"), ["B", ], P4Member("hdr", "b"), )), ]),
])
))
)
prog_state.declare_global(
ControlDeclaration(P4Control(
name="ingress",
type_params=[],
params=[
P4Parameter("inout", "h", "Headers", None),
P4Parameter("inout", "m", "Meta", None),
P4Parameter("inout", "sm", "standard_metadata_t", None),],
const_params=[],
body=BlockStatement([
ValueDeclaration("dummy_var", None, z3_type=z3.BitVecSort(8)),
ValueDeclaration("dummy_bool", None, z3_type=z3.BoolSort()),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", z3.BitVecVal(0, 8)),
AssignmentStatement("dummy_var", MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "g"), )),
AssignmentStatement("dummy_var", P4subsat(z3.BitVecVal(0, 8), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "h"), ))),
AssignmentStatement("dummy_var", P4addsat(z3.BitVecVal(255, 8), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "i"), ))),
AssignmentStatement("dummy_var", P4add(z3.BitVecVal(255, 8), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "j"), ))),
AssignmentStatement("dummy_var", P4bor(z3.BitVecVal(255, 8), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "k"), ))),
AssignmentStatement("dummy_var", P4neg(MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "h"), "l"), ))),
AssignmentStatement("dummy_var", P4Slice(z3.BitVecVal(1, 16), 7, 0)),
AssignmentStatement("dummy_bool", z3.BoolVal(True)),
AssignmentStatement("dummy_bool", z3.BoolVal(False)),
AssignmentStatement("dummy_bool", P4ne(MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "b"), "c"), ), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "b"), "c"), ))),
AssignmentStatement("dummy_bool", P4eq(MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "b"), "d"), ), MethodCallExpr("function_with_side_effect", [], P4Member(P4Member("h", "b"), "d"), ))),]
),
local_decls=[]
))
)
prog_state.declare_global(
ControlDeclaration(P4Control(
name="vrfy",
type_params=[],
params=[
P4Parameter("inout", "h", "Headers", None),
P4Parameter("inout", "m", "Meta", None),],
const_params=[],
body=BlockStatement([]
),
local_decls=[]
))
)
prog_state.declare_global(
ControlDeclaration(P4Control(
name="update",
type_params=[],
params=[
P4Parameter("inout", "h", "Headers", None),
P4Parameter("inout", "m", "Meta", None),],
const_params=[],
body=BlockStatement([]
),
local_decls=[]
))
)
prog_state.declare_global(
ControlDeclaration(P4Control(
name="egress",
type_params=[],
params=[
P4Parameter("inout", "h", "Headers", None),
P4Parameter("inout", "m", "Meta", None),
P4Parameter("inout", "sm", "standard_metadata_t", None),],
const_params=[],
body=BlockStatement([]
),
local_decls=[]
))
)
prog_state.declare_global(
ControlDeclaration(P4Control(
name="deparser",
type_params=[],
params=[
P4Parameter("none", "b", "packet_out", None),
P4Parameter("in", "h", "Headers", None),],
const_params=[],
body=BlockStatement([
MethodCallStmt(MethodCallExpr(P4Member("b", "emit"), ["Headers", ], "h", )),]
),
local_decls=[]
))
)
prog_state.declare_global(
InstanceDeclaration("main", TypeSpecializer("V1Switch", "Headers", "Meta", ), ConstCallExpr("p", ), ConstCallExpr("vrfy", ), ConstCallExpr("ingress", ), ConstCallExpr("egress", ), ConstCallExpr("update", ), ConstCallExpr("deparser", ), )
)
var = prog_state.get_main_function()
return var if isinstance(var, P4Package) else None
| [
"noreply@github.com"
] | p4gauntlet.noreply@github.com |
1324f492677704a221972cd8a2be003c924f9c79 | 608bc4314c5d91744c0731b91882e124fd44fb9a | /protomol-test/MDLTests/data/XYZTest.py | 19bd5d247903a2a89a0b393778ccf410bdf2e151 | [] | no_license | kuangchen/ProtoMolAddon | bfd1a4f10e7d732b8ed22d38bfa3c7d1f0b228c0 | 78c96b72204e301d36f8cbe03397f2a02377279f | refs/heads/master | 2021-01-10T19:55:40.467574 | 2015-06-09T21:18:51 | 2015-06-09T21:18:51 | 19,328,104 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 530 | py | # USING THE NEW STRUCTURE
from MDL import *
# PHYSICAL
phys = Physical()
io = IO()
io.readPDBPos(phys, "data/alanine.pdb")
io.readPSF(phys, "data/alanine.psf")
io.readPAR(phys, "data/alanine.par")
phys.bc = "Vacuum"
phys.cellsize = 5
phys.exclude = "scaled1-4"
phys.temperature = 300
phys.seed = 1234
# FORCES
forces = Forces()
ff = forces.makeForceField(phys, "charmm")
# EXECUTE
prop = Propagator(phys, forces, io)
gamma = prop.propagate("Leapfrog", steps=20, dt=0.5, forcefield=ff)
io.writeXYZPos(phys, 'data/XYZTest.xyz')
| [
"kuangchen@ucla.edu"
] | kuangchen@ucla.edu |
feb6ea6de93053f94bd6847ba66ec91a7becadda | 86a03fa5371909a8ae8a5df02414753ab7826136 | /polls/views.py | 2283ef7746d3a557add5e2e8105a6f2efeec0f32 | [] | no_license | lnvc/backup | dc6fc1503f7b691f7533e086afba49243cc18a27 | aec426cafcf6966bf333e5d7ea3cb59ae71c23c5 | refs/heads/master | 2020-03-23T22:08:44.042727 | 2018-07-24T12:41:12 | 2018-07-24T12:41:12 | 142,156,078 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,986 | py | from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render
from django.urls import reverse
from django.views import generic
from django.utils import timezone
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status, viewsets, generics
from . serializers import QuestionSerializer, PersonSerializer, SubjectSerializer, QuestionSubjectSerializer, TestSerializer, TestQuestionSerializer, StudentSerializer, QuestionMakerSerializer, SchoolSerializer, AttendSchoolSerializer, CoachSerializer, CoachStudentSerializer, ParentSerializer, ParentStudentSerializer, TeamSerializer, StudentTeamSerializer, TestSubmissionSerializer, SubmissionSerializer
from rest_framework.parsers import MultiPartParser, FormParser, JSONParser
from .models import Question, Person, Subject, Question_Subject, Test, Test_Question, Student, Question_Maker, School, Attend_School, Coach, Coach_Student, Parent, Parent_Student, Team, Student_Team, Test_Submission, Submission
# def index(request):
# latest_question_list = Question.objects.order_by('-pub_date')[:5]
# context = {'latest_question_list': latest_question_list }
# return render(request, 'polls/index.html', context)
# def detail(request, question_id):
# question = get_object_or_404(Question, pk=question_id)
# return render(request, 'polls/detail.html', {'question': question})
# def results(request, question_id):
# result = get_object_or_404(Question, pk=question_id)
# return render(request, 'polls/results.html', {'result': result})
class PersonViewSet(viewsets.ModelViewSet):
queryset=Person.objects.all()
serializer_class=PersonSerializer
class QuestionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Question.objects.all()
serializer_class = QuestionSerializer
class SubjectViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Subject.objects.all()
serializer_class = SubjectSerializer
class QuestionSubjectViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Question_Subject.objects.all()
serializer_class = QuestionSubjectSerializer
class TestViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test.objects.all()
serializer_class = TestSerializer
class TestQuestionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test_Question.objects.all()
serializer_class = TestQuestionSerializer
class StudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Student.objects.all()
serializer_class = StudentSerializer
class QuestionMakerViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Question_Maker.objects.all()
serializer_class = QuestionMakerSerializer
class SchoolViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=School.objects.all()
serializer_class = SchoolSerializer
class AttendSchoolViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset=Attend_School.objects.all()
serializer_class = AttendSchoolSerializer
class CoachViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Coach.objects.all()
serializer_class = CoachSerializer
class CoachStudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Coach_Student.objects.all()
serializer_class = CoachStudentSerializer
class ParentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Parent.objects.all()
serializer_class = ParentSerializer
class ParentStudentViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Parent_Student.objects.all()
serializer_class = ParentStudentSerializer
class TeamViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Team.objects.all()
serializer_class = TeamSerializer
class StudentTeamViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Student_Team.objects.all()
serializer_class = StudentTeamSerializer
class TestSubmissionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Test_Submission.objects.all()
serializer_class = TestSubmissionSerializer
class SubmissionViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser, )
queryset = Submission.objects.all()
serializer_class = SubmissionSerializer
class QuestionList(APIView):
def get(self,request):
q = Question.objects.all()
serializer = QuestionSerializer(q, many=False)
return Response(serializer.data)
def post(self,request):
serializer = QuestionSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status = status.HTTP_400_BAD_REQUEST)
class IndexView(generic.ListView):
template_name='polls/index.html'
context_object_name='latest_question_list'
def get_queryset(self):
"""Return the last five published quesions
not including those set to be published in
the future)."""
        return Question.objects.filter(pub_date__lte=timezone.now()).order_by('-pub_date')[:5]
class DetailView(generic.DetailView):
model=Question
template_name='polls/detail.html'
def get_queryset(self):
return Question.objects.filter(pub_date__lte=timezone.now())
class ResultsView(generic.DetailView):
model=Question
template_name='polls/results.html'
| [
"you@example.com"
] | you@example.com |
d95520c50cf58eb54fb6a5be256649eab0022be7 | deb826125ca2f3959d30598aedd5919eea76e2d7 | /probabilistic_automata/distributions.py | 96b7cb728e312035c9fdbd8b45cad4d33791ea10 | [
"MIT"
] | permissive | pangzhan27/probabilistic_automata | 2fc77bc81381a303b48a5e2f98a5dac32dd689a0 | ac04e3c142688b1f66bea086c2779061d0c5d9b7 | refs/heads/master | 2023-03-17T13:20:05.148389 | 2020-10-27T05:31:39 | 2020-10-27T05:31:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,204 | py | from __future__ import annotations
import random
from typing import Callable, Mapping, Set, Union
from itertools import product
import attr
from dfa.dfa import Letter, State
Action = Letter
@attr.s(frozen=True, auto_attribs=True)
class ExplicitDistribution:
"""Object representing a discrete distribution over environment actions."""
_dist: Mapping[Action, float]
def sample(self) -> Action:
"""Sample an envionment action."""
actions, weights = zip(*self._dist.items())
return random.choices(actions, weights)[0]
def __call__(self, action):
"""Evaluates the probability of an action."""
return self._dist.get(action, 0)
def items(self):
"""Sequence of Action, Probability pairs defining the distribution."""
return self._dist.items()
@attr.s(frozen=True, auto_attribs=True)
class ProductDistribution:
"""Object representing the product distribution of left and right."""
left: Distribution
right: Distribution
def sample(self) -> Action:
"""Sample an envionment action."""
return (self.left.sample(), self.right.sample())
def __call__(self, action):
"""Evaluates the probability of an action."""
left_a, right_a = action
return self.left(left_a), self.right(right_a)
def items(self):
"""Sequence of Action, Probability pairs defining the distribution."""
prod = product(self.left.items(), self.right.items())
for (a1, p1), (a2, p2) in prod:
yield (a1, a2), p1 * p2
Distribution = Union[ProductDistribution, ExplicitDistribution]
EnvDist = Callable[[State, Action], Distribution]
def prod_dist(left: EnvDist, right: EnvDist) -> EnvDist:
return lambda s, a: ProductDistribution(
left=left(s[0], a[0]),
right=right(s[1], a[1]),
)
def uniform(actions: Set[Action]) -> EnvDist:
"""
Encodes an environment that selects actions uniformly at random,
i.e., maps all state/action combinations to a Uniform distribution
of the input (environment) actions.
"""
size = len(actions)
dist = ExplicitDistribution({a: 1/size for a in actions})
return lambda *_: dist
| [
"mvc@linux.com"
] | mvc@linux.com |
af5cb3d4ba037f04b21027d29eb8310732d8e2c5 | 6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4 | /5KqHNS9wS97zN7Xyy_24.py | 9056e0dc650712661fa2dd8e680363a1a547a6fc | [] | no_license | daniel-reich/ubiquitous-fiesta | 26e80f0082f8589e51d359ce7953117a3da7d38c | 9af2700dbe59284f5697e612491499841a6c126f | refs/heads/master | 2023-04-05T06:40:37.328213 | 2021-04-06T20:17:44 | 2021-04-06T20:17:44 | 355,318,759 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 123 | py |
def top_note(student):
    top = max(student["notes"])
    student.pop("notes")
    student["top_note"] = top
    return student
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
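A quick demonstration of the transformation `top_note` performs; the function is restated so the snippet runs on its own:

```python
def top_note(student):
    top = max(student["notes"])
    student.pop("notes")
    student["top_note"] = top
    return student

record = {"name": "Mia", "notes": [77, 92, 85]}   # sample input, not from the repo
result = top_note(record)
# The "notes" list is removed and replaced, in place, by a single "top_note" entry.
```

Note that the dict is mutated in place: the returned object is the same dict that was passed in.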
650ce6e69bd38870e944308884c26a7866b15c9e | 50afc0db7ccfc6c80e1d3877fc61fb67a2ba6eb7 | /challenge3(yeartocentury)/Isaac.py | ebec690b0b53c6daeb10860c12db594d25ff2bf0 | [
"MIT"
] | permissive | banana-galaxy/challenges | 792caa05e7b8aa10aad8e04369fc06aaf05ff398 | 8655c14828607535a677e2bb18689681ee6312fa | refs/heads/master | 2022-12-26T23:58:12.660152 | 2020-10-06T13:38:04 | 2020-10-06T13:38:04 | 268,851,516 | 11 | 8 | MIT | 2020-09-22T21:21:30 | 2020-06-02T16:24:41 | Python | UTF-8 | Python | false | false | 163 | py | def yearToCentury(year):
    tmp = year/100
    if not tmp.is_integer():
        century = int(tmp) + 1
    else:
        century = int(tmp)
    return century | [
"cawasp@gmail.com"
] | cawasp@gmail.com |
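The century conversion above rounds up except on exact multiples of 100, which belong to the century they close. A standalone check of that behavior (function restated, sample years chosen for illustration):

```python
def yearToCentury(year):
    tmp = year / 100
    if not tmp.is_integer():
        century = int(tmp) + 1
    else:
        century = int(tmp)
    return century

# Year 2000 is still the 20th century; 2001 opens the 21st.
examples = {1: 1, 100: 1, 1999: 20, 2000: 20, 2001: 21}
```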
9bf65574b384aa67fe6a8e1a0886f8d1d618b557 | c6588d0e7d361dba019743cacfde83f65fbf26b8 | /x12/5030/270005030.py | 5f04838af1dae128686436547fdfee1244dd5f23 | [] | no_license | djfurman/bots-grammars | 64d3b3a3cd3bd95d625a82204c3d89db6934947c | a88a02355aa4ca900a7b527b16a1b0f78fbc220c | refs/heads/master | 2021-01-12T06:59:53.488468 | 2016-12-19T18:37:57 | 2016-12-19T18:37:57 | 76,887,027 | 0 | 0 | null | 2016-12-19T18:30:43 | 2016-12-19T18:30:43 | null | UTF-8 | Python | false | false | 1,326 | py | from bots.botsconfig import *
from records005030 import recorddefs
syntax = {
    'version' : '00403', #version of ISA to send
    'functionalgroup' : 'HS',
}

structure = [
    {ID: 'ST', MIN: 1, MAX: 1, LEVEL: [
        {ID: 'BHT', MIN: 1, MAX: 1},
        {ID: 'HL', MIN: 1, MAX: 99999, LEVEL: [
            {ID: 'TRN', MIN: 0, MAX: 9},
            {ID: 'NM1', MIN: 1, MAX: 99999, LEVEL: [
                {ID: 'REF', MIN: 0, MAX: 9},
                {ID: 'N2', MIN: 0, MAX: 1},
                {ID: 'N3', MIN: 0, MAX: 1},
                {ID: 'N4', MIN: 0, MAX: 1},
                {ID: 'PER', MIN: 0, MAX: 3},
                {ID: 'PRV', MIN: 0, MAX: 1},
                {ID: 'DMG', MIN: 0, MAX: 1},
                {ID: 'INS', MIN: 0, MAX: 1},
                {ID: 'HI', MIN: 0, MAX: 1},
                {ID: 'DTP', MIN: 0, MAX: 9},
                {ID: 'MPI', MIN: 0, MAX: 9},
                {ID: 'EQ', MIN: 0, MAX: 99, LEVEL: [
                    {ID: 'AMT', MIN: 0, MAX: 2},
                    {ID: 'VEH', MIN: 0, MAX: 1},
                    {ID: 'PDR', MIN: 0, MAX: 1},
                    {ID: 'PDP', MIN: 0, MAX: 1},
                    {ID: 'III', MIN: 0, MAX: 10},
                    {ID: 'REF', MIN: 0, MAX: 1},
                    {ID: 'DTP', MIN: 0, MAX: 9},
                ]},
            ]},
        ]},
        {ID: 'SE', MIN: 1, MAX: 1},
    ]},
]
| [
"jason.capriotti@gmail.com"
] | jason.capriotti@gmail.com |
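The `structure` tree above is a nested record grammar: each node carries a segment ID, occurrence bounds (MIN/MAX), and optionally a LEVEL of child records. A small sketch of how such a tree can be traversed, using simplified stand-in data rather than the real bots record definitions:

```python
ID, MIN, MAX, LEVEL = "ID", "MIN", "MAX", "LEVEL"

structure = [
    {ID: "ST", MIN: 1, MAX: 1, LEVEL: [
        {ID: "BHT", MIN: 1, MAX: 1},
        {ID: "HL", MIN: 1, MAX: 99999, LEVEL: [
            {ID: "TRN", MIN: 0, MAX: 9},
        ]},
        {ID: "SE", MIN: 1, MAX: 1},
    ]},
]

def segment_ids(nodes):
    """Depth-first walk yielding every segment ID in the grammar tree."""
    for node in nodes:
        yield node[ID]
        yield from segment_ids(node.get(LEVEL, []))

ids = list(segment_ids(structure))
```

The same walk generalizes to any check over the grammar, e.g. verifying that every node satisfies `MIN <= MAX`.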
ac339d72416eae3e5bbcc4fd5d4f226aa27e9c3d | 951a84f6fafa763ba74dc0ad6847aaf90f76023c | /P2/ZS103_2.py | ab42cb20c9766cacd43b87fbb837e2183593b473 | [] | no_license | SakuraGo/leetcodepython3 | 37258531f1994336151f8b5c8aec5139f1ba79f8 | 8cedddb997f4fb6048b53384ac014d933b6967ac | refs/heads/master | 2020-09-27T15:55:28.353433 | 2020-02-15T12:00:02 | 2020-02-15T12:00:02 | 226,550,406 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,780 | py | # 909. Snakes and Ladders
# On an N x N board, squares are numbered from 1 to N*N starting at the
# bottom-left corner, alternating direction on each row. For example, a
# 6 x 6 board is numbered boustrophedon-style.
# The player starts from square 1 (always in the last row, first column).
#
# Each move from square x consists of the following:
#
# Choose a destination square S numbered x+1, x+2, x+3, x+4, x+5, or x+6,
# as long as that number is <= N*N.
# If S holds a snake or ladder, you move to its destination; otherwise you move to S.
# The square at row r, column c holds a snake or ladder if board[r][c] != -1,
# in which case the destination is board[r][c].
#
# Note: you may ride at most one snake or ladder per move; even if the
# destination is the start of another snake or ladder, you do not move again.
#
# Return the minimum number of moves needed to reach square N*N, or -1 if it
# is unreachable.
from typing import List
class Solution:
    def num2pos(self, num: int, aa: int):
        # Convert a square number to (col, row), rows counted from the bottom.
        i = (num - 1) // aa
        if i % 2 == 0:
            j = (num - 1) % aa
        else:
            j = aa - 1 - (num - 1) % aa
        return j, i

    def pos2num(self, j: int, i: int, aa: int):
        # Convert (col, row) back to the square number.
        if i % 2 == 0:
            num = aa * i + j + 1
        else:
            num = aa * i + 1 + (aa - 1 - j)
        return num

    def snakesAndLadders(self, board: List[List[int]]) -> int:
        board = board[::-1]
        steps = [1, 2, 3, 4, 5, 6]
        aa = len(board)
        visited = [[0 for j in range(aa)] for i in range(aa)]
        que = []
        que.append((0, 0, 0))  # store (col, row, move count)
        visited[0][0] = 1
        while len(que) > 0:
            j, i, stepCnt = que.pop(0)
            oldPoint = self.pos2num(j, i, aa)
            for juli in steps:
                newPoint = oldPoint + juli
                if newPoint == aa * aa:  # reached the final square
                    return stepCnt + 1
                if newPoint > aa * aa:
                    continue
                newJ, newI = self.num2pos(newPoint, aa)
                if board[newI][newJ] != -1:
                    if board[newI][newJ] == aa * aa:  # snake/ladder lands on the goal
                        return stepCnt + 1
                    newJ, newI = self.num2pos(board[newI][newJ], aa)
                if visited[newI][newJ] == 0:
                    que.append((newJ, newI, stepCnt + 1))
                    visited[newI][newJ] = 1
        return -1
res = Solution().snakesAndLadders([[-1,-1,-1],[-1,9,8],[-1,8,9]])
print(res) | [
"452681917@qq.com"
] | 452681917@qq.com |
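The two helper methods above implement the boustrophedon numbering: squares count left-to-right on even rows (from the bottom) and right-to-left on odd rows, and the two maps are inverses. Restated as free functions for a standalone round-trip check:

```python
def num2pos(num, n):
    """Square number -> (col, row), rows counted from the bottom."""
    row = (num - 1) // n
    if row % 2 == 0:
        col = (num - 1) % n
    else:
        col = n - 1 - (num - 1) % n
    return col, row

def pos2num(col, row, n):
    """(col, row) -> square number on the same boustrophedon board."""
    if row % 2 == 0:
        return n * row + col + 1
    return n * row + 1 + (n - 1 - col)

# On a 3x3 board, square 1 is bottom-left and square 4 starts the second
# row from the right; every square should survive a round trip.
positions = [num2pos(k, 3) for k in range(1, 10)]
```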
dc64082107082a4f160588b00be0abeb5d516633 | 32eeb97dff5b1bf18cf5be2926b70bb322e5c1bd | /benchmark/uhabits/testcase/firstcases/testcase7_014.py | 2f2ec0fff145e0b56af49965d81293bb7e30f162 | [] | no_license | Prefest2018/Prefest | c374d0441d714fb90fca40226fe2875b41cf37fc | ac236987512889e822ea6686c5d2e5b66b295648 | refs/heads/master | 2021-12-09T19:36:24.554864 | 2021-12-06T12:46:14 | 2021-12-06T12:46:14 | 173,225,161 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,784 | py | #coding=utf-8
import os
import subprocess
import time
import traceback
from appium import webdriver
from appium.webdriver.common.touch_action import TouchAction
from selenium.common.exceptions import NoSuchElementException, WebDriverException
desired_caps = {
    'platformName' : 'Android',
    'deviceName' : 'Android Emulator',
    'platformVersion' : '4.4',
    'appPackage' : 'org.isoron.uhabits',
    'appActivity' : 'org.isoron.uhabits.activities.habits.list.ListHabitsActivity',
    'resetKeyboard' : True,
    'androidCoverage' : 'org.isoron.uhabits/org.isoron.uhabits.JacocoInstrumentation',
    'noReset' : True
}

def command(cmd, timeout=5):
    p = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=True)
    time.sleep(timeout)
    p.terminate()
    return

def getElememt(driver, str) :
    for i in range(0, 5, 1):
        try:
            element = driver.find_element_by_android_uiautomator(str)
        except NoSuchElementException:
            time.sleep(1)
        else:
            return element
    os.popen("adb shell input tap 50 50")
    element = driver.find_element_by_android_uiautomator(str)
    return element

def getElememtBack(driver, str1, str2) :
    for i in range(0, 2, 1):
        try:
            element = driver.find_element_by_android_uiautomator(str1)
        except NoSuchElementException:
            time.sleep(1)
        else:
            return element
    for i in range(0, 5, 1):
        try:
            element = driver.find_element_by_android_uiautomator(str2)
        except NoSuchElementException:
            time.sleep(1)
        else:
            return element
    os.popen("adb shell input tap 50 50")
    element = driver.find_element_by_android_uiautomator(str2)
    return element

def swipe(driver, startxper, startyper, endxper, endyper) :
    size = driver.get_window_size()
    width = size["width"]
    height = size["height"]
    try:
        driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
                     end_y=int(height * endyper), duration=2000)
    except WebDriverException:
        time.sleep(1)
        driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
                     end_y=int(height * endyper), duration=2000)
    return

# testcase014
try :
    starttime = time.time()
    driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
    driver.press_keycode(4)
except Exception, e:
    print 'FAIL'
    print 'str(e):\t\t', str(e)
    print 'repr(e):\t', repr(e)
    print traceback.format_exc()
else:
    print 'OK'
finally:
    cpackage = driver.current_package
    endtime = time.time()
    print 'consumed time:', str(endtime - starttime), 's'
    command("adb shell am broadcast -a com.example.pkg.END_EMMA --es name \"7_014\"")
    jacocotime = time.time()
    print 'jacoco time:', str(jacocotime - endtime), 's'
    driver.quit()
    if (cpackage != 'org.isoron.uhabits'):
        cpackage = "adb shell am force-stop " + cpackage
        os.popen(cpackage)
"prefest2018@gmail.com"
] | prefest2018@gmail.com |
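The `getElememt` helpers above poll for a UI element, sleeping and retrying before falling back to a wake-up tap and one final lookup. That retry-then-fallback shape can be factored into a generic helper; this is a sketch, not part of the original test suite:

```python
import time

def retry(fn, attempts=5, delay=0.01, fallback=None, exc=Exception):
    """Call fn() up to `attempts` times, sleeping between failures; then
    optionally run a fallback action and make one last attempt."""
    for _ in range(attempts):
        try:
            return fn()
        except exc:
            time.sleep(delay)
    if fallback is not None:
        fallback()
    return fn()  # final try: let the exception propagate if it still fails

# Simulate an element that only appears on the third lookup.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LookupError("not found yet")
    return "element"

found = retry(flaky_lookup, attempts=5, delay=0, exc=LookupError)
```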
8d949557ef10722cedbae45f3fd2a9d282ab3f43 | e588da296dd6ec3bedee9d24444dfca6e8780aef | /zip.py | 28af78f6e23bf9a9807804c1dec5fd9415d28c26 | [] | no_license | sujith1919/TCS-Python | 98eac61a02500a0e8f3139e431c98a509828c867 | c988cf078616540fe7f56e3ebdfd964aebd14519 | refs/heads/master | 2023-03-02T09:03:10.052633 | 2021-02-02T16:40:18 | 2021-02-02T16:40:18 | 335,355,862 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 298 | py | import zipfile
# Create zip file
f = zipfile.ZipFile('test.zip', 'w')
# add some files
f.write('file1.txt')
# add file as a new name
f.write('file2.txt', 'file-two.txt')
# add content from program (string)
f.writestr('file3.txt', 'Hello how are you')
# flush and close
f.close() | [
"jayarajan.sujith@oracle.com"
] | jayarajan.sujith@oracle.com |
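The script above assumes `file1.txt` and `file2.txt` already exist on disk. Reading an archive back uses the same `zipfile` API; the sketch below builds the archive entirely in memory with `writestr` so it runs anywhere:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('file-two.txt', 'contents of file two')
    zf.writestr('file3.txt', 'Hello how are you')

# Reopen the same buffer for reading and inspect the archive.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    greeting = zf.read('file3.txt').decode()
```

Using the `with` statement replaces the explicit `f.close()` from the script above: the archive is flushed and closed when the block exits.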
a759ee12bac0d12f6a9906296363dc9ddf2ce2e0 | 9e988c0dfbea15cd23a3de860cb0c88c3dcdbd97 | /sdBs/AllRun/sbss_1219+551a/sdB_sbss_1219+551a_coadd.py | cda740b4da8b1b88c24b94f699a32d9f50cfa24e | [] | no_license | tboudreaux/SummerSTScICode | 73b2e5839b10c0bf733808f4316d34be91c5a3bd | 4dd1ffbb09e0a599257d21872f9d62b5420028b0 | refs/heads/master | 2021-01-20T18:07:44.723496 | 2016-08-08T16:49:53 | 2016-08-08T16:49:53 | 65,221,159 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 442 | py | from gPhoton.gMap import gMap
def main():
    gMap(band="NUV", skypos=[185.475792,54.846639], skyrange=[0.0333333333333,0.0333333333333], stepsz = 30., cntfile="/data2/fleming/GPHOTON_OUTPUT/LIGHTCURVES/sdBs/sdB_sbss_1219+551a/sdB_sbss_1219+551a_movie_count.fits", cntcoaddfile="/data2/fleming/GPHOTON_OUTPUT/LIGHTCURVES/sdB/sdB_sbss_1219+551a/sdB_sbss_1219+551a_count_coadd.fits", overwrite=True, verbose=3)

if __name__ == "__main__":
    main()
| [
"thomas@boudreauxmail.com"
] | thomas@boudreauxmail.com |
5b0fcb23f5d7e62a3e96dbbe59aea8ed206d81ac | 65329299fca8dcf2e204132624d9b0f8f8f39af7 | /napalm_yang/models/openconfig/network_instances/network_instance/protocols/protocol/bgp/global_/afi_safis/afi_safi/use_multiple_paths/__init__.py | b00cd176160b104741be976eb6b48c1a6149a998 | [
"Apache-2.0"
] | permissive | darylturner/napalm-yang | bf30420e22d8926efdc0705165ed0441545cdacf | b14946b884ad2019b896ee151285900c89653f44 | refs/heads/master | 2021-05-14T12:17:37.424659 | 2017-11-17T07:32:49 | 2017-11-17T07:32:49 | 116,404,171 | 0 | 0 | null | 2018-01-05T16:21:37 | 2018-01-05T16:21:36 | null | UTF-8 | Python | false | false | 25,284 | py |
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import config
import state
import ebgp
import ibgp

class use_multiple_paths(PybindBase):
  """
  This class was auto-generated by the PythonClass plugin for PYANG
  from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/bgp/global/afi-safis/afi-safi/use-multiple-paths. Each member element of
  the container is represented as a class variable - with a specific
  YANG type.

  YANG Description: Parameters related to the use of multiple paths for the
  same NLRI
  """
  __slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_extmethods', '__config','__state','__ebgp','__ibgp',)

  _yang_name = 'use-multiple-paths'
  _pybind_generated_by = 'container'

  def __init__(self, *args, **kwargs):
    self._path_helper = False
    self._extmethods = False
    self.__ibgp = YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__ebgp = YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__state = YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__config = YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

    load = kwargs.pop("load", None)
    if args:
      if len(args) > 1:
        raise TypeError("cannot create a YANG container with >1 argument")
      all_attr = True
      for e in self._pyangbind_elements:
        if not hasattr(args[0], e):
          all_attr = False
          break
      if not all_attr:
        raise ValueError("Supplied object did not have the correct attributes")
      for e in self._pyangbind_elements:
        nobj = getattr(args[0], e)
        if nobj._changed() is False:
          continue
        setmethod = getattr(self, "_set_%s" % e)
        if load is None:
          setmethod(getattr(args[0], e))
        else:
          setmethod(getattr(args[0], e), load=load)

  def _path(self):
    if hasattr(self, "_parent"):
      return self._parent._path()+[self._yang_name]
    else:
      return [u'network-instances', u'network-instance', u'protocols', u'protocol', u'bgp', u'global', u'afi-safis', u'afi-safi', u'use-multiple-paths']

  def _get_config(self):
    """
    Getter method for config, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/config (container)

    YANG Description: Configuration parameters relating to multipath
    """
    return self.__config

  def _set_config(self, v, load=False):
    """
    Setter method for config, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/config (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_config is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_config() directly.

    YANG Description: Configuration parameters relating to multipath
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """config must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__config = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_config(self):
    self.__config = YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_state(self):
    """
    Getter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/state (container)

    YANG Description: State parameters relating to multipath
    """
    return self.__state

  def _set_state(self, v, load=False):
    """
    Setter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/state (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_state is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_state() directly.

    YANG Description: State parameters relating to multipath
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """state must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__state = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_state(self):
    self.__state = YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_ebgp(self):
    """
    Getter method for ebgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ebgp (container)

    YANG Description: Multipath parameters for eBGP
    """
    return self.__ebgp

  def _set_ebgp(self, v, load=False):
    """
    Setter method for ebgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ebgp (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ebgp is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ebgp() directly.

    YANG Description: Multipath parameters for eBGP
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ebgp must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__ebgp = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ebgp(self):
    self.__ebgp = YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_ibgp(self):
    """
    Getter method for ibgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ibgp (container)

    YANG Description: Multipath parameters for iBGP
    """
    return self.__ibgp

  def _set_ibgp(self, v, load=False):
    """
    Setter method for ibgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ibgp (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ibgp is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ibgp() directly.

    YANG Description: Multipath parameters for iBGP
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ibgp must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__ibgp = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ibgp(self):
    self.__ibgp = YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  config = __builtin__.property(_get_config, _set_config)
  state = __builtin__.property(_get_state, _set_state)
  ebgp = __builtin__.property(_get_ebgp, _set_ebgp)
  ibgp = __builtin__.property(_get_ibgp, _set_ibgp)

  _pyangbind_elements = {'config': config, 'state': state, 'ebgp': ebgp, 'ibgp': ibgp, }
import config
import state
import ebgp
import ibgp

class use_multiple_paths(PybindBase):
  """
  This class was auto-generated by the PythonClass plugin for PYANG
  from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/bgp/global/afi-safis/afi-safi/use-multiple-paths. Each member element of
  the container is represented as a class variable - with a specific
  YANG type.

  YANG Description: Parameters related to the use of multiple paths for the
  same NLRI
  """
  __slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_extmethods', '__config','__state','__ebgp','__ibgp',)

  _yang_name = 'use-multiple-paths'
  _pybind_generated_by = 'container'

  def __init__(self, *args, **kwargs):
    self._path_helper = False
    self._extmethods = False
    self.__ibgp = YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__ebgp = YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__state = YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    self.__config = YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

    load = kwargs.pop("load", None)
    if args:
      if len(args) > 1:
        raise TypeError("cannot create a YANG container with >1 argument")
      all_attr = True
      for e in self._pyangbind_elements:
        if not hasattr(args[0], e):
          all_attr = False
          break
      if not all_attr:
        raise ValueError("Supplied object did not have the correct attributes")
      for e in self._pyangbind_elements:
        nobj = getattr(args[0], e)
        if nobj._changed() is False:
          continue
        setmethod = getattr(self, "_set_%s" % e)
        if load is None:
          setmethod(getattr(args[0], e))
        else:
          setmethod(getattr(args[0], e), load=load)

  def _path(self):
    if hasattr(self, "_parent"):
      return self._parent._path()+[self._yang_name]
    else:
      return [u'network-instances', u'network-instance', u'protocols', u'protocol', u'bgp', u'global', u'afi-safis', u'afi-safi', u'use-multiple-paths']

  def _get_config(self):
    """
    Getter method for config, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/config (container)

    YANG Description: Configuration parameters relating to multipath
    """
    return self.__config

  def _set_config(self, v, load=False):
    """
    Setter method for config, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/config (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_config is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_config() directly.

    YANG Description: Configuration parameters relating to multipath
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """config must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__config = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_config(self):
    self.__config = YANGDynClass(base=config.config, is_container='container', yang_name="config", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_state(self):
    """
    Getter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/state (container)

    YANG Description: State parameters relating to multipath
    """
    return self.__state

  def _set_state(self, v, load=False):
    """
    Setter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/state (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_state is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_state() directly.

    YANG Description: State parameters relating to multipath
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """state must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__state = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_state(self):
    self.__state = YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_ebgp(self):
    """
    Getter method for ebgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ebgp (container)

    YANG Description: Multipath parameters for eBGP
    """
    return self.__ebgp

  def _set_ebgp(self, v, load=False):
    """
    Setter method for ebgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ebgp (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ebgp is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ebgp() directly.

    YANG Description: Multipath parameters for eBGP
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ebgp must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__ebgp = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ebgp(self):
    self.__ebgp = YANGDynClass(base=ebgp.ebgp, is_container='container', yang_name="ebgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  def _get_ibgp(self):
    """
    Getter method for ibgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ibgp (container)

    YANG Description: Multipath parameters for iBGP
    """
    return self.__ibgp

  def _set_ibgp(self, v, load=False):
    """
    Setter method for ibgp, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/global/afi_safis/afi_safi/use_multiple_paths/ibgp (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ibgp is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ibgp() directly.

    YANG Description: Multipath parameters for iBGP
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ibgp must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)""",
        })

    self.__ibgp = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ibgp(self):
    self.__ibgp = YANGDynClass(base=ibgp.ibgp, is_container='container', yang_name="ibgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=True)

  config = __builtin__.property(_get_config, _set_config)
  state = __builtin__.property(_get_state, _set_state)
  ebgp = __builtin__.property(_get_ebgp, _set_ebgp)
  ibgp = __builtin__.property(_get_ibgp, _set_ibgp)

  _pyangbind_elements = {'config': config, 'state': state, 'ebgp': ebgp, 'ibgp': ibgp, }
| [
"dbarrosop@dravetech.com"
] | dbarrosop@dravetech.com |
81a2c8da318194cf868c51bd344f3defcc47b246 | 9d611e18ef40e96ed852f2b7cf7842dc3de33e18 | /examples/django_demo/mysite/settings.py | 8b59f1fc45b7f0e842aa521c47d8b8fad7bd049f | [] | no_license | dantezhu/xstat | 03cc39923d3d33a75e2a38801ee5f7eabf3b33cf | cc92e1d860abdaf6e4304127edfe595e3fcbf308 | refs/heads/master | 2023-04-13T15:23:02.398594 | 2023-04-08T17:21:26 | 2023-04-08T17:21:26 | 34,090,855 | 1 | 2 | null | 2018-06-02T07:50:37 | 2015-04-17T01:52:58 | Python | UTF-8 | Python | false | false | 2,073 | py | """
Django settings for mysite project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'qwjbg&!+mkd&i-p%2^!vb!m4@^=*2(tp4k2b3e$h_x%no@6^h2'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'test_stat',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'xstat.DjangoStat',
)
ROOT_URLCONF = 'mysite.urls'
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/'
# stat
XSTAT_TITLE = 'dante.test'
XSTAT_HOST = '127.0.0.1'
| [
"dantezhu@qq.com"
] | dantezhu@qq.com |
95a39df1b8dec23d1064ab63236ad790d07a8ed9 | 3b84c4b7b16ccfd0154f8dcb75ddbbb6636373be | /google-cloud-sdk/lib/googlecloudsdk/third_party/apis/ml/v1alpha3/ml_v1alpha3_client.py | c537f1ec3c1707fd84ca86c65aaa15c0a138636f | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | twistedpair/google-cloud-sdk | 37f04872cf1ab9c9ce5ec692d2201a93679827e3 | 1f9b424c40a87b46656fc9f5e2e9c81895c7e614 | refs/heads/master | 2023-08-18T18:42:59.622485 | 2023-08-15T00:00:00 | 2023-08-15T12:14:05 | 116,506,777 | 58 | 24 | null | 2022-02-14T22:01:53 | 2018-01-06T18:40:35 | Python | UTF-8 | Python | false | false | 23,755 | py | """Generated client library for ml version v1alpha3."""
# NOTE: This file is autogenerated and should not be edited by hand.
from apitools.base.py import base_api
from googlecloudsdk.third_party.apis.ml.v1alpha3 import ml_v1alpha3_messages as messages
class MlV1alpha3(base_api.BaseApiClient):
"""Generated client library for service ml version v1alpha3."""
MESSAGES_MODULE = messages
BASE_URL = u'https://ml.googleapis.com/'
_PACKAGE = u'ml'
_SCOPES = [u'https://www.googleapis.com/auth/cloud-platform']
_VERSION = u'v1alpha3'
_CLIENT_ID = '1042881264118.apps.googleusercontent.com'
_CLIENT_SECRET = 'x_Tw5K8nnjoRAqULM9PFAC2b'
_USER_AGENT = 'x_Tw5K8nnjoRAqULM9PFAC2b'
_CLIENT_CLASS_NAME = u'MlV1alpha3'
_URL_VERSION = u'v1alpha3'
_API_KEY = None
def __init__(self, url='', credentials=None,
get_credentials=True, http=None, model=None,
log_request=False, log_response=False,
credentials_args=None, default_global_params=None,
additional_http_headers=None):
"""Create a new ml handle."""
url = url or self.BASE_URL
super(MlV1alpha3, self).__init__(
url, credentials=credentials,
get_credentials=get_credentials, http=http, model=model,
log_request=log_request, log_response=log_response,
credentials_args=credentials_args,
default_global_params=default_global_params,
additional_http_headers=additional_http_headers)
self.projects_models_versions = self.ProjectsModelsVersionsService(self)
self.projects_models = self.ProjectsModelsService(self)
self.projects_operations = self.ProjectsOperationsService(self)
self.projects = self.ProjectsService(self)
class ProjectsModelsVersionsService(base_api.BaseApiService):
"""Service class for the projects_models_versions resource."""
_NAME = u'projects_models_versions'
def __init__(self, client):
super(MlV1alpha3.ProjectsModelsVersionsService, self).__init__(client)
self._upload_configs = {
}
def Delete(self, request, global_params=None):
"""Delete a version.
Args:
request: (MlProjectsModelsVersionsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleProtobufEmpty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'DELETE',
method_id=u'ml.projects.models.versions.delete',
ordered_params=[u'projectsId', u'modelsId', u'versionsId'],
path_params=[u'modelsId', u'projectsId', u'versionsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}/versions/{versionsId}',
request_field='',
request_type_name=u'MlProjectsModelsVersionsDeleteRequest',
response_type_name=u'GoogleProtobufEmpty',
supports_download=False,
)
def Get(self, request, global_params=None):
"""Get version metadata.
Args:
request: (MlProjectsModelsVersionsGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3Version) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.models.versions.get',
ordered_params=[u'projectsId', u'modelsId', u'versionsId'],
path_params=[u'modelsId', u'projectsId', u'versionsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}/versions/{versionsId}',
request_field='',
request_type_name=u'MlProjectsModelsVersionsGetRequest',
response_type_name=u'GoogleCloudMlV1alpha3Version',
supports_download=False,
)
def List(self, request, global_params=None):
"""List versions in the model.
Args:
request: (MlProjectsModelsVersionsListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3ListVersionsResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.models.versions.list',
ordered_params=[u'projectsId', u'modelsId'],
path_params=[u'modelsId', u'projectsId'],
query_params=[u'filter', u'orderBy', u'pageSize', u'pageToken'],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}/versions',
request_field='',
request_type_name=u'MlProjectsModelsVersionsListRequest',
response_type_name=u'GoogleCloudMlV1alpha3ListVersionsResponse',
supports_download=False,
)
def SetDefault(self, request, global_params=None):
"""Mark the version as default within the model.
Args:
request: (MlProjectsModelsVersionsSetDefaultRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3Version) The response message.
"""
config = self.GetMethodConfig('SetDefault')
return self._RunMethod(
config, request, global_params=global_params)
SetDefault.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.models.versions.setDefault',
ordered_params=[u'projectsId', u'modelsId', u'versionsId'],
path_params=[u'modelsId', u'projectsId', u'versionsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}/versions/{versionsId}:setDefault',
request_field=u'googleCloudMlV1alpha3SetDefaultVersionRequest',
request_type_name=u'MlProjectsModelsVersionsSetDefaultRequest',
response_type_name=u'GoogleCloudMlV1alpha3Version',
supports_download=False,
)
class ProjectsModelsService(base_api.BaseApiService):
"""Service class for the projects_models resource."""
_NAME = u'projects_models'
def __init__(self, client):
super(MlV1alpha3.ProjectsModelsService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
"""Create a model which will later contain a set of model versions.
Args:
request: (MlProjectsModelsCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3Model) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.models.create',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models',
request_field=u'googleCloudMlV1alpha3Model',
request_type_name=u'MlProjectsModelsCreateRequest',
response_type_name=u'GoogleCloudMlV1alpha3Model',
supports_download=False,
)
def CreateVersion(self, request, global_params=None):
"""Upload a trained TensorFlow model version. The result of the operation.
is a Version..
Args:
request: (MlProjectsModelsCreateVersionRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleLongrunningOperation) The response message.
"""
config = self.GetMethodConfig('CreateVersion')
return self._RunMethod(
config, request, global_params=global_params)
CreateVersion.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.models.createVersion',
ordered_params=[u'projectsId', u'modelsId'],
path_params=[u'modelsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}',
request_field=u'googleCloudMlV1alpha3Version',
request_type_name=u'MlProjectsModelsCreateVersionRequest',
response_type_name=u'GoogleLongrunningOperation',
supports_download=False,
)
def Delete(self, request, global_params=None):
"""Delete the model and all versions in it.
Args:
request: (MlProjectsModelsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleProtobufEmpty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'DELETE',
method_id=u'ml.projects.models.delete',
ordered_params=[u'projectsId', u'modelsId'],
path_params=[u'modelsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}',
request_field='',
request_type_name=u'MlProjectsModelsDeleteRequest',
response_type_name=u'GoogleProtobufEmpty',
supports_download=False,
)
def Get(self, request, global_params=None):
"""Describe a model and versions in it.
Args:
request: (MlProjectsModelsGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3Model) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.models.get',
ordered_params=[u'projectsId', u'modelsId'],
path_params=[u'modelsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/models/{modelsId}',
request_field='',
request_type_name=u'MlProjectsModelsGetRequest',
response_type_name=u'GoogleCloudMlV1alpha3Model',
supports_download=False,
)
def List(self, request, global_params=None):
"""List models in the project.
Args:
request: (MlProjectsModelsListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3ListModelsResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.models.list',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[u'filter', u'orderBy', u'pageSize', u'pageToken'],
relative_path=u'v1alpha3/projects/{projectsId}/models',
request_field='',
request_type_name=u'MlProjectsModelsListRequest',
response_type_name=u'GoogleCloudMlV1alpha3ListModelsResponse',
supports_download=False,
)
class ProjectsOperationsService(base_api.BaseApiService):
"""Service class for the projects_operations resource."""
_NAME = u'projects_operations'
def __init__(self, client):
super(MlV1alpha3.ProjectsOperationsService, self).__init__(client)
self._upload_configs = {
}
def Cancel(self, request, global_params=None):
"""Starts asynchronous cancellation on a long-running operation. The server.
makes a best effort to cancel the operation, but success is not
guaranteed. If the server doesn't support this method, it returns
`google.rpc.Code.UNIMPLEMENTED`. Clients can use
Operations.GetOperation or
other methods to check whether the cancellation succeeded or whether the
operation completed despite cancellation.
Args:
request: (MlProjectsOperationsCancelRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleProtobufEmpty) The response message.
"""
config = self.GetMethodConfig('Cancel')
return self._RunMethod(
config, request, global_params=global_params)
Cancel.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.operations.cancel',
ordered_params=[u'projectsId', u'operationsId'],
path_params=[u'operationsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/operations/{operationsId}:cancel',
request_field='',
request_type_name=u'MlProjectsOperationsCancelRequest',
response_type_name=u'GoogleProtobufEmpty',
supports_download=False,
)
def Delete(self, request, global_params=None):
"""Deletes a long-running operation. This method indicates that the client is.
no longer interested in the operation result. It does not cancel the
operation. If the server doesn't support this method, it returns
`google.rpc.Code.UNIMPLEMENTED`.
Args:
request: (MlProjectsOperationsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleProtobufEmpty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'DELETE',
method_id=u'ml.projects.operations.delete',
ordered_params=[u'projectsId', u'operationsId'],
path_params=[u'operationsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/operations/{operationsId}',
request_field='',
request_type_name=u'MlProjectsOperationsDeleteRequest',
response_type_name=u'GoogleProtobufEmpty',
supports_download=False,
)
def Get(self, request, global_params=None):
"""Gets the latest state of a long-running operation. Clients can use this.
method to poll the operation result at intervals as recommended by the API
service.
Args:
request: (MlProjectsOperationsGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleLongrunningOperation) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.operations.get',
ordered_params=[u'projectsId', u'operationsId'],
path_params=[u'operationsId', u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/operations/{operationsId}',
request_field='',
request_type_name=u'MlProjectsOperationsGetRequest',
response_type_name=u'GoogleLongrunningOperation',
supports_download=False,
)
def List(self, request, global_params=None):
"""Lists operations that match the specified filter in the request. If the.
server doesn't support this method, it returns `UNIMPLEMENTED`.
NOTE: the `name` binding below allows API services to override the binding
to use different resource name schemes, such as `users/*/operations`.
Args:
request: (MlProjectsOperationsListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleLongrunningListOperationsResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.operations.list',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[u'filter', u'pageSize', u'pageToken'],
relative_path=u'v1alpha3/projects/{projectsId}/operations',
request_field='',
request_type_name=u'MlProjectsOperationsListRequest',
response_type_name=u'GoogleLongrunningListOperationsResponse',
supports_download=False,
)
class ProjectsService(base_api.BaseApiService):
"""Service class for the projects resource."""
_NAME = u'projects'
def __init__(self, client):
super(MlV1alpha3.ProjectsService, self).__init__(client)
self._upload_configs = {
}
def GetConfig(self, request, global_params=None):
"""Get the service config associated with a given project.
Args:
request: (MlProjectsGetConfigRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3GetConfigResponse) The response message.
"""
config = self.GetMethodConfig('GetConfig')
return self._RunMethod(
config, request, global_params=global_params)
GetConfig.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'ml.projects.getConfig',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}:getConfig',
request_field='',
request_type_name=u'MlProjectsGetConfigRequest',
response_type_name=u'GoogleCloudMlV1alpha3GetConfigResponse',
supports_download=False,
)
def Hyperparameters(self, request, global_params=None):
"""Get the hyperparameters assigned to the given run.
Args:
request: (MlProjectsHyperparametersRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3GetHyperparametersResponse) The response message.
"""
config = self.GetMethodConfig('Hyperparameters')
return self._RunMethod(
config, request, global_params=global_params)
Hyperparameters.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.hyperparameters',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}/hyperparameters',
request_field=u'googleCloudMlV1alpha3GetHyperparametersRequest',
request_type_name=u'MlProjectsHyperparametersRequest',
response_type_name=u'GoogleCloudMlV1alpha3GetHyperparametersResponse',
supports_download=False,
)
def Predict(self, request, global_params=None):
"""Performs prediction on the data in the request.
Args:
request: (MlProjectsPredictRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudMlV1alpha3PredictResponse) The response message.
"""
config = self.GetMethodConfig('Predict')
return self._RunMethod(
config, request, global_params=global_params)
Predict.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.predict',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}:predict',
request_field=u'googleCloudMlV1alpha3PredictRequest',
request_type_name=u'MlProjectsPredictRequest',
response_type_name=u'GoogleCloudMlV1alpha3PredictResponse',
supports_download=False,
)
def ReportMetric(self, request, global_params=None):
"""Report the progress of a Training Job.
Args:
request: (MlProjectsReportMetricRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleProtobufEmpty) The response message.
"""
config = self.GetMethodConfig('ReportMetric')
return self._RunMethod(
config, request, global_params=global_params)
ReportMetric.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.reportMetric',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}:reportMetric',
request_field=u'googleCloudMlV1alpha3ReportMetricRequest',
request_type_name=u'MlProjectsReportMetricRequest',
response_type_name=u'GoogleProtobufEmpty',
supports_download=False,
)
def SubmitPredictionJob(self, request, global_params=None):
"""Performs batch prediction on the files specified in the request.
JobMetadata and will contain PredictionJobResult when completed.
Args:
request: (MlProjectsSubmitPredictionJobRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleLongrunningOperation) The response message.
"""
config = self.GetMethodConfig('SubmitPredictionJob')
return self._RunMethod(
config, request, global_params=global_params)
SubmitPredictionJob.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.submitPredictionJob',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}:submitPredictionJob',
request_field=u'googleCloudMlV1alpha3SubmitPredictionJobRequest',
request_type_name=u'MlProjectsSubmitPredictionJobRequest',
response_type_name=u'GoogleLongrunningOperation',
supports_download=False,
)
def SubmitTrainingJob(self, request, global_params=None):
"""Create a training job. The resulting operation contains.
JobMetadata and will contain JobResult when completed.
Args:
request: (MlProjectsSubmitTrainingJobRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleLongrunningOperation) The response message.
"""
config = self.GetMethodConfig('SubmitTrainingJob')
return self._RunMethod(
config, request, global_params=global_params)
SubmitTrainingJob.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'ml.projects.submitTrainingJob',
ordered_params=[u'projectsId'],
path_params=[u'projectsId'],
query_params=[],
relative_path=u'v1alpha3/projects/{projectsId}:submitTrainingJob',
request_field=u'googleCloudMlV1alpha3SubmitTrainingJobRequest',
request_type_name=u'MlProjectsSubmitTrainingJobRequest',
response_type_name=u'GoogleLongrunningOperation',
supports_download=False,
)
| [
"joe@longreen.io"
] | joe@longreen.io |
22d637c796c8f43e52c4175e5c7ca2bbe816a6d7 | 23eb09e0054f64b53fb447ec14449a63bc0a092d | /misc_Functions.py | c9561a411fa573151bbff8450b9e50d892d340d5 | [] | no_license | bopopescu/External-Ports | 180574c51abd7b4672864469fbf52a5ca14fc72d | bc518f6738f42aa70217af9c39c2f327d03b52b2 | refs/heads/master | 2022-11-29T14:34:07.663767 | 2020-03-12T20:36:49 | 2020-03-12T20:36:49 | 281,930,192 | 0 | 0 | null | 2020-07-23T11:14:29 | 2020-07-23T11:14:28 | null | UTF-8 | Python | false | false | 15,946 | py | # -*- coding: utf-8 -*-
"""
Created on Wed Aug 09 14:11:14 2017
@author: v-stpurc
"""
from openpyxl import load_workbook
def reversed_string(a_string):
return a_string[::-1]
def parseFileName(fileName):
str_Length = len(fileName)
str_rev = reversed_string(fileName)
# rev_dot_index = str_rev.index('.')
dot_Index = fileName.index('.')
file_Extension = fileName[dot_Index + 1:]
rev_Slash_Index = str_rev.index("/")
slash_Index = str_Length - rev_Slash_Index
file_Location = fileName[:slash_Index]
file_Name = fileName[slash_Index:dot_Index]
return fileName, file_Name, file_Location, file_Extension
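As an illustrative sketch (not part of the module), the same decomposition that `parseFileName` performs — full path, bare name, directory with trailing slash, extension without the dot — can be expressed with `os.path`; the file name below is a made-up example:

```python
import os

def parse_file_name(file_path):
    """Split a path the way parseFileName does: returns the full path,
    the bare file name, the directory (with trailing slash), and the
    extension (without the leading dot)."""
    directory, base = os.path.split(file_path)
    name, ext = os.path.splitext(base)
    return file_path, name, directory + "/", ext.lstrip(".")

print(parse_file_name("C:/tests/port_config.xlsx"))
# → ('C:/tests/port_config.xlsx', 'port_config', 'C:/tests/', 'xlsx')
```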
def loadTestSequence(fileName):
PortMap = {}
TestList = {}
file_Path, file_Name, file_Location, file_Extension = parseFileName(fileName)
TestList[0] = {}
TestList[0]['file_Path'] = file_Path
TestList[0]['file_Location'] = file_Location
TestList[0]['file_Name'] = file_Name
TestList[0]['file_Extension'] = file_Extension
configWorkBook = load_workbook(filename = fileName, read_only=True, data_only=True)
#Config work book must have tabs matching these names
#New code will hard code all this stuff, we don't need the functionality David created. That was made for a whole test rack for MTE
seq_sheet = configWorkBook.get_sheet_by_name(name = 'Test_Sequence')
map_sheet = configWorkBook.get_sheet_by_name(name = 'Port_Load_Map')
#Look for header cell on Port Map sheet
#Port Map defines the port names and what individual loads in the Chroma mainframe they are attached to
for loop in range(1, 30): #Search for "*header*" cell row
value = map_sheet.cell(row=loop, column=1).value
if value == "*header*":
map_header_row = loop + 1
break
for loop in range(1, 10): #Search for columns
value = map_sheet.cell(row=map_header_row, column=loop).value
#Frame Num is used for multiple Chroma Mainframes
if value == "Frame Num":
map_frame_col = loop
#Load Num is the load number within each Chroma Mainframe
elif value == "Load Num":
map_load_col = loop
#Console Num is the xbox console under test
elif value == "Console Num":
map_console_col = loop
#Port Name is the individual port on the xbox for testing
elif value == "Port Name":
map_port_col = loop
#Can test up to 40 ports, any Chroma mainframe, any xbox
#A port is a unique output on a specific xbox and is represented by a
#unique combination of Frame Num, Load Num, and Console Num
max_number_ports = 40
port_count = 0
is_okay = True
#This loop counts ports
#A port is a unique combination of Chroma mainframe, load number, console number
for loop in range(map_header_row + 1, max_number_ports + map_header_row + 1):
frame = map_sheet.cell(row=loop, column=map_frame_col).value
channel = map_sheet.cell(row=loop, column=map_load_col).value
console = map_sheet.cell(row=loop, column=map_console_col).value
port = map_sheet.cell(row=loop, column=map_port_col).value
if (frame and channel and console and port): # no empty cells in row
port_count = port_count + 1
ThisPort = {'Frame':frame, 'Channel':channel, 'Console':console, 'Port':port}
#PortMap is a list of dictionaries each key in PortMap represents a port or a line on the Port_Load_Map tab
PortMap[port_count] = ThisPort
            #Scan for duplicates went here. For full implementation refer to the Lua source code by David Furhman.
else:
print "Number of ports " + str(port_count)
#break will skip over is_okay setting
break
if port_count == 0:
is_okay = False
#Search for header cell on Test Sequence sheet
#The test sequence sheet defines the test parameters for each step of the test
for loop in range(1, 30): #Search for "*header*" cell row
value = seq_sheet.cell(row=loop, column=1).value
if value == "*header*":
seq_header_row = loop + 1
break
seq_step_num_col = None
seq_console_col = None
seq_port_col = None
seq_test_id_col = None
seq_amps_col = None
seq_duration_col = None
seq_trigger_col = None
seq_trigger_source_col = None
seq_volts_open_min_col = None
seq_volts_open_max_col = None
seq_volts_loaded_min_col = None
seq_volts_loaded_max_col = None
seq_delay_time_col = None
seq_write_csv_col = None
seq_excel_command_col = None
seq_post_process_col = None
seq_slew_rate_col = None
seq_rise_slew_rate_col = None
seq_fall_slew_rate_col = None
seq_pulse_t1_col = None
seq_pulse_t1_amps_col = None
seq_pulse_t2_col = None
seq_pulse_count_col = None
#Column names in config spreadsheet must match these although they may not all be required
#Probably these would never need to change so they could be hard coded
for loop in range(1, 100): #Search for columns
value = seq_sheet.cell(row=seq_header_row, column=loop).value
if value == "Step Num":
seq_step_num_col = loop
elif value == "Console":
seq_console_col = loop
elif value == "Port":
seq_port_col = loop
elif value == "Test ID":
seq_test_id_col = loop
elif value == "Amps":
seq_amps_col = loop
elif value == "Duration":
seq_duration_col = loop
elif value == "Trigger":
seq_trigger_col = loop
elif value == "Trigger Source":
seq_trigger_source_col = loop
elif value == "Volts Open Min":
seq_volts_open_min_col = loop
elif value == "Volts Open Max":
seq_volts_open_max_col = loop
elif value == "Volts Loaded Min":
seq_volts_loaded_min_col = loop
elif value == "Volts Loaded Max":
seq_volts_loaded_max_col = loop
elif value == "Delay Time":
seq_delay_time_col = loop
elif value == "Raw CSV?":
seq_write_csv_col = loop
elif value == "Excel New":
seq_excel_command_col = loop
elif value == "Post Process?":
seq_post_process_col = loop
elif value == "Slew Rate":
seq_slew_rate_col = loop
elif value == "Rise Slew Rate":
seq_rise_slew_rate_col = loop
elif value == "Fall Slew Rate":
seq_fall_slew_rate_col = loop
elif value == "Pulse T1":
seq_pulse_t1_col = loop
elif value == "Pulse T1 Amps":
seq_pulse_t1_amps_col = loop
elif value == "Pulse T2":
seq_pulse_t2_col = loop
elif value == "Pulse Count":
seq_pulse_count_col = loop
#This is the max number of steps the code will do
#This value would represent 100 individual port tests
#or some lower number of channels combined into group tests
#For example with five ports grouped 100 would represent 20 individual test runs.
max_number_tests = 100
step_count = 0
#Go through each step in the test and get values from columns
for loop in range (seq_header_row + 1, max_number_tests + seq_header_row + 1):
test_id = None
step_num = seq_sheet.cell(row=loop, column=seq_step_num_col).value
#Probably can hard code this to 1 since in our testing we will only ever have one console.
console = seq_sheet.cell(row=loop, column=seq_console_col).value
#port is a string
port = seq_sheet.cell(row=loop, column=seq_port_col).value
test_id = seq_sheet.cell(row=loop, column=seq_test_id_col).value
amps = seq_sheet.cell(row=loop, column=seq_amps_col).value
duration = seq_sheet.cell(row=loop, column=seq_duration_col).value
#trigger is the time to wait after turning on load before triggering the measurement
#This is converted to a trigger sample later in the code
trigger = seq_sheet.cell(row=loop, column=seq_trigger_col).value
#Some sheets don't have a trigger source column, so default it to avoid a
#NameError when it is assigned into TestList below.
trigger_source = None
if seq_trigger_source_col != None:
trigger_source = seq_sheet.cell(row=loop, column=seq_trigger_source_col).value
#Default the pulse values so the TestList assignments below cannot hit a
#NameError when the optional pulse columns are absent from the sheet.
pulse_t1 = pulse_t1_amps = pulse_t2 = pulse_count = None
if seq_pulse_t1_col != None: # all pulse columns must be there, so read them all (except pulse_count which is optional)
pulse_t1 = seq_sheet.cell(row=loop, column=seq_pulse_t1_col).value
pulse_t1_amps = seq_sheet.cell(row=loop, column=seq_pulse_t1_amps_col).value
pulse_t2 = seq_sheet.cell(row=loop, column=seq_pulse_t2_col).value
if seq_pulse_count_col != None:
pulse_count = seq_sheet.cell(row=loop, column=seq_pulse_count_col).value
#The following are where specs would be entered to determine pass fail or margin.
volts_open_min = seq_sheet.cell(row=loop, column=seq_volts_open_min_col).value
volts_open_max = seq_sheet.cell(row=loop, column=seq_volts_open_max_col).value
volts_loaded_min = seq_sheet.cell(row=loop, column=seq_volts_loaded_min_col).value
volts_loaded_max = seq_sheet.cell(row=loop, column=seq_volts_loaded_max_col).value
delay_time = seq_sheet.cell(row=loop, column=seq_delay_time_col).value
write_csv = seq_sheet.cell(row=loop, column=seq_write_csv_col).value
excel_command = seq_sheet.cell(row=loop, column=seq_excel_command_col).value
post_process = seq_sheet.cell(row=loop, column=seq_post_process_col).value
#Default the slew rates so the TestList assignments below cannot hit a
#NameError when none of the slew rate columns exist.
rise_slew_rate = None
fall_slew_rate = None
if seq_slew_rate_col != None:
slew_rate = seq_sheet.cell(row=loop, column=seq_slew_rate_col).value
rise_slew_rate = slew_rate
fall_slew_rate = slew_rate
if seq_rise_slew_rate_col != None:
rise_slew_rate = seq_sheet.cell(row=loop, column=seq_rise_slew_rate_col).value
if seq_fall_slew_rate_col != None:
fall_slew_rate = seq_sheet.cell(row=loop, column=seq_fall_slew_rate_col).value
#Cell has formulas in it by default. Do these read back as None?
#No, not sure how this worked before. Maybe a LUA thing.
#Cell below last test ID row must be blank.
#Here is where we build a complete list reflecting the entirety of the configuration spreadsheet.
#Beyond this point the spreadsheet is no longer referenced and all the info is in this list.
if test_id != None: # must have a Test ID, if not, then it's the end of the list
step_count = step_count + 1
#Much of the original LUA code consisted of this kind of check. This error would represent an error
#in the config spreadsheet and we're already exposed to that issue without having a check
#Most of this kind of checking has been removed and the onus is on the engineer to avoid errors in the spreadsheet.
if step_num != step_count:
print("Test Sequence import error: Step Num column value and current import count do not match.")
print("Step num: " + str(step_num) + " Step Count: " + str(step_count))
print("Test ID: " + str(test_id))
is_okay = False
break
# Port, Channel, Amps, Duration, Trigger, etc.
#Create structure holding all information for entire test
#This structure will have the same number of keys as the number of lines
#on the Test_Sequence tab regardless of whether these are grouped or individual tests.
#Test grouping is handled later and is a re-organization of data from this list.
TestList[step_count] = {} # instantiate new test entry
TestList[step_count]['TestID'] = test_id
TestList[step_count]['Console'] = console
TestList[step_count]['Port'] = port
TestList[step_count]['Amps'] = amps
TestList[step_count]['Duration'] = duration
TestList[step_count]['Trigger'] = trigger
TestList[step_count]['TriggerSource'] = trigger_source
TestList[step_count]['PulseT1'] = pulse_t1
TestList[step_count]['PulseT1Amps'] = pulse_t1_amps
TestList[step_count]['PulseT2'] = pulse_t2
TestList[step_count]['PulseCount'] = pulse_count
TestList[step_count]['VoltsOpenMin'] = volts_open_min
TestList[step_count]['VoltsOpenMax'] = volts_open_max
TestList[step_count]['VoltsLoadedMin'] = volts_loaded_min
TestList[step_count]['VoltsLoadedMax'] = volts_loaded_max
TestList[step_count]['DelayTime'] = delay_time
if write_csv == "yes":
TestList[step_count]['WriteCsv'] = True
else:
TestList[step_count]['WriteCsv'] = False
TestList[step_count]['ExcelCommand'] = excel_command
if post_process == "yes":
TestList[step_count]['PostProcess'] = True
else:
TestList[step_count]['PostProcess'] = False
TestList[step_count]['RiseSlewRate'] = rise_slew_rate
TestList[step_count]['FallSlewRate'] = fall_slew_rate
#Check Test Parameters went here
#Go through Port Map list and check to make sure console number and port name
#on test sequence tab have a match on port load map tab.
# Assign mapped values (Port/Load Map)
if test_id < 9000:
match_found = False
#Port count is set in loop above
for loop2 in range(1, port_count):
ThisPort = PortMap[loop2]
if (ThisPort['Console'] == console) and (ThisPort['Port'] == port): # match found
if match_found: #duplicate, not allowed! (NOTE: this is a backup check to the one done above when importing the Port_Load_Map)
#We would only get here if there was a duplicate line on the Port Load Map tab
print("Internal error: Duplicate port mapping found (backup check).")
is_okay = False
break
else:
#TestList is the main list that holds all test parameters
#The frame number and load (channel) number are not on the test sequence tab
#so copy them from port map list which came from port load map tab.
#This section of code should only execute once in each pass through the number of tests loop
TestList[step_count]['Frame'] = ThisPort['Frame']
TestList[step_count]['Channel'] = ThisPort['Channel']
match_found = True
if not match_found:
print(" **** ERROR: No match found in port/load map")
is_okay = False
else: #if test_id
#test ID field was blank, which marks the end of the list
print("Number of lines on sequence sheet: " + str(step_count))
break
return TestList, is_okay
| [
"noreply@github.com"
] | bopopescu.noreply@github.com |
8ebed524c2137dfd56be352f2713160bd5d0e472 | fb7c95127adc8ecd137568da5658f8c2b748b09b | /pwncat/modules/agnostic/enumerate/__init__.py | 58cf45f67d3bddbf85708c4195b3d45e2dbb75b9 | [] | no_license | Aman-Dhimann/pwncat | 37fcd3d0ab9acb668efb73024bc8dfc8c2c0d150 | f74510afb6c1d0f880462034adf78359145a89b4 | refs/heads/master | 2023-05-24T15:12:24.603790 | 2021-06-12T21:45:39 | 2021-06-12T21:45:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 134 | py | #!/usr/bin/env python3
# Alias `run enumerate` to `run enumerate.gather`
from pwncat.modules.agnostic.enumerate.gather import Module
| [
"caleb.stewart94@gmail.com"
] | caleb.stewart94@gmail.com |
f5940373af05f87a0a7e2d86d85bd2a415853b19 | 466f1d55748b6082e78b26c89ef75e1fd9555f66 | /test/unit_test/test_sample.py | 223b20fa7635aa9ca0f88413fb761b7aea639cac | [] | no_license | kiccho1101/kaggle_disaster_tweets_gokart | 6f9c07c1225767b2c4ec9c05764646ada6e43192 | 389f9582ba1b1208b5eb4e758eff1b967794cc34 | refs/heads/master | 2022-04-21T08:23:44.279540 | 2020-04-01T13:39:57 | 2020-04-01T13:39:57 | 247,387,879 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 508 | py | from logging import getLogger
import unittest
from unittest.mock import MagicMock
from kaggle_disaster_tweets_gokart.model.sample import Sample
logger = getLogger(__name__)
class TestSample(unittest.TestCase):
def setUp(self): #unittest only invokes the camelCase setUp hook
self.output_data = None
def test_run(self):
task = Sample()
task.dump = MagicMock(side_effect=self._dump)
task.run()
self.assertEqual(self.output_data, "sample output")
def _dump(self, data):
self.output_data = data
| [
"youodf11khp@gmail.com"
] | youodf11khp@gmail.com |
8c07d5f47490fcaa49882b14eec700f4432e1f5b | cf3549c5200e78dd81095cd3e05b3015d6bc2290 | /spiderman/misc/logger.py | 25050f75ba990aecda5cc123944f902caee9e470 | [
"Apache-2.0"
] | permissive | zzcv/python | e0c56a363188b8a3dcc030b10a7bd4aa1fc426b2 | 69ac0cabb7154816b1df415c0cc32966d6335718 | refs/heads/master | 2020-09-14T12:57:08.046356 | 2019-11-18T11:54:54 | 2019-11-18T11:54:54 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,089 | py | #/usr/bin/env python
#coding=utf8
"""
# Author: kellanfan
# Created Time : Tue 28 Aug 2018 10:55:09 PM CST
# File Name: logger.py
# Description:
"""
import logging
import yaml
import threading
from logging.handlers import RotatingFileHandler
class Logger(object):
def __init__(self, config_file):
'''init logger'''
self.logger = logging.getLogger('Logger')
config = self.__getconfig(config_file)
mythread=threading.Lock()
mythread.acquire()
self.log_file_path = config.get('log_file_path')
self.maxBytes = eval(config.get('maxBytes'))
self.backupCount = int(config.get('backupCount'))
self.outputConsole_level = int(config.get('outputConsole_level'))
self.outputFile_level = int(config.get('outputFile_level'))
self.outputConsole = int(config.get('outputConsole'))
self.outputFile = int(config.get('outputFile'))
self.formatter = logging.Formatter('%(asctime)s %(levelname)s -%(thread)d- %(filename)s : %(message)s')
mythread.release()
def __call__(self):
return self.outputLog()
def outputLog(self):
'''attach the configured handlers and return the logger'''
if self.outputConsole == 1:
console_handler = logging.StreamHandler()
console_handler.setFormatter(self.formatter)
self.logger.setLevel(self.outputConsole_level)
self.logger.addHandler(console_handler)
else:
pass
if self.outputFile == 1:
file_handler = RotatingFileHandler(self.log_file_path, maxBytes=self.maxBytes, backupCount=self.backupCount)
file_handler.setFormatter(self.formatter)
self.logger.setLevel(self.outputFile_level)
self.logger.addHandler(file_handler)
else:
pass
return self.logger
def __getconfig(self, config_file):
with open(config_file) as f:
configs = yaml.safe_load(f) #safe_load avoids executing arbitrary tags and needs no Loader argument
return configs
if __name__ == '__main__':
mylog = Logger('logger.yml')
aa = mylog()
aa.error('aaa')
| [
"icyfk1989@163.com"
] | icyfk1989@163.com |
55d0e7eaad9e1b13ec5d11db16c9b9034bcd39e5 | 35b6013c1943f37d1428afd2663c8aba0a02628d | /functions/v2/response_streaming/main_test.py | 9f850a1718585ed2fceea23f8e35d73d9493e9f9 | [
"Apache-2.0"
] | permissive | GoogleCloudPlatform/python-docs-samples | d2a251805fbeab15d76ed995cf200727f63f887d | 44e819e713c3885e38c99c16dc73b7d7478acfe8 | refs/heads/main | 2023-08-28T12:52:01.712293 | 2023-08-28T11:18:28 | 2023-08-28T11:18:28 | 35,065,876 | 7,035 | 7,593 | Apache-2.0 | 2023-09-14T20:20:56 | 2015-05-04T23:26:13 | Jupyter Notebook | UTF-8 | Python | false | false | 968 | py | # Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import flask
import pytest
import main
# Create a fake "app" for generating test request contexts.
@pytest.fixture(scope="module")
def app() -> flask.Flask:
return flask.Flask(__name__)
def test_main(app):
with app.test_request_context():
response = main.stream_big_query_output(flask.request)
assert response.is_streamed
assert response.status_code == 200
| [
"noreply@github.com"
] | GoogleCloudPlatform.noreply@github.com |
914f45793a8304544515d3639215795706f93065 | 3c6b0521eb788dc5e54e46370373e37eab4a164b | /predictive_engagement/pytorch_src/create_utt_embed.py | 9d6db792c1991cb86869136008c3cec28720be98 | [
"MIT"
] | permissive | y12uc231/DialEvalMetrics | 7402f883390b94854f5d5ae142f700a697d7a21c | f27d717cfb02b08ffd774e60faa6b319a766ae77 | refs/heads/main | 2023-09-02T21:56:07.232363 | 2021-11-08T21:25:24 | 2021-11-08T21:25:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,888 | py |
import argparse
from bert_serving.client import BertClient
import csv
import os
import pickle
#In order to create utterance embeddings, you need to first start BertServer (follow https://github.com/hanxiao/bert-as-service) with following command:
#bert-serving-start -model_dir /tmp/english_L-12_H-768_A-12/ -num_worker=4 -max_seq_len=128 -pooling_strategy=REDUCE_MEAN
#model_dir is the directory that pretrained Bert model has been downloaded
def make_Bert_embeddings(data_dir, fname, f_queries_embed, f_replies_embed, type):
'''Create embedding file for all queries and replies in the given files
Param:
data_dir: the directory of data
fname: name of the input file containing queries, replies, engagement_score
f_queries_embed: name of the output file containing the queries bert embeddings
f_replies_embed: name of the output file containing the replies bert embeddings
type: indicate train/valid/test set
'''
csv_file = open(data_dir + fname)
csv_reader = csv.reader(csv_file, delimiter=',')
foutput_q = os.path.join(data_dir + f_queries_embed)
foutput_r = os.path.join(data_dir + f_replies_embed)
queries,replies = [],[]
next(csv_reader)
for row in csv_reader:
queries.append(row[1].split('\n')[0])
replies.append(row[2].split('\n')[0])
if os.path.exists(foutput_q) and os.path.exists(foutput_r) :
print('Bert embedding files for utterances exist!')
return
else:
print("Bert embedding files for utterances do not exist")
queries_vectors = {}
replies_vectors = {}
bc = BertClient()
has_empty = False
fwq = open(foutput_q, 'wb')
for idx, q in enumerate(queries):
print(str(idx)+'query {}'.format(type))
if q not in queries_vectors.keys() and q !='':
queries_vectors[q] = bc.encode([q])
if q not in queries_vectors.keys() and q =='':
queries_vectors[q] = bc.encode(['[PAD]'])
has_empty=True
if has_empty == False:
queries_vectors[''] = bc.encode(['[PAD]'])
pickle.dump(queries_vectors, fwq)
fwr = open(foutput_r, 'wb')
has_empty = False
for idx, r in enumerate(replies):
print(str(idx)+'reply {}'.format(type))
if r not in replies_vectors.keys() and r !='':
replies_vectors[r] = bc.encode([r])
if r not in replies_vectors.keys() and r =='':
replies_vectors[r] = bc.encode(['[PAD]'])
has_empty = True
if has_empty == False:
replies_vectors[''] = bc.encode(['[PAD]'])
pickle.dump(replies_vectors, fwr)
def load_Bert_embeddings(data_dir, f_queries_embed, f_replies_embed):
'''Load embeddings of queries and replies
Param:
data_dir: the directory of data
f_queries_embed: name of the input file containing the queries bert embeddings
f_replies_embed: name of the input file containing the replies bert embeddings
'''
print('Loading Bert embeddings of sentences')
queries_vectors = {}
replies_vectors = {}
print('query embedding')
fwq = open(data_dir + f_queries_embed, 'rb')
dict_queries = pickle.load(fwq)
for query, embeds in dict_queries.items():
queries_vectors[query] = embeds[0]
print('len of embeddings is '+str(len(queries_vectors)))
print('reply embedding')
fwr = open(data_dir + f_replies_embed, 'rb')
dict_replies = pickle.load(fwr)
for reply, embeds in dict_replies.items():
replies_vectors[reply] = embeds[0]
print('len of embeddings is '+str(len(replies_vectors)))
return queries_vectors, replies_vectors
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Parameters for engagement classification')
parser.add_argument('--data', type=str)
args = parser.parse_args()
data_dir = './../data/'
pooling = 'mean'
ifname= 'ConvAI_utts'
dd_ifname = 'DD_finetune'
ofname = ''
#make_Bert_embeddings(data_dir, ifname+'_train.csv', ifname+'_train_queries_embed_'+pooling, ifname+'_train_replies_embed_'+pooling, 'train')
#make_Bert_embeddings(data_dir, ifname+'_valid.csv', ifname+'_valid_queries_embed_'+pooling, ifname+'_valid_replies_embed_'+pooling, 'valid')
#make_Bert_embeddings(data_dir, ifname+'_test.csv', ifname+'_test_queries_embed_'+pooling, ifname+'_test_replies_embed_'+pooling, 'test')
#make_Bert_embeddings(data_dir, 'humanAMT_engscores_utt.csv', 'humanAMT_queries_embed_'+pooling, 'humanAMT_replies_embed_'+pooling, 'testAMT')
#make_Bert_embeddings(data_dir, dd_ifname+'_train.csv', dd_ifname+'_queries_train_embed_'+pooling, dd_ifname+'_replies_train_embed_'+pooling, 'train')
#make_Bert_embeddings(data_dir, dd_ifname+'_valid.csv', dd_ifname+'_queries_valid_embed_'+pooling, dd_ifname+'_replies_valid_embed_'+pooling, 'valid')
#make_Bert_embeddings(data_dir, 'DD_queries_generated_replies.csv', 'DD_queries_embed_'+pooling, 'DD_generated_replies_embed_'+pooling, 'test')
make_Bert_embeddings(data_dir, args.data, f'{args.data}_queries_embed_'+pooling, f'{args.data}_replies_embed_'+pooling, 'test') | [
"yitingye@cs.cmu.edu"
] | yitingye@cs.cmu.edu |
d506e2c4c46364a2576d48fc2c2cc52ef154142b | ace30d0a4b1452171123c46eb0f917e106a70225 | /filesystems/vnx_rootfs_lxc_ubuntu64-16.04-v025-openstack-compute/rootfs/usr/lib/python2.7/dist-packages/openstackclient/tests/functional/base.py | 857432962124a08b874a970806fe3f3b60643f25 | [
"Python-2.0"
] | permissive | juancarlosdiaztorres/Ansible-OpenStack | e98aa8c1c59b0c0040c05df292964520dd796f71 | c01951b33e278de9e769c2d0609c0be61d2cb26b | refs/heads/master | 2022-11-21T18:08:21.948330 | 2018-10-15T11:39:20 | 2018-10-15T11:39:20 | 152,568,204 | 0 | 3 | null | 2022-11-19T17:38:49 | 2018-10-11T09:45:48 | Python | UTF-8 | Python | false | false | 4,414 | py | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import re
import shlex
import subprocess
import testtools
from tempest.lib.cli import output_parser
from tempest.lib import exceptions
COMMON_DIR = os.path.dirname(os.path.abspath(__file__))
FUNCTIONAL_DIR = os.path.normpath(os.path.join(COMMON_DIR, '..'))
ROOT_DIR = os.path.normpath(os.path.join(FUNCTIONAL_DIR, '..'))
EXAMPLE_DIR = os.path.join(ROOT_DIR, 'examples')
def execute(cmd, fail_ok=False, merge_stderr=False):
"""Executes specified command for the given action."""
cmdlist = shlex.split(cmd)
result = ''
result_err = ''
stdout = subprocess.PIPE
stderr = subprocess.STDOUT if merge_stderr else subprocess.PIPE
proc = subprocess.Popen(cmdlist, stdout=stdout, stderr=stderr)
result, result_err = proc.communicate()
result = result.decode('utf-8')
if not fail_ok and proc.returncode != 0:
raise exceptions.CommandFailed(proc.returncode, cmd, result,
result_err)
return result
class TestCase(testtools.TestCase):
delimiter_line = re.compile(r'^\+\-[\+\-]+\-\+$')
@classmethod
def openstack(cls, cmd, fail_ok=False):
"""Executes openstackclient command for the given action."""
return execute('openstack ' + cmd, fail_ok=fail_ok)
@classmethod
def get_openstack_configuration_value(cls, configuration):
opts = cls.get_opts([configuration])
return cls.openstack('configuration show ' + opts)
@classmethod
def get_openstack_extention_names(cls):
opts = cls.get_opts(['Name'])
return cls.openstack('extension list ' + opts)
@classmethod
def get_opts(cls, fields, output_format='value'):
return ' -f {0} {1}'.format(output_format,
' '.join(['-c ' + it for it in fields]))
@classmethod
def assertOutput(cls, expected, actual):
if expected != actual:
raise Exception(expected + ' != ' + actual)
@classmethod
def assertInOutput(cls, expected, actual):
if expected not in actual:
raise Exception(expected + ' not in ' + actual)
@classmethod
def assertsOutputNotNone(cls, observed):
if observed is None:
raise Exception('No output observed')
def assert_table_structure(self, items, field_names):
"""Verify that all items have keys listed in field_names."""
for item in items:
for field in field_names:
self.assertIn(field, item)
def assert_show_fields(self, show_output, field_names):
"""Verify that all items have keys listed in field_names."""
# field_names = ['name', 'description']
# show_output = [{'name': 'fc2b98d8faed4126b9e371eda045ade2'},
# {'description': 'description-821397086'}]
# this next line creates a flattened list of all 'keys' (like 'name',
# and 'description' out of the output
all_headers = [item for sublist in show_output for item in sublist]
for field_name in field_names:
self.assertIn(field_name, all_headers)
def parse_show_as_object(self, raw_output):
"""Return a dict with values parsed from cli output."""
items = self.parse_show(raw_output)
o = {}
for item in items:
o.update(item)
return o
def parse_show(self, raw_output):
"""Return list of dicts with item values parsed from cli output."""
items = []
table_ = output_parser.table(raw_output)
for row in table_['values']:
item = {}
item[row[0]] = row[1]
items.append(item)
return items
def parse_listing(self, raw_output):
"""Return list of dicts with basic item parsed from cli output."""
return output_parser.listing(raw_output)
| [
"jcdiaztorres96@gmail.com"
] | jcdiaztorres96@gmail.com |
0c8551b52d52cd8af6da7aa30601f5ab2777e761 | 7041c85dffb757c3e7063118730363f32ebb9b8a | /코테대비/20190227/색종이 배치.py | 4b6cbe8beff587fecc1ed3ff818e29dd5e36fc7d | [] | no_license | woonji913/til | efae551baff56f3ca16169b93185a65f4d81cd7a | a05efc68f88f535c26cb4d4a396a1e9cd6bf0248 | refs/heads/master | 2021-06-06T23:17:54.504620 | 2019-06-19T04:29:18 | 2019-06-19T04:29:18 | 163,778,844 | 1 | 0 | null | 2021-05-08T16:27:17 | 2019-01-02T01:08:19 | HTML | UTF-8 | Python | false | false | 562 | py | x1, y1, dx1, dy1 = map(int, input().split())
x2, y2, dx2, dy2 = map(int, input().split())
# if x1+dx1 == x2 and y1+dy1 == y2:
# print('1')
# elif x1+dx1 > x2 or y1+dy1 > y2:
# print('2')
# elif x1+dx1 > x2 and y1+dy1 > y2:
# print('3')
# elif x1+dx1 < x2 and y1+dy1 < y2:
# print('4')
x = set(range(x1, x1+dx1+1)) & set(range(x2, x2+dx2+1))
y = set(range(y1, y1+dy1+1)) & set(range(y2, y2+dy2+1))
if len(x) == 1 and len(y) == 1:
print(1)
elif len(x) == 1 or len(y) == 1:
print(2)
elif len(x) and len(y):
print(3)
else:
print(4) | [
"johnnyboy0913@gmail.com"
] | johnnyboy0913@gmail.com |
28b7be7be7b0d1fc78f01bddcaab4e7accb7b6cf | a17ab912e585db05931830be9b35943c31c5db4a | /Algo1.py | 4f007729e8cf958b62733bdf56b43ec72007a921 | [] | no_license | syuuhei-yama/Python_Algorithm | ef15b65d25ab458f7ddb784573f26a2de2718409 | 1f6b39ada0261dd02d2808d157ddbaaa8e3a1e24 | refs/heads/master | 2023-03-29T08:21:35.550924 | 2021-04-09T10:43:45 | 2021-04-09T10:43:45 | 327,771,890 | 0 | 0 | null | 2021-01-08T02:28:47 | 2021-01-08T01:56:34 | Python | UTF-8 | Python | false | false | 1,058 | py | #最大値
# a = [1, 3, 10, 2, 8]
# max = a[0]
#
# for i in range(1,len(a)):
#
# if (max < a[i]): #for the minimum value, change max to min here
# max = a[i]
#
# print(max)
#Swapping data
# a = 10
# b = 20
#
# t = a
# a = b
# b = t
#
# print("a=",a,",b=",b)
#Search algorithm
# a = [10,3,1,4,2]
#
# search_s = 4
# findID = -1
#
# for i in range(len(a)):
# if a[i] == search_s:
# findID = i
# break
# print("found ID=",findID)
#Perfect numbers
# Q = int(input())
#
# for i in range(Q):
# n = int(input())
# num = []
#
# for i in range(1, n):
# if n % i == 0:
# num.append(i)
# nums = sum(num)
#
# if n == nums:
# print('perfect')
# elif n - nums == 1:
# print('nearly')
# else:
# print('neither')
#Competitive programming input snippets
# ls = list(map(int,input().split()))
# ls = [s for s in input()]
# print(ls)
# n = int(input())
# l = [int(input()) for _ in range(n)]
# a, b, x = map(int, input().split())
# print('YES' if a <= x <= b+a else 'NO')
| [
"syuuhei0615@icloud.com"
] | syuuhei0615@icloud.com |
2437f835bb3468bf06a61053f5ee3b1fa1ae36c9 | 08a2a4550d725c1f7ed6fb1d3bfc9abc35de5e1e | /tencentcloud/ocr/v20181119/errorcodes.py | f1172d3098f29b6dc4a40b523dccded1ab7120ee | [
"Apache-2.0"
] | permissive | wearetvxq/tencentcloud-sdk-python | 8fac40c7ea756ec222d3f41b2321da0c731bf496 | cf5170fc83130744b8b631000efacd1b7ba03262 | refs/heads/master | 2023-07-16T15:11:54.409444 | 2021-08-23T09:27:56 | 2021-08-23T09:27:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,671 | py | # -*- coding: utf8 -*-
# Copyright (c) 2017-2021 THL A29 Limited, a Tencent company. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The account has an overdue balance.
FAILEDOPERATION_ARREARSERROR = 'FailedOperation.ArrearsError'
# The daily request limit has been reached.
FAILEDOPERATION_COUNTLIMITERROR = 'FailedOperation.CountLimitError'
# Detection failed.
FAILEDOPERATION_DETECTFAILED = 'FailedOperation.DetectFailed'
# File download failed.
FAILEDOPERATION_DOWNLOADERROR = 'FailedOperation.DownLoadError'
# The image is empty.
FAILEDOPERATION_EMPTYIMAGEERROR = 'FailedOperation.EmptyImageError'
# The recognition engine timed out.
FAILEDOPERATION_ENGINERECOGNIZETIMEOUT = 'FailedOperation.EngineRecognizeTimeout'
# The ID card information is invalid (e.g. the ID number or name field failed validation).
FAILEDOPERATION_IDCARDINFOILLEGAL = 'FailedOperation.IdCardInfoIllegal'
# The image is blurry.
FAILEDOPERATION_IMAGEBLUR = 'FailedOperation.ImageBlur'
# Image decoding failed.
FAILEDOPERATION_IMAGEDECODEFAILED = 'FailedOperation.ImageDecodeFailed'
# No business card was detected in the image.
FAILEDOPERATION_IMAGENOBUSINESSCARD = 'FailedOperation.ImageNoBusinessCard'
# No ID card was detected in the image.
FAILEDOPERATION_IMAGENOIDCARD = 'FailedOperation.ImageNoIdCard'
# No text was detected in the image.
FAILEDOPERATION_IMAGENOTEXT = 'FailedOperation.ImageNoText'
# The image is too large; see the notes on image size limits in the output parameters.
FAILEDOPERATION_IMAGESIZETOOLARGE = 'FailedOperation.ImageSizeTooLarge'
# The invoice data is inconsistent.
FAILEDOPERATION_INVOICEMISMATCH = 'FailedOperation.InvoiceMismatch'
# The specified Language is not supported.
FAILEDOPERATION_LANGUAGENOTSUPPORT = 'FailedOperation.LanguageNotSupport'
# Multiple cards were found in the photo.
FAILEDOPERATION_MULTICARDERROR = 'FailedOperation.MultiCardError'
# Not a Hong Kong identity card.
FAILEDOPERATION_NOHKIDCARD = 'FailedOperation.NoHKIDCard'
# Not a passport.
FAILEDOPERATION_NOPASSPORT = 'FailedOperation.NoPassport'
# OCR recognition failed.
FAILEDOPERATION_OCRFAILED = 'FailedOperation.OcrFailed'
# The query returned no records.
FAILEDOPERATION_QUERYNORECORD = 'FailedOperation.QueryNoRecord'
# Unknown error.
FAILEDOPERATION_UNKNOWERROR = 'FailedOperation.UnKnowError'
# The service has not been activated.
FAILEDOPERATION_UNOPENERROR = 'FailedOperation.UnOpenError'
# Internal error.
INTERNALERROR = 'InternalError'
# Config is not in valid JSON format.
INVALIDPARAMETER_CONFIGFORMATERROR = 'InvalidParameter.ConfigFormatError'
# Image decoding failed.
INVALIDPARAMETER_ENGINEIMAGEDECODEFAILED = 'InvalidParameter.EngineImageDecodeFailed'
# Invalid GTIN.
INVALIDPARAMETER_INVALIDGTINERROR = 'InvalidParameter.InvalidGTINError'
# Invalid parameter value.
INVALIDPARAMETERVALUE_INVALIDPARAMETERVALUELIMIT = 'InvalidParameterValue.InvalidParameterValueLimit'
# The file content is too large.
LIMITEXCEEDED_TOOLARGEFILEERROR = 'LimitExceeded.TooLargeFileError'
# The invoice does not exist.
RESOURCENOTFOUND_NOINVOICE = 'ResourceNotFound.NoInvoice'
# Querying invoices issued on the current day is not supported.
RESOURCENOTFOUND_NOTSUPPORTCURRENTINVOICEQUERY = 'ResourceNotFound.NotSupportCurrentInvoiceQuery'
# Abnormal billing status.
RESOURCESSOLDOUT_CHARGESTATUSEXCEPTION = 'ResourcesSoldOut.ChargeStatusException'
| [
"tencentcloudapi@tenent.com"
] | tencentcloudapi@tenent.com |
275b76afbe63a3413985b5472a69d50bf3e62d67 | a7f442bc306d1a8366a3e30db50af0c2c90e9091 | /blockchain-env/Lib/site-packages/Cryptodome/Signature/DSS.pyi | 8860aa173625e356921ff29bb410f001ef975c4a | [] | no_license | Patreva/Python-flask-react-blockchain | cbdce3e0f55d4ba68be6ecfba35620585894bbbc | 474a9795820d8a4b5a370d400d55b52580055a2e | refs/heads/main | 2023-03-29T01:18:53.985398 | 2021-04-06T08:01:24 | 2021-04-06T08:01:24 | 318,560,922 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,129 | pyi | from typing import Union, Optional, Callable
from typing_extensions import Protocol
from Cryptodome.PublicKey.DSA import DsaKey
from Cryptodome.PublicKey.ECC import EccKey
class Hash(Protocol):
def digest(self) -> bytes: ...
__all__ = ['new']
class DssSigScheme:
def __init__(self, key: Union[DsaKey, EccKey], encoding: str, order: int) -> None: ...
def can_sign(self) -> bool: ...
def sign(self, msg_hash: Hash) -> bytes: ...
def verify(self, msg_hash: Hash, signature: bytes) -> bool: ...
class DeterministicDsaSigScheme(DssSigScheme):
def __init__(self, key, encoding, order, private_key) -> None: ...
class FipsDsaSigScheme(DssSigScheme):
def __init__(self, key: DsaKey, encoding: str, order: int, randfunc: Callable) -> None: ...
class FipsEcDsaSigScheme(DssSigScheme):
def __init__(self, key: EccKey, encoding: str, order: int, randfunc: Callable) -> None: ...
def new(key: Union[DsaKey, EccKey], mode: str, encoding: Optional[str]='binary', randfunc: Optional[Callable]=None) -> Union[DeterministicDsaSigScheme, FipsDsaSigScheme, FipsEcDsaSigScheme]: ...
| [
"patrickwahome74@gmail.com"
] | patrickwahome74@gmail.com |
d11b7ecf23ce5efea923c536b66dba11bd9dbde5 | 52f4426d2776871cc7f119de258249f674064f78 | /baekjoon/brute_force/16637.py | ab3dbfea614db4c9828f52df66a085568f73d432 | [] | no_license | namhyun-gu/algorithm | 8ad98d336366351e715465643dcdd9f04eeb0ad2 | d99c44f9825576c16aaca731888e0c32f2ae6e96 | refs/heads/master | 2023-06-06T02:28:16.514422 | 2021-07-02T10:34:03 | 2021-07-02T10:34:03 | 288,646,740 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,168 | py | # region Input redirection
import io
import sys
example = """
5
8*3+5
"""
sys.stdin = io.StringIO(example.strip())
# endregion
#
# ⛔ DO NOT COPY ABOVE CONTENTS
#
import sys
def calculate(op1, operator, op2):
if operator == "+":
return op1 + op2
elif operator == "-":
return op1 - op2
else:
return op1 * op2
def dfs(value, index=0):
global answer
if index >= len(operator):
answer = max(answer, value)
return
no_bracket_val = calculate(value, operator[index], operand[index + 1])
dfs(no_bracket_val, index + 1)
if index + 1 < len(operator):
in_bracket = calculate(
operand[index + 1], operator[index + 1], operand[index + 2]
)
bracket_val = calculate(value, operator[index], in_bracket)
dfs(bracket_val, index + 2)
if __name__ == "__main__":
input = sys.stdin.readline
N = input()
answer = -sys.maxsize
operand = []
operator = []
for idx, ch in enumerate(input().rstrip()):
if idx % 2:
operator.append(ch)
else:
operand.append(int(ch))
dfs(operand[0])
print(answer) | [
"mnhan0403@gmail.com"
] | mnhan0403@gmail.com |
a29aa2f427cebc2eaf7d63646e7b5923666c047e | 6b6a18001a3a0931bbe8b5185179223b7bd2879a | /python_selenium/SL_UW/src/SL_UM/testsuite_03.py | 8f115ce2f48dec6270948f9e63daa4667f324eae | [] | no_license | hufengping/percolata | cfa02fcf445983415b99c8ec77a08b3f0b270015 | b643e89b48c97a9be3b5509120f325455643b7af | refs/heads/master | 2020-04-06T07:04:22.076039 | 2016-04-25T08:02:40 | 2016-04-25T08:02:40 | 42,929,818 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,606 | py | # -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
from selenium.webdriver.common.action_chains import ActionChains
import test_unittest, time, re,os,string
from public import Autotest,db2file
from testsuite_01 import *
import xml.dom.minidom
#打开XML文档
xmlpath = os.path.split(os.path.realpath(__file__))[0] #获取当前路径
xmlpath2 = xmlpath.split('src')[0]
dom = xml.dom.minidom.parse(xmlpath2+'testdata\\config.xml')
# Get the document element object
root = dom.documentElement
# Get the current time
now = time.strftime("%Y-%m-%d_%H_%M_%S")
tdata= time.strftime("%Y-%m-%d")
class TestCase_01(test_unittest.TestCase):
    u'''Simulated scan'''
@classmethod
    def setUpClass(cls):# executed once before all test cases
        # Check whether the barcode was obtained correctly
if barcode == 0:
            print "Failed to obtain the barcode; test aborted!"
            print "Please check whether the barcode production environment is accessible."
quit(cls)
        # Firefox default download settings
fp = webdriver.FirefoxProfile()
fp.set_preference("browser.download.folderList",2)
fp.set_preference("browser.download.manager.showWhenStarting",False)
fp.set_preference("browser.download.dir", xmlpath2+'log')
        fp.set_preference("browser.helperApps.neverAsk.saveToDisk",
                          "application/octet-stream") # MIME type of files to download
cls.driver = webdriver.Firefox(firefox_profile=fp)
cls.driver.implicitly_wait(30)
logins = root.getElementsByTagName('url_vscan')
cls.base_url = logins[0].firstChild.data
#print self.base_url
cls.verificationErrors = []
cls.accept_next_alert = True
    def setUp(self):# executed before each test case
pass
def test_01_login(self):
        u'''Login'''
driver = self.driver
driver.get(self.base_url)
driver.maximize_window()
logins = root.getElementsByTagName('vuser')
        # Get the username and password attribute values of the tag
username=logins[0].getAttribute("username")
password=logins[0].getAttribute("password")
prompt_info = logins[0].firstChild.data
        # Log in
Autotest.login(self,username,password)
        # Get the assertion text and assert on it
text = driver.find_element_by_xpath("/html/body/div[2]/div/div[3]/div/div[2]/div/table/tbody/tr/td").text
try:
            self.assertEqual(text,prompt_info,U'Login info verification error; please check the network or the login info!')
except AssertionError,e:
print e
            print 'See the screenshot file '+now+'.png'
            driver.get_screenshot_as_file(xmlpath2+"log\\"+now+U'login_verification_failed.png')# capture the current page if the element above is not found
def test_02_mnsm(self):
        u'''Simulated scan'''
driver = self.driver
        # Select the menu
driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[1]/div/div/div/div[2]/div/div/div/div/table/tbody[2]/tr/td/div/nobr/table/tbody/tr/td[2]').click()
driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[1]/div/div/div/div[2]/div/div/div/div/table/tbody[2]/tr[2]/td/div/nobr').click()
        # Switch to the frame
driver.switch_to_frame(driver.find_element_by_xpath('/html/body/div[2]/div/div[4]/div/div[2]/div/div[1]/div/iframe'))
        # Fill in the barcode
driver.find_element_by_id('barCode').send_keys(barcode)
        # Remove the flag characters: the last two
driver.find_element_by_id('barCode').send_keys(Keys.BACK_SPACE)
driver.find_element_by_id('barCode').send_keys(Keys.BACK_SPACE)
        # Check the insurance application form (individual marketing channel)
driver.find_element_by_xpath("//input[@value='1111']").click()
        # Click submit
driver.find_element_by_id('loginform').click()
try:
text = driver.find_element_by_xpath('/html/body/div/font').text
            self.assertEqual(text,"操作成功!",U'Simulated scan operation failed!')
except AssertionError,e:
print e
            print 'Simulated scan failed '+now+'.png'
            driver.get_screenshot_as_file(xmlpath2+"log\\"+now+U'simulated_scan_failed.png')
def is_element_present(self, how, what):
try: self.driver.find_element(by=how, value=what)
except NoSuchElementException, e: return False
return True
def is_alert_present(self):
try:
self.driver.switch_to_alert()
except NoAlertPresentException, e:
return False
return True
def close_alert_and_get_its_text(self):
try:
alert = self.driver.switch_to_alert()
alert_text = alert.text
if self.accept_next_alert:
alert.accept()
else:
alert.dismiss()
return alert_text
finally: self.accept_next_alert = True
    def tearDown(self):# executed after each test case
self.assertEqual([], self.verificationErrors)
    @classmethod # executed after all test cases
def tearDownClass(cls):
cls.driver.quit()
if __name__ == "__main__":
test_unittest.main()
| [
"fengping.hu@percolata.com"
] | fengping.hu@percolata.com |
2a466abd81f68c9562f93904ea12db543b2386bd | 34ed92a9593746ccbcb1a02630be1370e8524f98 | /lib/pints/pints/toy/__init__.py | 62a438a380de519170d87bfec4dd2216d11adb7a | [
"LicenseRef-scancode-unknown-license-reference",
"BSD-3-Clause"
] | permissive | HOLL95/Cytochrome_SV | 87b7a680ed59681230f79e1de617621680ea0fa0 | d02b3469f3ee5a4c85d756053bc87651093abea1 | refs/heads/master | 2022-08-01T05:58:16.161510 | 2021-02-01T16:09:31 | 2021-02-01T16:09:31 | 249,424,867 | 0 | 0 | null | 2022-06-22T04:09:11 | 2020-03-23T12:29:29 | Jupyter Notebook | UTF-8 | Python | false | false | 2,282 | py | #
# Root of the toy module.
# Provides a number of toy models and logpdfs for tests of Pints' functions.
#
# This file is part of PINTS.
# Copyright (c) 2017-2019, University of Oxford.
# For licensing information, see the LICENSE file distributed with the PINTS
# software package.
#
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
from ._toy_classes import ToyLogPDF, ToyModel, ToyODEModel # noqa
from ._annulus import AnnulusLogPDF # noqa
from ._beeler_reuter_model import ActionPotentialModel # noqa
from ._cone import ConeLogPDF # noqa
from ._constant_model import ConstantModel # noqa
from ._fitzhugh_nagumo_model import FitzhughNagumoModel # noqa
from ._gaussian import GaussianLogPDF # noqa
from ._german_credit import GermanCreditLogPDF # noqa
from ._german_credit_hierarchical import GermanCreditHierarchicalLogPDF # noqa
from ._goodwin_oscillator_model import GoodwinOscillatorModel # noqa
from ._hes1_michaelis_menten import Hes1Model # noqa
from ._hh_ik_model import HodgkinHuxleyIKModel # noqa
from ._high_dimensional_gaussian import HighDimensionalGaussianLogPDF # noqa
from ._logistic_model import LogisticModel # noqa
from ._lotka_volterra_model import LotkaVolterraModel # noqa
from ._multimodal_gaussian import MultimodalGaussianLogPDF # noqa
from ._neals_funnel import NealsFunnelLogPDF # noqa
from ._parabola import ParabolicError # noqa
from ._repressilator_model import RepressilatorModel # noqa
from ._rosenbrock import RosenbrockError, RosenbrockLogPDF # noqa
from ._sho_model import SimpleHarmonicOscillatorModel # noqa
from ._simple_egg_box import SimpleEggBoxLogPDF # noqa
from ._sir_model import SIRModel # noqa
from ._twisted_gaussian_banana import TwistedGaussianLogPDF # noqa
from ._stochastic_degradation_model import StochasticDegradationModel # noqa
| [
"henney@localhost.localdomain"
] | henney@localhost.localdomain |
03bd8be91946ecefcef85ec3815b2aac64eb4c10 | a5a99f646e371b45974a6fb6ccc06b0a674818f2 | /Geometry/HGCalCommonData/python/testHGCalDD4hepV17ShiftReco_cff.py | b91b5bcc3b5a765e452e85505b5af6ee752a7b5d | [
"Apache-2.0"
] | permissive | cms-sw/cmssw | 4ecd2c1105d59c66d385551230542c6615b9ab58 | 19c178740257eb48367778593da55dcad08b7a4f | refs/heads/master | 2023-08-23T21:57:42.491143 | 2023-08-22T20:22:40 | 2023-08-22T20:22:40 | 10,969,551 | 1,006 | 3,696 | Apache-2.0 | 2023-09-14T19:14:28 | 2013-06-26T14:09:07 | C++ | UTF-8 | Python | false | false | 3,329 | py | import FWCore.ParameterSet.Config as cms
# This config came from a copy of 2 files from Configuration/Geometry/python
from Configuration.Geometry.GeometryDD4hep_cff import *
DDDetectorESProducer.confGeomXMLFiles = cms.FileInPath("Geometry/HGCalCommonData/data/dd4hep/testHGCalV17Shift.xml")
from Geometry.TrackerNumberingBuilder.trackerNumberingGeometry_cff import *
from SLHCUpgradeSimulations.Geometry.fakePhase2OuterTrackerConditions_cff import *
from Geometry.EcalCommonData.ecalSimulationParameters_cff import *
from Geometry.HcalCommonData.hcalDDDSimConstants_cff import *
from Geometry.HGCalCommonData.hgcalParametersInitialization_cfi import *
from Geometry.HGCalCommonData.hgcalNumberingInitialization_cfi import *
from Geometry.MuonNumbering.muonGeometryConstants_cff import *
from Geometry.MuonNumbering.muonOffsetESProducer_cff import *
from Geometry.MTDNumberingBuilder.mtdNumberingGeometry_cff import *
# tracker
from Geometry.CommonTopologies.globalTrackingGeometry_cfi import *
from RecoTracker.GeometryESProducer.TrackerRecoGeometryESProducer_cfi import *
from Geometry.TrackerGeometryBuilder.trackerParameters_cff import *
from Geometry.TrackerNumberingBuilder.trackerTopology_cfi import *
from Geometry.TrackerGeometryBuilder.idealForDigiTrackerGeometry_cff import *
trackerGeometry.applyAlignment = False
# calo
from Geometry.CaloEventSetup.HGCalTopology_cfi import *
from Geometry.HGCalGeometry.HGCalGeometryESProducer_cfi import *
from Geometry.CaloEventSetup.CaloTopology_cfi import *
from Geometry.CaloEventSetup.CaloGeometryBuilder_cfi import *
CaloGeometryBuilder = cms.ESProducer("CaloGeometryBuilder",
SelectedCalos = cms.vstring("HCAL",
"ZDC",
"EcalBarrel",
"TOWER",
"HGCalEESensitive",
"HGCalHESiliconSensitive",
"HGCalHEScintillatorSensitive"
)
)
from Geometry.EcalAlgo.EcalBarrelGeometry_cfi import *
from Geometry.HcalEventSetup.HcalGeometry_cfi import *
from Geometry.HcalEventSetup.CaloTowerGeometry_cfi import *
from Geometry.HcalEventSetup.CaloTowerTopology_cfi import *
from Geometry.HcalCommonData.hcalDDDRecConstants_cfi import *
from Geometry.HcalEventSetup.hcalTopologyIdeal_cfi import *
from Geometry.CaloEventSetup.EcalTrigTowerConstituents_cfi import *
from Geometry.EcalMapping.EcalMapping_cfi import *
from Geometry.EcalMapping.EcalMappingRecord_cfi import *
# muon
from Geometry.MuonNumbering.muonNumberingInitialization_cfi import *
from RecoMuon.DetLayers.muonDetLayerGeometry_cfi import *
from Geometry.GEMGeometryBuilder.gemGeometry_cff import *
from Geometry.CSCGeometryBuilder.idealForDigiCscGeometry_cff import *
from Geometry.DTGeometryBuilder.idealForDigiDtGeometry_cff import *
# forward
from Geometry.ForwardGeometry.ForwardGeometry_cfi import *
# timing
from RecoMTD.DetLayers.mtdDetLayerGeometry_cfi import *
from Geometry.MTDGeometryBuilder.mtdParameters_cff import *
from Geometry.MTDNumberingBuilder.mtdNumberingGeometry_cff import *
from Geometry.MTDNumberingBuilder.mtdTopology_cfi import *
from Geometry.MTDGeometryBuilder.mtdGeometry_cfi import *
from Geometry.MTDGeometryBuilder.idealForDigiMTDGeometry_cff import *
mtdGeometry.applyAlignment = False
| [
"sunanda.banerjee@cern.ch"
] | sunanda.banerjee@cern.ch |
26bd13736c8d11c30c34f742b5b9c47f85ab65ca | b3c47795e8b6d95ae5521dcbbb920ab71851a92f | /Leetcode/Algorithm/python/2000/01128-Number of Equivalent Domino Pairs.py | c27c95103c1b64e55092e2f04f8949fd3efbaa18 | [
"LicenseRef-scancode-warranty-disclaimer"
] | no_license | Wizmann/ACM-ICPC | 6afecd0fd09918c53a2a84c4d22c244de0065710 | 7c30454c49485a794dcc4d1c09daf2f755f9ecc1 | refs/heads/master | 2023-07-15T02:46:21.372860 | 2023-07-09T15:30:27 | 2023-07-09T15:30:27 | 3,009,276 | 51 | 23 | null | null | null | null | UTF-8 | Python | false | false | 369 | py | from collections import defaultdict
class Solution(object):
def numEquivDominoPairs(self, dominoes):
d = defaultdict(int)
for (a, b) in dominoes:
key = (min(a, b), max(a, b))
d[key] += 1
res = 0
for key, value in d.items():
res += value * (value - 1) / 2
return res
| [
"noreply@github.com"
] | Wizmann.noreply@github.com |
d0c7a838d3ac45a39f6e97b95ffc799933ba0a3b | 3414c15e7333e2702818cd81de387a4def13a011 | /discord/message.py | 8981de2cb265cca36b37a28be84535a4595f322c | [
"MIT"
] | permissive | maxpowa/discord.py | b191f50de3ce34a48dcacb9802ee334f84841ae0 | 740b9a95c2a80caac59dd8f0ec6ea0cefa6b731c | refs/heads/async | 2020-12-28T19:57:08.739397 | 2015-12-24T13:24:23 | 2015-12-24T13:40:00 | 54,578,528 | 0 | 1 | null | 2016-03-23T17:08:59 | 2016-03-23T17:08:59 | null | UTF-8 | Python | false | false | 7,336 | py | # -*- coding: utf-8 -*-
"""
The MIT License (MIT)
Copyright (c) 2015 Rapptz
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
from . import utils
from .user import User
from .member import Member
from .object import Object
import re
class Message:
"""Represents a message from Discord.
There should be no need to create one of these manually.
Attributes
-----------
edited_timestamp : Optional[datetime.datetime]
A naive UTC datetime object containing the edited time of the message.
timestamp : datetime.datetime
A naive UTC datetime object containing the time the message was created.
tts : bool
Specifies if the message was done with text-to-speech.
author
A :class:`Member` that sent the message. If :attr:`channel` is a
private channel, then it is a :class:`User` instead.
content : str
The actual contents of the message.
embeds : list
A list of embedded objects. The elements are objects that meet oEmbed's specification_.
.. _specification: http://oembed.com/
channel
The :class:`Channel` that the message was sent from.
Could be a :class:`PrivateChannel` if it's a private message.
In :issue:`very rare cases <21>` this could be a :class:`Object` instead.
For the sake of convenience, this :class:`Object` instance has an attribute ``is_private`` set to ``True``.
server : Optional[:class:`Server`]
The server that the message belongs to. If not applicable (i.e. a PM) then it's None instead.
mention_everyone : bool
Specifies if the message mentions everyone.
.. note::
This does not check if the ``@everyone`` text is in the message itself.
Rather this boolean indicates if the ``@everyone`` text is in the message
**and** it did end up mentioning everyone.
mentions : list
A list of :class:`Member` that were mentioned. If the message is in a private message
then the list is always empty.
.. warning::
The order of the mentions list is not in any particular order so you should
not rely on it. This is a discord limitation, not one with the library.
channel_mentions : list
A list of :class:`Channel` that were mentioned. If the message is in a private message
then the list is always empty.
id : str
The message ID.
attachments : list
A list of attachments given to a message.
"""
def __init__(self, **kwargs):
# at the moment, the timestamps seem to be naive so they have no time zone and operate on UTC time.
# we can use this to our advantage to use strptime instead of a complicated parsing routine.
# example timestamp: 2015-08-21T12:03:45.782000+00:00
# sometimes the .%f modifier is missing
self.edited_timestamp = utils.parse_time(kwargs.get('edited_timestamp'))
self.timestamp = utils.parse_time(kwargs.get('timestamp'))
self.tts = kwargs.get('tts')
self.content = kwargs.get('content')
self.mention_everyone = kwargs.get('mention_everyone')
self.embeds = kwargs.get('embeds')
self.id = kwargs.get('id')
self.channel = kwargs.get('channel')
self.author = User(**kwargs.get('author', {}))
self.attachments = kwargs.get('attachments')
self._handle_upgrades(kwargs.get('channel_id'))
self._handle_mentions(kwargs.get('mentions', []))
def _handle_mentions(self, mentions):
self.mentions = []
self.channel_mentions = []
if getattr(self.channel, 'is_private', True):
return
if self.channel is not None:
for mention in mentions:
id_search = mention.get('id')
member = utils.find(lambda m: m.id == id_search, self.server.members)
if member is not None:
self.mentions.append(member)
if self.server is not None:
for mention in self.raw_channel_mentions:
channel = utils.find(lambda m: m.id == mention, self.server.channels)
if channel is not None:
self.channel_mentions.append(channel)
@utils.cached_property
def raw_mentions(self):
"""A property that returns an array of user IDs matched with
the syntax of <@user_id> in the message content.
This allows you receive the user IDs of mentioned users
even in a private message context.
"""
return re.findall(r'<@([0-9]+)>', self.content)
@utils.cached_property
def raw_channel_mentions(self):
"""A property that returns an array of channel IDs matched with
the syntax of <#channel_id> in the message content.
This allows you receive the channel IDs of mentioned users
even in a private message context.
"""
return re.findall(r'<#([0-9]+)>', self.content)
@utils.cached_property
def clean_content(self):
"""A property that returns the content in a "cleaned up"
manner. This basically means that mentions are transformed
into the way the client shows it. e.g. ``<#id>`` will transform
into ``#name``.
"""
transformations = {
re.escape('<#{0.id}>'.format(channel)): '#' + channel.name
for channel in self.channel_mentions
}
mention_transforms = {
re.escape('<@{0.id}>'.format(member)): '@' + member.name
for member in self.mentions
}
transformations.update(mention_transforms)
def repl(obj):
return transformations.get(re.escape(obj.group(0)), '')
pattern = re.compile('|'.join(transformations.keys()))
return pattern.sub(repl, self.content)
def _handle_upgrades(self, channel_id):
self.server = None
if self.channel is None:
if channel_id is not None:
self.channel = Object(id=channel_id)
self.channel.is_private = True
return
if not self.channel.is_private:
self.server = self.channel.server
found = utils.find(lambda m: m.id == self.author.id, self.server.members)
if found is not None:
self.author = found
| [
"rapptz@gmail.com"
] | rapptz@gmail.com |
e202015319e99b6a80ad917d5e2b370f2dd271a5 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/nouns/_morphine.py | 943506387fc8ca7d5796f182ab71f36d81361708 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 375 | py |
#calss header
class _MORPHINE():
def __init__(self,):
self.name = "MORPHINE"
self.definitions = [u'a drug made from opium, used to stop people from feeling pain or to make people feel calmer']
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.specie = 'nouns'
def run(self, obj1 = [], obj2 = []):
return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
dd2ffa10dfcbccf5bba097a474bc3960747cc90a | e17483ba000de9c6135e26ae6c09d9aa33004574 | /ipynbs/python性能优化/使用Cython优化python程序的性能/使用cython加速python源码/静态化python模块/logistic_A.py | fd14d103b55e5a2a96dda6457df38711fa0c5c6f | [
"Apache-2.0"
] | permissive | HAOzj/TutorialForPython | 27ae50c6b9fb3289ae7f67b8106d3d4996d145a7 | df7a6db94b77f4861b11966399f5359d00911a16 | refs/heads/master | 2020-03-17T09:19:45.199165 | 2018-04-02T13:33:27 | 2018-04-02T13:33:27 | 133,470,105 | 1 | 0 | null | 2018-05-15T06:35:01 | 2018-05-15T06:35:01 | null | UTF-8 | Python | false | false | 252 | py | #cython: language_level=3
import cython
from math import exp
if cython.compiled:
print("Yep, I'm compiled.")
else:
print("Just a lowly interpreted script.")
@cython.boundscheck(False)
@cython.ccall
def logistic(x):
return 1/(1+exp(-x)) | [
"hsz1273327@gmail.com"
] | hsz1273327@gmail.com |
d45c8e5024e20d5a3c8cfd01ad650dea3b3917cc | c08cfe3c8feb5b04314557481e1f635cd20750cd | /write_idea.py | 87c6e69ed46acc09c49be0427a27da1c187f448f | [] | no_license | steinbachr/write-idea | 80c2d52d5f581174583902254b2177c2a058e016 | 2110f92872913db5f385efa828ab244a31945027 | refs/heads/master | 2016-09-06T13:49:07.931996 | 2014-10-01T16:34:25 | 2014-10-01T16:34:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,738 | py | from bs4 import BeautifulSoup
import csv
import random
import requests
#in the future, we can have this passed in to the program from command line
MAX_TO_CHOOSE = 3
MAX_ATTEMPTS_TO_GET_DEFINITIONS = 3
def get_random_words(max_to_choose):
"""
get one or more random words from the dictionary
:return: list of words of at most size max_to_choose as chosen from the dictionary
"""
all_words = []
with open('dictionary.csv', 'rb') as dictionary:
for word in dictionary:
all_words.append(word.strip().replace("\n", ""))
to_choose = random.randint(1, max_to_choose)
return random.sample(all_words, to_choose)
def get_definition_from_merriam(word):
"""
for the given word, get its definition from merriam webster
:param word: str the word to get the definition for
:return: str the definition of the word
"""
api_key = 'c2b1a784-7bd2-4efe-b2c9-b328ce42ed4e'
api_url = 'http://www.dictionaryapi.com/api/v1/references/collegiate/xml/{word}?key={key}'.format(word=word, key=api_key)
resp = requests.get(api_url)
definition = None
try:
soup = BeautifulSoup(resp.content)
definition = soup.dt.find(text=True).strip(":")
except Exception:
pass
return definition
print "\n\nWords Of The Day:"
print ">>>>>>>>>>>>>>>>>>>>>>>>>>>"
definitions = {}
i = 0
while len(definitions) < 1 and i < MAX_ATTEMPTS_TO_GET_DEFINITIONS:
for word in get_random_words(MAX_TO_CHOOSE):
definition = get_definition_from_merriam(word)
if definition:
definitions[word] = definition
i += 1
print "\n".join(["{word}: {defn}".format(word=word, defn=definition) for word, definition in definitions.items()])
| [
"steinbach.rj@gmail.com"
] | steinbach.rj@gmail.com |
33ee4f9be70261fa3f9375fad072eb4c621512dc | 36620131b411892abf1072694c3ac39b0da6d75e | /object_detection/models/tta.py | 0a110ef2584abf9300f527600d7f242c8c53c785 | [] | no_license | zhangyahui520/object-detection | 82f3e61fb0f6a39881b9ed3a750478a095023eff | b83b55a05e911d5132c79e3f0029a449e99f948d | refs/heads/master | 2022-12-23T17:39:19.281547 | 2020-09-21T08:40:53 | 2020-09-21T08:40:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,001 | py | import torch
from functools import partial
from torch import nn
from typing import Tuple, List, Callable
from object_detection.entities import (
YoloBoxes,
Confidences,
ImageBatch,
yolo_hflip,
yolo_vflip,
)
class HFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(3,))
self.to_boxes = to_boxes
self.box_transform = yolo_hflip
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch]
return box_batch, conf_batch
class VHFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(2, 3))
self.to_boxes = to_boxes
self.box_transform = lambda x: yolo_vflip(yolo_hflip(x))
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch] # type:ignore
return box_batch, conf_batch
class VFlipTTA:
def __init__(self, to_boxes: Callable,) -> None:
self.img_transform = partial(torch.flip, dims=(2,))
self.to_boxes = to_boxes
self.box_transform = yolo_vflip
def __call__(
self, model: nn.Module, images: ImageBatch
) -> Tuple[List[YoloBoxes], List[Confidences]]:
images = ImageBatch(self.img_transform(images))
outputs = model(images)
box_batch, conf_batch = self.to_boxes(outputs)
box_batch = [self.box_transform(boxes) for boxes in box_batch]
return box_batch, conf_batch
| [
"yao.ntno@gmail.com"
] | yao.ntno@gmail.com |
019b805bb5bfb35d449a227190521a6eeb6c52fb | 3f42c5e33e58921754000b41db0156d0def70cf3 | /Snakefile | f5d32c6189ffed86ddbee3452a9118303866f682 | [] | no_license | SilasK/oldSRA_download | ab2536708f513e583ab012b747688211ca302779 | 96b121b53a8008b78d6d7f7b42c6ff13ef427ab9 | refs/heads/master | 2022-02-09T00:32:41.101141 | 2019-06-06T12:16:58 | 2019-06-06T12:16:58 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,930 |
import pandas as pd
SRR_list = pd.read_csv(config['url_table'],sep='\t',index_col=0).index
if 'outdir' in config:
outdir = config['outdir']
else:
outdir= config['url_table'].replace("_info.tab.txt",'')
rule paired:
input:
expand("{outdir}/{SRR}_{direction}.fastq.gz",SRR=SRR_list,outdir=outdir,direction=['R1','R2']),
#expand("{outdir}/{SRR}.msh",SRR=SRR_list,outdir=outdir)
rule single:
input:
expand("{outdir}/{SRR}.fastq.gz",SRR=SRR_list,outdir=outdir),
rule download_SRR_single:
output:
"{outdir}/{SRR}.fastq.gz",
wildcard_constraints:
SRR="[A-Z0-9]+"
params:
outdir=outdir
threads:
4
conda:
"envs/download.yaml"
shell:
"parallel-fastq-dump --sra-id {wildcards.SRR} --threads {threads} --gzip --outdir {params.outdir}"
rule download_SRR_paired:
output:
"{outdir}/{SRR}_1.fastq.gz",
"{outdir}/{SRR}_2.fastq.gz"
params:
outdir=outdir
#wildcard_constraints:
# SRR="SRR[A-Z0-9]+"
threads:
4
conda:
"envs/download.yaml"
shell:
"parallel-fastq-dump --sra-id {wildcards.SRR} --threads {threads} --gzip --split-files --outdir {params.outdir}"
localrules: rename_SRR
rule rename_SRR:
input:
"{outdir}/{SRR}_1.fastq.gz",
"{outdir}/{SRR}_2.fastq.gz"
output:
"{outdir}/{SRR}_R1.fastq.gz",
"{outdir}/{SRR}_R2.fastq.gz"
threads:
1
shell:
"mv {input[0]} {output[0]}; "
"mv {input[1]} {output[1]}"
rule sketch_reads:
input:
"{folder}/{sample}_R1.fastq.gz",
"{folder}/{sample}_R2.fastq.gz"
output:
"{folder}/{sample}.msh"
params:
prefix= lambda wc, output: os.path.splitext(output[0])[0]
conda:
"envs/mash.yaml"
threads:
4
shell:
"mash sketch -p {threads} -o {params.prefix} {input}"
| [
"silas.kieser@gmail.com"
] | silas.kieser@gmail.com | |
f70aa53f367b064e21d312d73f67784a25bcfafa | 08bd0c20e99bac54760441de061bb74818837575 | /0x0A-python-inheritance/1-my_list.py | 5f43596d29f2a834cffdc0ceb90333663c1b3464 | [] | no_license | MachinEmmus/holbertonschool-higher_level_programming | 961874eb51e06bc823911e943573123ead0483e5 | 2b7ebe4dc2005db1fe0ca4c330c0bd00897bb157 | refs/heads/master | 2021-08-08T19:43:48.933704 | 2020-10-09T22:22:17 | 2020-10-09T22:22:17 | 226,937,346 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 168 | py | #!/usr/bin/python3
class MyList(list):
    """Class MyList inherits from list"""
def print_sorted(self):
"""Print a sorted list"""
print(sorted(self))
| [
"emonsalvep38@gmail.com"
] | emonsalvep38@gmail.com |
1562209b19b54551227f8146ef20185326bfbc6d | 1ba085b962452dd6c305803ad709f2b00aa305d1 | /venv/bin/pip2 | 1ad73f2a4913aacdcd403a5c4e127a8e1323253f | [] | no_license | tongri/lab_test | 2b9aea40ad72a0316238174b0c89e78779c47bf7 | e73ec5b60dfb8bde06302fce07dd15780e6a766c | refs/heads/master | 2023-04-15T18:56:45.950878 | 2021-04-19T10:41:44 | 2021-04-19T10:41:44 | 355,573,697 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 241 | #!/home/user/tutor/venv/bin/python2
# -*- coding: utf-8 -*-
import re
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"alexander.ksenzov@gmail.com"
] | alexander.ksenzov@gmail.com | |
8e6b88adc39dca84b244727245ff6e71b2828f61 | 0ce040a6ed4bc4ef131da9cb3df5672d995438fc | /apps/auth_ext/templatetags/url_login_tags.py | 5d696407431609c12da1fec012dcb3f912841cfc | [
"MIT"
] | permissive | frol/Fling-receiver | 59b553983b345312a96c11aec7e71e1c83ab334f | e4312ce4bd522ec0edfbfe7c325ca59e8012581a | refs/heads/master | 2016-09-05T16:10:28.270931 | 2014-02-10T07:52:51 | 2014-02-10T07:52:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 341 | py | from django.conf import settings
if 'coffin' in settings.INSTALLED_APPS:
from coffin.template import Library
else:
from django.template import Library
from auth_ext_15.models import UrlLoginToken
register = Library()
@register.simple_tag
def url_login_token(user):
return "url_login_token=%s" % UrlLoginToken.get_token(user)
| [
"frolvlad@gmail.com"
] | frolvlad@gmail.com |
22fcaae921154451a61d72f1c2467f394839b037 | 09e57dd1374713f06b70d7b37a580130d9bbab0d | /data/p2DJ/New/program/qiskit/class/startQiskit_Class29.py | fa2c3be3d8e26646725aac76f06a2446786f107b | [
"BSD-3-Clause"
] | permissive | UCLA-SEAL/QDiff | ad53650034897abb5941e74539e3aee8edb600ab | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | refs/heads/main | 2023-08-05T04:52:24.961998 | 2021-09-19T02:56:16 | 2021-09-19T02:56:16 | 405,159,939 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,963 | py | # qubit number=2
# total number=6
import cirq
import qiskit
from qiskit import IBMQ
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import BasicAer, execute, transpile
from pprint import pprint
from qiskit.test.mock import FakeVigo
from math import log2,floor, sqrt, pi
import numpy as np
import networkx as nx
def build_oracle(n: int, f) -> QuantumCircuit:
# implement the oracle O_f^\pm
# NOTE: use U1 gate (P gate) with \lambda = 180 ==> CZ gate
# or multi_control_Z_gate (issue #127)
controls = QuantumRegister(n, "ofc")
target = QuantumRegister(1, "oft")
oracle = QuantumCircuit(controls, target, name="Of")
for i in range(2 ** n):
rep = np.binary_repr(i, n)
if f(rep) == "1":
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
oracle.mct(controls, target[0], None, mode='noancilla')
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.barrier()
# oracle.draw('mpl', filename='circuit/deutsch-oracle.png')
return oracle
def make_circuit(n:int,f) -> QuantumCircuit:
# circuit begin
input_qubit = QuantumRegister(n, "qc")
target = QuantumRegister(1, "qt")
prog = QuantumCircuit(input_qubit, target)
# inverse last one (can be omitted if using O_f^\pm)
prog.x(target)
# apply H to get superposition
for i in range(n):
prog.h(input_qubit[i])
prog.h(input_qubit[1]) # number=1
prog.h(target)
prog.barrier()
# apply oracle O_f
oracle = build_oracle(n, f)
prog.append(
oracle.to_gate(),
[input_qubit[i] for i in range(n)] + [target])
# apply H back (QFT on Z_2^n)
for i in range(n):
prog.h(input_qubit[i])
prog.barrier()
# measure
prog.y(input_qubit[1]) # number=2
prog.y(input_qubit[1]) # number=3
prog.cx(input_qubit[1],input_qubit[0]) # number=4
prog.cx(input_qubit[1],input_qubit[0]) # number=5
# circuit end
return prog
if __name__ == '__main__':
n = 2
f = lambda rep: rep[-1]
# f = lambda rep: "1" if rep[0:2] == "01" or rep[0:2] == "10" else "0"
# f = lambda rep: "0"
prog = make_circuit(n, f)
sample_shot =2800
backend = BasicAer.get_backend('statevector_simulator')
circuit1 = transpile(prog,FakeVigo())
circuit1.x(qubit=3)
circuit1.x(qubit=3)
prog = circuit1
info = execute(prog, backend=backend).result().get_statevector()
qubits = round(log2(len(info)))
info = {
np.binary_repr(i, qubits): round((info[i]*(info[i].conjugate())).real,3)
for i in range(2 ** qubits)
}
writefile = open("../data/startQiskit_Class29.csv","w")
print(info,file=writefile)
print("results end", file=writefile)
print(circuit1.depth(),file=writefile)
print(circuit1,file=writefile)
writefile.close()
| [
"wangjiyuan123@yeah.net"
] | wangjiyuan123@yeah.net |
dea529b9c44b8eae30d6e13b1b93487a76945c7d | ea4e3ac0966fe7b69f42eaa5a32980caa2248957 | /download/unzip/pyobjc/pyobjc-14/pyobjc/stable/PyOpenGL-2.0.2.01/src/shadow/WGL.ARB.buffer_region.0100.py | e5a49cc1445d9af64e1e0388088e5cd8a8402c82 | [] | no_license | hyl946/opensource_apple | 36b49deda8b2f241437ed45113d624ad45aa6d5f | e0f41fa0d9d535d57bfe56a264b4b27b8f93d86a | refs/heads/master | 2023-02-26T16:27:25.343636 | 2020-03-29T08:50:45 | 2020-03-29T08:50:45 | 249,169,732 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,838 | py | # This file was created automatically by SWIG.
# Don't modify this file, modify the SWIG interface instead.
# This file is compatible with both classic and new-style classes.
import _buffer_region
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "this"):
if isinstance(value, class_type):
self.__dict__[name] = value.this
if hasattr(value,"thisown"): self.__dict__["thisown"] = value.thisown
del value.thisown
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static) or hasattr(self,name) or (name == "thisown"):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError,name
import types
try:
_object = types.ObjectType
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
del types
__version__ = _buffer_region.__version__
__date__ = _buffer_region.__date__
__api_version__ = _buffer_region.__api_version__
__author__ = _buffer_region.__author__
__doc__ = _buffer_region.__doc__
wglInitBufferRegionARB = _buffer_region.wglInitBufferRegionARB
__info = _buffer_region.__info
wglCreateBufferRegionARB = _buffer_region.wglCreateBufferRegionARB
wglDeleteBufferRegionARB = _buffer_region.wglDeleteBufferRegionARB
wglSaveBufferRegionARB = _buffer_region.wglSaveBufferRegionARB
wglRestoreBufferRegionARB = _buffer_region.wglRestoreBufferRegionARB
| [
"hyl946@163.com"
] | hyl946@163.com |
8012ec6497a89fc0ec225279447a91a004faebf2 | 07a9d52a91df135c82660c601811e3f623fe440c | /timereporter/commands/command.py | ecc3bff895ea285dbf057c436d9fa73495511e68 | [] | no_license | Godsmith/timereporter | ea9e622db880721bbf82f17e3e004434878ccd51 | 384ad973ea913a77593f665c97b337b54bc09b4a | refs/heads/master | 2021-07-05T09:14:28.327572 | 2021-06-15T21:31:52 | 2021-06-15T21:31:52 | 102,777,809 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,649 | py | import datetime
from typing import Union, Dict, Tuple
from datetime import date
from typing import List
from timereporter.calendar import Calendar
from timereporter.mydatetime import timedelta
from timereporter.views.view import View
from timereporter.views.console_week_view import ConsoleWeekView
class Command:
TIMEDELTA = timedelta(weeks=1)
WRITE_TO_DISK = True
def __init__(self, calendar: Calendar, date_: date, args: Union[list, str]):
self.calendar = calendar
self.date = date_
self.args = args
# TODO: use the new argument splitter method instead
if isinstance(self.args, str):
self.args = self.args.split()
if "last" in self.args:
self.date -= self.TIMEDELTA
elif "next" in self.args:
self.date += self.TIMEDELTA
self.options = self._parse_options()
def _parse_options(self) -> Dict[str, str]:
options = {}
new_args = []
assert isinstance(self.args, list)
for arg in self.args:
if arg.startswith("--"):
name = arg.split("=")[0]
value = arg.split("=")[1] if "=" in arg else True
options[name] = value
if name not in self.valid_options():
raise UnexpectedOptionError(name)
else:
new_args.append(arg)
self.args = new_args
return options
@classmethod
def can_handle(cls, args) -> bool:
args = [arg for arg in args if not arg.startswith("--")]
args = [arg for arg in args if arg not in ("last", "next")]
return cls._can_handle(args)
@classmethod
def _can_handle(cls, args: List[str]) -> bool:
raise NotImplementedError
def valid_options(self) -> List[str]:
return []
def execute(
self, created_at: datetime.datetime = datetime.datetime.now()
) -> Tuple[Calendar, View]:
return self.new_calendar(created_at), self.view()
def view(self) -> View:
return ConsoleWeekView(self.date)
def new_calendar(self, created_at: datetime.datetime) -> Calendar:
return self.calendar
class CommandError(Exception):
pass
class UnexpectedOptionError(CommandError):
"""Raised when there is an option not expected by the command."""
def __init__(self, option: Union[str, list]):
suffix = ""
if isinstance(option, list):
option = ", ".join(option)
suffix = "s"
# TODO: this should print the help for the command instead
super().__init__(f"Error: unexpected option{suffix}: {option}")
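The splitting of `--name=value` tokens out of the positional arguments is the heart of `_parse_options`; this standalone sketch (a hypothetical helper, not part of the package, and without the `valid_options` check) mirrors that loop:

```python
def split_options(args):
    """Split '--name=value' / '--flag' tokens out of a token list, the way
    Command._parse_options does: bare flags map to True, '=' values stay strings."""
    options, positional = {}, []
    for arg in args:
        if arg.startswith("--"):
            name = arg.split("=")[0]
            value = arg.split("=")[1] if "=" in arg else True
            options[name] = value
        else:
            positional.append(arg)
    return options, positional

opts, rest = split_options(["project", "--html", "--width=80", "9", "17"])
```

Note that option names keep their leading `--`, matching how `valid_options()` entries are compared in the class above.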
| [
"filip.lange@gmail.com"
] | filip.lange@gmail.com |
538ac952035739975c914adbf70b6b475a3e7114 | 05b42178aaefd7efdb2fb19fdea8e58056d8d4bd | /geeksforgeeks/graph/bfs/recursive/test.py | 5b140e1541e1d4ea37276028ba914fdf1ee91a6a | [] | no_license | chrisjdavie/interview_practice | 43ca3df25fb0538d685a59ac752a6a4b269c44e9 | 2d47d583ed9c838a802b4aa4cefe649c77f5dd7f | refs/heads/master | 2023-08-16T18:22:46.492623 | 2023-08-16T16:04:01 | 2023-08-16T16:04:01 | 247,268,317 | 0 | 0 | null | 2020-03-14T17:35:12 | 2020-03-14T12:01:43 | Python | UTF-8 | Python | false | false | 1,403 | py | from unittest import TestCase
from run import initialise_graph, bfs
class TestExamples(TestCase):
def test_raises(self):
N = 201
edges = [(0, i+1) for i in range(N)]
graph = initialise_graph(edges)
with self.assertRaises(ValueError):
bfs(graph, N)
def test_example_1(self):
num_nodes = 5
edges = [(0, 1), (0, 2), (0, 3), (2, 4)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2, 3, 4])
def test_example_2(self):
num_nodes = 3
edges = [(0, 1), (0, 2)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2])
def test_circular(self):
# it's a graph not a tree!
num_nodes = 2
edges = [(0, 1), (1, 0)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1])
def test_circular_not_root(self):
# it's a graph not a tree!
        num_nodes = 3  # nodes 0, 1 and 2
edges = [(0, 1), (1, 2), (2, 1)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2])
def test_double_link(self):
# it's a graph not a tree!
        num_nodes = 4  # nodes 0 through 3
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
graph = initialise_graph(edges)
self.assertEqual(bfs(graph, num_nodes), [0, 1, 2, 3])
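The expected visit orders above follow from a plain queue-based BFS over an adjacency list; a self-contained sketch consistent with these tests (not necessarily the `run.py` implementation — its recursive variant and the `ValueError` path exercised by `test_raises` are elided here):

```python
from collections import defaultdict, deque

def initialise_graph(edges):
    """Build a directed adjacency list from (src, dst) pairs."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    return graph

def bfs(graph, num_nodes, root=0):
    """Visit nodes in breadth-first order; `seen` guards against cycles."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order
```

The `seen` set is what makes the circular and double-link cases terminate: a node is enqueued at most once, so back-edges and diamond shapes are harmless.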
| [
"cjdavie@googlemail.com"
] | cjdavie@googlemail.com |
ad3a856cb3f801a06193c47798147f8d62bf9219 | 6cecdc007a3aafe0c0d0160053811a1197aca519 | /apps/reports/templatetags/blacklist_tags.py | aeaa55dde3ba57c9a67cbc0f798f30964de013e5 | [] | no_license | commtrack/temp-aquatest | 91d678c927cc4b2dce6f709afe7faf2768b58157 | 3b10d179552b1e9d6a0e4ad5e91a92a05dba19c7 | refs/heads/master | 2016-08-04T18:06:47.582196 | 2010-09-29T13:20:13 | 2010-09-29T13:20:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,007 | py | from datetime import timedelta
from django import template
from hq.utils import build_url as build_url_util
register = template.Library()
@register.simple_tag
def build_device_url(domain, device_id):
"""Builds the link on the report when you click on the device
to get a filtered view of the metadata."""
return build_url_util("/reports/%s/custom/metadata?filter_deviceid=%s" %\
(domain.id, device_id))
@register.simple_tag
def build_count_url(domain, device_id, date):
"""Builds the link on the report when you click on the device
to get a filtered view of the metadata."""
# this is pretty ugly, but one way to get the URL's working in email
day_after_date = date + timedelta(days=1)
return build_url_util\
("/reports/%s/custom/metadata?filter_deviceid=%s&filter_timeend__gte=%s&filter_timeend__lte=%s" \
% (domain.id, device_id, date, day_after_date))
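The two tags only differ in the query string they hand to `build_url`; the shape of the count URL (one device restricted to a one-day `filter_timeend` window) can be seen without Django (a hypothetical standalone function with plain values in place of the `domain` object):

```python
from datetime import date, timedelta

def count_url_path(domain_id, device_id, day):
    """Query-string path for one device limited to a single day's submissions."""
    day_after = day + timedelta(days=1)
    return (
        "/reports/%s/custom/metadata?filter_deviceid=%s"
        "&filter_timeend__gte=%s&filter_timeend__lte=%s"
        % (domain_id, device_id, day, day_after)
    )

path = count_url_path(3, "dev42", date(2010, 9, 1))
```

Because `date` objects stringify to ISO format, the `gte`/`lte` pair brackets exactly one calendar day.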
| [
"allen.machary@gmail.com"
] | allen.machary@gmail.com |
1d0cb7bf6626f99b695aa23d6257d11309fdb68b | 260817cebb942bf825e1f68f4566240f097419d5 | /day21/7.把模块当成脚本来使用.py | cf7d75246dd9335c79d04127f427d6c0f5b74a18 | [] | no_license | microease/old-boys-python-15 | ff55d961192a0b31aa8fd33a548f161497b12785 | 7e9c5f74201db9ea26409fb9cfe78277f93e360e | refs/heads/master | 2020-05-19T10:32:21.022217 | 2019-11-25T17:00:54 | 2019-11-25T17:00:54 | 184,972,964 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 876 | py | # by luffycity.com
# Ways to execute a .py file:
# run it from cmd / from python : execute the file directly - run the file as a script
#                                 or: import the file
# Both are .py files:
# run this file directly -> the file is a script
# import this file       -> the file is a module
# import re
# import time
#
# import my_module
# import calculate
#
# ret = calculate.main('1*2+3')
# print(ret)
# When a .py file is used
# as a script : it can provide a feature on its own and handle the interaction itself
# as a module : its feature can be imported and called, but it does not interact on its own
# The __name__ variable in a file:
# when the file is executed as a script,   __name__ == '__main__'
# when the file is imported as a module,   __name__ == 'the module name'
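Concretely, this is why the usual entry-point guard looks like the following (module name in the comments is hypothetical):

```python
def main():
    """Script-style entry point: runs only when the file is executed directly."""
    return "running as a script"

if __name__ == "__main__":
    # `python thisfile.py` -> __name__ is '__main__', so behave as a script.
    main()
else:
    # `import thisfile` -> __name__ is 'thisfile'; no side effects on import.
    pass
```

With the guard in place, the same file works both as an executable script and as an importable module.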
import calculate
print(calculate.main('1+2')) | [
"microease@163.com"
] | microease@163.com |
317cee2c4b85442fafb9023514db5abc7453bfab | f5d1e8b54ddbc51a9ef1b868eee93096d9b0fbeb | /weapp/services/__init__.py | 9fd36e6d2324464ee357370cd93995ae62682b7f | [] | no_license | chengdg/weizoom | 97740c121724fae582b10cdbe0ce227a1f065ece | 8b2f7befe92841bcc35e0e60cac5958ef3f3af54 | refs/heads/master | 2021-01-22T20:29:30.297059 | 2017-03-30T08:39:25 | 2017-03-30T08:39:25 | 85,268,003 | 1 | 3 | null | null | null | null | UTF-8 | Python | false | false | 76 | py |
# -*- coding: utf-8 -*-
#import os
#from django.conf import settings
| [
"jiangzhe@weizoom.com"
] | jiangzhe@weizoom.com |
9551710717c579f61e12f178765ee0e9b2e661a6 | 0376a3528032dc8637123eb4307fac53fe33c631 | /openstack/_hacking.py | 94952630c61025d4208e5f250312451351a2910a | [
"Apache-2.0"
] | permissive | FrontSide/openstacksdk | d5a7461721baf67d3590d2611538620b15939999 | 9fc0fdaed3114f06c7fc90ce8cf338c5ae01df2f | refs/heads/master | 2021-02-10T21:14:37.463978 | 2020-02-26T17:23:45 | 2020-02-26T17:23:45 | 244,419,992 | 0 | 0 | Apache-2.0 | 2020-03-02T16:32:40 | 2020-03-02T16:32:39 | null | UTF-8 | Python | false | false | 1,401 | py | # Copyright (c) 2019, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
"""
Guidelines for writing new hacking checks
- Use only for openstacksdk specific tests. OpenStack general tests
should be submitted to the common 'hacking' module.
- Pick numbers in the range O3xx. Find the current test with
the highest allocated number and then pick the next value.
- Keep the test method code in the source file ordered based
on the O3xx value.
- List the new rule in the top level HACKING.rst file
- Add test cases for each new rule to nova/tests/unit/test_hacking.py
"""
SETUPCLASS_RE = re.compile(r"def setUpClass\(")
def assert_no_setupclass(logical_line):
"""Check for use of setUpClass
O300
"""
if SETUPCLASS_RE.match(logical_line):
yield (0, "O300: setUpClass not allowed")
def factory(register):
register(assert_no_setupclass)
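A check registered this way is an ordinary generator: a flake8-style framework calls it once per logical line and collects any `(offset, message)` tuples it yields. Its behaviour can be exercised directly, without flake8 (regex and message copied from above):

```python
import re

SETUPCLASS_RE = re.compile(r"def setUpClass\(")

def assert_no_setupclass(logical_line):
    # Yield an offense when the logical line defines setUpClass.
    if SETUPCLASS_RE.match(logical_line):
        yield (0, "O300: setUpClass not allowed")

flagged = list(assert_no_setupclass("def setUpClass(cls):"))
clean = list(assert_no_setupclass("def setUp(self):"))
```

Yielding nothing means the line passes, which is why the clean case produces an empty list.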
| [
"mordred@inaugust.com"
] | mordred@inaugust.com |
4b983a60a39a1f46e6e9b4ca7b66a505cf8aaedf | 7698a74a06e10dd5e1f27e6bd9f9b2a5cda1c5fb | /zzz.scripts_from_reed/getposesfast.py | b472fb639bae241dfbfb02f717d2a318ea0f5f08 | [] | no_license | kingbo2008/teb_scripts_programs | ef20b24fe8982046397d3659b68f0ad70e9b6b8b | 5fd9d60c28ceb5c7827f1bd94b1b8fdecf74944e | refs/heads/master | 2023-02-11T00:57:59.347144 | 2021-01-07T17:42:11 | 2021-01-07T17:42:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,897 | py |
# This script is written by Reed Stein and Trent Balius in April, 2017.
# This a fast get poses script
import os
import sys
import gzip
def get_zinc_names_and_chunks_from_extract_all_sort_uniq(filename,N):
print "running function: get_zinc_names_and_chunks_from_extract_all_sort_uniq"
fh = open(filename,'r')
zinc_dic = {}
count = 0
for line in fh:
splitline = line.split()
zincid = splitline[2]
chunk = splitline[0]
zinc_dic[zincid] = chunk
#print chunk, zincid
count = count + 1
if count > N:
break
return zinc_dic
def process_gz(filename, lig_dict, chunk, zinclist):
print "running function: process_gz"
#this function collects the name, grids, and total energy of the best scoring pose of each ligand
lig_list = []
ligandFile = gzip.open(filename,'rb')
lig_line = []
line_count = 0
for line in ligandFile:
if len(lig_line) == 3:
lig_list.append(lig_line)
lig_line = []
line1 = line.strip().split()
if len(line1) > 1:
if line1[1] == "Name:":
lig_line.append(line_count)
lig_line.append(line1[2])
elif line1[1] == "Total":
lig_line.append(line1[3])
line_count += 1
for comp in lig_list:
line_num = comp[0]
name = comp[1]
tot_e = float(comp[2])
if not (name in zinclist): #skip any ligand not in list (dictionary)
#print name
continue
if not (name in lig_dict):
lig_dict[name] = [line_num, tot_e, chunk]
#print name, line_num, tot_e
elif name in lig_dict and lig_dict[name][1] > tot_e:
lig_dict[name][0] = line_num
lig_dict[name][1] = tot_e
lig_dict[name][2] = chunk
#print "update", name, line_num, tot_e
return lig_dict
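Stripped of the gzip/mol2 bookkeeping, `process_gz` is a keep-the-lowest-energy reduction per ligand name; a Python 3 sketch of just that core (the script above is Python 2, and `keep_best_poses` is a hypothetical helper, not part of it):

```python
def keep_best_poses(records, best=None):
    """records: iterable of (name, line_number, total_energy, chunk) tuples.
    Keep the lowest-energy pose seen so far for each ligand name."""
    best = {} if best is None else best
    for name, line_num, tot_e, chunk in records:
        if name not in best or best[name][1] > tot_e:
            best[name] = [line_num, tot_e, chunk]
    return best

poses = keep_best_poses([
    ("ZINC000001", 10, -42.5, "chunk0"),
    ("ZINC000002", 55, -17.0, "chunk0"),
    ("ZINC000001", 90, -48.1, "chunk1"),  # lower energy: replaces the first pose
])
```

Recording the line number and chunk alongside the energy is what lets `write_out_poses` later seek straight to the winning pose in the right mol2 file.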
def write_out_poses(lig_dict, lig_dir):
print "running function: write_out_poses"
line_number_dict = {}
for name in lig_dict:
line_num = lig_dict[name][0]
#chunk_num = lig_dict[name][2].split("chunk")[-1]
dirname = lig_dict[name][2]
#if chunk_num in line_number_dict:
if dirname in line_number_dict:
line_number_dict[dirname].append(line_num)
else:
line_number_dict[dirname] = [line_num]
output = open("poses.mol2",'w')
for dirname_l in line_number_dict:
#print dirname_l
line_number_list = line_number_dict[dirname_l]
#print line_number_dict[dirname_l]
db2_gz_file = lig_dir+dirname_l+"/test.mol2.gz"
ligandFile = gzip.open(db2_gz_file,'rb')
new_line_count = 0
name_found = False
header_flag = False
for new_line in ligandFile:
splitline = new_line.strip().split()
            if new_line_count in line_number_list: # line is in the list so set flag to True for writing out
#print(new_line)
if len(splitline) > 1:
if splitline[1] == "Name:":
name_found = True
#output.write(new_line)
            else: # otherwise see if you have reached a new pose; if it is not in the list then stop writing.
if (len(splitline) > 1):
if splitline[1] == "Name:" and name_found and new_line_count not in line_number_list:
name_found = False
if new_line[0] == "#" and not header_flag:
#print new_line
header_flag = True
header = ''
header=header+new_line
new_line_count += 1
continue
if new_line[0] == "#":
header=header+new_line
new_line_count += 1
continue
if name_found:
                if header_flag: # line 91. if the header flag is true and it is no longer in the header and it is a pose you want to write, write the header first.
output.write(header)
output.write(new_line)
if new_line[0] != "#": # if line does not start with a # symbol then set header flag to false. setting to false must go after line 91.
header_flag = False
new_line_count += 1
ligandFile.close()
output.close()
def main():
if (len(sys.argv) != 4):
print "Give this script 3 inputs:"
print "(1) path to where docking is located. "
print "(2) path to where the extract all file is. "
print "(3) number of molecules (poses) to get. "
exit()
docking_dir = sys.argv[1]
#extractname = sys.argv[2]
extractfile = sys.argv[2]
number_of_poses = int(sys.argv[3])
print "docking_dir: "+docking_dir
#print "extractname: "+extractname
#extractfile = docking_dir+extractname
print "extract file path: "+extractfile
print "number_of_poses: "+str(number_of_poses)
#os.chdir(lig_dir)
#if os.path.isfile(lig_dir+"poses.mol2"):
#if os.path.isfile(docking_dir+"poses.mol2"):
if os.path.isfile("poses.mol2"):
print "poses.mol2 already exists. Quitting."
sys.exit()
#extractfile = docking_dir+"extract_all.sort.uniq.txt"
if not os.path.isfile(extractfile):
print "there needs to be an extract_all.sort.uniq.txt. "
exit()
zinc_dic = get_zinc_names_and_chunks_from_extract_all_sort_uniq(extractfile,number_of_poses)
#print zinc_dic.keys()
#chunk_list = [name for name in os.listdir(".") if os.path.isdir(name) and name[0:5] == "chunk"]
chunk_dic = {}
chunk_list = []
for key in zinc_dic:
if not (zinc_dic[key] in chunk_dic):
chunk_list.append(zinc_dic[key])
chunk_dic[zinc_dic[key]]=0
chunk_dic[zinc_dic[key]] = chunk_dic[zinc_dic[key]] + 1
lig_dict = {}
for chunk in chunk_list:
print chunk, chunk_dic[chunk]
gz_file = docking_dir+chunk+"/test.mol2.gz"
lig_dict = process_gz(gz_file, lig_dict, chunk, zinc_dic)
write_out_poses(lig_dict, docking_dir)
main()
| [
"tbalius@gimel.cluster.ucsf.bkslab.org"
] | tbalius@gimel.cluster.ucsf.bkslab.org |
42c29af69aa89f208e6e81d0a1d3e3ef3ad6c156 | e8dba002d8916a468e559a52f254c0d92532d6b2 | /tests/components/zha/test_climate.py | fd8bcaa1085d34629d61d06fffb1371eb6e9e5e3 | [
"Apache-2.0"
] | permissive | thomasgermain/home-assistant | 32b0f4d888220f4ce49dc85e506d0db39445c6c0 | 9673b93842ddcecc7e6a6d65e6d4f5b8a1089c43 | refs/heads/vaillant | 2023-08-21T23:50:24.679456 | 2020-05-20T21:01:18 | 2023-08-03T07:11:35 | 197,781,893 | 8 | 4 | Apache-2.0 | 2023-02-10T06:56:47 | 2019-07-19T13:57:53 | Python | UTF-8 | Python | false | false | 50,549 | py | """Test ZHA climate."""
from unittest.mock import patch
import pytest
import zhaquirks.sinope.thermostat
import zhaquirks.tuya.ts0601_trv
import zigpy.profiles
import zigpy.types
import zigpy.zcl.clusters
from zigpy.zcl.clusters.hvac import Thermostat
import zigpy.zcl.foundation as zcl_f
from homeassistant.components.climate import (
ATTR_CURRENT_TEMPERATURE,
ATTR_FAN_MODE,
ATTR_FAN_MODES,
ATTR_HVAC_ACTION,
ATTR_HVAC_MODE,
ATTR_HVAC_MODES,
ATTR_PRESET_MODE,
ATTR_TARGET_TEMP_HIGH,
ATTR_TARGET_TEMP_LOW,
DOMAIN as CLIMATE_DOMAIN,
FAN_AUTO,
FAN_LOW,
FAN_ON,
PRESET_AWAY,
PRESET_BOOST,
PRESET_COMFORT,
PRESET_ECO,
PRESET_NONE,
SERVICE_SET_FAN_MODE,
SERVICE_SET_HVAC_MODE,
SERVICE_SET_PRESET_MODE,
SERVICE_SET_TEMPERATURE,
HVACAction,
HVACMode,
)
from homeassistant.components.zha.climate import HVAC_MODE_2_SYSTEM, SEQ_OF_OPERATION
from homeassistant.components.zha.core.const import PRESET_COMPLEX, PRESET_SCHEDULE
from homeassistant.const import (
ATTR_ENTITY_ID,
ATTR_TEMPERATURE,
STATE_UNKNOWN,
Platform,
)
from homeassistant.core import HomeAssistant
from .common import async_enable_traffic, find_entity_id, send_attributes_report
from .conftest import SIG_EP_INPUT, SIG_EP_OUTPUT, SIG_EP_PROFILE, SIG_EP_TYPE
CLIMATE = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.general.Identify.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id],
}
}
CLIMATE_FAN = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.general.Identify.cluster_id,
zigpy.zcl.clusters.hvac.Fan.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id],
}
}
CLIMATE_SINOPE = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.general.Identify.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
65281,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id, 65281],
},
196: {
SIG_EP_PROFILE: 0xC25D,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [zigpy.zcl.clusters.general.PowerConfiguration.cluster_id],
SIG_EP_OUTPUT: [],
},
}
CLIMATE_ZEN = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.general.Identify.cluster_id,
zigpy.zcl.clusters.hvac.Fan.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id],
}
}
CLIMATE_MOES = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.general.Identify.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
61148,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id],
}
}
CLIMATE_ZONNSMART = {
1: {
SIG_EP_PROFILE: zigpy.profiles.zha.PROFILE_ID,
SIG_EP_TYPE: zigpy.profiles.zha.DeviceType.THERMOSTAT,
SIG_EP_INPUT: [
zigpy.zcl.clusters.general.Basic.cluster_id,
zigpy.zcl.clusters.hvac.Thermostat.cluster_id,
zigpy.zcl.clusters.hvac.UserInterface.cluster_id,
61148,
],
SIG_EP_OUTPUT: [zigpy.zcl.clusters.general.Ota.cluster_id],
}
}
MANUF_SINOPE = "Sinope Technologies"
MANUF_ZEN = "Zen Within"
MANUF_MOES = "_TZE200_ckud7u2l"
MANUF_ZONNSMART = "_TZE200_hue3yfsn"
ZCL_ATTR_PLUG = {
"abs_min_heat_setpoint_limit": 800,
"abs_max_heat_setpoint_limit": 3000,
"abs_min_cool_setpoint_limit": 2000,
"abs_max_cool_setpoint_limit": 4000,
"ctrl_sequence_of_oper": Thermostat.ControlSequenceOfOperation.Cooling_and_Heating,
"local_temperature": None,
"max_cool_setpoint_limit": 3900,
"max_heat_setpoint_limit": 2900,
"min_cool_setpoint_limit": 2100,
"min_heat_setpoint_limit": 700,
"occupancy": 1,
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2200,
"pi_cooling_demand": None,
"pi_heating_demand": None,
"running_mode": Thermostat.RunningMode.Off,
"running_state": None,
"system_mode": Thermostat.SystemMode.Off,
"unoccupied_heating_setpoint": 2200,
"unoccupied_cooling_setpoint": 2300,
}
@pytest.fixture(autouse=True)
def climate_platform_only():
"""Only set up the climate and required base platforms to speed up tests."""
with patch(
"homeassistant.components.zha.PLATFORMS",
(
Platform.BUTTON,
Platform.CLIMATE,
Platform.BINARY_SENSOR,
Platform.NUMBER,
Platform.SELECT,
Platform.SENSOR,
Platform.SWITCH,
),
):
yield
@pytest.fixture
def device_climate_mock(hass, zigpy_device_mock, zha_device_joined):
"""Test regular thermostat device."""
async def _dev(clusters, plug=None, manuf=None, quirk=None):
if plug is None:
plugged_attrs = ZCL_ATTR_PLUG
else:
plugged_attrs = {**ZCL_ATTR_PLUG, **plug}
zigpy_device = zigpy_device_mock(clusters, manufacturer=manuf, quirk=quirk)
zigpy_device.node_desc.mac_capability_flags |= 0b_0000_0100
zigpy_device.endpoints[1].thermostat.PLUGGED_ATTR_READS = plugged_attrs
zha_device = await zha_device_joined(zigpy_device)
await async_enable_traffic(hass, [zha_device])
await hass.async_block_till_done()
return zha_device
return _dev
@pytest.fixture
async def device_climate(device_climate_mock):
"""Plain Climate device."""
return await device_climate_mock(CLIMATE)
@pytest.fixture
async def device_climate_fan(device_climate_mock):
"""Test thermostat with fan device."""
return await device_climate_mock(CLIMATE_FAN)
@pytest.fixture
@patch.object(
zigpy.zcl.clusters.manufacturer_specific.ManufacturerSpecificCluster,
"ep_attribute",
"sinope_manufacturer_specific",
)
async def device_climate_sinope(device_climate_mock):
"""Sinope thermostat."""
return await device_climate_mock(
CLIMATE_SINOPE,
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
@pytest.fixture
async def device_climate_zen(device_climate_mock):
"""Zen Within thermostat."""
return await device_climate_mock(CLIMATE_ZEN, manuf=MANUF_ZEN)
@pytest.fixture
async def device_climate_moes(device_climate_mock):
"""MOES thermostat."""
return await device_climate_mock(
CLIMATE_MOES, manuf=MANUF_MOES, quirk=zhaquirks.tuya.ts0601_trv.MoesHY368_Type1
)
@pytest.fixture
async def device_climate_zonnsmart(device_climate_mock):
"""ZONNSMART thermostat."""
return await device_climate_mock(
CLIMATE_ZONNSMART,
manuf=MANUF_ZONNSMART,
quirk=zhaquirks.tuya.ts0601_trv.ZonnsmartTV01_ZG,
)
def test_sequence_mappings() -> None:
"""Test correct mapping between control sequence -> HVAC Mode -> Sysmode."""
for hvac_modes in SEQ_OF_OPERATION.values():
for hvac_mode in hvac_modes:
assert hvac_mode in HVAC_MODE_2_SYSTEM
assert Thermostat.SystemMode(HVAC_MODE_2_SYSTEM[hvac_mode]) is not None
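The invariant this test pins down — every HVAC mode offered by a control sequence must translate to a Zigbee system mode — can be illustrated with plain stand-in tables (the values below are illustrative, not the real ZHA constants):

```python
# Illustrative stand-ins for SEQ_OF_OPERATION and HVAC_MODE_2_SYSTEM.
seq_of_operation = {
    0x00: ["off", "cool"],
    0x02: ["off", "heat"],
    0x04: ["off", "cool", "heat", "heat_cool"],
}
hvac_mode_2_system = {"off": 0x00, "heat_cool": 0x01, "cool": 0x03, "heat": 0x04}

# Every mode reachable through some control sequence needs a translation,
# otherwise setting that mode on a device would have no ZCL encoding.
missing = {
    mode
    for modes in seq_of_operation.values()
    for mode in modes
    if mode not in hvac_mode_2_system
}
```

An empty `missing` set is exactly what the nested asserts in the test verify against the real tables.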
async def test_climate_local_temperature(hass: HomeAssistant, device_climate) -> None:
"""Test local temperature."""
thrm_cluster = device_climate.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_CURRENT_TEMPERATURE] is None
await send_attributes_report(hass, thrm_cluster, {0: 2100})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_CURRENT_TEMPERATURE] == 21.0
async def test_climate_hvac_action_running_state(
hass: HomeAssistant, device_climate_sinope
) -> None:
"""Test hvac action via running state."""
thrm_cluster = device_climate_sinope.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate_sinope, hass)
sensor_entity_id = find_entity_id(
Platform.SENSOR, device_climate_sinope, hass, "hvac"
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.OFF
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.OFF
await send_attributes_report(
hass, thrm_cluster, {0x001E: Thermostat.RunningMode.Off}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.OFF
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.OFF
await send_attributes_report(
hass, thrm_cluster, {0x001C: Thermostat.SystemMode.Auto}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.IDLE
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.IDLE
await send_attributes_report(
hass, thrm_cluster, {0x001E: Thermostat.RunningMode.Cool}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.COOLING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.COOLING
await send_attributes_report(
hass, thrm_cluster, {0x001E: Thermostat.RunningMode.Heat}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.HEATING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.HEATING
await send_attributes_report(
hass, thrm_cluster, {0x001E: Thermostat.RunningMode.Off}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.IDLE
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.IDLE
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Fan_State_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.FAN
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.FAN
async def test_climate_hvac_action_running_state_zen(
hass: HomeAssistant, device_climate_zen
) -> None:
"""Test Zen hvac action via running state."""
thrm_cluster = device_climate_zen.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate_zen, hass)
sensor_entity_id = find_entity_id(Platform.SENSOR, device_climate_zen, hass)
state = hass.states.get(entity_id)
assert ATTR_HVAC_ACTION not in state.attributes
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == "unknown"
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Cool_2nd_Stage_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.COOLING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.COOLING
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Fan_State_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.FAN
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.FAN
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Heat_2nd_Stage_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.HEATING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.HEATING
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Fan_2nd_Stage_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.FAN
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.FAN
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Cool_State_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.COOLING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.COOLING
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Fan_3rd_Stage_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.FAN
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.FAN
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Heat_State_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.HEATING
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.HEATING
await send_attributes_report(
hass, thrm_cluster, {0x0029: Thermostat.RunningState.Idle}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.OFF
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.OFF
await send_attributes_report(
hass, thrm_cluster, {0x001C: Thermostat.SystemMode.Heat}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.IDLE
hvac_sensor_state = hass.states.get(sensor_entity_id)
assert hvac_sensor_state.state == HVACAction.IDLE
async def test_climate_hvac_action_pi_demand(
hass: HomeAssistant, device_climate
) -> None:
"""Test hvac action based on pi_heating/cooling_demand attrs."""
thrm_cluster = device_climate.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
state = hass.states.get(entity_id)
assert ATTR_HVAC_ACTION not in state.attributes
await send_attributes_report(hass, thrm_cluster, {0x0007: 10})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.COOLING
await send_attributes_report(hass, thrm_cluster, {0x0008: 20})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.HEATING
await send_attributes_report(hass, thrm_cluster, {0x0007: 0})
await send_attributes_report(hass, thrm_cluster, {0x0008: 0})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.OFF
await send_attributes_report(
hass, thrm_cluster, {0x001C: Thermostat.SystemMode.Heat}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.IDLE
await send_attributes_report(
hass, thrm_cluster, {0x001C: Thermostat.SystemMode.Cool}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_HVAC_ACTION] == HVACAction.IDLE
@pytest.mark.parametrize(
("sys_mode", "hvac_mode"),
(
(Thermostat.SystemMode.Auto, HVACMode.HEAT_COOL),
(Thermostat.SystemMode.Cool, HVACMode.COOL),
(Thermostat.SystemMode.Heat, HVACMode.HEAT),
(Thermostat.SystemMode.Pre_cooling, HVACMode.COOL),
(Thermostat.SystemMode.Fan_only, HVACMode.FAN_ONLY),
(Thermostat.SystemMode.Dry, HVACMode.DRY),
),
)
async def test_hvac_mode(
hass: HomeAssistant, device_climate, sys_mode, hvac_mode
) -> None:
"""Test HVAC mode."""
thrm_cluster = device_climate.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
state = hass.states.get(entity_id)
assert state.state == HVACMode.OFF
await send_attributes_report(hass, thrm_cluster, {0x001C: sys_mode})
state = hass.states.get(entity_id)
assert state.state == hvac_mode
await send_attributes_report(
hass, thrm_cluster, {0x001C: Thermostat.SystemMode.Off}
)
state = hass.states.get(entity_id)
assert state.state == HVACMode.OFF
await send_attributes_report(hass, thrm_cluster, {0x001C: 0xFF})
state = hass.states.get(entity_id)
assert state.state == STATE_UNKNOWN


@pytest.mark.parametrize(
("seq_of_op", "modes"),
(
(0xFF, {HVACMode.OFF}),
(0x00, {HVACMode.OFF, HVACMode.COOL}),
(0x01, {HVACMode.OFF, HVACMode.COOL}),
(0x02, {HVACMode.OFF, HVACMode.HEAT}),
(0x03, {HVACMode.OFF, HVACMode.HEAT}),
(0x04, {HVACMode.OFF, HVACMode.COOL, HVACMode.HEAT, HVACMode.HEAT_COOL}),
(0x05, {HVACMode.OFF, HVACMode.COOL, HVACMode.HEAT, HVACMode.HEAT_COOL}),
),
)
async def test_hvac_modes(
hass: HomeAssistant, device_climate_mock, seq_of_op, modes
) -> None:
"""Test HVAC modes from sequence of operations."""
device_climate = await device_climate_mock(
CLIMATE, {"ctrl_sequence_of_oper": seq_of_op}
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
state = hass.states.get(entity_id)
assert set(state.attributes[ATTR_HVAC_MODES]) == modes


@pytest.mark.parametrize(
("sys_mode", "preset", "target_temp"),
(
(Thermostat.SystemMode.Heat, None, 22),
(Thermostat.SystemMode.Heat, PRESET_AWAY, 16),
(Thermostat.SystemMode.Cool, None, 25),
(Thermostat.SystemMode.Cool, PRESET_AWAY, 27),
),
)
async def test_target_temperature(
hass: HomeAssistant, device_climate_mock, sys_mode, preset, target_temp
) -> None:
"""Test target temperature property."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2200,
"system_mode": sys_mode,
"unoccupied_heating_setpoint": 1600,
"unoccupied_cooling_setpoint": 2700,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
if preset:
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: preset},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TEMPERATURE] == target_temp


@pytest.mark.parametrize(
("preset", "unoccupied", "target_temp"),
(
(None, 1800, 17),
(PRESET_AWAY, 1800, 18),
(PRESET_AWAY, None, None),
),
)
async def test_target_temperature_high(
hass: HomeAssistant, device_climate_mock, preset, unoccupied, target_temp
) -> None:
"""Test target temperature high property."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 1700,
"system_mode": Thermostat.SystemMode.Auto,
"unoccupied_cooling_setpoint": unoccupied,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
if preset:
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: preset},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_HIGH] == target_temp


@pytest.mark.parametrize(
("preset", "unoccupied", "target_temp"),
(
(None, 1600, 21),
(PRESET_AWAY, 1600, 16),
(PRESET_AWAY, None, None),
),
)
async def test_target_temperature_low(
hass: HomeAssistant, device_climate_mock, preset, unoccupied, target_temp
) -> None:
"""Test target temperature low property."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_heating_setpoint": 2100,
"system_mode": Thermostat.SystemMode.Auto,
"unoccupied_heating_setpoint": unoccupied,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
if preset:
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: preset},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] == target_temp


@pytest.mark.parametrize(
("hvac_mode", "sys_mode"),
(
(HVACMode.AUTO, None),
(HVACMode.COOL, Thermostat.SystemMode.Cool),
(HVACMode.DRY, None),
(HVACMode.FAN_ONLY, None),
(HVACMode.HEAT, Thermostat.SystemMode.Heat),
(HVACMode.HEAT_COOL, Thermostat.SystemMode.Auto),
),
)
async def test_set_hvac_mode(
hass: HomeAssistant, device_climate, hvac_mode, sys_mode
) -> None:
"""Test setting hvac mode."""
thrm_cluster = device_climate.device.endpoints[1].thermostat
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
state = hass.states.get(entity_id)
assert state.state == HVACMode.OFF
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_HVAC_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_HVAC_MODE: hvac_mode},
blocking=True,
)
state = hass.states.get(entity_id)
if sys_mode is not None:
assert state.state == hvac_mode
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {
"system_mode": sys_mode
}
else:
assert thrm_cluster.write_attributes.call_count == 0
assert state.state == HVACMode.OFF
# turn off
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_HVAC_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_HVAC_MODE: HVACMode.OFF},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.state == HVACMode.OFF
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {
"system_mode": Thermostat.SystemMode.Off
}


async def test_preset_setting(hass: HomeAssistant, device_climate_sinope) -> None:
"""Test preset setting."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_sinope, hass)
thrm_cluster = device_climate_sinope.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
# unsuccessful occupancy change
thrm_cluster.write_attributes.return_value = [
zcl_f.WriteAttributesResponse.deserialize(b"\x01\x00\x00")[0]
]
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {"set_occupancy": 0}
# successful occupancy change
thrm_cluster.write_attributes.reset_mock()
thrm_cluster.write_attributes.return_value = [
zcl_f.WriteAttributesResponse.deserialize(b"\x00")[0]
]
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_AWAY
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {"set_occupancy": 0}
# unsuccessful occupancy change
thrm_cluster.write_attributes.reset_mock()
thrm_cluster.write_attributes.return_value = [
zcl_f.WriteAttributesResponse.deserialize(b"\x01\x01\x01")[0]
]
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_NONE},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_AWAY
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {"set_occupancy": 1}
# successful occupancy change
thrm_cluster.write_attributes.reset_mock()
thrm_cluster.write_attributes.return_value = [
zcl_f.WriteAttributesResponse.deserialize(b"\x00")[0]
]
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_NONE},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
assert thrm_cluster.write_attributes.call_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {"set_occupancy": 1}


async def test_preset_setting_invalid(
hass: HomeAssistant, device_climate_sinope
) -> None:
"""Test invalid preset setting."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_sinope, hass)
thrm_cluster = device_climate_sinope.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: "invalid_preset"},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
assert thrm_cluster.write_attributes.call_count == 0


async def test_set_temperature_hvac_mode(hass: HomeAssistant, device_climate) -> None:
"""Test setting HVAC mode in temperature service call."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
thrm_cluster = device_climate.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.state == HVACMode.OFF
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{
ATTR_ENTITY_ID: entity_id,
ATTR_HVAC_MODE: HVACMode.HEAT_COOL,
ATTR_TEMPERATURE: 20,
},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.state == HVACMode.HEAT_COOL
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args[0][0] == {
"system_mode": Thermostat.SystemMode.Auto
}


async def test_set_temperature_heat_cool(
hass: HomeAssistant, device_climate_mock
) -> None:
"""Test setting temperature service call in heating/cooling HVAC mode."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2000,
"system_mode": Thermostat.SystemMode.Auto,
"unoccupied_heating_setpoint": 1600,
"unoccupied_cooling_setpoint": 2700,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
thrm_cluster = device_climate.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.state == HVACMode.HEAT_COOL
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 21},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] == 20.0
assert state.attributes[ATTR_TARGET_TEMP_HIGH] == 25.0
assert thrm_cluster.write_attributes.await_count == 0
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{
ATTR_ENTITY_ID: entity_id,
ATTR_TARGET_TEMP_HIGH: 26,
ATTR_TARGET_TEMP_LOW: 19,
},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] == 19.0
assert state.attributes[ATTR_TARGET_TEMP_HIGH] == 26.0
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"occupied_heating_setpoint": 1900
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"occupied_cooling_setpoint": 2600
}
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{
ATTR_ENTITY_ID: entity_id,
ATTR_TARGET_TEMP_HIGH: 30,
ATTR_TARGET_TEMP_LOW: 15,
},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] == 15.0
assert state.attributes[ATTR_TARGET_TEMP_HIGH] == 30.0
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"unoccupied_heating_setpoint": 1500
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"unoccupied_cooling_setpoint": 3000
}


async def test_set_temperature_heat(hass: HomeAssistant, device_climate_mock) -> None:
"""Test setting temperature service call in heating HVAC mode."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2000,
"system_mode": Thermostat.SystemMode.Heat,
"unoccupied_heating_setpoint": 1600,
"unoccupied_cooling_setpoint": 2700,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
thrm_cluster = device_climate.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.state == HVACMode.HEAT
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{
ATTR_ENTITY_ID: entity_id,
ATTR_TARGET_TEMP_HIGH: 30,
ATTR_TARGET_TEMP_LOW: 15,
},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 20.0
assert thrm_cluster.write_attributes.await_count == 0
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 21},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 21.0
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"occupied_heating_setpoint": 2100
}
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 22},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 22.0
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"unoccupied_heating_setpoint": 2200
}


async def test_set_temperature_cool(hass: HomeAssistant, device_climate_mock) -> None:
"""Test setting temperature service call in cooling HVAC mode."""
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2000,
"system_mode": Thermostat.SystemMode.Cool,
"unoccupied_cooling_setpoint": 1600,
"unoccupied_heating_setpoint": 2700,
},
manuf=MANUF_SINOPE,
quirk=zhaquirks.sinope.thermostat.SinopeTechnologiesThermostat,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
thrm_cluster = device_climate.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.state == HVACMode.COOL
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{
ATTR_ENTITY_ID: entity_id,
ATTR_TARGET_TEMP_HIGH: 30,
ATTR_TARGET_TEMP_LOW: 15,
},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 25.0
assert thrm_cluster.write_attributes.await_count == 0
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 21},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 21.0
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"occupied_cooling_setpoint": 2100
}
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 22},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] == 22.0
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"unoccupied_cooling_setpoint": 2200
}


async def test_set_temperature_wrong_mode(
hass: HomeAssistant, device_climate_mock
) -> None:
"""Test setting temperature service call for wrong HVAC mode."""
with patch.object(
zigpy.zcl.clusters.manufacturer_specific.ManufacturerSpecificCluster,
"ep_attribute",
"sinope_manufacturer_specific",
):
device_climate = await device_climate_mock(
CLIMATE_SINOPE,
{
"occupied_cooling_setpoint": 2500,
"occupied_heating_setpoint": 2000,
"system_mode": Thermostat.SystemMode.Dry,
"unoccupied_cooling_setpoint": 1600,
"unoccupied_heating_setpoint": 2700,
},
manuf=MANUF_SINOPE,
)
entity_id = find_entity_id(Platform.CLIMATE, device_climate, hass)
thrm_cluster = device_climate.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.state == HVACMode.DRY
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_TEMPERATURE,
{ATTR_ENTITY_ID: entity_id, ATTR_TEMPERATURE: 24},
blocking=True,
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_TARGET_TEMP_LOW] is None
assert state.attributes[ATTR_TARGET_TEMP_HIGH] is None
assert state.attributes[ATTR_TEMPERATURE] is None
assert thrm_cluster.write_attributes.await_count == 0


async def test_occupancy_reset(hass: HomeAssistant, device_climate_sinope) -> None:
"""Test away preset reset."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_sinope, hass)
thrm_cluster = device_climate_sinope.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
thrm_cluster.write_attributes.reset_mock()
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_AWAY
await send_attributes_report(
hass, thrm_cluster, {"occupied_heating_setpoint": zigpy.types.uint16_t(1950)}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE


async def test_fan_mode(hass: HomeAssistant, device_climate_fan) -> None:
"""Test fan mode."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_fan, hass)
thrm_cluster = device_climate_fan.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert set(state.attributes[ATTR_FAN_MODES]) == {FAN_AUTO, FAN_ON}
assert state.attributes[ATTR_FAN_MODE] == FAN_AUTO
await send_attributes_report(
hass, thrm_cluster, {"running_state": Thermostat.RunningState.Fan_State_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_FAN_MODE] == FAN_ON
await send_attributes_report(
hass, thrm_cluster, {"running_state": Thermostat.RunningState.Idle}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_FAN_MODE] == FAN_AUTO
await send_attributes_report(
hass, thrm_cluster, {"running_state": Thermostat.RunningState.Fan_2nd_Stage_On}
)
state = hass.states.get(entity_id)
assert state.attributes[ATTR_FAN_MODE] == FAN_ON


async def test_set_fan_mode_not_supported(
hass: HomeAssistant, device_climate_fan
) -> None:
"""Test fan setting unsupported mode."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_fan, hass)
fan_cluster = device_climate_fan.device.endpoints[1].fan
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_FAN_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_FAN_MODE: FAN_LOW},
blocking=True,
)
assert fan_cluster.write_attributes.await_count == 0


async def test_set_fan_mode(hass: HomeAssistant, device_climate_fan) -> None:
"""Test fan mode setting."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_fan, hass)
fan_cluster = device_climate_fan.device.endpoints[1].fan
state = hass.states.get(entity_id)
assert state.attributes[ATTR_FAN_MODE] == FAN_AUTO
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_FAN_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_FAN_MODE: FAN_ON},
blocking=True,
)
assert fan_cluster.write_attributes.await_count == 1
assert fan_cluster.write_attributes.call_args[0][0] == {"fan_mode": 4}
fan_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_FAN_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_FAN_MODE: FAN_AUTO},
blocking=True,
)
assert fan_cluster.write_attributes.await_count == 1
assert fan_cluster.write_attributes.call_args[0][0] == {"fan_mode": 5}


async def test_set_moes_preset(hass: HomeAssistant, device_climate_moes) -> None:
"""Test setting preset for moes trv."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_moes, hass)
thrm_cluster = device_climate_moes.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_AWAY},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 0
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_SCHEDULE},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 1
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_COMFORT},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 3
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_ECO},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 4
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_BOOST},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 5
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_COMPLEX},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 6
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_NONE},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 2
}


async def test_set_moes_operation_mode(
hass: HomeAssistant, device_climate_moes
) -> None:
"""Test setting preset for moes trv."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_moes, hass)
thrm_cluster = device_climate_moes.device.endpoints[1].thermostat
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 0})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_AWAY
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 1})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_SCHEDULE
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 2})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 3})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_COMFORT
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 4})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_ECO
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 5})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_BOOST
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 6})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_COMPLEX


async def test_set_zonnsmart_preset(
hass: HomeAssistant, device_climate_zonnsmart
) -> None:
"""Test setting preset from homeassistant for zonnsmart trv."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_zonnsmart, hass)
thrm_cluster = device_climate_zonnsmart.device.endpoints[1].thermostat
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_SCHEDULE},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 0
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: "holiday"},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 1
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 3
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: "frost protect"},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 2
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 1
}
assert thrm_cluster.write_attributes.call_args_list[1][0][0] == {
"operation_preset": 4
}
thrm_cluster.write_attributes.reset_mock()
await hass.services.async_call(
CLIMATE_DOMAIN,
SERVICE_SET_PRESET_MODE,
{ATTR_ENTITY_ID: entity_id, ATTR_PRESET_MODE: PRESET_NONE},
blocking=True,
)
assert thrm_cluster.write_attributes.await_count == 1
assert thrm_cluster.write_attributes.call_args_list[0][0][0] == {
"operation_preset": 1
}


async def test_set_zonnsmart_operation_mode(
hass: HomeAssistant, device_climate_zonnsmart
) -> None:
"""Test setting preset from trv for zonnsmart trv."""
entity_id = find_entity_id(Platform.CLIMATE, device_climate_zonnsmart, hass)
thrm_cluster = device_climate_zonnsmart.device.endpoints[1].thermostat
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 0})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_SCHEDULE
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 1})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == PRESET_NONE
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 2})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == "holiday"
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 3})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == "holiday"
await send_attributes_report(hass, thrm_cluster, {"operation_preset": 4})
state = hass.states.get(entity_id)
assert state.attributes[ATTR_PRESET_MODE] == "frost protect"
# Author: thomasgermain.noreply@github.com
# /bmw9t/nltk/ch_four/15.py (repo: xenron/sandbox-github-clone)
# ◑ Write a program that takes a sentence expressed as a single string, splits it and counts up the words. Get it to print out each word and the word's frequency, one per line, in alphabetical order.
from nltk import FreqDist


def program(sent):
    """Split a sentence into words and print each word with its
    frequency, one per line, in alphabetical order."""
    # Split the sentence on spaces.
    words = sent.split(' ')
    # len(words) is the total number of words, so most_common(length)
    # returns every word's frequency, not just the top few.
    length = len(words)
    fd = FreqDist(words)
    # sorted() orders the (word, count) pairs alphabetically by word.
    frequencies = sorted(fd.most_common(length))
    # Print them out, one per line.
    for frequency in frequencies:
        print(frequency)


sentence = 'this is my sentence yo what up up'
program(sentence)
# Author: xenron@outlook.com
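The word-count exercise above can also be solved with only the standard library; `collections.Counter` plays the role of NLTK's `FreqDist`. A minimal sketch under that assumption (the function name `word_counts` is illustrative, not part of the original file):

```python
from collections import Counter


def word_counts(sentence):
    # Split on whitespace and count occurrences of each word.
    counts = Counter(sentence.split())
    # Print "word count" pairs, one per line, in alphabetical order.
    for word in sorted(counts):
        print(word, counts[word])


word_counts('this is my sentence yo what up up')
```

Unlike the `FreqDist` version, this prints plain `word count` pairs rather than tuples, but the counting and alphabetical ordering are the same.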
# /stack_medium/0113_verify_preorder_serialization_of_a_binary_tree.py (repo: myungwooko/algorithm)
"""
331. Verify Preorder Serialization of a Binary Tree
Medium
One way to serialize a binary tree is to use pre-order traversal. When we encounter a non-null node, we record the node's value. If it is a null node, we record using a sentinel value such as #.
_9_
/ \
3 2
/ \ / \
4 1 # 6
/ \ / \ / \
# # # # # #
For example, the above binary tree can be serialized to the string "9,3,4,#,#,1,#,#,2,#,6,#,#", where # represents a null node.
Given a string of comma separated values, verify whether it is a correct preorder traversal serialization of a binary tree. Find an algorithm without reconstructing the tree.
Each comma separated value in the string must be either an integer or a character '#' representing null pointer.
You may assume that the input format is always valid, for example it could never contain two consecutive commas such as "1,,3".
Example 1:
Input: "9,3,4,#,#,1,#,#,2,#,6,#,#"
Output: true
Example 2:
Input: "1,#"
Output: false
Example 3:
Input: "9,#,#,1"
Output: false
- Note: every branch must ultimately be closed out by '#' (null) markers.
"""
class Solution(object):
    # Every completed subtree must collapse to a single '#' marker.
    def isValidSerialization(self, preorder: str) -> bool:
        stack = []
        for c in preorder.split(','):
            stack.append(c)
            # "value, '#', '#'" on top of the stack is a finished leaf:
            # reduce it to a single '#' so the parent can collapse next.
            while stack[-2:] == ['#', '#']:
                stack.pop()
                stack.pop()
                if not stack:
                    return False
                # Pop the parent value; its whole subtree is now done.
                stack.pop()
                # Replace the finished subtree with '#' so it reads as a
                # null child from the next level up.
                stack.append('#')
        # A valid serialization reduces to exactly one '#'.
        return stack == ['#']


preorder = "9,3,4,#,#,1,#,#,2,#,6,#,#"
s = Solution()
test = s.isValidSerialization(preorder)
print(test)
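The same validation can also be done in O(1) extra space by counting open "slots" instead of simulating a stack: every token fills one slot, and every non-null node opens two child slots. This is an alternative formulation, not part of the original solution (`is_valid_slots` is an illustrative name):

```python
def is_valid_slots(preorder):
    # One open slot for the root.
    slots = 1
    for token in preorder.split(','):
        # Every token, value or '#', fills exactly one slot.
        slots -= 1
        # Running out of slots before the input ends means invalid.
        if slots < 0:
            return False
        # A non-null node opens two child slots.
        if token != '#':
            slots += 2
    # A valid serialization fills every slot exactly.
    return slots == 0
```

On the problem's examples it agrees with the stack version: true for "9,3,4,#,#,1,#,#,2,#,6,#,#", false for "1,#" and "9,#,#,1".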
# Author: myungwoo.ko@gmail.com
# /pyspedas/examples/basic/ex_analysis.py (repo: jibarnum/pyspedas, license: MIT)
# -*- coding: utf-8 -*-
"""
File:
ex_analysis.py
Description:
Basic example using analysis functions.
Downloads THEMIS data and plots it.
"""
import pyspedas
import pytplot
def ex_analysis():
# Print the installed version of pyspedas
pyspedas.version()
# Delete any existing pytplot variables
pytplot.del_data()
# Download THEMIS state data for 2015-12-31
pyspedas.load_data('themis', '2015-12-31', ['tha'], 'state', 'l1')
# Use some analysis functions on tplot variables
pyspedas.subtract_average('tha_pos')
pyspedas.subtract_median('tha_pos')
# Plot
pytplot.tplot(["tha_pos", "tha_pos-d", "tha_pos-m"])
# Run the example code
# ex_analysis()
| [
"egrimes@igpp.ucla.edu"
] | egrimes@igpp.ucla.edu |
327521fba8a42d166df3e832b2af503df40dc25f | 6c92b2faa4d8c328ab855429843f08f4f220a75a | /collective/azindexpage/testing.py | e60cb8f4f6d588913286868a4d2cc593cc7bfaa1 | [] | no_license | collective/collective.azindexpage | d6c5644d95889e806582b003dd43149dc6110fb4 | 2e04bd8c018acf94488deee3bb3d35355ca392a8 | refs/heads/master | 2023-08-12T04:11:54.703896 | 2014-12-11T10:33:50 | 2014-12-11T10:33:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 508 | py | from plone.app.testing import *
import collective.azindexpage
FIXTURE = PloneWithPackageLayer(
zcml_filename="configure.zcml",
zcml_package=collective.azindexpage,
additional_z2_products=[],
gs_profile_id='collective.azindexpage:default',
name="collective.azindexpage:FIXTURE"
)
INTEGRATION = IntegrationTesting(
bases=(FIXTURE,),
name="collective.azindexpage:Integration"
)
FUNCTIONAL = FunctionalTesting(
bases=(FIXTURE,),
name="collective.azindexpage:Functional"
)
| [
"toutpt@gmail.com"
] | toutpt@gmail.com |
d0f4aa8a11338fea334ca1061586eee7e025352f | e82b761f53d6a3ae023ee65a219eea38e66946a0 | /All_In_One/addons/io_scene_osi/__init__.py | 39b9e606dd18de1318ae77941cac5cff3c0055e8 | [] | no_license | 2434325680/Learnbgame | f3a050c28df588cbb3b14e1067a58221252e2e40 | 7b796d30dfd22b7706a93e4419ed913d18d29a44 | refs/heads/master | 2023-08-22T23:59:55.711050 | 2021-10-17T07:26:07 | 2021-10-17T07:26:07 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,486 | py | bl_info = {
"name": "Super Mario Odyssey Stage Importer",
"description": "Import Super Mario Odyssey Stages",
"author": "Philippus229, bymlv2 (v1.0.3) by leoetlino, SARCExtract (v0.5) by aboood40091, Sorted Containers (v2.0.4), PyYAML (v3.10)",
"version": (0, 4, 0),
"blender": (2, 79, 0),
"location": "File > Import-Export",
"warning": "This add-on is under development.",
"wiki_url": "https://github.com/Philippus229/io_scene_osi/wiki",
"tracker_url": "https://github.com/Philippus229/io_scene_osi/issues",
"category": "Learnbgame",
}
# Reload the package modules when reloading add-ons in Blender with F8.
if "bpy" in locals():
import importlib
if "addon" in locals():
importlib.reload(addon)
if "importing" in locals():
importlib.reload(importing)
if "byml" in locals():
importlib.reload(byml)
if "sortedcontainers" in locals():
importlib.reload(sortedcontainers)
import bpy
from . import addon
from . import importing
from . import byml
from . import sortedcontainers
def register():
bpy.utils.register_module(__name__)
# Addon
bpy.types.UILayout.osi_colbox = addon.osi_colbox
# Importing
bpy.types.INFO_MT_file_import.append(importing.ImportOperator.menu_func)
def unregister():
bpy.utils.unregister_module(__name__)
# Addon
del bpy.types.UILayout.osi_colbox
# Importing
bpy.types.INFO_MT_file_import.remove(importing.ImportOperator.menu_func)
| [
"root@localhost.localdomain"
] | root@localhost.localdomain |
2ec9aef834da051f7fa5fcf27a06b2c59b2a3aef | 8c186a62d1f60099d0677e2c1233af31f1dca19a | /client/watchman_subscriber.py | a2737ee79af7a08744080468bfecb0cdb2dd1323 | [
"MIT"
] | permissive | darrynza/pyre-check | 3f172625d9b2484190cc0b67805fabb1b8ba00ff | cb94f27b3db824446abf21bbb19d0cef516841ec | refs/heads/master | 2020-04-29T04:19:23.518722 | 2019-03-15T15:00:38 | 2019-03-15T15:03:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,009 | py | # Copyright (c) 2019-present, Facebook, Inc.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# pyre-strict
import functools
import logging
import os
import signal
import sys
from typing import Any, Dict, List, NamedTuple
from .filesystem import AnalysisDirectory, acquire_lock, remove_if_exists
LOG = logging.getLogger(__name__) # type: logging.Logger
Subscription = NamedTuple(
"Subscription", [("root", str), ("name", str), ("subscription", Dict[str, Any])]
)
class WatchmanSubscriber(object):
def __init__(self, analysis_directory: AnalysisDirectory) -> None:
self._base_path = os.path.join(
analysis_directory.get_root(), ".pyre", self._name
) # type: str
self._alive = True # type: bool
@property
def _name(self) -> str:
"""
A name to identify the subscriber. Used as the directory and file names
for the log, lock, and pid files.
"""
raise NotImplementedError
@property
def _subscriptions(self) -> List[Subscription]:
"""
List of subscriptions
"""
raise NotImplementedError
def _handle_response(self, response: Dict[str, Any]) -> None:
"""
Callback invoked when a message is received from watchman
"""
raise NotImplementedError
@property
@functools.lru_cache(1)
def _watchman_client(self) -> "pywatchman.client": # noqa
try:
import pywatchman # noqa
return pywatchman.client(timeout=3600.0)
except ImportError as exception:
LOG.info("Not starting %s due to %s", self._name, str(exception))
sys.exit(1)
def _subscribe_to_watchman(self, subscription: Subscription) -> None:
self._watchman_client.query(
"subscribe", subscription.root, subscription.name, subscription.subscription
)
def _run(self) -> None:
try:
os.makedirs(self._base_path)
except OSError:
pass
lock_path = os.path.join(self._base_path, "{}.lock".format(self._name))
pid_path = os.path.join(self._base_path, "{}.pid".format(self._name))
def cleanup() -> None:
LOG.info("Cleaning up lock and pid files before exiting.")
remove_if_exists(pid_path)
remove_if_exists(lock_path)
# pyre-ignore: missing annotations on underscored parameters, fixed on master
def interrupt_handler(_signal_number=None, _frame=None) -> None:
LOG.info("Interrupt signal received.")
cleanup()
sys.exit(0)
signal.signal(signal.SIGINT, interrupt_handler)
# Die silently if unable to acquire the lock.
with acquire_lock(lock_path, blocking=False):
file_handler = logging.FileHandler(
os.path.join(self._base_path, "%s.log" % self._name)
)
file_handler.setFormatter(
logging.Formatter("%(asctime)s %(levelname)s %(message)s")
)
LOG.addHandler(file_handler)
with open(pid_path, "w+") as pid_file:
pid_file.write(str(os.getpid()))
for subscription in self._subscriptions:
self._subscribe_to_watchman(subscription)
connection = self._watchman_client.recvConn
if not connection:
LOG.error("Connection to Watchman for %s not found", self._name)
sys.exit(1)
while self._alive:
# This call is blocking, which prevents this loop from burning CPU.
response = connection.receive()
try:
if response["is_fresh_instance"]:
LOG.info(
"Ignoring initial watchman message for %s", response["root"]
)
else:
self._handle_response(response)
except KeyError:
pass
cleanup()
def daemonize(self) -> None:
"""We double-fork here to detach the daemon process from the parent.
If we were to just fork the child as a daemon, we'd have to worry about the
parent process exiting zombifying the daemon."""
if os.fork() == 0:
pid = os.fork()
if pid == 0:
try:
# Closing the sys.stdout and stderr file descriptors here causes
# the program to crash when attempting to log.
os.close(sys.stdout.fileno())
os.close(sys.stderr.fileno())
self._run()
sys.exit(0)
except Exception as exception:
LOG.info("Not running %s due to %s", self._name, str(exception))
sys.exit(1)
else:
sys.exit(0)
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
55a5b53175bff8fbc181d9a771e652ce7b9c2ed7 | e428fbd5af1b6b54db9f3fb456442a143c57582c | /string/uplow.py | 5ca01d4d8d77527e6e218f612d8e5d0145d61f9a | [] | no_license | tchelmella/w3python | 102be2a4967c1bd381a6f4e678c720100eaeb9af | d350a5b026f1ca2b9f224598319992d3f356312b | refs/heads/master | 2020-07-20T19:11:11.872123 | 2019-09-23T05:04:58 | 2019-09-23T05:04:58 | 206,696,219 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 120 | py | def uplow(data):
up = data.upper()
low = data.lower()
return up, low
a = input("Enter the string")
print(uplow(a))
| [
"tulsi.das1404@gmail.com"
] | tulsi.das1404@gmail.com |
4ac826e55018348c35f3b8ae04750caedaea7eda | 3fba33f91e1f50077dc2cce663b7de0f70a17a51 | /wlhub/users/admin.py | c7ecddd088d8bedf93021a3b00d05411aba80b0f | [] | no_license | azinit/wlhub | 59be2e9f555fa6655965d13580fd05963dc414b6 | 616761ef39f4cdb82d032f737bf50c66a9e935d1 | refs/heads/master | 2022-12-22T12:26:33.907642 | 2020-09-13T21:45:33 | 2020-09-13T21:45:33 | 295,242,617 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 897 | py | from django.contrib import admin
from django.contrib.auth import get_user_model
from core.mixins import ListLinksMixin
from .models import UserSurvey
@admin.register(get_user_model())
class UserAdmin(ListLinksMixin, admin.ModelAdmin):
fields = (
'is_superuser',
'first_name',
'last_name',
'email',
'username',
'password',
"thumb",
)
list_display = (
'username',
'first_name',
'last_name',
'email'
)
@admin.register(UserSurvey)
class UserSurveyAdmin(admin.ModelAdmin):
list_display = (
"__str__",
"rate",
"is_student",
"is_employee",
"is_employer",
"is_manager",
"is_freelancer",
)
list_filter = (
"is_student",
"is_employee",
"is_employer",
"is_manager",
"is_freelancer",
)
| [
"martis.azin@gmail.com"
] | martis.azin@gmail.com |
fd0f9258d690798c7c0594cd8a7cfa3ae6b4ee15 | 31e8b777b8b6da1ef8d172d2c7b5271a892e7dc9 | /frappe/desk/doctype/list_filter/list_filter.py | d2b01d301e11a116ba0b6ac48913556a110227d7 | [
"MIT"
] | permissive | Anurag810/frappe | a4d2f6f3a14cc600cced7146a02303cd1cb347f0 | 620cad18d60f090f5f9c13a5eefb56e86615de06 | refs/heads/develop | 2021-09-28T03:57:02.456172 | 2021-09-07T06:05:46 | 2021-09-07T06:05:46 | 157,325,015 | 5 | 0 | MIT | 2019-09-11T09:20:20 | 2018-11-13T05:25:01 | Python | UTF-8 | Python | false | false | 210 | py | # -*- coding: utf-8 -*-
# Copyright (c) 2018, Frappe Technologies and contributors
# License: MIT. See LICENSE
import frappe, json
from frappe.model.document import Document
class ListFilter(Document):
pass
| [
"rmehta@gmail.com"
] | rmehta@gmail.com |
438efbe8fed719db08deba9e3c76ce17b6fd093e | ee4db47ccecd23559b3b6f3fce1822c9e5982a56 | /Machine Learning/NaiveBSklearn.py | a217b9b46706d1e953309d5bcdbb082786eea1c2 | [] | no_license | meoclark/Data-Science-DropBox | d51e5da75569626affc89fdcca1975bed15422fd | 5f365cedc8d0a780abeb4e595cd0d90113a75d9d | refs/heads/master | 2022-10-30T08:43:22.502408 | 2020-06-16T19:45:05 | 2020-06-16T19:45:05 | 265,558,242 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 479 | py | from reviews import counter, training_counts
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
review = "This crib was perfect great excellent amazing great"
review_counts = counter.transform([review])
classifier = MultinomialNB()
training_labels = [0] * 1000 + [1] * 1000
classifier.fit(training_counts,training_labels)
pred = classifier.predict(review_counts)
print(pred)
print(classifier.predict_proba(review_counts)) | [
"oluchukwuegbo@gmail.com"
] | oluchukwuegbo@gmail.com |
59795eb9686bbaede2bedb7720af166564990330 | ca8627ac06c984aeb8ecd2e51c7a0493c794e3e4 | /azure-mgmt-cdn/azure/mgmt/cdn/models/sku.py | dd68ff0c3fa89c1cb00846dffac40e5d25557979 | [
"MIT"
] | permissive | matthchr/azure-sdk-for-python | ac7208b4403dc4e1348b48a1be9542081a807e40 | 8c0dc461a406e7e2142a655077903216be6d8b16 | refs/heads/master | 2021-01-11T14:16:23.020229 | 2017-03-31T20:39:15 | 2017-03-31T20:39:15 | 81,271,912 | 1 | 1 | null | 2017-02-08T01:07:09 | 2017-02-08T01:07:09 | null | UTF-8 | Python | false | false | 1,023 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class Sku(Model):
"""The pricing tier (defines a CDN provider, feature list and rate) of the CDN
profile.
:param name: Name of the pricing tier. Possible values include:
'Standard_Verizon', 'Premium_Verizon', 'Custom_Verizon',
'Standard_Akamai', 'Standard_ChinaCdn'
:type name: str or :class:`SkuName <azure.mgmt.cdn.models.SkuName>`
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
}
def __init__(self, name=None):
self.name = name
| [
"lmazuel@microsoft.com"
] | lmazuel@microsoft.com |
a0fb0edae27fc0c39cec9d69877252bc45c2f27e | bc441bb06b8948288f110af63feda4e798f30225 | /alert_service_sdk/model/tuna_service/requirement_instance_pb2.py | e96b38d97aac1d8eb40e480e7762e79325189dbb | [
"Apache-2.0"
] | permissive | easyopsapis/easyops-api-python | 23204f8846a332c30f5f3ff627bf220940137b6b | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | refs/heads/master | 2020-06-26T23:38:27.308803 | 2020-06-16T07:25:41 | 2020-06-16T07:25:41 | 199,773,131 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | true | 8,098 | py | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: requirement_instance.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from alert_service_sdk.model.topboard import issue_pb2 as alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='requirement_instance.proto',
package='tuna_service',
syntax='proto3',
serialized_options=_b('ZFgo.easyops.local/contracts/protorepo-models/easyops/model/tuna_service'),
serialized_pb=_b('\n\x1arequirement_instance.proto\x12\x0ctuna_service\x1a,alert_service_sdk/model/topboard/issue.proto\x1a\x1cgoogle/protobuf/struct.proto\"\x99\x02\n\x13RequirementInstance\x12\x12\n\ninstanceId\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x10\n\x08sequence\x18\x03 \x01(\t\x12\r\n\x05given\x18\x04 \x01(\t\x12\x0c\n\x04when\x18\x05 \x01(\t\x12\x0c\n\x04then\x18\x06 \x01(\t\x12\x0c\n\x04type\x18\x07 \x01(\t\x12\x17\n\x0f\x64\x61taDescription\x18\x08 \x01(\t\x12\x0c\n\x04\x64\x61ta\x18\t \x01(\t\x12\x0b\n\x03tag\x18\n \x01(\t\x12\x15\n\rinterfaceName\x18\x0b \x01(\t\x12*\n\tcontracts\x18\x0c \x03(\x0b\x32\x17.google.protobuf.Struct\x12\x1e\n\x05ISSUE\x18\r \x03(\x0b\x32\x0f.topboard.IssueBHZFgo.easyops.local/contracts/protorepo-models/easyops/model/tuna_serviceb\x06proto3')
,
dependencies=[alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,])
_REQUIREMENTINSTANCE = _descriptor.Descriptor(
name='RequirementInstance',
full_name='tuna_service.RequirementInstance',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='instanceId', full_name='tuna_service.RequirementInstance.instanceId', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='tuna_service.RequirementInstance.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sequence', full_name='tuna_service.RequirementInstance.sequence', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='given', full_name='tuna_service.RequirementInstance.given', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='when', full_name='tuna_service.RequirementInstance.when', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='then', full_name='tuna_service.RequirementInstance.then', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='tuna_service.RequirementInstance.type', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='dataDescription', full_name='tuna_service.RequirementInstance.dataDescription', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='data', full_name='tuna_service.RequirementInstance.data', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tag', full_name='tuna_service.RequirementInstance.tag', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='interfaceName', full_name='tuna_service.RequirementInstance.interfaceName', index=10,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='contracts', full_name='tuna_service.RequirementInstance.contracts', index=11,
number=12, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='ISSUE', full_name='tuna_service.RequirementInstance.ISSUE', index=12,
number=13, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=121,
serialized_end=402,
)
_REQUIREMENTINSTANCE.fields_by_name['contracts'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT
_REQUIREMENTINSTANCE.fields_by_name['ISSUE'].message_type = alert__service__sdk_dot_model_dot_topboard_dot_issue__pb2._ISSUE
DESCRIPTOR.message_types_by_name['RequirementInstance'] = _REQUIREMENTINSTANCE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
RequirementInstance = _reflection.GeneratedProtocolMessageType('RequirementInstance', (_message.Message,), {
'DESCRIPTOR' : _REQUIREMENTINSTANCE,
'__module__' : 'requirement_instance_pb2'
# @@protoc_insertion_point(class_scope:tuna_service.RequirementInstance)
})
_sym_db.RegisterMessage(RequirementInstance)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| [
"service@easyops.cn"
] | service@easyops.cn |
ec872a5755917412c16200e8f53d7cfc56006833 | 1003740abd789902bff88bd86d573252f4fe9d23 | /eventex/core/admin.py | c3eaa933d5c4420cceed77ae4007c7305ebcb480 | [] | no_license | hpfn/wttd-2017 | e73ca9c65fc6dcee78045c7edee0fc42768fbfb7 | c9b284bbd644dcc543a8fd9a11254548441a31bd | refs/heads/master | 2023-08-28T16:12:19.191071 | 2019-01-04T18:17:31 | 2019-01-13T12:42:15 | 91,342,897 | 0 | 1 | null | 2023-09-06T21:55:07 | 2017-05-15T13:46:36 | Python | UTF-8 | Python | false | false | 1,366 | py | from django.contrib import admin
from eventex.core.models import Speaker, Contact, Talk, Course
class ContactInLine(admin.TabularInline):
model = Contact
extra = 1
class SpeakerModelAdmin(admin.ModelAdmin):
inlines = [ContactInLine]
prepopulated_fields = {'slug': ('name',)}
list_display = ['name', 'photo_img', 'website_link',
'email', 'phone']
def website_link(self, obj):
return '<a href="{0}">{0}</a>'.format(obj.website)
website_link.allow_tags = True
website_link.short_description = 'website'
def photo_img(self, obj):
return '<img width="32px" src={} />'.format(obj.photo)
photo_img.allow_tags = True
photo_img.short_description = 'foto'
def email(self, obj):
# return Contact.emails.filter(speaker=obj).first()
return obj.contact_set.emails().first()
email.short_description = 'e-mail'
def phone(self, obj):
# return Contact.phones.filter(speaker=obj).first()
return obj.contact_set.phones().first()
phone.short_description = 'telefone'
class TalkModelAdmin(admin.ModelAdmin):
def get_queryset(self, request):
qs = super().get_queryset(request)
return qs.filter(course=None)
admin.site.register(Speaker, SpeakerModelAdmin)
admin.site.register(Talk, TalkModelAdmin)
admin.site.register(Course)
| [
"hpfn@debian.org"
] | hpfn@debian.org |
db1cb5b24827d4e737ed5955d062f7174cbdf581 | acd4f039e319e845bf499fc6c27d0609ef5a5081 | /cno/boolean/steady.py | 951e538bf43d2025eea3ac69c9046f7e8166cf92 | [
"BSD-2-Clause"
] | permissive | ltobalina/cellnopt | a917d852680320eb620069da0126eecdb9691329 | b5a018299cd06c666c27573067fb848c36c6d388 | refs/heads/master | 2021-01-14T14:16:58.179955 | 2014-11-23T22:25:50 | 2014-11-23T22:25:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 12,956 | py | from cno.core.base import CNOBase
import pandas as pd
import numpy as np
import pylab
from cno.misc.profiler import do_profile
import time
class Steady(CNOBase):
"""Naive implementation of Steady state to help in
designing the API"""
def __init__(self, model, data, verbose=True):
super(Steady, self).__init__(model, data, verbose)
self.model = self.pknmodel.copy()
self.time = self.data.times[1]
# just a reference to the conditions
self.inhibitors = self.data.inhibitors
self.stimuli = self.data.stimuli
self.inhibitors_names = self.data.inhibitors.columns
self.stimuli_names = self.data.stimuli.columns
self.N = self.data.df.query('time==0').shape[0]
self.results = self.data.df.copy()
self.results = self.results.query("time==@self.time")
# ignore data of the edge [0:2]
self.toflip = [x[0:2] for x in self.model.edges(data=True) if x[2]['link'] == '-']
self.init(self.time)
self.measures = {}
self.measures[0] = self.data.df.query("time==0").reset_index(drop=True).values
self.measures[self.time] = self.data.df.query("time==@self.time").reset_index(drop=True).values
self.simulated = {}
def init(self, time):
assert time in self.data.times
self.values = {}
for node in self.model.nodes():
self.values[node] = np.array([np.nan for x in range(0,self.N)])
for this in self.stimuli_names:
self.values[this] = self.stimuli[this].values.copy()
for this in self.inhibitors_names:
self.values[this] = 1. - self.inhibitors[this].values.copy()
self.and_gates = [x for x in self.model.nodes() if "^" in x]
self.predecessors = {}
for node in self.model.nodes():
self.predecessors[node] = self.model.predecessors(node)
self.successors = {}
for node in self.model.nodes():
self.successors[node] = self.model.successors(node)
def preprocessing(self, expansion=True, compression=True, cutnonc=True):
self.model.midas = self.data
self.model.preprocessing(expansion=expansion, compression=compression,
cutnonc=cutnonc)
self.init(self.time)
#@do_profile()
def simulate(self, tick, debug=False, reactions=None):
# pandas is very convenient but slower than numpy
# The dataFrame instanciation is costly as well.
# For small models, it has a non-negligeable cost.
# inhibitors will be changed in not ON
self.tochange = [x for x in self.model.nodes() if x not in self.stimuli_names
and x not in self.and_gates ]
# what about a species that is both inhibited and measured
testVal = 1e-3
self.debug_values = []
#self.X0 = pd.DataFrame(self.values)
#self.debug_values.append(self.X0.copy())
self.residuals = []
self.penalties = []
self.count = 0
self.nSp = len(self.values.keys())
residual = 1
frac = 1.2
        # FIXME: the +1 is there to reproduce the results of CellNOptR.
        # Because of cycles, you may otherwise not end up with the same results;
        # this happens if you have cycles with inhibitions
        # and an odd number of edges.
while (self.count < self.nSp * frac +1) and residual > testVal:
self.previous = self.values.copy()
#self.X0 = pd.DataFrame(self.values)
#self.X0 = self.values.copy()
# compute AND gates first. why
for node in self.and_gates:
# replace na by large number so that min is unchanged
self.values[node] = np.nanmin(np.array([self.values[x].copy() for x in
self.predecessors[node]]), axis=0)
for node in self.tochange:
# easy one, just the value of predecessors
#if len(self.predecessors[node]) == 1:
# self.values[node] = self.values[self.predecessors[node][0]].copy()
if len(self.predecessors[node]) == 0:
pass # nothing to change
else:
self.values[node] = np.nanmax(
np.array([self.values[x] if (x,node) not in self.toflip
else 1-self.values[x] for x in
self.predecessors[node]]),
axis=0)
# take inhibitors into account
if node in self.inhibitors_names:
self.values[node] *= 1 - self.inhibitors[node].values
            # about 30% of the time is spent here
            # NAs are set to zero automatically by the int16 cast,
            # but that helps speed up the code a bit by removing the need to
            # handle NAs; with nansum, NAs are ignored even when 1 is compared to NA
self.m1 = np.array([self.previous[k] for k in self.previous.keys() ], dtype=np.int16)
self.m2 = np.array([self.values[k] for k in self.previous.keys() ], dtype=np.int16)
residual = np.nansum(np.square(self.m1 - self.m2))
self.debug_values.append(self.previous.copy())
self.residuals.append(residual)
self.count += 1
# add the latest values simulated in the while loop
self.debug_values.append(self.values.copy())
# Need to set undefined values to NAs
self.simulated[self.time] = np.array([self.values[k]
for k in self.data.df.columns ], dtype=float).transpose()
self.prev = {}
self.prev[self.time] = np.array([self.previous[k]
for k in self.data.df.columns ], dtype=float).transpose()
mask = self.prev[self.time] != self.simulated[self.time]
self.simulated[self.time][mask] = np.nan
# set the non-resolved bits to NA
# TODO
#newInput[which(abs(outputPrev-newInput) > testVal)] <- NA
        # loops are handled differently
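The update rules in `simulate` encode Boolean logic over 0/1 values: OR across predecessors is a `max`, AND-gate nodes take a `min`, and an inhibitory edge enters as `1 - x`. A standalone sketch of that encoding, independent of the class (illustrative only, names are ours):

```python
def or_gate(inputs, flipped=()):
    # OR over predecessor values; inhibitory edges enter flipped as 1 - x
    return max(1 - v if i in flipped else v for i, v in enumerate(inputs))

def and_gate(inputs):
    # AND-gate node: the minimum of its predecessor values
    return min(inputs)
```

With values restricted to {0, 1}, `max` and `min` reproduce the usual OR/AND truth tables, which is why the vectorized `nanmax`/`nanmin` calls above implement the logic model.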
#@do_profile()
def score(self):
# on ExtLiverPCB
# computeScoreT1(cnolist, pknmodel, rep(1,113))
# liverDREAM
#computeScoreT1(cnolist, pknmodel, rep(1,58))
#[1] 0.2574948
# found 0.27
# time 1 only is taken into account
diff = np.square(self.measures[self.time] - self.simulated[self.time])
#debugging
self.diff = diff
N = diff.shape[0] * diff.shape[1]
Nna = np.isnan(diff).sum()
N-= Nna
S = np.nansum(self.diff) / float(N)
return S
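The score above is a NaN-masked mean squared error: NaN entries (unresolved states) are excluded from both the sum and the count. A pure-Python sketch of the same masking, as a hypothetical helper rather than part of the class:

```python
import math

def nan_mse(measured, simulated):
    # mean of squared differences, skipping pairs where either value is NaN
    total, count = 0.0, 0
    for m, s in zip(measured, simulated):
        d = (m - s) ** 2
        if not math.isnan(d):  # NaN in either operand propagates into d
            total += d
            count += 1
    return total / count
```

This mirrors `np.nansum(diff) / (N - Nna)` in the method without requiring NumPy.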
def plot(self):
self.model.plot()
def test(self, N=100):
# N = 100, all bits on
# CellNOptR on LiverDREAM 0.65 seconds. 0.9 in cno
# CellNOptR on LiverDREAM preprocessed) on:0.75 seconds. 1.42 in cno
# 0.2574948 in CellNOptR
# 0.27
# CellNOptR on ToyMMB : 0.22 ; 0.22s in cno
# 0.09467
# CellNOptR on ExtLiverPCB : 1.4 seconds ; 1.7s in cno
# 0.29199
# cost of pruning models ?
"""
library(CellNOptR)
cnolist = ...
pknmodel = ...
test <- function(cnolist, model, N=100){
t1 = proc.time()
for (i in rep(1,N)){
mse = computeScoreT1(cnolist, model, rep(1,length(model$reacID)))
}
t2 = proc.time()
print(t2-t1)
return(mse)
}
test(cnolist, model)
"""
t1 = time.time()
for i in range(0,N):
self.init(self.time)
self.simulate(1)
self.score()
t2 = time.time()
print(str(t2-t1) + " seconds")
def plotsim(self):
# What do we use here: self.values
import simulator
sor = simulator.Simulator()
#if time is None:
# time = len(self.debug_values) - 1
sor.plot_time_course(self.values)
pylab.title("Steady state for all experiments(x-axis)\n\n\n\n")
pylab.tight_layout()
def plot_errors(self, columns=None):
# What do we use here: self.values
print("Use only time 0..")
if columns is None:
columns = self.data.df.columns
X1 = pd.DataFrame(self.values)[columns].copy()
N = X1.shape[0]
X1['time'] = [self.time] * N
X1['cell'] = [self.data.cellLine] * N
X1['experiment'] = self.data.experiments.index
X1.set_index(['cell', 'experiment', 'time'], inplace=True)
self.data.sim.ix[X1.index] = X1
self.data.plot(mode='mse')
print("MSE= %s(caspo/cno with only 1 time)" % self.score())
print("MSE= %s(cellnoptr with only 1 time)" % str(self.score()/2.))
#@do_profile()
def test():
from cno import cnodata
s = Steady(cnodata("PKN-ToyMMB.sif"), cnodata("MD-ToyMMB.csv"))
s.test()
#test()
"""
$mse
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 0.00405 0.500 0.3698 0.02 0.0072 0.00000 0.00
[2,] 0.01620 0.045 0.0050 0.00 0.0000 0.28125 0.18
[3,] 0.00405 0.045 0.0050 0.02 0.0072 0.28125 0.18
[4,] 0.00405 0.000 0.3698 0.00 0.0000 0.00000 0.00
[5,] 0.01620 0.045 0.0050 0.00 0.0000 0.28125 0.18
[6,] 0.00405 0.045 0.0050 0.00 0.0000 0.28125 0.18
[7,] 0.00000 0.500 0.0000 0.02 0.0072 0.00000 0.00
[8,] 0.00000 0.045 0.0050 0.50 0.5000 0.28125 0.18
[9,] 0.00000 0.045 0.0050 0.02 0.0072 0.28125 0.18
$simResults[[1]]$t0
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 0 0 0 0 0 0 0
[2,] 0 0 0 0 0 0 0
[3,] 0 0 0 0 0 0 0
[4,] 0 0 0 0 0 0 0
[5,] 0 0 0 0 0 0 0
[6,] 0 0 0 0 0 0 0
[7,] 0 0 0 0 0 0 0
[8,] 0 0 0 0 0 0 0
[9,] 0 0 0 0 0 0 0
s.score()
pd.DataFrame(s.diff/2.)[[0,2,4,1,6,3,5]]
Akt Hsp27 NFkB Erk p90RSK Jnk cJun
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 1 1 0 1 1 0 0
[2,] 1 1 1 0 0 1 1
[3,] 1 1 1 1 1 1 1
[4,] 1 0 0 0 0 0 0
[5,] 1 1 1 0 0 1 1
[6,] 1 1 1 0 0 1 1
[7,] 0 1 0 1 1 0 0
[8,] 0 1 1 1 1 1 1
[9,] 0 1 1 1 1 1 1
EGF TNFa Raf PI3K
[1,] 1 0 0 0
[2,] 0 1 0 0
[3,] 1 1 0 0
[4,] 1 0 1 0
[5,] 0 1 1 0
[6,] 1 1 1 0
[7,] 1 0 0 1
[8,] 0 1 0 1
[9,] 1 1 0 1
"""
"""
$mse
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.02607682 NA 0.3539345 0.4100143 NA 0.03221782
[2,] 0.09200486 0.03723420 0.2499203 0.6794323 0.3669156 0.02610956
[3,] 0.02845594 NA 0.1943445 0.7901995 0.3425479 0.28019457
[4,] 0.07943784 0.04197267 0.2153080 0.5394515 0.3405881 0.26677940
[5,] 0.06059407 0.03067175 0.2714396 0.4428013 0.3980180 0.03146179
[6,] 0.02276370 NA 0.2004797 0.3669970 0.3474665 0.28991532
[7,] 0.06374754 0.02499234 0.2358515 0.4051598 0.4542814 0.20409221
[8,] 0.01885474 0.01503284 0.2139463 0.6602201 0.3880110 0.03740429
[9,] 0.01601736 0.02506731 0.2179543 0.7714214 0.4299402 0.24682266
[10,] 0.02014269 0.01944989 0.2992465 0.6237731 0.3529949 0.25041879
$simResults
$simResults[[1]]
$simResults[[1]]$t0
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0 0 0 0 0 0
[2,] 0 0 0 0 0 0
[3,] 0 0 0 0 0 0
[4,] 0 0 0 0 0 0
[5,] 0 0 0 0 0 0
[6,] 0 0 0 0 0 0
[7,] 0 0 0 0 0 0
[8,] 0 0 0 0 0 0
[9,] 0 0 0 0 0 0
[10,] 0 0 0 0 0 0
$simResults[[1]]$t1
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0 NA 1 1 NA 0
[2,] 1 1 1 0 1 0
[3,] 0 NA 1 0 1 0
[4,] 1 1 1 0 1 0
[5,] 1 1 1 1 1 0
[6,] 0 NA 1 1 1 0
[7,] 1 1 1 1 1 0
[8,] 0 0 1 0 1 0
[9,] 0 0 1 0 1 0
[10,] 0 0 1 0 1 0
SIMPLE LOOP on PB sub pkn
'tnfr=ikk',
'tnfr=pi3k',
'pi3k=raf1',
'nfkb=ex',
'!ikb=nfkb',
'ex=ikb',
'!ikk=ikb',
'tnfa=tnfr']
and data
nfkb raf1
cell experiment time
Cell experiment_0 0 0.93 0.13
2 0.91 0.10
experiment_1 0 0.11 0.13
2 0.74 0.94
experiment_2 0 0.12 0.25
2 0.70 0.25
$mse
[,1] [,2]
[1,] 0.00500 0.41405
[2,] 0.00180 NA
[3,] 0.03125 NA
$simResults
$simResults[[1]]
$simResults[[1]]$t0
[,1] [,2]
[1,] 0 0
[2,] 0 0
[3,] 0 0
$simResults[[1]]$t1
[,1] [,2]
[1,] 0 0
[2,] 1 NA
[3,] 0 NA
"""
| [
"cokelaer@gmail.com"
] | cokelaer@gmail.com |
ca7fe4e826cfb4c5139c30a91811719f01d3ccd7 | ffef4697f09fb321a04f2b3aad98b688f4669fb5 | /tests/ut/python/pipeline/parse/test_create_obj.py | a702f37e0bfbf65e1248e54fe478471796d1b85d | [
"Apache-2.0",
"AGPL-3.0-only",
"BSD-3-Clause-Open-MPI",
"MPL-1.1",
"LicenseRef-scancode-proprietary-license",
"LicenseRef-scancode-unknown-license-reference",
"Unlicense",
"MPL-2.0",
"LGPL-2.1-only",
"GPL-2.0-only",
"Libpng",
"BSL-1.0",
"MIT",
"MPL-2.0-no-copyleft-exception",
"IJG",
"Z... | permissive | Ewenwan/mindspore | 02a0f1fd660fa5fec819024f6feffe300af38c9c | 4575fc3ae8e967252d679542719b66e49eaee42b | refs/heads/master | 2021-05-19T03:38:27.923178 | 2020-03-31T05:49:10 | 2020-03-31T05:49:10 | 251,512,047 | 1 | 0 | Apache-2.0 | 2020-03-31T05:48:21 | 2020-03-31T05:48:20 | null | UTF-8 | Python | false | false | 3,876 | py | # Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
@File : test_create_obj.py
@Author:
@Date : 2019-06-26
@Desc : test create object instance on parse function, eg: 'construct'
Support class : nn.Cell ops.Primitive
Support parameter: type is define on function 'ValuePtrToPyData'
(int,float,string,bool,tensor)
"""
import logging
import numpy as np
import mindspore.nn as nn
from mindspore.ops import operations as P
from mindspore.common.api import ms_function
from mindspore.common.tensor import Tensor
from ...ut_filter import non_graph_engine
log = logging.getLogger("test")
log.setLevel(level=logging.ERROR)
class Net(nn.Cell):
""" Net definition """
def __init__(self):
super(Net, self).__init__()
self.softmax = nn.Softmax(0)
self.axis = 0
def construct(self, x):
x = nn.Softmax(self.axis)(x)
return x
# Test: create CELL OR Primitive instance on construct
@non_graph_engine
def test_create_cell_object_on_construct():
""" test_create_cell_object_on_construct """
log.debug("begin test_create_object_on_construct")
np1 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input_me = Tensor(np1)
net = Net()
output = net(input_me)
out_me1 = output.asnumpy()
print(np1)
print(out_me1)
log.debug("finished test_create_object_on_construct")
# Test: create CELL OR Primitive instance on construct
class Net1(nn.Cell):
""" Net1 definition """
def __init__(self):
super(Net1, self).__init__()
self.add = P.TensorAdd()
@ms_function
def construct(self, x, y):
add = P.TensorAdd()
result = add(x, y)
return result
@non_graph_engine
def test_create_primitive_object_on_construct():
""" test_create_primitive_object_on_construct """
log.debug("begin test_create_object_on_construct")
x = Tensor(np.array([[1, 2, 3], [1, 2, 3]], np.float32))
y = Tensor(np.array([[2, 3, 4], [1, 1, 2]], np.float32))
net = Net1()
net.construct(x, y)
log.debug("finished test_create_object_on_construct")
# Test: create CELL OR Primitive instance on construct using many parameters
class NetM(nn.Cell):
""" NetM definition """
def __init__(self, name, axis):
super(NetM, self).__init__()
# self.relu = nn.ReLU()
self.name = name
self.axis = axis
self.softmax = nn.Softmax(self.axis)
def construct(self, x):
x = self.softmax(x)
return x
class NetC(nn.Cell):
""" NetC definition """
def __init__(self, tensor):
super(NetC, self).__init__()
self.tensor = tensor
def construct(self, x):
x = NetM("test", 1)(x)
return x
# Test: creat CELL OR Primitive instance on construct
@non_graph_engine
def test_create_cell_object_on_construct_use_many_parameter():
""" test_create_cell_object_on_construct_use_many_parameter """
log.debug("begin test_create_object_on_construct")
np1 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input_me = Tensor(np1)
net = NetC(input_me)
output = net(input_me)
out_me1 = output.asnumpy()
print(np1)
print(out_me1)
log.debug("finished test_create_object_on_construct")
| [
"leon.wanghui@huawei.com"
] | leon.wanghui@huawei.com |
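The pattern exercised by the tests above — building an operator object inside the forward method (`construct`) instead of caching it in `__init__` — can be sketched without MindSpore in plain Python. The class names below are illustrative stand-ins, not MindSpore APIs:

```python
class NormalizeOp:
    """Toy stand-in for a parameterized operator like nn.Softmax(axis)."""

    def __init__(self, axis):
        self.axis = axis  # kept only to mirror the parameterized constructor

    def __call__(self, xs):
        # simple sum-normalization over a flat list, as a softmax stand-in
        total = sum(xs)
        return [x / total for x in xs]


class Net:
    """Mirrors the test's Net: the op is created at call time, not cached."""

    def __init__(self):
        self.axis = 0

    def construct(self, xs):
        # operator instance created inside the forward pass, as in the test
        return NormalizeOp(self.axis)(xs)


print(Net().construct([1.0, 3.0]))  # [0.25, 0.75]
```

The graph compiler has to support both styles, which is exactly what `test_create_cell_object_on_construct` and its siblings check.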
7dbb78d80fe115de113c038eef35d2eb2e41c5e9 | 1e1f7d3687b71e69efa958d5bbda2573178f2acd | /accounts/doctype/tds_control/tds_control.py | 69588d88ceb891f49dca1843969485b1b0c98525 | [] | no_license | ravidey/erpnext | 680a31e2a6b957fd3f3ddc5fd6b383d8ea50f515 | bb4b9bfa1551226a1d58fcef0cfe8150c423f49d | refs/heads/master | 2021-01-17T22:07:36.049581 | 2011-06-10T07:32:01 | 2011-06-10T07:32:01 | 1,869,316 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,401 | py | # Please edit this list and import only required elements
import webnotes
from webnotes.utils import add_days, add_months, add_years, cint, cstr, date_diff, default_fields, flt, fmt_money, formatdate, generate_hash, getTraceback, get_defaults, get_first_day, get_last_day, getdate, has_common, month_name, now, nowdate, replace_newlines, sendmail, set_default, str_esc_quote, user_format, validate_email_add
from webnotes.model import db_exists
from webnotes.model.doc import Document, addchild, removechild, getchildren, make_autoname, SuperDocType
from webnotes.model.doclist import getlist, copy_doclist
from webnotes.model.code import get_obj, get_server_obj, run_server_obj, updatedb, check_syntax
from webnotes import session, form, is_testing, msgprint, errprint
set = webnotes.conn.set
sql = webnotes.conn.sql
get_value = webnotes.conn.get_value
in_transaction = webnotes.conn.in_transaction
convert_to_lists = webnotes.conn.convert_to_lists
# -----------------------------------------------------------------------------------------
class DocType:
def __init__(self, doc, doclist=[]):
self.doc = doc
self.doclist = doclist
# ============TDS==================
	# Block a payable voucher on which TDS is applicable from being made before the
	# posting date of the voucher in which TDS was first applied for this category
def validate_first_entry(self,obj):
if obj.doc.doctype == 'Payable Voucher':
supp_acc = obj.doc.credit_to
elif obj.doc.doctype == 'Journal Voucher':
supp_acc = obj.doc.supplier_account
if obj.doc.ded_amount:
# first pv
first_pv = sql("select posting_date from `tabPayable Voucher` where credit_to = '%s' and docstatus = 1 and tds_category = '%s' and fiscal_year = '%s' and tds_applicable = 'Yes' and (ded_amount != 0 or ded_amount is not null) order by posting_date asc limit 1"%(supp_acc, obj.doc.tds_category, obj.doc.fiscal_year))
first_pv_date = first_pv and first_pv[0][0] or ''
# first jv
first_jv = sql("select posting_date from `tabJournal Voucher` where supplier_account = '%s'and docstatus = 1 and tds_category = '%s' and fiscal_year = '%s' and tds_applicable = 'Yes' and (ded_amount != 0 or ded_amount is not null) order by posting_date asc limit 1"%(supp_acc, obj.doc.tds_category, obj.doc.fiscal_year))
first_jv_date = first_jv and first_jv[0][0] or ''
#first tds voucher date
first_tds_date = ''
if first_pv_date and first_jv_date:
first_tds_date = first_pv_date < first_jv_date and first_pv_date or first_jv_date
elif first_pv_date:
first_tds_date = first_pv_date
elif first_jv_date:
first_tds_date = first_jv_date
if first_tds_date and getdate(obj.doc.posting_date) < first_tds_date:
msgprint("First tds voucher for this category has been made already. Hence payable voucher cannot be made before posting date of first tds voucher ")
raise Exception
# TDS function definition
#---------------------------
def get_tds_amount(self, obj):
# Validate if posting date b4 first tds entry for this category
self.validate_first_entry(obj)
# get current amount and supplier head
if obj.doc.doctype == 'Payable Voucher':
supplier_account = obj.doc.credit_to
total_amount=flt(obj.doc.grand_total)
for d in getlist(obj.doclist,'advance_allocation_details'):
if flt(d.tds_amount)!=0:
total_amount -= flt(d.allocated_amount)
elif obj.doc.doctype == 'Journal Voucher':
supplier_account = obj.doc.supplier_account
total_amount = obj.doc.total_debit
if obj.doc.tds_category:
# get total billed
total_billed = 0
pv = sql("select sum(ifnull(grand_total,0)), sum(ifnull(ded_amount,0)) from `tabPayable Voucher` where tds_category = %s and credit_to = %s and fiscal_year = %s and docstatus = 1 and name != %s and is_opening != 'Yes'", (obj.doc.tds_category, supplier_account, obj.doc.fiscal_year, obj.doc.name))
jv = sql("select sum(ifnull(total_debit,0)), sum(ifnull(ded_amount,0)) from `tabJournal Voucher` where tds_category = %s and supplier_account = %s and fiscal_year = %s and docstatus = 1 and name != %s and is_opening != 'Yes'", (obj.doc.tds_category, supplier_account, obj.doc.fiscal_year, obj.doc.name))
tds_in_pv = pv and pv[0][1] or 0
tds_in_jv = jv and jv[0][1] or 0
total_billed += flt(pv and pv[0][0] or 0)+flt(jv and jv[0][0] or 0)+flt(total_amount)
# get slab
slab = sql("SELECT * FROM `tabTDS Rate Detail` t1, `tabTDS Rate Chart` t2 WHERE t1.category = '%s' AND t1.parent=t2.name and t2.applicable_from <= '%s' ORDER BY t2.applicable_from DESC LIMIT 1" % (obj.doc.tds_category, obj.doc.posting_date), as_dict = 1)
if slab and flt(slab[0]['slab_from']) <= total_billed:
if flt(tds_in_pv) <= 0 and flt(tds_in_jv) <= 0:
total_amount = total_billed
slab = slab[0]
# special tds rate
special_tds = sql("select special_tds_rate, special_tds_limit, special_tds_rate_applicable from `tabTDS Detail` where parent = '%s' and tds_category = '%s'"% (supplier_account,obj.doc.tds_category))
# get_pan_number
pan_no = sql("select pan_number from `tabAccount` where name = '%s'" % supplier_account)
pan_no = pan_no and cstr(pan_no[0][0]) or ''
if not pan_no and flt(slab.get('rate_without_pan')):
msgprint("As there is no PAN number mentioned in the account head: %s, TDS amount will be calculated at rate %s%%" % (supplier_account, cstr(slab['rate_without_pan'])))
tds_rate = flt(slab.get('rate_without_pan'))
elif special_tds and special_tds[0][2]=='Yes' and (flt(special_tds[0][1])==0 or flt(special_tds[0][1]) >= flt(total_amount)):
tds_rate = flt(special_tds[0][0])
else:
tds_rate=flt(slab['rate'])
# calculate tds amount
if flt(slab['rate']):
ac = sql("SELECT account_head FROM `tabTDS Category Account` where parent=%s and company=%s", (obj.doc.tds_category,obj.doc.company))
if ac:
obj.doc.tax_code = ac[0][0]
obj.doc.rate = tds_rate
obj.doc.ded_amount = round(flt(tds_rate) * flt(total_amount) / 100)
else:
msgprint("TDS Account not selected in TDS Category %s" % (obj.doc.tds_category))
raise Exception
| [
"pdvyas@erpnext.com"
] | pdvyas@erpnext.com |
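The core arithmetic in `get_tds_amount` above — check whether cumulative billing has crossed the slab threshold, pick a rate, and round the deduction via `round(rate * amount / 100)` — can be sketched free of the Frappe/ERPNext framework. The slab numbers below are made up for illustration, not statutory figures:

```python
def compute_tds(total_billed, current_amount, slab_from, rate):
    """Return the TDS deduction for the current voucher, or 0 if the
    cumulative billed amount has not crossed the slab threshold.

    Mirrors the `round(flt(tds_rate) * flt(total_amount) / 100)` step above.
    """
    if total_billed < slab_from:
        return 0
    return round(rate * current_amount / 100.0)


# Illustrative numbers: a 10% slab that kicks in once cumulative billing
# reaches 75,000.
print(compute_tds(total_billed=80000, current_amount=50000,
                  slab_from=75000, rate=10.0))  # 5000
```

The real method layers supplier-specific special rates and a penalty rate for missing PAN numbers on top of this basic slab lookup.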
f3cbd8df8e6fd91388f44a5728142de7a9fa8cbd | 2652fd6261631794535589427a384693365a585e | /trunk/workspace/Squish/src/TestScript/UI/suite_UI_51/tst_UI_51_Router_1841_NVRAM_Save_Erase/test.py | 378fcb31dd1415d8743e1473b7159fa68a9cef1d | [] | no_license | ptqatester1/ptqa | 88c652380167f64a953bfd7a65041e7d8ac48c90 | 5b5997ea459e9aac17db8da2041e2af331927104 | refs/heads/master | 2021-01-21T19:06:49.275364 | 2017-06-19T03:15:00 | 2017-06-19T03:15:00 | 92,115,462 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,253 | py | from API.ComponentBox import ComponentBoxConst
from API.Device.Router.Router import Router
from API.Utility.Util import Util
# Function initialization
util = Util()
# Device initialization
router0 = Router(ComponentBoxConst.DeviceModel.ROUTER_1841, 100, 200, "Router0")
def main():
util.init()
createTopology()
util.clickOnSimulation()
util.clickOnRealtime()
checkPoint1()
checkPoint2()
def createTopology():
router0.create()
def checkPoint1():
router0.select()
router0.clickConfigTab()
router0.config.settings.saveButton()
router0.clickCliTab()
router0.cli.textCheckPoint("Router#copy running-config startup-config")
router0.cli.textCheckPoint("Destination filename \[startup-config\]?")
router0.cli.textCheckPoint("Building configuration...")
router0.cli.textCheckPoint("\[OK\]")
def checkPoint2():
router0.clickConfigTab()
router0.config.settings.eraseButton()
router0.config.settings.popups.eraseStartupConfigNoButton()
router0.clickCliTab()
router0.cli.textCheckPoint("Erase of nvram: complete", 0)
router0.clickConfigTab()
router0.config.settings.eraseNvram()
router0.clickCliTab()
router0.cli.textCheckPoint("Erase of nvram: complete") | [
"ptqatester1@gmail.com"
] | ptqatester1@gmail.com |
5bdf2f8e51db469e4af4c9ca2a139d967f5f99fc | c6f93ccf29f978a7834a01c25e636364adeaa4ea | /setup.py | 86142ec0f903179b1486fcbb97500086415833d8 | [
"MIT"
] | permissive | aviv-julienjehannet/jschon | d099709831fbee740c8cb5466c5964cec8d669fa | c8a0ddbb8202d9e80e8c4e959ec8bfd28297eec1 | refs/heads/main | 2023-04-16T05:51:41.170699 | 2021-04-23T13:33:01 | 2021-04-23T13:33:01 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,399 | py | import pathlib
from setuptools import setup, find_packages
HERE = pathlib.Path(__file__).parent.resolve()
README = (HERE / 'README.md').read_text(encoding='utf-8')
setup(
name='jschon',
version='0.2.0',
description='A pythonic, extensible JSON Schema implementation.',
long_description=README,
long_description_content_type='text/markdown',
url='https://github.com/marksparkza/jschon',
author='Mark Jacobson',
author_email='mark@saeon.ac.za',
license='MIT',
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
packages=find_packages(exclude=['tests']),
include_package_data=True,
python_requires='~=3.8',
install_requires=['rfc3986'],
extras_require={
'test': ['tox'],
'dev': [
'pytest',
'coverage',
'hypothesis',
'pytest-benchmark',
]
},
)
| [
"52427991+marksparkza@users.noreply.github.com"
] | 52427991+marksparkza@users.noreply.github.com |
1e11232be19b72a2cad2d2b0d8b3a87e17d8fa2c | ce819ddd76427722d967e06190fc24ac98758009 | /PyQT_MySQL/Study_PyQT5/22/chapter22_2.py | f6b7f01c1136095b3696243a8837986f3bf7f369 | [] | no_license | huilizhou/Deeplearning_Python_DEMO | cb4164d21899757a4061836571b389dad0e63094 | 0a2898122b47b3e0196966a2fc61468afa99f67b | refs/heads/master | 2021-08-16T10:28:51.992892 | 2020-04-04T08:26:07 | 2020-04-04T08:26:07 | 148,308,575 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,165 | py | import sys
from PyQt5.QtWidgets import QApplication, QWidget, QInputDialog, QLineEdit, QTextEdit, QPushButton, \
QGridLayout
class Demo(QWidget):
def __init__(self):
super(Demo, self).__init__()
self.name_btn = QPushButton('Name', self)
self.gender_btn = QPushButton('Gender', self)
self.age_btn = QPushButton('Age', self)
self.score_btn = QPushButton('Score', self)
self.info_btn = QPushButton('Info', self)
self.name_btn.clicked.connect(
lambda: self.open_dialog_func(self.name_btn))
self.gender_btn.clicked.connect(
lambda: self.open_dialog_func(self.gender_btn))
self.age_btn.clicked.connect(
lambda: self.open_dialog_func(self.age_btn))
self.score_btn.clicked.connect(
lambda: self.open_dialog_func(self.score_btn))
self.info_btn.clicked.connect(
lambda: self.open_dialog_func(self.info_btn))
self.name_line = QLineEdit(self)
self.gender_line = QLineEdit(self)
self.age_line = QLineEdit(self)
self.score_line = QLineEdit(self)
self.info_textedit = QTextEdit(self)
self.g_layout = QGridLayout()
self.g_layout.addWidget(self.name_btn, 0, 0, 1, 1)
self.g_layout.addWidget(self.name_line, 0, 1, 1, 1)
self.g_layout.addWidget(self.gender_btn, 1, 0, 1, 1)
self.g_layout.addWidget(self.gender_line, 1, 1, 1, 1)
self.g_layout.addWidget(self.age_btn, 2, 0, 1, 1)
self.g_layout.addWidget(self.age_line, 2, 1, 1, 1)
self.g_layout.addWidget(self.score_btn, 3, 0, 1, 1)
self.g_layout.addWidget(self.score_line, 3, 1, 1, 1)
self.g_layout.addWidget(self.info_btn, 4, 0, 1, 1)
self.g_layout.addWidget(self.info_textedit, 4, 1, 1, 1)
self.setLayout(self.g_layout)
def open_dialog_func(self, btn):
if btn == self.name_btn: # 1
name, ok = QInputDialog.getText(
self, 'Name Input', 'Please enter the name:')
if ok:
self.name_line.setText(name)
elif btn == self.gender_btn: # 2
gender_list = ['Female', 'Male']
gender, ok = QInputDialog.getItem(
self, 'Gender Input', 'Please choose the gender:', gender_list, 0, False)
if ok:
self.gender_line.setText(gender)
elif btn == self.age_btn:
age, ok = QInputDialog.getInt(
self, 'Age Input', 'Please select the age:')
if ok:
self.age_line.setText(str(age))
elif btn == self.score_btn:
score, ok = QInputDialog.getDouble(
self, 'Score Input', 'Please select the score:')
if ok:
self.score_line.setText(str(score))
else:
info, ok = QInputDialog.getMultiLineText(
self, 'Info Input', 'Please enter the info:')
if ok:
self.info_textedit.setText(info)
if __name__ == '__main__':
app = QApplication(sys.argv)
demo = Demo()
demo.show()
sys.exit(app.exec_())
| [
"2540278344@qq.com"
] | 2540278344@qq.com |
efaed80c4684aaf515c0b2e38b52d25235d134be | 942ec6d53f40ff43f36594bb607dc7a86f0e6370 | /rasa_core/interpreter.py | 947655e9ce2ad753cfffade9378668eb4bd18f6e | [
"Apache-2.0",
"MIT",
"BSD-3-Clause"
] | permissive | hydercps/rasa_core | 319be5a0646bbc050538aa5ef016ea84183cf0b4 | e0a9db623cbe0bfa2ffd232a1c05a80441dd6ab7 | refs/heads/master | 2021-05-15T13:06:23.482657 | 2017-10-26T11:58:41 | 2017-10-26T11:58:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,169 | py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import logging
import re
import os
import requests
from builtins import str
logger = logging.getLogger(__name__)
class NaturalLanguageInterpreter(object):
def parse(self, text):
raise NotImplementedError(
"Interpreter needs to be able to parse "
"messages into structured output.")
@staticmethod
def create(obj):
if isinstance(obj, NaturalLanguageInterpreter):
return obj
if isinstance(obj, str):
return RasaNLUInterpreter(model_directory=obj)
else:
return RegexInterpreter() # default interpreter
class RegexInterpreter(NaturalLanguageInterpreter):
@staticmethod
def extract_intent_and_entities(user_input):
        value_assign_rx = r'\s*(.+)\s*=\s*(.+)\s*'
        structured_message_rx = r'^_([^\[]+)(\[(.+)\])?'
        m = re.search(structured_message_rx, user_input)
if m is not None:
intent = m.group(1).lower()
offset = m.start(3)
entities_str = m.group(3)
entities = []
if entities_str is not None:
for entity_str in entities_str.split(','):
for match in re.finditer(value_assign_rx, entity_str):
start = match.start(2) + offset
end = match.end(0) + offset
entity = {
"entity": match.group(1),
"start": start,
"end": end,
"value": match.group(2)}
entities.append(entity)
return intent, entities
else:
return None, []
def parse(self, text):
intent, entities = self.extract_intent_and_entities(text)
return {
'text': text,
'intent': {
'name': intent,
'confidence': 1.0,
},
'intent_ranking': [{
'name': intent,
'confidence': 1.0,
}],
'entities': entities,
}
class RasaNLUHttpInterpreter(NaturalLanguageInterpreter):
def __init__(self, model_name, token, server):
self.model_name = model_name
self.token = token
self.server = server
def parse(self, text):
"""Parses a text message.
Returns a default value if the parsing of the text failed."""
default_return = {"intent": {"name": "", "confidence": 0.0},
"entities": [], "text": ""}
result = self._rasa_http_parse(text)
return result if result is not None else default_return
def _rasa_http_parse(self, text):
"""Send a text message to a running rasa NLU http server.
Returns `None` on failure."""
if not self.server:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"No rasa NLU server specified!".format(text))
return None
params = {
"token": self.token,
"model": self.model_name,
"q": text
}
url = "{}/parse".format(self.server)
try:
result = requests.get(url, params=params)
if result.status_code == 200:
return result.json()
else:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"Error: {}".format(text, result.text))
return None
except Exception as e:
logger.error(
"Failed to parse text '{}' using rasa NLU over http. "
"Error: {}".format(text, e))
return None
class RasaNLUInterpreter(NaturalLanguageInterpreter):
def __init__(self, model_directory, config_file=None, lazy_init=False):
from rasa_nlu.model import Interpreter
from rasa_nlu.model import Metadata
from rasa_nlu.config import RasaNLUConfig
self.metadata = Metadata.load(model_directory)
self.lazy_init = lazy_init
self.config_file = config_file
if not lazy_init:
self.interpreter = Interpreter.load(self.metadata,
RasaNLUConfig(config_file,
os.environ))
else:
self.interpreter = None
def parse(self, text):
"""Parses a text message.
Returns a default value if the parsing of the text failed."""
if self.lazy_init and self.interpreter is None:
from rasa_nlu.model import Interpreter
from rasa_nlu.config import RasaNLUConfig
self.interpreter = Interpreter.load(self.metadata,
RasaNLUConfig(self.config_file,
os.environ))
return self.interpreter.parse(text)
| [
"tom.bocklisch@scalableminds.com"
] | tom.bocklisch@scalableminds.com |
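The `RegexInterpreter` message format above (`_intent[entity=value, ...]`) can be exercised with a self-contained sketch. This re-implements the parsing logic in simplified form rather than importing rasa_core, and drops the start/end character offsets for brevity:

```python
import re


def parse_structured_message(text):
    """Parse '_intent[key=value, key2=value2]' into (intent, entities)."""
    m = re.search(r'^_([^\[]+)(?:\[(.+)\])?', text)
    if m is None:
        return None, []
    intent = m.group(1).lower()
    entities = []
    if m.group(2) is not None:
        for pair in m.group(2).split(','):
            key, sep, value = pair.partition('=')
            if sep:  # skip malformed fragments with no '='
                entities.append({"entity": key.strip(), "value": value.strip()})
    return intent, entities


print(parse_structured_message("_greet[name=Sara]"))
# ('greet', [{'entity': 'name', 'value': 'Sara'}])
```

A message with no underscore prefix yields `(None, [])`, matching the library's behavior of returning no intent for unstructured input.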
e6bba0955a52988e23383a991c56e01477c61b16 | 201c9dc696159ea684e654fe7f3e3a3b8026cbf0 | /admaren/asgi.py | 2b6a15c77be3db4ace683c797fe32e96ace86a20 | [] | no_license | AthifSaheer/admaren-machine-test | 8a44652bf09f31c54da96db0b9cb9654b7544514 | 2547e4aed9ca23e30bc33151d77b7efd1db6a45c | refs/heads/main | 2023-08-04T03:02:41.844039 | 2021-10-02T06:51:23 | 2021-10-02T06:51:23 | 412,713,951 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 391 | py | """
ASGI config for admaren project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.2/howto/deployment/asgi/
"""
import os
from django.core.asgi import get_asgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'admaren.settings')
application = get_asgi_application()
| [
"liteboook@gmail.com"
] | liteboook@gmail.com |
31ed82385a1930c9ec4ea90def4ed19f98bd1449 | cccc9fa74b16cc4a2ae37dfb2449d6dc1ce215cd | /image_comparison/experiments/experiment2/run_experiment2_atlas_variance.py | 74c5750cd05bcb3469bdb973bab2ad3a4cb44bdd | [] | no_license | nagyistge/brainmeta | 611daf90d77432fa72a79b30fa4b895a60647536 | 105cffebcc0bf1c246ed11b67f3da2fff4a05f99 | refs/heads/master | 2021-05-30T06:44:40.095517 | 2016-01-11T23:21:50 | 2016-01-11T23:21:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,659 | py | #!/usr/bin/python
# This batch script will prepare and submit jobs for running on a SLURM cluster
import os
import time
import pandas
import numpy as np
import nibabel as nib
# Input file
basedir = "/scratch/users/vsochat/DATA/BRAINMETA/experiment1"
outdirectory = "%s/atlas_permutations_spearman" %(basedir)
input_file = "%s/openfmri_labels.tsv" %(basedir)
standard = "%s/standard/MNI152_T1_2mm_brain_mask.nii.gz" %(basedir)
# This will be a gold standard correlation data file
gs_file = "%s/gs_comparisons.tsv" %(basedir)
# Range of sampling percentages from 0.01 to 1.0 (range(1,101) divided by 100)
percentages = np.divide(range(1,101),100.0)
# We will sample some number of regions each time, from 1 through 116, to represent different mask sizes
# This will simulate different levels of coverage!
for percent_sample in percentages:
outfile = "%s/sampled_%s.pkl" %(outdirectory,percent_sample)
if not os.path.exists(outfile):
filey = ".job/coverage_%s.job" %(percent_sample)
filey = open(filey,"w")
filey.writelines("#!/bin/bash\n")
filey.writelines("#SBATCH --job-name=coverage_%s\n" %(percent_sample))
filey.writelines("#SBATCH --output=.out/coverage_%s.out\n" %(percent_sample))
filey.writelines("#SBATCH --error=.out/coverage_%s.err\n" %(percent_sample))
filey.writelines("#SBATCH --time=2-00:00\n")
filey.writelines("#SBATCH --mem=64000\n")
filey.writelines("python /home/vsochat/SCRIPT/python/brainmeta/image_comparison/experiments/experiment2_atlas_variance.py %s %s %s %s %s" %(percent_sample,input_file,outfile,standard,gs_file))
filey.close()
os.system("sbatch -p russpold " + ".job/coverage_%s.job" %(percent_sample))
| [
"vsochat@stanford.edu"
] | vsochat@stanford.edu |
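The job-file pattern above — write a `#SBATCH` header, append the command, submit with `sbatch` — can be factored into a small helper. The directive names are standard sbatch options; the `.out/` paths and naming scheme are just this script's own conventions:

```python
def make_sbatch_script(name, mem_mb, time_limit, command):
    """Build the text of a SLURM job file like the ones written above."""
    return "\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name={}".format(name),
        "#SBATCH --output=.out/{}.out".format(name),
        "#SBATCH --error=.out/{}.err".format(name),
        "#SBATCH --time={}".format(time_limit),
        "#SBATCH --mem={}".format(mem_mb),
        command,
    ]) + "\n"


script = make_sbatch_script("coverage_0.05", 64000, "2-00:00",
                            "python experiment2_atlas_variance.py ...")
print(script.splitlines()[1])  # #SBATCH --job-name=coverage_0.05
```

Writing the returned string to a file and calling `sbatch <file>` reproduces the loop body of the original script without the repeated `filey.writelines` calls.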
944a3caf4c7f22153433e23e82ef0a72b682b29a | 2e5c0e502216b59a4e348437d4291767e29666ea | /Flask-Web/flasky/Lib/site-packages/paramiko/util.py | 9397028989dc651aa1be34e973a8a07e8116fe8a | [
"Apache-2.0",
"GPL-1.0-or-later"
] | permissive | fengzse/Feng_Repository | 8881b64213eef94ca8b01652e5bc48e92a28e1f5 | db335441fa48440e72eefab6b5fd61103af20c5d | refs/heads/master | 2023-07-24T04:47:30.910625 | 2023-02-16T10:34:26 | 2023-02-16T10:34:26 | 245,704,594 | 1 | 0 | Apache-2.0 | 2023-07-15T00:54:20 | 2020-03-07T20:59:04 | Python | UTF-8 | Python | false | false | 8,640 | py | # Copyright (C) 2003-2007 Robey Pointer <robeypointer@gmail.com>
#
# This file is part of paramiko.
#
# Paramiko is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
"""
Useful functions used by the rest of paramiko.
"""
from __future__ import generators
import errno
import sys
import struct
import traceback
import threading
import logging
from paramiko.common import DEBUG, zero_byte, xffffffff, max_byte
from paramiko.py3compat import PY2, long, byte_chr, byte_ord, b
from paramiko.config import SSHConfig
def inflate_long(s, always_positive=False):
"""turns a normalized byte string into a long-int
(adapted from Crypto.Util.number)"""
out = long(0)
negative = 0
if not always_positive and (len(s) > 0) and (byte_ord(s[0]) >= 0x80):
negative = 1
if len(s) % 4:
filler = zero_byte
if negative:
filler = max_byte
# never convert this to ``s +=`` because this is a string, not a number
# noinspection PyAugmentAssignment
s = filler * (4 - len(s) % 4) + s
for i in range(0, len(s), 4):
out = (out << 32) + struct.unpack(">I", s[i : i + 4])[0]
if negative:
out -= long(1) << (8 * len(s))
return out
deflate_zero = zero_byte if PY2 else 0
deflate_ff = max_byte if PY2 else 0xff
def deflate_long(n, add_sign_padding=True):
"""turns a long-int into a normalized byte string
(adapted from Crypto.Util.number)"""
# after much testing, this algorithm was deemed to be the fastest
s = bytes()
n = long(n)
while (n != 0) and (n != -1):
s = struct.pack(">I", n & xffffffff) + s
n >>= 32
# strip off leading zeros, FFs
for i in enumerate(s):
if (n == 0) and (i[1] != deflate_zero):
break
if (n == -1) and (i[1] != deflate_ff):
break
else:
# degenerate case, n was either 0 or -1
i = (0,)
if n == 0:
s = zero_byte
else:
s = max_byte
s = s[i[0] :]
if add_sign_padding:
if (n == 0) and (byte_ord(s[0]) >= 0x80):
s = zero_byte + s
if (n == -1) and (byte_ord(s[0]) < 0x80):
s = max_byte + s
return s
def format_binary(data, prefix=""):
x = 0
out = []
while len(data) > x + 16:
out.append(format_binary_line(data[x : x + 16]))
x += 16
if x < len(data):
out.append(format_binary_line(data[x:]))
return [prefix + line for line in out]
def format_binary_line(data):
left = " ".join(["{:02X}".format(byte_ord(c)) for c in data])
right = "".join(
[".{:c}..".format(byte_ord(c))[(byte_ord(c) + 63) // 95] for c in data]
)
return "{:50s} {}".format(left, right)
def safe_string(s):
out = b""
for c in s:
i = byte_ord(c)
if 32 <= i <= 127:
out += byte_chr(i)
else:
out += b("%{:02X}".format(i))
return out
def bit_length(n):
try:
return n.bit_length()
except AttributeError:
norm = deflate_long(n, False)
hbyte = byte_ord(norm[0])
if hbyte == 0:
return 1
bitlen = len(norm) * 8
while not (hbyte & 0x80):
hbyte <<= 1
bitlen -= 1
return bitlen
def tb_strings():
return "".join(traceback.format_exception(*sys.exc_info())).split("\n")
def generate_key_bytes(hash_alg, salt, key, nbytes):
"""
Given a password, passphrase, or other human-source key, scramble it
through a secure hash into some keyworthy bytes. This specific algorithm
is used for encrypting/decrypting private key files.
:param function hash_alg: A function which creates a new hash object, such
as ``hashlib.sha256``.
:param salt: data to salt the hash with.
:type salt: byte string
:param str key: human-entered password or passphrase.
:param int nbytes: number of bytes to generate.
:return: Key data `str`
"""
keydata = bytes()
digest = bytes()
if len(salt) > 8:
salt = salt[:8]
while nbytes > 0:
hash_obj = hash_alg()
if len(digest) > 0:
hash_obj.update(digest)
hash_obj.update(b(key))
hash_obj.update(salt)
digest = hash_obj.digest()
size = min(nbytes, len(digest))
keydata += digest[:size]
nbytes -= size
return keydata
def load_host_keys(filename):
"""
Read a file of known SSH host keys, in the format used by openssh, and
return a compound dict of ``hostname -> keytype ->`` `PKey
<paramiko.pkey.PKey>`. The hostname may be an IP address or DNS name. The
keytype will be either ``"ssh-rsa"`` or ``"ssh-dss"``.
This type of file unfortunately doesn't exist on Windows, but on posix,
it will usually be stored in ``os.path.expanduser("~/.ssh/known_hosts")``.
Since 1.5.3, this is just a wrapper around `.HostKeys`.
:param str filename: name of the file to read host keys from
:return:
nested dict of `.PKey` objects, indexed by hostname and then keytype
"""
from paramiko.hostkeys import HostKeys
return HostKeys(filename)
def parse_ssh_config(file_obj):
"""
Provided only as a backward-compatible wrapper around `.SSHConfig`.
.. deprecated:: 2.7
Use `SSHConfig.from_file` instead.
"""
config = SSHConfig()
config.parse(file_obj)
return config
def lookup_ssh_host_config(hostname, config):
"""
Provided only as a backward-compatible wrapper around `.SSHConfig`.
"""
return config.lookup(hostname)
def mod_inverse(x, m):
    # it's crazy how small Python can make this function.
    u1, u2, u3 = 1, 0, m
    v1, v2, v3 = 0, 1, x

    while v3 > 0:
        q = u3 // v3
        u1, v1 = v1, u1 - v1 * q
        u2, v2 = v2, u2 - v2 * q
        u3, v3 = v3, u3 - v3 * q
    if u2 < 0:
        u2 += m
    return u2
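A quick sanity check of the extended-Euclid inverse above; the function is restated so the check is self-contained, and the numeric test values are arbitrary choices (7 and 1000000007 are primes, so the inverses exist):

```python
def mod_inverse(x, m):
    # Extended Euclidean algorithm: returns u2 with (x * u2) % m == 1,
    # assuming gcd(x, m) == 1.
    u1, u2, u3 = 1, 0, m
    v1, v2, v3 = 0, 1, x
    while v3 > 0:
        q = u3 // v3
        u1, v1 = v1, u1 - v1 * q
        u2, v2 = v2, u2 - v2 * q
        u3, v3 = v3, u3 - v3 * q
    if u2 < 0:
        u2 += m
    return u2


assert mod_inverse(3, 7) == 5  # 3 * 5 = 15, and 15 % 7 == 1
assert (65537 * mod_inverse(65537, 1000000007)) % 1000000007 == 1
```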
_g_thread_ids = {}
_g_thread_counter = 0
_g_thread_lock = threading.Lock()
def get_thread_id():
    global _g_thread_ids, _g_thread_counter, _g_thread_lock
    tid = id(threading.currentThread())
    try:
        return _g_thread_ids[tid]
    except KeyError:
        _g_thread_lock.acquire()
        try:
            _g_thread_counter += 1
            ret = _g_thread_ids[tid] = _g_thread_counter
        finally:
            _g_thread_lock.release()
        return ret
def log_to_file(filename, level=DEBUG):
    """send paramiko logs to a logfile,
    if they're not already going somewhere"""
    logger = logging.getLogger("paramiko")
    if len(logger.handlers) > 0:
        return
    logger.setLevel(level)
    f = open(filename, "a")
    handler = logging.StreamHandler(f)
    frm = "%(levelname)-.3s [%(asctime)s.%(msecs)03d] thr=%(_threadid)-3d"
    frm += " %(name)s: %(message)s"
    handler.setFormatter(logging.Formatter(frm, "%Y%m%d-%H:%M:%S"))
    logger.addHandler(handler)


# make only one filter object, so it doesn't get applied more than once
class PFilter(object):
    def filter(self, record):
        record._threadid = get_thread_id()
        return True


_pfilter = PFilter()


def get_logger(name):
    logger = logging.getLogger(name)
    logger.addFilter(_pfilter)
    return logger
def retry_on_signal(function):
    """Retries function until it doesn't raise an EINTR error"""
    while True:
        try:
            return function()
        except EnvironmentError as e:
            if e.errno != errno.EINTR:
                raise


def constant_time_bytes_eq(a, b):
    if len(a) != len(b):
        return False
    res = 0
    # noinspection PyUnresolvedReferences
    for i in (xrange if PY2 else range)(len(a)):  # noqa: F821
        res |= byte_ord(a[i]) ^ byte_ord(b[i])
    return res == 0


class ClosingContextManager(object):
    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()


def clamp_value(minimum, val, maximum):
    return max(minimum, min(val, maximum))
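On Python 3 only, the same XOR-accumulator comparison can be written without the `byte_ord`/`xrange` compatibility shims. This is an illustrative sketch (the name `ct_bytes_eq` is made up, not paramiko's public API):

```python
def ct_bytes_eq(a: bytes, b: bytes) -> bool:
    # Accumulate XOR of every byte pair so the loop always runs to the end,
    # rather than returning at the first mismatch (timing-safe comparison).
    if len(a) != len(b):
        return False
    res = 0
    for x, y in zip(a, b):
        res |= x ^ y
    return res == 0


assert ct_bytes_eq(b"abc", b"abc") is True
assert ct_bytes_eq(b"abc", b"abd") is False
assert ct_bytes_eq(b"abc", b"ab") is False
```

In new code the stdlib primitive `hmac.compare_digest` is the preferred way to get this behaviour.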
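A few boundary checks for the clamping helper above (the numeric values are arbitrary illustrations):

```python
def clamp_value(minimum, val, maximum):
    # Same one-liner as above: clip val into [minimum, maximum].
    return max(minimum, min(val, maximum))


assert clamp_value(0, 5, 3) == 3   # above the range: clipped to maximum
assert clamp_value(0, -2, 3) == 0  # below the range: clipped to minimum
assert clamp_value(0, 2, 3) == 2   # inside the range: unchanged
```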
| [ "fzhuse@gmail.com" ] | fzhuse@gmail.com |
a78e04a791909b7433738ec6335788795ce7133a | b501a5eae1018c1c26caa96793c6ee17865ebb2d | /Algorithms/itertools/itertools_repeat.py | 917451b04f214801b1302e14db89e40a6d8c75d5 | [] | no_license | jincurry/standard_Library_Learn | 12b02f9e86d31ca574bb6863aefc95d63cc558fc | 6c7197f12747456e0f1f3efd09667682a2d1a567 | refs/heads/master | 2022-10-26T07:28:36.545847 | 2018-05-04T12:54:50 | 2018-05-04T12:54:50 | 125,447,397 | 0 | 1 | null | 2022-10-02T17:21:50 | 2018-03-16T01:32:50 | Python | UTF-8 | Python | false | false | 80 | py |
from itertools import repeat
for i in repeat('over_and_over', 5):
    print(i)
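Beyond plain iteration, `repeat` also pairs naturally with `map` to supply a constant argument alongside a varying one; a small illustrative addition in the spirit of this learning-repo example:

```python
from itertools import repeat

# repeat(2) supplies the constant exponent for every base in range(5).
squares = list(map(pow, range(5), repeat(2)))
print(squares)  # [0, 1, 4, 9, 16]
```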
| [ "jintao422516@gmail.com" ] | jintao422516@gmail.com |
6ac0e3d7295d4a1fca52784c3e95b9cb48dfaf97 | 58d4cabb694ada0ee917a70a3a733757d9448bf2 | /cldfbench_apics.py | e8fdbe41d3ac1769775d071190e4f82b0834e230 | [
"CC-BY-4.0" ] | permissive | silva-carlos/apics | d74b27de4f46d02d8ccd3d2010ad32342b3fbfe3 | f687a830a62aa5e16ad1b5d477e92d6307c10e6f | refs/heads/master | 2023-04-10T06:34:07.839859 | 2020-07-15T13:08:22 | 2020-07-15T13:08:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 23,872 | py |
import json
import pathlib
import itertools
import collections
from clldutils.misc import lazyproperty
from cldfbench import Dataset as BaseDataset, CLDFSpec
from pycldf.sources import Source, Reference
from csvw.metadata import URITemplate
LEXIFIER_DESC = """\
To help the reader’s orientation, we have classified our languages into English-based, Dutch-based,
Portuguese-based, and so on. This classification is not entirely uncontroversial. On the one hand,
contact languages are characterized by strong influence from multiple languages, so saying, for
instance, that Haitian Creole is French-based is problematic, as it glosses over the very important
contribution of the African languages, especially to the grammar of the language. For this reason,
many authors have used expressions like “French-lexified”, “Dutch-lexified” for such languages,
which only refer to the role of the European languages as primary lexicon-providers. We agree that
such terms are more precise, but they are also more cumbersome, so we have mostly used the older
(and still much more widespread) manner of talking about groups of creoles and pidgins. We think
that it is sufficiently well-known that “English-based” (etc.) is not meant to imply anything other
than that the bulk of the language’s lexicon is derived from English.
On the other hand, the notion of being based on a language is problematic in the case of languages
with several lexifiers, especially Gurindji Kriol and Michif. These are shown as having two
lexifiers (or lexifier "other"). There are also a few other cases where it is not fully clear what
the primary lexifier is. Saramaccan’s vocabulary has a very large Portuguese component, but for
simplicity we classify it as English-based here. Papiamentu is often thought to be originally
(Afro-)Portuguese-based, but as it has long been influenced much more by Spanish, we classify it
as Spanish-based."""
NON_DEFAULT_LECT = """\
Sometimes the languages or varieties that the APiCS language experts described were not internally
homogeneous, but different subvarieties (or lects) had different value choices for some feature.
Such non-default lects are marked with a non-empty "Default_Lect_ID" column, relating the (sub)lect
with a default lect. Thus the default lect that was primarily described by the contributors need
not be representative for the entire language."""
CONFIDENCE_FIX = {
    'very certain': 'Very certain',
    'unspecified': 'Unspecified',
}
class Dataset(BaseDataset):
    dir = pathlib.Path(__file__).parent
    id = "apics"

    def cldf_specs(self):  # A dataset must declare all CLDF sets it creates.
        return CLDFSpec(module='StructureDataset', dir=self.cldf_dir)

    def cmd_download(self, args):
        pass
self.create_schema(args.writer.cldf)
pk2id = collections.defaultdict(dict)
checksums = set()
args.writer.cldf.add_sources(*list(self.itersources(pk2id)))
self.read('source', pkmap=pk2id)
refs = []
for row in self.raw_dir.read_csv('valuesetreference.csv', dicts=True):
if row['source_pk']:
refs.append((row['valueset_pk'], pk2id['source'][row['source_pk']], row['description']))
exrefs = []
for row in self.raw_dir.read_csv('sentencereference.csv', dicts=True):
if row['source_pk']:
exrefs.append((row['sentence_pk'], pk2id['source'][row['source_pk']], row['description']))
editors = {
name: ord for ord, name in enumerate([
'Susanne Maria Michaelis',
'Philippe Maurer',
'Martin Haspelmath',
'Magnus Huber',
], start=1)}
for row in self.read('contributor', pkmap=pk2id, key=lambda r: r['id']).values():
args.writer.objects['contributors.csv'].append({
'ID': row['id'],
'Name': row['name'],
'Address': row['address'],
'URL': row['url'],
'editor_ord': editors.get(row['name']),
})
cc = self.contributor_ids('contributioncontributor', pk2id, 'contribution_pk')
scc = self.contributor_ids('surveycontributor', pk2id, 'survey_pk')
# We put contribution data into the language table!
contribs = self.read('contribution', extended='apicscontribution')
gts = self.add_files(args.writer, contribs.values(), checksums)
contribs = {c['id']: c for c in contribs.values()}
surveys = {c['id']: c for c in self.read('survey').values()}
surveys['51'] = surveys['50']
identifier = self.read('identifier')
lang2id = collections.defaultdict(lambda: collections.defaultdict(list))
for row in self.read('languageidentifier').values():
id_ = identifier[row['identifier_pk']]
lang2id[row['language_pk']][id_['type']].append((id_['name'], id_['description']))
lrefs = {
lpk: set(pk2id['source'][r['source_pk']] for r in rows)
for lpk, rows in itertools.groupby(
self.read('languagesource', key=lambda d: d['language_pk']).values(),
lambda d: d['language_pk'])}
for row in self.read('contributionreference').values():
lrefs[row['contribution_pk']].add(pk2id['source'][row['source_pk']])
ldata = {}
for lpk, rows in itertools.groupby(
self.read('language_data', key=lambda d: (d['object_pk'], int(d['ord']))).values(),
lambda d: d['object_pk'],
):
ldata[lpk] = collections.OrderedDict([(d['key'], d['value']) for d in rows])
lmap = {}
for row in self.read(
'language',
extended='lect',
pkmap=pk2id,
key=lambda l: (bool(l['language_pk']), int(l['id'])),
).values():
lmap[row['pk']] = id = row['id']
contrib = contribs.get(id)
survey = surveys.get(row['id'])
assert survey or (int(id) == 21 or int(id) > 100)
iso_codes = set(i[0] for i in lang2id[row['pk']].get('iso639-3', []))
glottocodes = [i[0] for i in lang2id[row['pk']].get('glottolog', [])]
ethnologue_names = [i[0] for i in lang2id[row['pk']].get('ethnologue', [])]
lrefs_ = [Reference(r, None) for r in sorted(lrefs.get(row['pk'], []))]
gt_pdf, gt_audio = None, None
if contrib:
lrefs_.append(Reference(pk2id['source'][contrib['survey_reference_pk']], desc='survey'))
for f in contrib.get('files', []):
if f['id'].endswith('pdf'):
gt_pdf = gts[(f['jsondata']['objid'], f['jsondata']['original'])]
if f['id'].endswith('mp3'):
gt_audio = gts[(f['jsondata']['objid'], f['jsondata']['original'])]
args.writer.objects['LanguageTable'].append({
'ID': id,
'Name': row['name'],
'Description': contrib['markup_description'] if contrib else '',
'ISO639P3code': list(iso_codes)[0] if len(iso_codes) == 1 else None,
'Glottocode': glottocodes[0] if len(glottocodes) == 1 else None,
'Ethnologue_Name': ', '.join(ethnologue_names),
'Latitude': row['latitude'],
'Longitude': row['longitude'],
'Data_Contributor_ID': cc[contrib['pk']] if contrib else [],
'Survey_Contributor_ID': scc[survey['pk']] if survey else [],
'Survey_Title': '{}. In "The survey of pidgin and creole languages". {}'.format(
survey['name'], survey['description']) if survey else '',
'Source': [str(r) for r in lrefs_],
'Glossed_Text_PDF': gt_pdf,
'Glossed_Text_Audio': gt_audio,
'Metadata': json.dumps(ldata.get(row['pk'], {})),
'Region': row['region'],
'Default_Lect_ID': lmap.get(row['language_pk']),
'Lexifier': row['lexifier'],
})
args.writer.objects['LanguageTable'].sort(key=lambda d: d['ID'])
fcc = self.contributor_ids('featureauthor', pk2id, 'feature_pk')
for row in self.read(
'parameter',
extended='feature',
pkmap=pk2id,
key=lambda d: d['id']).values():
mgp = None
maps = self.add_files(args.writer, [row], checksums)
for f in row.get('files', []):
if f['id'].endswith('pdf'):
mgp = maps[(f['jsondata']['objid'], f['jsondata']['original'])]
args.writer.objects['ParameterTable'].append({
'ID': row['id'],
'Name': row['name'],
'Description': row['markup_description'] if row['id'] != '0' else LEXIFIER_DESC,
'Type': row['feature_type'],
'PHOIBLE_Segment_ID': row['jsondata'].get('phoible', {}).get('id'),
'PHOIBLE_Segment_Name': row['jsondata'].get('phoible', {}).get('segment'),
#multivalued,wals_id,wals_representation,representation,area
'Multivalued': row['multivalued'] == 't',
'WALS_ID': (row['wals_id'] + 'A') if row['wals_id'] else '',
'WALS_Representation': int(row['wals_representation']) if row['wals_representation'] else None,
'Area': row['area'],
'Contributor_ID': fcc.get(row['pk'], []),
'Map_Gall_Peters': mgp,
'metadata': json.dumps(collections.OrderedDict(
sorted(row['jsondata'].items(), key=lambda i: i[0]))),
})
for row in self.read(
'domainelement',
pkmap=pk2id,
key=lambda d: (int(d['id'].split('-')[0]), int(d['number']))).values():
args.writer.objects['CodeTable'].append({
'ID': row['id'],
'Parameter_ID': pk2id['parameter'][row['parameter_pk']],
'Name': row['name'],
'Description': row['description'],
'Number': int(row['number']),
'icon': row['jsondata']['icon'],
'color': row['jsondata']['color'],
'abbr': row['abbr'],
})
refs = {
dpid: [
str(Reference(
source=str(r[1]),
desc=r[2].replace('[', ')').replace(']', ')').replace(';', '.').strip()
if r[2] else None))
for r in refs_
]
for dpid, refs_ in itertools.groupby(refs, lambda r: r[0])}
vsdict = self.read('valueset', pkmap=pk2id)
examples = self.read('sentence', pkmap=pk2id)
mp3 = self.add_files(args.writer, examples.values(), checksums)
igts = {}
exrefs = {
dpid: [
str(Reference(
source=str(r[1]),
desc=r[2].replace('[', ')').replace(']', ')').replace(';', '.').strip()
if r[2] else None))
for r in refs_
]
for dpid, refs_ in itertools.groupby(exrefs, lambda r: r[0])}
for ex in examples.values():
audio, a, g = None, [], []
for f in ex.get('files', []):
if f['id'].endswith('mp3'):
audio = mp3[(f['jsondata']['objid'], f['jsondata']['original'])]
if ex['analyzed']:
a = ex['analyzed'].split()
if ex['gloss']:
g = ex['gloss'].split()
if len(a) != len(g):
a, g = [ex['analyzed']], [ex['gloss']]
igts[ex['pk']] = ex['id']
args.writer.objects['ExampleTable'].append({
'ID': ex['id'],
'Language_ID': pk2id['language'][ex['language_pk']],
'Primary_Text': ex['name'],
'Translated_Text': ex['description'],
'Analyzed_Word': a,
'Gloss': g,
'Source': exrefs.get(ex['pk'], []),
'Audio': audio,
'Type': ex['type'],
'Comment': ex['comment'],
'source_comment': ex['source'],
'original_script': ex['original_script'],
'markup_comment': ex['markup_comment'],
'markup_text': ex['markup_text'],
'markup_analyzed': ex['markup_analyzed'],
'markup_gloss': ex['markup_gloss'],
'sort': ex['jsondata'].get('sort'),
'alt_translation': ex['jsondata'].get('alt_translation'),
})
example_by_value = {
vpk: [r['sentence_pk'] for r in rows]
for vpk, rows in itertools.groupby(
self.read('valuesentence', key=lambda d: d['value_pk']).values(),
lambda d: d['value_pk'])}
for row in self.read('value').values():
vs = vsdict[row['valueset_pk']]
args.writer.objects['ValueTable'].append({
'ID': row['id'],
'Language_ID': pk2id['language'][vs['language_pk']],
'Parameter_ID': pk2id['parameter'][vs['parameter_pk']],
'Value': pk2id['domainelement'][row['domainelement_pk']].split('-')[1],
'Code_ID': pk2id['domainelement'][row['domainelement_pk']],
'Comment': vs['description'],
'Source': refs.get(vs['pk'], []),
'Example_ID': sorted(igts[epk] for epk in example_by_value.get(row['pk'], []) if epk in igts),
'Frequency': float(row['frequency']) if row['frequency'] else None,
'Confidence': CONFIDENCE_FIX.get(row['confidence'], row['confidence']),
'Metadata': json.dumps(collections.OrderedDict(
sorted(vs['jsondata'].items(), key=lambda i: i[0]))),
'source_comment': vs['source'],
})
args.writer.objects['ValueTable'].sort(
key=lambda d: (d['Language_ID'], d['Parameter_ID']))
for row in self.read('glossabbreviation').values():
args.writer.objects['glossabbreviations.csv'].append(
dict(ID=row['id'], Name=row['name']))
    def create_schema(self, cldf):
        cldf.add_table(
            'glossabbreviations.csv',
            {
                'name': 'ID',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#id',
            },
            {
                'name': 'Name',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#name',
            })
        cldf.add_table(
            'media.csv',
            {
                'name': 'ID',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#id',
                'valueUrl': 'https://cdstar.shh.mpg.de/bitstreams/{Name}',
            },
            {
                'name': 'Name',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#name',
            },
            {
                'name': 'Description',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#description',
            },
            'mimetype',
            {'name': 'size', 'datatype': 'integer'},
        )
        cldf.add_component(
            'ParameterTable',
            {
                'name': 'Contributor_ID',
                'separator': ' ',
                'dc:description': 'Authors of the Atlas chapter describing the feature',
            },
            'Chapter',  # valueUrl: https://apics-online.info/parameters/1.chapter.html
            {
                'name': 'Type',
                'dc:description': "Primary or structural feature, segment or sociolinguistic feature",
            },
            {
                'name': 'PHOIBLE_Segment_ID',
                'valueUrl': 'https://phoible.org/parameters/{PHOIBLE_Segment_ID}',
            },
            'PHOIBLE_Segment_Name',
            {'name': 'Multivalued', 'datatype': 'boolean'},
            {
                'name': 'WALS_ID',
                'dc:description': 'ID of the corresponding WALS feature',
            },
            {'name': 'WALS_Representation', 'datatype': 'integer'},
            'Area',
            'Map_Gall_Peters',
            {'name': 'metadata', 'dc:format': 'application/json'},
        )
        cldf['ParameterTable', 'id'].valueUrl = URITemplate(
            'https://apics-online.info/parameters/{id}')
        cldf.add_component(
            'CodeTable',
            {'name': 'Number', 'datatype': 'integer'},
            'icon',
            'color',
            'abbr',
        )
        cldf.add_component(
            'LanguageTable',
            {
                'name': 'Description',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#description',
                'dc:format': 'text/html',
            },
            {
                'name': 'Data_Contributor_ID',
                'separator': ' ',
                'dc:description': 'Authors contributing the language structure dataset',
            },
            {
                'name': 'Survey_Contributor_ID',
                'separator': ' ',
                'dc:description': 'Authors of the language survey',
            },
            'Survey_Title',
            {
                'name': 'Source',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#source',
                'separator': ';',
            },
            'Ethnologue_Name',
            'Glossed_Text_PDF',
            'Glossed_Text_Audio',
            {
                'name': 'Metadata',
                'dc:format': 'text/json',
            },
            'Region',
            {
                'name': 'Default_Lect_ID',
                'dc:description': NON_DEFAULT_LECT,
            },
            {
                'name': 'Lexifier',
                'dc:description': LEXIFIER_DESC,
            },
        )
        cldf['LanguageTable', 'id'].valueUrl = URITemplate(
            'https://apics-online.info/contributions/{id}')
        cldf.add_component(
            'ExampleTable',
            {
                'name': 'Source',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#source',
                'separator': ';',
            },
            'Audio',
            {'name': 'Type', 'propertyUrl': 'dc:type'},
            {
                'name': 'markup_text',
                'dc:format': 'text/html',
            },
            {
                'name': 'markup_analyzed',
                'dc:format': 'text/html',
            },
            {
                'name': 'markup_gloss',
                'dc:format': 'text/html',
            },
            {
                'name': 'markup_comment',
                'dc:format': 'text/html',
            },
            'source_comment',
            'original_script',
            'sort',
            'alt_translation',
        )
        t = cldf.add_table(
            'contributors.csv',
            {
                'name': 'ID',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#id',
            },
            {
                'name': 'Name',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#name',
            },
            'Address',
            'URL',
            {
                'name': 'editor_ord',
                'datatype': 'integer',
            }
        )
        t.common_props['dc:conformsTo'] = None
        cldf.add_columns(
            'ValueTable',
            {
                'name': 'Example_ID',
                'separator': ' ',
                'propertyUrl': 'http://cldf.clld.org/v1.0/terms.rdf#exampleReference',
            },
            {
                'name': 'Frequency',
                "datatype": 'number',
            },
            'Confidence',
            {
                'name': 'Metadata',
                'dc:format': 'text/json',
            },
            'source_comment',
        )
        cldf.add_foreign_key('ParameterTable', 'Contributor_ID', 'contributors.csv', 'ID')
        cldf.add_foreign_key('ParameterTable', 'Map_Gall_Peters', 'media.csv', 'ID')
        cldf.add_foreign_key('LanguageTable', 'Glossed_Text_PDF', 'media.csv', 'ID')
        cldf.add_foreign_key('LanguageTable', 'Glossed_Text_Audio', 'media.csv', 'ID')
        cldf.add_foreign_key('ExampleTable', 'Audio', 'media.csv', 'ID')
    def read(self, core, extended=False, pkmap=None, key=None):
        if not key:
            key = lambda d: int(d['pk'])
        res = collections.OrderedDict()
        for row in self.raw_dir.read_csv('{0}.csv'.format(core), dicts=True):
            row['jsondata'] = json.loads(row.get('jsondata') or '{}')
            res[row['pk']] = row
            if pkmap is not None:
                pkmap[core][row['pk']] = row['id']
        if extended:
            for row in self.raw_dir.read_csv('{0}.csv'.format(extended), dicts=True):
                res[row['pk']].update(row)
        res = collections.OrderedDict(sorted(res.items(), key=lambda item: key(item[1])))
        files = self.raw_dir / '{}_files.csv'.format(core)
        if files.exists():
            for opk, rows in itertools.groupby(
                sorted(self.raw_dir.read_csv(files.name, dicts=True), key=lambda d: d['object_pk']),
                lambda d: d['object_pk'],
            ):
                res[opk]['files'] = []
                for row in rows:
                    row['jsondata'] = json.loads(row.get('jsondata') or '{}')
                    res[opk]['files'].append(row)
        return res
    def add_files(self, writer, objs, checksums):
        res = {}
        for c in objs:
            for f in c.get('files', []):
                md = f['jsondata']
                id_ = self.cdstar[md['objid'], md['original']][0]
                if id_ not in checksums:
                    checksums.add(id_)
                    writer.objects['media.csv'].append({
                        # maybe base64 encode? or use md5 hash from catalog?
                        'ID': id_,
                        'Name': '{}/{}'.format(md['objid'], md['original']),
                        'Description': self.cdstar[md['objid'], md['original']][1],
                        'mimetype': md['mimetype'],
                        'size': md['size'],
                    })
                res[(md['objid'], md['original'])] = id_
        return res
    def itersources(self, pkmap):
        for row in self.raw_dir.read_csv('source.csv', dicts=True):
            jsondata = json.loads(row.pop('jsondata', '{}') or '{}')
            pkmap['source'][row.pop('pk')] = row['id']
            row['title'] = row.pop('description')
            row['key'] = row.pop('name')
            if (not row['url']) and jsondata.get('gbs', {}).get('id'):
                row['url'] = 'https://books.google.de/books?id=' + jsondata['gbs']['id']
            yield Source(row.pop('bibtex_type'), row.pop('id'), **row)
    @lazyproperty
    def cdstar(self):
        cdstar = {}
        for eid, md in self.raw_dir.read_json('cdstar.json').items():
            for bs in md['bitstreams']:
                cdstar[eid, bs['bitstreamid']] = (bs['checksum'], md['metadata'].get('description'))
        return cdstar

    def contributor_ids(self, name, pk2id, fkcol):
        return {
            fid: [pk2id['contributor'][r['contributor_pk']] for r in rows]
            for fid, rows in itertools.groupby(
                self.read(
                    name,
                    key=lambda d: (d[fkcol], d.get('primary') == 'f', int(d['ord']))
                ).values(),
                lambda r: r[fkcol])
        }
| [ "xrotwang@googlemail.com" ] | xrotwang@googlemail.com |
0c0f12cec808a46fc7c2dbc0073bbc8c45ac9ffd | 62226afe584a0d7f8d52fc38ca416b19ffafcb7a | /hwtLib/examples/axi/simpleAxiRegs_test.py | 6fb518c70de16523bfc4cfeffee8ecc47957fc42 | [
"MIT" ] | permissive | Nic30/hwtLib | d08a08bdd0bf764971c4aa319ff03d4df8778395 | 4c1d54c7b15929032ad2ba984bf48b45f3549c49 | refs/heads/master | 2023-05-25T16:57:25.232026 | 2023-05-12T20:39:01 | 2023-05-12T20:39:01 | 63,018,738 | 36 | 8 | MIT | 2021-04-06T17:56:14 | 2016-07-10T21:13:00 | Python | UTF-8 | Python | false | false | 1,694 | py |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import unittest
from hwt.simulator.simTestCase import SimTestCase
from hwtLib.examples.axi.simpleAxiRegs import SimpleAxiRegs
from pyMathBitPrecise.bit_utils import mask
from hwtSimApi.constants import CLK_PERIOD
allMask = mask(32 // 8)
class SimpleAxiRegsTC(SimTestCase):

    @classmethod
    def setUpClass(cls):
        cls.u = SimpleAxiRegs()
        cls.compileSim(cls.u)

    def test_nop(self):
        u = self.u
        self.runSim(25 * CLK_PERIOD)
        self.assertEmpty(u.axi._ag.r.data)
        self.assertEmpty(u.axi._ag.b.data)

    def test_falseWrite(self):
        u = self.u
        axi = u.axi._ag
        axi.w.data += [(11, allMask), (37, allMask)]
        self.runSim(25 * CLK_PERIOD)
        self.assertEqual(len(axi.w.data), 2 - 1)
        self.assertEmpty(u.axi._ag.r.data)
        self.assertEmpty(u.axi._ag.b.data)

    def test_write(self):
        u = self.u
        axi = u.axi._ag
        axi.aw.data += [(0, 0), (4, 0)]
        axi.w.data += [(11, allMask), (37, allMask)]
        self.runSim(25 * CLK_PERIOD)
        self.assertEmpty(axi.aw.data)
        self.assertEmpty(axi.w.data)
        self.assertEmpty(u.axi._ag.r.data)
        self.assertEqual(len(u.axi._ag.b.data), 2)
        model = self.rtl_simulator.model.io
        self.assertValSequenceEqual(
            [model.reg0.val, model.reg1.val],
            [11, 37])


if __name__ == "__main__":
    testLoader = unittest.TestLoader()
    # suite = unittest.TestSuite([SimpleAxiRegsTC("test_write")])
    suite = testLoader.loadTestsFromTestCase(SimpleAxiRegsTC)
    runner = unittest.TextTestRunner(verbosity=3)
    runner.run(suite)
| [ "nic30@seznam.cz" ] | nic30@seznam.cz |
ef5e84556bcd9ad669b28f8863d602498d18cf90 | 3717abc9152c28d8e6cd1c93f61d6935557a4baf | /tests/test_search.py | b9dc8562b61794bf87e77156a6727db22d50f4c5 | [
"MIT" ] | permissive | samuelcolvin/em2 | a81f00ecb35659adfa9c2b059aa209588f1faba5 | a587eaa80c09a2b44d9c221d09a563aad5b05d78 | refs/heads/master | 2023-08-31T07:07:58.450857 | 2019-12-12T15:59:17 | 2019-12-12T15:59:17 | 167,863,820 | 5 | 1 | MIT | 2021-03-30T13:13:10 | 2019-01-27T22:01:52 | Python | UTF-8 | Python | false | false | 13,580 | py |
import json
import pytest
from pytest_toolbox.comparison import AnyInt, CloseToNow
from em2.core import Action, ActionTypes, File
from em2.search import search
from .conftest import Factory
async def test_create_conv(cli, factory: Factory, db_conn):
    user = await factory.create_user()

    r = await cli.post_json(
        factory.url('ui:create'), {'subject': 'Discussion of Apples', 'message': 'I prefer red'}, status=201
    )
    obj = await r.json()
    conv_key = obj['key']
    conv_id = await db_conn.fetchval('select id from conversations where key=$1', conv_key)

    assert 1 == await db_conn.fetchval('select count(*) from search')
    s = dict(await db_conn.fetchrow('select * from search'))
    assert s == {
        'id': AnyInt(),
        'conv': conv_id,
        'action': 1,
        'freeze_action': 0,
        'ts': None,
        'user_ids': [user.id],
        'creator_email': user.email,
        'vector': "'appl':3A 'discuss':1A 'example.com':4B 'prefer':7 'red':8 'testing-1@example.com':5B",
    }


async def test_publish_conv(factory: Factory, cli, db_conn):
    user = await factory.create_user()
    participants = [{'email': 'anne@other.com'}, {'email': 'ben@other.com'}, {'email': 'charlie@other.com'}]
    conv = await factory.create_conv(subject='apple pie', message='eggs, flour and raisins', participants=participants)
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert conv.id == await db_conn.fetchval('select conv from search')
    assert [user.id] == await db_conn.fetchval('select user_ids from search')

    await cli.post_json(factory.url('ui:publish', conv=conv.key), {'publish': True})
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert conv.id == await db_conn.fetchval('select conv from search')
    user_ids = await db_conn.fetchval(
        'select array_agg(id) from (select id from users where email like $1) as t', '%@other.com'
    )
    assert len(user_ids) == 3
    assert {user.id, *user_ids} == set(await db_conn.fetchval('select user_ids from search'))


async def test_add_prt_add_msg(factory: Factory, db_conn):
    user = await factory.create_user()
    conv = await factory.create_conv()
    assert 1 == await db_conn.fetchval('select count(*) from search')

    await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.msg_add, body='apple **pie**'))
    assert 4 == await db_conn.fetchval('select count(*) from actions')
    assert 1 == await db_conn.fetchval('select count(*) from search')
    search = dict(await db_conn.fetchrow('select conv, action, user_ids, creator_email, vector from search'))
    assert search == {
        'conv': conv.id,
        'action': 4,
        'user_ids': [user.id],
        'creator_email': user.email,
        'vector': "'appl':7 'example.com':3B 'messag':6 'pie':8 'subject':2A 'test':1A,5 'testing-1@example.com':4B",
    }

    user2 = await factory.create_user()
    await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant=user2.email))
    assert 5 == await db_conn.fetchval('select count(*) from actions')
    assert 1 == await db_conn.fetchval('select count(*) from search')
    search = dict(await db_conn.fetchrow('select conv, action, user_ids, creator_email, vector from search'))
    assert search == {
        'conv': conv.id,
        'action': 5,
        'user_ids': [user.id, user2.id],
        'creator_email': user.email,
        'vector': (
            "'appl':7 'example.com':3B,10B 'messag':6 'pie':8 'subject':2A "
            "'test':1A,5 'testing-1@example.com':4B 'testing-2@example.com':9B"
        ),
    }


async def test_add_remove_prt(factory: Factory, db_conn):
    user = await factory.create_user(email='testing@example.com')
    conv = await factory.create_conv()
    email2 = 'different@foobar.com'
    assert [4] == await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant=email2))
    user2_id = await db_conn.fetchval('select id from users where email=$1', email2)
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert 4 == await db_conn.fetchval('select action from search')
    assert [user.id, user2_id] == await db_conn.fetchval('select user_ids from search')

    email3 = 'three@foobar.com'
    assert [5] == await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant=email3))
    user3_id = await db_conn.fetchval('select id from users where email=$1', email3)
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert 5 == await db_conn.fetchval('select action from search')
    assert [user.id, user2_id, user3_id] == await db_conn.fetchval('select user_ids from search')

    await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_remove, participant=email2, follows=4))
    assert 2 == await db_conn.fetchval('select count(*) from search')
    assert 5, 5 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action!=0')
    assert [user2_id] == await db_conn.fetchval('select user_ids from search where freeze_action!=0')
    assert 5, 0 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action=0')
    assert [user.id, user3_id] == await db_conn.fetchval('select user_ids from search where freeze_action=0')

    await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_remove, participant=email3, follows=5))
    assert 2 == await db_conn.fetchval('select count(*) from search')
    assert 5, 5 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action!=0')
    assert [user2_id, user3_id] == await db_conn.fetchval('select user_ids from search where freeze_action!=0')
    assert 5, 0 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action=0')
    assert [user.id] == await db_conn.fetchval('select user_ids from search where freeze_action=0')

    assert [8] == await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.msg_add, body='spagetti'))
    assert 2 == await db_conn.fetchval('select count(*) from search')
    assert 5, 5 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action!=0')
    assert 8, 0 == await db_conn.fetchrow('select action, freeze_action from search where freeze_action=0')
    assert 'spagetti' not in await db_conn.fetchval('select vector from search where freeze_action!=0')
    assert 'spagetti' in await db_conn.fetchval('select vector from search where freeze_action=0')


async def test_readd_prt(factory: Factory, db_conn):
    user = await factory.create_user(email='testing@example.com')
    conv = await factory.create_conv()
    email2 = 'different@foobar.com'
    assert [4] == await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant=email2))
    user2_id = await db_conn.fetchval('select id from users where email=$1', email2)
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert [user.id, user2_id] == await db_conn.fetchval('select user_ids from search where freeze_action=0')

    await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_remove, participant=email2, follows=4))
    assert 2 == await db_conn.fetchval('select count(*) from search')
    assert [user2_id] == await db_conn.fetchval('select user_ids from search where freeze_action=4')
    assert [user.id] == await db_conn.fetchval('select user_ids from search where freeze_action=0')

    assert [6] == await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant=email2))
    assert 1 == await db_conn.fetchval('select count(*) from search')
    assert [user.id, user2_id] == await db_conn.fetchval('select user_ids from search where freeze_action=0')


async def test_search_query(factory: Factory, conns):
    user = await factory.create_user()
    conv = await factory.create_conv(subject='apple pie', message='eggs, flour and raisins')
    await conns.main.execute('update participants set seen=False')
    assert 1 == await conns.main.fetchval('select count(*) from search')

    r1 = json.loads(await search(conns, user.id, 'apple'))
    assert r1 == {
        'conversations': [
            {
                'key': conv.key,
                'updated_ts': CloseToNow(),
                'publish_ts': None,
                'seen': False,
                'details': {
                    'act': 'conv:create',
                    'sub': 'apple pie',
                    'email': user.email,
                    'creator': user.email,
                    'prev': 'eggs, flour and raisins',
                    'prts': 1,
                    'msgs': 1,
                },
            }
        ]
    }
    # this is the other way of building queries, so check the output is the same
    r2 = json.loads(await search(conns, user.id, 'includes:' + user.email))
    assert r2 == r1

    r = json.loads(await search(conns, user.id, 'banana'))
    assert r == {'conversations': []}

    assert len(json.loads(await search(conns, user.id, conv.key[5:12]))['conversations']) == 1
    assert len(json.loads(await search(conns, user.id, conv.key[5:12] + 'aaa'))['conversations']) == 0
    assert len(json.loads(await search(conns, user.id, conv.key[5:8]))['conversations']) == 0
@pytest.mark.parametrize(
'query,count',
[
('', 0),
('"flour and raisins"', 1),
('"eggs and raisins"', 0),
('subject:apple', 1),
('subject:flour', 0),
('from:testing@example.com', 1),
('from:@example.com', 1),
('testing@example.com', 1),
('includes:testing@example.com', 1),
('has:testing@example.com', 1),
('includes:@example.com', 1),
('to:testing@example.com', 0),
('to:@example.com', 0),
('include:recipient@foobar.com', 1),
('include:@foobar.com', 1),
('to:recipient@foobar.com', 1),
('to:@foobar.com', 1),
('recipient@foobar.com', 1),
('@foobar.com', 1),
('from:recipient@foobar.com', 0),
('includes:"testing@example.com recipient@foobar.com"', 1),
('files:*', 0),
('has:files', 0),
('rais', 1), # prefix search
('rai', 1), # prefix search
('ra', 0),
('"rais"', 0),
('rais!', 0),
('rais!', 0),
('\x1e' * 5, 0),
],
)
async def test_search_query_participants(factory: Factory, conns, query, count):
user = await factory.create_user(email='testing@example.com')
conv = await factory.create_conv(subject='apple pie', message='eggs, flour and raisins')
await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.prt_add, participant='recipient@foobar.com'))
assert len(json.loads(await search(conns, user.id, query))['conversations']) == count, repr(query)
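The `field:value` and quoted-phrase query forms exercised above can be tokenized with the standard library alone; a minimal sketch (the `parse_query` helper and its token shape are illustrative, not the application's actual parser):

```python
import shlex

def parse_query(query):
    """Split a search query into (field, value) pairs.

    Bare terms get the field None; shlex keeps quoted phrases intact,
    so 'includes:"a b"' yields a single token with field 'includes'.
    """
    tokens = []
    for raw in shlex.split(query):
        field, sep, value = raw.partition(':')
        if sep and field.isalpha():
            tokens.append((field, value))
        else:
            tokens.append((None, raw))
    return tokens

print(parse_query('from:@example.com "flour and raisins" rais'))
# [('from', '@example.com'), (None, 'flour and raisins'), (None, 'rais')]
```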
@pytest.mark.parametrize(
'query,count',
[
('files:fat', 1),
('files:fat', 1),
('files:.png', 1),
('files:"cat png"', 1),
('files:rat', 1),
('files:*', 1),
('has:files', 1),
('files:apple', 0),
],
)
async def test_search_query_files(factory: Factory, conns, query, count):
user = await factory.create_user(email='testing@example.com')
conv = await factory.create_conv(subject='apple pie', message='eggs, flour and raisins')
files = [
File(hash='x', name='fat cat.txt', content_id='a', content_disp='inline', content_type='text/plain', size=10),
File(hash='x', name='rat.png', content_id='b', content_disp='inline', content_type='image/png', size=100),
]
await factory.act(conv.id, Action(actor_id=user.id, act=ActionTypes.msg_add, body='apple **pie**', files=files))
assert len(json.loads(await search(conns, user.id, query))['conversations']) == count
async def test_http_search(factory: Factory, cli):
user = await factory.create_user()
conv = await factory.create_conv(subject='spam sandwich', message='processed meat and bread', publish=True)
obj = await cli.get_json(factory.url('ui:search', query={'query': 'meat'}))
assert obj == {
'conversations': [
{
'key': conv.key,
'updated_ts': CloseToNow(),
'publish_ts': CloseToNow(),
'seen': True,
'details': {
'act': 'conv:publish',
'sub': 'spam sandwich',
'email': user.email,
'creator': user.email,
'prev': 'processed meat and bread',
'prts': 1,
'msgs': 1,
},
}
]
}
obj = await cli.get_json(factory.url('ui:search'))
assert obj == {'conversations': []}
obj = await cli.get_json(factory.url('ui:search', query={'query': 'bacon'}))
assert obj == {'conversations': []}
obj = await cli.get_json(factory.url('ui:search', query={'query': 'x' * 200}))
assert obj == {'conversations': []}
async def test_search_ranking(factory: Factory, conns):
user = await factory.create_user()
await factory.create_conv(subject='fish pie', message='could include apples')
await factory.create_conv(subject='apple pie', message='eggs, flour and raisins')
assert 2 == await conns.main.fetchval('select count(*) from search')
results = [c['details']['sub'] for c in json.loads(await search(conns, user.id, 'apple'))['conversations']]
assert results == ['apple pie', 'fish pie']
| [
"noreply@github.com"
] | samuelcolvin.noreply@github.com |
00ee9af8c74827a02c584f58a3dcb8626b56d1b9 | 87fe0e123ac10a419e01283d742119c7772c8417 | /test/datetimeutil.py | d85545774af5a17c47d40d99d3b559d8b9d07213 | [] | no_license | Archanciel/seqdiagbuilder | 53ede17a5b5767c88e0cc3b8e321718fb0d47ad2 | 8509dc1eac1bf3c9c32362dcd7320990ceecede7 | refs/heads/master | 2020-03-23T02:10:42.933805 | 2019-10-26T16:59:48 | 2019-10-26T16:59:48 | 140,960,687 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 16,304 | py | import arrow
import re
class DateTimeUtil:
SECONDS_PER_DAY = 86400
SHORT_DATE_FORMAT_KEY = 'SHORT_DATE_FORMAT'
LONG_DATE_FORMAT_KEY = 'LONG_DATE_FORMAT'
TIME_FORMAT_KEY = 'TIME_FORMAT'
@staticmethod
def timeStampToArrowLocalDate(timeStamp, timeZoneStr):
'''
        Given a UTC/GMT timezone-independent timestamp and a timezone string specification,
returns a localized arrow object.
:param timeStamp: UTC/GMT timezone independent timestamp
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:return: arrow localized date time object
'''
return arrow.Arrow.utcfromtimestamp(timeStamp).to(timeZoneStr)
@staticmethod
def dateTimeStringToTimeStamp(dateTimeStr, timeZoneStr, dateTimeFormatArrow):
'''
        Given a datetime string whose format conforms to the passed arrow format and
        a timezone string specification, return a UTC/GMT timezone-independent timestamp.
:param dateTimeStr:
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:param dateTimeFormatArrow: example YYYY/MM/DD HH:mm:ss --> 2017/09/12 15:32:21
:return: int UTC/GMT timezone independent timestamp
'''
arrowObj = arrow.get(dateTimeStr, dateTimeFormatArrow).replace(tzinfo=timeZoneStr)
        return arrowObj.timestamp  # timestamp is independent of the timezone !
@staticmethod
def dateTimeStringToArrowLocalDate(dateTimeStr, timeZoneStr, dateTimeFormatArrow):
'''
        Given a datetime string whose format conforms to the passed arrow format and
        a timezone string specification, return an arrow localized date time object.
:param dateTimeStr:
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:param dateTimeFormatArrow: example YYYY/MM/DD HH:mm:ss --> 2017/09/12 15:32:21
:return: arrow localized date time object
'''
return arrow.get(dateTimeStr, dateTimeFormatArrow).replace(tzinfo=timeZoneStr)
@staticmethod
def dateTimeComponentsToArrowLocalDate(dayInt, monthInt, yearInt, hourInt, minuteInt, secondInt, timeZoneStr):
'''
Given the passed date/time components and a timezone string specification,
return an arrow localized date time object.
:param dayInt:
:param monthInt:
:param yearInt:
:param hourInt:
:param minuteInt:
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:return: arrow localized date time object.
'''
return arrow.get(yearInt, monthInt, dayInt, hourInt, minuteInt, secondInt).replace(tzinfo=timeZoneStr)
@staticmethod
def dateTimeComponentsToTimeStamp(day, month, year, hour, minute, second, timeZoneStr):
'''
Given the passed date/time components and a timezone string specification,
return a UTC/GMT timezone independent timestamp.
:param day:
:param month:
:param year:
:param hour:
:param minute:
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:return: UTC/GMT timezone independent timestamp.
'''
return arrow.get(year, month, day, hour, minute, second).replace(tzinfo=timeZoneStr).timestamp
@staticmethod
def convertToTimeZone(dateTimeArrowObject, timeZoneStr):
'''
Return the passed dateTimeArrowObject converted to the passed timeZoneStr.
The passed dateTimeArrowObject remains unchanged !
:param dateTimeArrowObject: arrow localized date time object.
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:return: arrow date time object localized to passed timeZoneStr
'''
return dateTimeArrowObject.to(timeZoneStr)
@staticmethod
def isDateOlderThan(dateTimeArrowObject, dayNumberInt):
'''
Return true if the passed dateTimeArrowObject converted to the UTC time zone
        is more than dayNumberInt days before UTC now.
:param dateTimeArrowObject: arrow localized date time object.
:param dayNumberInt: int day number
:return: True or False
'''
return ((arrow.utcnow().timestamp - dateTimeArrowObject.to('UTC').timestamp) / dayNumberInt) > DateTimeUtil.SECONDS_PER_DAY
@staticmethod
def isAfter(dateArrowObjectAfter, dateArrowObjectBefore):
'''
Return True if dateArrowObjectAfter is after dateArrowObjectBefore, False if dateArrowObjectAfter
is on or is before dateArrowObjectBefore
:param dateArrowObjectAfter:
:param dateArrowObjectBefore:
:return: True or False
'''
return dateArrowObjectAfter.timestamp > dateArrowObjectBefore.timestamp
@staticmethod
def isTimeStampOlderThan(timeStamp, dayNumberInt):
'''
        Return true if the passed time stamp is more than dayNumberInt days before UTC now.
        :param timeStamp: int UTC/GMT timezone independent timestamp
:param dayNumberInt: int day number
:return: True or False
'''
return ((arrow.utcnow().timestamp - timeStamp) / dayNumberInt) > DateTimeUtil.SECONDS_PER_DAY
@staticmethod
def utcNowTimeStamp():
'''
Return the current UTC time stamp
        :return: current timezone-independent (UTC) time stamp
'''
return arrow.utcnow().timestamp
@staticmethod
def localNow(timeZoneStr):
'''
Return a localised current dateTimeArrowObject
:param timeZoneStr: like 'Europe/Zurich' or 'US/Pacific'
:return: current arrow localized date time object
'''
return arrow.now(timeZoneStr)
@staticmethod
def shiftTimeStampToEndOfDay(inDayTimeStamp):
'''
        Return the time stamp of the end (23:59:59) of the day containing the passed inDayTimeStamp
        :param inDayTimeStamp:
        :return: time stamp of the day containing inDayTimeStamp, at 23:59:59 precisely
'''
endOfDayDateTimeArrowObject = arrow.Arrow.utcfromtimestamp(inDayTimeStamp).replace(hour=23, minute=59, second=59)
return endOfDayDateTimeArrowObject.timestamp
@staticmethod
def getFormattedDateTimeComponents(arrowDateTimeObj, dateTimeformat):
'''
Returns 3 lists, one containing the date/time components symbols in the order they are used in the
passed dateTimeFormat, the second containing 2 elements: the date and the time separator, and the
        third containing the corresponding formatted values.
Ex: for dateTimeformat = 'DD/MM/YY HH:mm' and 24/1/2018 4:41, returns
['DD', 'MM', 'YY', 'HH', 'mm']
['/', ':'] and
['24', '01', '18', '04', '41']
for dateTimeformat = 'YYYY-MM-DD HH.mm' and 24-1-2018 4.41, returns
['YYYY', 'MM', 'DD', 'HH', 'mm']
['-', '.'] and
['2018', '01', '24', '04', '41']
:param arrowDateTimeObj:
:param dateTimeformat: in the format used by Arrow dates
:return: dateTimeComponentSymbolList, separatorsList and dateTimeComponentValueList
'''
dateTimeComponentSymbolList, separatorsList = DateTimeUtil._extractDateTimeFormatComponentFromDateTimeFormat(
dateTimeformat)
dateTimeComponentValueList = []
for dateTimeSymbol in dateTimeComponentSymbolList:
dateTimeComponentValueList.append(arrowDateTimeObj.format(dateTimeSymbol))
return dateTimeComponentSymbolList, separatorsList, dateTimeComponentValueList
@staticmethod
def _extractDateTimeFormatComponentFromDateTimeFormat(dateTimeformat):
'''
Returns 2 lists, the first containing the date/time components symbols in the order
they are used in the passed dateTimeFormat, the second containing 2 elements:
the date and the time separator.
Ex: for dateTimeformat = 'DD/MM/YY HH:mm', returns
['DD', 'MM', 'YY', 'HH', 'mm']
['/', ':'] and
for dateTimeformat = 'YYYY-MM-DD HH.mm', returns
['YYYY', 'MM', 'DD', 'HH', 'mm']
['-', '.'] and
:param dateTimeformat: in the format used by Arrow dates
:return: dateTimeComponentSymbolList, separatorsList
'''
# find the separators in 'DD/MM/YY HH:mm' - ['/', '/', ':'] or 'YYYY.MM.DD HH.mm' - ['.', '.', '.']
dateTimeSeparators = re.findall(r"[^\w^ ]", dateTimeformat)
# build the split pattern '/| |:' or '\.| |\.'
# if a separator is a dot, must be escaped !
if dateTimeSeparators[0] == '.':
dateTimeSeparators[0] = r'\.'
if dateTimeSeparators[-1] == '.':
dateTimeSeparators[-1] = r'\.'
separatorsList = [dateTimeSeparators[0], dateTimeSeparators[-1]]
dateTimeComponentsSplitPattern = '{}| |{}'.format(dateTimeSeparators[0], dateTimeSeparators[-1])
dateTimeComponentSymbolList = re.split(dateTimeComponentsSplitPattern, dateTimeformat)
return dateTimeComponentSymbolList, separatorsList
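The splitting technique used above — find the non-word separator characters with a regex, then split on them — can be shown in a standalone sketch (the function name here is illustrative, not part of DateTimeUtil):

```python
import re

def split_date_time_format(fmt):
    # collect every character that is neither a word character nor a space,
    # e.g. 'DD/MM/YY HH:mm' -> ['/', '/', ':']
    seps = re.findall(r"[^\w ]", fmt)
    date_sep, time_sep = seps[0], seps[-1]
    # split on the date separator, the space, and the time separator;
    # re.escape handles the dot case that the original code escapes by hand
    pattern = "{}| |{}".format(re.escape(date_sep), re.escape(time_sep))
    components = re.split(pattern, fmt)
    return components, [date_sep, time_sep]

print(split_date_time_format('DD/MM/YY HH:mm'))
# (['DD', 'MM', 'YY', 'HH', 'mm'], ['/', ':'])
```

Using `re.escape` on the separators avoids the manual `'\.'` special-casing done in `_extractDateTimeFormatComponentFromDateTimeFormat`.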
@staticmethod
def getDateAndTimeFormatDictionary(dateTimeformat):
'''
        Returns a dictionary containing the date and time formats corresponding to the
        passed Arrow dateTimeformat.
        Ex: for dateTimeformat = 'DD/MM/YY HH:mm', returns
            {'LONG_DATE_FORMAT': 'DD/MM/YY',
             'SHORT_DATE_FORMAT': 'DD/MM',
             'TIME_FORMAT': 'HH:mm'}
        for dateTimeformat = 'YYYY-MM-DD HH.mm', returns
            {'LONG_DATE_FORMAT': 'YYYY-MM-DD',
             'SHORT_DATE_FORMAT': 'MM-DD',
             'TIME_FORMAT': 'HH.mm'}
:param dateTimeformat: in the format used by Arrow dates
:return: formatDic: dictionary containing the date and time formats
'''
dateTimeComponentSymbolList, separatorsList = DateTimeUtil._extractDateTimeFormatComponentFromDateTimeFormat(dateTimeformat)
separatorsList = [(lambda x : x.strip('\\'))(x) for x in separatorsList]
formatDic = {}
# handling date formats
dateSep = separatorsList[0]
if 'Y' in dateTimeComponentSymbolList[0].upper():
#date start with year
formatDic[DateTimeUtil.LONG_DATE_FORMAT_KEY] = '{}{}{}{}{}'.format(dateTimeComponentSymbolList[0],
dateSep,
dateTimeComponentSymbolList[1],
dateSep,
dateTimeComponentSymbolList[2])
formatDic[DateTimeUtil.SHORT_DATE_FORMAT_KEY] = '{}{}{}'.format(dateTimeComponentSymbolList[1],
dateSep,
dateTimeComponentSymbolList[2])
elif 'D' in dateTimeComponentSymbolList[0].upper():
#date start with day
formatDic[DateTimeUtil.LONG_DATE_FORMAT_KEY] = '{}{}{}{}{}'.format(dateTimeComponentSymbolList[0],
dateSep,
dateTimeComponentSymbolList[1],
dateSep,
dateTimeComponentSymbolList[2])
formatDic[DateTimeUtil.SHORT_DATE_FORMAT_KEY] = '{}{}{}'.format(dateTimeComponentSymbolList[0],
dateSep,
dateTimeComponentSymbolList[1])
elif 'M' in dateTimeComponentSymbolList[0].upper():
# date start with month
formatDic[DateTimeUtil.LONG_DATE_FORMAT_KEY] = '{}{}{}{}{}'.format(dateTimeComponentSymbolList[0],
dateSep,
dateTimeComponentSymbolList[1],
dateSep,
dateTimeComponentSymbolList[2])
formatDic[DateTimeUtil.SHORT_DATE_FORMAT_KEY] = '{}{}{}'.format(dateTimeComponentSymbolList[0],
dateSep,
dateTimeComponentSymbolList[1])
else:
#unsupported date format
pass
# handling time formats
timeSep = separatorsList[1]
formatDic[DateTimeUtil.TIME_FORMAT_KEY] = '{}{}{}'.format(dateTimeComponentSymbolList[3],
timeSep,
dateTimeComponentSymbolList[4])
return formatDic
@staticmethod
def _unescape(str):
return str.strip('\\')
if __name__ == '__main__':
utcArrowDateTimeObj_endOfPreviousDay = DateTimeUtil.dateTimeStringToArrowLocalDate("2017/09/29 23:59:59", 'UTC',
"YYYY/MM/DD HH:mm:ss")
print('endOfPreviousDay.timestamp: ' + str(utcArrowDateTimeObj_endOfPreviousDay.timestamp) + ' ' + utcArrowDateTimeObj_endOfPreviousDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
utcArrowDateTimeObj_begOfCurrentDay = DateTimeUtil.dateTimeStringToArrowLocalDate("2017/09/30 00:00:00", 'UTC',
"YYYY/MM/DD HH:mm:ss")
print('begOfCurrentDay.timestamp; ' + str(utcArrowDateTimeObj_begOfCurrentDay.timestamp) + ' ' + utcArrowDateTimeObj_begOfCurrentDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
utcArrowDateTimeObj_endOfCurrentDay = DateTimeUtil.dateTimeStringToArrowLocalDate("2017/09/30 23:59:59", 'UTC',
"YYYY/MM/DD HH:mm:ss")
print('endOfCurrentDay.timestamp: ' + str(utcArrowDateTimeObj_endOfCurrentDay.timestamp) + ' ' + utcArrowDateTimeObj_endOfCurrentDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
utcArrowDateTimeObj_midOfCurrentDay = DateTimeUtil.dateTimeStringToArrowLocalDate("2017/09/30 13:59:59", 'UTC',
"YYYY/MM/DD HH:mm:ss")
print('midOfCurrentDay.timestamp: ' + str(utcArrowDateTimeObj_midOfCurrentDay.timestamp) + ' ' + utcArrowDateTimeObj_midOfCurrentDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
utcArrowDateTimeObj_midOfCurrentDay = DateTimeUtil.dateTimeStringToArrowLocalDate("2017/09/29 22:00:00", 'UTC',
"YYYY/MM/DD HH:mm:ss")
print('midOfCurrentDay.timestamp: ' + str(utcArrowDateTimeObj_midOfCurrentDay.timestamp) + ' ' + utcArrowDateTimeObj_midOfCurrentDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
print('essai : ' + str(utcArrowDateTimeObj_midOfCurrentDay.timestamp) + ' ' + utcArrowDateTimeObj_midOfCurrentDay.format("YYYY/MM/DD HH:mm:ss ZZ"))
tsEOD = DateTimeUtil.shiftTimeStampToEndOfDay(utcArrowDateTimeObj_begOfCurrentDay.timestamp)
print('shifted: ' + str(tsEOD))
timezoneStr = 'Europe/Zurich'
now = DateTimeUtil.localNow(timezoneStr)
dateTimeformat = 'DD/MM/YY HH:mm'
dateTimeComponentSymbolList, separatorsList, dateTimeComponentValueList = DateTimeUtil.getFormattedDateTimeComponents(now, dateTimeformat)
print(dateTimeComponentSymbolList)
print(separatorsList)
print(dateTimeComponentValueList)
dateTimeformat = 'YYYY.MM.DD HH.mm'
dateTimeComponentSymbolList, separatorsList, dateTimeComponentValueList = DateTimeUtil.getFormattedDateTimeComponents(now, dateTimeformat)
print(dateTimeComponentSymbolList)
print(separatorsList)
print(dateTimeComponentValueList)
gmtPlusList = []
gmtMinusList = []
for i in range(24):
tz = 'GMT+' + str(i)
gmtPlusList.append(tz)
tzTime = DateTimeUtil.localNow(tz)
print("{}: {}".format(tz, tzTime))
for i in range(24):
tz = 'GMT-' + str(i)
gmtMinusList.append(tz)
tzTime = DateTimeUtil.localNow(tz)
print("{}: {}".format(tz, tzTime))
gmtPlusList.append(gmtMinusList)
print(gmtPlusList)
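The timestamp manipulations above rely on the third-party `arrow` package; the core round-trip (UTC timestamp, aware datetime, end-of-day timestamp) can be reproduced with the standard library alone. A sketch of the `shiftTimeStampToEndOfDay` behaviour (the function name here is illustrative):

```python
from datetime import datetime, timezone

def end_of_day_timestamp(ts):
    # interpret ts as UTC, keep the same calendar day,
    # and move the clock to 23:59:59 (as shiftTimeStampToEndOfDay does)
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    return int(dt.replace(hour=23, minute=59, second=59).timestamp())

noon = int(datetime(2017, 9, 30, 12, 0, 0, tzinfo=timezone.utc).timestamp())
print(end_of_day_timestamp(noon) - noon)  # 43199 seconds from noon to 23:59:59
```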
| [
"jp.schnyder@gmail.com"
] | jp.schnyder@gmail.com |
4cea90c357cb1e8caed9ed683cb52f6f3bd5744f | c383d6adebdfc35e96fa88809111f79f7ebee819 | /interview/search/sequential_search.py | 8be59d3d4a3b577ea5030d69c480a9da5ca7390d | [] | no_license | aryabartar/learning | 5eb9f32673d01dcf181c73436dd3fecbf777d555 | f3ff3e4548922d9aa0f700e65fa949ab0108653c | refs/heads/master | 2020-05-16T06:03:11.974609 | 2019-08-26T14:10:42 | 2019-08-26T14:10:42 | 182,834,405 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 274 | py | def find_unordered(elem, array):
for i in array:
if elem == i:
return True
return False
def find_ordered(elem, array):
    # early exit below assumes array is sorted in descending order:
    # once the current value drops below elem, elem cannot appear later
    for i in array:
        if i < elem:
            break
        if elem == i:
            return True
    return False
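Both searches are O(n) in the worst case, but the ordered variant can stop early once it has passed the point where the target could appear. A self-contained sketch of the ascending-order counterpart (the early exit in `find_ordered` above only works for a descending-sorted input):

```python
def find_ordered_ascending(elem, array):
    # assumes array is sorted in ascending order; stop as soon as
    # the current value exceeds elem, since elem cannot appear later
    for value in array:
        if value == elem:
            return True
        if value > elem:
            break
    return False

print(find_ordered_ascending(5, [1, 3, 5, 8]))  # True
print(find_ordered_ascending(4, [1, 3, 5, 8]))  # False, stops at 5
```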
| [
"bartararya@gmail.com"
] | bartararya@gmail.com |
bf5aee2a49aedcbb1894fff95445f2209cc8718c | 7b1a5db0a067766a9805fe04105f6c7f9ff131f3 | /pysal/model/spreg/probit.py | 8d0e942ff7705edf18731f8eeff94fa76fdf8904 | [] | permissive | ocefpaf/pysal | 2d25b9f3a8bd87a7be3f96b825995a185624e1d0 | 7e397bdb4c22d4e2442b4ee88bcd691d2421651d | refs/heads/master | 2020-06-26T17:13:06.016203 | 2019-07-31T19:54:35 | 2019-07-31T19:54:35 | 199,696,188 | 0 | 0 | BSD-3-Clause | 2019-07-30T17:17:19 | 2019-07-30T17:17:18 | null | UTF-8 | Python | false | false | 34,172 | py | """Probit regression class and diagnostics."""
__author__ = "Luc Anselin luc.anselin@asu.edu, Pedro V. Amaral pedro.amaral@asu.edu"
import numpy as np
import numpy.linalg as la
import scipy.optimize as op
from scipy.stats import norm, chi2
chisqprob = chi2.sf
import scipy.sparse as SP
from . import user_output as USER
from . import summary_output as SUMMARY
from .utils import spdot, spbroadcast
__all__ = ["Probit"]
class BaseProbit(object):
"""
Probit class to do all the computations
Parameters
----------
x : array
nxk array of independent variables (assumed to be aligned with y)
y : array
nx1 array of dependent binary variable
w : W
PySAL weights instance or spatial weights sparse matrix
aligned with y
optim : string
Optimization method.
Default: 'newton' (Newton-Raphson).
Alternatives: 'ncg' (Newton-CG), 'bfgs' (BFGS algorithm)
scalem : string
Method to calculate the scale of the marginal effects.
Default: 'phimean' (Mean of individual marginal effects)
Alternative: 'xmean' (Marginal effects at variables mean)
maxiter : int
Maximum number of iterations until optimizer stops
Attributes
----------
x : array
Two dimensional array with n rows and one column for each
independent (exogenous) variable, including the constant
y : array
nx1 array of dependent variable
betas : array
kx1 array with estimated coefficients
predy : array
nx1 array of predicted y values
n : int
Number of observations
k : int
Number of variables
vm : array
Variance-covariance matrix (kxk)
z_stat : list of tuples
z statistic; each tuple contains the pair (statistic,
p-value), where each is a float
xmean : array
Mean of the independent variables (kx1)
predpc : float
Percent of y correctly predicted
logl : float
                       Log-Likelihood of the estimation
scalem : string
Method to calculate the scale of the marginal effects.
scale : float
Scale of the marginal effects.
slopes : array
Marginal effects of the independent variables (k-1x1)
Note: Disregards the presence of dummies.
slopes_vm : array
Variance-covariance matrix of the slopes (k-1xk-1)
LR : tuple
Likelihood Ratio test of all coefficients = 0
(test statistics, p-value)
Pinkse_error: float
Lagrange Multiplier test against spatial error correlation.
Implemented as presented in [Pinkse2004]_
KP_error : float
Moran's I type test against spatial error correlation.
Implemented as presented in [Kelejian2001]_
PS_error : float
Lagrange Multiplier test against spatial error correlation.
Implemented as presented in [Pinkse1998]_
warning : boolean
if True Maximum number of iterations exceeded or gradient
and/or function calls not changing.
Examples
--------
>>> import numpy as np
>>> import pysal
>>> dbf = pysal.open(pysal.examples.get_path('columbus.dbf'),'r')
>>> y = np.array([dbf.by_col('CRIME')]).T
>>> x = np.array([dbf.by_col('INC'), dbf.by_col('HOVAL')]).T
>>> x = np.hstack((np.ones(y.shape),x))
>>> w = pysal.open(pysal.examples.get_path("columbus.gal"), 'r').read()
>>> w.transform='r'
>>> model = BaseProbit((y>40).astype(float), x, w=w)
>>> np.around(model.betas, decimals=6)
array([[ 3.353811],
[-0.199653],
[-0.029514]])
>>> np.around(model.vm, decimals=6)
array([[ 0.852814, -0.043627, -0.008052],
[-0.043627, 0.004114, -0.000193],
[-0.008052, -0.000193, 0.00031 ]])
>>> tests = np.array([['Pinkse_error','KP_error','PS_error']])
>>> stats = np.array([[model.Pinkse_error[0],model.KP_error[0],model.PS_error[0]]])
>>> pvalue = np.array([[model.Pinkse_error[1],model.KP_error[1],model.PS_error[1]]])
    >>> print(np.hstack((tests.T, np.around(np.hstack((stats.T, pvalue.T)), 6))))
[['Pinkse_error' '3.131719' '0.076783']
['KP_error' '1.721312' '0.085194']
['PS_error' '2.558166' '0.109726']]
"""
def __init__(self, y, x, w=None, optim='newton', scalem='phimean', maxiter=100):
self.y = y
self.x = x
self.n, self.k = x.shape
self.optim = optim
self.scalem = scalem
self.w = w
self.maxiter = maxiter
par_est, self.warning = self.par_est()
self.betas = np.reshape(par_est[0], (self.k, 1))
self.logl = -float(par_est[1])
@property
def vm(self):
try:
return self._cache['vm']
except AttributeError:
self._cache = {}
H = self.hessian(self.betas)
self._cache['vm'] = -la.inv(H)
except KeyError:
H = self.hessian(self.betas)
self._cache['vm'] = -la.inv(H)
return self._cache['vm']
@vm.setter
def vm(self, val):
try:
self._cache['vm'] = val
except AttributeError:
self._cache = {}
self._cache['vm'] = val
    @property  # note: repeats the lazy-cache pattern of vm; could be factored into a shared helper
def z_stat(self):
try:
return self._cache['z_stat']
except AttributeError:
self._cache = {}
variance = self.vm.diagonal()
zStat = self.betas.reshape(len(self.betas),) / np.sqrt(variance)
rs = {}
for i in range(len(self.betas)):
rs[i] = (zStat[i], norm.sf(abs(zStat[i])) * 2)
self._cache['z_stat'] = rs.values()
except KeyError:
variance = self.vm.diagonal()
zStat = self.betas.reshape(len(self.betas),) / np.sqrt(variance)
rs = {}
for i in range(len(self.betas)):
rs[i] = (zStat[i], norm.sf(abs(zStat[i])) * 2)
self._cache['z_stat'] = rs.values()
return self._cache['z_stat']
@z_stat.setter
def z_stat(self, val):
try:
self._cache['z_stat'] = val
except AttributeError:
self._cache = {}
self._cache['z_stat'] = val
@property
def slopes_std_err(self):
try:
return self._cache['slopes_std_err']
except AttributeError:
self._cache = {}
self._cache['slopes_std_err'] = np.sqrt(self.slopes_vm.diagonal())
except KeyError:
self._cache['slopes_std_err'] = np.sqrt(self.slopes_vm.diagonal())
return self._cache['slopes_std_err']
@slopes_std_err.setter
def slopes_std_err(self, val):
try:
self._cache['slopes_std_err'] = val
except AttributeError:
self._cache = {}
self._cache['slopes_std_err'] = val
@property
def slopes_z_stat(self):
try:
return self._cache['slopes_z_stat']
except AttributeError:
self._cache = {}
return self.slopes_z_stat
except KeyError:
zStat = self.slopes.reshape(
len(self.slopes),) / self.slopes_std_err
rs = {}
for i in range(len(self.slopes)):
rs[i] = (zStat[i], norm.sf(abs(zStat[i])) * 2)
self._cache['slopes_z_stat'] = list(rs.values())
return self._cache['slopes_z_stat']
@slopes_z_stat.setter
def slopes_z_stat(self, val):
try:
self._cache['slopes_z_stat'] = val
except AttributeError:
self._cache = {}
self._cache['slopes_z_stat'] = val
@property
def xmean(self):
try:
return self._cache['xmean']
except AttributeError:
self._cache = {}
            try:  # x may be a scipy sparse matrix, in which case sum(x) returns a matrix needing .toarray()
self._cache['xmean'] = np.reshape(sum(self.x) / self.n, (self.k, 1))
except:
self._cache['xmean'] = np.reshape(sum(self.x).toarray() / self.n, (self.k, 1))
except KeyError:
try:
self._cache['xmean'] = np.reshape(sum(self.x) / self.n, (self.k, 1))
except:
self._cache['xmean'] = np.reshape(sum(self.x).toarray() / self.n, (self.k, 1))
return self._cache['xmean']
@xmean.setter
def xmean(self, val):
try:
self._cache['xmean'] = val
except AttributeError:
self._cache = {}
self._cache['xmean'] = val
@property
def xb(self):
try:
return self._cache['xb']
except AttributeError:
self._cache = {}
self._cache['xb'] = spdot(self.x, self.betas)
except KeyError:
self._cache['xb'] = spdot(self.x, self.betas)
return self._cache['xb']
@xb.setter
def xb(self, val):
try:
self._cache['xb'] = val
except AttributeError:
self._cache = {}
self._cache['xb'] = val
@property
def predy(self):
try:
return self._cache['predy']
except AttributeError:
self._cache = {}
self._cache['predy'] = norm.cdf(self.xb)
except KeyError:
self._cache['predy'] = norm.cdf(self.xb)
return self._cache['predy']
@predy.setter
def predy(self, val):
try:
self._cache['predy'] = val
except AttributeError:
self._cache = {}
self._cache['predy'] = val
@property
def predpc(self):
try:
return self._cache['predpc']
except AttributeError:
self._cache = {}
predpc = abs(self.y - self.predy)
for i in range(len(predpc)):
if predpc[i] > 0.5:
predpc[i] = 0
else:
predpc[i] = 1
self._cache['predpc'] = float(100.0 * np.sum(predpc) / self.n)
except KeyError:
predpc = abs(self.y - self.predy)
for i in range(len(predpc)):
if predpc[i] > 0.5:
predpc[i] = 0
else:
predpc[i] = 1
self._cache['predpc'] = float(100.0 * np.sum(predpc) / self.n)
return self._cache['predpc']
@predpc.setter
def predpc(self, val):
try:
self._cache['predpc'] = val
except AttributeError:
self._cache = {}
self._cache['predpc'] = val
@property
def phiy(self):
try:
return self._cache['phiy']
except AttributeError:
self._cache = {}
self._cache['phiy'] = norm.pdf(self.xb)
except KeyError:
self._cache['phiy'] = norm.pdf(self.xb)
return self._cache['phiy']
@phiy.setter
def phiy(self, val):
try:
self._cache['phiy'] = val
except AttributeError:
self._cache = {}
self._cache['phiy'] = val
@property
def scale(self):
try:
return self._cache['scale']
except AttributeError:
self._cache = {}
if self.scalem == 'phimean':
self._cache['scale'] = float(1.0 * np.sum(self.phiy) / self.n)
elif self.scalem == 'xmean':
self._cache['scale'] = float(norm.pdf(np.dot(self.xmean.T, self.betas)))
except KeyError:
if self.scalem == 'phimean':
self._cache['scale'] = float(1.0 * np.sum(self.phiy) / self.n)
if self.scalem == 'xmean':
self._cache['scale'] = float(norm.pdf(np.dot(self.xmean.T, self.betas)))
return self._cache['scale']
@scale.setter
def scale(self, val):
try:
self._cache['scale'] = val
except AttributeError:
self._cache = {}
self._cache['scale'] = val
@property
def slopes(self):
try:
return self._cache['slopes']
except AttributeError:
self._cache = {}
self._cache['slopes'] = self.betas[1:] * self.scale
except KeyError:
self._cache['slopes'] = self.betas[1:] * self.scale
return self._cache['slopes']
@slopes.setter
def slopes(self, val):
try:
self._cache['slopes'] = val
except AttributeError:
self._cache = {}
self._cache['slopes'] = val
@property
def slopes_vm(self):
try:
return self._cache['slopes_vm']
except AttributeError:
self._cache = {}
x = self.xmean
b = self.betas
dfdb = np.eye(self.k) - spdot(b.T, x) * spdot(b, x.T)
slopes_vm = (self.scale ** 2) * \
np.dot(np.dot(dfdb, self.vm), dfdb.T)
self._cache['slopes_vm'] = slopes_vm[1:, 1:]
except KeyError:
x = self.xmean
b = self.betas
dfdb = np.eye(self.k) - spdot(b.T, x) * spdot(b, x.T)
slopes_vm = (self.scale ** 2) * \
np.dot(np.dot(dfdb, self.vm), dfdb.T)
self._cache['slopes_vm'] = slopes_vm[1:, 1:]
return self._cache['slopes_vm']
@slopes_vm.setter
def slopes_vm(self, val):
try:
self._cache['slopes_vm'] = val
except AttributeError:
self._cache = {}
self._cache['slopes_vm'] = val
@property
def LR(self):
try:
return self._cache['LR']
except AttributeError:
self._cache = {}
P = 1.0 * np.sum(self.y) / self.n
LR = float(
-2 * (self.n * (P * np.log(P) + (1 - P) * np.log(1 - P)) - self.logl))
self._cache['LR'] = (LR, chisqprob(LR, self.k))
except KeyError:
P = 1.0 * np.sum(self.y) / self.n
LR = float(
-2 * (self.n * (P * np.log(P) + (1 - P) * np.log(1 - P)) - self.logl))
self._cache['LR'] = (LR, chisqprob(LR, self.k))
return self._cache['LR']
@LR.setter
def LR(self, val):
try:
self._cache['LR'] = val
except AttributeError:
self._cache = {}
self._cache['LR'] = val
@property
def u_naive(self):
try:
return self._cache['u_naive']
except AttributeError:
self._cache = {}
self._cache['u_naive'] = self.y - self.predy
except KeyError:
u_naive = self.y - self.predy
self._cache['u_naive'] = u_naive
return self._cache['u_naive']
@u_naive.setter
def u_naive(self, val):
try:
self._cache['u_naive'] = val
except AttributeError:
self._cache = {}
self._cache['u_naive'] = val
@property
def u_gen(self):
try:
return self._cache['u_gen']
except AttributeError:
self._cache = {}
Phi_prod = self.predy * (1 - self.predy)
u_gen = self.phiy * (self.u_naive / Phi_prod)
self._cache['u_gen'] = u_gen
except KeyError:
Phi_prod = self.predy * (1 - self.predy)
u_gen = self.phiy * (self.u_naive / Phi_prod)
self._cache['u_gen'] = u_gen
return self._cache['u_gen']
@u_gen.setter
def u_gen(self, val):
try:
self._cache['u_gen'] = val
except AttributeError:
self._cache = {}
self._cache['u_gen'] = val
@property
def Pinkse_error(self):
try:
return self._cache['Pinkse_error']
except AttributeError:
self._cache = {}
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
except KeyError:
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
return self._cache['Pinkse_error']
@Pinkse_error.setter
def Pinkse_error(self, val):
try:
self._cache['Pinkse_error'] = val
except AttributeError:
self._cache = {}
self._cache['Pinkse_error'] = val
@property
def KP_error(self):
try:
return self._cache['KP_error']
except AttributeError:
self._cache = {}
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
except KeyError:
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
return self._cache['KP_error']
@KP_error.setter
def KP_error(self, val):
try:
self._cache['KP_error'] = val
except AttributeError:
self._cache = {}
self._cache['KP_error'] = val
@property
def PS_error(self):
try:
return self._cache['PS_error']
except AttributeError:
self._cache = {}
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
except KeyError:
self._cache['Pinkse_error'], self._cache[
'KP_error'], self._cache['PS_error'] = sp_tests(self)
return self._cache['PS_error']
@PS_error.setter
def PS_error(self, val):
try:
self._cache['PS_error'] = val
except AttributeError:
self._cache = {}
self._cache['PS_error'] = val
def par_est(self):
start = np.dot(la.inv(spdot(self.x.T, self.x)),
spdot(self.x.T, self.y))
flogl = lambda par: -self.ll(par)
if self.optim == 'newton':
fgrad = lambda par: self.gradient(par)
fhess = lambda par: self.hessian(par)
par_hat = newton(flogl, start, fgrad, fhess, self.maxiter)
warn = par_hat[2]
else:
fgrad = lambda par: -self.gradient(par)
if self.optim == 'bfgs':
par_hat = op.fmin_bfgs(
flogl, start, fgrad, full_output=1, disp=0)
warn = par_hat[6]
if self.optim == 'ncg':
fhess = lambda par: -self.hessian(par)
par_hat = op.fmin_ncg(
flogl, start, fgrad, fhess=fhess, full_output=1, disp=0)
warn = par_hat[5]
if warn > 0:
warn = True
else:
warn = False
return par_hat, warn
def ll(self, par):
beta = np.reshape(np.array(par), (self.k, 1))
q = 2 * self.y - 1
qxb = q * spdot(self.x, beta)
ll = sum(np.log(norm.cdf(qxb)))
return ll
def gradient(self, par):
beta = np.reshape(np.array(par), (self.k, 1))
q = 2 * self.y - 1
qxb = q * spdot(self.x, beta)
lamb = q * norm.pdf(qxb) / norm.cdf(qxb)
gradient = spdot(lamb.T, self.x)[0]
return gradient
    def hessian(self, par):
        beta = np.reshape(np.array(par), (self.k, 1))
        q = 2 * self.y - 1
        xb = spdot(self.x, beta)
        qxb = q * xb
        lamb = q * norm.pdf(qxb) / norm.cdf(qxb)
        # Hessian of the log-likelihood: -X' diag(lamb * (lamb + xb)) X
        hessian = spdot(self.x.T, spbroadcast(self.x, -lamb * (lamb + xb)))
        return hessian
class Probit(BaseProbit):
"""
    Classic non-spatial Probit and spatial diagnostics. The class includes a
    printout that presents all the results and tests in a nicely organized
    format.
The diagnostics for spatial dependence currently implemented are:
* Pinkse Error [Pinkse2004]_
* Kelejian and Prucha Moran's I [Kelejian2001]_
* Pinkse & Slade Error [Pinkse1998]_
Parameters
----------
x : array
nxk array of independent variables (assumed to be aligned with y)
y : array
nx1 array of dependent binary variable
w : W
PySAL weights instance aligned with y
optim : string
Optimization method.
Default: 'newton' (Newton-Raphson).
Alternatives: 'ncg' (Newton-CG), 'bfgs' (BFGS algorithm)
scalem : string
Method to calculate the scale of the marginal effects.
Default: 'phimean' (Mean of individual marginal effects)
Alternative: 'xmean' (Marginal effects at variables mean)
maxiter : int
Maximum number of iterations until optimizer stops
name_y : string
Name of dependent variable for use in output
name_x : list of strings
Names of independent variables for use in output
name_w : string
Name of weights matrix for use in output
name_ds : string
Name of dataset for use in output
Attributes
----------
x : array
Two dimensional array with n rows and one column for each
independent (exogenous) variable, including the constant
y : array
nx1 array of dependent variable
betas : array
kx1 array with estimated coefficients
predy : array
nx1 array of predicted y values
n : int
Number of observations
k : int
Number of variables
vm : array
Variance-covariance matrix (kxk)
z_stat : list of tuples
z statistic; each tuple contains the pair (statistic,
p-value), where each is a float
xmean : array
Mean of the independent variables (kx1)
predpc : float
Percent of y correctly predicted
logl : float
                   Log-Likelihood of the estimation
scalem : string
Method to calculate the scale of the marginal effects.
scale : float
Scale of the marginal effects.
slopes : array
Marginal effects of the independent variables (k-1x1)
slopes_vm : array
Variance-covariance matrix of the slopes (k-1xk-1)
LR : tuple
Likelihood Ratio test of all coefficients = 0
(test statistics, p-value)
Pinkse_error: float
Lagrange Multiplier test against spatial error correlation.
Implemented as presented in [Pinkse2004]_
KP_error : float
Moran's I type test against spatial error correlation.
Implemented as presented in [Kelejian2001]_
PS_error : float
Lagrange Multiplier test against spatial error correlation.
Implemented as presented in [Pinkse1998]_
warning : boolean
                   If True, the maximum number of iterations was exceeded
                   or the gradient and/or function calls were not changing.
name_y : string
Name of dependent variable for use in output
name_x : list of strings
Names of independent variables for use in output
name_w : string
Name of weights matrix for use in output
name_ds : string
Name of dataset for use in output
title : string
Name of the regression method used
Examples
--------
We first need to import the needed modules, namely numpy to convert the
data we read into arrays that ``spreg`` understands and ``pysal`` to
perform all the analysis.
>>> import numpy as np
>>> import pysal
Open data on Columbus neighborhood crime (49 areas) using pysal.open().
This is the DBF associated with the Columbus shapefile. Note that
pysal.open() also reads data in CSV format; since the actual class
requires data to be passed in as numpy arrays, the user can read their
data in using any method.
>>> dbf = pysal.open(pysal.examples.get_path('columbus.dbf'),'r')
Extract the CRIME column (crime) from the DBF file and make it the
dependent variable for the regression. Note that PySAL requires this to be
    a numpy array of shape (n, 1) as opposed to the also common shape of (n, )
that other packages accept. Since we want to run a probit model and for this
example we use the Columbus data, we also need to transform the continuous
CRIME variable into a binary variable. As in [McMillen1992]_, we define
y = 1 if CRIME > 40.
>>> y = np.array([dbf.by_col('CRIME')]).T
>>> y = (y>40).astype(float)
Extract HOVAL (home values) and INC (income) vectors from the DBF to be used as
independent variables in the regression. Note that PySAL requires this to
be an nxj numpy array, where j is the number of independent variables (not
including a constant). By default this class adds a vector of ones to the
independent variables passed in.
>>> names_to_extract = ['INC', 'HOVAL']
>>> x = np.array([dbf.by_col(name) for name in names_to_extract]).T
    Since we want to test the probit model for spatial dependence, we need to
specify the spatial weights matrix that includes the spatial configuration of
the observations into the error component of the model. To do that, we can open
an already existing gal file or create a new one. In this case, we will use
``columbus.gal``, which contains contiguity relationships between the
observations in the Columbus dataset we are using throughout this example.
    Note that, in order to actually read the file and not just open it, we
    need to append '.read()' at the end of the command.
>>> w = pysal.open(pysal.examples.get_path("columbus.gal"), 'r').read()
Unless there is a good reason not to do it, the weights have to be
row-standardized so every row of the matrix sums to one. In PySAL, this
can be easily performed in the following way:
>>> w.transform='r'
    With the preliminaries in place, we are ready to run the model. In this
case, we will need the variables and the weights matrix. If we want to
have the names of the variables printed in the output summary, we will
have to pass them in as well, although this is optional.
>>> model = Probit(y, x, w=w, name_y='crime', name_x=['income','home value'], name_ds='columbus', name_w='columbus.gal')
Once we have run the model, we can explore a little bit the output. The
regression object we have created has many attributes so take your time to
discover them.
>>> np.around(model.betas, decimals=6)
array([[ 3.353811],
[-0.199653],
[-0.029514]])
>>> np.around(model.vm, decimals=6)
array([[ 0.852814, -0.043627, -0.008052],
[-0.043627, 0.004114, -0.000193],
[-0.008052, -0.000193, 0.00031 ]])
    Since we have provided a spatial weights matrix, the diagnostics for
spatial dependence have also been computed. We can access them and their
p-values individually:
>>> tests = np.array([['Pinkse_error','KP_error','PS_error']])
>>> stats = np.array([[model.Pinkse_error[0],model.KP_error[0],model.PS_error[0]]])
>>> pvalue = np.array([[model.Pinkse_error[1],model.KP_error[1],model.PS_error[1]]])
    >>> print(np.hstack((tests.T,np.around(np.hstack((stats.T,pvalue.T)),6))))
[['Pinkse_error' '3.131719' '0.076783']
['KP_error' '1.721312' '0.085194']
['PS_error' '2.558166' '0.109726']]
    Or we can easily obtain a full summary of all the results nicely formatted
    and ready to be printed simply by typing 'print(model.summary)'
"""
def __init__(
self, y, x, w=None, optim='newton', scalem='phimean', maxiter=100,
vm=False, name_y=None, name_x=None, name_w=None, name_ds=None,
spat_diag=False):
n = USER.check_arrays(y, x)
USER.check_y(y, n)
        if w is not None:
USER.check_weights(w, y)
spat_diag = True
ws = w.sparse
else:
ws = None
x_constant = USER.check_constant(x)
BaseProbit.__init__(self, y=y, x=x_constant, w=ws,
optim=optim, scalem=scalem, maxiter=maxiter)
self.title = "CLASSIC PROBIT ESTIMATOR"
self.name_ds = USER.set_name_ds(name_ds)
self.name_y = USER.set_name_y(name_y)
self.name_x = USER.set_name_x(name_x, x)
self.name_w = USER.set_name_w(name_w, w)
SUMMARY.Probit(reg=self, w=w, vm=vm, spat_diag=spat_diag)
def newton(flogl, start, fgrad, fhess, maxiter):
"""
Calculates the Newton-Raphson method
Parameters
----------
flogl : lambda
Function to calculate the log-likelihood
start : array
kx1 array of starting values
fgrad : lambda
Function to calculate the gradient
fhess : lambda
Function to calculate the hessian
maxiter : int
Maximum number of iterations until optimizer stops
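
    Examples
    --------
    A minimal sketch (illustrative; not part of the original documentation):
    maximize the concave function ll(b) = -(b - 2)**2, whose optimum is at
    b = 2, by passing its gradient and hessian as callables.

    >>> import numpy as np
    >>> flogl = lambda b: -float((b[0, 0] - 2) ** 2)
    >>> fgrad = lambda b: np.array([[-2.0 * (b[0, 0] - 2)]])
    >>> fhess = lambda b: np.array([[-2.0]])
    >>> b_hat, logl, warn = newton(flogl, np.array([[0.0]]), fgrad, fhess, 50)
    >>> float(b_hat[0, 0])
    2.0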
"""
warn = 0
iteration = 0
par_hat0 = start
m = 1
while (iteration < maxiter and m >= 1e-04):
H = -la.inv(fhess(par_hat0))
g = fgrad(par_hat0).reshape(start.shape)
Hg = np.dot(H, g)
par_hat0 = par_hat0 + Hg
iteration += 1
m = np.dot(g.T, Hg)
if iteration == maxiter:
warn = 1
logl = flogl(par_hat0)
return (par_hat0, logl, warn)
def sp_tests(reg):
"""
Calculates tests for spatial dependence in Probit models
Parameters
----------
reg : regression object
output instance from a probit model
"""
    if reg.w is not None:
        try:
            w = reg.w.sparse
        except AttributeError:
            w = reg.w
Phi = reg.predy
phi = reg.phiy
# Pinkse_error:
Phi_prod = Phi * (1 - Phi)
u_naive = reg.u_naive
u_gen = reg.u_gen
sig2 = np.sum((phi * phi) / Phi_prod) / reg.n
LM_err_num = np.dot(u_gen.T, (w * u_gen)) ** 2
trWW = np.sum((w * w).diagonal())
trWWWWp = trWW + np.sum((w * w.T).diagonal())
LM_err = float(1.0 * LM_err_num / (sig2 ** 2 * trWWWWp))
LM_err = np.array([LM_err, chisqprob(LM_err, 1)])
# KP_error:
moran = moran_KP(reg.w, u_naive, Phi_prod)
# Pinkse-Slade_error:
u_std = u_naive / np.sqrt(Phi_prod)
ps_num = np.dot(u_std.T, (w * u_std)) ** 2
trWpW = np.sum((w.T * w).diagonal())
ps = float(ps_num / (trWW + trWpW))
# chi-square instead of bootstrap.
ps = np.array([ps, chisqprob(ps, 1)])
else:
raise Exception("W matrix must be provided to calculate spatial tests.")
return LM_err, moran, ps
def moran_KP(w, u, sig2i):
"""
Calculates Moran-flavoured tests
Parameters
----------
w : W
PySAL weights instance aligned with y
u : array
nx1 array of naive residuals
sig2i : array
nx1 array of individual variance
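
    Examples
    --------
    A minimal sketch (illustrative; not part of the original documentation):
    two observations that are mutual neighbors with unit variances give a
    statistic of -1 for perfectly negatively correlated residuals.

    >>> import numpy as np
    >>> import scipy.sparse as SP
    >>> w = SP.csr_matrix(np.array([[0., 1.], [1., 0.]]))
    >>> u = np.array([[1.0], [-1.0]])
    >>> sig2i = np.array([[1.0], [1.0]])
    >>> stat, pvalue = moran_KP(w, u, sig2i)
    >>> round(float(stat), 6)
    -1.0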
"""
    try:
        w = w.sparse
    except AttributeError:
        pass
moran_num = np.dot(u.T, (w * u))
E = SP.lil_matrix(w.get_shape())
E.setdiag(sig2i.flat)
E = E.asformat('csr')
WE = w * E
moran_den = np.sqrt(np.sum((WE * WE + (w.T * E) * WE).diagonal()))
moran = float(1.0 * moran_num / moran_den)
moran = np.array([moran, norm.sf(abs(moran)) * 2.])
return moran
def _test():
import doctest
start_suppress = np.get_printoptions()['suppress']
np.set_printoptions(suppress=True)
doctest.testmod()
np.set_printoptions(suppress=start_suppress)
if __name__ == '__main__':
_test()
import numpy as np
import pysal
dbf = pysal.open(pysal.examples.get_path('columbus.dbf'), 'r')
y = np.array([dbf.by_col('CRIME')]).T
var_x = ['INC', 'HOVAL']
x = np.array([dbf.by_col(name) for name in var_x]).T
w = pysal.open(pysal.examples.get_path("columbus.gal"), 'r').read()
w.transform = 'r'
probit1 = Probit(
(y > 40).astype(float), x, w=w, name_x=var_x, name_y="CRIME",
name_ds="Columbus", name_w="columbus.dbf")
print(probit1.summary)