blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 777 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 149 values | src_encoding stringclasses 26 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 3 10.2M | extension stringclasses 188 values | content stringlengths 3 10.2M | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
65c802fcb665af7e9553bbacc627687b7a33f4b6 | f313486c2cdbea0fa40bc9e8a7ea8810a2ce9e98 | /tests/run_ns.py | 18245a0b5394ce1ac6386ceede3641802e2b9e12 | [
"MIT"
] | permissive | trituenhantaoio/pytorch-rl | ae1f20dee149979f50e4a671767eed4524e7bb5b | efaa80a97ea805a5c76fad4df83221a100d3258a | refs/heads/master | 2020-09-19T23:37:03.343763 | 2019-07-31T17:03:56 | 2019-07-31T17:03:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 252 | py | import subprocess
import os
experiments = [e for e in os.listdir() if e.startswith('ns')]
for experiment in experiments:
print(experiment)
command = f'python {experiment}'
process = subprocess.Popen(command, shell=True)
process.wait() | [
"bentrevett@gmail.com"
] | bentrevett@gmail.com |
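The runner above launches each `ns*` experiment script through the shell and waits for it to finish. A sketch of the same loop using `subprocess.run`, which waits by default and exposes the exit code (the function name and failure handling are illustrative, not from the original):

```python
import subprocess
import sys

def run_experiments(scripts):
    # Run each experiment script sequentially with the current interpreter;
    # stop and report the exit code of the first one that fails.
    for script in scripts:
        print(script)
        result = subprocess.run([sys.executable, script])
        if result.returncode != 0:
            return result.returncode
    return 0
```

Passing the interpreter and script as an argument list avoids `shell=True`, so file names with spaces need no quoting.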
d057862aca60f62e0685051aaf79dd2246a34249 | ff6248be9573caec94bea0fa2b1e4b6bf0aa682b | /output/StudentProblem/10.21.9.56/1/1569577364.py | 53cfab12f42d6d84469e7e7b8a966d17dc4d851a | [] | no_license | LennartElbe/codeEvo | 0e41b1a7705204e934ef71a5a28c047366c10f71 | e89b329bc9edd37d5d9986f07ca8a63d50686882 | refs/heads/master | 2020-12-21T17:28:25.150352 | 2020-03-26T10:22:35 | 2020-03-26T10:22:35 | 236,498,032 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,674 | py | ============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /tmp
collected 0 items / 1 error
==================================== ERRORS ====================================
________________________ ERROR collecting test session _________________________
../../../Library/Python/3.7/lib/python/site-packages/_pytest/python.py:513: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
../../../Library/Python/3.7/lib/python/site-packages/py/_path/local.py:701: in pyimport
__import__(modname)
<frozen importlib._bootstrap>:983: in _find_and_load
???
<frozen importlib._bootstrap>:967: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:677: in _load_unlocked
???
../../../Library/Python/3.7/lib/python/site-packages/_pytest/assertion/rewrite.py:143: in exec_module
source_stat, co = _rewrite_test(fn, self.config)
../../../Library/Python/3.7/lib/python/site-packages/_pytest/assertion/rewrite.py:328: in _rewrite_test
tree = ast.parse(source, filename=fn)
/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ast.py:35: in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
E File "/private/tmp/blabla.py", line 47
E with open f as fn:
E ^
E SyntaxError: invalid syntax
=========================== short test summary info ============================
ERROR ../../../../../tmp
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.20s ===============================
| [
"lenni.elbe@gmail.com"
] | lenni.elbe@gmail.com |
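The collection error above is a plain `SyntaxError`: `with open f as fn:` omits the call parentheses, so pytest cannot even parse the module. A minimal sketch of the intended form (the helper name and throwaway file are illustrative):

```python
import os
import tempfile

# `open` is a function call, so its argument must be parenthesized:
#   broken:  with open f as fn:
#   fixed:   with open(f) as fn:
def read_file(path):
    with open(path) as fn:
        return fn.read()

# quick self-check against a throwaway file
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write('x = 1\n')
assert read_file(tmp.name) == 'x = 1\n'
os.unlink(tmp.name)
```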
44c9e8921ec5d14bab6637c0c9b70692a3cabd18 | 4a2bfa14d4d250d742b1737639e3768936382425 | /virtual/bin/pip | b6fff006bf91c8740a98fcf97af02304b45217ae | [] | no_license | AugustineOchieng/gallery | 1946c62894b5e73adb34deddaa8d93d9ece21705 | 1d0933b96dc50a100451e609ddfe49af6c6ff9b2 | refs/heads/master | 2020-05-21T07:19:11.787689 | 2019-05-10T09:15:30 | 2019-05-10T09:15:30 | 185,957,716 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 247 | #!/home/moringa/Desktop/gallery/virtual/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pip._internal import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"gusochieng@gmail.com"
] | gusochieng@gmail.com | |
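The `re.sub` in this console-script shim strips the `-script.py`/`-script.pyw`/`.exe` suffixes that setuptools wrappers append to `argv[0]` on Windows. The pattern can be exercised in isolation (the helper name is illustrative):

```python
import re

def strip_wrapper_suffix(name):
    # remove a trailing "-script.py", "-script.pyw" or ".exe", if present
    return re.sub(r'(-script\.pyw?|\.exe)?$', '', name)

assert strip_wrapper_suffix('pip-script.py') == 'pip'
assert strip_wrapper_suffix('pip-script.pyw') == 'pip'
assert strip_wrapper_suffix('pip.exe') == 'pip'
assert strip_wrapper_suffix('pip') == 'pip'   # no suffix: unchanged
```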
fc35f22f1539d28df0821c5ab0b2026bb173b18f | 8600ea155f279e5a8dfe5a1926038511f6b6a7ea | /hr_attendance/hr_attendance.py | 01ca1d441abc805ee879c1f031b880be3055d097 | [] | no_license | MarkNorgate/addons-EAD | c2fff89ab16fce3ba19fbe433ee5863705a6f4e5 | 840f28642b5d328e4b86839c413e5164622295a5 | refs/heads/master | 2020-04-23T22:11:00.164438 | 2015-07-22T12:24:53 | 2015-07-22T12:24:53 | 39,501,011 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,440 | py | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). All Rights Reserved
# $Id$
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from mx import DateTime
import time
from osv import fields, osv
from tools.translate import _
class hr_action_reason(osv.osv):
_name = "hr.action.reason"
_description = "Action reason"
_columns = {
'name' : fields.char('Reason', size=64, required=True),
'action_type' : fields.selection([('sign_in', 'Sign in'), ('sign_out', 'Sign out')], "Action's type"),
}
_defaults = {
'action_type' : lambda *a: 'sign_in',
}
hr_action_reason()
def _employee_get(obj,cr,uid,context={}):
ids = obj.pool.get('hr.employee').search(cr, uid, [('user_id','=', uid)])
if ids:
return ids[0]
return False
class hr_attendance(osv.osv):
_name = "hr.attendance"
_description = "Attendance"
_columns = {
'name' : fields.datetime('Date', required=True),
'action' : fields.selection([('sign_in', 'Sign In'), ('sign_out', 'Sign Out'),('action','Action')], 'Action', required=True),
'action_desc' : fields.many2one("hr.action.reason", "Action reason", domain="[('action_type', '=', action)]"),
'employee_id' : fields.many2one('hr.employee', 'Employee', required=True, select=True),
}
_defaults = {
'name' : lambda *a: time.strftime('%Y-%m-%d %H:%M:%S'),
'employee_id' : _employee_get,
}
def _altern_si_so(self, cr, uid, ids):
for id in ids:
sql = '''
select action, name
from hr_attendance as att
where employee_id = (select employee_id from hr_attendance where id=%s)
and action in ('sign_in','sign_out')
and name <= (select name from hr_attendance where id=%s)
order by name desc
limit 2
'''
cr.execute(sql, (id, id))
atts = cr.fetchall()
if not ((len(atts)==1 and atts[0][0] == 'sign_in') or (atts[0][0] != atts[1][0] and atts[0][1] != atts[1][1])):
return False
return True
_constraints = [(_altern_si_so, 'Error: Sign in (resp. Sign out) must follow Sign out (resp. Sign in)', ['action'])]
_order = 'name desc'
hr_attendance()
class hr_employee(osv.osv):
_inherit = "hr.employee"
_description = "Employee"
def _state(self, cr, uid, ids, name, args, context={}):
result = {}
for id in ids:
result[id] = 'absent'
cr.execute('SELECT hr_attendance.action, hr_attendance.employee_id \
FROM ( \
SELECT MAX(name) AS name, employee_id \
FROM hr_attendance \
WHERE action in (\'sign_in\', \'sign_out\') \
GROUP BY employee_id \
) AS foo \
LEFT JOIN hr_attendance \
ON (hr_attendance.employee_id = foo.employee_id \
AND hr_attendance.name = foo.name) \
WHERE hr_attendance.employee_id \
in %s', (tuple(ids),))
for res in cr.fetchall():
result[res[1]] = res[0] == 'sign_in' and 'present' or 'absent'
return result
_columns = {
'state': fields.function(_state, method=True, type='selection', selection=[('absent', 'Absent'), ('present', 'Present')], string='Attendance'),
}
def sign_change(self, cr, uid, ids, context={}, dt=False):
for emp in self.browse(cr, uid, ids):
if not self._action_check(cr, uid, emp.id, dt, context):
raise osv.except_osv(_('Warning'), _('You tried to sign with a date anterior to another event !\nTry to contact the administrator to correct attendances.'))
res = {'action':'action', 'employee_id':emp.id}
if dt:
res['name'] = dt
att_id = self.pool.get('hr.attendance').create(cr, uid, res, context=context)
return True
def sign_out(self, cr, uid, ids, context={}, dt=False, *args):
id = False
for emp in self.browse(cr, uid, ids):
if not self._action_check(cr, uid, emp.id, dt, context):
raise osv.except_osv(_('Warning'), _('You tried to sign out with a date anterior to another event !\nTry to contact the administrator to correct attendances.'))
res = {'action':'sign_out', 'employee_id':emp.id}
if dt:
res['name'] = dt
att_id = self.pool.get('hr.attendance').create(cr, uid, res, context=context)
id = att_id
return id
def _action_check(self, cr, uid, emp_id, dt=False,context={}):
cr.execute('select max(name) from hr_attendance where employee_id=%s', (emp_id,))
res = cr.fetchone()
return not (res and (res[0]>=(dt or time.strftime('%Y-%m-%d %H:%M:%S'))))
def sign_in(self, cr, uid, ids, context={}, dt=False, *args):
id = False
for emp in self.browse(cr, uid, ids):
if not self._action_check(cr, uid, emp.id, dt, context):
raise osv.except_osv(_('Warning'), _('You tried to sign in with a date anterior to another event !\nTry to contact the administrator to correct attendances.'))
res = {'action':'sign_in', 'employee_id':emp.id}
if dt:
res['name'] = dt
id = self.pool.get('hr.attendance').create(cr, uid, res, context=context)
return id
hr_employee()
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| [
"mark.norgate@affinity-digital.com"
] | mark.norgate@affinity-digital.com |
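The `_altern_si_so` constraint above uses SQL to check that, per employee, every sign-in is followed by a sign-out and vice versa. The rule itself is simple to state in plain Python; this in-memory sketch is illustrative (the helper name is not from the original):

```python
def alternates(actions):
    # An attendance log is valid when it starts with 'sign_in'
    # and the two actions strictly alternate from there.
    expected = 'sign_in'
    for action in actions:
        if action != expected:
            return False
        expected = 'sign_out' if expected == 'sign_in' else 'sign_in'
    return True

assert alternates(['sign_in', 'sign_out', 'sign_in'])
assert not alternates(['sign_in', 'sign_in'])   # two sign-ins in a row
assert not alternates(['sign_out'])             # must start with a sign-in
```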
0a38735793340df0638f90b37e8c9454bde3bc24 | 5b3d8b5c612c802fd846de63f86b57652d33f672 | /Python/seven_kyu/all_non_consecutive.py | 14c8d4b4b416904d5134728fa537b9d0cee1c26c | [
"Apache-2.0"
] | permissive | Brokenshire/codewars-projects | 1e591b57ed910a567f6c0423beb194fa7f8f693e | db9cd09618b8a7085b0d53ad76f73f9e249b9396 | refs/heads/master | 2021-07-22T18:50:25.847592 | 2021-01-25T23:27:17 | 2021-01-25T23:27:17 | 228,114,677 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,078 | py | # Python solution for 'Find all non-consecutive numbers' codewars question.
# Level: 7 kyu
# Tags: FUNDAMENTALS AND ARRAYS.
# Author: Jack Brokenshire
# Date: 05/08/2020
import unittest
def all_non_consecutive(arr):
"""
Find all the elements of an array that are non consecutive. A number is non consecutive if it is not exactly one
larger than the previous element in the array. The first element gets a pass and is never considered non consecutive.
:param arr: An array of integers.
:return: The results as an array of objects with two values i: <the index of the non-consecutive number> and n:
<the non-consecutive number>.
"""
return [{'i': i + 1, 'n': arr[i + 1]} for i in range(len(arr) - 1) if arr[i] + 1 != arr[i + 1]]
class TestAllNonConsecutive(unittest.TestCase):
"""Class to test 'all_non_consecutive' function"""
def test_all_non_consecutive(self):
self.assertEqual(all_non_consecutive([1, 2, 3, 4, 6, 7, 8, 10]), [{'i': 4, 'n': 6}, {'i': 7, 'n': 10}])
if __name__ == "__main__":
unittest.main()
| [
"29889878+Brokenshire@users.noreply.github.com"
] | 29889878+Brokenshire@users.noreply.github.com |
b7a1176f2647c53258d8cbf350246966f4f5e5b8 | 87bb2b9258c887e8fbcaca08d18e5d95ae96462d | /Codewars/Python/7kyu/7kyu_Date format validation.py | 17b214d1870933d89fc7b8688fb2ba7d133960df | [] | no_license | KonradMarzec1991/Codewars-LeetCode | a9e4d09f4271fecb3a7fc1ee436358ac1bbec5e4 | 442113532158f5a3ee7051a42e911afa5373bb5f | refs/heads/master | 2023-04-21T17:04:37.434876 | 2021-05-11T21:47:14 | 2021-05-11T21:47:14 | 166,555,499 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 106 | py | def date_checker(date):
import re
return bool(re.match(r'^\d{2}-\d{2}-\d{4}\s\d{2}:\d{2}$', date)) | [
"konrimarzec@gmail.com"
] | konrimarzec@gmail.com |
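The one-liner above accepts only `DD-MM-YYYY HH:MM`-shaped strings; note it validates shape, not calendar validity, so `99-99-9999 99:99` passes. A few checks against the same pattern:

```python
import re

def date_checker(date):
    # shape check only: two digits, dash, two digits, dash, four digits,
    # whitespace, two digits, colon, two digits
    return bool(re.match(r'^\d{2}-\d{2}-\d{4}\s\d{2}:\d{2}$', date))

assert date_checker('01-12-2020 13:45')
assert not date_checker('1-12-2020 13:45')      # day needs two digits
assert not date_checker('01-12-2020 13:45:30')  # trailing seconds rejected
assert date_checker('99-99-9999 99:99')         # shape-valid, calendar-nonsense
```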
ee1db31b9cc40af9415eb0b52dc577ffb039e338 | fe71d0f38a282225e6cfbc918bd8e4ca21ad4335 | /factory/factory/settings.py | 021ecb68667bf0b52e8896d0db0d3bcd1017241b | [
"MIT"
] | permissive | avara1986/graphql-django | 5db4b503da9918e7c2ea6374ff99e1bb87c358f0 | 57b9bcb479842e243488a59cb4db4f523c2877ce | refs/heads/master | 2021-06-24T16:15:33.110235 | 2017-09-10T18:35:51 | 2017-09-10T18:35:51 | 103,052,101 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,220 | py | """
Django settings for factory project.
Generated by 'django-admin startproject' using Django 1.11.5.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'z^t3h)3+s@+v&7j9-6&r(4ji9m5#secm(-_jz(1*_j&x26bp6t'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'graphene_django',
'cars'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'factory.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'factory.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
GRAPHENE = {
'SCHEMA': 'cars.schema.schema' # Where your Graphene schema lives
}
| [
"a.vara.1986@gmail.com"
] | a.vara.1986@gmail.com |
e0305bf4df162ebc08c5f96428ce7e4ffb35c523 | f31ec01e5e7fc7ba1704cd7f1e59992752ecbf8f | /tornado/platform/auto.py | d3eeb56b2e519fa77293677eccfd6eba46e5f2a7 | [
"Apache-2.0",
"CC-BY-3.0"
] | permissive | st4lk/tornado | 5a79995ada89fd090acf251c71a26d4eeea75e6b | 1ceeb1ffd581f31678cd63fe81ef8d2e4f35380b | refs/heads/master | 2020-12-03T05:16:44.365763 | 2015-02-23T15:49:23 | 2015-02-23T15:49:23 | 29,911,658 | 1 | 1 | null | 2015-01-27T11:42:04 | 2015-01-27T11:42:04 | null | UTF-8 | Python | false | false | 1,726 | py | #!/usr/bin/env python
#
# Copyright 2011 Facebook
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Implementation of platform-specific functionality.
For each function or class described in `tornado.platform.interface`,
the appropriate platform-specific implementation exists in this module.
Most code that needs access to this functionality should do e.g.::
from tornado.platform.auto import set_close_exec
"""
from __future__ import absolute_import, division, print_function, with_statement
import os
if os.name == 'nt':
from tornado.platform.common import Waker
from tornado.platform.windows import set_close_exec
elif 'APPENGINE_RUNTIME' in os.environ:
from tornado.platform.common import Waker
def set_close_exec(fd):
pass
else:
from tornado.platform.posix import set_close_exec, Waker
try:
# monotime monkey-patches the time module to have a monotonic function
# in versions of python before 3.3.
import monotime
# Silence pyflakes warning about this unused import
monotime
except ImportError:
pass
try:
from time import monotonic as monotonic_time
except ImportError:
monotonic_time = None
__all__ = ['Waker', 'set_close_exec', 'monotonic_time']
| [
"ben@bendarnell.com"
] | ben@bendarnell.com |
e75599dda87392d1c22802b0f69cffe846666e9b | b9f0399cf7ea0a66fb76900f0c2ceac2d4859d34 | /venv/lib/python3.6/site-packages/markdown/extensions/smarty.py | fb767c551f4a761b0a52c5c827dd4ebf5818989e | [] | no_license | huangtaosdt/QA-website-zsb | eea0fcd6a2415cf5c61f01f6692d39a544ed900a | 518470a3b37d6561797a38de42fe0c81d27c6ceb | refs/heads/master | 2021-09-20T15:19:44.559747 | 2018-08-11T03:53:17 | 2018-08-11T03:53:17 | 100,498,996 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 10,305 | py | # -*- coding: utf-8 -*-
'''
Smarty extension for Python-Markdown
====================================
Adds conversion of ASCII dashes, quotes and ellipses to their HTML
entity equivalents.
See <https://pythonhosted.org/Markdown/extensions/smarty.html>
for documentation.
Author: 2013, Dmitry Shachnev <mitya57@gmail.com>
All changes Copyright 2013-2014 The Python Markdown Project
License: [BSD](http://www.opensource.org/licenses/bsd-license.php)
SmartyPants license:
Copyright (c) 2003 John Gruber <http://daringfireball.net/>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name "SmartyPants" nor the names of its contributors
may be used to endorse or promote products derived from this
software without specific prior written permission.
This software is provided by the copyright holders and contributors "as
is" and any express or implied warranties, including, but not limited
to, the implied warranties of merchantability and fitness for a
particular purpose are disclaimed. In no event shall the copyright
owner or contributors be liable for any direct, indirect, incidental,
special, exemplary, or consequential damages (including, but not
limited to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on any
theory of liability, whether in contract, strict liability, or tort
(including negligence or otherwise) arising in any way out of the use
of this software, even if advised of the possibility of such damage.
smartypants.py license:
smartypants.py is a derivative work of SmartyPants.
Copyright (c) 2004, 2007 Chad Miller <http://web.chad.org/>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
This software is provided by the copyright holders and contributors "as
is" and any express or implied warranties, including, but not limited
to, the implied warranties of merchantability and fitness for a
particular purpose are disclaimed. In no event shall the copyright
owner or contributors be liable for any direct, indirect, incidental,
special, exemplary, or consequential damages (including, but not
limited to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on any
theory of liability, whether in contract, strict liability, or tort
(including negligence or otherwise) arising in any way out of the use
of this software, even if advised of the possibility of such damage.
'''
from __future__ import unicode_literals
from . import Extension
from ..inlinepatterns import HtmlPattern, HTML_RE
from ..odict import OrderedDict
from ..treeprocessors import InlineProcessor
# Constants for quote education.
punctClass = r"""[!"#\$\%'()*+,-.\/:;<=>?\@\[\\\]\^_`{|}~]"""
endOfWordClass = r"[\s.,;:!?)]"
closeClass = "[^\ \t\r\n\[\{\(\-\u0002\u0003]"
openingQuotesBase = (
'(\s' # a whitespace char
'| ' # or a non-breaking space entity
'|--' # or dashes
'|–|—' # or unicode
'|&[mn]dash;' # or named dash entities
'|–|—' # or decimal entities
')'
)
substitutions = {
'mdash': '—',
'ndash': '–',
'ellipsis': '…',
'left-angle-quote': '«',
'right-angle-quote': '»',
'left-single-quote': '‘',
'right-single-quote': '’',
'left-double-quote': '“',
'right-double-quote': '”',
}
# Special case if the very first character is a quote
# followed by punctuation at a non-word-break. Close the quotes by brute force:
singleQuoteStartRe = r"^'(?=%s\B)" % punctClass
doubleQuoteStartRe = r'^"(?=%s\B)' % punctClass
# Special case for double sets of quotes, e.g.:
# <p>He said, "'Quoted' words in a larger quote."</p>
doubleQuoteSetsRe = r""""'(?=\w)"""
singleQuoteSetsRe = r"""'"(?=\w)"""
# Special case for decade abbreviations (the '80s):
decadeAbbrRe = r"(?<!\w)'(?=\d{2}s)"
# Get most opening double quotes:
openingDoubleQuotesRegex = r'%s"(?=\w)' % openingQuotesBase
# Double closing quotes:
closingDoubleQuotesRegex = r'"(?=\s)'
closingDoubleQuotesRegex2 = '(?<=%s)"' % closeClass
# Get most opening single quotes:
openingSingleQuotesRegex = r"%s'(?=\w)" % openingQuotesBase
# Single closing quotes:
closingSingleQuotesRegex = r"(?<=%s)'(?!\s|s\b|\d)" % closeClass
closingSingleQuotesRegex2 = r"(?<=%s)'(\s|s\b)" % closeClass
# All remaining quotes should be opening ones
remainingSingleQuotesRegex = "'"
remainingDoubleQuotesRegex = '"'
HTML_STRICT_RE = HTML_RE + r'(?!\>)'
class SubstituteTextPattern(HtmlPattern):
def __init__(self, pattern, replace, markdown_instance):
""" Replaces matches with some text. """
HtmlPattern.__init__(self, pattern)
self.replace = replace
self.markdown = markdown_instance
def handleMatch(self, m):
result = ''
for part in self.replace:
if isinstance(part, int):
result += m.group(part)
else:
result += self.markdown.htmlStash.store(part, safe=True)
return result
class SmartyExtension(Extension):
def __init__(self, *args, **kwargs):
self.config = {
'smart_quotes': [True, 'Educate quotes'],
'smart_angled_quotes': [False, 'Educate angled quotes'],
'smart_dashes': [True, 'Educate dashes'],
'smart_ellipses': [True, 'Educate ellipses'],
'substitutions': [{}, 'Overwrite default substitutions'],
}
super(SmartyExtension, self).__init__(*args, **kwargs)
self.substitutions = dict(substitutions)
self.substitutions.update(self.getConfig('substitutions', default={}))
def _addPatterns(self, md, patterns, serie):
for ind, pattern in enumerate(patterns):
pattern += (md,)
pattern = SubstituteTextPattern(*pattern)
after = ('>smarty-%s-%d' % (serie, ind - 1) if ind else '_begin')
name = 'smarty-%s-%d' % (serie, ind)
self.inlinePatterns.add(name, pattern, after)
def educateDashes(self, md):
emDashesPattern = SubstituteTextPattern(
r'(?<!-)---(?!-)', (self.substitutions['mdash'],), md
)
enDashesPattern = SubstituteTextPattern(
r'(?<!-)--(?!-)', (self.substitutions['ndash'],), md
)
self.inlinePatterns.add('smarty-em-dashes', emDashesPattern, '_begin')
self.inlinePatterns.add(
'smarty-en-dashes', enDashesPattern, '>smarty-em-dashes'
)
def educateEllipses(self, md):
ellipsesPattern = SubstituteTextPattern(
r'(?<!\.)\.{3}(?!\.)', (self.substitutions['ellipsis'],), md
)
self.inlinePatterns.add('smarty-ellipses', ellipsesPattern, '_begin')
def educateAngledQuotes(self, md):
leftAngledQuotePattern = SubstituteTextPattern(
r'\<\<', (self.substitutions['left-angle-quote'],), md
)
rightAngledQuotePattern = SubstituteTextPattern(
r'\>\>', (self.substitutions['right-angle-quote'],), md
)
self.inlinePatterns.add(
'smarty-left-angle-quotes', leftAngledQuotePattern, '_begin'
)
self.inlinePatterns.add(
'smarty-right-angle-quotes',
rightAngledQuotePattern,
'>smarty-left-angle-quotes'
)
def educateQuotes(self, md):
lsquo = self.substitutions['left-single-quote']
rsquo = self.substitutions['right-single-quote']
ldquo = self.substitutions['left-double-quote']
rdquo = self.substitutions['right-double-quote']
patterns = (
(singleQuoteStartRe, (rsquo,)),
(doubleQuoteStartRe, (rdquo,)),
(doubleQuoteSetsRe, (ldquo + lsquo,)),
(singleQuoteSetsRe, (lsquo + ldquo,)),
(decadeAbbrRe, (rsquo,)),
(openingSingleQuotesRegex, (2, lsquo)),
(closingSingleQuotesRegex, (rsquo,)),
(closingSingleQuotesRegex2, (rsquo, 2)),
(remainingSingleQuotesRegex, (lsquo,)),
(openingDoubleQuotesRegex, (2, ldquo)),
(closingDoubleQuotesRegex, (rdquo,)),
(closingDoubleQuotesRegex2, (rdquo,)),
(remainingDoubleQuotesRegex, (ldquo,))
)
self._addPatterns(md, patterns, 'quotes')
def extendMarkdown(self, md, md_globals):
configs = self.getConfigs()
self.inlinePatterns = OrderedDict()
if configs['smart_ellipses']:
self.educateEllipses(md)
if configs['smart_quotes']:
self.educateQuotes(md)
if configs['smart_angled_quotes']:
self.educateAngledQuotes(md)
# Override HTML_RE from inlinepatterns.py so that it does not
# process tags with duplicate closing quotes.
md.inlinePatterns["html"] = HtmlPattern(HTML_STRICT_RE, md)
if configs['smart_dashes']:
self.educateDashes(md)
inlineProcessor = InlineProcessor(md)
inlineProcessor.inlinePatterns = self.inlinePatterns
md.treeprocessors.add('smarty', inlineProcessor, '_end')
md.ESCAPED_CHARS.extend(['"', "'"])
def makeExtension(*args, **kwargs):
return SmartyExtension(*args, **kwargs)
| [
"huangtaosdt@163.com"
] | huangtaosdt@163.com |
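The `educateDashes` patterns above turn `---` into an em dash and `--` into an en dash, with lookarounds so longer dash runs are left alone. The same two regexes can be exercised standalone (the wrapper function is a sketch, not the extension's API):

```python
import re

EM_DASH = re.compile(r'(?<!-)---(?!-)')   # exactly three hyphens
EN_DASH = re.compile(r'(?<!-)--(?!-)')    # exactly two hyphens

def educate_dashes(text):
    # em dashes first, so the '--' inside '---' is never seen by EN_DASH
    return EN_DASH.sub('\u2013', EM_DASH.sub('\u2014', text))

assert educate_dashes('a---b, c--d') == 'a\u2014b, c\u2013d'
assert educate_dashes('----') == '----'   # four hyphens: neither pattern fires
```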
96bb6d5195d072d90ecd75f5f6b6bab8750b45e8 | a2d13658503b9b921e27994152ab6adb554725bc | /store/migrations/0036_auto_20201226_1430.py | 90e2d15defac2bc93f126c4c3f0918dc532b67f9 | [] | no_license | avishkakavindu/sushi-chef-django | 40a1d7916d7f8c37ba1290cb717af517d2bce265 | 4c112d806720d903877822baaa26159c32704901 | refs/heads/master | 2023-03-18T11:12:41.721554 | 2021-03-11T08:22:52 | 2021-03-11T08:22:52 | 303,053,978 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 980 | py | # Generated by Django 3.1.2 on 2020-12-26 09:00
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('store', '0035_auto_20201226_1241'),
]
operations = [
migrations.AddField(
model_name='order',
name='total',
field=models.DecimalField(decimal_places=2, default=0, max_digits=10),
preserve_default=False,
),
migrations.AlterField(
model_name='order',
name='coupon',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='coupon_set', to='store.coupon'),
),
migrations.AlterField(
model_name='orderedproduct',
name='order',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='orderedproduct_set', to='store.order'),
),
]
| [
"avishkakavindud@gmail.com"
] | avishkakavindud@gmail.com |
9a3b27804f5201776943de0278f7deff9ad858ca | 781e2692049e87a4256320c76e82a19be257a05d | /assignments/python/wc/src/735.py | 174fa827ffdb1d04493c39ad3d27ab4e27f67fca | [] | no_license | itsolutionscorp/AutoStyle-Clustering | 54bde86fe6dbad35b568b38cfcb14c5ffaab51b0 | be0e2f635a7558f56c61bc0b36c6146b01d1e6e6 | refs/heads/master | 2020-12-11T07:27:19.291038 | 2016-03-16T03:18:00 | 2016-03-16T03:18:42 | 59,454,921 | 4 | 0 | null | 2016-05-23T05:40:56 | 2016-05-23T05:40:56 | null | UTF-8 | Python | false | false | 367 | py | import string
def word_count(phrase):
punct = set(string.punctuation)
no_punct =''
    no_punct = ''
for char in phrase:
if char not in punct:
no_punct += char
dict = {}
for word in no_punct.split():
str = word.lower()
if str not in dict:
dict[str] = 1
else:
dict[str] +=1
return dict
| [
"rrc@berkeley.edu"
] | rrc@berkeley.edu |
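The tally above — strip punctuation, lower-case, count — is the classic use case for `collections.Counter`; an equivalent sketch with the same behavior:

```python
import string
from collections import Counter

def word_count(phrase):
    # drop ASCII punctuation, then count lower-cased words
    cleaned = phrase.translate(str.maketrans('', '', string.punctuation))
    return dict(Counter(cleaned.lower().split()))

assert word_count('the cat, the hat!') == {'the': 2, 'cat': 1, 'hat': 1}
```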
665eedf826b911f95e4b7ff7f0f21c48d89547fb | 7eda5c4c9bfedbd561d77df14c454f0485d8e025 | /Program Assignment4_Mincut/kargerMinCut.py | e4844a29c093e5d88cf14dc66981c36e82f046e1 | [
"MIT"
] | permissive | brianchiang-tw/Algorithm_specialization_Part-I | af6c7e1ad7f70323d1b92086d85dd9f7ec157c1b | 44b24f4a97f23d24bde6a4235f70f7707c8b03b7 | refs/heads/master | 2020-09-12T14:57:59.434448 | 2019-11-18T14:36:37 | 2019-11-18T14:36:37 | 222,459,865 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,025 | py | import sys
import os
import math
import random
from datetime import datetime
import copy
def edge_contraction( adj_list_dict, vertex_u, vertex_v):
# contract edge(u, v)
# keep vertex u, and update u's adjacency list from appending vertex v's
adj_list_dict[ vertex_u] = adj_list_dict[ vertex_u ] + adj_list_dict[ vertex_v ]
# remove v's adjacency list from global adjacency list
adj_list_dict.pop( vertex_v )
# update each edge(x, v) redircting to edge(x, u)
for i in adj_list_dict:
for j in range( len(adj_list_dict[i] ) ):
if adj_list_dict[i][j] == vertex_v:
adj_list_dict[i][j] = vertex_u
# eliminate all self-loop edges during current edge contraction
adj_list_dict[ vertex_u ] = list( filter(lambda vertex: vertex != vertex_u, adj_list_dict[vertex_u] ) )
# return updated adjacency list dictionary
return adj_list_dict
def karger_min_cut( graph_with_adj_list_dict ):
if len(graph_with_adj_list_dict) == 2:
# Base case and stop condition
list_of_all_edge = list( graph_with_adj_list_dict.values() )
        # the remaining number of edges is the min cut
return len( list_of_all_edge[0] )
else:
# Inductive step:
# Keep conducting karger algorithm until only 2 verteices remain.
# list of all vertex (key value of "graph_with_adj_list_dict" )
list_of_all_vertex_in_graph = list( graph_with_adj_list_dict.keys() )
# randomly choose one edge with two end points, vertex_u and vertex v
# vertex u
vertex_u = random.choice( list_of_all_vertex_in_graph )
# vertex v
vertex_v = random.choice( graph_with_adj_list_dict[vertex_u] )
# conduct edge contraction on edge E = (u, v)
# update graph with adjacency list dictionary
#graph_with_adj_list_dict = edge_contraction( graph_with_adj_list_dict, vertex_u, vertex_v)
        # keep running Karger's algorithm until the graph has only two vertices
min_cut = karger_min_cut( edge_contraction( graph_with_adj_list_dict, vertex_u, vertex_v) )
        # the remaining number of edges is the min cut
return min_cut
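The two functions above can be exercised end-to-end on a toy graph. The sketch below re-implements the same contraction loop compactly (iterative rather than recursive, not the file's exact code) on a 4-cycle, whose minimum cut is always 2:

```python
import random

def contract(adj, u, v):
    # merge v into u: append v's edges, redirect x->v to x->u, drop self-loops
    adj[u] += adj.pop(v)
    for x in adj:
        adj[x] = [u if y == v else y for y in adj[x]]
    adj[u] = [y for y in adj[u] if y != u]
    return adj

def min_cut_once(adj):
    # one randomised contraction run; the surviving parallel edges form a cut
    while len(adj) > 2:
        u = random.choice(list(adj))
        v = random.choice(adj[u])
        contract(adj, u, v)
    return len(next(iter(adj.values())))

# square graph 1-2-3-4-1: contracting a cycle always yields a smaller cycle,
# so every run reports the true min cut of 2
graph = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
best = min(min_cut_once({k: list(vs) for k, vs in graph.items()}) for _ in range(30))
print(best)  # 2
```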
def main():
current_work_directory = os.getcwd()
    filename = os.path.join(current_work_directory, "Program Assignment4_Mincut", "kargerMinCut.txt")
with open( filename) as file_handle:
        # graph is a dictionary, based on an adjacency list
        # key : vertex i
        # value : the vertices connected to vertex i
graph = {}
for one_line in file_handle:
# each line in input text file is well separated by tab, i.e., the "\t"
one_adjacency_list = list( map(int, one_line.strip().split("\t") ) )
            # get vertex index as dictionary's key
vertex_i = one_adjacency_list.pop(0)
# print("vertex i : ", vertex_i )
# get adjacency list, excluding the first one value(key), as dictionary's value
graph[vertex_i] = one_adjacency_list
    # get size of graph (the number of vertices)
size_of_graph = len(graph)
v_square = size_of_graph ** 2
# min_cut initialization with |V|^2
min_cut = v_square
# upper_bound initialization with |V|^2 * log |V|
upper_bound = int( v_square*math.log(size_of_graph) )
for i in range( upper_bound ):
new_graph = copy.deepcopy( graph )
current_min_cut = karger_min_cut( new_graph )
'''
print( "\n iteration counter: ", i)
print( "current min cut: ", current_min_cut )
print( "minimal min cut so far", min_cut )
'''
if( current_min_cut < min_cut ):
min_cut = current_min_cut
print("min cut updated in this iteration: ", min_cut)
print("\n final min cut value:", min_cut)
return
if __name__ == "__main__":
main()
| [
"brianchiang1988@icloud.com"
] | brianchiang1988@icloud.com |
a281da690d99c410475474b7ca9444b93a880654 | 743bd46390fa798194b6c028a3914dbdfe3686b1 | /win32/Lib/win32gui_struct.py | fadbe4e7eb790d75de30535a3838a968bce652fe | [] | no_license | tjguk/pywin32-old | 8667e0c042e2a031cd99b6f6693f55c51ff08b67 | 5f263eab8e1812ad350c0e7bff0ee9588ad5056c | refs/heads/master | 2021-05-29T16:49:43.710261 | 2015-05-23T15:27:22 | 2015-05-23T15:27:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 29,488 | py | from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
# This is a work in progress - see Demos/win32gui_menu.py
# win32gui_struct.py - helpers for working with various win32gui structures.
# As win32gui is "light-weight", it does not define objects for all possible
# win32 structures - in general, "buffer" objects are passed around - it is
# the callers responsibility to pack the buffer in the correct format.
#
# This module defines some helpers for the commonly used structures.
#
# In general, each structure has 3 functions:
#
# buffer, extras = PackSTRUCTURE(items, ...)
# item, ... = UnpackSTRUCTURE(buffer)
#   buffer, extras = EmptySTRUCTURE(...)
#
# 'extras' is always items that must be held along with the buffer, as the
# buffer refers to these object's memory.
# For structures that support a 'mask', this mask is hidden from the user - if
# 'None' is passed, the mask flag will not be set, or on return, None will
# be returned for the value if the mask is not set.
#
# NOTE: I considered making these structures look like real classes, and
# support 'attributes' etc - however, ctypes already has a good structure
# mechanism - I think it makes more sense to support ctype structures
# at the win32gui level, then there will be no need for this module at all.
# XXX - the above makes sense in terms of what is built and passed to
# win32gui (ie, the Pack* functions) - but doesn't make as much sense for
# the Unpack* functions, where the aim is user convenience.
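The buffer/extras convention described above is plain `struct` packing under the hood. A toy, platform-independent illustration of the pack/unpack pair (a hypothetical two-int POINT-like record, not one of this module's real structures):

```python
import struct

_point_fmt = "ii"  # two native 32-bit ints: x, y

def PackPOINT(x, y):
    # returns a bytes buffer laid out like the C struct
    return struct.pack(_point_fmt, x, y)

def UnpackPOINT(buf):
    # inverse of PackPOINT; buf must be exactly calcsize(_point_fmt) bytes
    return struct.unpack(_point_fmt, buf)

buf = PackPOINT(3, -7)
print(len(buf), UnpackPOINT(buf))  # 8 (3, -7)
```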
import sys
import win32gui
import win32con
import struct
import array
import commctrl
import pywintypes
is64bit = "64 bit" in sys.version
try:
from collections import namedtuple
def _MakeResult(names_str, values):
names = names_str.split()
nt = namedtuple(names[0], names[1:])
return nt(*values)
except ImportError:
# no namedtuple support - just return the values as a normal tuple.
def _MakeResult(names_str, values):
return values
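The `_MakeResult` helper above gives callers values that behave both as plain tuples and as objects with named fields. A standalone illustration of the same pattern (the `POINT` name here is hypothetical, not a real win32gui result type):

```python
from collections import namedtuple

def _MakeResult(names_str, values):
    # first word names the type, the remaining words name the fields
    names = names_str.split()
    return namedtuple(names[0], names[1:])(*values)

r = _MakeResult("POINT x y", (3, 7))
print(r)           # POINT(x=3, y=7)
print(r.x, r[1])   # named and positional access both work: 3 7
```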
_nmhdr_fmt = "PPi"
if is64bit:
# When the item past the NMHDR gets aligned (eg, when it is a struct)
# we need this many bytes padding.
_nmhdr_align_padding = "xxxx"
else:
_nmhdr_align_padding = ""
# Encode a string suitable for passing in a win32gui related structure
# If win32gui is built with UNICODE defined (ie, py3k), then functions
# like InsertMenuItem are actually calling InsertMenuItemW etc, so all
# strings will need to be unicode.
if win32gui.UNICODE:
def _make_text_buffer(text):
# XXX - at this stage win32gui.UNICODE is only True in py3k,
        # and in py3k it makes sense to reject bytes.
if not isinstance(text, str):
raise TypeError('MENUITEMINFO text must be unicode')
data = (text+'\0').encode("unicode-internal")
return array.array("b", data)
else:
def _make_text_buffer(text):
if isinstance(text, str):
text = text.encode("mbcs")
return array.array("b", text+'\0')
# make an 'empty' buffer, ready for filling with cch characters.
def _make_empty_text_buffer(cch):
return _make_text_buffer("\0" * cch)
if sys.version_info < (3,0):
def _make_memory(ob):
return str(buffer(ob))
def _make_bytes(sval):
return sval
else:
def _make_memory(ob):
return bytes(memoryview(ob))
def _make_bytes(sval):
return sval.encode('ascii')
# Generic WM_NOTIFY unpacking
def UnpackWMNOTIFY(lparam):
format = "PPi"
buf = win32gui.PyGetMemory(lparam, struct.calcsize(format))
return _MakeResult("WMNOTIFY hwndFrom idFrom code", struct.unpack(format, buf))
def UnpackNMITEMACTIVATE(lparam):
format = _nmhdr_fmt + _nmhdr_align_padding
if is64bit:
# the struct module doesn't handle this correctly as some of the items
# are actually structs in structs, which get individually aligned.
format = format + "iiiiiiixxxxP"
else:
format = format + "iiiiiiiP"
buf = win32gui.PyMakeBuffer(struct.calcsize(format), lparam)
return _MakeResult("NMITEMACTIVATE hwndFrom idFrom code iItem iSubItem uNewState uOldState uChanged actionx actiony lParam",
struct.unpack(format, buf))
# MENUITEMINFO struct
# http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/WinUI/WindowsUserInterface/Resources/Menus/MenuReference/MenuStructures/MENUITEMINFO.asp
# We use the struct module to pack and unpack strings as MENUITEMINFO
# structures. We also have special handling for the 'fMask' item in that
# structure to avoid the caller needing to explicitly check validity
# (None is used if the mask excludes/should exclude the value)
_menuiteminfo_fmt = '5i5PiP'
def PackMENUITEMINFO(fType=None, fState=None, wID=None, hSubMenu=None,
hbmpChecked=None, hbmpUnchecked=None, dwItemData=None,
text=None, hbmpItem=None, dwTypeData=None):
# 'extras' are objects the caller must keep a reference to (as their
# memory is used) for the lifetime of the INFO item.
extras = []
# ack - dwItemData and dwTypeData were confused for a while...
assert dwItemData is None or dwTypeData is None, \
"sorry - these were confused - you probably want dwItemData"
# if we are a long way past 209, then we can nuke the above...
if dwTypeData is not None:
import warnings
warnings.warn("PackMENUITEMINFO: please use dwItemData instead of dwTypeData")
if dwItemData is None:
dwItemData = dwTypeData or 0
fMask = 0
if fType is None: fType = 0
else: fMask |= win32con.MIIM_FTYPE
if fState is None: fState = 0
else: fMask |= win32con.MIIM_STATE
if wID is None: wID = 0
else: fMask |= win32con.MIIM_ID
if hSubMenu is None: hSubMenu = 0
else: fMask |= win32con.MIIM_SUBMENU
if hbmpChecked is None:
assert hbmpUnchecked is None, \
"neither or both checkmark bmps must be given"
hbmpChecked = hbmpUnchecked = 0
else:
assert hbmpUnchecked is not None, \
"neither or both checkmark bmps must be given"
fMask |= win32con.MIIM_CHECKMARKS
if dwItemData is None: dwItemData = 0
else: fMask |= win32con.MIIM_DATA
if hbmpItem is None: hbmpItem = 0
else: fMask |= win32con.MIIM_BITMAP
if text is not None:
fMask |= win32con.MIIM_STRING
str_buf = _make_text_buffer(text)
cch = len(text)
# We are taking address of strbuf - it must not die until windows
# has finished with our structure.
lptext = str_buf.buffer_info()[0]
extras.append(str_buf)
else:
lptext = 0
cch = 0
# Create the struct.
# 'P' format does not accept PyHANDLE's !
item = struct.pack(
_menuiteminfo_fmt,
struct.calcsize(_menuiteminfo_fmt), # cbSize
fMask,
fType,
fState,
wID,
int(hSubMenu),
int(hbmpChecked),
int(hbmpUnchecked),
dwItemData,
lptext,
cch,
int(hbmpItem)
)
# Now copy the string to a writable buffer, so that the result
# could be passed to a 'Get' function
return array.array("b", item), extras
def UnpackMENUITEMINFO(s):
(cb,
fMask,
fType,
fState,
wID,
hSubMenu,
hbmpChecked,
hbmpUnchecked,
dwItemData,
lptext,
cch,
hbmpItem) = struct.unpack(_menuiteminfo_fmt, s)
assert cb==len(s)
if fMask & win32con.MIIM_FTYPE==0: fType = None
if fMask & win32con.MIIM_STATE==0: fState = None
if fMask & win32con.MIIM_ID==0: wID = None
if fMask & win32con.MIIM_SUBMENU==0: hSubMenu = None
if fMask & win32con.MIIM_CHECKMARKS==0: hbmpChecked = hbmpUnchecked = None
if fMask & win32con.MIIM_DATA==0: dwItemData = None
if fMask & win32con.MIIM_BITMAP==0: hbmpItem = None
if fMask & win32con.MIIM_STRING:
text = win32gui.PyGetString(lptext, cch)
else:
text = None
return _MakeResult("MENUITEMINFO fType fState wID hSubMenu hbmpChecked "
"hbmpUnchecked dwItemData text hbmpItem",
(fType, fState, wID, hSubMenu, hbmpChecked, hbmpUnchecked, \
dwItemData, text, hbmpItem))
def EmptyMENUITEMINFO(mask = None, text_buf_size=512):
# text_buf_size is number of *characters* - not necessarily no of bytes.
extra = []
if mask is None:
mask = win32con.MIIM_BITMAP | win32con.MIIM_CHECKMARKS | \
win32con.MIIM_DATA | win32con.MIIM_FTYPE | \
win32con.MIIM_ID | win32con.MIIM_STATE | \
win32con.MIIM_STRING | win32con.MIIM_SUBMENU
# Note: No MIIM_TYPE - this screws win2k/98.
if mask & win32con.MIIM_STRING:
text_buffer = _make_empty_text_buffer(text_buf_size)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
else:
text_addr = text_buf_size = 0
# Now copy the string to a writable buffer, so that the result
# could be passed to a 'Get' function
buf = struct.pack(
_menuiteminfo_fmt,
struct.calcsize(_menuiteminfo_fmt), # cbSize
mask,
0, #fType,
0, #fState,
0, #wID,
0, #hSubMenu,
0, #hbmpChecked,
0, #hbmpUnchecked,
0, #dwItemData,
text_addr,
text_buf_size,
0, #hbmpItem
)
return array.array("b", buf), extra
# MENUINFO struct
_menuinfo_fmt = 'iiiiPiP'
def PackMENUINFO(dwStyle = None, cyMax = None,
hbrBack = None, dwContextHelpID = None, dwMenuData = None,
fMask = 0):
if dwStyle is None: dwStyle = 0
else: fMask |= win32con.MIM_STYLE
if cyMax is None: cyMax = 0
else: fMask |= win32con.MIM_MAXHEIGHT
if hbrBack is None: hbrBack = 0
else: fMask |= win32con.MIM_BACKGROUND
if dwContextHelpID is None: dwContextHelpID = 0
else: fMask |= win32con.MIM_HELPID
if dwMenuData is None: dwMenuData = 0
else: fMask |= win32con.MIM_MENUDATA
# Create the struct.
item = struct.pack(
_menuinfo_fmt,
struct.calcsize(_menuinfo_fmt), # cbSize
fMask,
dwStyle,
cyMax,
hbrBack,
dwContextHelpID,
dwMenuData)
return array.array("b", item)
def UnpackMENUINFO(s):
(cb,
fMask,
dwStyle,
cyMax,
hbrBack,
dwContextHelpID,
dwMenuData) = struct.unpack(_menuinfo_fmt, s)
assert cb==len(s)
if fMask & win32con.MIM_STYLE==0: dwStyle = None
if fMask & win32con.MIM_MAXHEIGHT==0: cyMax = None
if fMask & win32con.MIM_BACKGROUND==0: hbrBack = None
if fMask & win32con.MIM_HELPID==0: dwContextHelpID = None
if fMask & win32con.MIM_MENUDATA==0: dwMenuData = None
return _MakeResult("MENUINFO dwStyle cyMax hbrBack dwContextHelpID dwMenuData",
(dwStyle, cyMax, hbrBack, dwContextHelpID, dwMenuData))
def EmptyMENUINFO(mask = None):
if mask is None:
mask = win32con.MIM_STYLE | win32con.MIM_MAXHEIGHT| \
win32con.MIM_BACKGROUND | win32con.MIM_HELPID | \
win32con.MIM_MENUDATA
buf = struct.pack(
_menuinfo_fmt,
struct.calcsize(_menuinfo_fmt), # cbSize
mask,
0, #dwStyle
0, #cyMax
0, #hbrBack,
0, #dwContextHelpID,
0, #dwMenuData,
)
return array.array("b", buf)
##########################################################################
#
# Tree View structure support - TVITEM, TVINSERTSTRUCT and TVDISPINFO
#
##########################################################################
# XXX - Note that the following implementation of TreeView structures is ripped
# XXX - from the SpamBayes project. It may not quite work correctly yet - I
# XXX - intend checking them later - but having them is better than not at all!
_tvitem_fmt = "iPiiPiiiiP"
# Helpers for the ugly win32 structure packing/unpacking
# XXX - Note that functions using _GetMaskAndVal run 3x faster if they are
# 'inlined' into the function - see PackLVITEM. If the profiler points at
# _GetMaskAndVal(), you should nuke it (patches welcome once they have been
# tested)
def _GetMaskAndVal(val, default, mask, flag):
if val is None:
return mask, default
else:
if flag is not None:
mask |= flag
return mask, val
def PackTVINSERTSTRUCT(parent, insertAfter, tvitem):
tvitem_buf, extra = PackTVITEM(*tvitem)
tvitem_buf = tvitem_buf.tostring()
format = "PP%ds" % len(tvitem_buf)
return struct.pack(format, parent, insertAfter, tvitem_buf), extra
def PackTVITEM(hitem, state, stateMask, text, image, selimage, citems, param):
extra = [] # objects we must keep references to
mask = 0
mask, hitem = _GetMaskAndVal(hitem, 0, mask, commctrl.TVIF_HANDLE)
mask, state = _GetMaskAndVal(state, 0, mask, commctrl.TVIF_STATE)
if not mask & commctrl.TVIF_STATE:
stateMask = 0
mask, text = _GetMaskAndVal(text, None, mask, commctrl.TVIF_TEXT)
mask, image = _GetMaskAndVal(image, 0, mask, commctrl.TVIF_IMAGE)
mask, selimage = _GetMaskAndVal(selimage, 0, mask, commctrl.TVIF_SELECTEDIMAGE)
mask, citems = _GetMaskAndVal(citems, 0, mask, commctrl.TVIF_CHILDREN)
mask, param = _GetMaskAndVal(param, 0, mask, commctrl.TVIF_PARAM)
if text is None:
text_addr = text_len = 0
else:
text_buffer = _make_text_buffer(text)
text_len = len(text)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
buf = struct.pack(_tvitem_fmt,
mask, hitem,
state, stateMask,
text_addr, text_len, # text
image, selimage,
citems, param)
return array.array("b", buf), extra
# Make a new buffer suitable for querying hitem's attributes.
def EmptyTVITEM(hitem, mask = None, text_buf_size=512):
extra = [] # objects we must keep references to
if mask is None:
mask = commctrl.TVIF_HANDLE | commctrl.TVIF_STATE | commctrl.TVIF_TEXT | \
commctrl.TVIF_IMAGE | commctrl.TVIF_SELECTEDIMAGE | \
commctrl.TVIF_CHILDREN | commctrl.TVIF_PARAM
if mask & commctrl.TVIF_TEXT:
text_buffer = _make_empty_text_buffer(text_buf_size)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
else:
text_addr = text_buf_size = 0
buf = struct.pack(_tvitem_fmt,
mask, hitem,
0, 0,
text_addr, text_buf_size, # text
0, 0,
0, 0)
return array.array("b", buf), extra
def UnpackTVITEM(buffer):
item_mask, item_hItem, item_state, item_stateMask, \
item_textptr, item_cchText, item_image, item_selimage, \
item_cChildren, item_param = struct.unpack(_tvitem_fmt, buffer)
# ensure only items listed by the mask are valid (except we assume the
# handle is always valid - some notifications (eg, TVN_ENDLABELEDIT) set a
    # mask that doesn't include the handle, but the docs explicitly say it is.)
if not (item_mask & commctrl.TVIF_TEXT): item_textptr = item_cchText = None
if not (item_mask & commctrl.TVIF_CHILDREN): item_cChildren = None
if not (item_mask & commctrl.TVIF_IMAGE): item_image = None
if not (item_mask & commctrl.TVIF_PARAM): item_param = None
if not (item_mask & commctrl.TVIF_SELECTEDIMAGE): item_selimage = None
if not (item_mask & commctrl.TVIF_STATE): item_state = item_stateMask = None
if item_textptr:
text = win32gui.PyGetString(item_textptr)
else:
text = None
return _MakeResult("TVITEM item_hItem item_state item_stateMask "
"text item_image item_selimage item_cChildren item_param",
(item_hItem, item_state, item_stateMask, text,
item_image, item_selimage, item_cChildren, item_param))
# Unpack the lparam from a "TVNOTIFY" message
def UnpackTVNOTIFY(lparam):
item_size = struct.calcsize(_tvitem_fmt)
format = _nmhdr_fmt + _nmhdr_align_padding
if is64bit:
format = format + "ixxxx"
else:
format = format + "i"
format = format + "%ds%ds" % (item_size, item_size)
buf = win32gui.PyGetMemory(lparam, struct.calcsize(format))
hwndFrom, id, code, action, buf_old, buf_new \
= struct.unpack(format, buf)
item_old = UnpackTVITEM(buf_old)
item_new = UnpackTVITEM(buf_new)
return _MakeResult("TVNOTIFY hwndFrom id code action item_old item_new",
(hwndFrom, id, code, action, item_old, item_new))
def UnpackTVDISPINFO(lparam):
item_size = struct.calcsize(_tvitem_fmt)
format = "PPi%ds" % (item_size,)
buf = win32gui.PyGetMemory(lparam, struct.calcsize(format))
hwndFrom, id, code, buf_item = struct.unpack(format, buf)
item = UnpackTVITEM(buf_item)
return _MakeResult("TVDISPINFO hwndFrom id code item",
(hwndFrom, id, code, item))
#
# List view items
_lvitem_fmt = "iiiiiPiiPi"
def PackLVITEM(item=None, subItem=None, state=None, stateMask=None, text=None, image=None, param=None, indent=None):
extra = [] # objects we must keep references to
mask = 0
# _GetMaskAndVal adds quite a bit of overhead to this function.
if item is None: item = 0 # No mask for item
    if subItem is None: subItem = 0 # No mask for subItem
if state is None:
state = 0
stateMask = 0
else:
mask |= commctrl.LVIF_STATE
if stateMask is None: stateMask = state
if image is None: image = 0
else: mask |= commctrl.LVIF_IMAGE
if param is None: param = 0
else: mask |= commctrl.LVIF_PARAM
if indent is None: indent = 0
else: mask |= commctrl.LVIF_INDENT
if text is None:
text_addr = text_len = 0
else:
mask |= commctrl.LVIF_TEXT
text_buffer = _make_text_buffer(text)
text_len = len(text)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
buf = struct.pack(_lvitem_fmt,
mask, item, subItem,
state, stateMask,
text_addr, text_len, # text
image, param, indent)
return array.array("b", buf), extra
def UnpackLVITEM(buffer):
item_mask, item_item, item_subItem, \
item_state, item_stateMask, \
item_textptr, item_cchText, item_image, \
item_param, item_indent = struct.unpack(_lvitem_fmt, buffer)
# ensure only items listed by the mask are valid
if not (item_mask & commctrl.LVIF_TEXT): item_textptr = item_cchText = None
if not (item_mask & commctrl.LVIF_IMAGE): item_image = None
if not (item_mask & commctrl.LVIF_PARAM): item_param = None
if not (item_mask & commctrl.LVIF_INDENT): item_indent = None
if not (item_mask & commctrl.LVIF_STATE): item_state = item_stateMask = None
if item_textptr:
text = win32gui.PyGetString(item_textptr)
else:
text = None
return _MakeResult("LVITEM item_item item_subItem item_state "
"item_stateMask text item_image item_param item_indent",
(item_item, item_subItem, item_state, item_stateMask,
text, item_image, item_param, item_indent))
# Unpack an "LVNOTIFY" message
def UnpackLVDISPINFO(lparam):
item_size = struct.calcsize(_lvitem_fmt)
format = _nmhdr_fmt + _nmhdr_align_padding + ("%ds" % (item_size,))
buf = win32gui.PyGetMemory(lparam, struct.calcsize(format))
hwndFrom, id, code, buf_item = struct.unpack(format, buf)
item = UnpackLVITEM(buf_item)
return _MakeResult("LVDISPINFO hwndFrom id code item",
(hwndFrom, id, code, item))
def UnpackLVNOTIFY(lparam):
format = _nmhdr_fmt + _nmhdr_align_padding + "7i"
if is64bit:
format = format + "xxxx" # point needs padding.
format = format + "P"
buf = win32gui.PyGetMemory(lparam, struct.calcsize(format))
hwndFrom, id, code, item, subitem, newstate, oldstate, \
changed, pt_x, pt_y, lparam = struct.unpack(format, buf)
return _MakeResult("UnpackLVNOTIFY hwndFrom id code item subitem "
"newstate oldstate changed pt lparam",
(hwndFrom, id, code, item, subitem, newstate, oldstate,
changed, (pt_x, pt_y), lparam))
# Make a new buffer suitable for querying an items attributes.
def EmptyLVITEM(item, subitem, mask = None, text_buf_size=512):
extra = [] # objects we must keep references to
if mask is None:
mask = commctrl.LVIF_IMAGE | commctrl.LVIF_INDENT | commctrl.LVIF_TEXT | \
commctrl.LVIF_PARAM | commctrl.LVIF_STATE
if mask & commctrl.LVIF_TEXT:
text_buffer = _make_empty_text_buffer(text_buf_size)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
else:
text_addr = text_buf_size = 0
buf = struct.pack(_lvitem_fmt,
mask, item, subitem,
0, 0,
text_addr, text_buf_size, # text
0, 0, 0)
return array.array("b", buf), extra
# List view column structure
_lvcolumn_fmt = "iiiPiiii"
def PackLVCOLUMN(fmt=None, cx=None, text=None, subItem=None, image=None, order=None):
extra = [] # objects we must keep references to
mask = 0
mask, fmt = _GetMaskAndVal(fmt, 0, mask, commctrl.LVCF_FMT)
mask, cx = _GetMaskAndVal(cx, 0, mask, commctrl.LVCF_WIDTH)
mask, text = _GetMaskAndVal(text, None, mask, commctrl.LVCF_TEXT)
mask, subItem = _GetMaskAndVal(subItem, 0, mask, commctrl.LVCF_SUBITEM)
mask, image = _GetMaskAndVal(image, 0, mask, commctrl.LVCF_IMAGE)
mask, order= _GetMaskAndVal(order, 0, mask, commctrl.LVCF_ORDER)
if text is None:
text_addr = text_len = 0
else:
text_buffer = _make_text_buffer(text)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
text_len = len(text)
buf = struct.pack(_lvcolumn_fmt,
mask, fmt, cx,
text_addr, text_len, # text
subItem, image, order)
return array.array("b", buf), extra
def UnpackLVCOLUMN(lparam):
mask, fmt, cx, text_addr, text_size, subItem, image, order = \
struct.unpack(_lvcolumn_fmt, lparam)
# ensure only items listed by the mask are valid
if not (mask & commctrl.LVCF_FMT): fmt = None
if not (mask & commctrl.LVCF_WIDTH): cx = None
if not (mask & commctrl.LVCF_TEXT): text_addr = text_size = None
if not (mask & commctrl.LVCF_SUBITEM): subItem = None
if not (mask & commctrl.LVCF_IMAGE): image = None
if not (mask & commctrl.LVCF_ORDER): order = None
if text_addr:
text = win32gui.PyGetString(text_addr)
else:
text = None
return _MakeResult("LVCOLUMN fmt cx text subItem image order",
(fmt, cx, text, subItem, image, order))
# Make a new buffer suitable for querying an items attributes.
def EmptyLVCOLUMN(mask = None, text_buf_size=512):
extra = [] # objects we must keep references to
if mask is None:
mask = commctrl.LVCF_FMT | commctrl.LVCF_WIDTH | commctrl.LVCF_TEXT | \
commctrl.LVCF_SUBITEM | commctrl.LVCF_IMAGE | commctrl.LVCF_ORDER
if mask & commctrl.LVCF_TEXT:
text_buffer = _make_empty_text_buffer(text_buf_size)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
else:
text_addr = text_buf_size = 0
buf = struct.pack(_lvcolumn_fmt,
mask, 0, 0,
text_addr, text_buf_size, # text
0, 0, 0)
return array.array("b", buf), extra
# List view hit-test.
def PackLVHITTEST(pt):
format = "iiiii"
buf = struct.pack(format,
pt[0], pt[1],
0, 0, 0)
return array.array("b", buf), None
def UnpackLVHITTEST(buf):
format = "iiiii"
x, y, flags, item, subitem = struct.unpack(format, buf)
return _MakeResult("LVHITTEST pt flags item subitem",
((x,y), flags, item, subitem))
def PackHDITEM(cxy = None, text = None, hbm = None, fmt = None,
param = None, image = None, order = None):
extra = [] # objects we must keep references to
mask = 0
mask, cxy = _GetMaskAndVal(cxy, 0, mask, commctrl.HDI_HEIGHT)
mask, text = _GetMaskAndVal(text, None, mask, commctrl.LVCF_TEXT)
mask, hbm = _GetMaskAndVal(hbm, 0, mask, commctrl.HDI_BITMAP)
mask, fmt = _GetMaskAndVal(fmt, 0, mask, commctrl.HDI_FORMAT)
mask, param = _GetMaskAndVal(param, 0, mask, commctrl.HDI_LPARAM)
mask, image = _GetMaskAndVal(image, 0, mask, commctrl.HDI_IMAGE)
mask, order = _GetMaskAndVal(order, 0, mask, commctrl.HDI_ORDER)
if text is None:
text_addr = text_len = 0
else:
text_buffer = _make_text_buffer(text)
extra.append(text_buffer)
text_addr, _ = text_buffer.buffer_info()
text_len = len(text)
format = "iiPPiiPiiii"
buf = struct.pack(format,
mask, cxy, text_addr, hbm, text_len,
fmt, param, image, order, 0, 0)
return array.array("b", buf), extra
# Device notification stuff
# Generic function for packing a DEV_BROADCAST_* structure - generally used
# by the other PackDEV_BROADCAST_* functions in this module.
def PackDEV_BROADCAST(devicetype, rest_fmt, rest_data, extra_data=_make_bytes('')):
# It seems a requirement is 4 byte alignment, even for the 'BYTE data[1]'
# field (eg, that would make DEV_BROADCAST_HANDLE 41 bytes, but we must
    # be 44).
extra_data += _make_bytes('\0' * (4-len(extra_data)%4))
format = "iii" + rest_fmt
full_size = struct.calcsize(format) + len(extra_data)
data = (full_size, devicetype, 0) + rest_data
return struct.pack(format, *data) + extra_data
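Note that the padding expression above adds between 1 and 4 NUL bytes — a full 4 when `extra_data` is already aligned, which may be intentional (a guaranteed terminator). For comparison, a strict round-up-to-4 helper, assuming only the alignment requirement matters:

```python
def pad4(n):
    # bytes needed to round n up to the next 4-byte boundary (0 if already aligned)
    return (4 - n % 4) % 4

for n in (0, 1, 4, 41, 44):
    print(n, "->", pad4(n))  # 0->0, 1->3, 4->0, 41->3, 44->0
```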
def PackDEV_BROADCAST_HANDLE(handle, hdevnotify=0, guid=_make_bytes("\0"*16), name_offset=0, data=_make_bytes("\0")):
return PackDEV_BROADCAST(win32con.DBT_DEVTYP_HANDLE, "PP16sl",
(int(handle), int(hdevnotify), _make_memory(guid), name_offset),
data)
def PackDEV_BROADCAST_VOLUME(unitmask, flags):
return PackDEV_BROADCAST(win32con.DBT_DEVTYP_VOLUME, "II",
(unitmask, flags))
def PackDEV_BROADCAST_DEVICEINTERFACE(classguid, name=""):
if win32gui.UNICODE:
# This really means "is py3k?" - so not accepting bytes is OK
if not isinstance(name, str):
raise TypeError("Must provide unicode for the name")
name = name.encode('unicode-internal')
else:
# py2k was passed a unicode object - encode as mbcs.
if isinstance(name, str):
name = name.encode('mbcs')
# 16 bytes for the IID followed by \0 term'd string.
rest_fmt = "16s%ds" % len(name)
# _make_memory(iid) hoops necessary to get the raw IID bytes.
rest_data = (_make_memory(pywintypes.IID(classguid)), name)
return PackDEV_BROADCAST(win32con.DBT_DEVTYP_DEVICEINTERFACE, rest_fmt, rest_data)
# An object returned by UnpackDEV_BROADCAST.
class DEV_BROADCAST_INFO:
def __init__(self, devicetype, **kw):
self.devicetype = devicetype
self.__dict__.update(kw)
def __str__(self):
return "DEV_BROADCAST_INFO:" + str(self.__dict__)
# Support for unpacking the 'lparam'
def UnpackDEV_BROADCAST(lparam):
if lparam == 0:
return None
hdr_format = "iii"
hdr_size = struct.calcsize(hdr_format)
hdr_buf = win32gui.PyGetMemory(lparam, hdr_size)
size, devtype, reserved = struct.unpack("iii", hdr_buf)
# Due to x64 alignment issues, we need to use the full format string over
# the entire buffer. ie, on x64:
# calcsize('iiiP') != calcsize('iii')+calcsize('P')
buf = win32gui.PyGetMemory(lparam, size)
extra = x = {}
if devtype == win32con.DBT_DEVTYP_HANDLE:
# 2 handles, a GUID, a LONG and possibly an array following...
fmt = hdr_format + "PP16sl"
_, _, _, x['handle'], x['hdevnotify'], guid_bytes, x['nameoffset'] = \
struct.unpack(fmt, buf[:struct.calcsize(fmt)])
x['eventguid'] = pywintypes.IID(guid_bytes, True)
elif devtype == win32con.DBT_DEVTYP_DEVICEINTERFACE:
fmt = hdr_format + "16s"
_, _, _, guid_bytes = struct.unpack(fmt, buf[:struct.calcsize(fmt)])
x['classguid'] = pywintypes.IID(guid_bytes, True)
x['name'] = win32gui.PyGetString(lparam + struct.calcsize(fmt))
elif devtype == win32con.DBT_DEVTYP_VOLUME:
# int mask and flags
fmt = hdr_format + "II"
_, _, _, x['unitmask'], x['flags'] = struct.unpack(fmt, buf[:struct.calcsize(fmt)])
else:
raise NotImplementedError("unknown device type %d" % (devtype,))
return DEV_BROADCAST_INFO(devtype, **extra)
| [
"mail@timgolden.me.uk"
] | mail@timgolden.me.uk |
de9a7a96eb7c0d7336f901b12ee53b3107656994 | 7503725bc8098d34e0973c5661685582bef0cbbb | /mmdet2trt/models/roi_heads/htc_roi_head.py | 6e9f71f6bf2af79e897e5167e107623189a932a7 | [
"Apache-2.0"
] | permissive | zjj-2015/mmdetection-to-tensorrt | bf280ef9359ec95293eee5062cf184786133543a | e1b91743cd4c9a145fc2b2701baef5ff648a1a4c | refs/heads/master | 2023-08-30T05:00:26.866383 | 2021-11-16T14:18:10 | 2021-11-16T14:18:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,898 | py | import mmdet2trt.ops.util_ops as mm2trt_util
import torch
import torch.nn.functional as F
from mmdet2trt.core.post_processing import merge_aug_masks
from mmdet2trt.models.builder import build_wraper, register_wraper
from mmdet.core.bbox.coder.delta_xywh_bbox_coder import delta2bbox
from .cascade_roi_head import CascadeRoIHeadWraper
@register_wraper('mmdet.models.roi_heads.HybridTaskCascadeRoIHead')
class HybridTaskCascadeRoIHeadWraper(CascadeRoIHeadWraper):
def __init__(self, module, wrap_config):
super(HybridTaskCascadeRoIHeadWraper,
self).__init__(module, wrap_config)
module = self.module
self.semantic_head = None
if module.semantic_head is not None:
self.semantic_roi_extractor = build_wraper(
module.semantic_roi_extractor)
self.semantic_head = module.semantic_head
def _bbox_forward(self, stage, x, rois, semantic_feat=None):
bbox_roi_extractor = self.bbox_roi_extractor[stage]
bbox_head = self.bbox_head[stage]
if rois.shape[1] == 4:
zeros = rois.new_zeros([rois.shape[0], 1])
rois = torch.cat([zeros, rois], dim=1)
roi_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], rois)
if self.module.with_semantic and 'box' in self.module.semantic_fusion:
bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat],
rois)
if bbox_semantic_feat.shape[-2:] != roi_feats.shape[-2:]:
bbox_semantic_feat = F.adaptive_avg_pool2d(
bbox_semantic_feat, roi_feats.shape[-2:])
cls_score, bbox_pred = bbox_head(roi_feats)
bbox_results = dict(
cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=roi_feats)
return bbox_results
def regress_by_class(self, stage, rois, label, bbox_pred):
bbox_head = self.bbox_head[stage]
reg_class_agnostic = bbox_head.reg_class_agnostic
if not reg_class_agnostic:
label = label * 4
inds = torch.stack((label, label + 1, label + 2, label + 3), 1)
bbox_pred = torch.gather(bbox_pred, 1, inds)
means = bbox_head.bbox_coder.means
stds = bbox_head.bbox_coder.stds
new_rois = delta2bbox(rois, bbox_pred, means, stds)
return new_rois
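`regress_by_class` leans on `delta2bbox` to turn `(dx, dy, dw, dh)` offsets into boxes. Below is a simplified scalar sketch of the standard decoding (zero means, unit stds, no width/height-ratio clipping — mmdet's real implementation is vectorised and configurable, so this is illustrative only):

```python
import math

def delta2bbox_scalar(box, delta):
    # box = (x1, y1, x2, y2); delta = (dx, dy, dw, dh)
    x1, y1, x2, y2 = box
    dx, dy, dw, dh = delta
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    # centre offsets are relative to box size; dw/dh act in log-space
    ncx, ncy = cx + dx * w, cy + dy * h
    nw, nh = w * math.exp(dw), h * math.exp(dh)
    return (ncx - 0.5 * nw, ncy - 0.5 * nh, ncx + 0.5 * nw, ncy + 0.5 * nh)

print(delta2bbox_scalar((0, 0, 10, 10), (0, 0, 0, 0)))  # (0.0, 0.0, 10.0, 10.0)
```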
def forward(self, feat, proposals, img_shape):
ms_scores = []
batch_size = proposals.shape[0]
num_proposals = proposals.shape[1]
rois_pad = mm2trt_util.arange_by_input(proposals, 0).unsqueeze(1)
rois_pad = rois_pad.repeat(1, num_proposals).view(-1, 1)
proposals = proposals.view(-1, 4)
rois = proposals
if self.module.with_semantic:
_, semantic_feat = self.semantic_head(feat)
else:
semantic_feat = None
for i in range(self.num_stages):
bbox_results = self._bbox_forward(
i,
feat,
torch.cat([rois_pad, rois], dim=1),
semantic_feat=semantic_feat)
ms_scores.append(bbox_results['cls_score'])
bbox_pred = bbox_results['bbox_pred']
if i < self.num_stages - 1:
bbox_label = bbox_results['cls_score'].argmax(dim=1)
rois = self.bbox_head[i].regress_by_class(
rois, bbox_label, bbox_pred, img_shape)
rois = torch.cat([rois_pad, rois], dim=1)
# bbox_head.get_boxes
cls_score = bbox_results['cls_score']
bbox_pred = bbox_results['bbox_pred']
num_detections, det_boxes, det_scores, det_classes = self.bbox_head[
-1].get_bboxes(rois, cls_score, bbox_pred, img_shape, batch_size,
num_proposals, self.test_cfg)
result = [num_detections, det_boxes, det_scores, det_classes]
if self.enable_mask:
# mask roi input
num_mask_proposals = det_boxes.size(1)
rois_pad = mm2trt_util.arange_by_input(det_boxes, 0).unsqueeze(1)
rois_pad = rois_pad.repeat(1, num_mask_proposals).view(-1, 1)
mask_proposals = det_boxes.view(-1, 4)
mask_rois = torch.cat([rois_pad, mask_proposals], dim=1)
mask_roi_extractor = self.mask_roi_extractor[-1]
mask_feats = mask_roi_extractor(
feat[:mask_roi_extractor.num_inputs], mask_rois)
if self.module.with_semantic and ('mask'
in self.module.semantic_fusion):
mask_semantic_feat = self.semantic_roi_extractor(
[semantic_feat], mask_rois)
mask_feats += mask_semantic_feat
last_feat = None
aug_masks = []
for i in range(self.num_stages):
mask_head = self.mask_head[i]
if self.module.mask_info_flow:
mask_pred, last_feat = mask_head(mask_feats, last_feat)
else:
mask_pred = mask_head(mask_feats)
mask_pred = mask_pred.sigmoid()
aug_masks.append(mask_pred)
mask_pred = merge_aug_masks(aug_masks, self.test_cfg)
mc, mh, mw = mask_pred.shape[1:]
mask_pred = mask_pred.reshape(batch_size, -1, mc, mh, mw)
if not self.module.mask_head[-1].class_agnostic:
det_index = det_classes.unsqueeze(-1).long()
det_index = det_index + 1
mask_pad = mask_pred[:, :, 0:1, ...] * 0
mask_pred = torch.cat([mask_pad, mask_pred], dim=2)
mask_pred = mm2trt_util.gather_topk(
mask_pred, dim=2, index=det_index)
mask_pred = mask_pred.squeeze(2)
result += [mask_pred]
return result
| [
"streetyao@live.com"
] | streetyao@live.com |
07d34db149dfe54a5b165170aabfd130243cfe8d | fab14fae2b494068aa793901d76464afb965df7e | /benchmarks/f3_wrong_hints/scaling_software_termination/12-2Nested_false-termination_5.py | 03da5a2a4a14729312115bb8d36de8036d58e9cf | [
"MIT"
] | permissive | teodorov/F3 | 673f6f9ccc25acdfdecbfc180f439253474ba250 | c863215c318d7d5f258eb9be38c6962cf6863b52 | refs/heads/master | 2023-08-04T17:37:38.771863 | 2021-09-16T07:38:28 | 2021-09-16T07:38:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,525 | py |
from typing import Tuple, FrozenSet
from pysmt.environment import Environment as PysmtEnv
from pysmt.fnode import FNode
import pysmt.typing as types
from utils import symb_to_next
from hint import Hint, Location


def transition_system(env: PysmtEnv) -> Tuple[FrozenSet[FNode], FNode, FNode,
FNode]:
assert isinstance(env, PysmtEnv)
mgr = env.formula_manager
pc = mgr.Symbol("pc", types.INT)
x = mgr.Symbol("x", types.INT)
y = mgr.Symbol("y", types.INT)
x_pc = symb_to_next(mgr, pc)
x_x = symb_to_next(mgr, x)
x_y = symb_to_next(mgr, y)
symbols = frozenset([pc, x, y])
m_1 = mgr.Int(-1)
n_locs = 3
max_int = n_locs
ints = []
pcs = []
x_pcs = []
for idx in range(n_locs):
num = mgr.Int(idx)
ints.append(num)
pcs.append(mgr.Equals(pc, num))
x_pcs.append(mgr.Equals(x_pc, num))
for idx in range(n_locs, max_int):
num = mgr.Int(idx)
ints.append(num)
pcend = mgr.Equals(pc, m_1)
x_pcend = mgr.Equals(x_pc, m_1)
init = pcs[0]
cfg = []
# pc = 0 & (x >= 0) -> pc' = 1
cond = mgr.GE(x, ints[0])
cfg.append(mgr.Implies(mgr.And(pcs[0], cond), x_pcs[1]))
# pc = 0 & !(x >= 0) -> pc' = -1
cfg.append(mgr.Implies(mgr.And(pcs[0], mgr.Not(cond)), x_pcend))
# pc = 1 -> pc' = 2
cfg.append(mgr.Implies(pcs[1], x_pcs[2]))
# pc = 2 -> pc' = 0
cfg.append(mgr.Implies(pcs[2], x_pcs[0]))
# pc = -1 -> pc' = -1
cfg.append(mgr.Implies(pcend, x_pcend))
trans = []
same_x = mgr.Equals(x_x, x)
same_y = mgr.Equals(x_y, y)
same = mgr.And(same_x, same_y)
# pc = 0 -> same
trans.append(mgr.Implies(pcs[0], same))
# pc = 1 -> x' = x + y & same_y
trans.append(mgr.Implies(pcs[1],
mgr.And(mgr.Equals(x_x, mgr.Plus(x, y)),
same_y)))
# pc = 2 -> same_x & y' = y + 1
trans.append(mgr.Implies(pcs[2],
mgr.And(same_x,
mgr.Equals(x_y, mgr.Plus(y, ints[1])))))
# pc = end -> same
trans.append(mgr.Implies(pcend, same))
trans = mgr.And(*cfg, *trans)
fairness = mgr.Not(mgr.Equals(pc, m_1))
return symbols, init, trans, fairness


def hints(env: PysmtEnv) -> FrozenSet[Hint]:
assert isinstance(env, PysmtEnv)
mgr = env.formula_manager
pc = mgr.Symbol("pc", types.INT)
x = mgr.Symbol("x", types.INT)
y = mgr.Symbol("y", types.INT)
symbs = frozenset([pc, x, y])
m_100 = mgr.Int(-100)
m_1 = mgr.Int(-1)
i_0 = mgr.Int(0)
i_1 = mgr.Int(1)
i_2 = mgr.Int(2)
i_4 = mgr.Int(4)
i_20 = mgr.Int(20)
x_pc = symb_to_next(mgr, pc)
x_x = symb_to_next(mgr, x)
x_y = symb_to_next(mgr, y)
res = []
stutter = mgr.Equals(x_x, x)
loc = Location(env, mgr.GE(x, i_20), mgr.GE(y, i_1), stutterT=stutter)
loc.set_progress(0, mgr.Equals(x_x, mgr.Plus(x, y)))
h_x = Hint("h_x0", env, frozenset([x]), symbs)
h_x.set_locs([loc])
res.append(h_x)
stutter = mgr.Equals(x_x, x)
loc = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1), stutterT=stutter)
loc.set_progress(0, mgr.Equals(x_x, mgr.Plus(x, y)))
h_x = Hint("h_x1", env, frozenset([x]), symbs)
h_x.set_locs([loc])
res.append(h_x)
loc0 = Location(env, mgr.GE(y, m_100), mgr.LE(x, i_20))
loc0.set_progress(1, mgr.Equals(x_y, mgr.Plus(x, y)))
loc1 = Location(env, mgr.TRUE(), mgr.GE(x, m_100))
loc1.set_progress(0, mgr.Equals(x_y, m_100))
h_y = Hint("h_y2", env, frozenset([y]), symbs)
h_y.set_locs([loc0, loc1])
res.append(h_y)
loc0 = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1))
loc0.set_progress(1, mgr.Equals(x_x, mgr.Plus(x, y)))
loc1 = Location(env, mgr.GE(x, i_2), mgr.GE(y, i_1))
loc1.set_progress(0, mgr.Equals(x_x, y))
h_x = Hint("h_x2", env, frozenset([x]), symbs)
h_x.set_locs([loc0, loc1])
res.append(h_x)
loc0 = Location(env, mgr.GE(y, m_100), mgr.LE(x, i_20))
loc0.set_progress(1, mgr.Equals(x_y, mgr.Times(x, y)))
loc1 = Location(env, mgr.TRUE(), mgr.GE(x, m_100))
loc1.set_progress(0, mgr.Equals(x_y, m_100))
h_y = Hint("h_y3", env, frozenset([y]), symbs)
h_y.set_locs([loc0, loc1])
res.append(h_y)
loc0 = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1))
loc0.set_progress(1, mgr.Equals(x_x, mgr.Times(x, y)))
loc1 = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1))
loc1.set_progress(0, mgr.Equals(x_x, y))
h_x = Hint("h_x3", env, frozenset([x]), symbs)
h_x.set_locs([loc0, loc1])
res.append(h_x)
loc0 = Location(env, mgr.GE(y, m_100), mgr.LE(x, i_20))
loc0.set_progress(1, mgr.Equals(x_y, mgr.Times(x, y)))
loc1 = Location(env, mgr.TRUE(), mgr.GE(x, m_100))
loc1.set_progress(2, mgr.GE(x_y, i_20))
loc2 = Location(env, mgr.TRUE())
loc2.set_progress(0, mgr.And(mgr.GE(x_y, m_100), mgr.LE(x_y, i_0)))
h_y = Hint("h_y4", env, frozenset([y]), symbs)
h_y.set_locs([loc0, loc1, loc2])
res.append(h_y)
loc0 = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1))
loc0.set_progress(1, mgr.Equals(x_x, mgr.Times(x, y)))
loc1 = Location(env, mgr.GE(x, i_1), mgr.GE(y, i_1))
loc1.set_progress(2, mgr.GT(x_x, y))
loc2 = Location(env, mgr.GE(x, i_2))
loc2.set_progress(0, mgr.GE(x_x, i_20))
h_x = Hint("h_x4", env, frozenset([x]), symbs)
h_x.set_locs([loc0, loc1, loc2])
res.append(h_x)
loc0 = Location(env, mgr.TRUE())
loc0.set_progress(0, mgr.TRUE())
h_pc = Hint("h_pc1", env, frozenset([pc]), symbs)
h_pc.set_locs([loc0])
res.append(h_pc)
loc0 = Location(env, mgr.GE(y, m_100))
loc0.set_progress(0, mgr.Equals(x_y, mgr.Times(y, y)))
h_y = Hint("h_y5", env, frozenset([y]), symbs)
h_y.set_locs([loc0])
res.append(h_y)
loc0 = Location(env, mgr.Equals(pc, i_1))
loc0.set_progress(1, mgr.GT(x_pc, pc))
loc1 = Location(env, mgr.GE(pc, i_2))
loc1.set_progress(0, mgr.Equals(x_pc, mgr.Div(pc, pc)))
h_pc = Hint("h_pc2", env, frozenset([pc]), symbs)
h_pc.set_locs([loc0, loc1])
res.append(h_pc)
loc0 = Location(env, mgr.LE(x, i_20))
loc0.set_progress(1, mgr.Equals(x_x, mgr.Plus(mgr.Times(x, x), i_1)))
loc1 = Location(env, mgr.GE(x, i_20))
loc1.set_progress(0, mgr.LT(x_x, mgr.Times(m_1, x, x)))
h_x = Hint("h_x6", env, frozenset([x]), symbs)
h_x.set_locs([loc0, loc1])
res.append(h_x)
return frozenset(res)
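The transition system above encodes the loop `while x >= 0: x += y; y += 1` over a small program counter (pc 0 tests the guard, pc 1 updates `x`, pc 2 updates `y`). A hedged standalone sketch (names are illustrative, not part of the benchmark API) that simulates it for a bounded number of steps:

```python
# Bounded simulation of the loop encoded by transition_system:
# pc 0 tests the guard x >= 0, pc 1 does x += y, pc 2 does y += 1.
def simulate(x, y, max_steps=100):
    for _ in range(max_steps):
        if not x >= 0:
            return ("terminated", x, y)
        x += y
        y += 1
    return ("running", x, y)

# Starting from x = 0, y = 1 the guard never fails, matching the
# non-terminating behaviour the "false-termination" benchmark targets.
print(simulate(0, 1)[0])  # running
```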
| [
"en.magnago@gmail.com"
] | en.magnago@gmail.com |
ec6be64c3fe458b8f2011b5ad6bf558c7309692b | 5da5473ff3026165a47f98744bac82903cf008e0 | /packages/google-cloud-bigquery-datapolicies/google/cloud/bigquery_datapolicies_v1/services/data_policy_service/client.py | 85e12cc88d1835ce78daa23989906d63f63601f7 | [
"Apache-2.0"
] | permissive | googleapis/google-cloud-python | ed61a5f03a476ab6053870f4da7bc5534e25558b | 93c4e63408c65129422f65217325f4e7d41f7edf | refs/heads/main | 2023-09-04T09:09:07.852632 | 2023-08-31T22:49:26 | 2023-08-31T22:49:26 | 16,316,451 | 2,792 | 917 | Apache-2.0 | 2023-09-14T21:45:18 | 2014-01-28T15:51:47 | Python | UTF-8 | Python | false | false | 62,935 | py |
# -*- coding: utf-8 -*-
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import OrderedDict
import os
import re
from typing import (
Dict,
Mapping,
MutableMapping,
MutableSequence,
Optional,
Sequence,
Tuple,
Type,
Union,
cast,
)
from google.api_core import client_options as client_options_lib
from google.api_core import exceptions as core_exceptions
from google.api_core import gapic_v1
from google.api_core import retry as retries
from google.auth import credentials as ga_credentials # type: ignore
from google.auth.exceptions import MutualTLSChannelError # type: ignore
from google.auth.transport import mtls # type: ignore
from google.auth.transport.grpc import SslCredentials # type: ignore
from google.oauth2 import service_account # type: ignore
from google.cloud.bigquery_datapolicies_v1 import gapic_version as package_version
try:
OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault]
except AttributeError: # pragma: NO COVER
OptionalRetry = Union[retries.Retry, object] # type: ignore
from google.iam.v1 import iam_policy_pb2 # type: ignore
from google.iam.v1 import policy_pb2 # type: ignore
from google.protobuf import field_mask_pb2 # type: ignore
from google.cloud.bigquery_datapolicies_v1.services.data_policy_service import pagers
from google.cloud.bigquery_datapolicies_v1.types import datapolicy
from .transports.base import DEFAULT_CLIENT_INFO, DataPolicyServiceTransport
from .transports.grpc import DataPolicyServiceGrpcTransport
from .transports.grpc_asyncio import DataPolicyServiceGrpcAsyncIOTransport
from .transports.rest import DataPolicyServiceRestTransport


class DataPolicyServiceClientMeta(type):
"""Metaclass for the DataPolicyService client.
This provides class-level methods for building and retrieving
support objects (e.g. transport) without polluting the client instance
objects.
"""
_transport_registry = (
OrderedDict()
) # type: Dict[str, Type[DataPolicyServiceTransport]]
_transport_registry["grpc"] = DataPolicyServiceGrpcTransport
_transport_registry["grpc_asyncio"] = DataPolicyServiceGrpcAsyncIOTransport
_transport_registry["rest"] = DataPolicyServiceRestTransport
def get_transport_class(
cls,
label: Optional[str] = None,
) -> Type[DataPolicyServiceTransport]:
"""Returns an appropriate transport class.
Args:
label: The name of the desired transport. If none is
provided, then the first transport in the registry is used.
Returns:
The transport class to use.
"""
# If a specific transport is requested, return that one.
if label:
return cls._transport_registry[label]
# No transport is requested; return the default (that is, the first one
# in the dictionary).
return next(iter(cls._transport_registry.values()))


class DataPolicyServiceClient(metaclass=DataPolicyServiceClientMeta):
"""Data Policy Service provides APIs for managing the
label-policy bindings.
"""
@staticmethod
def _get_default_mtls_endpoint(api_endpoint):
"""Converts api endpoint to mTLS endpoint.
Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to
"*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively.
Args:
api_endpoint (Optional[str]): the api endpoint to convert.
Returns:
str: converted mTLS api endpoint.
"""
if not api_endpoint:
return api_endpoint
mtls_endpoint_re = re.compile(
r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?"
)
m = mtls_endpoint_re.match(api_endpoint)
name, mtls, sandbox, googledomain = m.groups()
if mtls or not googledomain:
return api_endpoint
if sandbox:
return api_endpoint.replace(
"sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
)
return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
DEFAULT_ENDPOINT = "bigquerydatapolicy.googleapis.com"
DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore
DEFAULT_ENDPOINT
)
@classmethod
def from_service_account_info(cls, info: dict, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
info.
Args:
info (dict): The service account private key info.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
DataPolicyServiceClient: The constructed client.
"""
credentials = service_account.Credentials.from_service_account_info(info)
kwargs["credentials"] = credentials
return cls(*args, **kwargs)
@classmethod
def from_service_account_file(cls, filename: str, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
file.
Args:
filename (str): The path to the service account private key json
file.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
DataPolicyServiceClient: The constructed client.
"""
credentials = service_account.Credentials.from_service_account_file(filename)
kwargs["credentials"] = credentials
return cls(*args, **kwargs)
from_service_account_json = from_service_account_file
@property
def transport(self) -> DataPolicyServiceTransport:
"""Returns the transport used by the client instance.
Returns:
DataPolicyServiceTransport: The transport used by the client
instance.
"""
return self._transport
@staticmethod
def data_policy_path(
project: str,
location: str,
data_policy: str,
) -> str:
"""Returns a fully-qualified data_policy string."""
return (
"projects/{project}/locations/{location}/dataPolicies/{data_policy}".format(
project=project,
location=location,
data_policy=data_policy,
)
)
@staticmethod
def parse_data_policy_path(path: str) -> Dict[str, str]:
"""Parses a data_policy path into its component segments."""
m = re.match(
r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/dataPolicies/(?P<data_policy>.+?)$",
path,
)
return m.groupdict() if m else {}
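The pair of helpers above build a fully-qualified resource name and parse it back into its segments. A self-contained sketch of that round trip (the regex is copied from `parse_data_policy_path`; the sample values are illustrative):

```python
import re

# Build a data policy resource name, then parse it back into its segments.
path = "projects/{project}/locations/{location}/dataPolicies/{data_policy}".format(
    project="my-proj", location="us", data_policy="pii-mask"
)
m = re.match(
    r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/dataPolicies/(?P<data_policy>.+?)$",
    path,
)
print(m.groupdict())  # {'project': 'my-proj', 'location': 'us', 'data_policy': 'pii-mask'}
```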
@staticmethod
def common_billing_account_path(
billing_account: str,
) -> str:
"""Returns a fully-qualified billing_account string."""
return "billingAccounts/{billing_account}".format(
billing_account=billing_account,
)
@staticmethod
def parse_common_billing_account_path(path: str) -> Dict[str, str]:
"""Parse a billing_account path into its component segments."""
m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path)
return m.groupdict() if m else {}
@staticmethod
def common_folder_path(
folder: str,
) -> str:
"""Returns a fully-qualified folder string."""
return "folders/{folder}".format(
folder=folder,
)
@staticmethod
def parse_common_folder_path(path: str) -> Dict[str, str]:
"""Parse a folder path into its component segments."""
m = re.match(r"^folders/(?P<folder>.+?)$", path)
return m.groupdict() if m else {}
@staticmethod
def common_organization_path(
organization: str,
) -> str:
"""Returns a fully-qualified organization string."""
return "organizations/{organization}".format(
organization=organization,
)
@staticmethod
def parse_common_organization_path(path: str) -> Dict[str, str]:
"""Parse a organization path into its component segments."""
m = re.match(r"^organizations/(?P<organization>.+?)$", path)
return m.groupdict() if m else {}
@staticmethod
def common_project_path(
project: str,
) -> str:
"""Returns a fully-qualified project string."""
return "projects/{project}".format(
project=project,
)
@staticmethod
def parse_common_project_path(path: str) -> Dict[str, str]:
"""Parse a project path into its component segments."""
m = re.match(r"^projects/(?P<project>.+?)$", path)
return m.groupdict() if m else {}
@staticmethod
def common_location_path(
project: str,
location: str,
) -> str:
"""Returns a fully-qualified location string."""
return "projects/{project}/locations/{location}".format(
project=project,
location=location,
)
@staticmethod
def parse_common_location_path(path: str) -> Dict[str, str]:
"""Parse a location path into its component segments."""
m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
return m.groupdict() if m else {}
@classmethod
def get_mtls_endpoint_and_cert_source(
cls, client_options: Optional[client_options_lib.ClientOptions] = None
):
"""Return the API endpoint and client cert source for mutual TLS.
The client cert source is determined in the following order:
(1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
client cert source is None.
(2) if `client_options.client_cert_source` is provided, use the provided one; if the
default client cert source exists, use the default one; otherwise the client cert
source is None.
The API endpoint is determined in the following order:
        (1) if `client_options.api_endpoint` is provided, use the provided one.
(2) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is "always", use the
default mTLS endpoint; if the environment variable is "never", use the default API
endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise
use the default API endpoint.
More details can be found at https://google.aip.dev/auth/4114.
Args:
client_options (google.api_core.client_options.ClientOptions): Custom options for the
client. Only the `api_endpoint` and `client_cert_source` properties may be used
in this method.
Returns:
Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
client cert source to use.
Raises:
google.auth.exceptions.MutualTLSChannelError: If any errors happen.
"""
if client_options is None:
client_options = client_options_lib.ClientOptions()
use_client_cert = os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")
use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
if use_client_cert not in ("true", "false"):
raise ValueError(
"Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`"
)
if use_mtls_endpoint not in ("auto", "never", "always"):
raise MutualTLSChannelError(
"Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`"
)
# Figure out the client cert source to use.
client_cert_source = None
if use_client_cert == "true":
if client_options.client_cert_source:
client_cert_source = client_options.client_cert_source
elif mtls.has_default_client_cert_source():
client_cert_source = mtls.default_client_cert_source()
# Figure out which api endpoint to use.
if client_options.api_endpoint is not None:
api_endpoint = client_options.api_endpoint
elif use_mtls_endpoint == "always" or (
use_mtls_endpoint == "auto" and client_cert_source
):
api_endpoint = cls.DEFAULT_MTLS_ENDPOINT
else:
api_endpoint = cls.DEFAULT_ENDPOINT
return api_endpoint, client_cert_source
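The endpoint selection implemented above condenses to a small decision function. A hedged sketch (hypothetical helper, not part of the generated client) mirroring the documented precedence:

```python
# Mirrors the precedence above: an explicit api_endpoint wins; otherwise
# "always" (or "auto" with a client cert source) selects the mTLS endpoint.
def resolve_endpoint(use_mtls_endpoint, api_endpoint=None, client_cert_source=None,
                     default="bigquerydatapolicy.googleapis.com",
                     mtls="bigquerydatapolicy.mtls.googleapis.com"):
    if api_endpoint is not None:
        return api_endpoint
    if use_mtls_endpoint == "always" or (
        use_mtls_endpoint == "auto" and client_cert_source
    ):
        return mtls
    return default

# With a client cert available, "auto" falls through to the mTLS endpoint.
print(resolve_endpoint("auto", client_cert_source=lambda: (b"", b"")))
```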
def __init__(
self,
*,
credentials: Optional[ga_credentials.Credentials] = None,
transport: Optional[Union[str, DataPolicyServiceTransport]] = None,
client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
) -> None:
"""Instantiates the data policy service client.
Args:
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
transport (Union[str, DataPolicyServiceTransport]): The
transport to use. If set to None, a transport is chosen
automatically.
client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]): Custom options for the
client. It won't take effect if a ``transport`` instance is provided.
(1) The ``api_endpoint`` property can be used to override the
default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT
environment variable can also be used to override the endpoint:
"always" (always use the default mTLS endpoint), "never" (always
use the default regular endpoint) and "auto" (auto switch to the
default mTLS endpoint if client certificate is present, this is
the default value). However, the ``api_endpoint`` property takes
precedence if provided.
(2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
is "true", then the ``client_cert_source`` property can be used
to provide client certificate for mutual TLS transport. If
not provided, the default SSL client certificate will be used if
present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
set, no client certificate will be used.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you're developing
your own client library.
Raises:
google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
creation failed for any reason.
"""
if isinstance(client_options, dict):
client_options = client_options_lib.from_dict(client_options)
if client_options is None:
client_options = client_options_lib.ClientOptions()
client_options = cast(client_options_lib.ClientOptions, client_options)
api_endpoint, client_cert_source_func = self.get_mtls_endpoint_and_cert_source(
client_options
)
api_key_value = getattr(client_options, "api_key", None)
if api_key_value and credentials:
raise ValueError(
"client_options.api_key and credentials are mutually exclusive"
)
# Save or instantiate the transport.
# Ordinarily, we provide the transport, but allowing a custom transport
# instance provides an extensibility point for unusual situations.
if isinstance(transport, DataPolicyServiceTransport):
# transport is a DataPolicyServiceTransport instance.
if credentials or client_options.credentials_file or api_key_value:
raise ValueError(
"When providing a transport instance, "
"provide its credentials directly."
)
if client_options.scopes:
raise ValueError(
"When providing a transport instance, provide its scopes "
"directly."
)
self._transport = transport
else:
import google.auth._default # type: ignore
if api_key_value and hasattr(
google.auth._default, "get_api_key_credentials"
):
credentials = google.auth._default.get_api_key_credentials(
api_key_value
)
Transport = type(self).get_transport_class(transport)
self._transport = Transport(
credentials=credentials,
credentials_file=client_options.credentials_file,
host=api_endpoint,
scopes=client_options.scopes,
client_cert_source_for_mtls=client_cert_source_func,
quota_project_id=client_options.quota_project_id,
client_info=client_info,
always_use_jwt_access=True,
api_audience=client_options.api_audience,
)
def create_data_policy(
self,
request: Optional[Union[datapolicy.CreateDataPolicyRequest, dict]] = None,
*,
parent: Optional[str] = None,
data_policy: Optional[datapolicy.DataPolicy] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> datapolicy.DataPolicy:
r"""Creates a new data policy under a project with the given
``dataPolicyId`` (used as the display name), policy tag, and
data policy type.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_create_data_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
data_policy = bigquery_datapolicies_v1.DataPolicy()
data_policy.policy_tag = "policy_tag_value"
data_policy.data_masking_policy.predefined_expression = "DATE_YEAR_MASK"
request = bigquery_datapolicies_v1.CreateDataPolicyRequest(
parent="parent_value",
data_policy=data_policy,
)
# Make the request
response = client.create_data_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.CreateDataPolicyRequest, dict]):
The request object. Request message for the
CreateDataPolicy method.
parent (str):
Required. Resource name of the project that the data
policy will belong to. The format is
``projects/{project_number}/locations/{location_id}``.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
data_policy (google.cloud.bigquery_datapolicies_v1.types.DataPolicy):
Required. The data policy to create. The ``name`` field
does not need to be provided for the data policy
creation.
This corresponds to the ``data_policy`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.bigquery_datapolicies_v1.types.DataPolicy:
Represents the label-policy binding.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, data_policy])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.CreateDataPolicyRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.CreateDataPolicyRequest):
request = datapolicy.CreateDataPolicyRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if data_policy is not None:
request.data_policy = data_policy
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.create_data_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def update_data_policy(
self,
request: Optional[Union[datapolicy.UpdateDataPolicyRequest, dict]] = None,
*,
data_policy: Optional[datapolicy.DataPolicy] = None,
update_mask: Optional[field_mask_pb2.FieldMask] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> datapolicy.DataPolicy:
r"""Updates the metadata for an existing data policy. The
target data policy can be specified by the resource
name.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_update_data_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
data_policy = bigquery_datapolicies_v1.DataPolicy()
data_policy.policy_tag = "policy_tag_value"
data_policy.data_masking_policy.predefined_expression = "DATE_YEAR_MASK"
request = bigquery_datapolicies_v1.UpdateDataPolicyRequest(
data_policy=data_policy,
)
# Make the request
response = client.update_data_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.UpdateDataPolicyRequest, dict]):
                The request object. Request message for the
                UpdateDataPolicy method.
data_policy (google.cloud.bigquery_datapolicies_v1.types.DataPolicy):
Required. Update the data policy's metadata.
The target data policy is determined by the ``name``
field. Other fields are updated to the specified values
based on the field masks.
This corresponds to the ``data_policy`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (google.protobuf.field_mask_pb2.FieldMask):
The update mask applies to the resource. For the
``FieldMask`` definition, see
https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#fieldmask
If not set, defaults to all of the fields that are
allowed to update.
Updates to the ``name`` and ``dataPolicyId`` fields are
not allowed.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.bigquery_datapolicies_v1.types.DataPolicy:
Represents the label-policy binding.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([data_policy, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.UpdateDataPolicyRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.UpdateDataPolicyRequest):
request = datapolicy.UpdateDataPolicyRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if data_policy is not None:
request.data_policy = data_policy
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.update_data_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("data_policy.name", request.data_policy.name),)
),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def rename_data_policy(
self,
request: Optional[Union[datapolicy.RenameDataPolicyRequest, dict]] = None,
*,
name: Optional[str] = None,
new_data_policy_id: Optional[str] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> datapolicy.DataPolicy:
r"""Renames the id (display name) of the specified data
policy.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_rename_data_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = bigquery_datapolicies_v1.RenameDataPolicyRequest(
name="name_value",
new_data_policy_id="new_data_policy_id_value",
)
# Make the request
response = client.rename_data_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.RenameDataPolicyRequest, dict]):
The request object. Request message for the
RenameDataPolicy method.
name (str):
Required. Resource name of the data policy to rename.
The format is
``projects/{project_number}/locations/{location_id}/dataPolicies/{data_policy_id}``
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
new_data_policy_id (str):
Required. The new data policy id.
This corresponds to the ``new_data_policy_id`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.bigquery_datapolicies_v1.types.DataPolicy:
Represents the label-policy binding.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name, new_data_policy_id])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.RenameDataPolicyRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.RenameDataPolicyRequest):
request = datapolicy.RenameDataPolicyRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
if new_data_policy_id is not None:
request.new_data_policy_id = new_data_policy_id
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.rename_data_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def delete_data_policy(
self,
request: Optional[Union[datapolicy.DeleteDataPolicyRequest, dict]] = None,
*,
name: Optional[str] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes the data policy specified by its resource
name.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_delete_data_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = bigquery_datapolicies_v1.DeleteDataPolicyRequest(
name="name_value",
)
# Make the request
client.delete_data_policy(request=request)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.DeleteDataPolicyRequest, dict]):
The request object. Request message for the
DeleteDataPolicy method.
name (str):
Required. Resource name of the data policy to delete.
Format is
``projects/{project_number}/locations/{location_id}/dataPolicies/{data_policy_id}``.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.DeleteDataPolicyRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.DeleteDataPolicyRequest):
request = datapolicy.DeleteDataPolicyRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.delete_data_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
def get_data_policy(
self,
request: Optional[Union[datapolicy.GetDataPolicyRequest, dict]] = None,
*,
name: Optional[str] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> datapolicy.DataPolicy:
r"""Gets the data policy specified by its resource name.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_get_data_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = bigquery_datapolicies_v1.GetDataPolicyRequest(
name="name_value",
)
# Make the request
response = client.get_data_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.GetDataPolicyRequest, dict]):
The request object. Request message for the GetDataPolicy
method.
name (str):
Required. Resource name of the requested data policy.
Format is
``projects/{project_number}/locations/{location_id}/dataPolicies/{data_policy_id}``.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.bigquery_datapolicies_v1.types.DataPolicy:
Represents the label-policy binding.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.GetDataPolicyRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.GetDataPolicyRequest):
request = datapolicy.GetDataPolicyRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.get_data_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def list_data_policies(
self,
request: Optional[Union[datapolicy.ListDataPoliciesRequest, dict]] = None,
*,
parent: Optional[str] = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListDataPoliciesPager:
r"""List all of the data policies in the specified parent
project.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
def sample_list_data_policies():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = bigquery_datapolicies_v1.ListDataPoliciesRequest(
parent="parent_value",
)
# Make the request
page_result = client.list_data_policies(request=request)
# Handle the response
for response in page_result:
print(response)
Args:
request (Union[google.cloud.bigquery_datapolicies_v1.types.ListDataPoliciesRequest, dict]):
The request object. Request message for the
ListDataPolicies method.
parent (str):
Required. Resource name of the project for which to list
data policies. Format is
``projects/{project_number}/locations/{location_id}``.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.bigquery_datapolicies_v1.services.data_policy_service.pagers.ListDataPoliciesPager:
Response message for the
ListDataPolicies method.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Quick check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
# Minor optimization to avoid making a copy if the user passes
# in a datapolicy.ListDataPoliciesRequest.
# There's no risk of modifying the input as we've already verified
# there are no flattened fields.
if not isinstance(request, datapolicy.ListDataPoliciesRequest):
request = datapolicy.ListDataPoliciesRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.list_data_policies]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# This method is paged; wrap the response in a pager, which provides
# an `__iter__` convenience method.
response = pagers.ListDataPoliciesPager(
method=rpc,
request=request,
response=response,
metadata=metadata,
)
# Done; return the response.
return response
def get_iam_policy(
self,
request: Optional[Union[iam_policy_pb2.GetIamPolicyRequest, dict]] = None,
*,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> policy_pb2.Policy:
r"""Gets the IAM policy for the specified data policy.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
from google.iam.v1 import iam_policy_pb2 # type: ignore
def sample_get_iam_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = iam_policy_pb2.GetIamPolicyRequest(
resource="resource_value",
)
# Make the request
response = client.get_iam_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.iam.v1.iam_policy_pb2.GetIamPolicyRequest, dict]):
The request object. Request message for ``GetIamPolicy`` method.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.iam.v1.policy_pb2.Policy:
An Identity and Access Management (IAM) policy, which specifies access
controls for Google Cloud resources.
A Policy is a collection of bindings. A binding binds
one or more members, or principals, to a single role.
Principals can be user accounts, service accounts,
Google groups, and domains (such as G Suite). A role
is a named list of permissions; each role can be an
IAM predefined role or a user-created custom role.
For some types of Google Cloud resources, a binding
can also specify a condition, which is a logical
expression that allows access to a resource only if
the expression evaluates to true. A condition can add
constraints based on attributes of the request, the
resource, or both. To learn which resources support
conditions in their IAM policies, see the [IAM
documentation](\ https://cloud.google.com/iam/help/conditions/resource-policies).
**JSON example:**
:literal:`\` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \`
**YAML example:**
:literal:`\` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \`
For a description of IAM and its features, see the
[IAM
documentation](\ https://cloud.google.com/iam/docs/).
"""
# Create or coerce a protobuf request object.
if isinstance(request, dict):
# The request isn't a proto-plus wrapped type,
# so it must be constructed via keyword expansion.
request = iam_policy_pb2.GetIamPolicyRequest(**request)
elif not request:
# Null request, just make one.
request = iam_policy_pb2.GetIamPolicyRequest()
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.get_iam_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def set_iam_policy(
self,
request: Optional[Union[iam_policy_pb2.SetIamPolicyRequest, dict]] = None,
*,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> policy_pb2.Policy:
r"""Sets the IAM policy for the specified data policy.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
from google.iam.v1 import iam_policy_pb2 # type: ignore
def sample_set_iam_policy():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = iam_policy_pb2.SetIamPolicyRequest(
resource="resource_value",
)
# Make the request
response = client.set_iam_policy(request=request)
# Handle the response
print(response)
Args:
request (Union[google.iam.v1.iam_policy_pb2.SetIamPolicyRequest, dict]):
The request object. Request message for ``SetIamPolicy`` method.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.iam.v1.policy_pb2.Policy:
An Identity and Access Management (IAM) policy, which specifies access
controls for Google Cloud resources.
A Policy is a collection of bindings. A binding binds
one or more members, or principals, to a single role.
Principals can be user accounts, service accounts,
Google groups, and domains (such as G Suite). A role
is a named list of permissions; each role can be an
IAM predefined role or a user-created custom role.
For some types of Google Cloud resources, a binding
can also specify a condition, which is a logical
expression that allows access to a resource only if
the expression evaluates to true. A condition can add
constraints based on attributes of the request, the
resource, or both. To learn which resources support
conditions in their IAM policies, see the [IAM
documentation](\ https://cloud.google.com/iam/help/conditions/resource-policies).
**JSON example:**
:literal:`\` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \`
**YAML example:**
:literal:`\` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \`
For a description of IAM and its features, see the
[IAM
documentation](\ https://cloud.google.com/iam/docs/).
"""
# Create or coerce a protobuf request object.
if isinstance(request, dict):
# The request isn't a proto-plus wrapped type,
# so it must be constructed via keyword expansion.
request = iam_policy_pb2.SetIamPolicyRequest(**request)
elif not request:
# Null request, just make one.
request = iam_policy_pb2.SetIamPolicyRequest()
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.set_iam_policy]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def test_iam_permissions(
self,
request: Optional[Union[iam_policy_pb2.TestIamPermissionsRequest, dict]] = None,
*,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> iam_policy_pb2.TestIamPermissionsResponse:
r"""Returns the caller's permission on the specified data
policy resource.
.. code-block:: python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datapolicies_v1
from google.iam.v1 import iam_policy_pb2 # type: ignore
def sample_test_iam_permissions():
# Create a client
client = bigquery_datapolicies_v1.DataPolicyServiceClient()
# Initialize request argument(s)
request = iam_policy_pb2.TestIamPermissionsRequest(
resource="resource_value",
permissions=['permissions_value1', 'permissions_value2'],
)
# Make the request
response = client.test_iam_permissions(request=request)
# Handle the response
print(response)
Args:
request (Union[google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest, dict]):
The request object. Request message for ``TestIamPermissions`` method.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse:
Response message for TestIamPermissions method.
"""
# Create or coerce a protobuf request object.
if isinstance(request, dict):
# The request isn't a proto-plus wrapped type,
# so it must be constructed via keyword expansion.
request = iam_policy_pb2.TestIamPermissionsRequest(**request)
elif not request:
# Null request, just make one.
request = iam_policy_pb2.TestIamPermissionsRequest()
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = self._transport._wrapped_methods[self._transport.test_iam_permissions]
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)),
)
# Send the request.
response = rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
def __enter__(self) -> "DataPolicyServiceClient":
return self
def __exit__(self, type, value, traceback):
"""Releases underlying transport's resources.
.. warning::
ONLY use as a context manager if the transport is NOT shared
with other clients! Exiting the with block will CLOSE the transport
and may cause errors in other clients!
"""
self.transport.close()
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
gapic_version=package_version.__version__
)
__all__ = ("DataPolicyServiceClient",)
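Every RPC method in the client above repeats the same request-coercion guard: reject mixing a full `request` object with flattened keyword fields, then coerce a `dict` (or `None`) into the request type. The sketch below isolates that pattern with no Google Cloud dependency; `FakeRequest` and `coerce_request` are illustrative stand-ins, not part of the real client.

```python
# Minimal sketch of the request-coercion pattern the generated methods repeat.
# FakeRequest stands in for a proto-plus message class; the real client uses
# datapolicy.*Request types and raises the same ValueError.

class FakeRequest:
    def __init__(self, mapping=None, **kwargs):
        fields = dict(mapping or {})
        fields.update(kwargs)
        self.name = fields.get("name")

def coerce_request(request=None, *, name=None):
    # Quick check: a request object and flattened fields are mutually exclusive.
    has_flattened_params = any([name])
    if request is not None and has_flattened_params:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    # Coerce a dict (or None) into the request type, skipping the copy when
    # the caller already passed the right type.
    if not isinstance(request, FakeRequest):
        request = FakeRequest(request)
    # Apply any flattened keyword fields to the request.
    if name is not None:
        request.name = name
    return request
```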
| [
"noreply@github.com"
] | googleapis.noreply@github.com |
6f66c53bf1536515d0e2b64c20185a6e0d4097b9 | c5c58a08ea6841f7042e037db6ad7eb668dbc0bb | /code/1/ninja/old/basicstats.py | 9adb83819260894257cc252d56d83a8e0865fb01 | [
"MIT"
] | permissive | amritbhanu/fss16591 | 99745091e355568230809e9c56bd7627fe1eea58 | d10c9e6e20c2d131441b6a03a27ecb030842b596 | refs/heads/master | 2020-04-24T01:17:37.431866 | 2017-08-09T15:03:25 | 2017-08-09T15:03:25 | 65,946,447 | 0 | 2 | null | 2016-09-10T03:37:59 | 2016-08-17T22:27:27 | Python | UTF-8 | Python | false | false | 3,072 | py |
""" __________________________________________________
# simplestats.py: simple basic stats
"""
from __future__ import division,print_function
import sys,math
sys.dont_write_bytecode=True
def normalDiff(mu1,sd1,n1,mu2,sd2,n2):
  nom   = mu2 - mu1
  denom = (sd1/n1 + sd2/n2)**0.5 if sd1 + sd2 else 1
  return nom/denom
def lstDiff(lst1,lst2):
"""Checks if two means are different, tempered
by the sample size of 'y' and 'z'"""
tmp1 = tmp2 = 0
n1,n2 = len(lst1), len(lst2)
mu1 = sum(lst1) / n1
mu2 = sum(lst2) / n2
tmp1 = sum( (y1 - mu1)**2 for y1 in lst1 )
tmp2 = sum( (y2 - mu2)**2 for y2 in lst2 )
sd1 = ( tmp1 / (n1 - 1) )**0.5
sd2 = ( tmp2 / (n2 - 1) )**0.5
return normalDiff(mu1,sd1,n1,mu2,sd2,n2)
""" _________________________________________________
## Stats tricks
"""
def xtend(x,xs,ys):
"""given pairs ofs values, find the gap with x
and extrapolate at that gap size across the y
xtend(-5, [0,5,10,20], [0,10,20,40] ) ==> -10
xtend(25, [0,5,10,20], [0,10,20,40] ) ==> 50
xtend(40, [0,5,10,20], [0,10,20,40] ) ==> 80
"""
  x0, y0 = xs[0], ys[0]
  for x1,y1 in zip(xs[1:], ys[1:]):
    if x < x1 or x1 == xs[-1]:
      break
    x0, y0 = x1, y1
  gap = (x - x0)/(x1 - x0)
  print(dict(x0=x0,x=x,x1=x1,gap=gap,y0=y0,y1=y1))
  return y0 + gap*(y1 - y0)
def ttestThreshold(df,conf=99,
     xs= [ 1, 2, 5, 10, 15, 20, 25, 30, 60, 100],
     ys= {90: [ 3.078, 1.886, 1.476, 1.372, 1.341, 1.325, 1.316, 1.31, 1.296, 1.29],
          95: [ 6.314, 2.92, 2.015, 1.812, 1.753, 1.725, 1.708, 1.697, 1.671, 1.66],
          99: [31.821, 6.965, 3.365, 2.764, 2.602, 2.528, 2.485, 2.457, 2.39, 2.364]}):
  return xtend(df,xs,ys[conf])
def ttestSame(lst1,lst2,conf=95):
  df = min(len(lst1) - 1, len(lst2) - 1)
  return ttestThreshold(df,conf) < lstDiff(lst1,lst2)
def chi2Threshold(df,conf=99,
xs = [ 1 , 2, 5, 10, 15,
20 , 25, 30, 60, 100],
ys= {99 : [ 0.000, 0.020, 0.554, 2.558, 5.229,
8.260, 11.524, 14.953, 37.485, 70.065],
95 : [ 0.004, 0.103, 1.145, 3.940, 7.261,
10.851, 14.611, 18.493, 43.188, 77.929],
90 : [ 0.016, 0.211, 1.610, 4.865, 8.547,
12.443, 16.473, 20.599, 46.459, 82.358]}):
return xtend(df,xs,ys[conf])
def chi2Same(obs1,obs2):
obs12,tot1,tot2,r,c = {},0,0,2,0
for k,v in obs1.items():
c += 1
tot1 += v
obs12[k] = obs12.get(k,0) + v
for k,v in obs2.items():
tot2 += v
obs12[k] = obs12.get(k,0) + v
tots = tot1 + tot2
expect1 = { k:tot1*v/tots for k,v in obs12.items() }
expect2 = { k:tot2*v/tots for k,v in obs12.items() }
  chi = [ (obs1.get(k,0) - expect)**2/expect for k,expect in expect1.items() ] + [
        (obs2.get(k,0) - expect)**2/expect for k,expect in expect2.items() ]
df = (r-1)*(c-1)
return chi2Threshold(df) < sum(chi)
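For checking `lstDiff`/`normalDiff` against hand-worked numbers, here is a self-contained Welch-style statistic written independently of the file above; note it uses the textbook variance-based form `var/n`, whereas the file's `sd/n` form is the author's variant.

```python
def welch_t(lst1, lst2):
    # Welch's t statistic: (mu2 - mu1) / sqrt(var1/n1 + var2/n2)
    n1, n2 = len(lst1), len(lst2)
    mu1, mu2 = sum(lst1) / n1, sum(lst2) / n2
    var1 = sum((y - mu1) ** 2 for y in lst1) / (n1 - 1)
    var2 = sum((y - mu2) ** 2 for y in lst2) / (n2 - 1)
    se = (var1 / n1 + var2 / n2) ** 0.5
    return (mu2 - mu1) / se if se else 0.0
```

For example, `welch_t([1,2,3], [11,12,13])` gives (12 - 2) / sqrt(1/3 + 1/3) ≈ 12.25, well past any of the tabled thresholds.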
| [
"amritbhanu@gmail.com"
] | amritbhanu@gmail.com |
929b2b37442af1e84eb361c74e7758e337b1abab | 4aa7a4d0525095725eb99843c83827ba4806ceb1 | /keras/keras19_shape.py | a32d49ea5b537c8... | [] | no_license | seonukim/Study | 65a70f5bdfad68f643abc3086d5c7484bb2439d4 | a5f2538f9ae8b5fc93b5149dd51704e8881f0a80 | refs/heads/master | 2022-12-04T17:04:31.489771 | 2020-08-21T00:35:15 | 2020-08-21T00:35:15 | 260,144,755 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,297 | py | # 1. Data
import numpy as np
x = np.array([range(1, 101), range(311, 411), range(100)])
y = np.array(range(711, 811))
# 1-1. Swap rows and columns - take the transpose
x = x.transpose()
y = y.transpose()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size = 0.8, shuffle = False)
# 2. Build the model
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
# model.add(Dense(5, input_dim = 3))
model.add(Dense(5, input_shape = (3, ))) # same as input_dim = 3
model.add(Dense(4))
model.add(Dense(1))
# 3. Training
model.compile(loss = 'mse', optimizer = 'adam', metrics = ['mse'])
model.fit(x_train, y_train, epochs = 100, batch_size = 1,
validation_split = 0.25, verbose = 2)
# 4. Evaluation and prediction
loss, mse = model.evaluate(x_test, y_test, batch_size = 1)
print("loss : ", loss)
print("mse : ", mse)
y_predict = model.predict(x_test)
print(y_predict)
# 5. Compute RMSE
from sklearn.metrics import mean_squared_error
def RMSE(y_test, y_predict):
return np.sqrt(mean_squared_error(y_test, y_predict))
print("RMSE : ", RMSE(y_test, y_predict))
# 6. Compute R2
from sklearn.metrics import r2_score
r2 = r2_score(y_test, y_predict)
print("R2 : ", r2)
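The two metrics printed above are easy to verify by hand; a dependency-free sketch (plain Python, no sklearn, names chosen to avoid shadowing the imports above):

```python
def rmse_by_hand(y_true, y_pred):
    # Root mean squared error: sqrt(mean((t - p)^2))
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

A perfect prediction gives RMSE 0 and R² 1; a constant offset of 1 on `[1,2,3,4]` gives RMSE 1 and R² 0.2.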
| [
"92.seoonooo@gmail.com"
] | 92.seoonooo@gmail.com |
769912c4efb8929002f8bd37954cdb0604e47ca6 | 9e9588fbd2eeb48cb45b053cc37c6c40ef1b5558 | /web_app/app.py | c2837a25a396f7cf1527f63aea717d0bf534a208 | [
"MIT"
] | permissive | ArRosid/question-answer-flask | cfecbb8681b258fe389267fbbc16eec19595c36c | 920f5cf5a16887006819183b2573a9d896e3107d | refs/heads/master | 2021-04-23T00:19:34.742155 | 2020-03-31T05:08:28 | 2020-03-31T05:08:28 | 249,883,456 | 0 | 0 | MIT | 2021-03-20T03:16:30 | 2020-03-25T04:03:36 | HTML | UTF-8 | Python | false | false | 8,066 | py | import os
from flask import (Flask, g, redirect, render_template, request, session,
url_for)
from werkzeug.security import check_password_hash, generate_password_hash
import dbcon
app = Flask(__name__)
app.config["SECRET_KEY"] = os.urandom(24)
@app.teardown_appcontext
def close_db(error):
if hasattr(g, 'postgres_db_cur'):
g.postgres_db_cur.close()
if hasattr(g, 'postgres_db_conn'):
g.postgres_db_conn.close()
def get_current_user():
user_result = None
if 'user' in session:
user = session["user"]
db = dbcon.get_db()
db.execute("select * from users where name = %s", (user,))
user_result = db.fetchone()
return user_result
def get_unanswered_question(expert_user_id):
db = dbcon.get_db()
db.execute('''select id from questions
where answer_text is null and expert_id=%s''',
(expert_user_id,))
question_result = db.fetchall()
return len(question_result)
@app.route("/")
def index():
user = get_current_user()
    error = request.args.get('error')  # get the error message from the query string
# Get unanswered question count (only for expert)
unanswered_q = None
if user is not None:
unanswered_q = get_unanswered_question(user["id"])
db = dbcon.get_db()
db.execute(''' select questions.id, questions.question_text,
asker.name as asker_name, expert.name as expert_name
from questions
join users as asker on asker.id = questions.asked_by_id
join users as expert on expert.id = questions.expert_id
where answer_text is not null ''')
questions_results = db.fetchall()
return render_template("home.html", user=user,
questions=questions_results,
unanswered_q=unanswered_q,
error=error)
@app.route("/register", methods=["GET", "POST"])
def register():
db = dbcon.get_db()
if request.method == "POST":
username = request.form["username"]
password = request.form["password"]
db.execute("select id from users where name=%s", (username, ))
existing_user = db.fetchone()
if existing_user:
        return render_template("register.html", error="User already exists!")
hashed_password = generate_password_hash(password, method='sha256')
db.execute(''' insert into users (name, password, expert, admin)
values (%s, %s, %s, %s)''',
(username, hashed_password, '0', '0'))
session["user"] = username
return redirect(url_for('index'))
return render_template("register.html")
@app.route("/login", methods=["GET", "POST"])
def login():
db = dbcon.get_db()
if request.method == "POST":
username = request.form["username"]
password = request.form["password"]
db.execute("select id, name, password from users where name = %s ",
(username,))
user = db.fetchone()
if not user: # if the user is not in database
            return render_template("login.html", error="Username & password do not match!")
if check_password_hash(user["password"], password):
session["user"] = user["name"]
return redirect(url_for("index"))
else: # if the password is wrong
            return render_template("login.html", error="Username & password do not match!")
return render_template("login.html")
@app.route("/ask", methods=["GET","POST"])
def ask():
user = get_current_user()
if not user:
return redirect(url_for("login"))
db = dbcon.get_db()
if request.method == "POST":
db.execute('''insert into questions (question_text, asked_by_id, expert_id)
values (%s,%s,%s)''',
(request.form["question"], user["id"], request.form["expert"]))
return redirect(url_for("index"))
db.execute("select id, name from users where expert = True")
expert_result = db.fetchall()
return render_template("ask.html", user=user, experts=expert_result)
@app.route("/unanswered")
def unanswered():
user = get_current_user()
if not user:
return redirect(url_for("login"))
if not user["expert"]: #only expert can access this route
        return redirect(url_for("index", error="You don't have permission to access this page!"))
unanswered_q = get_unanswered_question(user["id"])
db = dbcon.get_db()
db.execute('''select questions.id, questions.question_text,
questions.asked_by_id, users.name
from questions
join users on users.id = questions.asked_by_id
where answer_text is null and expert_id = %s''',
(user["id"],))
question_result = db.fetchall()
return render_template("unanswered.html", user=user,
questions=question_result,
unanswered_q=unanswered_q)
@app.route("/answer/<question_id>", methods=["GET","POST"])
def answer(question_id):
user = get_current_user()
if not user:
return redirect(url_for("login"))
if not user["expert"]: # only expert can answer questions
        return redirect(url_for("index", error="You don't have permission to access this page!"))
db = dbcon.get_db()
if request.method == "POST":
db.execute("update questions set answer_text = %s where id=%s",
(request.form["answer"], question_id,))
return redirect(url_for("unanswered"))
db.execute("select id, question_text from questions where id=%s", (question_id,))
question = db.fetchone()
return render_template("answer.html", user=user, question=question)
@app.route("/question/<question_id>")
def question(question_id):
    user = get_current_user()  # call the helper; assigning the bare function was a bug
db = dbcon.get_db()
db.execute('''select questions.question_text, questions.answer_text,
asker.name as asker_name, expert.name as expert_name
from questions
join users as asker on asker.id = questions.asked_by_id
join users as expert on expert.id = questions.expert_id
where questions.id = %s''', (question_id,))
question_result = db.fetchone()
return render_template("question.html", question=question_result)
@app.route("/users")
def users():
user = get_current_user()
if not user:
return redirect(url_for('login'))
    if not user["admin"]:  # only an admin can manage users
        return redirect(url_for("index", error="You don't have permission to access this page!"))
db = dbcon.get_db()
db.execute("select id, name, expert, admin from users")
users_results = db.fetchall()
return render_template("users.html", user=user, users=users_results)
@app.route("/promote/<user_id>")
def promote(user_id):
user = get_current_user()
if not user:
return redirect(url_for("login"))
if not user["admin"]: # only admin can promote user
        return redirect(url_for("index", error="You don't have permission to access this page!"))
db = dbcon.get_db()
db.execute("select expert from users where id = %s", (user_id,))
user_result = db.fetchone()
if user_result["expert"]: # if user expert, set user to non expert
db.execute("update users set expert = False where id = %s", (user_id,))
else: # if user is not expert, set user to expert
db.execute("update users set expert = True where id = %s", (user_id,))
return redirect(url_for("users"))
@app.route("/logout")
def logout():
session.pop("user", None)
return redirect(url_for("index"))
if __name__ == "__main__":
app.run(host="0.0.0.0", port="5001", debug=True)
| [
"ahmadrosid30121997@gmail.com"
] | ahmadrosid30121997@gmail.com |
0c02ee4413803d74805396f86653758d7d8ec906 | e4d4283a7d77719d4da7705aa2cb14e8b171ae51 | /document_library/migrations/0003_auto__add_documentcategorytitle__add_documentcategory__add_field_docum.py | e5b24aa78e4802393f3a4b661c607682857b2732 | [
"MIT"
] | permissive | django-cms-plugins/django-document-library | 666ed165aff6ecaef6af5e585fdb96e16c45bed9 | 74233b1d8d933844f2e1f423d685708f682edfb5 | refs/heads/master | 2021-01-14T14:18:35.759996 | 2013-11-06T00:06:02 | 2013-11-06T00:06:02 | 16,535,949 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 10,727 | py | # flake8: noqa
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'DocumentCategoryTitle'
db.create_table('document_library_documentcategorytitle', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('title', self.gf('django.db.models.fields.CharField')(max_length=256)),
('category', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['document_library.DocumentCategory'])),
('language', self.gf('django.db.models.fields.CharField')(max_length=2)),
))
db.send_create_signal('document_library', ['DocumentCategoryTitle'])
# Adding model 'DocumentCategory'
db.create_table('document_library_documentcategory', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('creation_date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
))
db.send_create_signal('document_library', ['DocumentCategory'])
# Adding field 'Document.category'
db.add_column('document_library_document', 'category',
self.gf('django.db.models.fields.related.ForeignKey')(to=orm['document_library.DocumentCategory'], null=True, blank=True),
keep_default=False)
def backwards(self, orm):
# Deleting model 'DocumentCategoryTitle'
db.delete_table('document_library_documentcategorytitle')
# Deleting model 'DocumentCategory'
db.delete_table('document_library_documentcategory')
# Deleting field 'Document.category'
db.delete_column('document_library_document', 'category_id')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'document_library.document': {
'Meta': {'ordering': "('position', '-creation_date')", 'object_name': 'Document'},
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['document_library.DocumentCategory']", 'null': 'True', 'blank': 'True'}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_on_front_page': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_published': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'position': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
'document_library.documentcategory': {
'Meta': {'object_name': 'DocumentCategory'},
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'document_library.documentcategorytitle': {
'Meta': {'object_name': 'DocumentCategoryTitle'},
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['document_library.DocumentCategory']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'max_length': '2'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '256'})
},
'document_library.documenttitle': {
'Meta': {'object_name': 'DocumentTitle'},
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'document': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['document_library.Document']"}),
'filer_file': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['filer.File']", 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'max_length': '5'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '512'})
},
'filer.file': {
'Meta': {'object_name': 'File'},
'_file_size': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'file': ('django.db.models.fields.files.FileField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'folder': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'all_files'", 'null': 'True', 'to': "orm['filer.Folder']"}),
'has_all_mandatory_data': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'modified_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'original_filename': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'owned_files'", 'null': 'True', 'to': "orm['auth.User']"}),
'polymorphic_ctype': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'polymorphic_filer.file_set'", 'null': 'True', 'to': "orm['contenttypes.ContentType']"}),
'sha1': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '40', 'blank': 'True'}),
'uploaded_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'filer.folder': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('parent', 'name'),)", 'object_name': 'Folder'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'lft': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'modified_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'filer_owned_folders'", 'null': 'True', 'to': "orm['auth.User']"}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'children'", 'null': 'True', 'to': "orm['filer.Folder']"}),
'rght': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'tree_id': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'uploaded_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
}
}
complete_apps = ['document_library']
| [
"mbrochh@gmail.com"
] | mbrochh@gmail.com |
e7497d37b10b60b49e884962d8cc83295158c6b3 | a3493aaf1fc0b067d852bb7cd8e81b0fee6145d6 | /Modules_&_pip.py | ba57b9dc523a2416afbdde8e5a9cdf195315561a | [] | no_license | VivakaNand/Python_For_Beginners_by_Udemy | 862c128f8dd4035c794b99494474de661a2be5e0 | ab119426c9ded6f46256ec8f915fdf439b1e520d | refs/heads/master | 2020-11-27T14:30:31.530986 | 2019-12-21T22:25:47 | 2019-12-21T22:25:47 | 229,488,902 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 220 | py | # -*- coding: utf-8 -*-
"""
Created on Sun Dec 22 03:00:21 2019
@author: VIVEK VISHAN
"""
# Modules & pip
import useful_tools
print(useful_tools.roll_dice(10))
print(useful_tools.feet_in_mile)
import docx
document = docx.Document()  # the original line was left unfinished ("docx."); creating an empty Document is the assumed intent
| [
"vivekjetani83@gmail.com"
] | vivekjetani83@gmail.com |
7446edb80b72ff3519521b98b4f5fbc49526c82d | b68e3b8485ea8bef9fc7b3cbf6baa98a51fa533f | /section14/lesson173/test_calculation.py | 1eac580f11491856c765cc11bd46aca7c0f2f447 | [] | no_license | Naoya-abe/siliconvalley-python | 7936b7e779072b23e16c9d50cca44c2e0bf6eb5f | 8d226adaea839b64b1e5eb62985349b5bb2e1484 | refs/heads/master | 2021-05-20T14:40:47.682812 | 2020-04-27T02:02:43 | 2020-04-27T02:02:43 | 252,336,229 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,117 | py | # 独自ののfixture
import os
import pytest
import calculation
class TestCal(object):
@classmethod
def setup_class(cls):
cls.cal = calculation.Cal()
cls.test_dir = '/tmp/test_dir'
cls.test_file_name = 'test.txt'
@classmethod
def teardown_class(cls):
import shutil
if os.path.exists(cls.test_dir):
shutil.rmtree(cls.test_dir)
def test_save_no_dir(self):
self.cal.save(self.test_dir, self.test_file_name)
test_file_path = os.path.join(
self.test_dir, self.test_file_name
)
assert os.path.exists(test_file_path) is True
def test_add_and_double(self, csv_file):
print(csv_file)
assert self.cal.add_and_double(1, 1) == 4
def test_save(self, tmpdir):
self.cal.save(tmpdir, self.test_file_name)
test_file_path = os.path.join(
tmpdir, self.test_file_name
)
assert os.path.exists(test_file_path) is True
def test_add_and_double_raise(self):
with pytest.raises(ValueError):
self.cal.add_and_double('1', '1')
| [
"n.abe@gemcook.com"
] | n.abe@gemcook.com |
6649e00fafca3d64cad131fe2b2c0d4be4921d60 | 0ee5ae0b71b81419d4534b2ed8681e28a1ed9ddb | /arxivanalysis/cons.py | 56a990e9ed4ffdff6869afc9a19b71c8628b952e | [
"MIT"
] | permissive | refraction-ray/arxiv-analysis | a0f4542298ecc427a49ec9bb026f0ef31699a7f5 | 10b72853920bc653d5622b17da817a1fc1d83c4e | refs/heads/master | 2023-05-02T16:33:01.629979 | 2023-04-17T02:35:58 | 2023-04-17T02:35:58 | 160,020,387 | 2 | 10 | null | null | null | null | UTF-8 | Python | false | false | 7,214 | py | """
some constants
"""
weekdaylist = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
category = {
"astro-ph": "Astrophysics",
"astro-ph.CO": "Cosmology and Nongalactic Astrophysics",
"astro-ph.EP": "Earth and Planetary Astrophysics",
"astro-ph.GA": "Astrophysics of Galaxies",
"astro-ph.HE": "High Energy Astrophysical Phenomena",
"astro-ph.IM": "Instrumentation and Methods for Astrophysics",
"astro-ph.SR": "Solar and Stellar Astrophysics",
"cond-mat.dis-nn": "Disordered Systems and Neural Networks",
"cond-mat.mes-hall": "Mesoscale and Nanoscale Physics",
"cond-mat.mtrl-sci": "Materials Science",
"cond-mat.other": "Other Condensed Matter",
"cond-mat.quant-gas": "Quantum Gases",
"cond-mat.soft": "Soft Condensed Matter",
"cond-mat.stat-mech": "Statistical Mechanics",
"cond-mat.str-el": "Strongly Correlated Electrons",
"cond-mat.supr-con": "Superconductivity",
"cs.AI": "Artificial Intelligence",
"cs.AR": "Hardware Architecture",
"cs.CC": "Computational Complexity",
"cs.CE": "Computational Engineering, Finance, and Science",
"cs.CG": "Computational Geometry",
"cs.CL": "Computation and Language",
"cs.CR": "Cryptography and Security",
"cs.CV": "Computer Vision and Pattern Recognition",
"cs.CY": "Computers and Society",
"cs.DB": "Databases",
"cs.DC": "Distributed, Parallel, and Cluster Computing",
"cs.DL": "Digital Libraries",
"cs.DM": "Discrete Mathematics",
"cs.DS": "Data Structures and Algorithms",
"cs.ET": "Emerging Technologies",
"cs.FL": "Formal Languages and Automata Theory",
"cs.GL": "General Literature",
"cs.GR": "Graphics",
"cs.GT": "Computer Science and Game Theory",
"cs.HC": "Human-Computer Interaction",
"cs.IR": "Information Retrieval",
"cs.IT": "Information Theory",
"cs.LG": "Machine Learning",
"cs.LO": "Logic in Computer Science",
"cs.MA": "Multiagent Systems",
"cs.MM": "Multimedia",
"cs.MS": "Mathematical Software",
"cs.NA": "Numerical Analysis",
"cs.NE": "Neural and Evolutionary Computing",
"cs.NI": "Networking and Internet Architecture",
"cs.OH": "Other Computer Science",
"cs.OS": "Operating Systems",
"cs.PF": "Performance",
"cs.PL": "Programming Languages",
"cs.RO": "Robotics",
"cs.SC": "Symbolic Computation",
"cs.SD": "Sound",
"cs.SE": "Software Engineering",
"cs.SI": "Social and Information Networks",
"cs.SY": "Systems and Control",
"econ.EM": "Econometrics",
"eess.AS": "Audio and Speech Processing",
"eess.IV": "Image and Video Processing",
"eess.SP": "Signal Processing",
"gr-qc": "General Relativity and Quantum Cosmology",
"hep-ex": "High Energy Physics - Experiment",
"hep-lat": "High Energy Physics - Lattice",
"hep-ph": "High Energy Physics - Phenomenology",
"hep-th": "High Energy Physics - Theory",
"math-ph": "Mathematical Physics",
"math.AC": "Commutative Algebra",
"math.AG": "Algebraic Geometry",
"math.AP": "Analysis of PDEs",
"math.AT": "Algebraic Topology",
"math.CA": "Classical Analysis and ODEs",
"math.CO": "Combinatorics",
"math.CT": "Category Theory",
"math.CV": "Complex Variables",
"math.DG": "Differential Geometry",
"math.DS": "Dynamical Systems",
"math.FA": "Functional Analysis",
"math.GM": "General Mathematics",
"math.GN": "General Topology",
"math.GR": "Group Theory",
"math.GT": "Geometric Topology",
"math.HO": "History and Overview",
"math.IT": "Information Theory",
"math.KT": "K-Theory and Homology",
"math.LO": "Logic",
"math.MG": "Metric Geometry",
"math.MP": "Mathematical Physics",
"math.NA": "Numerical Analysis",
"math.NT": "Number Theory",
"math.OA": "Operator Algebras",
"math.OC": "Optimization and Control",
"math.PR": "Probability",
"math.QA": "Quantum Algebra",
"math.RA": "Rings and Algebras",
"math.RT": "Representation Theory",
"math.SG": "Symplectic Geometry",
"math.SP": "Spectral Theory",
"math.ST": "Statistics Theory",
"nlin.AO": "Adaptation and Self-Organizing Systems",
"nlin.CD": "Chaotic Dynamics",
"nlin.CG": "Cellular Automata and Lattice Gases",
"nlin.PS": "Pattern Formation and Solitons",
"nlin.SI": "Exactly Solvable and Integrable Systems",
"nucl-ex": "Nuclear Experiment",
"nucl-th": "Nuclear Theory",
"physics.acc-ph": "Accelerator Physics",
"physics.ao-ph": "Atmospheric and Oceanic Physics",
"physics.app-ph": "Applied Physics",
"physics.atm-clus": "Atomic and Molecular Clusters",
"physics.atom-ph": "Atomic Physics",
"physics.bio-ph": "Biological Physics",
"physics.chem-ph": "Chemical Physics",
"physics.class-ph": "Classical Physics",
"physics.comp-ph": "Computational Physics",
"physics.data-an": "Data Analysis, Statistics and Probability",
"physics.ed-ph": "Physics Education",
"physics.flu-dyn": "Fluid Dynamics",
"physics.gen-ph": "General Physics",
"physics.geo-ph": "Geophysics",
"physics.hist-ph": "History and Philosophy of Physics",
"physics.ins-det": "Instrumentation and Detectors",
"physics.med-ph": "Medical Physics",
"physics.optics": "Optics",
"physics.plasm-ph": "Plasma Physics",
"physics.pop-ph": "Popular Physics",
"physics.soc-ph": "Physics and Society",
"physics.space-ph": "Space Physics",
"q-bio.BM": "Biomolecules",
"q-bio.CB": "Cell Behavior",
"q-bio.GN": "Genomics",
"q-bio.MN": "Molecular Networks",
"q-bio.NC": "Neurons and Cognition",
"q-bio.OT": "Other Quantitative Biology",
"q-bio.PE": "Populations and Evolution",
"q-bio.QM": "Quantitative Methods",
"q-bio.SC": "Subcellular Processes",
"q-bio.TO": "Tissues and Organs",
"q-fin.CP": "Computational Finance",
"q-fin.EC": "Economics",
"q-fin.GN": "General Finance",
"q-fin.MF": "Mathematical Finance",
"q-fin.PM": "Portfolio Management",
"q-fin.PR": "Pricing of Securities",
"q-fin.RM": "Risk Management",
"q-fin.ST": "Statistical Finance",
"q-fin.TR": "Trading and Market Microstructure",
"quant-ph": "Quantum Physics",
"stat.AP": "Applications",
"stat.CO": "Computation",
"stat.ME": "Methodology",
"stat.ML": "Machine Learning",
"stat.OT": "Other Statistics",
"stat.TH": "Statistics Theory",
}
field = {
"astro-ph": "Astrophysics",
"cond-mat": "Condensed Matter",
"gr-qc": "General Relativity and Quantum Cosmology",
"hep-ex": "High Energy Physics - Experiment",
"hep-lat": "High Energy Physics - Lattice",
"hep-ph": "High Energy Physics - Phenomenology",
"hep-th": "High Energy Physics - Theory",
"math-ph": "Mathematical Physics",
"nlin": "Nonlinear Sciences",
"nucl-ex": "Nuclear Experiment",
"nucl-th": "Nuclear Theory",
"physics": "Physics",
"quant-ph": "Quantum Physics",
"math": "Mathematics",
"CoRR": "Computing Research Repository",
"q-bio": "Quantitative Biology",
"q-fin": "Quantitative Finance",
"stat": "Statistics",
"eess": "Electrical Engineering and System Science",
"econ": "Economics",
}
| [
"kcanamgal@foxmail.com"
] | kcanamgal@foxmail.com |
e26636a9ae7ceac0ee556870e3f586cf4b775f62 | 032a1ad3c94e1126729417a16e2a95743d121244 | /cell_fitting/optimization/evaluation/plots_for_thesis/dap_mechanism/blocking_channels.py | 44fad4a980d5b5fbf8da14d9834d41c3231a5824 | [] | no_license | cafischer/cell_fitting | 0fd928f5ae59488e12c77648c2e6227c1911d0e9 | 75a81987e1b455f43b5abdc8a9baf6b8f863bee2 | refs/heads/master | 2021-01-23T19:27:30.635173 | 2019-09-14T08:46:57 | 2019-09-14T08:46:57 | 44,301,986 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,869 | py | import numpy as np
import os
import matplotlib.pyplot as pl
import matplotlib.gridspec as gridspec
from nrn_wrapper import Cell
from cell_fitting.optimization.evaluation import simulate_model, simulate_model_currents
from cell_fitting.optimization.evaluation.plot_blocking.block_channel import block_channel, \
block_channel_at_timepoint, plot_channel_block_on_ax
from cell_fitting.optimization.evaluation import get_spike_characteristics_dict
from cell_fitting.optimization.simulate import get_standard_simulation_params
from cell_characteristics.analyze_APs import get_spike_characteristics
pl.style.use('paper')
if __name__ == '__main__':
save_dir_img = '/home/cfischer/Dropbox/thesis/figures_results'
save_dir_model = '/home/cfischer/Phd/programming/projects/cell_fitting/cell_fitting/results/best_models'
mechanism_dir = '/home/cfischer/Phd/programming/projects/cell_fitting/cell_fitting/model/channels/vavoulis'
#save_dir_data = '/home/cfischer/Phd/DAP-Project/cell_data/raw_data'
save_dir_data = '/media/cfischer/TOSHIBA EXT/2019-04-03-Sicherung_all/Phd/DAP-Project/cell_data/raw_data'
save_dir_data_plots = '/home/cfischer/Phd/programming/projects/cell_fitting/cell_fitting/data/plots'
model = '2'
exp_cell = '2015_08_26b'
ramp_amp = 3.5
standard_sim_params = get_standard_simulation_params()
standard_sim_params['tstop'] = 162
# create model cell
cell = Cell.from_modeldir(os.path.join(save_dir_model, model, 'cell_rounded.json'), mechanism_dir)
# simulate cell
v_model, t_model, i_inj = simulate_model(cell, 'rampIV', ramp_amp, **standard_sim_params)
currents, channel_list = simulate_model_currents(cell, 'rampIV', ramp_amp, **standard_sim_params)
# plot
fig = pl.figure(figsize=(11, 7))
outer = gridspec.GridSpec(2, 3)
# blocking ion channels whole trace
axes = [outer[0, 0], outer[0, 1], outer[0, 2]]
percent_blocks = [10, 50, 100]
letters = ['A', 'B', 'C']
for percent_block_idx, percent_block in enumerate(percent_blocks):
ax = pl.Subplot(fig, axes[percent_block_idx])
fig.add_subplot(ax)
v_after_block = np.zeros((len(channel_list), len(t_model)))
for i, channel_name in enumerate(channel_list):
cell = Cell.from_modeldir(os.path.join(save_dir_model, model, 'cell.json'))
block_channel(cell, channel_name, percent_block)
v_after_block[i, :], _, _ = simulate_model(cell, 'rampIV', ramp_amp, **standard_sim_params)
plot_channel_block_on_ax(ax, channel_list, t_model, v_model, v_after_block, percent_block,
plot_with_ellipses=True)
ax.set_ylim(-100, 60)
ax.set_xlim(0, t_model[-1])
ax.get_yaxis().set_label_coords(-0.15, 0.5)
ax.text(-0.25, 1.0, letters[percent_block_idx], transform=ax.transAxes, size=18, weight='bold')
# from cell_fitting.optimization.evaluation import get_spike_characteristics_dict
# AP_width_before_block = get_spike_characteristics(v_after_block[4], t_model, ['AP_width'], -75, **get_spike_characteristics_dict())
# AP_width_block_HCN = get_spike_characteristics(v_after_block[4], t_model, ['AP_width'], -75, **get_spike_characteristics_dict())
# AP width is the same
# blocking ion channels after AP
axes = [outer[1, 0], outer[1, 1], outer[1, 2]]
letters = ['D', 'E', 'F']
start_i_inj = np.where(np.diff(np.abs(i_inj)) > 0)[0][0] + 1
v_rest = np.mean(v_model[0:start_i_inj])
fAHP_min_idx = get_spike_characteristics(v_model, t_model, ['fAHP_min_idx'], v_rest,
check=False, **get_spike_characteristics_dict())[0]
for percent_block_idx, percent_block in enumerate(percent_blocks):
ax = pl.Subplot(fig, axes[percent_block_idx])
fig.add_subplot(ax)
v_after_block = np.zeros((len(channel_list), len(t_model)))
for i, channel_name in enumerate(channel_list):
cell = Cell.from_modeldir(os.path.join(save_dir_model, model, 'cell.json'))
block_channel_at_timepoint(cell, channel_name, percent_block,
t_model[fAHP_min_idx]+standard_sim_params['onset'])
v_after_block[i, :], _, _ = simulate_model(cell, 'rampIV', ramp_amp, **standard_sim_params)
plot_channel_block_on_ax(ax, channel_list, t_model, v_model, v_after_block, percent_block,
plot_with_ellipses=True)
ax.set_ylim(-100, 60)
ax.set_xlim(0, t_model[-1])
ax.get_yaxis().set_label_coords(-0.15, 0.5)
ax.text(-0.25, 1.0, letters[percent_block_idx], transform=ax.transAxes, size=18, weight='bold')
pl.tight_layout()
pl.subplots_adjust(left=0.07, bottom=0.07)
#pl.savefig(os.path.join(save_dir_img, 'block_channels.png'))
pl.show() | [
"coralinefischer@gmail.com"
] | coralinefischer@gmail.com |
b0aa9837b396935b4b74ad19f72b0b276e28e19b | be4e7d877a7a61237f3a58315158a20f125dc71c | /cartridge/shop/page_processors.py | b2c72c68cc708d9c80b20ff8a0708d8ed5dd9d4d | [
"BSD-2-Clause",
"BSD-3-Clause"
] | permissive | krbanton/cartridge | 14d846e85524e743f83794e4628acaa29d24950d | 41deb8812cceacf47a057233e5a020c2ea04b786 | refs/heads/master | 2021-01-17T11:43:26.693347 | 2012-03-24T11:57:10 | 2012-03-24T11:57:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,031 | py |
from django.template.defaultfilters import slugify
from mezzanine.conf import settings
from mezzanine.pages.page_processors import processor_for
from mezzanine.utils.views import paginate
from cartridge.shop.models import Category, Product
@processor_for(Category)
def category_processor(request, page):
"""
Add paging/sorting to the products for the category.
"""
settings.use_editable()
products = Product.objects.published(for_user=request.user
).filter(page.category.filters()).distinct()
sort_options = [(slugify(option[0]), option[1])
for option in settings.SHOP_PRODUCT_SORT_OPTIONS]
sort_by = request.GET.get("sort", sort_options[0][0])
products = paginate(products.order_by(dict(sort_options).get(sort_by)),
request.GET.get("page", 1),
settings.SHOP_PER_PAGE_CATEGORY,
settings.MAX_PAGING_LINKS)
products.sort_by = sort_by
return {"products": products}
| [
"steve@jupo.org"
] | steve@jupo.org |
6a572f3e0982bbe239ad1b6c507d6fc5419da16c | ed7fde0483a4836bfc9ef3ab887cf1220559bfc7 | /phd/i18_remove_ref.py | 69d65e676406a2e3eea0335ebd761ec9ae61f6b8 | [] | no_license | cizydorczyk/python_scripts | 326b3142a3c6ce850237e8b13e229854699c6359 | b914dcff60727bbfaa2b32e1a634ca9ca354eeeb | refs/heads/master | 2023-05-11T14:29:44.548144 | 2023-05-05T19:39:28 | 2023-05-05T19:39:28 | 116,588,201 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 342 | py | import os
import argparse
from Bio import SeqIO
def RemoveReference(seq_to_remove, fasta, output_fasta):
print(fasta)
records = list(SeqIO.parse(fasta, "fasta"))
records2 = [i for i in records if i.id not in seq_to_remove.split(",")]
with open(output_fasta, "w") as outfile:
SeqIO.write(records2, outfile, "fasta")
| [
"conradizydorczyk@gmail.com"
] | conradizydorczyk@gmail.com |
c56397e5206162d095057fc4190656549c5a4445 | 6edb295c0eacc50655d83ece6566325fd0c46afb | /VBF/kfactors/plot.py | f59ed6e114e3f98f8fa4041c6fa080116a926eb1 | [
"MIT"
] | permissive | mcremone/PandaAnalysis | 74ffe64e77887d6622883b4d856d5ef61759eb92 | 078dc6ba435335ba1f8bceecb12459751ce3f5c3 | refs/heads/master | 2020-12-30T12:56:49.093685 | 2017-05-03T11:17:37 | 2017-05-03T11:17:37 | 91,379,788 | 0 | 5 | MIT | 2019-12-11T09:33:59 | 2017-05-15T20:05:33 | Python | UTF-8 | Python | false | false | 3,536 | py | #!/usr/bin/env python
from os import system,getenv
from sys import argv
import argparse
### SET GLOBAL VARIABLES ###
baseDir = '/home/snarayan/home000/store/kfactors/skimmed/'
parser = argparse.ArgumentParser(description='plot stuff')
parser.add_argument('--outdir',metavar='outdir',type=str)
args = parser.parse_args()
sname=argv[0]
argv=[]
import ROOT as root
from ROOT import gROOT
from PandaCore.Tools.Load import *
from PandaCore.Tools.Misc import *
from array import array
from math import sqrt
Load('Drawers','HistogramDrawer')
### DEFINE REGIONS ###
recoilBins = [200,250,300,350,400,500,600,1000]
nRecoilBins = len(recoilBins)-1
recoilBins = array('f',recoilBins)
ptBins = [100,120,160,200,250,300,350,400,450,500,550,600,650,700,800,900,1000,1200]
nPtBins = len(ptBins)-1
ptBins = array('f',ptBins)
plot = root.HistogramDrawer()
plot.SetTDRStyle()
plot.AddCMSLabel()
plot.Logy(True)
#plot.SetAbsMin(0.0001)
plot.InitLegend()
plotr = root.HistogramDrawer()
plotr.SetRatioStyle()
plotr.AddCMSLabel()
#plotr.InitLegend(.15,.6,.5,.8)
plotr.InitLegend()
counter=0
fzlo = root.TFile(baseDir+'z_lo.root'); tzlo = fzlo.Get('events')
fznlo = root.TFile(baseDir+'z_nlo.root'); tznlo = fznlo.Get('events')
fwlo = root.TFile(baseDir+'w_lo.root'); twlo = fwlo.Get('events')
fwnlo = root.TFile(baseDir+'w_nlo.root'); twnlo = fwnlo.Get('events')
ctmp = root.TCanvas()
def getDist(tree,var,bins,xlabel,cut='1==1'):
global counter
ctmp.cd()
if len(bins)==3:
h = root.TH1D('h%i'%counter,'h%i'%counter,bins[0],bins[1],bins[2])
scale=False
else:
h = root.TH1D('h%i'%counter,'h%i'%counter,len(bins)-1,bins)
scale=True
h.GetXaxis().SetTitle(xlabel)
h.GetYaxis().SetTitle('')
tree.Draw('%s>>h%i'%(var,counter),'weight*(%s)'%(cut))
if scale:
h.Scale(1,'width')
counter += 1
h.SetFillStyle(0)
return h
def plotDist(V,dists,cut):
if V=='Z':
tlo = tzlo
tnlo = tznlo
else:
tlo = twlo
tnlo = twnlo
toreturn = []
for d in dists:
hlo = getDist(tlo,d[0],d[1],d[2],cut)
hnlo = getDist(tnlo,d[0],d[1],d[2],cut)
toreturn.append((hlo,hnlo))
plot.AddHistogram(hlo,'%s LO'%(V),root.kSignal2)
plot.AddHistogram(hnlo,'%s NLO'%(V),root.kExtra2)
if len(d)<4 or d[3]==None:
plot.Draw(args.outdir,V+'_'+d[0])
else:
plot.Draw(args.outdir,V+'_'+d[3])
plot.Reset()
plot.AddCMSLabel()
return toreturn
def plotKFactors(V,hists,name):
# hists is a list of tuples (hlo, hnlo, label)
counter=0
for hlo,hnlo,label in hists:
hratio = hnlo.Clone()
hratio.Divide(hlo)
if counter==0:
hratio.SetMaximum(2); hratio.SetMinimum(0)
plotr.AddHistogram(hratio,label,root.kExtra1+counter)
hratioerr = hratio.Clone()
hratioerr.SetFillStyle(3004)
hratioerr.SetFillColorAlpha(root.kBlack,0.5)
hratioerr.SetLineWidth(0)
plotr.AddAdditional(hratioerr,'e2')
counter += 1
plotr.Draw(args.outdir,V+'_'+name)
plotr.Reset()
plotr.AddCMSLabel()
hmono = plotDist('Z',[('vpt',ptBins,'p_{T}^{V} [GeV]','vpt_monojet')],'njet>0 && jet1pt>100')[0]
hdi = plotDist('Z',[('vpt',ptBins,'p_{T}^{V} [GeV]','vpt_dijet')],'njet>1 && jet1pt>80 && jet2pt>40 && jet1eta*jet2eta<0')[0]
hvbf = plotDist('Z',[('vpt',ptBins,'p_{T}^{V} [GeV]','vpt_vbf')],'njet>1 && jet1pt>80 && jet2pt>40 && jet1eta*jet2eta<0 && mjj>1100')[0]
plotKFactors('Z',[(hmono[0],hmono[1],'Monojet'),
(hdi[0],hdi[1],'Dijet'),
(hvbf[0],hvbf[1],'VBF')],'kfactor_ptV')
#plotDist('W',[('vpt',recoilBins,'p_{T}^{V} [GeV]')])
| [
"sidn@mit.edu"
] | sidn@mit.edu |
45e663b5e49d193e5c11fda55a0aeea262da8551 | a8750439f200e4efc11715df797489f30e9828c6 | /codechef/ENP.py | 737381f0e7b74f7bfda64d769c2e14e42f248c68 | [] | no_license | rajlath/rkl_codes | f657174305dc85c3fa07a6fff1c7c31cfe6e2f89 | d4bcee3df2f501349feed7a26ef9828573aff873 | refs/heads/master | 2023-02-21T10:16:35.800612 | 2021-01-27T11:43:34 | 2021-01-27T11:43:34 | 110,989,354 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 86 | py | def encrypt(s, x, d):
encoded = ''
for i in range(len(s)):
encoded += | [
"raj.lath@gmail.com"
] | raj.lath@gmail.com |
36c6a3f4b29eb110001c86e07a73a605b39033d0 | facb8b9155a569b09ba66aefc22564a5bf9cd319 | /wp2/merra_scripts/03_model_fitting/rfRecon/402-tideGauge.py | 83be9c7c5e9e4512a93df74158af7243a1dab31b | [] | no_license | moinabyssinia/modeling-global-storm-surges | 13e69faa8f45a1244a964c5de4e2a5a6c95b2128 | 6e385b2a5f0867df8ceabd155e17ba876779c1bd | refs/heads/master | 2023-06-09T00:40:39.319465 | 2021-06-25T21:00:44 | 2021-06-25T21:00:44 | 229,080,191 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,696 | py | # -*- coding: utf-8 -*-
"""
Created on Mon May 7 11:39:00 2020
This program is designed to reconstruct
daily max surge using RF
@author: Michael Tadesse
"""
def reconstructRF():
"""
    reconstruct daily max surge using random forest regression
"""
#import packages
import os
import numpy as np
import pandas as pd
#from sklearn import metrics
#from scipy import stats
#import seaborn as sns
#import matplotlib.pyplot as plt
#from sklearn.model_selection import KFold
from datetime import datetime
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
#defining directories
dir_in = "/lustre/fs0/home/mtadesse/merraAllLagged"
dir_out = "/lustre/fs0/home/mtadesse/rfReconstruction"
surge_path = "/lustre/fs0/home/mtadesse/05_dmax_surge_georef"
# #load KFOLD result csv file
# os.chdir('F:\\06_eraint_results\\sonstig')
# kf_dat = pd.read_csv('eraint_randForest_kfold.csv')
# #edit the tg names to be usable later on
# editName = lambda x: x.split('.csv')[0]
# kf_dat['tg'] = pd.DataFrame(list(map(editName, kf_dat['tg'])), columns= ['tg'])
#cd to the lagged predictors directory
os.chdir(dir_in)
x = 402
y = 403
#looping through
for tg in range(x,y):
os.chdir(dir_in)
tg_name = os.listdir()[tg]
print(tg, tg_name)
#load predictor
pred = pd.read_csv(tg_name)
pred.drop('Unnamed: 0', axis = 1, inplace = True)
#add squared and cubed wind terms (as in WPI model)
pickTerms = lambda x: x.startswith('wnd')
wndTerms = pred.columns[list(map(pickTerms, pred.columns))]
wnd_sqr = pred[wndTerms]**2
wnd_cbd = pred[wndTerms]**3
pred = pd.concat([pred, wnd_sqr, wnd_cbd], axis = 1)
#standardize predictor data
dat = pred.iloc[:,1:]
scaler = StandardScaler()
print(scaler.fit(dat))
dat_standardized = pd.DataFrame(scaler.transform(dat), \
columns = dat.columns)
pred_standardized = pd.concat([pred['date'], dat_standardized], axis = 1)
#load surge data
os.chdir(surge_path)
surge = pd.read_csv(tg_name)
surge.drop('Unnamed: 0', axis = 1, inplace = True)
#remove duplicated surge rows
surge.drop(surge[surge['ymd'].duplicated()].index, axis = 0, inplace = True)
surge.reset_index(inplace = True)
surge.drop('index', axis = 1, inplace = True)
#adjust surge time format to match that of pred
time_str = lambda x: str(datetime.strptime(x, '%Y-%m-%d'))
surge_time = pd.DataFrame(list(map(time_str, surge['ymd'])), columns = ['date'])
time_stamp = lambda x: (datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
surge_new = pd.concat([surge_time, surge[['surge', 'lon', 'lat']]], axis = 1)
#merge predictors and surge to find common time frame
pred_surge = pd.merge(pred_standardized, surge_new.iloc[:,:2], on='date', how='right')
pred_surge.sort_values(by = 'date', inplace = True)
#find rows that have nans and remove them
row_nan = pred_surge[pred_surge.isna().any(axis =1)]
pred_surge.drop(row_nan.index, axis = 0, inplace = True)
pred_surge.reset_index(inplace = True)
pred_surge.drop('index', axis = 1, inplace = True)
#in case pred and surge don't overlap
if pred_surge.shape[0] == 0:
print('-'*80)
            print("Predictors and Surge don't overlap")
print('-'*80)
continue
pred_surge['date'] = pd.DataFrame(list(map(time_stamp, \
pred_surge['date'])), \
columns = ['date'])
#prepare data for training/testing
X = pred_surge.iloc[:,1:-1]
y = pd.DataFrame(pred_surge['surge'])
y = y.reset_index()
y.drop(['index'], axis = 1, inplace = True)
#apply PCA
#get the number of PCs used during validation
# pc_num = kf_dat.loc[kf_dat['tg'] == tg_name]['num_95pcs']
pca = PCA(0.95)
pca.fit(X)
X_pca = pca.transform(X)
{# #apply 10 fold cross validation
# kf = KFold(n_splits=10, random_state=29)
# metric_corr = []; metric_rmse = []; #combo = pd.DataFrame(columns = ['pred', 'obs'])
# for train_index, test_index in kf.split(X):
# X_train, X_test = X_pca[train_index], X_pca[test_index]
# y_train, y_test = y['surge'][train_index], y['surge'][test_index]
# #train regression model
# rf = RandomForestRegressor(n_estimator = 50, min_samples_leaf = 1)
# lm.fit(X_train, y_train)
# #predictions
# predictions = lm.predict(X_test)
# # pred_obs = pd.concat([pd.DataFrame(np.array(predictions)), \
# # pd.DataFrame(np.array(y_test))], \
# # axis = 1)
# # pred_obs.columns = ['pred', 'obs']
# # combo = pd.concat([combo, pred_obs], axis = 0)
# #evaluation matrix - check p value
# if stats.pearsonr(y_test, predictions)[1] >= 0.05:
# print("insignificant correlation!")
# continue
# else:
# #print(stats.pearsonr(y_test, predictions))
# metric_corr.append(stats.pearsonr(y_test, predictions)[0])
# #print(np.sqrt(metrics.mean_squared_error(y_test, predictions)))
# metric_rmse.append(np.sqrt(metrics.mean_squared_error(y_test, predictions)))
# #number of years used to train/test model
# num_years = np.ceil((pred_surge['date'][pred_surge.shape[0]-1] -\
# pred_surge['date'][0]).days/365)
}
longitude = surge['lon'][0]
latitude = surge['lat'][0]
num_pc = X_pca.shape[1] #number of principal components
# corr = np.mean(metric_corr)
# rmse = np.mean(metric_rmse)
# print('num_year = ', num_years, ' num_pc = ', num_pc ,'avg_corr = ',\
# np.mean(metric_corr), ' - avg_rmse (m) = ', \
# np.mean(metric_rmse), '\n')
#%%
#surge reconstruction
pred_for_recon = pred[~pred.isna().any(axis = 1)]
pred_for_recon = pred_for_recon.reset_index().drop('index', axis = 1)
#standardize predictor data
dat = pred_for_recon.iloc[:,1:]
scaler = StandardScaler()
print(scaler.fit(dat))
dat_standardized = pd.DataFrame(scaler.transform(dat), \
columns = dat.columns)
pred_standardized = pd.concat([pred_for_recon['date'], dat_standardized], axis = 1)
X_recon = pred_standardized.iloc[:, 1:]
#apply PCA
pca = PCA(num_pc) #use the same number of PCs used for training
pca.fit(X_recon)
X_pca_recon = pca.transform(X_recon)
#%%
#model preparation
#defining the rf model with number of trees and minimum leaves
rf = RandomForestRegressor(n_estimators=50, min_samples_leaf=1, \
random_state = 29)
rf.fit(X_pca, y)
#get prediction interval
def pred_ints(model, X_pca_recon, percentile = 95):
"""
function to construct prediction interval
taking into account the result of each
regression tree
"""
err_down = [];
err_up = [];
preds= [];
for pred in model.estimators_:
preds.append(pred.predict(X_pca_recon))
preds = np.vstack(preds).T
err_down = np.percentile(preds, (100 - percentile)/2., axis = 1, \
keepdims = True)
err_up = np.percentile(preds, 100 - (100 - percentile)/2., axis =1, \
keepdims = True)
return err_down.reshape(-1), err_up.reshape(-1)
#compute 95% prediction intervals
err_down, err_up = pred_ints(rf, X_pca_recon, percentile = 95);
#reconstructed surge goes here
truth = rf.predict(X_pca_recon);
correct = 0.;
for i, val in enumerate(truth):
if err_down[i] <= val <= err_up[i]:
correct +=1
print(correct*100/len(truth), '\n')
#final dataframe
final_dat = pd.concat([pred_standardized['date'], \
pd.DataFrame([truth, err_down, err_up]).T], axis = 1)
final_dat['lon'] = longitude
final_dat['lat'] = latitude
        final_dat.columns = ['date', 'surge_reconstructed', 'pred_int_lower',\
                             'pred_int_upper', 'lon', 'lat']
{#plot - optional
# time_stamp = lambda x: (datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
# final_dat['date'] = pd.DataFrame(list(map(time_stamp, final_dat['date'])), columns = ['date'])
# surge['date'] = pd.DataFrame(list(map(time_stamp, surge['date'])), columns = ['date'])
# sns.set_context('notebook', font_scale = 2)
# plt.figure()
# plt.plot(final_dat['date'], final_dat['mean'], color = 'green')
# plt.scatter(surge['date'], surge['surge'], color = 'blue')
#prediction intervals
# plt.plot(final_dat['date'], final_dat['obs_ci_lower'], color = 'red', linestyle = "--", lw = 0.8)
# plt.plot(final_dat['date'], final_dat['obs_ci_upper'], color = 'red', linestyle = "--", lw = 0.8)
#confidence intervals
# plt.plot(final_dat['date'], final_dat['mean_ci_upper'], color = 'black', linestyle = "--", lw = 0.8)
# plt.plot(final_dat['date'], final_dat['mean_ci_lower'], color = 'black', linestyle = "--", lw = 0.8)
}
#save df as cs - in case of interruption
os.chdir(dir_out)
final_dat.to_csv(tg_name)
#cd to dir_in
os.chdir(dir_in)
reconstructRF()
| [
"michaelg.tadesse@gmail.com"
] | michaelg.tadesse@gmail.com |
ad1d4b5235569f72c77e592f75ebac24c3935bd0 | d9b53673b899a9b842a42060740b734bf0c63a31 | /leetcode/python/easy/p590_postorder.py | 5954e459ee2b9d55d61a3032eef31365502caca2 | [
"Apache-2.0"
] | permissive | kefirzhang/algorithms | a8d656774b576295625dd663154d264cd6a6a802 | 549e68731d4c05002e35f0499d4f7744f5c63979 | refs/heads/master | 2021-06-13T13:05:40.851704 | 2021-04-02T07:37:59 | 2021-04-02T07:37:59 | 173,903,408 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 485 | py | # Definition for a Node.
class Node:
def __init__(self, val, children):
self.val = val
self.children = children
class Solution:
def postorder(self, root: 'Node'):
ret = []
def dfs(node):
if node is None:
return
for child in node.children:
dfs(child)
ret.append(node.val)
dfs(root)
return ret
root = Node(1, [])
slu = Solution()
print(slu.postorder(root))
| [
"8390671@qq.com"
] | 8390671@qq.com |
3731f5f7f3dacddba1e20322528e5137c82ad1d6 | 8bcaec3e096158f875e08cc6c18df8f7ff1e2586 | /codechef/DEC20B-codechef/even_pair_sum.py | 779696b154a80646ec9874c9dde3f9f8c7ff1ced | [] | no_license | Aryamanz29/DSA-CP | 5693d7e169f3a165a64771efd713d1fa8dd3b418 | 306ebd4b623ec79c2657eeba1ff1ce0fc294be50 | refs/heads/master | 2023-04-08T23:33:20.406720 | 2021-04-11T20:27:26 | 2021-04-11T20:27:26 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 685 | py | def get_even(n):
return n//2
for _ in range(int(input())):
a, b = list(map(int, input().split()))
less = min(a, b)
if a == less:
even = get_even(b)
odd = b-even
else:
even = get_even(a)
odd = a-even
# for i in range(1, a+1):
# for j in range(1, b+1):
# if (i+j) % 2 == 0:
# print(i, j)
# print("-----")
# print("=======")
# ans = 0
less_even = get_even(less)
less_odd = less - less_even
ans = less_even*even + less_odd*odd
# for i in range(1, less+1):
# if i % 2 == 0:
# ans += even
# else:
# ans += odd
print(ans)
| [
"sankalp123427@gmail.com"
] | sankalp123427@gmail.com |
110b91f1cab239e9e82f0af4b73f1f032a8f1ff8 | c717b260750d9c733b40e668d2841dee92167699 | /hardware/mechanics/electronics_mount/main_plate/cnc/drill_4-40_insert_hole.py | e3384f36753a7e295809bf5c3374bc87e3420f2c | [] | no_license | hanhanhan-kim/noah_motion_system | b68e3fc6db1a0faea272ead7a22a043dfb80a6c8 | 5bea2750eac638b9f90720b10b5e2516f108c65b | refs/heads/master | 2022-11-06T08:20:49.977792 | 2017-10-06T00:12:05 | 2017-10-06T00:12:05 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 996 | py | from __future__ import print_function
import os
import sys
from py2gcode import gcode_cmd
from py2gcode import cnc_dxf
feedrate = 50.0
fileName = 'main_plate.dxf'
stockThickness = 0.25
drillMargin = 0.125
startZ = 0.0
stopZ = -(stockThickness + drillMargin)
safeZ = 0.3
stepZ = 0.05
startDwell = 0.5
prog = gcode_cmd.GCodeProg()
prog.add(gcode_cmd.GenericStart())
prog.add(gcode_cmd.Space())
prog.add(gcode_cmd.FeedRate(feedrate))
param = {
'fileName' : fileName,
'layers' : ['4-40_insert_hole'],
'dxfTypes' : ['CIRCLE'],
'startZ' : startZ,
'stopZ' : stopZ,
'safeZ' : safeZ,
'stepZ' : stepZ,
'startDwell' : startDwell,
}
drill = cnc_dxf.DxfDrill(param)
prog.add(drill)
prog.add(gcode_cmd.Space())
prog.add(gcode_cmd.End(),comment=True)
baseName, dummy = os.path.splitext(__file__)
fileName = '{0}.ngc'.format(baseName)
print('generating: {0}'.format(fileName))
prog.write(fileName)
| [
"will@iorodeo.com"
] | will@iorodeo.com |
6adf71907c74640fac9ba5a3035be2f8fa45d1b4 | 52908b901ebebbecf94f68c5ed4edb748d8b83d7 | /chatette/parsing/lexing/rule_percent_gen.py | 71f87422992a30e781efff7485c03d85652b9b94 | [
"MIT"
] | permissive | ImanMesgaran/Chatette | 6edef61740ba75ead35e240350359f1c3ee2de3c | fd22b6c2e4a27b222071c93772c2ae99387aa5c3 | refs/heads/master | 2023-07-01T16:36:39.865660 | 2021-06-08T20:14:24 | 2021-06-08T20:22:05 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,988 | py | # coding: utf-8
"""
Module `chatette.parsing.lexing.rule_percent_gen`
Contains the class representing the lexing rule meant to tokenize
percentage for the random generation modifiers.
"""
from chatette.parsing.lexing.lexing_rule import LexingRule
from chatette.parsing.lexing import LexicalToken, TerminalType
from chatette.parsing.lexing.rule_whitespaces import RuleWhitespaces
class RulePercentGen(LexingRule):
def _apply_strategy(self, **kwargs):
while self._text[self._next_index].isdigit():
self._next_index += 1
self._update_furthest_matched_index()
percentage = self._text[self._start_index:self._next_index]
if self._text[self._next_index] != '.':
if len(percentage) == 0:
self.error_msg = \
"Invalid token. Expected a percentage for the random " + \
"generation modifier."
return False
else:
percentage += '.'
self._next_index += 1
self._update_furthest_matched_index()
start_index_non_int_part = self._next_index
while self._text[self._next_index].isdigit():
self._next_index += 1
self._update_furthest_matched_index()
if self._next_index == start_index_non_int_part:
self.error_msg = \
"Invalid token. Cannot have a percentage with an empty " + \
"non-integral part."
return False
percentage += self._text[start_index_non_int_part:self._next_index]
if not self._try_to_match_rule(RuleWhitespaces):
self.error_msg = None
# Ignore tokens as this whitespace is not meaningful
if self._text[self._next_index] == '%':
self._next_index += 1
self._update_furthest_matched_index()
self._tokens.append(LexicalToken(TerminalType.percentgen, percentage))
return True
| [
"simon.gustin@hotmail.com"
] | simon.gustin@hotmail.com |
5f3d1ab0f166594fa5cfca4b4bae63f0cccd32fe | adc6d8ee596e4710c3241332758bb6990bdd8914 | /Imagenes doc/Evaluación/RE.py | b3429635eb088404f911957ccaa23ada29324936 | [] | no_license | NatalyTinoco/Trabajo-de-grado_Artefactos | cf9491c47a8a23ce5bab7c52498093a61319f834 | 5cc4e009f94c871c7ed0d820eb113398ac66ec2f | refs/heads/master | 2022-03-20T00:51:48.420253 | 2019-11-24T19:10:40 | 2019-11-24T19:10:40 | 197,964,659 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,135 | py | # -*- coding: utf-8 -*-
"""
Created on Wed Aug 7 21:59:15 2019
@author: Nataly
"""
from matplotlib import pyplot as plt
import cv2
import numpy as np
i=0
file='00000.jpg'
seg='00000_seg.jpg'
img = cv2.imread(file)
def tloga(img):
img = (np.log(img+1)/(np.log(1+np.max(img))))*255
img = np.array(img,dtype=np.uint8)
return img
img=tloga(img)
img=cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
segima=img.copy()
imaROI=cv2.imread(seg,0)
imaROI1=imaROI.copy()
imaROI1=imaROI*-1
imaROI=cv2.normalize(imaROI, None, 0, 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC3)
imaROI1=cv2.normalize(imaROI1, None, 0, 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC3)
for z in range(3):
img[:,:,z]= img[:,:,z]*(imaROI)
plt.imshow(img)
plt.show()
#plt.hist(img.ravel(),[256])
#plt.show()
for i in range(3):
hist = cv2.calcHist([img], [i], None, [256], [1, 256])
plt.plot(hist)
plt.show()
for z in range(3):
segima[:,:,z]= segima[:,:,z]*imaROI1
plt.imshow(segima)
plt.show()
for i in range(3):
hist = cv2.calcHist([segima], [i], None, [256], [1, 256])
plt.plot(hist)
plt.show()
#plt.imshow(imaROI1,'Greys')
#plt.show
| [
"51056570+NatalyTinoco@users.noreply.github.com"
] | 51056570+NatalyTinoco@users.noreply.github.com |
75caa21cdb6da2ed748dafe135772382e987e81f | c1eb69dc5dc5b83d987d1bda0bd74a2d7d912fdf | /articles/migrations/0031_merge.py | d3d5ce4a90180e735765e1096d04c22ba755cb79 | [
"MIT"
] | permissive | CIGIHub/opencanada | 47c4e9268343aaaf0fe06b62c1838871968a0b87 | 6334ff412addc0562ac247080194e5d182e8e924 | refs/heads/staging | 2023-05-07T16:02:35.915344 | 2021-05-26T18:10:09 | 2021-05-26T18:10:09 | 36,510,047 | 8 | 2 | MIT | 2020-07-06T14:22:09 | 2015-05-29T14:43:28 | Python | UTF-8 | Python | false | false | 315 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('articles', '0030_auto_20150806_2136'),
('articles', '0030_articlecategory_include_main_image'),
]
operations = [
]
| [
"csimpson@cigionline.org"
] | csimpson@cigionline.org |
a20131c6b9f7b27a6ae04fe5be74645df91f9be4 | 83de24182a7af33c43ee340b57755e73275149ae | /aliyun-python-sdk-polardb/aliyunsdkpolardb/request/v20170801/OpenAITaskRequest.py | 9d342c7d8e6e785ae43c35b9c816ca0b25ef9696 | [
"Apache-2.0"
] | permissive | aliyun/aliyun-openapi-python-sdk | 4436ca6c57190ceadbc80f0b1c35b1ab13c00c7f | 83fd547946fd6772cf26f338d9653f4316c81d3c | refs/heads/master | 2023-08-04T12:32:57.028821 | 2023-08-04T06:00:29 | 2023-08-04T06:00:29 | 39,558,861 | 1,080 | 721 | NOASSERTION | 2023-09-14T08:51:06 | 2015-07-23T09:39:45 | Python | UTF-8 | Python | false | false | 3,272 | py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkpolardb.endpoint import endpoint_data
class OpenAITaskRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'polardb', '2017-08-01', 'OpenAITask','polardb')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_ResourceOwnerId(self): # Long
return self.get_query_params().get('ResourceOwnerId')
def set_ResourceOwnerId(self, ResourceOwnerId): # Long
self.add_query_param('ResourceOwnerId', ResourceOwnerId)
def get_NodeType(self): # String
return self.get_query_params().get('NodeType')
def set_NodeType(self, NodeType): # String
self.add_query_param('NodeType', NodeType)
def get_DescribeType(self): # String
return self.get_query_params().get('DescribeType')
def set_DescribeType(self, DescribeType): # String
self.add_query_param('DescribeType', DescribeType)
def get_ResourceGroupId(self): # String
return self.get_query_params().get('ResourceGroupId')
def set_ResourceGroupId(self, ResourceGroupId): # String
self.add_query_param('ResourceGroupId', ResourceGroupId)
def get_Password(self): # String
return self.get_query_params().get('Password')
def set_Password(self, Password): # String
self.add_query_param('Password', Password)
def get_ResourceOwnerAccount(self): # String
return self.get_query_params().get('ResourceOwnerAccount')
def set_ResourceOwnerAccount(self, ResourceOwnerAccount): # String
self.add_query_param('ResourceOwnerAccount', ResourceOwnerAccount)
def get_DBClusterId(self): # String
return self.get_query_params().get('DBClusterId')
def set_DBClusterId(self, DBClusterId): # String
self.add_query_param('DBClusterId', DBClusterId)
def get_OwnerAccount(self): # String
return self.get_query_params().get('OwnerAccount')
def set_OwnerAccount(self, OwnerAccount): # String
self.add_query_param('OwnerAccount', OwnerAccount)
def get_OwnerId(self): # Long
return self.get_query_params().get('OwnerId')
def set_OwnerId(self, OwnerId): # Long
self.add_query_param('OwnerId', OwnerId)
def get_Username(self): # String
return self.get_query_params().get('Username')
def set_Username(self, Username): # String
self.add_query_param('Username', Username)
| [
"sdk-team@alibabacloud.com"
] | sdk-team@alibabacloud.com |
93c6d4f655e6cecaf0204c9fde501bd9f14f9b0a | 233f97c6f360d478bf975016dd9e9c2be4a64adb | /program42.py | 583befdf151b9f7a4196e04186dc04cce05c8aa2 | [] | no_license | unknownboyy/GUVI | 3dbd1bb2bc6b3db52f5f79491accd6c56a2dec45 | d757dd473c4f5eef526a516cf64a1757eb235869 | refs/heads/master | 2020-03-27T00:07:12.449280 | 2019-03-19T12:57:03 | 2019-03-19T12:57:03 | 145,595,379 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 170 | py | a,b=map(int,input().split())
c,d=map(int,input().split())
if a==c:
print(a)
elif a>c and b>=d:
print(-1)
elif a<c and b<=d:
print(-1)
else:
x1=a;x2=b
| [
"ankitagrawal11b@gmail.com"
] | ankitagrawal11b@gmail.com |
8d820380e47b4db8af57aa047a0dc8cc8e697560 | d6fe71e3e995c03b8f5151ab1d53411b77b325ba | /walklist_api_service/models/response.py | 2168793ce5bbd0c675b43d9a5a3dd17848c6d775 | [] | no_license | mwilkins91/petpoint-scraper | 95468ae9951deaa8bd3bef7d88c0ff660146c1a3 | dd0c60c68fc6a7d11358aa63d28fdf07fff3c7cd | refs/heads/master | 2022-11-27T00:02:50.654404 | 2020-08-09T18:41:40 | 2020-08-09T18:41:40 | 286,180,666 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,652 | py | # coding: utf-8
"""
The Enrichment List
The THS enrichment list # noqa: E501
OpenAPI spec version: 1.0.0
Contact: contactme@markwilkins.co
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class Response(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'payload': 'AnyOfResponsePayload',
'meta': 'ResponseMeta'
}
attribute_map = {
'payload': 'payload',
'meta': 'meta'
}
def __init__(self, payload=None, meta=None): # noqa: E501
"""Response - a model defined in Swagger""" # noqa: E501
self._payload = None
self._meta = None
self.discriminator = None
if payload is not None:
self.payload = payload
if meta is not None:
self.meta = meta
@property
def payload(self):
"""Gets the payload of this Response. # noqa: E501
:return: The payload of this Response. # noqa: E501
:rtype: AnyOfResponsePayload
"""
return self._payload
@payload.setter
def payload(self, payload):
"""Sets the payload of this Response.
:param payload: The payload of this Response. # noqa: E501
:type: AnyOfResponsePayload
"""
self._payload = payload
@property
def meta(self):
"""Gets the meta of this Response. # noqa: E501
:return: The meta of this Response. # noqa: E501
:rtype: ResponseMeta
"""
return self._meta
@meta.setter
def meta(self, meta):
"""Sets the meta of this Response.
:param meta: The meta of this Response. # noqa: E501
:type: ResponseMeta
"""
self._meta = meta
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
        if issubclass(Response, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Response):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"contactme@markwilkins.co"
] | contactme@markwilkins.co |
394db0d9907aa1558d646da41d52cb08d950dc1c | 0652d264baea6238c0b581f17fdf2ff6cb45f537 | /websauna/system/form/csrf.py | 79d964adf49107a108eb25f1ef5598df2cef83c9 | [
"MIT",
"Apache-2.0"
] | permissive | gitter-badger/websauna | f12fc57322c9c86bb2859a30c346858e8ede209e | 09c07d80a831d1f718ec05aea0f85293a1198063 | refs/heads/master | 2021-01-22T19:15:15.071709 | 2016-04-21T15:10:30 | 2016-04-21T15:10:30 | 56,784,419 | 0 | 0 | null | 2016-04-21T15:19:24 | 2016-04-21T15:19:24 | null | UTF-8 | Python | false | false | 456 | py | """Deform CSRF token support."""
import colander
import deform
from pyramid_deform import deferred_csrf_validator
from pyramid_deform import deferred_csrf_value
def add_csrf(schema: colander.Schema):
"""Add a hidden CSRF field on the schema."""
csrf_token = colander.SchemaNode(colander.String(), name="csrf_token", widget=deform.widget.HiddenWidget(), default=deferred_csrf_value, validator=deferred_csrf_validator,)
schema.add(csrf_token)
| [
"mikko@opensourcehacker.com"
] | mikko@opensourcehacker.com |
89a49c4c96b660fbd71aa567dc005a322340dde8 | 330899fd4a9653e05e2a09e0a4f30c119af97ad4 | /python/hidet/transforms/common/scope.py | e171328b5e5e34acb2c17ac4d9e0d7127f5ec878 | [
"Apache-2.0"
] | permissive | yaoyaoding/hidet-artifacts | f8a4707c7fc28aa7bfa4dab3a9f2a9387c020f99 | f2e9767bb2464bd0592a8ec0b276f97481f13df2 | refs/heads/main | 2023-04-30T13:12:57.350002 | 2023-04-24T19:37:34 | 2023-04-24T19:37:34 | 551,692,225 | 3 | 1 | Apache-2.0 | 2022-11-01T23:25:17 | 2022-10-14T22:40:28 | Python | UTF-8 | Python | false | false | 5,309 | py | from typing import List, Dict, Optional, ContextManager
from hidet.ir.type import ScalarType, FuncType
from hidet.ir.expr import Expr, Var, BitwiseAnd, LeftShift, BitwiseOr
from hidet.ir.functors import collect
from hidet.ir.stmt import LetStmt, ForStmt
from hidet.ir.func import Function
from hidet.ir.functors import FuncStmtExprRewriter
class Scope:
"""
Every variable (i.e., parameter variable, local variable, loop variable, let variable) much be declared or defined
in a scope. Parameter, local and loop variable should be declared, because we should not move it place. Every
let variable should be defined (with their value).
"""
def __init__(self, stack, scope_stmt):
self.stack: 'ScopeStack' = stack
self.scope_stmt = scope_stmt
self.level = None
self.parent: Optional['Scope'] = None
self.declare_vars: List[Var] = []
self.defined_vars: List[Var] = []
self.var2value: Dict[Var, Optional[Expr]] = {}
self.defined_predicates: List[List[Expr]] = []
self.predicate_vars: List[Var] = []
def __enter__(self):
scopes = self.stack.scopes
self.parent = scopes[0] if len(scopes) > 0 else None
self.level = len(scopes)
scopes.append(self)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
scope = self.stack.scopes.pop()
assert scope is self
def declare(self, var: Var):
# declare a variable at current scope
self.declare_vars.append(var)
self.var2value[var] = None
assert var not in self.stack.var2scope
self.stack.var2scope[var] = self
def define(self, var: Var, value: Expr):
self.defined_vars.append(var)
self.var2value[var] = value
assert var not in self.stack.var2scope
self.stack.var2scope[var] = self
def define_predicate(self, predicate: Expr) -> Expr:
if len(self.defined_predicates) == 0 or len(self.defined_predicates[-1]) == 32:
var = Var('p', type=ScalarType('uint32'))
self.defined_predicates.append([])
self.predicate_vars.append(var)
self.stack.var2scope[var] = self
self.defined_predicates[-1].append(predicate)
mask = 1 << (len(self.defined_predicates[-1]) - 1)
return BitwiseAnd(self.predicate_vars[-1], mask)
def wrap(self, body):
# wrap the body with defined variables at current scope
bind_vars = self.defined_vars
bind_values = [self.var2value[var] for var in bind_vars]
for p_var, p_exprs in zip(self.predicate_vars, self.defined_predicates):
bind_vars.append(p_var)
bind_values.append(BitwiseOr.join_list([LeftShift(p, idx) for idx, p in enumerate(p_exprs)]))
if len(bind_vars) > 0:
ret = LetStmt(bind_vars, bind_values, body)
else:
ret = body
for var in self.defined_vars + self.declare_vars:
del self.stack.var2scope[var]
return ret
class ScopeStack:
def __init__(self):
self.scopes = []
self.var2scope: Dict[Var, Scope] = {}
def find_scope_for_expr(self, expr) -> 'Scope':
used_vars = collect(expr, Var)
levels = [self.var2scope[used_var].level for used_var in used_vars if not isinstance(used_var.type, FuncType)]
max_level = max(levels)
return self.scopes[max_level]
def new_scope(self, scope_stmt=None):
return Scope(self, scope_stmt)
def current(self) -> Scope:
assert len(self.scopes) > 0
return self.scopes[-1]
class FuncStmtExprRewriterWithScope(FuncStmtExprRewriter):
def __init__(self, use_memo=False):
super().__init__(use_memo=use_memo)
self.scope_stack = ScopeStack()
def new_scope(self, stmt=None) -> ContextManager[Scope]:
return self.scope_stack.new_scope(stmt)
def scope_to_define(self, expr: Expr) -> Scope:
return self.scope_stack.find_scope_for_expr(expr)
def visit_Function(self, func: Function):
with self.new_scope(None) as scope:
for extern_var in func.extern_vars:
scope.declare(extern_var)
for param in func.params:
scope.declare(param)
for local_var in func.local_vars:
scope.declare(local_var)
for local_const_var, _ in func.local_const_vars:
scope.declare(local_const_var)
body = scope.wrap(self.visit(func.body))
return Function(func.name, func.params, body, func.ret_type, kind=func.kind, local_vars=func.local_vars,
local_const_vars=func.local_const_vars, extern_vars=func.extern_vars, attrs=func.attrs)
def visit_ForStmt(self, stmt: ForStmt):
with self.new_scope(stmt) as scope:
self.visit(stmt.extent)
scope.declare(stmt.loop_var)
body = scope.wrap(self.visit(stmt.body))
return ForStmt(stmt.loop_var, stmt.extent, stmt.unroll, body)
def visit_LetStmt(self, stmt: LetStmt):
with self.new_scope(stmt) as scope:
for var, value in zip(stmt.bind_vars, stmt.bind_values):
scope.define(var, self.visit(value))
return scope.wrap(self.visit(stmt.body))
| [
"dingyaoyao.cs@gmail.com"
] | dingyaoyao.cs@gmail.com |
7834b8677f64f35c4cc8daa3874916b64985b960 | d9f7123433fe473cfa2fd5c3438251f83ffb326c | /apps/friends/migrations/0001_initial.py | 16167c1f6e319ab036c0be97c12a1794ba42f116 | [] | no_license | mazurbeam/friends | 6c2d201220db52bc85eb1869fd6685eee372e920 | 1dc2432ad371113c0979158053c821a449ebbc6c | refs/heads/master | 2021-01-01T18:27:12.875643 | 2017-07-25T20:46:08 | 2017-07-25T20:46:08 | 98,345,240 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 594 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.10 on 2017-07-25 17:42
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('login', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Friend',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('friends', models.ManyToManyField(to='login.User')),
],
),
]
| [
"mazurbeam@gmail.com"
] | mazurbeam@gmail.com |
4ec44310093c3c6d0fdd8224e882899b6e273eb1 | 009df7ad499b19a4df066160cf0c7d8b20355dfb | /src/the_tale/the_tale/game/actions/relations.py | 88e1482deff3017d5f67533c4ecbe384f063fd64 | [
"BSD-3-Clause"
] | permissive | devapromix/the-tale | c0804c7475e877f12f29444ddbbba025561d3412 | 2a10efd3270734f8cf482b4cfbc5353ef8f0494c | refs/heads/develop | 2020-03-28T20:26:30.492292 | 2018-10-07T17:32:46 | 2018-10-07T17:32:46 | 149,070,887 | 1 | 0 | BSD-3-Clause | 2018-10-07T17:32:47 | 2018-09-17T04:57:50 | Python | UTF-8 | Python | false | false | 3,039 | py |
import smart_imports
smart_imports.all()
UNINITIALIZED_STATE = 'uninitialized'
class ACTION_EVENT(rels_django.DjangoEnum):
records = (('DISHONORABLE', 0, 'бесчестный герой'),
('NOBLE', 1, 'благородный герой'),
('AGGRESSIVE', 2, 'аггрессивный герой'),
('PEACEABLE', 3, 'миролюбивый герой'),)
class ACTION_HABIT_MODE(rels_django.DjangoEnum):
records = (('AGGRESSIVE', 0, 'агрессивное действие'),
('PEACEFUL', 1, 'мирное действие'),
('COMPANION', 2, 'зависит от спутника'))
class ACTION_EVENT_REWARD(rels_django.DjangoEnum):
priority = rels.Column(unique=False)
records = (('NOTHING', 0, 'без награды', c.HABIT_EVENT_NOTHING_PRIORITY),
('MONEY', 1, 'деньги', c.HABIT_EVENT_MONEY_PRIORITY),
('ARTIFACT', 2, 'артефакт', c.HABIT_EVENT_ARTIFACT_PRIORITY),
('EXPERIENCE', 3, 'опыт', c.HABIT_EVENT_EXPERIENCE_PRIORITY))
class ACTION_TYPE(rels_django.DjangoEnum):
meta = rels.Column(unique=False)
technical = rels.Column(unique=False)
records = (('IDLENESS', 0, 'герой бездельничает', False, False),
('QUEST', 1, 'герой выполненяет задание', False, False),
('MOVE_TO', 2, 'герой путешествует между городами', False, False),
('BATTLE_PVE_1X1', 3, 'герой сражается 1x1 с монстром', False, False),
('RESURRECT', 4, 'герой воскресает', False, False),
('IN_PLACE', 5, 'герой в городе', False, False),
('REST', 6, 'герой лечится', False, False),
('EQUIPPING', 7, 'герой экипируется', False, False),
('TRADING', 8, 'герой торгует', False, False),
('MOVE_NEAR_PLACE', 9, 'герой путешествует около города', False, False),
('REGENERATE_ENERGY', 10, 'герой восстановливает энергию Хранителю', False, False),
('DO_NOTHING', 11, 'техническое действие для особых действий героя в заданиях', False, False),
('META_PROXY', 12, 'техническое прокси-действие для взаимодействия героев', False, True),
('ARENA_PVP_1X1', 13, 'герой сражается 1x1 с другим героем', True, False),
('TEST', 14, 'техническое действие для тестов', False, True),
('HEAL_COMPANION', 15, 'герой ухаживает за спутником', False, False),
('FIRST_STEPS', 16, 'действия героя сразу после иницииации', False, False))
| [
"a.eletsky@gmail.com"
] | a.eletsky@gmail.com |
97f46c4e8a69b34ea9056ea3637ad2af8618fcba | e3365bc8fa7da2753c248c2b8a5c5e16aef84d9f | /indices/vetat.py | 5d3176c7883a2ee42e63a5db54a708748e1421c9 | [] | no_license | psdh/WhatsintheVector | e8aabacc054a88b4cb25303548980af9a10c12a8 | a24168d068d9c69dc7a0fd13f606c080ae82e2a6 | refs/heads/master | 2021-01-25T10:34:22.651619 | 2015-09-23T11:54:06 | 2015-09-23T11:54:06 | 42,749,205 | 2 | 3 | null | 2015-09-23T11:54:07 | 2015-09-18T22:06:38 | Python | UTF-8 | Python | false | false | 43 | py | ii = [('DibdTRL.py', 1), ('WordWYR.py', 1)] | [
"prabhjyotsingh95@gmail.com"
] | prabhjyotsingh95@gmail.com |
feb2dc16e2789132a98ac763eb08a63bb6ff086e | 2ca7eda87460f702bec33708d8a494d8c701a7b2 | /tensorflow/python/keras/mixed_precision/experimental/device_compatibility_check.py | 9279c37bb527a972aa8867a79d31b0c5e9777dc4 | [
"Apache-2.0"
] | permissive | xiaolinpeter/tensorflow | 7f931b294a434d731185131c22034c6b68cdf2b7 | 28aa08fc1e017355fc1118913bd988cf7890bec5 | refs/heads/master | 2021-05-19T06:55:47.491635 | 2020-03-31T09:02:33 | 2020-03-31T09:05:54 | 251,556,442 | 2 | 0 | Apache-2.0 | 2020-03-31T09:24:32 | 2020-03-31T09:24:31 | null | UTF-8 | Python | false | false | 7,115 | py | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains function to log if devices are compatible with mixed precision."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
from tensorflow.python.client import device_lib
from tensorflow.python.eager import context
from tensorflow.python.framework import config
from tensorflow.python.framework import gpu_util
from tensorflow.python.platform import tf_logging
_COMPAT_CHECK_PREFIX = 'Mixed precision compatibility check (mixed_float16): '
_COMPAT_CHECK_OK_PREFIX = _COMPAT_CHECK_PREFIX + 'OK'
_COMPAT_CHECK_WARNING_PREFIX = _COMPAT_CHECK_PREFIX + 'WARNING'
_COMPAT_CHECK_WARNING_SUFFIX = (
'If you will use compatible GPU(s) not attached to this host, e.g. by '
'running a multi-worker model, you can ignore this warning. This message '
'will only be logged once')
def _dedup_strings(device_strs):
"""Groups together consecutive identical strings.
For example, given:
['GPU 1', 'GPU 2', 'GPU 2', 'GPU 3', 'GPU 3', 'GPU 3']
This function returns:
['GPU 1', 'GPU 2 (x2)', 'GPU 3 (x3)']
Args:
device_strs: A list of strings, each representing a device.
Returns:
A copy of the input, but identical consecutive strings are merged into a
single string.
"""
new_device_strs = []
for device_str, vals in itertools.groupby(device_strs):
num = len(list(vals))
if num == 1:
new_device_strs.append(device_str)
else:
new_device_strs.append('%s (x%d)' % (device_str, num))
return new_device_strs
def _log_device_compatibility_check(policy_name, device_attr_list):
"""Logs a compatibility check if the devices support the policy.
Currently only logs for the policy mixed_float16.
Args:
policy_name: The name of the dtype policy.
device_attr_list: A list of DeviceAttributes.
"""
if policy_name != 'mixed_float16':
# TODO(b/145686977): Log if the policy is 'mixed_bfloat16'. This requires
# checking if a TPU is available.
return
supported_device_strs = []
unsupported_device_strs = []
for device in device_attr_list:
if device.device_type == 'GPU':
name, cc = gpu_util.compute_capability_from_device_desc(device)
name = name or 'Unknown GPU'
if cc:
device_str = '%s, compute capability %s.%s' % (name, cc[0], cc[1])
if cc >= (7, 0):
supported_device_strs.append(device_str)
else:
unsupported_device_strs.append(device_str)
else:
unsupported_device_strs.append(
name + ', no compute capability (probably not an Nvidia GPU)')
if unsupported_device_strs:
warning_str = _COMPAT_CHECK_WARNING_PREFIX + '\n'
if supported_device_strs:
warning_str += ('Some of your GPUs may run slowly with dtype policy '
'mixed_float16 because they do not all have compute '
'capability of at least 7.0. Your GPUs:\n')
elif len(unsupported_device_strs) == 1:
warning_str += ('Your GPU may run slowly with dtype policy mixed_float16 '
'because it does not have compute capability of at least '
'7.0. Your GPU:\n')
else:
warning_str += ('Your GPUs may run slowly with dtype policy '
'mixed_float16 because they do not have compute '
'capability of at least 7.0. Your GPUs:\n')
for device_str in _dedup_strings(supported_device_strs +
unsupported_device_strs):
warning_str += ' ' + device_str + '\n'
warning_str += ('See https://developer.nvidia.com/cuda-gpus for a list of '
'GPUs and their compute capabilities.\n')
warning_str += _COMPAT_CHECK_WARNING_SUFFIX
tf_logging.warn(warning_str)
elif not supported_device_strs:
tf_logging.warn('%s\n'
'The dtype policy mixed_float16 may run slowly because '
'this machine does not have a GPU. Only Nvidia GPUs with '
'compute capability of at least 7.0 run quickly with '
'mixed_float16.\n%s' % (_COMPAT_CHECK_WARNING_PREFIX,
_COMPAT_CHECK_WARNING_SUFFIX))
elif len(supported_device_strs) == 1:
tf_logging.info('%s\n'
'Your GPU will likely run quickly with dtype policy '
'mixed_float16 as it has compute capability of at least '
'7.0. Your GPU: %s' % (_COMPAT_CHECK_OK_PREFIX,
supported_device_strs[0]))
else:
tf_logging.info('%s\n'
'Your GPUs will likely run quickly with dtype policy '
'mixed_float16 as they all have compute capability of at '
'least 7.0' % _COMPAT_CHECK_OK_PREFIX)
_logged_compatibility_check = False
def log_device_compatibility_check(policy_name, skip_local):
"""Logs a compatibility check if the devices support the policy.
Currently only logs for the policy mixed_float16. A log is shown only the
first time this function is called.
Args:
policy_name: The name of the dtype policy.
skip_local: If True, do not call list_local_devices(). This is useful since
if list_local_devices() and tf.config.set_visible_devices() are both
called, TensorFlow will crash. However, since GPU names and compute
capabilities cannot be checked without list_local_devices(), setting this
to True means the function will only warn if there are no GPUs.
"""
global _logged_compatibility_check
# In graph mode, calling list_local_devices may initialize some session state,
# so we only call it in eager mode.
if not context.executing_eagerly() or _logged_compatibility_check:
return
_logged_compatibility_check = True
device_attr_list = device_lib.list_local_devices()
if not skip_local:
_log_device_compatibility_check(policy_name, device_attr_list)
return
# TODO(b/146009447): Create an API to replace list_local_devices(), then
# remove the skip_local paramater.
gpus = config.list_physical_devices('GPU')
if not gpus and policy_name == 'mixed_float16':
tf_logging.warn(
'%s\n'
'The dtype policy mixed_float16 may run slowly because '
'this machine does not have a GPU.\n%s' %
(_COMPAT_CHECK_WARNING_PREFIX, _COMPAT_CHECK_WARNING_SUFFIX))
| [
"gardener@tensorflow.org"
] | gardener@tensorflow.org |
16e44fe8b2dcdb7050cf93f7d7f693492e0b9d39 | 075786cd6b8b5d3e943162512bbc3950532734f3 | /player/human.py | 84b9008c187e6edd22e571255debd7daa32ab40a | [] | no_license | VincentVelthuizen/Menace | 470c6744de65a2685be92ed9d450d1dfea5c0bad | 196498200cbdbfba9ccd2b1497efacf7c63b4171 | refs/heads/master | 2023-05-02T20:48:25.636309 | 2021-05-20T12:37:49 | 2021-05-20T12:37:49 | 320,262,025 | 0 | 2 | null | 2021-05-20T12:37:50 | 2020-12-10T12:13:00 | Python | UTF-8 | Python | false | false | 708 | py | import player
from board import Board, _state_set_cell
class Human(player.Player):
keys = {113: (0, 0), 119: (0, 1), 101: (0, 2),
97: (1, 0), 115: (1, 1), 100: (1, 2),
122: (2, 0), 120: (2, 1), 99: (2, 2)}
# The human player object needs to be able to talk to the computer user through a UI
def __init__(self, ui):
self.ui = ui
# Asking the human player for input means waiting until the user (finally) gives 'valid' feedback
def move(self, board):
while True:
self.ui.tick()
move = self.ui.get_move(board)
if move in self.keys:
coordinate = self.keys[move]
return coordinate
| [
"mail@vincentvelthuizen.com"
] | mail@vincentvelthuizen.com |
4d62a2fb14ec957250f83aec716fc37141077cda | 73b3ca8a063778f30fc259110519791bedd67801 | /ticketplace/settings.py | f3d224c9f1d212ba90381440d8aa2b7ea111fc85 | [] | no_license | DasomJung24/ticketplace | f57fb2368443026c185766f28e778545acb7d647 | 930e34f4e498ecf588bcb094b16ada57ac43ddf3 | refs/heads/master | 2023-01-29T04:33:34.924023 | 2020-12-13T16:59:16 | 2020-12-13T16:59:16 | 320,288,514 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,093 | py | """
Django settings for ticketplace project.
Generated by 'django-admin startproject' using Django 3.1.1.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.1/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'kyj+yb&9bzl=v=#xuwesi6e3$_hzq81yt(+bi&ffn$5$u2paf2'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'movie',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'ticketplace.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'ticketplace.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
| [
"jeongdasom6@gmail.com"
] | jeongdasom6@gmail.com |
ee479a8e15e54478b39ed5ed5115da25c05873d9 | 781e2692049e87a4256320c76e82a19be257a05d | /all_data/exercism_data/python/gigasecond/98fa6807241a4f499dd59d2ee5226c72.py | 74dc0ddc31fe3bb8dc22e62a1f1d583aa6b658bd | [] | no_license | itsolutionscorp/AutoStyle-Clustering | 54bde86fe6dbad35b568b38cfcb14c5ffaab51b0 | be0e2f635a7558f56c61bc0b36c6146b01d1e6e6 | refs/heads/master | 2020-12-11T07:27:19.291038 | 2016-03-16T03:18:00 | 2016-03-16T03:18:42 | 59,454,921 | 4 | 0 | null | 2016-05-23T05:40:56 | 2016-05-23T05:40:56 | null | UTF-8 | Python | false | false | 98 | py | import datetime
def add_gigasecond(date):
return date + datetime.timedelta(seconds=10**9)
| [
"rrc@berkeley.edu"
] | rrc@berkeley.edu |
8e9683fe6b02f5131b9205be402928e216b2f878 | adb145c78bbc9fa557abef3333c85882cb2442fe | /examples/show_channels.py | c769d6dacb1c3c3a9195c9be5af8f44c5d5965e8 | [] | no_license | mgpwanderer/pyst3 | 574b40c7edbd058048c393c923b7f581b6ef2799 | b7ef58b8dab6ceeb0c23e498d9f21e61afaa9b4c | refs/heads/master | 2021-01-18T13:08:55.205169 | 2014-11-21T09:04:31 | 2014-11-21T09:04:31 | 17,709,524 | 9 | 4 | null | 2020-05-08T15:48:08 | 2014-03-13T12:38:39 | Python | UTF-8 | Python | false | false | 915 | py | """
Example to get list of active channels
"""
import asterisk.manager
import sys
manager = asterisk.manager.Manager()
try:
# connect to the manager
try:
manager.connect('localhost')
manager.login('user', 'secret')
# get a status report
response = manager.status()
print response
response = manager.command('core show channels concise')
print response.data
manager.logoff()
except asterisk.manager.ManagerSocketException, (errno, reason):
print "Error connecting to the manager: %s" % reason
sys.exit(1)
except asterisk.manager.ManagerAuthException, reason:
print "Error logging in to the manager: %s" % reason
sys.exit(1)
except asterisk.manager.ManagerException, reason:
print "Error: %s" % reason
sys.exit(1)
finally:
# remember to clean up
manager.close()
| [
"areski@gmail.com"
] | areski@gmail.com |
769cedf41185d39d17f661de4cfba647bc7c158c | 52b5773617a1b972a905de4d692540d26ff74926 | /.history/robort_20200727104801.py | 2010285cce961f66b93daecfab4a32147f02ab60 | [] | no_license | MaryanneNjeri/pythonModules | 56f54bf098ae58ea069bf33f11ae94fa8eedcabc | f4e56b1e4dda2349267af634a46f6b9df6686020 | refs/heads/master | 2022-12-16T02:59:19.896129 | 2020-09-11T12:05:22 | 2020-09-11T12:05:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 232 | py | def uniquePaths(m,n):
# use dynamic programming and answer is at arr[m][n]
# let's create and empty grid with 0's
grid = [[0] * m] * n
# then using the top down uproach we shall prefill all the
uniquePaths(3,2) | [
"mary.jereh@gmail.com"
] | mary.jereh@gmail.com |
7b189de1b50ee4d7efb7fdb48f5d1dd07172fb5e | 055581f9d6c81eda2f73ea05b90b7a2256da1219 | /parts/zodiac/jinja2/nodes.py | e8e4074b02fbc668efd963aeab34328eaab7ecba | [] | no_license | Tosti770/zodiac | 488a91c3e872a62d09a3ebb22a951dadcbd1c2df | af0380e20eb90699a84e3b7c6cb2085a1fb81667 | refs/heads/master | 2020-04-13T06:54:26.333228 | 2014-03-03T20:10:11 | 2014-03-03T20:10:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 62 | py | /home/ruben/zodiac/eggs/Jinja2-2.7.1-py2.7.egg/jinja2/nodes.py | [
"ruben_tc@hotmail.es"
] | ruben_tc@hotmail.es |
12826fe19b895ac45cd98838364927f7bb08dd9c | ed962cd83f09d9f14f4528c3b2e6ae55d48de5b3 | /wagtail-repository/wagtail/core/apps.py | dd6ee694395a3e97cc26a142e7ac0ec899d0e0ff | [
"LicenseRef-scancode-unknown-license-reference",
"BSD-3-Clause"
] | permissive | TobiasSkovgaardJepsen/wagtail-on-heroku | 9eefc4346a88191b8a2f5c902db4b2645fdbad67 | 17e4720f86023225e0704890688998a80bb87a17 | refs/heads/master | 2022-12-19T03:54:51.766911 | 2018-01-20T14:41:33 | 2018-01-20T14:41:33 | 117,421,808 | 0 | 1 | BSD-3-Clause | 2022-12-07T23:51:05 | 2018-01-14T10:46:23 | Python | UTF-8 | Python | false | false | 292 | py | from django.apps import AppConfig
class WagtailCoreAppConfig(AppConfig):
name = 'wagtail.core'
label = 'wagtailcore'
verbose_name = "Wagtail core"
def ready(self):
from wagtail.core.signal_handlers import register_signal_handlers
register_signal_handlers()
| [
"tsj@aau114974.mynet"
] | tsj@aau114974.mynet |
6ff1d93caaa9b376c31d963dc66cd9a3cb8fc42b | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/otherforms/_weighted.py | e340dc692dc99f8c2fded0864e71b23542626ba9 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 224 | py |
#calss header
class _WEIGHTED():
def __init__(self,):
self.name = "WEIGHTED"
	self.definitions = 'weight'
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.basic = ['weight']
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
e16d6b0c732d957e8b90b55aaf84a77a990b923a | 36216b52f6d3c9b5d16e9d93c56540fe07bc5f5a | /backstage/server/forms.py | 71e701d5357e296b3d5d578887a50bac8171c822 | [
"MIT"
] | permissive | zerolfx/eoj3 | 3984676d1e29ad5d04f06a41836ece3f1a452054 | 156060399d1c3e5f7bcdbf34eaffbe2be66e1b20 | refs/heads/master | 2020-08-10T19:55:36.278006 | 2019-10-11T10:26:20 | 2019-10-11T10:27:19 | 214,410,171 | 1 | 0 | MIT | 2019-10-11T10:39:12 | 2019-10-11T10:39:11 | null | UTF-8 | Python | false | false | 367 | py | from django import forms
from dispatcher.models import Server
class ServerEditForm(forms.ModelForm):
class Meta:
model = Server
fields = ['name', 'ip', 'port', 'token', 'concurrency', 'runtime_multiplier', 'version', 'master']
class ServerUpdateTokenForm(forms.Form):
new_password = forms.CharField(min_length=4, max_length=128, label='New Password')
| [
"scottyugochang@hotmail.com"
] | scottyugochang@hotmail.com |
c63ce4b9e4bf4d5e9cee09e0aea032917a339c41 | 838f063e516b979364bdddb7a8604f9c3ff405d8 | /tests/gcloud/database_ddl_test.py | 1d0ce1560405f41a462e23ba85afa985200059d2 | [
"Apache-2.0"
] | permissive | GoogleCloudPlatform/cloud-spanner-emulator | d205193c7c3c265a47a822e1df574271c8522759 | 53eaa404d303fb2dc03f3b444553aa9bb24c3786 | refs/heads/master | 2023-08-29T12:33:41.780107 | 2023-08-11T08:15:10 | 2023-08-11T08:15:10 | 251,420,886 | 236 | 38 | Apache-2.0 | 2023-09-07T12:35:45 | 2020-03-30T20:28:25 | C++ | UTF-8 | Python | false | false | 3,012 | py | #
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Tests for Cloud Spanner gcloud command for ddl statements to CREATE/ALTER/DROP CHANGE STREAM."""
from tests.gcloud import emulator
class GCloudDatabaseDdlTest(emulator.TestCase):
# TODO: Test returned strings from ddl.
def testUpdateDDLChangeStream(self):
# Create an instance.
self.RunGCloud(
'spanner',
'instances',
'create',
'test-instance',
'--config=emulator-config',
'--description=Test Instance',
'--nodes',
'3',
)
# Create the database.
self.assertEqual(
self.RunGCloud(
'spanner',
'databases',
'create',
'test-database',
'--instance=test-instance',
'--ddl=CREATE TABLE mytable (a INT64, b INT64) PRIMARY KEY(a)',
),
self.JoinLines(''),
)
# Perform an update to create a change stream.
self.RunGCloud(
'spanner',
'databases',
'ddl',
'update',
'test-database',
'--instance=test-instance',
'--ddl=CREATE CHANGE STREAM myChangeStream FOR ALL',
)
# Perform an update to alter a change stream's value capture type.
self.RunGCloud(
'spanner',
'databases',
'ddl',
'update',
'test-database',
'--instance=test-instance',
(
'--ddl=ALTER CHANGE STREAM myChangeStream SET OPTIONS ('
" value_capture_type = 'NEW_VALUES' )"
),
)
# Perform an update to alter a change stream's retention period.
self.RunGCloud(
'spanner',
'databases',
'ddl',
'update',
'test-database',
'--instance=test-instance',
(
'--ddl=ALTER CHANGE STREAM myChangeStream SET OPTIONS ('
" retention_period = '3d' )"
),
)
# Perform an update to suspend a change stream.
self.RunGCloud(
'spanner',
'databases',
'ddl',
'update',
'test-database',
'--instance=test-instance',
'--ddl=ALTER CHANGE STREAM myChangeStream DROP FOR ALL',
)
# Perform an update to drop a change stream.
self.RunGCloud(
'spanner',
'databases',
'ddl',
'update',
'test-database',
'--instance=test-instance',
'--ddl=DROP CHANGE STREAM myChangeStream',
)
if __name__ == '__main__':
emulator.RunTests()
| [
"noreply@github.com"
] | GoogleCloudPlatform.noreply@github.com |
97b8ec6308ec18fdc55cc64668317b9d601f77e6 | 2d20823359e012c3d5942ec72b2442e2d5e3f2d7 | /demo/World population.spx.py | 777a6503622bacd601997903f984450c5894e330 | [
"MIT"
] | permissive | urbach/jupytext | 8fa20d6f83abb6c09ad4cd952c6e8748e3183643 | 6d3a38505ae539975085f9d5b4e457c9566a7977 | refs/heads/master | 2020-04-24T14:55:49.401909 | 2019-02-22T08:43:37 | 2019-02-22T08:43:37 | 172,044,219 | 1 | 0 | MIT | 2019-02-22T10:14:39 | 2019-02-22T10:14:39 | null | UTF-8 | Python | false | false | 3,282 | py | # ---
# jupyter:
# jupytext:
# formats: ipynb,.pct.py:percent,.lgt.py:light,.spx.py:sphinx,md,Rmd
# text_representation:
# extension: .py
# format_name: sphinx
# format_version: '1.1'
# jupytext_version: 1.0.0-dev
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
"""
# A quick insight at world population
## Collecting population data
In the below we retrieve population data from the
[World Bank](http://www.worldbank.org/)
using the [wbdata](https://github.com/OliverSherouse/wbdata) python package
"""
import pandas as pd
import wbdata as wb
pd.options.display.max_rows = 6
pd.options.display.max_columns = 20
###############################################################################
# Corresponding indicator is found using search method - or, directly,
# the World Bank site.
wb.search_indicators('Population, total') # SP.POP.TOTL
# wb.search_indicators('area')
# => https://data.worldbank.org/indicator is easier to use
###############################################################################
# Now we download the population data
indicators = {'SP.POP.TOTL': 'Population, total',
'AG.SRF.TOTL.K2': 'Surface area (sq. km)',
'AG.LND.TOTL.K2': 'Land area (sq. km)',
'AG.LND.ARBL.ZS': 'Arable land (% of land area)'}
data = wb.get_dataframe(indicators, convert_date=True).sort_index()
data
###############################################################################
# World is one of the countries
data.loc['World']
###############################################################################
# Can we classify over continents?
data.loc[(slice(None), '2017-01-01'), :]['Population, total'].dropna(
).sort_values().tail(60).index.get_level_values('country')
###############################################################################
# Extract zones manually (in order of increasing population)
zones = ['North America', 'Middle East & North Africa',
'Latin America & Caribbean', 'Europe & Central Asia',
'Sub-Saharan Africa', 'South Asia',
'East Asia & Pacific'][::-1]
###############################################################################
# And extract population information (and check total is right)
population = data.loc[zones]['Population, total'].swaplevel().unstack()
population = population[zones]
assert all(data.loc['World']['Population, total'] == population.sum(axis=1))
###############################################################################
# ## Stacked area plot with matplotlib
import matplotlib.pyplot as plt
""
plt.clf()
plt.figure(figsize=(10, 5), dpi=100)
plt.stackplot(population.index, population.values.T / 1e9)
plt.legend(population.columns, loc='upper left')
plt.ylabel('Population count (B)')
plt.show()
###############################################################################
# ## Stacked bar plot with plotly
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
""
data = [go.Scatter(x=population.index, y=population[zone], name=zone, stackgroup='World')
for zone in zones]
fig = go.Figure(data=data,
layout=go.Layout(title='World population'))
offline.iplot(fig)
| [
"marc.wouts@gmail.com"
] | marc.wouts@gmail.com |
f501394f023f2bbc4fd47d7e5d9395da864c4d02 | edcd74f8f65119bdbe737360c2ca33b4a6da160a | /python/problem-math/number_of_boomerangs.py | c34438c428b9e6c199038da0702b5990b8fb8aaa | [] | no_license | hyunjun/practice | 72e83de6a1d5e04ddcd16526f16110ea2dd00373 | 5376dd48b1cefb4faba9d2ef6a8a497b6b1d6c67 | refs/heads/master | 2023-08-31T07:00:37.320351 | 2023-08-17T07:29:24 | 2023-08-17T07:29:24 | 2,704,126 | 3 | 2 | null | 2022-12-14T20:25:07 | 2011-11-03T18:28:44 | Python | UTF-8 | Python | false | false | 10,228 | py | # https://leetcode.com/problems/number-of-boomerangs
from collections import Counter
class Solution:
def distance(self, p, q):
return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
# Time Limit Exceeded
def numberOfBoomerangs(self, points):
if points is None or 0 == len(points):
return 0
resultSet, lenPoints = set(), len(points)
for i in range(lenPoints):
for j in range(lenPoints):
if j == i:
continue
for k in range(lenPoints):
if k == i or k == j:
continue
if (i, j, k) in resultSet or (i, k, j) in resultSet:
continue
if self.distance(points[i], points[j]) == self.distance(points[i], points[k]):
#if points[i][0] == (points[j][0] + points[k][0]) // 2 and points[i][1] == (points[j][1] + points[k][1]) // 2:
resultSet.add((i, j, k))
resultSet.add((i, k, j))
return len(resultSet)
def numberOfBoomerangsRecur(self, resultSet, points, used, left):
if 3 == len(used):
i, j, k = used
if (i, j, k) not in resultSet or (i, k, j) not in resultSet:
if self.distance(points[i], points[j]) == self.distance(points[i], points[k]):
#if points[i][0] == (points[j][0] + points[k][0]) // 2 and points[i][1] == (points[j][1] + points[k][1]) // 2:
resultSet.add((i, j, k))
resultSet.add((i, k, j))
else:
for i, l in enumerate(left):
used.append(l)
self.numberOfBoomerangsRecur(resultSet, points, used, left[:i] + left[i + 1:])
used.pop()
# Time Limit Exceeded
def numberOfBoomerangs1(self, points):
if points is None or 0 == len(points):
return 0
resultSet, lenPoints = set(), len(points)
self.numberOfBoomerangsRecur(resultSet, points, [], list(range(lenPoints)))
return len(resultSet)
# Time Limit Exceeded
def numberOfBoomerangs2(self, points):
if points is None or 0 == len(points):
return 0
d, lenPoints = {}, len(points)
for i in range(lenPoints - 1):
for j in range(i + 1, lenPoints):
ijDistance = self.distance(points[i], points[j])
if ijDistance in d:
dd = d[ijDistance]
if i in dd:
dd[i].append(j)
else:
dd[i] = [j]
if j in dd:
dd[j].append(i)
else:
dd[j] = [i]
else:
d[ijDistance] = {i: [j], j: [i]}
res = 0
for dist, distDict in d.items():
for i, jList in distDict.items():
n = len(jList)
res += n * (n - 1)
return res
# 26.77%
# https://leetcode.com/problems/number-of-boomerangs/discuss/129623/python-solution
def numberOfBoomerangs(self, points):
if points is None or 0 == len(points):
return 0
_sum, lenPoints = 0, len(points)
for i in range(lenPoints):
distList = [self.distance(points[i], points[j]) for j in range(lenPoints) if j != i]
counter = Counter(distList)
            cList = [c * (c - 1) for c in counter.values()]
_sum += sum(cList)
return _sum
'''
1 = [(0, 1), (1, 2)]
2 = [(0, 2)]
0 1 0 2
1 0 1 2
2 0 2 1
1 = [(0, 1), (0, 2), (0, 3), (0, 4)] 4C2 = 4*3 / 2 = 6
root2 = [(1, 3), (1, 4), (2, 3), (2, 4)] 4
2 = [(1, 2), (3, 4)]
'''
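The counting identity the final solution relies on (k points at the same distance from an anchor yield k*(k-1) ordered boomerang pairs) can be sanity-checked in isolation; the distances below are made-up:

```python
from collections import Counter

# squared distances from one anchor point to every other point
dists = [1, 1, 1, 2]        # three points at squared distance 1, one at 2
counts = Counter(dists)     # -> {1: 3, 2: 1}

# k points at the same distance give k*(k-1) ordered (j, k) pairs
boomerangs = sum(k * (k - 1) for k in counts.values())
assert boomerangs == 6      # 3*2 from distance 1, 1*0 from distance 2
```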
s = Solution()
data = [([[0, 0], [1, 0], [2, 0]], 2),
([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], 20),
([[3327,-549],[9196,-8118],[7610,-9506],[5098,8392],[8582,7953],[1053,5802],[3847,2652],[7654,8355],[1614,-9409],[9986,5538],[4660,2944],[4528,-9512],[7483,-1455],[3422,-3966],[2037,-4456],[5107,-4635],[4996,655],[7247,2606],[1149,8697],[7350,6083],[3002,8403],[8238,6850],[1055,5892],[5205,9021],[2835,5191],[911,-2505],[4488,-4561],[7983,-1677],[336,-2243],[4358,-1274],[3302,9465],[4091,-5350],[120,7690],[3608,7622],[6388,-9042],[57,-610],[9361,8295],[6240,-3232],[540,7797],[2141,-6625],[9341,3053],[7223,3829],[4844,1558],[2152,-8467],[9316,6510],[259,-1030],[2327,-5650],[9972,8800],[2040,-6420],[2774,4780],[4538,-7169],[4171,-6101],[7479,-3237],[7019,-1981],[4561,-4488],[7746,254],[4917,4969],[4083,-238],[6528,-7413],[1295,-7804],[5450,-8446],[1166,-5871],[2256,-8862],[2929,-5704],[4718,2055],[5429,-4392],[4887,9600],[9507,-1282],[2715,2878],[6737,-6372],[8390,-9165],[3882,3308],[5805,4317],[9422,8685],[3257,-2931],[881,-1293],[8623,-1601],[2836,879],[5889,2118],[1527,607],[4173,-3044],[6215,5412],[2908,-7926],[4130,-8024],[1304,7219],[1956,-3954],[8055,5839],[5706,212],[6508,5128],[8897,9765],[2197,-3870],[8472,-2828],[4529,7661],[4403,-9582],[6131,-7717],[7377,-3344],[5591,9944],[2069,-5148],[8370,-7449],[6828,-3974],[6123,-1216],[2072,530],[975,-2221],[7094,-2516],[9259,-4009],[7249,7809],[8473,2074],[4981,-6998],[9735,5737],[9772,5866],[8020,-6499],[8874,-6389],[3445,-9057],[4815,8167],[9847,1643],[4193,2322],[6780,2617],[9204,4107],[396,6298],[1591,6008],[2289,-4807],[3817,762],[7267,5150],[116,-6646],[887,-3760],[5572,-4741],[9741,4446],[5223,-462],[1742,38],[7705,1589],[1682,-1750],[263,4814],[867,9467],[8921,7616],[5765,-3135],[3624,4406],[2058,-2559],[1520,-675],[2591,-2012],[2679,-169],[4228,-1749],[5090,-6031],[2697,-9687],[9859,791],[352,3916],[8732,-1614],[2166,8995],[3200,9385],[4814,-1527],[7001,579],[5338,-3023],[1337,-2604],[4418,-7143],[3073,3362],[845,-7896],[3193,-8575],[6707,4635],[1746,-595],[4949,1605],[6548,-8347],[1873,5281],[39,-5961],[427
6,-409],[9777,-909],[8064,3130],[6022,-245],[108,7360],[7151,4526],[6569,-3423],[4240,-2585],[8681,-2567],[5192,5389],[2069,-3061],[1146,3370],[4896,7694],[5023,6770],[2975,-8586],[7161,-6396],[1005,6938],[2695,-4579],[69,-4931],[5176,177],[2429,-1320],[1055,8999],[5257,-4704],[2766,-6062],[9081,-2042],[5679,-2498],[1249,6825],[7224,-3854],[872,2247],[2916,-6153],[3661,-9923],[7451,-8982],[7016,6498],[6440,-6563],[1568,-8384],[9966,-9651],[296,1021],[9348,-8095],[2669,8466],[2196,-8249],[2777,7875],[5605,4026],[1053,-7170],[172,-8075],[1429,-6912],[5772,-8557],[9518,-424],[2461,2886],[2426,-1099],[6323,-6006],[6870,-3711],[696,3518],[3662,6396],[5424,-3668],[4863,7620],[4435,7640],[1847,-3608],[8018,-7100],[9222,-5457],[4825,7004],[3983,-3050],[8447,-6499],[2878,-9092],[6387,5304],[6162,-938],[5651,3032],[5351,6347],[2902,-4634],[2743,8326],[8050,-6042],[2298,-1163],[7950,-9502],[5229,-4031],[3398,-9196],[512,-5424],[7808,847],[7878,6255],[4349,7108],[7163,736],[8764,9677],[6151,-5585],[2709,-2146],[7114,5612],[3220,-3790],[290,-8730],[168,8941],[107,-5529],[9439,-8311],[440,9189],[2493,7304],[117,6653],[8151,-5653],[2908,8852],[1455,-3577],[5941,-3428],[6101,-7908],[7339,5162],[9946,-5546],[7126,9519],[7016,3769],[789,7184],[2710,-2751],[1655,-1499],[5290,-1553],[4042,-2217],[2103,-9488],[788,-3393],[1211,3696],[1811,9019],[6471,-2248],[5591,8924],[6196,2930],[4087,6143],[3736,7565],[5662,-9248],[1334,2803],[4289,-9604],[6404,2296],[8897,-8306],[7096,-708],[5829,9199],[6156,-3383],[2158,-2633],[6665,-9678],[6386,3137],[8074,1977],[2061,4271],[4908,-7500],[6766,4996],[66,8780],[5749,1400],[7935,38],[1797,-5660],[2334,7046],[2386,9430],[2690,-1784],[4982,-1154],[1185,3492],[6214,-2149],[3814,8952],[7340,8241],[930,-4247],[8864,2190],[8254,5630],[7186,-5328],[762,9287],[6072,8697],[9325,-5779],[9389,1660],[7620,-8224],[7442,-9690],[9992,-7576],[5509,7529],[2269,8075],[5380,-3917],[7027,-7280],[4324,-5691],[8474,3188],[6499,3080],[5170,-9962],[7752,5932],[9325,176],[98
2,-1349],[4398,371],[6663,-1630],[2147,-9543],[5032,8491],[9234,541],[6021,1503],[8616,7753],[3938,-8004],[6826,8263],[6305,-8348],[7803,9157],[4732,-674],[9195,-1164],[5258,8520],[9012,2592],[3523,-238],[2964,6538],[8132,1463],[3348,-6835],[6307,2582],[58,-7672],[437,5027],[6433,4375],[7023,3259],[8990,-6672],[4911,3146],[2485,-4005],[2472,8032],[4831,-5918],[2905,196],[6675,6428],[9958,9639],[9319,4443],[7454,-7333],[3960,3761],[1601,-9630],[2441,2038],[5397,-1125],[6413,2420],[8486,1756],[2101,3398],[4902,938],[5745,-2626],[5323,-3071],[1456,8228],[7125,-1869],[1008,3435],[4122,6679],[4230,1577],[9346,8190],[1690,947],[4913,4132],[9337,310],[3007,-4249],[9083,-8507],[7507,-2464],[1243,-7591],[4826,-3011],[6135,-9851],[3918,7591],[8377,-2605],[5723,-4262],[830,-3803],[2417,-8587],[7774,8116],[5955,9465],[5415,868],[9949,-5247],[1179,2956],[6856,6614],[801,-9285],[4150,8397],[9476,8976],[1738,-4389],[9126,2008],[3202,3855],[9403,-4723],[9593,6585],[1475,-7989],[7998,-4399],[127,306],[1418,-4458],[1174,1367],[6647,-7647],[4323,3503],[8967,1477],[4218,9469],[6226,3694],[8446,-2036],[9305,3924],[9972,8860],[7779,5727],[4137,-6275],[8664,1964],[5736,-6985],[7566,-7785],[3321,8984],[4109,4495],[352,757],[3201,1027],[4260,-1480],[8856,4831],[7990,-4918],[8525,-7212],[3046,-5817],[6712,-630],[3043,-5509],[1449,-6468],[8216,-3534],[5497,304],[9481,3063],[8871,9154],[8399,2981],[1,8751],[90,-6798],[6131,-9298],[8075,-5013],[5533,6065],[70,-9589],[5205,9468],[946,1917],[5191,-6011],[2760,-7008],[3873,7329],[9458,9370],[7633,5291],[8785,2857],[797,3537],[2190,-9201],[2288,-7720],[353,4771],[9334,-1572],[9759,1220],[845,-3819],[7983,6050],[2001,-1071],[4319,-2808],[9270,7080],[6537,3143],[4409,2347],[8866,8394],[7639,4003],[7603,4788],[7540,-207],[5587,6181],[8425,5941],[952,-5888],[721,-2937],[5332,-8433],[3244,-6685],[3969,5246],[2244,8289],[8790,-8486],[1721,-4673],[1009,-3870],[7675,9875],[876,-8334],[231,-1520],[6454,7771],[4625,2042],[304,9403],[4335,-8743],[3515,-4944],
[4672,8847],[2975,7917],[8514,6945],[3163,758],[1586,1953],[8624,-6693],[7281,9633],[5789,1308],[5861,-6983],[2974,-3908],[7849,-572],[215,-7525]], 6),
]
for points, expected in data:
real = s.numberOfBoomerangs(points)
print('{}, expected {}, real {}, result {}'.format(points, expected, real, expected == real))
| [
"agapelover4u@yahoo.co.kr"
] | agapelover4u@yahoo.co.kr |
0c7496f34ee608feab34d8444ee4d5c33dc88ec5 | d94b6845aeeb412aac6850b70e22628bc84d1d6d | /factors_of_influence/fids/sunrgbd.py | d7687853f656cc3995e06bd4997ecd3bd6b68748 | [
"CC-BY-4.0",
"Apache-2.0"
] | permissive | ishine/google-research | 541aea114a68ced68736340e037fc0f8257d1ea2 | c1ae273841592fce4c993bf35cdd0a6424e73da4 | refs/heads/master | 2023-06-08T23:02:25.502203 | 2023-05-31T01:00:56 | 2023-05-31T01:06:45 | 242,478,569 | 0 | 0 | Apache-2.0 | 2020-06-23T01:55:11 | 2020-02-23T07:59:42 | Jupyter Notebook | UTF-8 | Python | false | false | 3,865 | py | # coding=utf-8
# Copyright 2023 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Defines SUNRGBD, segmentation (including Mseg) and depth.
SUN RGB-D Dataset: a Scene Understanding Benchmark
Website: https://rgbd.cs.princeton.edu/
Paper:
SUN RGB-D: A RGB-D scene understanding benchmark suite.
S. Song, S. Lichtenberg, and J. Xiao. In CVPR, 2015.
Features/Modalities:
1. RGB image
2. Semantic segmentation
3. Depth image
4. Object detection (2D & 3D)
5. Room layout
Currently only image, semantic segmentation and depth are used.
"""
from typing import Text
import numpy as np
from factors_of_influence import dataset_dirs
from factors_of_influence.fids import mseg_base
from factors_of_influence.fids import utils
DEPTH = 'depth'
MSEG = 'mseg'
ALL = 'all'
DEPTH_FILE_PATTERN = dataset_dirs.SUNRGBD_DEPTH_DIR + '/{}/{:08d}.png'
class SUNRGBD(mseg_base.MSegBase):
"""Import SUNRGBD."""
  def __init__(self, sunrgb_config: Text = MSEG):
super().__init__(mseg_name='SUNRGB-D',
mseg_original_name='sunrgbd-38',
mseg_base_name='sunrgbd-37',
mseg_dirname='SUNRGBD',
mseg_train_dataset=True,
mseg_config=sunrgb_config)
self.feature_names = self.get_features_from_config(sunrgb_config)
  def get_features_from_config(self, sunrgb_config: Text):
"""Return features based on SUNRGBD config."""
if sunrgb_config == DEPTH:
return ['image', 'depth']
elif sunrgb_config == MSEG:
return self.MSEG_FEATURE_NAMES
elif sunrgb_config == ALL:
return self.MSEG_FEATURE_NAMES + ['depth']
else:
raise ValueError(f'SUNRGBD config {sunrgb_config} not valid!')
def _info_features(self):
info_features = super()._info_features()
if 'depth' in self.feature_names:
info_features['depth'] = dict(
default_clip_min=0.369, default_clip_max=8.0)
return info_features
@staticmethod
def _convert_depth_to_m(depth_raw):
"""Converts depth (uint16) to cm (float)."""
# Follows the SUNRGBD Matlab Toolbox [SMT]:
# https://rgbd.cs.princeton.edu/data/SUNRGBDtoolbox.zip
# [SMT]: depth = bitor(bitshift(depth,-3), bitshift(depth,16-3));
# matlab's bitshift(..., -3) is a right shift (of 3); and
# matlab's bitshift(..., 13) is a left shift:
depth_raw = np.bitwise_or(np.right_shift(depth_raw, np.uint16(3)),
np.left_shift(depth_raw, np.uint16(13)))
# [SMT]: depth = single(depthInpaint)/1000;
depth_in_meter = depth_raw.astype(np.float32)/1000.0
# [SMT]: depth(depth >8)=8;
# Note practical max is around 5m (given sensors and indoor environments).
depth_in_meter = np.minimum(depth_in_meter, 8)
return depth_in_meter
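The Matlab `bitshift` pair mirrored above is a 3-bit right rotation within a 16-bit word; a pure-Python sketch of the same trick (independent of numpy, values chosen for illustration):

```python
def rotate_right_3bits(v):
    # 3-bit right rotation within 16 bits, mirroring
    # bitor(bitshift(depth,-3), bitshift(depth,16-3)) from the SUNRGBD toolbox
    return ((v >> 3) | (v << 13)) & 0xFFFF

# 0b1000 rotated right by 3 becomes 0b0001
assert rotate_right_3bits(8) == 1
# the low 3 bits wrap around to the top of the 16-bit word
assert rotate_right_3bits(0b101) == 0b1010000000000000
```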
def get_feature(self, split, curr_id, feature_name):
"""Returns a feature. Can be a numpy array or path to an image."""
if feature_name in self.MSEG_FEATURE_NAMES:
return super().get_feature(split, curr_id, feature_name)
if feature_name in ['depth']:
depth_id = int(curr_id.split('-')[1])
depth_split = 'train' if split == 'train' else 'test'
depth_file_name = DEPTH_FILE_PATTERN.format(depth_split, depth_id)
depth_raw = utils.load_image_cv2_any_color_any_depth(depth_file_name)
return self._convert_depth_to_m(depth_raw), True
| [
"copybara-worker@google.com"
] | copybara-worker@google.com |
26c116499964bec993a0ecb35274db83c88896aa | 0119f92755f9a1b2b959891457207063f8f200ae | /py2/h2o_import.py | 928e29fd0c9f8ea4dd80cbab28b36b94b26250b8 | [
"Apache-2.0"
] | permissive | chagge/h2o-dev | 4a505acfe840fde821536757fdc497917ad0e06c | 9c444efa618077962728f7c19f7eecb43a4e9849 | refs/heads/master | 2021-01-14T19:43:00.331580 | 2014-11-17T02:14:22 | 2014-11-17T02:14:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 28,261 | py | import h2o, h2o_cmd, h2o_jobs, h2o_print as h2p
import getpass, time, re, os, fnmatch
import h2o_args, h2o_util, h2o_nodes
from h2o_test import verboseprint, dump_json, check_sandbox_for_errors
#****************************************************************************************
# hdfs/maprfs/s3/s3n paths should be absolute from the bucket (top level)
# so only walk around for local
# using this standalone, we probably want 'put' decision making by default (can always pass schema='local')
def find_folder_and_filename(bucket, pathWithRegex, schema='put', returnFullPath=False):
checkPath = True
# strip the common mistake of leading "/" in path, if bucket is specified too
giveUpAndSearchLocally = False
if bucket is not None and re.match("/", pathWithRegex):
verboseprint("You said bucket:", bucket, "so stripping incorrect leading '/' from", pathWithRegex)
pathWithRegex = pathWithRegex.lstrip('/')
if bucket is None: # good for absolute path name
bucketPath = ""
elif bucket == ".":
bucketPath = os.getcwd()
# only use if the build_cloud was for remote H2O
# Never use the var for remote, if you're doing a put! (which always sources local)
elif h2o_nodes.nodes[0].remoteH2O and schema!='put' and \
(os.environ.get('H2O_REMOTE_BUCKETS_ROOT') or h2o_nodes.nodes[0].h2o_remote_buckets_root):
if (bucket=='smalldata' or bucket=='datasets') and schema=='local':
msg1 = "\nWARNING: you're using remote nodes, and 'smalldata' or 'datasets' git buckets, with schema!=put"
msg2 = "\nThose aren't git pull'ed by the test. Since they are user-maintained, not globally-maintained-by-0xdata,"
msg3 = "\nthey may be out of date at those remote nodes?"
msg4 = "\nGoing to assume we find a path to them locally, and remote path will be the same"
h2p.red_print(msg1, msg2, msg3, msg4)
giveUpAndSearchLocally = True
else:
if os.environ.get('H2O_REMOTE_BUCKETS_ROOT'):
rootPath = os.environ.get('H2O_REMOTE_BUCKETS_ROOT')
print "Found H2O_REMOTE_BUCKETS_ROOT:", rootPath
else:
rootPath = h2o_nodes.nodes[0].h2o_remote_buckets_root
print "Found h2o_nodes[0].h2o_remote_buckets_root:", rootPath
bucketPath = os.path.join(rootPath, bucket)
checkPath = False
    # does it work to use bucket "." to get current directory?
    # this covers remote with put too
elif os.environ.get('H2O_BUCKETS_ROOT'):
rootPath = os.environ.get('H2O_BUCKETS_ROOT')
print "Using H2O_BUCKETS_ROOT environment variable:", rootPath
if not (os.path.exists(rootPath)):
raise Exception("H2O_BUCKETS_ROOT in env but %s doesn't exist." % rootPath)
bucketPath = os.path.join(rootPath, bucket)
if not (os.path.exists(bucketPath)):
raise Exception("H2O_BUCKETS_ROOT and path used to form %s which doesn't exist." % bucketPath)
else:
giveUpAndSearchLocally = True
#******************************************************************************************
if giveUpAndSearchLocally:
# if we run remotely, we're assuming the import folder path on the remote machine
# matches what we find on our local machine. But maybe the local user doesn't exist remotely
# so using his path won't work.
# Resolve by looking for special state in the config. If user = 0xdiag, just force the bucket location
# This is a lot like knowing about fixed paths with s3 and hdfs
# Otherwise the remote path needs to match the local discovered path.
        # We want to check the username being used remotely first; it should exist locally too if we're going to use it.
username = getpass.getuser()
h2oUsername = h2o_nodes.nodes[0].username
verboseprint("username:", username, "h2oUsername:", h2oUsername)
# bucket named "datasets" is special. Don't want to find it in /home/0xdiag/datasets
# needs to be the git clone 'datasets'. Find it by walking upwards below
# disable it from this looking in home dir. Could change priority order?
# resolved in order, looking for bucket (ln -s will work) in these home dirs.
if bucket=='datasets': # special case
possibleUsers = []
elif h2oUsername != username:
possibleUsers = [username, h2oUsername, "0xdiag"]
else:
possibleUsers = [username, "0xdiag"]
for u in possibleUsers:
rootPath = os.path.expanduser("~" + u)
bucketPath = os.path.join(rootPath, bucket)
verboseprint("Checking bucketPath:", bucketPath, 'assuming home is', rootPath)
if os.path.exists(bucketPath):
verboseprint("search A did find", bucket, "at", rootPath)
break
else:
# last chance to find it by snooping around
rootPath = os.getcwd()
verboseprint("find_bucket looking upwards from", rootPath, "for", bucket)
# don't spin forever
levels = 0
while not (os.path.exists(os.path.join(rootPath, bucket))):
verboseprint("Didn't find", bucket, "at", rootPath)
rootPath = os.path.split(rootPath)[0]
levels += 1
if (levels==6):
raise Exception("unable to find bucket: %s. Maybe missing link in /home/0xdiag or /home/0xcustomer or jenkins ~? or whatever user is running the python or the h2o?" % bucket)
verboseprint("search B did find", bucket, "at", rootPath)
bucketPath = os.path.join(rootPath, bucket)
#******************************************************************************************
# if there's no path, just return the bucketPath
# but what about cases with a header in the folder too? (not putfile)
if pathWithRegex is None:
if returnFullPath:
return bucketPath
else:
return (bucketPath, None)
# if there is a "/" in the path, that means it's not just a pattern
# split it
# otherwise it is a pattern. use it to search for files in python first?
# FIX! do that later
elif "/" in pathWithRegex:
(head, tail) = os.path.split(pathWithRegex)
folderPath = os.path.abspath(os.path.join(bucketPath, head))
# accept all 0xcustomer-datasets without checking..since the current python user
# may not have permission, but h2o will
# try a couple times with os.stat in between, in case it's not automounting
if '/mnt/0xcustomer-datasets' in folderPath:
pass
else:
retry = 0
while checkPath and (not os.path.exists(folderPath)) and retry<5:
# we can't stat an actual file, because we could have a regex at the end of the pathname
print "Retrying", folderPath, "in case there's a autofs mount problem"
os.stat(folderPath)
retry += 1
time.sleep(1)
if checkPath and not os.path.exists(folderPath):
raise Exception("%s doesn't exist. %s under %s may be wrong?" % (folderPath, head, bucketPath))
else:
folderPath = bucketPath
tail = pathWithRegex
verboseprint("folderPath:", folderPath, "tail:", tail)
if returnFullPath:
return os.path.join(folderPath, tail)
else:
return (folderPath, tail)
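The (head, tail) decomposition used throughout this helper comes from os.path.split; a quick illustration with a made-up bucket-relative pattern:

```python
import os

# a path with a glob/regex basename splits cleanly into folder and pattern
head, tail = os.path.split("smalldata/logreg/*.csv")
assert (head, tail) == ("smalldata/logreg", "*.csv")

# no "/" at all means everything is the pattern and the folder is the bucket root
assert os.path.split("covtype.data") == ("", "covtype.data")
```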
#***************************************************************************yy
# passes additional params thru kwargs for parse
# use_header_file=
# header=
# exclude=
# src_key= only used if for put file key name (optional)
# path should point to a file or regex of files. (maybe folder works? but unnecessary
def import_only(node=None, schema='local', bucket=None, path=None,
timeoutSecs=30, retryDelaySecs=0.1, initialDelaySecs=0, pollTimeoutSecs=180, noise=None,
benchmarkLogging=None, noPoll=False, doSummary=True, src_key=None, noPrint=False,
importParentDir=True, **kwargs):
# FIX! hack all put to local, since h2o-dev doesn't have put yet?
# multi-machine put will fail as a result.
if schema=='put':
h2p.yellow_print("WARNING: hacking schema='put' to 'local'..h2o-dev doesn't support upload." +
"\nMeans multi-machine with 'put' will fail")
schema = 'local'
if src_key and schema!='put':
raise Exception("can only specify a 'src_key' param for schema='put'. You have %s %s" % (schema, src_key))
# no bucket is sometimes legal (fixed path)
if not node: node = h2o_nodes.nodes[0]
if path is None:
raise Exception("import_only: path parameter needs to be specified")
if "/" in path:
(head, pattern) = os.path.split(path)
else:
(head, pattern) = ("", path)
verboseprint("head:", head)
verboseprint("pattern:", pattern)
# to train users / okay here
# normally we import the folder above, but if we import exactly, the path can't have regex
# the folder can't have regex in any case
if importParentDir:
if re.search(r"[\*<>{}[\]~`]", head):
raise Exception("h2o folder path %s can't be regex. path= was %s" % (head, path))
else:
if re.search(r"[\*<>{}[\]~`]", path):
raise Exception("h2o path %s can't be regex. path= was %s" % (head, path))
if schema=='put':
# to train users
if re.search(r"[/\*<>{}[\]~`]", pattern):
raise Exception("h2o putfile basename %s can't be regex. path= was %s" % (pattern, path))
if not path:
raise Exception("path= didn't say what file to put")
(folderPath, filename) = find_folder_and_filename(bucket, path, schema)
filePath = os.path.join(folderPath, filename)
verboseprint("put filename:", filename, "folderPath:", folderPath, "filePath:", filePath)
if not noPrint:
h2p.green_print("\nimport_only:", h2o_args.python_test_name, "uses put:/%s" % filePath)
h2p.green_print("Local path to file that will be uploaded: %s" % filePath)
h2p.blue_print("That path resolves as:", os.path.realpath(filePath))
if h2o_args.abort_after_import:
raise Exception("Aborting due to abort_after_import (-aai) argument's effect in import_only()")
key = node.put_file(filePath, key=src_key, timeoutSecs=timeoutSecs)
# hmm.. what should importResult be in the put case
# set it to None. No import is done, and shouldn't be used if you're doing schema='put'
importResult = None
return (None, key)
if schema=='local' and not \
(node.redirect_import_folder_to_s3_path or node.redirect_import_folder_to_s3n_path):
(folderPath, pattern) = find_folder_and_filename(bucket, path, schema)
filePath = os.path.join(folderPath, pattern)
h2p.green_print("\nimport_only:", h2o_args.python_test_name, "uses local:/%s" % filePath)
h2p.green_print("Path h2o will be told to use: %s" % filePath)
h2p.blue_print("If local jvms, path resolves locally as:", os.path.realpath(filePath))
if h2o_args.abort_after_import:
raise Exception("Aborting due to abort_after_import (-aai) argument's effect in import_only()")
# FIX! why are we returning importPattern here..it's different than finalImportString if we import a folder?
# is it used for key matching by others?
# FIX! hack ..h2o-dev is creating key names with the absolute path, not the sym link path
# messes up for import folders that go thru /home/<user>/home-0xdiag-datasets
# importPattern = folderURI + "/" + pattern
# could include this on the entire importPattern if we no longer have regex basename in h2o-dev?
# folderURI = 'nfs:/' + folderPath
folderURI = 'nfs:/' + os.path.realpath(folderPath)
if importParentDir:
finalImportString = folderPath
else:
finalImportString = folderPath + "/" + pattern
importResult = node.import_files(finalImportString, timeoutSecs=timeoutSecs)
else:
if bucket is not None and re.match("/", head):
verboseprint("You said bucket:", bucket, "so stripping incorrect leading '/' from", head)
head = head.lstrip('/')
# strip leading / in head if present
if bucket and head!="":
folderOffset = bucket + "/" + head
elif bucket:
folderOffset = bucket
else:
folderOffset = head
if h2o_args.abort_after_import:
raise Exception("Aborting due to abort_after_import (-aai) argument's effect in import_only()")
n = h2o_nodes.nodes[0]
if schema=='s3' or node.redirect_import_folder_to_s3_path:
# this is just like s3n now? i.e. we can point down inside the s3 bucket like s3n?
folderOffset = re.sub("smalldata", "h2o-smalldata", folderOffset)
folderURI = "s3://" + folderOffset
if not n.aws_credentials:
print "aws_credentials: %s" % n.aws_credentials
# raise Exception("Something was missing for s3 on the java -jar cmd line when the cloud was built")
print "ERROR: Something was missing for s3 on the java -jar cmd line when the cloud was built"
if importParentDir:
finalImportString = folderURI
else:
finalImportString = folderURI + "/" + pattern
importResult = node.import_files(finalImportString, timeoutSecs=timeoutSecs)
elif schema=='s3n' or node.redirect_import_folder_to_s3n_path:
# FIX! hack for now...when we change import folder to import s3, point to unique bucket name for h2o
# should probably deal with this up in the bucket resolution
# this may change other cases, but smalldata should only exist as a "bucket" for us?
folderOffset = re.sub("smalldata", "h2o-smalldata", folderOffset)
if not (n.use_hdfs and ((n.hdfs_version and n.hdfs_name_node) or n.hdfs_config)):
print "use_hdfs: %s hdfs_version: %s hdfs_name_node: %s" % (n.use_hdfs, n.hdfs_version, n.hdfs_name_node)
if n.hdfs_config:
print "hdfs_config: %s" % n.hdfs_config
# raise Exception("Something was missing for s3n on the java -jar cmd line when the cloud was built")
print "ERROR: Something was missing for s3n on the java -jar cmd line when the cloud was built"
folderURI = "s3n://" + folderOffset
if importParentDir:
finalImportString = folderURI
else:
finalImportString = folderURI + "/" + pattern
importResult = node.import_files(finalImportString, timeoutSecs=timeoutSecs)
elif schema=='maprfs':
if not n.use_maprfs:
print "use_maprfs: %s" % n.use_maprfs
# raise Exception("Something was missing for maprfs on the java -jar cmd line when the cloud was built")
print "ERROR: Something was missing for maprfs on the java -jar cmd line when the cloud was built"
# if I use the /// and default, the key names that get created by h2o only have 1 slash
# so the parse doesn't find the key name
if n.hdfs_name_node:
folderURI = "maprfs://" + n.hdfs_name_node + "/" + folderOffset
else:
# this is different than maprfs? normally we specify the name though
# folderURI = "maprfs:///" + folderOffset
folderURI = "maprfs:/" + folderOffset
if importParentDir:
finalImportString = folderURI
else:
finalImportString = folderURI + "/" + pattern
importResult = node.import_files(finalImportString, timeoutSecs=timeoutSecs)
elif schema=='hdfs':
# check that some state from the cloud building time was right
# the requirements for this may change and require updating
if not (n.use_hdfs and ((n.hdfs_version and n.hdfs_name_node) or n.hdfs_config)):
print "use_hdfs: %s hdfs_version: %s hdfs_name_node: %s" % (n.use_hdfs, n.hdfs_version, n.hdfs_name_node)
if n.hdfs_config:
print "hdfs_config: %s" % n.hdfs_config
# raise Exception("Something was missing for hdfs on the java -jar cmd line when the cloud was built")
print "ERROR: Something was missing for hdfs on the java -jar cmd line when the cloud was built"
if n.hdfs_name_node:
folderURI = "hdfs://" + n.hdfs_name_node + "/" + folderOffset
else:
# this is different than maprfs? normally we specify the name though
folderURI = "hdfs://" + folderOffset
if importParentDir:
finalImportString = folderURI
else:
finalImportString = folderURI + "/" + pattern
importResult = node.import_files(finalImportString, timeoutSecs=timeoutSecs)
else:
raise Exception("schema not understood: %s" % schema)
print "\nimport_only:", h2o_args.python_test_name, schema, "uses", finalImportString
importPattern = folderURI + "/" + pattern
return (importResult, importPattern)
#****************************************************************************************
# can take header, header_from_file, exclude params
def parse_only(node=None, pattern=None, hex_key=None,
timeoutSecs=30, retryDelaySecs=0.1, initialDelaySecs=0, pollTimeoutSecs=180, noise=None,
benchmarkLogging=None, noPoll=False, **kwargs):
if not node: node = h2o_nodes.nodes[0]
# Get the list of all keys and use those that match the pattern
# FIX! this can be slow. Can we use h2o to filter the list for us?
framesResult = node.frames()
matchingList = []
for frame in framesResult['frames']:
# print frame
key_name = frame['key']['name']
if fnmatch.fnmatch(key_name, pattern):
matchingList.append(key_name)
parseResult = node.parse(key=matchingList, hex_key=hex_key,
timeoutSecs=timeoutSecs, retryDelaySecs=retryDelaySecs,
initialDelaySecs=initialDelaySecs, pollTimeoutSecs=pollTimeoutSecs, noise=noise,
benchmarkLogging=benchmarkLogging, noPoll=noPoll, **kwargs)
parseResult['python_source'] = pattern
return parseResult
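parse_only filters the cluster's key list client-side with fnmatch, i.e. glob semantics rather than regex; a minimal sketch with made-up key names:

```python
import fnmatch

keys = ["iris.hex", "iris_wheader.csv", "cars.csv"]

# '*' is a glob wildcard, so 'iris*' matches both iris keys
matching = [k for k in keys if fnmatch.fnmatch(k, "iris*")]
assert matching == ["iris.hex", "iris_wheader.csv"]

# '.' is literal in glob patterns, unlike in regular expressions
assert not fnmatch.fnmatch("irisXhex", "iris.hex")
```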
#****************************************************************************************
def import_parse(node=None, schema='local', bucket=None, path=None,
src_key=None, hex_key=None,
timeoutSecs=30, retryDelaySecs=0.1, initialDelaySecs=0, pollTimeoutSecs=180, noise=None,
benchmarkLogging=None, noPoll=False, doSummary=True, noPrint=True,
importParentDir=True, **kwargs):
# FIX! hack all put to local, since h2o-dev doesn't have put yet?
# multi-machine put will fail as a result.
if schema=='put':
h2p.yellow_print("WARNING: hacking schema='put' to 'local'..h2o-dev doesn't support upload." +
"\nMeans multi-machine with 'put' will fail")
schema = 'local'
if not node: node = h2o_nodes.nodes[0]
(importResult, importPattern) = import_only(node, schema, bucket, path,
timeoutSecs, retryDelaySecs, initialDelaySecs, pollTimeoutSecs, noise,
benchmarkLogging, noPoll, doSummary, src_key, noPrint, importParentDir, **kwargs)
verboseprint("importPattern:", importPattern)
verboseprint("importResult", dump_json(importResult))
parseResult = parse_only(node, importPattern, hex_key,
timeoutSecs, retryDelaySecs, initialDelaySecs, pollTimeoutSecs, noise,
benchmarkLogging, noPoll, **kwargs)
verboseprint("parseResult:", dump_json(parseResult))
# do SummaryPage here too, just to get some coverage
# only if not noPoll. otherwise parse isn't done
if doSummary and not noPoll:
# if parse blows up, we want error isolation ..i.e. find stack traces here, rather than the next guy blowing up
check_sandbox_for_errors()
print "WARNING: not doing inspect/summary for now after parse"
## inspect = node.inspect(parseResult['destination_key'], timeoutSecs=timeoutSecs)
## numRows = inspect['numRows']
## numCols = inspect['numCols']
# we pass numCols, for detecting whether the na cnt means a col is all NAs, (for ignoring min/max/mean/sigma)
## node.summary_page(parseResult['destination_key'], timeoutSecs=timeoutSecs, noPrint=noPrint, numRows=numRows, numCols=numCols)
# for now, don't worry about error isolating summary
else:
# isolate a parse from the next thing
check_sandbox_for_errors()
return parseResult
#****************************************************************************************
# returns full key name, from current store view
def find_key(pattern=None):
try:
patternObj = re.compile(pattern)
except:
        raise Exception("Need legal pattern in find_key, not %s" % pattern)
frames = h2o_nodes.nodes[0].frames()['frames']
frames_dict = h2o_util.list_to_dict(frames, 'key/name')
result = []
for key in frames_dict:
if patternObj.search(key):
result.append(key)
if len(result) == 0:
verboseprint("Warning: No match for %s" % pattern)
return None
if len(result) > 1:
verboseprint("Warning: multiple imported keys match the key pattern %s, Using: %s" % (pattern, result[0]))
return result[0]
#****************************************************************************************
# the storeViewResult for every node may or may not be the same
# supposed to be the same? In any case
# pattern can't be regex to h2o?
# None should be same as no pattern
def delete_keys(node=None, pattern=None, timeoutSecs=120):
if not node: node = h2o_nodes.nodes[0]
kwargs = {'filter': pattern}
deletedCnt = 0
triedKeys = []
while True:
        # FIX! h2o is getting a bad store_view NPE stack trace if I grab all the
        # keys at the end of a test, prior to removing. Just grab 20 at a time like h2o
        # used to do for me. Maybe the keys are changing state, and going slower will eliminate the race
        # against prior work (but note that R might see the same problem)
storeViewResult = h2o_cmd.runStoreView(node, timeoutSecs=timeoutSecs, view=20, **kwargs)
# we get 20 at a time with default storeView
keys = storeViewResult['keys']
if not keys:
break
# look for keys we already sent a remove on. Maybe those are locked.
# give up on those
deletedThisTime = 0
for k in keys:
if k in triedKeys:
print "Already tried to delete %s. Must have failed. Not trying again" % k
# don't delete the DRF __Tree__ keys. deleting the model does that. causes race conditions
elif '__Tree__' in k['key']:
print "Not deleting a tree key from DRF: %s" % k
elif 'DRF_' in k['key']:
print "Not deleting DRF key..they may be problematic in flight: %s" % k
elif '__RFModel__' in k['key']:
print "Not deleting __RFModel__ key..seeing NPE's if I try to delete them: %s" % k
else:
print "Deleting", k['key'], "at", node
node.remove_key(k['key'], timeoutSecs=timeoutSecs)
deletedCnt += 1
deletedThisTime += 1
triedKeys.append(k)
# print "Deleted", deletedCnt, "keys at %s:%s" % (node.http_addr, node.port)
if deletedThisTime==0:
break
# this is really the count that we attempted. Some could have failed.
return deletedCnt
# could detect if pattern is used, and use the h2o "delete all keys" method if not
def delete_keys_at_all_nodes(node=None, pattern=None, timeoutSecs=120):
print "Going to delete all keys one at a time (slower than 'remove all keys')"
# TEMP: change this to remove_all_keys which ignores locking and removes keys?
# getting problems when tests fail in multi-test-on-one-h2o-cluster runner*sh tests
if not node: node = h2o_nodes.nodes[0]
print "Will cancel any running jobs, because we can't unlock keys on running jobs"
# I suppose if we used a pattern, we wouldn't have to worry about running jobs..oh well.
h2o_jobs.cancelAllJobs()
print "unlock all keys first to make sure broken keys get removed"
node.unlock()
totalDeletedCnt = 0
deletedCnt = delete_keys(node, pattern=pattern, timeoutSecs=timeoutSecs)
totalDeletedCnt += deletedCnt
if pattern:
print "Total: Deleted", totalDeletedCnt, "keys with filter=", pattern, "at", len(h2o_nodes.nodes), "nodes"
else:
print "Total: Deleted", totalDeletedCnt, "keys at", len(h2o_nodes.nodes), "nodes"
# do a remove_all_keys to clean out any locked keys also (locked keys will complain above)
# doesn't work if you remove job keys first, since it looks at the job list and gets confused
### node.remove_all_keys(timeoutSecs=timeoutSecs)
return totalDeletedCnt
def count_keys(node=None, pattern=None, timeoutSecs=90):
if not node: node = h2o_nodes.nodes[0]
kwargs = {'filter': pattern}
nodeCnt = 0
offset = 0
while True:
# we get 20 at a time with default storeView
# if we get < 20, we're done
storeViewResult = h2o_cmd.runStoreView(node, timeoutSecs=timeoutSecs, offset=offset, view=20, **kwargs)
keys = storeViewResult['keys']
if not keys:
break
nodeCnt += len(storeViewResult['keys'])
if len(keys) < 20:
break
offset += 20
print nodeCnt, "keys at %s:%s" % (node.http_addr, node.port)
return nodeCnt
def count_keys_at_all_nodes(node=None, pattern=None, timeoutSecs=90):
if not node: node = h2o_nodes.nodes[0]
totalCnt = 0
# do it in reverse order, since we always talk to 0 for other stuff
# this will be interesting if the others don't have a complete set
# theoretically, the deletes should be 0 after the first node
# since the deletes should be global
for node in reversed(h2o_nodes.nodes):
nodeCnt = count_keys(node, pattern=pattern, timeoutSecs=timeoutSecs)
totalCnt += nodeCnt
if pattern:
print "Total: ", totalCnt, "keys with filter=", pattern, "at", len(h2o_nodes.nodes), "nodes"
else:
print "Total: ", totalCnt, "keys at", len(h2o_nodes.nodes), "nodes"
return totalCnt
#****************************************************************************************
# Since we can't trust a single node storeview list, this will get keys that match text
# for deleting, from a list saved from an import
def delete_keys_from_import_result(node=None, pattern=None, importResult=None, timeoutSecs=30):
if not node: node = h2o_nodes.nodes[0]
    # the list could be from hdfs/s3 or local. They have two different list structures
deletedCnt = 0
if 'succeeded' in importResult:
kDict = importResult['succeeded']
for k in kDict:
key = k['key']
            if pattern is None or pattern in key:
print "Removing", key
removeKeyResult = node.remove_key(key=key)
deletedCnt += 1
elif 'keys' in importResult:
kDict = importResult['keys']
for k in kDict:
key = k
            if pattern is None or pattern in key:
print "Removing", key
removeKeyResult = node.remove_key(key=key)
deletedCnt += 1
else:
        raise Exception("Can't find 'keys' or 'succeeded' in your import result dict. why? not from hdfs/s3 or local?")
print "Deleted", deletedCnt, "keys at", node
return deletedCnt
| [
"kevin@0xdata.com"
] | kevin@0xdata.com |
03db6f6bd47c591afc9e7b870cc8ed968dca8687 | acc37c2f7ea1500b46b9a8d3d01f70f64af4f8d9 | /orders/__init__.py | c77f0dbde7ad6ababa4e9edfb83d9c20e0b7a730 | [] | no_license | solotony/avia78 | 3e9e80548353a8a491194a1c6f9f6cf7b9c6810f | 6b2b11046c48db2b95dc4e2e09e06fc3a05b5584 | refs/heads/master | 2021-07-07T15:14:34.473472 | 2020-12-02T08:27:37 | 2020-12-02T08:27:37 | 207,470,387 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 49 | py | default_app_config = "orders.apps.OrdersConfig"
| [
"as@solotony.com"
] | as@solotony.com |
761631e008d73e304e87b40d11cc0bc377475f41 | d30233316dd25fa1fe757f46769fba2da4934876 | /GRADER/File.py | 6d9daec59ae5214854e9ed492dcfb42a1e9cb1f7 | [] | no_license | saumya-singh/CodeGrader | 7fb1ca75bc07a76c7ff0c506f22d22cfaa8661b0 | 6338da979cff00c7a12b8988d2d1886663278f14 | refs/heads/master | 2020-03-10T19:21:44.775031 | 2018-04-14T19:44:35 | 2018-04-14T19:44:35 | 129,546,341 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,261 | py | import requests
import os
class File:
base_url = "http://192.168.43.190:8080/CodeGrader2"
base_directory = "/tmp"
def __init__(self, submission_id, file_name):
self.submission_id = submission_id
self.file_name = file_name.strip()
self.filename_noext, file_ext = os.path.splitext(self.file_name)
self.file_ext = file_ext[1:]
def downloadFile(self):
r = requests.get(self.getRemoteFileUrl())
if not os.path.exists(self.getLocalDestinationDir()):
os.makedirs(self.getLocalDestinationDir())
print self.getRemoteFileUrl()
        output = open(self.getLocalFileLocation(), 'w')
output.write(r.text)
output.close()
return True
def getRemoteFileUrl(self):
return File.base_url+"/"+self.submission_id+"/"+self.file_name
def getLocalDestinationDir(self):
return File.base_directory+"/sol_"+self.submission_id
def getLocalFileLocation(self):
return self.getLocalDestinationDir()+"/" + self.file_name
def getLanguage(self):
return self.file_ext.lower()
def getClassName(self):
return self.filename_noext
def getFileName(self):
return self.file_name
if __name__ == "__main__":
    f = File("1234", "armstrong.c")  # __init__ takes (submission_id, file_name)
#print f.downloadFile()
#print f.getFileContent()
    print f.getLocalFileLocation()
| [
"saumya.singh0993@gmail.com"
] | saumya.singh0993@gmail.com |
b9ad598aef5d9c6ff63715cc8682439a6df16879 | 5d32d0e65aa3bfa677fd1b8c92569e07e9b82af1 | /Section 4 - Lists/Fruit Machine 2.py | e78f93ec3c25930ea64edd0961d61c27284f6398 | [
"CC0-1.0"
] | permissive | pdst-lccs/lccs-python | b74ef2a02ac8ad2637f713fff5559f4e56c9827d | 95cb7ece05716521e9951d7a40de8fb20a88021f | refs/heads/master | 2023-05-28T00:46:57.313972 | 2023-05-22T10:16:43 | 2023-05-22T10:16:43 | 240,501,524 | 21 | 18 | null | null | null | null | UTF-8 | Python | false | false | 1,026 | py | # Event: LCCS Python Fundamental Skills Workshop
# Date: May 2018
# Author: Joe English, PDST
# eMail: computerscience@pdst.ie
# Purpose: A program to simulate a fruit machine
# Description: To run this program the file fruits.txt must exist in the runtime folder
# This program reads the entire file in one command (read)
# The contents of the file are saved in a variable called fileContents
# The string is split into a list of tokens called fruits
# The choice command is used to select a random element from fruits
# Program to simulate a fruit machine!
import random
# Open the fruits file (already created)
fruitFile = open("fruits.txt","r")
# Read the entire file
fileContents = fruitFile.read()
# Close the file
fruitFile.close()
# Split the content into a list
fruits = fileContents.split()
# Spin! Display three fruits
print(random.choice(fruits))
print(random.choice(fruits))
print(random.choice(fruits))
# This line is just here for debugging purposes
# print(fruits)
| [
"noreply@github.com"
] | pdst-lccs.noreply@github.com |
f741b30f77ecb2541f9ec583b4922d037e629df7 | fee8b1eb5b2d3f12b6b549be0daddecaaa3d0c25 | /test/orm/dml/test_bulk_statements.py | 0cca9e6f5f9b54b67e88c6ad55b3131ce7baf623 | [
"MIT"
] | permissive | john-bodley/sqlalchemy | e26c85677a9405b4ba94adc492fcb78a2372d368 | eddf474d528f55a2ed56e3dac1b0e5decd1e0952 | refs/heads/main | 2022-10-25T11:45:27.681999 | 2022-09-27T14:23:28 | 2022-09-27T14:23:28 | 542,353,543 | 0 | 0 | MIT | 2022-09-28T01:14:37 | 2022-09-28T01:14:35 | null | UTF-8 | Python | false | false | 37,511 | py | from __future__ import annotations
from typing import Any
from typing import List
from typing import Optional
import uuid
from sqlalchemy import exc
from sqlalchemy import ForeignKey
from sqlalchemy import func
from sqlalchemy import Identity
from sqlalchemy import insert
from sqlalchemy import inspect
from sqlalchemy import literal_column
from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy import testing
from sqlalchemy import update
from sqlalchemy.orm import aliased
from sqlalchemy.orm import load_only
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.testing import config
from sqlalchemy.testing import eq_
from sqlalchemy.testing import expect_raises_message
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import mock
from sqlalchemy.testing import provision
from sqlalchemy.testing.assertsql import CompiledSQL
from sqlalchemy.testing.fixtures import fixture_session
class NoReturningTest(fixtures.TestBase):
def test_no_returning_error(self, decl_base):
class A(fixtures.ComparableEntity, decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(Identity(), primary_key=True)
data: Mapped[str]
x: Mapped[Optional[int]] = mapped_column("xcol")
decl_base.metadata.create_all(testing.db)
s = fixture_session()
if testing.requires.insert_executemany_returning.enabled:
result = s.scalars(
insert(A).returning(A),
[
{"data": "d3", "x": 5},
{"data": "d4", "x": 6},
],
)
eq_(result.all(), [A(data="d3", x=5), A(data="d4", x=6)])
else:
with expect_raises_message(
exc.InvalidRequestError,
"Can't use explicit RETURNING for bulk INSERT operation",
):
s.scalars(
insert(A).returning(A),
[
{"data": "d3", "x": 5},
{"data": "d4", "x": 6},
],
)
def test_omit_returning_ok(self, decl_base):
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(Identity(), primary_key=True)
data: Mapped[str]
x: Mapped[Optional[int]] = mapped_column("xcol")
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.execute(
insert(A),
[
{"data": "d3", "x": 5},
{"data": "d4", "x": 6},
],
)
eq_(
s.execute(select(A.data, A.x).order_by(A.id)).all(),
[("d3", 5), ("d4", 6)],
)
class BulkDMLReturningInhTest:
def test_insert_col_key_also_works_currently(self):
"""using the column key, not mapped attr key.
right now this passes through to the INSERT. when doing this with
an UPDATE, it tends to fail because the synchronize session
strategies can't match "xcol" back. however w/ INSERT we aren't
doing that, so there's no place this gets checked. UPDATE also
succeeds if synchronize_session is turned off.
"""
A, B = self.classes("A", "B")
s = fixture_session()
s.execute(insert(A).values(type="a", data="d", xcol=10))
eq_(s.scalars(select(A.x)).all(), [10])
@testing.combinations(True, False, argnames="use_returning")
def test_heterogeneous_keys(self, use_returning):
A, B = self.classes("A", "B")
values = [
{"data": "d3", "x": 5, "type": "a"},
{"data": "d4", "x": 6, "type": "a"},
{"data": "d5", "type": "a"},
{"data": "d6", "x": 8, "y": 9, "type": "a"},
{"data": "d7", "x": 12, "y": 12, "type": "a"},
{"data": "d8", "x": 7, "type": "a"},
]
s = fixture_session()
stmt = insert(A)
if use_returning:
stmt = stmt.returning(A)
with self.sql_execution_asserter() as asserter:
result = s.execute(stmt, values)
if inspect(B).single:
single_inh = ", a.bd, a.zcol, a.q"
else:
single_inh = ""
if use_returning:
asserter.assert_(
CompiledSQL(
"INSERT INTO a (type, data, xcol) VALUES "
"(:type, :data, :xcol) "
f"RETURNING a.id, a.type, a.data, a.xcol, a.y{single_inh}",
[
{"type": "a", "data": "d3", "xcol": 5},
{"type": "a", "data": "d4", "xcol": 6},
],
),
CompiledSQL(
"INSERT INTO a (type, data) VALUES (:type, :data) "
f"RETURNING a.id, a.type, a.data, a.xcol, a.y{single_inh}",
[{"type": "a", "data": "d5"}],
),
CompiledSQL(
"INSERT INTO a (type, data, xcol, y) "
"VALUES (:type, :data, :xcol, :y) "
f"RETURNING a.id, a.type, a.data, a.xcol, a.y{single_inh}",
[
{"type": "a", "data": "d6", "xcol": 8, "y": 9},
{"type": "a", "data": "d7", "xcol": 12, "y": 12},
],
),
CompiledSQL(
"INSERT INTO a (type, data, xcol) "
"VALUES (:type, :data, :xcol) "
f"RETURNING a.id, a.type, a.data, a.xcol, a.y{single_inh}",
[{"type": "a", "data": "d8", "xcol": 7}],
),
)
else:
asserter.assert_(
CompiledSQL(
"INSERT INTO a (type, data, xcol) VALUES "
"(:type, :data, :xcol)",
[
{"type": "a", "data": "d3", "xcol": 5},
{"type": "a", "data": "d4", "xcol": 6},
],
),
CompiledSQL(
"INSERT INTO a (type, data) VALUES (:type, :data)",
[{"type": "a", "data": "d5"}],
),
CompiledSQL(
"INSERT INTO a (type, data, xcol, y) "
"VALUES (:type, :data, :xcol, :y)",
[
{"type": "a", "data": "d6", "xcol": 8, "y": 9},
{"type": "a", "data": "d7", "xcol": 12, "y": 12},
],
),
CompiledSQL(
"INSERT INTO a (type, data, xcol) "
"VALUES (:type, :data, :xcol)",
[{"type": "a", "data": "d8", "xcol": 7}],
),
)
if use_returning:
eq_(
result.scalars().all(),
[
A(data="d3", id=mock.ANY, type="a", x=5, y=None),
A(data="d4", id=mock.ANY, type="a", x=6, y=None),
A(data="d5", id=mock.ANY, type="a", x=None, y=None),
A(data="d6", id=mock.ANY, type="a", x=8, y=9),
A(data="d7", id=mock.ANY, type="a", x=12, y=12),
A(data="d8", id=mock.ANY, type="a", x=7, y=None),
],
)
@testing.combinations(
"strings",
"cols",
"strings_w_exprs",
"cols_w_exprs",
argnames="paramstyle",
)
@testing.combinations(
True,
(False, testing.requires.multivalues_inserts),
argnames="single_element",
)
def test_single_values_returning_fn(self, paramstyle, single_element):
"""test using insert().values().
these INSERT statements go straight in as a single execute without any
insertmanyreturning or bulk_insert_mappings thing going on. the
advantage here is that SQL expressions can be used in the values also.
Disadvantage is none of the automation for inheritance mappers.
"""
A, B = self.classes("A", "B")
if paramstyle == "strings":
values = [
{"data": "d3", "x": 5, "y": 9, "type": "a"},
{"data": "d4", "x": 10, "y": 8, "type": "a"},
]
elif paramstyle == "cols":
values = [
{A.data: "d3", A.x: 5, A.y: 9, A.type: "a"},
{A.data: "d4", A.x: 10, A.y: 8, A.type: "a"},
]
elif paramstyle == "strings_w_exprs":
values = [
{"data": func.lower("D3"), "x": 5, "y": 9, "type": "a"},
{
"data": "d4",
"x": literal_column("5") + 5,
"y": 8,
"type": "a",
},
]
elif paramstyle == "cols_w_exprs":
values = [
{A.data: func.lower("D3"), A.x: 5, A.y: 9, A.type: "a"},
{
A.data: "d4",
A.x: literal_column("5") + 5,
A.y: 8,
A.type: "a",
},
]
else:
assert False
s = fixture_session()
if single_element:
if paramstyle.startswith("strings"):
stmt = (
insert(A)
.values(**values[0])
.returning(A, func.upper(A.data, type_=String))
)
else:
stmt = (
insert(A)
.values(values[0])
.returning(A, func.upper(A.data, type_=String))
)
else:
stmt = (
insert(A)
.values(values)
.returning(A, func.upper(A.data, type_=String))
)
for i in range(3):
result = s.execute(stmt)
expected: List[Any] = [(A(data="d3", x=5, y=9), "D3")]
if not single_element:
expected.append((A(data="d4", x=10, y=8), "D4"))
eq_(result.all(), expected)
def test_bulk_w_sql_expressions(self):
A, B = self.classes("A", "B")
data = [
{"x": 5, "y": 9, "type": "a"},
{
"x": 10,
"y": 8,
"type": "a",
},
]
s = fixture_session()
stmt = (
insert(A)
.values(data=func.lower("DD"))
.returning(A, func.upper(A.data, type_=String))
)
for i in range(3):
result = s.execute(stmt, data)
expected: List[Any] = [
(A(data="dd", x=5, y=9), "DD"),
(A(data="dd", x=10, y=8), "DD"),
]
eq_(result.all(), expected)
def test_bulk_w_sql_expressions_subclass(self):
A, B = self.classes("A", "B")
data = [
{"bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
s = fixture_session()
stmt = (
insert(B)
.values(data=func.lower("DD"))
.returning(B, func.upper(B.data, type_=String))
)
for i in range(3):
result = s.execute(stmt, data)
expected: List[Any] = [
(B(bd="bd1", data="dd", q=4, type="b", x=1, y=2, z=3), "DD"),
(B(bd="bd2", data="dd", q=8, type="b", x=5, y=6, z=7), "DD"),
]
eq_(result.all(), expected)
@testing.combinations(True, False, argnames="use_ordered")
def test_bulk_upd_w_sql_expressions_no_ordered_values(self, use_ordered):
A, B = self.classes("A", "B")
s = fixture_session()
stmt = update(B).ordered_values(
("data", func.lower("DD_UPDATE")),
("z", literal_column("3 + 12")),
)
with expect_raises_message(
exc.InvalidRequestError,
r"bulk ORM UPDATE does not support ordered_values\(\) "
r"for custom UPDATE",
):
s.execute(
stmt,
[
{"id": 5, "bd": "bd1_updated"},
{"id": 6, "bd": "bd2_updated"},
],
)
def test_bulk_upd_w_sql_expressions_subclass(self):
A, B = self.classes("A", "B")
s = fixture_session()
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
ids = s.scalars(insert(B).returning(B.id), data).all()
stmt = update(B).values(
data=func.lower("DD_UPDATE"), z=literal_column("3 + 12")
)
result = s.execute(
stmt,
[
{"id": ids[0], "bd": "bd1_updated"},
{"id": ids[1], "bd": "bd2_updated"},
],
)
# this is a nullresult at the moment
assert result is not None
eq_(
s.scalars(select(B)).all(),
[
B(
bd="bd1_updated",
data="dd_update",
id=ids[0],
q=4,
type="b",
x=1,
y=2,
z=15,
),
B(
bd="bd2_updated",
data="dd_update",
id=ids[1],
q=8,
type="b",
x=5,
y=6,
z=15,
),
],
)
def test_single_returning_fn(self):
A, B = self.classes("A", "B")
s = fixture_session()
for i in range(3):
result = s.execute(
insert(A).returning(A, func.upper(A.data, type_=String)),
[{"data": "d3"}, {"data": "d4"}],
)
eq_(result.all(), [(A(data="d3"), "D3"), (A(data="d4"), "D4")])
@testing.combinations(
True,
False,
argnames="single_element",
)
def test_subclass_no_returning(self, single_element):
A, B = self.classes("A", "B")
s = fixture_session()
if single_element:
data = {"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4}
else:
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
result = s.execute(insert(B), data)
assert result._soft_closed
@testing.combinations(
True,
False,
argnames="single_element",
)
def test_subclass_load_only(self, single_element):
"""test that load_only() prevents additional attributes from being
populated.
"""
A, B = self.classes("A", "B")
s = fixture_session()
if single_element:
data = {"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4}
else:
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
for i in range(3):
# tests both caching and that the data dictionaries aren't
# mutated...
result = s.execute(
insert(B).returning(B).options(load_only(B.data, B.y, B.q)),
data,
)
objects = result.scalars().all()
for obj in objects:
assert "data" in obj.__dict__
assert "q" in obj.__dict__
assert "z" not in obj.__dict__
assert "x" not in obj.__dict__
expected = [
B(data="d3", bd="bd1", x=1, y=2, z=3, q=4),
]
if not single_element:
expected.append(B(data="d4", bd="bd2", x=5, y=6, z=7, q=8))
eq_(objects, expected)
@testing.combinations(
True,
False,
argnames="single_element",
)
def test_subclass_load_only_doesnt_fetch_cols(self, single_element):
"""test that when using load_only(), the actual INSERT statement
does not include the deferred columns
"""
A, B = self.classes("A", "B")
s = fixture_session()
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
if single_element:
data = data[0]
with self.sql_execution_asserter() as asserter:
# tests both caching and that the data dictionaries aren't
# mutated...
# note that if we don't put B.id here, accessing .id on the
# B object for joined inheritance is triggering a SELECT
# (and not for single inheritance). this seems not great, but is
# likely a different issue
result = s.execute(
insert(B)
.returning(B)
.options(load_only(B.id, B.data, B.y, B.q)),
data,
)
objects = result.scalars().all()
if single_element:
id0 = objects[0].id
id1 = None
else:
id0, id1 = objects[0].id, objects[1].id
if inspect(B).single or inspect(B).concrete:
expected_params = [
{
"type": "b",
"data": "d3",
"xcol": 1,
"y": 2,
"bd": "bd1",
"zcol": 3,
"q": 4,
},
{
"type": "b",
"data": "d4",
"xcol": 5,
"y": 6,
"bd": "bd2",
"zcol": 7,
"q": 8,
},
]
if single_element:
expected_params[1:] = []
# RETURNING only includes PK, discriminator, then the cols
# we asked for data, y, q. xcol, z, bd are omitted
if inspect(B).single:
asserter.assert_(
CompiledSQL(
"INSERT INTO a (type, data, xcol, y, bd, zcol, q) "
"VALUES "
"(:type, :data, :xcol, :y, :bd, :zcol, :q) "
"RETURNING a.id, a.type, a.data, a.y, a.q",
expected_params,
),
)
else:
asserter.assert_(
CompiledSQL(
"INSERT INTO b (type, data, xcol, y, bd, zcol, q) "
"VALUES "
"(:type, :data, :xcol, :y, :bd, :zcol, :q) "
"RETURNING b.id, b.type, b.data, b.y, b.q",
expected_params,
),
)
else:
a_data = [
{"type": "b", "data": "d3", "xcol": 1, "y": 2},
{"type": "b", "data": "d4", "xcol": 5, "y": 6},
]
b_data = [
{"id": id0, "bd": "bd1", "zcol": 3, "q": 4},
{"id": id1, "bd": "bd2", "zcol": 7, "q": 8},
]
if single_element:
a_data[1:] = []
b_data[1:] = []
# RETURNING only includes PK, discriminator, then the cols
# we asked for data, y, q. xcol, z, bd are omitted. plus they
# are broken out correctly in the two statements.
asserter.assert_(
CompiledSQL(
"INSERT INTO a (type, data, xcol, y) VALUES "
"(:type, :data, :xcol, :y) "
"RETURNING a.id, a.type, a.data, a.y",
a_data,
),
CompiledSQL(
"INSERT INTO b (id, bd, zcol, q) "
"VALUES (:id, :bd, :zcol, :q) "
"RETURNING b.id, b.q",
b_data,
),
)
@testing.combinations(
True,
False,
argnames="single_element",
)
def test_subclass_returning_bind_expr(self, single_element):
A, B = self.classes("A", "B")
s = fixture_session()
if single_element:
data = {"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4}
else:
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
# note there's a fix in compiler.py ->
# _deliver_insertmanyvalues_batches
# for this re: the parameter rendering that isn't tested anywhere
# else. two different versions of the bug for both positional
# and non
result = s.execute(insert(B).returning(B.data, B.y, B.q + 5), data)
if single_element:
eq_(result.all(), [("d3", 2, 9)])
else:
eq_(result.all(), [("d3", 2, 9), ("d4", 6, 13)])
def test_subclass_bulk_update(self):
A, B = self.classes("A", "B")
s = fixture_session()
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
ids = s.scalars(insert(B).returning(B.id), data).all()
result = s.execute(
update(B),
[
{"id": ids[0], "data": "d3_updated", "bd": "bd1_updated"},
{"id": ids[1], "data": "d4_updated", "bd": "bd2_updated"},
],
)
# this is a nullresult at the moment
assert result is not None
eq_(
s.scalars(select(B)).all(),
[
B(
bd="bd1_updated",
data="d3_updated",
id=ids[0],
q=4,
type="b",
x=1,
y=2,
z=3,
),
B(
bd="bd2_updated",
data="d4_updated",
id=ids[1],
q=8,
type="b",
x=5,
y=6,
z=7,
),
],
)
@testing.combinations(True, False, argnames="single_element")
def test_subclass_return_just_subclass_ids(self, single_element):
A, B = self.classes("A", "B")
s = fixture_session()
if single_element:
data = {"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4}
else:
data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
ids = s.scalars(insert(B).returning(B.id), data).all()
actual_ids = s.scalars(select(B.id).order_by(B.data)).all()
eq_(ids, actual_ids)
@testing.combinations(
"orm",
"bulk",
argnames="insert_strategy",
)
@testing.requires.provisioned_upsert
def test_base_class_upsert(self, insert_strategy):
        """upsert is really tricky. if you don't have any data updated,
        then you don't get the rows back and things don't work so well.
        so we need to be careful how much we document this because this is
        still a thorny use case.
"""
A = self.classes.A
s = fixture_session()
initial_data = [
{"data": "d3", "x": 1, "y": 2, "q": 4},
{"data": "d4", "x": 5, "y": 6, "q": 8},
]
ids = s.scalars(insert(A).returning(A.id), initial_data).all()
upsert_data = [
{
"id": ids[0],
"type": "a",
"data": "d3",
"x": 1,
"y": 2,
},
{
"id": 32,
"type": "a",
"data": "d32",
"x": 19,
"y": 5,
},
{
"id": ids[1],
"type": "a",
"data": "d4",
"x": 5,
"y": 6,
},
{
"id": 28,
"type": "a",
"data": "d28",
"x": 9,
"y": 15,
},
]
stmt = provision.upsert(
config,
A,
(A,),
lambda inserted: {"data": inserted.data + " upserted"},
)
if insert_strategy == "orm":
result = s.scalars(stmt.values(upsert_data))
elif insert_strategy == "bulk":
result = s.scalars(stmt, upsert_data)
else:
assert False
eq_(
result.all(),
[
A(data="d3 upserted", id=ids[0], type="a", x=1, y=2),
A(data="d32", id=32, type="a", x=19, y=5),
A(data="d4 upserted", id=ids[1], type="a", x=5, y=6),
A(data="d28", id=28, type="a", x=9, y=15),
],
)
@testing.combinations(
"orm",
"bulk",
argnames="insert_strategy",
)
@testing.requires.provisioned_upsert
def test_subclass_upsert(self, insert_strategy):
"""note this is overridden in the joined version to expect failure"""
A, B = self.classes("A", "B")
s = fixture_session()
idd3 = 1
idd4 = 2
id32 = 32
id28 = 28
initial_data = [
{
"id": idd3,
"data": "d3",
"bd": "bd1",
"x": 1,
"y": 2,
"z": 3,
"q": 4,
},
{
"id": idd4,
"data": "d4",
"bd": "bd2",
"x": 5,
"y": 6,
"z": 7,
"q": 8,
},
]
ids = s.scalars(insert(B).returning(B.id), initial_data).all()
upsert_data = [
{
"id": ids[0],
"type": "b",
"data": "d3",
"bd": "bd1_upserted",
"x": 1,
"y": 2,
"z": 33,
"q": 44,
},
{
"id": id32,
"type": "b",
"data": "d32",
"bd": "bd 32",
"x": 19,
"y": 5,
"z": 20,
"q": 21,
},
{
"id": ids[1],
"type": "b",
"bd": "bd2_upserted",
"data": "d4",
"x": 5,
"y": 6,
"z": 77,
"q": 88,
},
{
"id": id28,
"type": "b",
"data": "d28",
"bd": "bd 28",
"x": 9,
"y": 15,
"z": 10,
"q": 11,
},
]
stmt = provision.upsert(
config,
B,
(B,),
lambda inserted: {
"data": inserted.data + " upserted",
"bd": inserted.bd + " upserted",
},
)
result = s.scalars(stmt, upsert_data)
eq_(
result.all(),
[
B(
bd="bd1_upserted upserted",
data="d3 upserted",
id=ids[0],
q=4,
type="b",
x=1,
y=2,
z=3,
),
B(
bd="bd 32",
data="d32",
id=32,
q=21,
type="b",
x=19,
y=5,
z=20,
),
B(
bd="bd2_upserted upserted",
data="d4 upserted",
id=ids[1],
q=8,
type="b",
x=5,
y=6,
z=7,
),
B(
bd="bd 28",
data="d28",
id=28,
q=11,
type="b",
x=9,
y=15,
z=10,
),
],
)
class BulkDMLReturningJoinedInhTest(
BulkDMLReturningInhTest, fixtures.DeclarativeMappedTest
):
__requires__ = ("insert_returning",)
__backend__ = True
@classmethod
def setup_classes(cls):
decl_base = cls.DeclarativeBasic
class A(fixtures.ComparableEntity, decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(Identity(), primary_key=True)
type: Mapped[str]
data: Mapped[str]
x: Mapped[Optional[int]] = mapped_column("xcol")
y: Mapped[Optional[int]]
__mapper_args__ = {
"polymorphic_identity": "a",
"polymorphic_on": "type",
}
class B(A):
__tablename__ = "b"
id: Mapped[int] = mapped_column(
ForeignKey("a.id"), primary_key=True
)
bd: Mapped[str]
z: Mapped[Optional[int]] = mapped_column("zcol")
q: Mapped[Optional[int]]
__mapper_args__ = {"polymorphic_identity": "b"}
@testing.combinations(
"orm",
"bulk",
argnames="insert_strategy",
)
@testing.combinations(
True,
False,
argnames="single_param",
)
@testing.requires.provisioned_upsert
def test_subclass_upsert(self, insert_strategy, single_param):
A, B = self.classes("A", "B")
s = fixture_session()
initial_data = [
{"data": "d3", "bd": "bd1", "x": 1, "y": 2, "z": 3, "q": 4},
{"data": "d4", "bd": "bd2", "x": 5, "y": 6, "z": 7, "q": 8},
]
ids = s.scalars(insert(B).returning(B.id), initial_data).all()
upsert_data = [
{
"id": ids[0],
"type": "b",
},
{
"id": 32,
"type": "b",
},
]
if single_param:
upsert_data = upsert_data[0]
stmt = provision.upsert(
config,
B,
(B,),
lambda inserted: {
"bd": inserted.bd + " upserted",
},
)
with expect_raises_message(
exc.InvalidRequestError,
r"bulk INSERT with a 'post values' clause \(typically upsert\) "
r"not supported for multi-table mapper",
):
s.scalars(stmt, upsert_data)
class BulkDMLReturningSingleInhTest(
BulkDMLReturningInhTest, fixtures.DeclarativeMappedTest
):
__requires__ = ("insert_returning",)
__backend__ = True
@classmethod
def setup_classes(cls):
decl_base = cls.DeclarativeBasic
class A(fixtures.ComparableEntity, decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(Identity(), primary_key=True)
type: Mapped[str]
data: Mapped[str]
x: Mapped[Optional[int]] = mapped_column("xcol")
y: Mapped[Optional[int]]
__mapper_args__ = {
"polymorphic_identity": "a",
"polymorphic_on": "type",
}
class B(A):
bd: Mapped[str] = mapped_column(nullable=True)
z: Mapped[Optional[int]] = mapped_column("zcol")
q: Mapped[Optional[int]]
__mapper_args__ = {"polymorphic_identity": "b"}
class BulkDMLReturningConcreteInhTest(
BulkDMLReturningInhTest, fixtures.DeclarativeMappedTest
):
__requires__ = ("insert_returning",)
    __backend__ = True
    @classmethod
    def setup_classes(cls):
        decl_base = cls.DeclarativeBasic
        class A(fixtures.ComparableEntity, decl_base):
            __tablename__ = "a"
            id: Mapped[int] = mapped_column(Identity(), primary_key=True)
            type: Mapped[str]
            data: Mapped[str]
            x: Mapped[Optional[int]] = mapped_column("xcol")
            y: Mapped[Optional[int]]
            __mapper_args__ = {
                "polymorphic_identity": "a",
                "polymorphic_on": "type",
            }
        class B(A):
            __tablename__ = "b"
            id: Mapped[int] = mapped_column(Identity(), primary_key=True)
            type: Mapped[str]
            data: Mapped[str]
            x: Mapped[Optional[int]] = mapped_column("xcol")
            y: Mapped[Optional[int]]
            bd: Mapped[str] = mapped_column(nullable=True)
            z: Mapped[Optional[int]] = mapped_column("zcol")
            q: Mapped[Optional[int]]
            __mapper_args__ = {
                "polymorphic_identity": "b",
                "concrete": True,
                "polymorphic_on": "type",
            }
class CTETest(fixtures.DeclarativeMappedTest):
    __requires__ = ("insert_returning", "ctes_on_dml")
    __backend__ = True
    @classmethod
    def setup_classes(cls):
        decl_base = cls.DeclarativeBasic
        class User(fixtures.ComparableEntity, decl_base):
            __tablename__ = "users"
            id: Mapped[uuid.UUID] = mapped_column(primary_key=True)
            username: Mapped[str]
    @testing.combinations(
        ("cte_aliased", True),
        ("cte", False),
        argnames="wrap_cte_in_aliased",
        id_="ia",
    )
    @testing.combinations(
        ("use_union", True),
        ("no_union", False),
        argnames="use_a_union",
        id_="ia",
    )
    @testing.combinations(
        "from_statement", "aliased", "direct", argnames="fetch_entity_type"
    )
    def test_select_from_insert_cte(
        self, wrap_cte_in_aliased, use_a_union, fetch_entity_type
    ):
        """test the use case from #8544; SELECT that selects from a
        CTE INSERT...RETURNING.
        """
        User = self.classes.User
        id_ = uuid.uuid4()
        cte = (
            insert(User)
            .values(id=id_, username="some user")
            .returning(User)
            .cte()
        )
        if wrap_cte_in_aliased:
            cte = aliased(User, cte)
        if use_a_union:
            stmt = select(User).where(User.id == id_).union(select(cte))
        else:
            stmt = select(cte)
        if fetch_entity_type == "from_statement":
            outer_stmt = select(User).from_statement(stmt)
            expect_entity = True
        elif fetch_entity_type == "aliased":
            outer_stmt = select(aliased(User, stmt.subquery()))
            expect_entity = True
        elif fetch_entity_type == "direct":
            outer_stmt = stmt
            expect_entity = not use_a_union and wrap_cte_in_aliased
        else:
            assert False
        sess = fixture_session()
        with self.sql_execution_asserter() as asserter:
            if not expect_entity:
                row = sess.execute(outer_stmt).one()
                eq_(row, (id_, "some user"))
            else:
                new_user = sess.scalars(outer_stmt).one()
                eq_(new_user, User(id=id_, username="some user"))
        cte_sql = (
            "(INSERT INTO users (id, username) "
            "VALUES (:param_1, :param_2) "
            "RETURNING users.id, users.username)"
        )
        if fetch_entity_type == "aliased" and not use_a_union:
            expected = (
                f"WITH anon_2 AS {cte_sql} "
                "SELECT anon_1.id, anon_1.username "
                "FROM (SELECT anon_2.id AS id, anon_2.username AS username "
                "FROM anon_2) AS anon_1"
            )
        elif not use_a_union:
            expected = (
                f"WITH anon_1 AS {cte_sql} "
                "SELECT anon_1.id, anon_1.username FROM anon_1"
            )
        elif fetch_entity_type == "aliased":
            expected = (
                f"WITH anon_2 AS {cte_sql} SELECT anon_1.id, anon_1.username "
                "FROM (SELECT users.id AS id, users.username AS username "
                "FROM users WHERE users.id = :id_1 "
                "UNION SELECT anon_2.id AS id, anon_2.username AS username "
                "FROM anon_2) AS anon_1"
            )
        else:
            expected = (
                f"WITH anon_1 AS {cte_sql} "
                "SELECT users.id, users.username FROM users "
                "WHERE users.id = :id_1 "
                "UNION SELECT anon_1.id, anon_1.username FROM anon_1"
            )
        asserter.assert_(
            CompiledSQL(expected, [{"param_1": id_, "param_2": "some user"}])
        )
| [
"mike_mp@zzzcomputing.com"
] | mike_mp@zzzcomputing.com |
99676168522f6040813b9ddb4402a4be3081a0d5 | f87f51ec4d9353bc3836e22ac4a944951f9c45c0 | /.history/HW02_20210630162753.py | ba3ab937a9adc7f59ef0592f1d2524cc1d73c68b | [] | no_license | sanjayMamidipaka/cs1301 | deaffee3847519eb85030d1bd82ae11e734bc1b7 | 9ddb66596497382d807673eba96853a17884d67b | refs/heads/main | 2023-06-25T04:52:28.153535 | 2021-07-26T16:42:44 | 2021-07-26T16:42:44 | 389,703,530 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,432 | py | """
Georgia Institute of Technology - CS1301
HW02 - Conditionals and Loops
Collaboration Statement:
"""
#########################################
"""
Function Name: snackBar()
Parameters: snack (str), ingredient (str), yourMoney (float)
Returns: whether you can get the snack (bool)
"""
#########################################
########## WRITE FUNCTION HERE ##########
#########################################
def snackBar(snack, ingredient, yourMoney):
if snack == 'Hotdog':
if not ingredient == 'Gluten' and not ingredient == 'Meat' and yourMoney >= 5.99:
return True
else:
return False
if snack == 'Veggie Burger':
if not ingredient == 'Gluten' and yourMoney >= 5.99:
return True
else:
return False
if snack == 'Chili Bowl':
if not ingredient == 'Meat' and yourMoney >= 3.99:
return True
else:
return False
if snack == 'Chili Cheese Fries':
        if not ingredient == 'Meat' and not ingredient == 'Dairy' and yourMoney >= 4.99:
return True
else:
return False
"""
Function Name: waterGames()
Parameters: gameName (str), numPlayers (int), totalFriends (int)
Returns: None (NoneType)
"""
#########################################
########## WRITE FUNCTION HERE ##########
#########################################
def waterGames(gameName, numPlayers, totalFriends):
percentPlaying = numPlayers / totalFriends
if percentPlaying < 0.3:
print('Let’s choose something else.')
elif percentPlaying >= 0.3 and percentPlaying < 0.75:
print('We will {} for a little bit!'.format(gameName))
elif percentPlaying >= 0.75:
print("Let's " + gameName + '!!!')
"""
Function Name: summerShopping()
Parameters: clothingItem (str), size (str)
Returns: None (NoneType)
"""
#########################################
########## WRITE FUNCTION HERE ##########
#########################################
def summerShopping(clothingItem, size):
if clothingItem == 'shorts':
if size == 'S':
print("2 colors are available in this item and size.")
elif size == 'M':
print("1 colors are available in this item and size.")
elif size == 'L':
print("No colors are available in this item and size.")
if clothingItem == 'tank':
if size == 'S':
print("1 colors are available in this item and size.")
elif size == 'M':
print("1 colors are available in this item and size.")
elif size == 'L':
print("2 colors are available in this item and size.")
if clothingItem == 'flipflops':
if size == 'S':
print("1 colors are available in this item and size.")
elif size == 'M':
print("1 colors are available in this item and size.")
elif size == 'L':
print("2 colors are available in this item and size.")
"""
Function Name: stopGame()
Parameters: initialPrice (float), finalPrice (float), percentGrowth (float)
Returns: numberOfDays (int)
"""
#########################################
########## WRITE FUNCTION HERE ##########
#########################################
def stopGame(initialPrice, finalPrice, percentGrowth):
if finalPrice <= initialPrice:
return 0
newPrice = initialPrice
days = 0
while (newPrice <= finalPrice):
newPrice = newPrice * (1 + (percentGrowth/100))
days += 1
return days
"""
Function Name: adventure()
Parameters: startDay (int), stopDay (int), hikeLimit(int)
Returns: None (NoneType)
"""
#########################################
########## WRITE FUNCTION HERE ##########
#########################################
def adventure(startDay, stopDay, hikeLimit):
numberOfHikes = 0
for i in range(startDay, stopDay+1):
if i % 3 == 0 and i % 4 == 0 and numberOfHikes < hikeLimit:
print('Roadtrip!')
elif i % 3 == 0 and numberOfHikes < hikeLimit:
print('Hike')
numberOfHikes += 1
if numberOfHikes == hikeLimit:
print('No more hikes')
return 'yay'
print(stopGame(232.0, 20000.0, 15.0))
adventure(4, 29, 3)
| [
"sanjay.mamidipaka@gmail.com"
] | sanjay.mamidipaka@gmail.com |
b5b8ba1d6d74bfc6140163460ff7ee5b0e2234ff | 4d27d69c22f9c405e1d11baa7d3872b7075c68fa | /day3/oploadpic.py | 3f1229049a597b4ea32491649f0c3c45f4bce2dc | [] | no_license | zhentestnice/selenium_test1 | 99f547104e5b547e78c9fb1dd3860a6a97c91d63 | 9bbb8578b84447c6b14adefd3122e2b8ac437dc4 | refs/heads/master | 2021-08-23T04:13:25.251992 | 2017-12-03T07:09:14 | 2017-12-03T07:09:14 | 112,907,711 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,764 | py | import time
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.maximize_window()
driver.get("http://localhost/index.php?m=admin&c=public&a=login")
#1. Log in
driver.find_element_by_name("username").send_keys("admin")
driver.find_element_by_name("userpass").send_keys("password")
driver.find_element_by_name("userverify").send_keys("1234")
driver.find_element_by_class_name("Btn").click()
#2. Product management
driver.find_element_by_link_text("商品管理").click()
#3. Add product
driver.find_element_by_link_text("添加商品").click()
#4. Product name
#Some pages are special, e.g. they have a navigation bar on the left or at the top.
#Here the "商品管理" (product management) and "添加商品" (add product) links belong to the page's root document,
#while the product name field lives in a child page inside a frame,
#so we need to switch documents now.
driver.switch_to.frame("mainFrame") #switch into the child frame
driver.find_element_by_name("name").send_keys("iphone 2")
#5. Product category
driver.find_element_by_id("1").click()
driver.find_element_by_id("2").click()
driver.find_element_by_id("6").click()
#driver.find_element_by_id("7").click()
#Double-click is a special element action; it is wrapped in the ActionChains class (the Actions class in Java)
#The action chain must end with perform()
ActionChains(driver).double_click(driver.find_element_by_id("7")).perform()
#driver.find_element_by_link_text("选择当前分类").click()
#6. Product brand
pinpai = driver.find_element_by_tag_name("select")
Select(pinpai).select_by_visible_text("苹果 (Apple)")
#7. Upload image
driver.find_element_by_link_text("商品图册").click()
#Some page controls are generated by JavaScript after the page loads; sometimes the page has finished loading
#but the JavaScript controls are not ready yet, so time.sleep is added to make the script more stable
#implicitly_wait() is only used to wait for the page itself to finish loading
time.sleep(2)
#driver.find_element_by_css_selector("filePicker label").click()
#class="webuploader-element-invisible" marks an invisible control
#The page element actually responsible for uploading the file is <input type="file"...>,
#and this control accepts the image file path directly
driver.find_element_by_name("file").send_keys("D:/111.png")
driver.find_element_by_css_selector(".uploadBtn.state-finish.state-ready").click()
time.sleep(3)
driver.switch_to.alert.accept()
#8. Submit
driver.find_element_by_class_name("button_search").click()
#driver.find_element_by_class_name("button_search").click()
#Issue:
#The page is too long, so the button at the bottom cannot be clicked; how do we drive the scrollbar?
#range iterates over an interval
ac = ActionChains(driver)
for i in range(10):
ac.send_keys(Keys.ARROW_DOWN)
ac.perform()
driver.execute_script("window.scrollTo(200,100)") #scroll to the given x,y coordinates
"51Testing"
] | 51Testing |
933a90d77bcc6337f44e77eb422d6513ca2f3a4e | 32eeb97dff5b1bf18cf5be2926b70bb322e5c1bd | /benchmark/alwayson/testcase/firstcases/testcase5_028.py | 6cf58969ae1d92078ea9a7ace0fa943a46deb2cd | [] | no_license | Prefest2018/Prefest | c374d0441d714fb90fca40226fe2875b41cf37fc | ac236987512889e822ea6686c5d2e5b66b295648 | refs/heads/master | 2021-12-09T19:36:24.554864 | 2021-12-06T12:46:14 | 2021-12-06T12:46:14 | 173,225,161 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,631 | py | #coding=utf-8
import os
import subprocess
import time
import traceback
from appium import webdriver
from appium.webdriver.common.touch_action import TouchAction
from selenium.common.exceptions import NoSuchElementException, WebDriverException
desired_caps = {
'platformName' : 'Android',
'deviceName' : 'Android Emulator',
'platformVersion' : '4.4',
'appPackage' : 'com.tomer.alwayson',
'appActivity' : 'com.tomer.alwayson.activities.PreferencesActivity',
'resetKeyboard' : True,
'androidCoverage' : 'com.tomer.alwayson/com.tomer.alwayson.JacocoInstrumentation',
'noReset' : True
}
def command(cmd, timeout=5):
p = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=True)
time.sleep(timeout)
p.terminate()
return
def getElememt(driver, str) :
for i in range(0, 5, 1):
try:
element = driver.find_element_by_android_uiautomator(str)
except NoSuchElementException:
time.sleep(1)
else:
return element
os.popen("adb shell input tap 50 50")
element = driver.find_element_by_android_uiautomator(str)
return element
def getElememtBack(driver, str1, str2) :
for i in range(0, 2, 1):
try:
element = driver.find_element_by_android_uiautomator(str1)
except NoSuchElementException:
time.sleep(1)
else:
return element
for i in range(0, 5, 1):
try:
element = driver.find_element_by_android_uiautomator(str2)
except NoSuchElementException:
time.sleep(1)
else:
return element
os.popen("adb shell input tap 50 50")
element = driver.find_element_by_android_uiautomator(str2)
return element
def swipe(driver, startxper, startyper, endxper, endyper) :
size = driver.get_window_size()
width = size["width"]
height = size["height"]
try:
driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
end_y=int(height * endyper), duration=2000)
except WebDriverException:
time.sleep(1)
driver.swipe(start_x=int(width * startxper), start_y=int(height * startyper), end_x=int(width * endxper),
end_y=int(height * endyper), duration=2000)
return
# testcase028
try :
starttime = time.time()
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
element = getElememtBack(driver, "new UiSelector().text(\"Customize Watchface\")", "new UiSelector().className(\"android.widget.TextView\").instance(9)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Text\")", "new UiSelector().className(\"android.widget.TextView\").instance(4)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"M\")", "new UiSelector().className(\"android.widget.TextView\").instance(7)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"F\")", "new UiSelector().className(\"android.widget.TextView\").instance(11)")
TouchAction(driver).tap(element).perform()
driver.press_keycode(4)
element = getElememtBack(driver, "new UiSelector().text(\"Memo text\")", "new UiSelector().className(\"android.widget.TextView\").instance(15)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Cancel\")", "new UiSelector().className(\"android.widget.Button\")")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Styles\")", "new UiSelector().className(\"android.widget.TextView\")")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Set the text color\")", "new UiSelector().className(\"android.widget.TextView\").instance(13)")
TouchAction(driver).tap(element).perform()
driver.press_keycode(4)
element = getElememtBack(driver, "new UiSelector().text(\"Text & Font\")", "new UiSelector().className(\"android.widget.TextView\").instance(7)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Full calendar\")", "new UiSelector().className(\"android.widget.TextView\").instance(4)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Text\")", "new UiSelector().className(\"android.widget.TextView\").instance(3)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"S\")", "new UiSelector().className(\"android.widget.TextView\").instance(12)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"July 2018\")", "new UiSelector().className(\"android.widget.TextView\").instance(4)")
TouchAction(driver).tap(element).perform()
element = getElememtBack(driver, "new UiSelector().text(\"Full calendar\")", "new UiSelector().className(\"android.widget.TextView\").instance(13)")
TouchAction(driver).tap(element).perform()
driver.press_keycode(4)
element = getElememtBack(driver, "new UiSelector().text(\"Battery style\")", "new UiSelector().className(\"android.widget.TextView\").instance(5)")
TouchAction(driver).tap(element).perform()
except Exception, e:
print 'FAIL'
print 'str(e):\t\t', str(e)
print 'repr(e):\t', repr(e)
print traceback.format_exc()
else:
print 'OK'
finally:
cpackage = driver.current_package
endtime = time.time()
print 'consumed time:', str(endtime - starttime), 's'
command("adb shell am broadcast -a com.example.pkg.END_EMMA --es name \"5_028\"")
jacocotime = time.time()
print 'jacoco time:', str(jacocotime - endtime), 's'
driver.quit()
if (cpackage != 'com.tomer.alwayson'):
cpackage = "adb shell am force-stop " + cpackage
os.popen(cpackage)
| [
"prefest2018@gmail.com"
] | prefest2018@gmail.com |
5e4cb27bc88ca962e358de8631e842d4b3395cfb | 5292189eb99d9a69b4e417dfed352e7de0844b0e | /scripts/generate_enriched_texts.py | d76137474504d68ed6cc0d8f876971e1e90b30da | [
"MIT"
] | permissive | Envinorma/data-tasks | e1197ac3deada7edc5406933b65fd099bd412f6d | 7aa12b5def1b8a7a10c9651fb02267592fef0368 | refs/heads/main | 2022-10-26T21:38:39.952029 | 2022-06-12T08:46:38 | 2022-06-12T08:46:38 | 364,975,968 | 0 | 0 | MIT | 2022-10-11T12:25:53 | 2021-05-06T16:41:49 | Python | UTF-8 | Python | false | false | 1,265 | py | # DEPRECATED
'''
Script for generating all versions of a specific AM using its
structured version and its parametrization.
'''
# from typing import Optional, Tuple
# from envinorma.parametrization.am_with_versions import AMVersions, generate_am_with_versions
# from envinorma.utils import write_json
# from tasks.data_build.config import DATA_FETCHER
# TEST_ID = 'JORFTEXT000023081678'
# def _create_folder_and_generate_parametric_filename(am_id: str, version_desc: Tuple[str, ...]) -> str:
# raise NotImplementedError()
# def _dump(am_id: str, versions: Optional[AMVersions]) -> None:
# if not versions:
# return
# for version_desc, version in versions.items():
# filename = _create_folder_and_generate_parametric_filename(am_id, version_desc)
# write_json(version.to_dict(), filename)
# def handle_am(am_id: str) -> None:
# metadata = DATA_FETCHER.load_am_metadata(am_id)
# if not metadata:
# raise ValueError(f'AM {am_id} not found.')
# final_am = generate_am_with_versions(
# DATA_FETCHER.safe_load_most_advanced_am(am_id), DATA_FETCHER.load_or_init_parametrization(am_id), metadata
# )
# _dump(am_id, final_am.am_versions)
# if __name__ == '__main__':
# handle_am(TEST_ID)
| [
"remi.delbouys@laposte.net"
] | remi.delbouys@laposte.net |
37276aeb06dcad99c2d20af20c2879662c23e92f | 6e8f2e28479566dbaa338300b2d61f784ff83f97 | /.history/code/live_20210420075102.py | 5aaca3d8bdf9d7202c4559c9a90c0e239879462c | [] | no_license | eeng5/CV-final-project | 55a7d736f75602858233ebc380c4e1d67ab2b866 | 580e28819560b86f6974959efb1d31ef138198fc | refs/heads/main | 2023-04-09T21:28:21.531293 | 2021-04-21T19:57:22 | 2021-04-21T19:57:22 | 352,703,734 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,547 | py | import os
import cv2
import sys
import numpy as np
from models import SimpleModel
from preprocess import Datasets
import hyperparameters as hp
import tensorflow as tf
from skimage.transform import resize
from PIL import Image, ImageFont, ImageDraw
from scipy.spatial import distance as dist
from imutils import face_utils
from imutils.video import VideoStream
import fastai
import fastai.vision
import imutils
import argparse
import time
import dlib
from skimage import transform
from keras.preprocessing import image
def createPixelArray(arr):
    array = np.array(arr, dtype=np.uint8) / 255.  # normalize pixel values to [0, 1]
    array = transform.resize(array, (48, 48, 1))
    array = np.array([array])  # add a batch dimension for model.predict
    return array
weights_str = "/Users/Natalie/Desktop/cs1430/CV-final-project/code/checkpoints/simple_model/041321-113618/your.weights.e015-acc0.6121.h5"
os.chdir(sys.path[0])
model = SimpleModel()
model(tf.keras.Input(shape=(hp.img_size, hp.img_size)))
model.load_weights(weights_str, by_name=False)
model.compile(
optimizer=model.optimizer,
loss=model.loss_fn,
metrics=["sparse_categorical_accuracy"],
)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
vs = VideoStream(src=0).start()
start = time.perf_counter()
data = []
time_value = 0
out = cv2.VideoWriter(
"liveoutput.avi", cv2.VideoWriter_fourcc("M", "J", "P", "G"), 10, (450, 253)
)
while True:
frame = vs.read()
frame = imutils.resize(frame, width=450)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
face_coord = face_cascade.detectMultiScale(gray, 1.1, 5, minSize=(48, 48))
for coords in face_coord:
X, Y, w, h = coords
H, W, _ = frame.shape
X_1, X_2 = (max(0, X - int(w)), min(X + int(1.3 * w), W))
Y_1, Y_2 = (max(0, Y - int(0.1 * h)), min(Y + int(1.3 * h), H))
img_cp = gray[Y_1:Y_1+48, X_1:X_1+48].copy()
img_mod = createPixelArray(img_cp)
prediction = model.predict(img_mod)
prediction = np.argmax(prediction)
cv2.rectangle(
img=frame,
pt1=(X_1, Y_1),
pt2=(X_2, Y_2),
color=(128, 128, 0),
thickness=2,
)
cv2.putText(
frame,
str(prediction),
(10, frame.shape[0] - 25),
cv2.FONT_HERSHEY_SIMPLEX,
0.7,
(225, 255, 255),
2,)
cv2.imshow("frame", frame)
out.write(frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
vs.stop()
out.release()
cv2.destroyAllWindows()
| [
"natalie_rshaidat@brown.edu"
] | natalie_rshaidat@brown.edu |
ed7ad0ebeb496183ba4b5ae5a8803c223274731c | 6b1dd40d16ae6169e7ed780c5062e88d10502c85 | /Kaggle/Playgroud/RiskPrediction/Home-Credit-Default-Risk-master/py/trash/902_cv_LOO_524-1.py | da31953b17d71903f208c8f4e306fb38ac469b9e | [
"MIT"
] | permissive | hehuanlin123/DeepLearning | 8a59680a341cfc525d50aa5afc3e44202ca4acc4 | 6b7feabbbde9ac9489f76da4c06eeb6703fb165a | refs/heads/master | 2022-07-12T09:26:08.617883 | 2019-06-10T11:31:37 | 2019-06-10T11:31:37 | 183,748,407 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,572 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu May 24 21:09:29 2018
@author: kazuki.onodera
"""
import numpy as np
import pandas as pd
import sys
sys.path.append('/home/kazuki_onodera/Python')
import lgbmextension as ex
import lightgbm as lgb
import gc
import utils
utils.start(__file__)
#==============================================================================
SEED = 71
X = pd.concat([utils.read_pickles('../data/101_train'),
utils.read_pickles('../data/102_train'),
utils.read_pickles('../data/103_train')], axis=1)
y = utils.read_pickles('../data/label').TARGET
param = {
'objective': 'binary',
'metric': 'auc',
'learning_rate': 0.05,
'max_depth': -1,
'num_leaves': 127,
'max_bin': 100,
'colsample_bytree': 0.5,
'subsample': 0.5,
'nthread': 64,
'bagging_freq': 1,
'seed': SEED,
'verbose': -1
}
categorical_feature = ['NAME_CONTRACT_TYPE',
'CODE_GENDER',
'FLAG_OWN_CAR',
'FLAG_OWN_REALTY',
'NAME_TYPE_SUITE',
'NAME_INCOME_TYPE',
'NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS',
'NAME_HOUSING_TYPE',
'OCCUPATION_TYPE',
'WEEKDAY_APPR_PROCESS_START',
'ORGANIZATION_TYPE',
'FONDKAPREMONT_MODE',
'HOUSETYPE_MODE',
'WALLSMATERIAL_MODE',
'EMERGENCYSTATE_MODE']
dtrain = lgb.Dataset(X, y,
categorical_feature=categorical_feature)
ret = lgb.cv(param, dtrain, 9999, nfold=5,
early_stopping_rounds=50, verbose_eval=None,
seed=SEED)
print(f"NO drop auc-mean {ret['auc-mean'][-1]}")
for c in X.columns:
print(f'drop {c}')
gc.collect()
categorical_feature_ = categorical_feature[:]
if c in categorical_feature_:
categorical_feature_.remove(c)
dtrain = lgb.Dataset(X.drop(c, axis=1), y,
categorical_feature=categorical_feature_)
ret = lgb.cv(param, dtrain, 9999, nfold=5,
# categorical_feature=categorical_feature,
early_stopping_rounds=50, verbose_eval=None,
seed=SEED)
print(f"auc-mean {ret['auc-mean'][-1]}")
#==============================================================================
utils.end(__file__)
| [
"szkfzx@szkfzxdeiMac.local"
] | szkfzx@szkfzxdeiMac.local |
f58417c1d198d5d79a6b11bba9b19fb2b7416ef0 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/verbs/_perish.py | f448a3104103b61dfd8543a82aa79519714189e1 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 449 | py |
#calss header
class _PERISH():
def __init__(self,):
self.name = "PERISH"
self.definitions = [u'to die, especially in an accident or by being killed, or to be destroyed: ', u'If material such as rubber or leather perishes, it decays and starts to break into pieces: ']
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.specie = 'verbs'
def run(self, obj1 = [], obj2 = []):
return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
5814d5b9897fce2326f8464326357614ccef0682 | ad9782856ec2f860fccbefa5e75a896691b8e1cc | /MonteCarlo/test/dropLargeRespace/VBF_HToZZTo4L_M125_14TeV_powheg2_JHUgenV702_pythia8_LHE_GEN_SIM_OT_Tilted_362_200_Pixel_4021_dropLargeRespace.py | a4a679fc47b806bb6339c6dcefcd23818ed9126b | [] | no_license | OSU-CMS/VFPix | 7fe092fc5a973b4f9edc29dbfdf44907664683e5 | 4c9fd903219742a4eba1321dc4181da125616e4c | refs/heads/master | 2020-04-09T05:52:05.644653 | 2019-01-09T13:44:22 | 2019-01-09T13:44:22 | 30,070,948 | 0 | 0 | null | 2018-11-30T13:15:54 | 2015-01-30T12:26:20 | Python | UTF-8 | Python | false | false | 6,517 | py | # Auto generated configuration file
# using:
# Revision: 1.19
# Source: /local/reps/CMSSW/CMSSW/Configuration/Applications/python/ConfigBuilder.py,v
# with command line options: Configuration/Generator/python/VBF_HToZZTo4L_M125_14TeV_powheg2_JHUgenV702_pythia8_cfi.py --conditions auto:phase2_realistic -n 100 --era Phase2C2 --eventcontent FEVTDEBUG --relval 9000,100 -s LHE,GEN,SIM --datatier GEN-SIM --beamspot HLLHC --geometry Extended2023D4 --fileout step2_SIM.root
import FWCore.ParameterSet.Config as cms
from Configuration.StandardSequences.Eras import eras
process = cms.Process('SIM',eras.Phase2C2)
# import of standard configurations
process.load('Configuration.StandardSequences.Services_cff')
process.load('SimGeneral.HepPDTESSource.pythiapdt_cfi')
process.load('FWCore.MessageService.MessageLogger_cfi')
process.load('Configuration.EventContent.EventContent_cff')
process.load('SimGeneral.MixingModule.mixNoPU_cfi')
process.load('Configuration.Geometry.GeometryExtended2023D4Reco_cff')
process.load('Configuration.Geometry.GeometryExtended2023D4_cff')
process.load('Configuration.StandardSequences.MagneticField_cff')
process.load('Configuration.StandardSequences.Generator_cff')
process.load('IOMC.EventVertexGenerators.VtxSmearedHLLHC_cfi')
process.load('GeneratorInterface.Core.genFilterSummary_cff')
process.load('Configuration.StandardSequences.SimIdeal_cff')
process.load('Configuration.StandardSequences.EndOfProcess_cff')
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(5)
)
# Input source
process.source = cms.Source("EmptySource")
process.options = cms.untracked.PSet(
)
# Production Info
process.configurationMetadata = cms.untracked.PSet(
annotation = cms.untracked.string('VFPix/MonteCarlo/python/VBF_HToZZTo4L_M125_14TeV_powheg2_JHUgenV702_pythia8_cfi.py nevts:100'),
name = cms.untracked.string('Applications'),
version = cms.untracked.string('$Revision: 1.19 $')
)
# Output definition
process.FEVTDEBUGoutput = cms.OutputModule("PoolOutputModule",
SelectEvents = cms.untracked.PSet(
SelectEvents = cms.vstring('generation_step')
),
dataset = cms.untracked.PSet(
dataTier = cms.untracked.string('GEN-SIM'),
filterName = cms.untracked.string('')
),
eventAutoFlushCompressedSize = cms.untracked.int32(5242880),
fileName = cms.untracked.string('step2_SIM.root'),
outputCommands = process.FEVTDEBUGEventContent.outputCommands,
splitLevel = cms.untracked.int32(0)
)
# Additional output definition
# Other statements
process.genstepfilter.triggerConditions=cms.vstring("generation_step")
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag = GlobalTag(process.GlobalTag, 'auto:phase2_realistic', '')
process.generator = cms.EDFilter("Pythia8HadronizerFilter",
PythiaParameters = cms.PSet(
parameterSets = cms.vstring('pythia8CommonSettings',
'pythia8CUEP8M1Settings',
'pythia8PowhegEmissionVetoSettings',
'processParameters'),
processParameters = cms.vstring('POWHEG:nFinal = 3'),
pythia8CUEP8M1Settings = cms.vstring('Tune:pp 14',
'Tune:ee 7',
'MultipartonInteractions:pT0Ref=2.4024',
'MultipartonInteractions:ecmPow=0.25208',
'MultipartonInteractions:expPow=1.6'),
pythia8CommonSettings = cms.vstring('Tune:preferLHAPDF = 2',
'Main:timesAllowErrors = 10000',
'Check:epTolErr = 0.01',
'Beams:setProductionScalesFromLHEF = off',
'SLHA:keepSM = on',
'SLHA:minMassSM = 1000.',
'ParticleDecays:limitTau0 = on',
'ParticleDecays:tau0Max = 10',
'ParticleDecays:allowPhotonRadiation = on'),
pythia8PowhegEmissionVetoSettings = cms.vstring('POWHEG:veto = 1',
'POWHEG:pTdef = 1',
'POWHEG:emitted = 0',
'POWHEG:pTemt = 0',
'POWHEG:pThard = 0',
'POWHEG:vetoCount = 100',
'SpaceShower:pTmaxMatch = 2',
'TimeShower:pTmaxMatch = 2')
),
comEnergy = cms.double(14000.0),
filterEfficiency = cms.untracked.double(1.0),
maxEventsToPrint = cms.untracked.int32(1),
pythiaHepMCVerbosity = cms.untracked.bool(False),
pythiaPylistVerbosity = cms.untracked.int32(1)
)
process.externalLHEProducer = cms.EDProducer("ExternalLHEProducer",
args = cms.vstring('/cvmfs/cms.cern.ch/phys_generator/gridpacks/slc6_amd64_gcc481/14TeV/powheg/V2/VBF_HZZ4L_NNPDF30_14TeV_M125_JHUGenV702/v2/VBF_HZZ4L_NNPDF30_14TeV_M125_JHUGenV702.tgz'),
nEvents = cms.untracked.uint32(100),
numberOfParameters = cms.uint32(1),
outputFile = cms.string('cmsgrid_final.lhe'),
scriptName = cms.FileInPath('GeneratorInterface/LHEInterface/data/run_generic_tarball_cvmfs.sh')
)
# Path and EndPath definitions
process.lhe_step = cms.Path(process.externalLHEProducer)
process.generation_step = cms.Path(process.pgen)
process.simulation_step = cms.Path(process.psim)
process.genfiltersummary_step = cms.EndPath(process.genFilterSummary)
process.endjob_step = cms.EndPath(process.endOfProcess)
process.FEVTDEBUGoutput_step = cms.EndPath(process.FEVTDEBUGoutput)
# Schedule definition
process.schedule = cms.Schedule(process.lhe_step,process.generation_step,process.genfiltersummary_step,process.simulation_step,process.endjob_step,process.FEVTDEBUGoutput_step)
# filter all path with the production filter sequence
for path in process.paths:
if path in ['lhe_step']: continue
getattr(process,path)._seq = process.generator * getattr(process,path)._seq
# Customisation from command line
# Add early deletion of temporary data products to reduce peak memory need
from Configuration.StandardSequences.earlyDeleteSettings_cff import customiseEarlyDelete
process = customiseEarlyDelete(process)
# End adding early deletion
inputDir = "VFPix/MonteCarlo/data/OT_Tilted_362_200_Pixel_4021_dropLargeRespace/"
fileNames =["pixbar.xml","pixelProdCuts.xml","pixelStructureTopology.xml","pixelsens.xml","pixfwd.xml","tracker.xml","trackerProdCuts.xml","trackerRecoMaterial.xml","trackerStructureTopology.xml","trackersens.xml","pixel.xml"]
for i in range (0, len (process.XMLIdealGeometryESSource.geomXMLFiles)):
xmlFile = process.XMLIdealGeometryESSource.geomXMLFiles[i]
fileName = xmlFile.split("/")[-1]
if fileName in fileNames:
process.XMLIdealGeometryESSource.geomXMLFiles[i] = inputDir + fileName
| [
"juliette.alimena@cern.ch"
] | juliette.alimena@cern.ch |
be72477f2c2a81bb266c758a9409a5ecc183c251 | 652121d51e6ff25aa5b1ad6df2be7eb341683c35 | /programs/e2proclst.py | 4ed2507cb5f9c3544cff4642cdfa9f78535076d1 | [] | no_license | jgalaz84/eman2 | be93624f1c261048170b85416e517e5813992501 | 6d3a1249ed590bbc92e25fb0fc319e3ce17deb65 | refs/heads/master | 2020-04-25T18:15:55.870663 | 2015-06-05T20:21:44 | 2015-06-05T20:21:44 | 36,952,784 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,932 | py | #!/usr/bin/env python
# This program performs simple processing of .LST files
# Author: Steven Ludtke, 10/06/14 (sludtke@bcm.edu)
# Copyright (c) 2014- Baylor College of Medicine
#
# This software is issued under a joint BSD/GNU license. You may use the
# source code in this file under either license. However, note that the
# complete EMAN2 and SPARX software packages have some GPL dependencies,
# so you are responsible for compliance with the licenses of these packages
# if you opt to use BSD licensing. The warranty disclaimer below holds
# in either instance.
#
# This complete copyright notice must be included in any revised version of the
# source code. Additional authorship citations may be added, but existing
# author citations must be preserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 2111-1307 USA
#
from EMAN2 import *
from math import *
import os
import sys
def main():
progname = os.path.basename(sys.argv[0])
usage = """Usage:\nproclst.py [options] <lst 1> <lst 2> ... \nSimple manipulations of LST files. If your goal is to produce an actual image file rather than the
sort of virtual stack represented by .lst files, use e2proc2d.py or e2proc3d.py instead. Those programs will treat LST files as normal image files for input.\n."""
parser = EMArgumentParser(usage=usage,version=EMANVERSION)
####################
# parser.add_argument("--average", action="store_true", help="Averages all input images (without alignment) and writes a single output image")
parser.add_argument("--merge",type=str,help="Specify the output name here. This will concatenate all of the input .lst files into a single output",default=None)
parser.add_argument("--create",type=str,help="Input files should be image files. Specify an .lst file to create here with references to all of the images in the inputs.")
parser.add_argument("--mergesort",type=str,help="Specify the output name here. This will merge all of the input .lst files into a single (resorted) output",default=None)
parser.add_argument("--retype",type=str,help="If a lst file is referencing a set of particles from particles/imgname__oldtype.hdf, this will change oldtype to the specified string in-place (modifies input files)",default=None)
parser.add_argument("--minlosnr",type=float,help="Integrated SNR from 1/200-1/20 1/A must be larger than this",default=0,guitype='floatbox', row=8, col=0)
parser.add_argument("--minhisnr",type=float,help="Integrated SNR from 1/10-1/4 1/A must be larger than this",default=0,guitype='floatbox', row=8, col=1)
parser.add_argument("--verbose", "-v", dest="verbose", action="store", metavar="n", type=int, help="verbose level [0-9], higher number means higher level of verboseness",default=1)
parser.add_argument("--ppid", type=int, help="Set the PID of the parent process, used for cross platform PPID",default=-1)
(options, args) = parser.parse_args()
if len(args)<1 :
parser.error("At least one lst file required")
sys.exit(1)
logid=E2init(sys.argv,options.ppid)
if options.create != None:
lst=LSXFile(options.create,False)
for f in args:
n=EMUtil.get_image_count(f)
if options.verbose : print "Processing {} images in {}".format(n,f)
for i in xrange(n):
lst.write(-1,i,f)
sys.exit(0)
if options.retype != None:
if options.minlosnr>0 or options.minhisnr>0 :
print "ERROR: --minlosnr and --minhisnr not compatible with --retype"
sys.exit(1)
# if the user provided the leading __ for us, we strip it off and add it back later
if options.retype[:2]=="__" :
options.retype=options.retype[2:]
for f in args:
if options.verbose : print "Processing ",f
lst=LSXFile(f,True)
a=lst.read(0)
if a[1][:10]!="particles/" :
print "To use the --retype option, the .lst file must reference image files in particles/*"
if options.verbose>1 :
b=base_name(a[1])
print "{} -> {}".format(a[1],b+"__"+options.retype+".hdf")
# loop over the images in the lst file
for i in xrange(len(lst)):
im=lst.read(i)
outname="particles/{}__{}.hdf".format(base_name(im[1]),options.retype)
lst.write(i,im[0],outname,im[2])
lst.normalize() # clean up at the end
if options.verbose>1 : print len(lst)," particles adjusted"
if options.verbose : print "Done processing {} files".format(len(args))
if options.merge!=None:
if options.minlosnr>0 or options.minhisnr>0 :
print "ERROR: --minlosnr and --minhisnr not compatible with --merge. Please use --mergesort instead."
sys.exit(1)
# create/update output lst
lsto=LSXFile(options.merge)
ntot=0
# loop over input files
for f in args:
lst=LSXFile(f,True)
ntot+=len(lst)
for i in xrange(len(lst)):
im=lst.read(i)
lsto.write(-1,im[0],im[1],im[2])
if options.verbose : print "{} particles added to {}".format(ntot,options.merge)
if options.mergesort!=None:
# create/update output lst
lsto=LSXFile(options.mergesort)
ntot=0
# loop over input files
ptcls=[]
pfiles=set()
for f in args:
lst=LSXFile(f,True)
ntot+=len(lst)
for i in xrange(len(lst)):
im=lst.read(i)
ptcls.append((im[1],im[0],im[2]))
pfiles.add(im[1])
ptcls.sort()
# remove particles in files not meeting our criteria
if options.minlosnr>0 or options.minhisnr>0 :
# the list conversion here is so we are iterating over a copy and not modifying the set while we iterate over it
for pfile in list(pfiles):
js=js_open_dict(info_name(pfile))
ctf=js["ctf"][0]
js.close()
r1=int(floor(1.0/(200.0*ctf.dsbg))) # lowsnr is 200-20 A
r2=int(ceil(1.0/(20.0*ctf.dsbg)))
r3=int(floor(1.0/(10.0*ctf.dsbg))) # hisnr is 10 to 4 A
r4=int(ceil(1.0/(4.0*ctf.dsbg)))
losnr=sum(ctf.snr[r1:r2])/(r2-r1)
hisnr=sum(ctf.snr[r3:r4])/(r4-r3)
if losnr<options.minlosnr or hisnr<options.minhisnr:
pfiles.remove(pfile)
if options.verbose: print pfile," removed due to SNR criteria"
nwrt=0
for i in ptcls:
if i[0] in pfiles :
lsto.write(-1,i[1],i[0],i[2])
nwrt+=1
if options.verbose :
if nwrt==ntot : print "{} particles in {}".format(ntot,options.mergesort)
else : print "{} of {} particles written to {}".format(nwrt,ntot,options.mergesort)
E2end(logid)
if __name__ == "__main__":
main()
| [
"jgalaz@gmail.com"
] | jgalaz@gmail.com |
ccc42f46382a7123266fc8d929e483370e32deca | 892efbbd60049f22c5e271a0e49f505e9f6029e1 | /doc/examples/plot_holes_and_peaks.py | a9c3a7fe4e8a26085f62c53f4856ad170652fcd5 | [
"BSD-3-Clause"
] | permissive | teoliphant/scikit-image | 95338caa2876f2c6360a8e164b7cc2d4127f2038 | d0415e6df475157705fd1ef2af69b16e4f7e38cc | refs/heads/master | 2020-12-30T19:23:04.806465 | 2012-11-02T23:02:06 | 2012-11-02T23:02:06 | 6,649,519 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,601 | py | """
===============================
Filling holes and finding peaks
===============================
In this example, we fill holes (i.e. isolated, dark spots) in an image using
morphological reconstruction by erosion. Erosion expands the minimal values of
the seed image until it encounters a mask image. Thus, the seed image and mask
image represent the maximum and minimum possible values of the reconstructed
image.
We start with an image containing both peaks and holes:
"""
import matplotlib.pyplot as plt
from skimage import data
from skimage.exposure import rescale_intensity
image = data.moon()
# Rescale image intensity so that we can see dim features.
image = rescale_intensity(image, in_range=(50, 200))
# convenience function for plotting images
def imshow(image, **kwargs):
plt.figure(figsize=(5, 4))
plt.imshow(image, **kwargs)
plt.axis('off')
imshow(image)
plt.title('original image')
"""
.. image:: PLOT2RST.current_figure
Now we need to create the seed image, where the minima represent the starting
points for erosion. To fill holes, we initialize the seed image to the maximum
value of the original image. Along the borders, however, we use the original
values of the image. These border pixels will be the starting points for the
erosion process. We then limit the erosion by setting the mask to the values
of the original image.
"""
import numpy as np
from skimage.morphology import reconstruction
seed = np.copy(image)
seed[1:-1, 1:-1] = image.max()
mask = image
filled = reconstruction(seed, mask, method='erosion')
imshow(filled, vmin=image.min(), vmax=image.max())
plt.title('after filling holes')
"""
.. image:: PLOT2RST.current_figure
As shown above, eroding inward from the edges removes holes, since (by
definition) holes are surrounded by pixels of brighter value. Finally, we can
isolate the dark regions by subtracting the reconstructed image from the
original image.
"""
imshow(image - filled)
plt.title('holes')
"""
.. image:: PLOT2RST.current_figure
Alternatively, we can find bright spots in an image using morphological
reconstruction by dilation. Dilation is the inverse of erosion and expands the
*maximal* values of the seed image until it encounters a mask image. Since this
is an inverse operation, we initialize the seed image to the minimum image
intensity instead of the maximum. The remainder of the process is the same.
"""
seed = np.copy(image)
seed[1:-1, 1:-1] = image.min()
rec = reconstruction(seed, mask, method='dilation')
imshow(image - rec)
plt.title('peaks')
plt.show()
"""
.. image:: PLOT2RST.current_figure
"""
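As a hedged aside (not part of the original gallery script): the reconstruction-by-erosion used above can be imitated naively in pure Python on a tiny list-of-lists image. `skimage.morphology.reconstruction` is the optimized routine to use in practice; this sketch only illustrates the idea of eroding a seed while never falling below the mask.

```python
def fill_holes_naive(image):
    """Naive grayscale reconstruction by erosion on a list-of-lists image."""
    h, w = len(image), len(image[0])
    peak = max(max(row) for row in image)
    # Seed: keep the original values on the border, start the interior at the max.
    out = [[image[i][j] if i in (0, h - 1) or j in (0, w - 1) else peak
            for j in range(w)] for i in range(h)]
    changed = True
    while changed:
        changed = False
        for i in range(h):
            for j in range(w):
                # Erode: minimum over the pixel and its 4-neighbourhood ...
                nb = [out[i][j]]
                if i > 0:
                    nb.append(out[i - 1][j])
                if i < h - 1:
                    nb.append(out[i + 1][j])
                if j > 0:
                    nb.append(out[i][j - 1])
                if j < w - 1:
                    nb.append(out[i][j + 1])
                # ... but never fall below the mask (the original image).
                v = max(min(nb), image[i][j])
                if v != out[i][j]:
                    out[i][j] = v
                    changed = True
    return out

tiny = [[9, 9, 9],
        [9, 0, 9],
        [9, 9, 9]]
filled_tiny = fill_holes_naive(tiny)  # the isolated dark centre is raised to 9
```

A dark region that touches the border is left unfilled, since the border of the seed keeps the original (dark) values.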
| [
"tsyu80@gmail.com"
] | tsyu80@gmail.com |
09efec9a6022b0570ad0e33b0dff86b355536217 | adc6d8ee596e4710c3241332758bb6990bdd8914 | /subDM/subNormDM/brilloContraste.py | fbeff8b5ec28d286245d03e284c4ef261b484cfc | [] | no_license | NatalyTinoco/Trabajo-de-grado_Artefactos | cf9491c47a8a23ce5bab7c52498093a61319f834 | 5cc4e009f94c871c7ed0d820eb113398ac66ec2f | refs/heads/master | 2022-03-20T00:51:48.420253 | 2019-11-24T19:10:40 | 2019-11-24T19:10:40 | 197,964,659 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,247 | py | # -*- coding: utf-8 -*-
"""
Created on Thu Jul 18 20:07:00 2019
@author: Nataly
"""
import numpy as np
def contraste(img):
    im11 = img
    arreglo = im11.shape
    total = arreglo[0] * arreglo[1]
    # First pass: mean intensity (the brightness).
    i = 0
    suma = 0
    while i < arreglo[0]:
        j = 0
        while j < arreglo[1]:
            suma = suma + im11[i, j]
            j += 1
        i += 1
    brillo = suma / total
    # Second pass: accumulate squared deviations from the mean,
    # resetting the accumulator first.
    i = 0
    suma = 0
    while i < arreglo[0]:
        j = 0
        while j < arreglo[1]:
            aux = im11[i, j] - brillo
            suma = suma + aux * aux
            j += 1
        i += 1
    # RMS contrast: square root of the mean squared deviation.
    cont = np.sqrt(suma / total)
    contraste = int(cont)
    # print("The contrast of the image is:", contraste)
    return contraste
return contraste
def brillo(img):
    im10 = img
    arreglo = im10.shape
    total = arreglo[0] * arreglo[1]
    i = 0
    suma = 0
    while i < arreglo[0]:
        j = 0
        while j < arreglo[1]:
            suma = suma + im10[i, j]
            j += 1
        i += 1
    brillo = suma / total
    brillo = int(brillo)
    # print("The brightness of the image is:", brillo)
    return brillo
| [
"51056570+NatalyTinoco@users.noreply.github.com"
] | 51056570+NatalyTinoco@users.noreply.github.com |
14415685f0b81759021a5a3f25a5cc9cb4e92344 | 79140b67cac1f5c8e3eb3ab3e7ad65a3a98866e8 | /test/evil.py | 4f413dc29670f310f6adc4af6df1abae8bf75cc3 | [] | no_license | dlovemore/bible | 63c1eceed4a919f7a6d2dfb76b6b084d05c49612 | 2594a2414a66c0abedd1278fef805415a8793f28 | refs/heads/master | 2021-01-03T07:17:45.527017 | 2020-05-16T17:54:18 | 2020-05-16T17:54:18 | 239,975,858 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 28 | py | >>> from mene import *
>>>
| [
"davidlovemore@gmail.com"
] | davidlovemore@gmail.com |
fb38f12c46f395528c451d5947b56f7e4c6fe7c6 | 353def93fa77384ee3a5e3de98cfed318c480634 | /.history/week01/homework02/maoyanspiders/maoyanspiders/pipelines_20200627225323.py | e51756f37474e463a0b362cab3f142dfb62b103a | [] | no_license | ydbB/Python001-class01 | d680abc3ea1ccaeb610751e3488421417d381156 | ad80037ccfc68d39125fa94d2747ab7394ac1be8 | refs/heads/master | 2022-11-25T11:27:45.077139 | 2020-07-19T12:35:12 | 2020-07-19T12:35:12 | 272,783,233 | 0 | 0 | null | 2020-06-16T18:28:15 | 2020-06-16T18:28:15 | null | UTF-8 | Python | false | false | 497 | py | # -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
class MaoyanspidersPipeline(object):
def process_item(self, item, spider):
films_name = item['films_name']
films_type = item['films_type']
release_time = item['release_time']
output = f'|{films_name}|\t|{films_type}|\t|{release_time}|\n\n'
with open('./w')
| [
"31039587+ydbB@users.noreply.github.com"
] | 31039587+ydbB@users.noreply.github.com |
dd8bb1c250b24c436a67fde4a1a0d51018deae0f | 1af44bdcbc3c15d3f6e436a7924dfd45f504ab3a | /01.jump to python/chpter 7/1_Regular_Expression/322_4.py | e7264c33ea0b3f9081a31f5eecfb71e933afd995 | [] | no_license | wql7654/bigdata_exam | f57c8b475690cbc5978009dbf8008bedff602e2a | c07ee711bb84407428ba31165185b9607b6825e8 | refs/heads/master | 2023-04-07T00:50:59.563714 | 2021-05-25T02:46:43 | 2021-05-25T02:46:43 | 180,915,985 | 0 | 0 | null | 2023-03-25T01:08:09 | 2019-04-12T02:36:08 | Jupyter Notebook | UTF-8 | Python | false | false | 491 | py | import re
file_name=["foo.bar","autoexec.bat","sendmail.cf","sandstrom.p"]
# Added condition: files with the 'bat' extension must be excluded
p = re.compile(".*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$")
for file in file_name:
m = p.search(file)
print(m)
    # Extension lengths of one to three characters are allowed.
    # By adding "?" so that extensions of two or more characters also match,
    # the requirement added in ver 2 (removing files with a 'bat' extension) is satisfied. | [
"studerande5@gmail.com"
] | studerande5@gmail.com |
6e4f75ac35fb9b664f40bc19a40e3bf93dc0da7b | 3bae1ed6460064f997264091aca0f37ac31c1a77 | /core/sort/sort.py | 46dd25711428d5aa0b51bd19a417355790d4ac24 | [] | no_license | racktivity/ext-pylabs-core | 04d96b80ac1942754257d59e91460c3a141f0a32 | 53d349fa6bee0ccead29afd6676979b44c109a61 | refs/heads/master | 2021-01-22T10:33:18.523799 | 2017-06-08T09:09:28 | 2017-06-08T09:09:28 | 54,314,984 | 0 | 0 | null | 2017-06-08T09:09:29 | 2016-03-20T11:55:01 | Python | UTF-8 | Python | false | false | 4,647 | py | # <License type="Sun Cloud BSD" version="2.2">
#
# Copyright (c) 2005-2009, Sun Microsystems, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or
# without modification, are permitted provided that the following
# conditions are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# 3. Neither the name Sun Microsystems, Inc. nor the names of other
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY SUN MICROSYSTEMS, INC. "AS IS" AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL SUN MICROSYSTEMS, INC. OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# </License>
from heapq import heapify, heappop, heappush
from itertools import islice, cycle
from tempfile import gettempdir
import os
import pylabs
def merge(chunks,key=None):
if key is None:
key = lambda x : x
values = []
for index, chunk in enumerate(chunks):
try:
iterator = iter(chunk)
value = iterator.next()
except StopIteration:
try:
chunk.close()
os.remove(chunk.name)
chunks.remove(chunk)
except:
pylabs.q.logger.log("StopIterationException", 5)
else:
heappush(values,((key(value),index,value,iterator,chunk)))
while values:
k, index, value, iterator, chunk = heappop(values)
yield value
try:
value = iterator.next()
except StopIteration:
try:
chunk.close()
os.remove(chunk.name)
chunks.remove(chunk)
except:
pylabs.q.logger.log("StopIterationException", 5)
else:
heappush(values,(key(value),index,value,iterator,chunk))
def batch_sort(input, output, header, key=None,buffer_size=32000,tempdirs=[]):
if not tempdirs:
tempdirs.append(gettempdir())
input_file = file(input,'rb',64*1024)
try:
input_iterator = iter(input_file)
chunks = []
try:
for tempdir in cycle(tempdirs):
current_chunk = list(islice(input_iterator,buffer_size))
if current_chunk:
current_chunk.sort(key=key)
output_chunk = file(os.path.join(tempdir,'%06i'%len(chunks)),'w+b',64*1024)
output_chunk.writelines(current_chunk)
output_chunk.flush()
output_chunk.seek(0)
chunks.append(output_chunk)
else:
break
except:
for chunk in chunks:
try:
chunk.close()
os.remove(chunk.name)
except:
pylabs.q.logger.log("StopIterationException", 5)
if output_chunk not in chunks:
try:
output_chunk.close()
os.remove(output_chunk.name)
except:
pylabs.q.logger.log("StopIterationException", 5)
return
finally:
input_file.close()
output_file = file(output,'wb',64*1024)
try:
output_file.write(header[0])
output_file.write(header[1])
output_file.write(header[2])
output_file.write(header[3])
output_file.write(header[4])
output_file.writelines(merge(chunks,key))
finally:
for chunk in chunks:
try:
chunk.close()
os.remove(chunk.name)
except:
pylabs.q.logger.log("StopIterationException", 5)
output_file.close() | [
"devnull@localhost"
] | devnull@localhost |
c283a8c68863c409076408a410ce155dddcf590e | 11a246743073e9d2cb550f9144f59b95afebf195 | /codeforces/793/a.py | 0905d62437f790cd23e8d77eb696802d02271c3b | [] | no_license | ankitpriyarup/online-judge | b5b779c26439369cedc05c045af5511cbc3c980f | 8a00ec141142c129bfa13a68dbf704091eae9588 | refs/heads/master | 2020-09-05T02:46:56.377213 | 2019-10-27T20:12:25 | 2019-10-27T20:12:25 | 219,959,932 | 0 | 1 | null | 2019-11-06T09:30:58 | 2019-11-06T09:30:57 | null | UTF-8 | Python | false | false | 268 | py | def main():
n, k = map(int, input().split())
a = list(map(int, input().split()))
target = min(a)
mods = set([x % k for x in a])
if len(mods) != 1:
print(-1)
return
ans = sum((x - target) // k for x in a)
print(ans)
main()
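# A hedged editorial sketch (not part of the original submission): each
# operation changes one element by exactly k, so an answer exists only when
# every element has the same residue modulo k, and the cheapest common target
# is the minimum element. The same logic as main(), repackaged as a function:

```python
def min_ops(a, k):
    # A solution exists only if all elements share one residue mod k.
    if len({x % k for x in a}) != 1:
        return -1
    m = min(a)
    # Lower every element to the minimum, one subtraction of k per step.
    return sum((x - m) // k for x in a)
```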
| [
"arnavsastry@gmail.com"
] | arnavsastry@gmail.com |
89beec6259029120fe9cf824913120a3783b0624 | 202f687add55894f77d88a84f1f7e84605301a0c | /4.scrapy框架/collectips_itemloader$$$/collectips/items.py | f10731bea40faca39daba4f78510340eecfef775 | [] | no_license | SuneastChen/python_crawler_learning | 6f8ef3b8409ad3c0a9ed900ccd89e7180df5a9bd | 6651177ef177231acd9638c39c809bb8f62d5df0 | refs/heads/master | 2020-03-07T22:45:39.515235 | 2018-04-02T13:53:55 | 2018-04-02T13:53:55 | 127,763,206 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 869 | py | # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, TakeFirst
import re
class IPItemLoader(ItemLoader):  # Subclass ItemLoader to customize processing
    default_output_processor = TakeFirst()  # By default, output the first extracted value
def re_speed(value):
    return re.search(r'\d+\.\d*', value).group()
class CollectipsItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
IP = scrapy.Field()
PORT = scrapy.Field()
POSITION = scrapy.Field(
input_processor=MapCompose(lambda x : x.strip(),))
TYPE = scrapy.Field()
SPEED = scrapy.Field(
input_processor=MapCompose(re_speed,))
LAST_CHECK_TIME = scrapy.Field()
| [
"1050521852@qq.com"
] | 1050521852@qq.com |
3fdd9076108fb49a93543a90c859c4488e5f0549 | 6fa701cdaa0d83caa0d3cbffe39b40e54bf3d386 | /google/cloud/talent/v4beta1/talent-v4beta1-py/google/cloud/talent_v4beta1/services/tenant_service/async_client.py | 9d5e489bae015155dbff5a92171f69c120ea6c73 | [
"Apache-2.0"
] | permissive | oltoco/googleapis-gen | bf40cfad61b4217aca07068bd4922a86e3bbd2d5 | 00ca50bdde80906d6f62314ef4f7630b8cdb6e15 | refs/heads/master | 2023-07-17T22:11:47.848185 | 2021-08-29T20:39:47 | 2021-08-29T20:39:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 24,545 | py | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import OrderedDict
import functools
import re
from typing import Dict, Sequence, Tuple, Type, Union
import pkg_resources
import google.api_core.client_options as ClientOptions # type: ignore
from google.api_core import exceptions as core_exceptions # type: ignore
from google.api_core import gapic_v1 # type: ignore
from google.api_core import retry as retries # type: ignore
from google.auth import credentials as ga_credentials # type: ignore
from google.oauth2 import service_account # type: ignore
from google.cloud.talent_v4beta1.services.tenant_service import pagers
from google.cloud.talent_v4beta1.types import tenant
from google.cloud.talent_v4beta1.types import tenant as gct_tenant
from google.cloud.talent_v4beta1.types import tenant_service
from .transports.base import TenantServiceTransport, DEFAULT_CLIENT_INFO
from .transports.grpc_asyncio import TenantServiceGrpcAsyncIOTransport
from .client import TenantServiceClient
class TenantServiceAsyncClient:
"""A service that handles tenant management, including CRUD and
enumeration.
"""
_client: TenantServiceClient
DEFAULT_ENDPOINT = TenantServiceClient.DEFAULT_ENDPOINT
DEFAULT_MTLS_ENDPOINT = TenantServiceClient.DEFAULT_MTLS_ENDPOINT
tenant_path = staticmethod(TenantServiceClient.tenant_path)
parse_tenant_path = staticmethod(TenantServiceClient.parse_tenant_path)
common_billing_account_path = staticmethod(TenantServiceClient.common_billing_account_path)
parse_common_billing_account_path = staticmethod(TenantServiceClient.parse_common_billing_account_path)
common_folder_path = staticmethod(TenantServiceClient.common_folder_path)
parse_common_folder_path = staticmethod(TenantServiceClient.parse_common_folder_path)
common_organization_path = staticmethod(TenantServiceClient.common_organization_path)
parse_common_organization_path = staticmethod(TenantServiceClient.parse_common_organization_path)
common_project_path = staticmethod(TenantServiceClient.common_project_path)
parse_common_project_path = staticmethod(TenantServiceClient.parse_common_project_path)
common_location_path = staticmethod(TenantServiceClient.common_location_path)
parse_common_location_path = staticmethod(TenantServiceClient.parse_common_location_path)
@classmethod
def from_service_account_info(cls, info: dict, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
info.
Args:
info (dict): The service account private key info.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
TenantServiceAsyncClient: The constructed client.
"""
return TenantServiceClient.from_service_account_info.__func__(TenantServiceAsyncClient, info, *args, **kwargs) # type: ignore
@classmethod
def from_service_account_file(cls, filename: str, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
file.
Args:
filename (str): The path to the service account private key json
file.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
TenantServiceAsyncClient: The constructed client.
"""
return TenantServiceClient.from_service_account_file.__func__(TenantServiceAsyncClient, filename, *args, **kwargs) # type: ignore
from_service_account_json = from_service_account_file
@property
def transport(self) -> TenantServiceTransport:
"""Returns the transport used by the client instance.
Returns:
TenantServiceTransport: The transport used by the client instance.
"""
return self._client.transport
get_transport_class = functools.partial(type(TenantServiceClient).get_transport_class, type(TenantServiceClient))
def __init__(self, *,
credentials: ga_credentials.Credentials = None,
transport: Union[str, TenantServiceTransport] = "grpc_asyncio",
client_options: ClientOptions = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
) -> None:
"""Instantiates the tenant service client.
Args:
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
transport (Union[str, ~.TenantServiceTransport]): The
transport to use. If set to None, a transport is chosen
automatically.
client_options (ClientOptions): Custom options for the client. It
won't take effect if a ``transport`` instance is provided.
(1) The ``api_endpoint`` property can be used to override the
default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT
environment variable can also be used to override the endpoint:
"always" (always use the default mTLS endpoint), "never" (always
use the default regular endpoint) and "auto" (auto switch to the
default mTLS endpoint if client certificate is present, this is
the default value). However, the ``api_endpoint`` property takes
precedence if provided.
(2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
is "true", then the ``client_cert_source`` property can be used
to provide client certificate for mutual TLS transport. If
not provided, the default SSL client certificate will be used if
present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
set, no client certificate will be used.
Raises:
google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport
creation failed for any reason.
"""
self._client = TenantServiceClient(
credentials=credentials,
transport=transport,
client_options=client_options,
client_info=client_info,
)
async def create_tenant(self,
request: tenant_service.CreateTenantRequest = None,
*,
parent: str = None,
tenant: gct_tenant.Tenant = None,
retry: retries.Retry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> gct_tenant.Tenant:
r"""Creates a new tenant entity.
Args:
request (:class:`google.cloud.talent_v4beta1.types.CreateTenantRequest`):
The request object. The Request of the CreateTenant
method.
parent (:class:`str`):
Required. Resource name of the project under which the
tenant is created.
The format is "projects/{project_id}", for example,
"projects/foo".
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
tenant (:class:`google.cloud.talent_v4beta1.types.Tenant`):
Required. The tenant to be created.
This corresponds to the ``tenant`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.talent_v4beta1.types.Tenant:
A Tenant resource represents a tenant
in the service. A tenant is a group or
entity that shares common access with
specific privileges for resources like
profiles. Customer may create multiple
tenants to provide data isolation for
different groups.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, tenant])
if request is not None and has_flattened_params:
raise ValueError("If the `request` argument is set, then none of "
"the individual field arguments should be set.")
request = tenant_service.CreateTenantRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if tenant is not None:
request.tenant = tenant
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_tenant,
default_timeout=30.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
("parent", request.parent),
)),
)
# Send the request.
response = await rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
async def get_tenant(self,
request: tenant_service.GetTenantRequest = None,
*,
name: str = None,
retry: retries.Retry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> tenant.Tenant:
r"""Retrieves specified tenant.
Args:
request (:class:`google.cloud.talent_v4beta1.types.GetTenantRequest`):
The request object. Request for getting a tenant by
name.
name (:class:`str`):
Required. The resource name of the tenant to be
retrieved.
The format is
"projects/{project_id}/tenants/{tenant_id}", for
example, "projects/foo/tenants/bar".
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.talent_v4beta1.types.Tenant:
A Tenant resource represents a tenant
in the service. A tenant is a group or
entity that shares common access with
specific privileges for resources like
profiles. Customer may create multiple
tenants to provide data isolation for
different groups.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError("If the `request` argument is set, then none of "
"the individual field arguments should be set.")
request = tenant_service.GetTenantRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_tenant,
default_retry=retries.Retry(
initial=0.1,maximum=60.0,multiplier=1.3, predicate=retries.if_exception_type(
core_exceptions.DeadlineExceeded,
core_exceptions.ServiceUnavailable,
),
deadline=30.0,
),
default_timeout=30.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
("name", request.name),
)),
)
# Send the request.
response = await rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
async def update_tenant(self,
request: tenant_service.UpdateTenantRequest = None,
*,
tenant: gct_tenant.Tenant = None,
retry: retries.Retry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> gct_tenant.Tenant:
r"""Updates specified tenant.
Args:
request (:class:`google.cloud.talent_v4beta1.types.UpdateTenantRequest`):
The request object. Request for updating a specified
tenant.
tenant (:class:`google.cloud.talent_v4beta1.types.Tenant`):
Required. The tenant resource to
replace the current resource in the
system.
This corresponds to the ``tenant`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.talent_v4beta1.types.Tenant:
A Tenant resource represents a tenant
in the service. A tenant is a group or
entity that shares common access with
specific privileges for resources like
profiles. Customer may create multiple
tenants to provide data isolation for
different groups.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([tenant])
if request is not None and has_flattened_params:
raise ValueError("If the `request` argument is set, then none of "
"the individual field arguments should be set.")
request = tenant_service.UpdateTenantRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if tenant is not None:
request.tenant = tenant
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_tenant,
default_timeout=30.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
("tenant.name", request.tenant.name),
)),
)
# Send the request.
response = await rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# Done; return the response.
return response
async def delete_tenant(self,
request: tenant_service.DeleteTenantRequest = None,
*,
name: str = None,
retry: retries.Retry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes specified tenant.
Args:
request (:class:`google.cloud.talent_v4beta1.types.DeleteTenantRequest`):
The request object. Request to delete a tenant.
name (:class:`str`):
Required. The resource name of the tenant to be deleted.
The format is
"projects/{project_id}/tenants/{tenant_id}", for
example, "projects/foo/tenants/bar".
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError("If the `request` argument is set, then none of "
"the individual field arguments should be set.")
request = tenant_service.DeleteTenantRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_tenant,
default_retry=retries.Retry(
initial=0.1, maximum=60.0, multiplier=1.3, predicate=retries.if_exception_type(
core_exceptions.DeadlineExceeded,
core_exceptions.ServiceUnavailable,
),
deadline=30.0,
),
default_timeout=30.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
("name", request.name),
)),
)
# Send the request.
await rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
async def list_tenants(self,
request: tenant_service.ListTenantsRequest = None,
*,
parent: str = None,
retry: retries.Retry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListTenantsAsyncPager:
r"""Lists all tenants associated with the project.
Args:
request (:class:`google.cloud.talent_v4beta1.types.ListTenantsRequest`):
The request object. List tenants for which the client
has ACL visibility.
parent (:class:`str`):
Required. Resource name of the project under which the
tenant is created.
The format is "projects/{project_id}", for example,
"projects/foo".
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.talent_v4beta1.services.tenant_service.pagers.ListTenantsAsyncPager:
The List tenants response object.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError("If the `request` argument is set, then none of "
"the individual field arguments should be set.")
request = tenant_service.ListTenantsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_tenants,
default_retry=retries.Retry(
initial=0.1, maximum=60.0, multiplier=1.3, predicate=retries.if_exception_type(
core_exceptions.DeadlineExceeded,
core_exceptions.ServiceUnavailable,
),
deadline=30.0,
),
default_timeout=30.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
("parent", request.parent),
)),
)
# Send the request.
response = await rpc(
request,
retry=retry,
timeout=timeout,
metadata=metadata,
)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListTenantsAsyncPager(
method=rpc,
request=request,
response=response,
metadata=metadata,
)
# Done; return the response.
return response
try:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
gapic_version=pkg_resources.get_distribution(
"google-cloud-talent",
).version,
)
except pkg_resources.DistributionNotFound:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()
__all__ = (
"TenantServiceAsyncClient",
)
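# Usage sketch for the async client above (hypothetical project and tenant
# names; assumes application-default credentials are configured):
#
#   from google.cloud import talent_v4beta1
#
#   async def rename_tenant():
#       client = talent_v4beta1.TenantServiceAsyncClient()
#       tenant = talent_v4beta1.Tenant(
#           name="projects/my-project/tenants/my-tenant",
#           external_id="my-tenant",
#       )
#       # Flattened argument form; passing `request=` together with `tenant=`
#       # would raise ValueError, as enforced in update_tenant() above.
#       return await client.update_tenant(tenant=tenant)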
# Lint as: python2, python3
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Machine translation decoder.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.compat as tf
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import attention
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import base_decoder
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import base_layer
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import batch_major_attention
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import layers
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import layers_with_attention
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import model_helper
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import plot
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import py_utils
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import quant_utils
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import rnn_cell
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import rnn_layers
from REDACTED.tensorflow_models.mlperf.models.rough.transformer_lingvo.lingvo.core import summary_utils
import six
from six.moves import zip
from REDACTED.tensorflow.python.ops import inplace_ops # pylint: disable=g-direct-tensorflow-import
@tf.Defun()
def AssertIdShape(expected_ids_shape_pattern, ids_shape, *args):
dependencies = [
py_utils.assert_shape_match(ids_shape, expected_ids_shape_pattern)
] + [py_utils.assert_shape_match(ids_shape, x_shape) for x_shape in args]
return py_utils.with_dependencies(dependencies, ids_shape)
class MTBaseDecoder(base_decoder.BaseBeamSearchDecoder):
"""Base class for Lingvo MT decoders."""
@classmethod
def Params(cls):
p = super(MTBaseDecoder, cls).Params()
p.Define('label_smoothing', None, 'Label smoothing class.')
p.Define('softmax', layers.SimpleFullSoftmax.Params(), 'Softmax params.')
p.Define(
'per_word_avg_loss', False, 'Compute loss averaged per word. If False '
'loss is computed averaged per sequence.')
p.Define('unidi_rnn_type', 'func', 'Options: func, native_cudnn. '
'func: FRNN, native_cudnn: CuDNNLSTM.')
p.Define('feed_attention_context_vec_to_softmax', False,
'Whether to concatenate attention context vector to rnn output'
' before softmax.')
p.Define('per_example_tensors', False, 'Return per example tensors')
# Default config for the softmax part.
p.softmax.num_classes = 32000 # 32k
p.softmax.num_shards = 8
return p
@base_layer.initializer
def __init__(self, params):
super(MTBaseDecoder, self).__init__(params)
p = self.params
if p.label_smoothing is not None:
p.label_smoothing.name = 'smoother'
p.label_smoothing.num_classes = p.softmax.num_classes
self.CreateChild('smoother', p.label_smoothing)
@classmethod
def UpdateTargetVocabSize(cls, p, vocab_size, wpm_model=None):
"""Sets the params with the given vocab size and wpm model.
Args:
p: model params.
vocab_size: size of the vocabulary.
wpm_model: file name prefix pointing to a wordpiece model.
Returns:
Model params updated with the vocab size and wpm model.
"""
p.softmax.num_classes = vocab_size
return p
def _FPropSoftmax(self,
theta,
softmax_input,
target_labels,
target_weights,
target_paddings,
target_segment_ids=None,
time_axis=0):
"""Computes cross-entropy loss given the softmax input, labels and weights.
Args:
theta: A `.NestedMap` object containing weights' values of this
layer and its children layers.
softmax_input: A tensor of shape [time, batch, p.softmax.input_dim].
target_labels: A matrix of tf.int32. [time, batch].
target_weights: A matrix of params.dtype. [time, batch].
target_paddings: A matrix of params.dtype. [time, batch].
target_segment_ids: A matrix of params.dtype. [time, batch].
time_axis: If 0, the inputs are time-major: [time, batch, ...]; if 1, the
inputs are batch-major: [batch, time, ...].
Returns:
A tuple (metrics, per_example_tensors).
metrics:
A dictionary containing metrics for the xent loss and prediction
accuracy.
per_example_tensors:
A dictionary of per-example tensors.
"""
p = self.params
softmax_input = tf.reshape(softmax_input, [-1, p.softmax.input_dim])
if p.label_smoothing is None:
xent_loss = self.softmax.FProp(
theta.softmax, [softmax_input],
class_weights=tf.reshape(target_weights, [-1, 1]),
class_ids=tf.reshape(target_labels, [-1, 1]))
else:
# [time, batch, num_classes]
if time_axis == 0:
target_probs = tf.transpose(
self.smoother.FProp(
theta.smoother,
tf.transpose(target_paddings),
tf.transpose(target_labels),
target_ids=None), [1, 0, 2])
else:
target_probs = self.smoother.FProp(
theta.smoother, target_paddings, target_labels, target_ids=None)
xent_loss = self.softmax.FProp(
theta.softmax, [softmax_input],
class_weights=tf.reshape(target_weights, [-1, 1]),
class_probabilities=tf.reshape(target_probs,
[-1, p.softmax.num_classes]))
if p.per_word_avg_loss:
final_loss = tf.identity(xent_loss.avg_xent, name='loss')
loss_weight = tf.identity(xent_loss.total_weight, name='num_predictions')
else:
# NOTE: Per-sequence loss is the sum of each example's loss. The
# final loss for a training batch is the mean loss of sequences in
# the batch.
# [time, batch]
per_example_loss = tf.reshape(xent_loss.per_example_xent,
py_utils.GetShape(target_weights))
per_sequence_loss = tf.reduce_sum(
per_example_loss * target_weights, axis=time_axis)
if p.packed_input:
assert target_segment_ids is not None, (
'Need target segment ids for '
'normalizing loss when training with packed inputs.')
num_samples = tf.cast(
tf.reduce_sum(
tf.reduce_max(target_segment_ids, 0) -
tf.reduce_min(target_segment_ids, 0) + 1),
dtype=per_sequence_loss.dtype)
final_loss = tf.reduce_sum(per_sequence_loss) / num_samples
else:
final_loss = tf.reduce_mean(per_sequence_loss)
loss_weight = py_utils.GetShape(per_sequence_loss)[0]
metrics = {
'loss': (final_loss, loss_weight),
'log_pplx': (xent_loss.avg_xent, xent_loss.total_weight),
}
per_example_tensors = {}
if p.per_example_tensors:
per_example_tensors['per_example_loss'] = tf.reshape(
xent_loss.per_example_xent, py_utils.GetShape(target_weights))
per_example_tensors['per_sequence_loss'] = tf.reduce_sum(
per_example_tensors['per_example_loss'] * target_weights,
axis=time_axis)
per_example_tensors['loss'] = per_example_tensors['per_sequence_loss']
per_example_tensors['logits'] = tf.reshape(
xent_loss.logits,
tf.concat([py_utils.GetShape(target_weights), [-1]], 0))
per_example_tensors['log_probs'] = tf.reshape(
xent_loss.log_probs,
tf.concat([py_utils.GetShape(target_weights), [-1]], 0))
# NOTE: tf.argmax is not implemented for the JF backend, see b/36093673
# Skip the fraction_of_correct_next_step_preds during training.
if self.do_eval:
logits = xent_loss.logits
correct_preds = tf.cast(
tf.equal(
tf.cast(tf.reshape(tf.argmax(logits, 1), [-1]), tf.int32),
tf.reshape(target_labels, [-1])), p.dtype)
correct_next_preds = tf.reduce_sum(
correct_preds * tf.reshape(tf.cast(target_weights, p.dtype), [-1]))
num_preds = tf.reduce_sum(tf.cast(target_weights, p.dtype))
accuracy = tf.identity(
correct_next_preds / num_preds,
name='fraction_of_correct_next_step_preds')
metrics['fraction_of_correct_next_step_preds'] = (accuracy, num_preds)
return metrics, per_example_tensors
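# Worked sketch of the per-sequence branch above (illustrative numbers only,
# not taken from any real run):
#   per_example_loss  = [[0.5, 1.0],    # shape [time=2, batch=2]
#                        [0.25, 0.0]]
#   target_weights    = [[1.0, 1.0],
#                        [1.0, 0.0]]
#   per_sequence_loss = reduce_sum(per_example_loss * target_weights, axis=0)
#                     = [0.75, 1.0]
#   final_loss        = mean(per_sequence_loss) = 0.875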
def ComputeLoss(self, theta, predictions, targets):
"""Populates a metrics dictionary based on the output of ComputePredictions.
Args:
theta: Nested map describing decoder model parameters.
predictions: NestedMap describing the decoding process, requiring:
.softmax_input: Tensor of shape [time, batch, params.softmax.input_dim].
targets: NestedMap describing the target sequences.
Returns:
Two dicts.
- A map from metric name (a python string) to a tuple (value, weight).
Both value and weight are scalar Tensors.
- A map from name to arbitrary tensors, where the first dimension must
be the batch index.
"""
segment_id = None
if self.params.packed_input:
segment_id = tf.transpose(targets.segment_ids)
if isinstance(predictions, py_utils.NestedMap):
predictions = predictions.softmax_input
return self._FPropSoftmax(theta, predictions, tf.transpose(targets.labels),
tf.transpose(targets.weights),
tf.transpose(targets.paddings), segment_id)
def _TruncateTargetSequence(self, targets):
"""Truncate padded time steps from all sequences."""
# The following tensors are all in the [batch, time] shape.
# Let's make a copy of targets.
targets = targets.Pack(targets.Flatten())
target_ids = targets.ids
target_labels = targets.labels
target_weights = targets.weights
target_paddings = targets.paddings
max_seq_length = tf.cast(
tf.round(tf.reduce_max(tf.reduce_sum(1.0 - target_paddings, 1))),
tf.int32)
summary_utils.scalar('max_seq_length', max_seq_length)
# Assert to make sure after max_seq_length, all are padded steps for all
# sequences.
target_paddings = py_utils.with_dependencies([
py_utils.assert_equal(
tf.constant(True, tf.bool),
tf.reduce_all(target_paddings[:, max_seq_length:] > 0.5))
], target_paddings)
target_ids = py_utils.with_dependencies([
AssertIdShape(
py_utils.GetShape(target_ids), py_utils.GetShape(target_labels),
py_utils.GetShape(target_paddings),
py_utils.GetShape(target_weights))
], target_ids)
targets.ids = target_ids[:, :max_seq_length]
targets.labels = target_labels[:, :max_seq_length]
targets.weights = target_weights[:, :max_seq_length]
targets.paddings = target_paddings[:, :max_seq_length]
return targets
def _AddAttenProbsSummary(self, source_paddings, targets, atten_probs):
"""Add summary of attention probs.
Args:
source_paddings: source padding, of shape [src_len, src_batch].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [tgt_batch, tgt_len].
atten_probs: a list of attention probs, each element is of shape [tgt_len,
tgt_batch, src_len].
"""
if not self.cluster.add_summary:
return
self._AddAttenProbsImageSummary(source_paddings, targets, atten_probs)
self._AddAttenProbsHistogramSummary(atten_probs)
def _AddAttenProbsHistogramSummary(self, atten_probs):
"""Add histogram summary of attention probs.
Args:
atten_probs: a list of attention probs, each element is of shape [tgt_len,
tgt_batch, src_len].
"""
for i, probs in enumerate(atten_probs):
# a prefix from the context will be used, which looks like
# fprop/wmt14_en_de_transformer/tower_0_0/dec/
summary_utils.histogram('atten{}'.format(i + 1), probs)
def _AddAttenProbsImageSummary(self, source_paddings, targets, atten_probs):
"""Add image summary of attention probs.
Args:
source_paddings: source padding, of shape [src_len, src_batch].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [tgt_batch, tgt_len].
atten_probs: a list of attention probs, each element is of shape [tgt_len,
tgt_batch, src_len].
"""
def PlotAttention(fig, axes, cur_atten_probs, title, set_x_label):
plot.AddImage(fig, axes, cur_atten_probs, title=title)
axes.set_ylabel(plot.ToUnicode('Output sequence index'), wrap=True)
if set_x_label:
axes.set_xlabel(plot.ToUnicode('Input sequence index'), wrap=True)
index = 0
srclen = tf.cast(
tf.round(tf.reduce_sum(1 - source_paddings[:, index])), tf.int32)
tgtlen = tf.cast(
tf.round(tf.reduce_sum(1 - targets.paddings[index, :])), tf.int32)
num_rows = len(atten_probs)
with plot.MatplotlibFigureSummary(
'decoder_example',
figsize=(6, 3 * num_rows),
max_outputs=1,
subplot_grid_shape=(num_rows, 1)) as fig:
for i, probs in enumerate(atten_probs):
# Extract first entry in batch of attention prob matrices
# [tgt_len, src_len]
probs = probs[:, index, :]
probs = tf.expand_dims(probs[:tgtlen, :srclen], 0)
fig.AddSubplot([probs],
PlotAttention,
title='atten_probs_%d' % i,
set_x_label=(i == len(atten_probs) - 1))
def _ExpandToNumHyps(self, source_enc_len, num_hyps_per_beam):
"""Repeat each value according to num hyps.
Args:
source_enc_len: source encoder length; int [batch].
num_hyps_per_beam: number of hypotheses
Returns:
New version of source_enc_len; int [batch * num_hyps_per_beam].
Target_batch is (num_hyps_per_beam * batch).
Example: src_enc_len = [3, 2, 1] and num_hyps_per_beam = 2
--> [3, 2, 1, 3, 2, 1]
"""
x = tf.tile(input=source_enc_len, multiples=[num_hyps_per_beam])
return x
class MTDecoderV1(MTBaseDecoder, quant_utils.QuantizableLayer):
"""MT decoder v1."""
@classmethod
def Params(cls):
p = super(MTDecoderV1, cls).Params()
# Shared embedding.
p.Define('emb', layers.EmbeddingLayer.Params(), 'Embedding layer params.')
p.Define('source_dim', 1024, 'Dimension of the source encoding.')
p.Define('attention', attention.AdditiveAttention.Params(),
'Additive attention params.')
p.Define('atten_rnn_cell_tpl', rnn_cell.LSTMCellSimple.Params(),
'Attention RNNCell params template.')
p.Define('rnn_cell_tpl', rnn_cell.LSTMCellSimple.Params(),
'RNNCell params template.')
p.Define('rnn_cell_dim', 1024, 'size of the rnn cells.')
p.Define('rnn_layers', 8, 'Number of rnn layers.')
p.Define('residual_start', 2, 'Start residual connections from this layer.')
p.Define('atten_rnn_cls', rnn_layers.FRNNWithAttention,
'Which atten rnn cls to use.')
p.Define('use_prev_atten_ctx', False,
'If True, all decoder layers use previous attention context as '
'input. Otherwise, only first decoder layer uses previous '
'attention context and the rest of the layers use current '
'attention context.')
p.Define('dropout_prob', 0.0, 'Prob at which we do dropout.')
# Default value was mildly tuned. Could be further tuned in the future.
p.Define('qlogsoftmax_range_min', -10.0, 'Quantization of the output of '
'log softmax.')
p.Define(
'use_zero_atten_state', False, 'To use zero attention state '
'instead of computing attention with zero query vector.')
p.Define('cc_schedule', None, 'Clipping cap schedule.')
p.Define(
'init_step_ids', False,
'Initializes beam search with first target id instead of <s>.'
'Use this when decoding starts with target_lang id instead of <s> '
'token at time step 0. Make sure the training data has '
'target_lang id as the first token in target sequence.')
disable_vn = py_utils.VariationalNoiseParams(1.0, False, False)
default_params_init = py_utils.WeightInit.Uniform(0.04)
# Default config for the embedding.
p.emb.vn = disable_vn
p.emb.vocab_size = 32000
p.emb.embedding_dim = 1024
p.emb.max_num_shards = 16
p.emb.params_init = default_params_init
# Default config for the attention model.
p.attention.vn = disable_vn
p.attention.hidden_dim = 1024
p.attention.params_init = None # Filled in after dims are known.
# Default config for the attention rnn cell.
p.atten_rnn_cell_tpl.vn = disable_vn
p.atten_rnn_cell_tpl.params_init = default_params_init
# Default config for the rnn cell.
p.rnn_cell_tpl.vn = disable_vn
p.rnn_cell_tpl.params_init = default_params_init
# Default config for the softmax part.
p.softmax.vn = disable_vn
p.softmax.num_classes = 32000 # 32k
p.softmax.num_shards = 16
p.softmax.params_init = default_params_init
# Default config for beam search.
p.target_seq_len = 300
p.beam_search.length_normalization = 0.2
p.beam_search.coverage_penalty = 0.2
return p
@classmethod
def UpdateTargetVocabSize(cls, p, vocab_size, wpm_model=None):
"""Updates the params with the input vocab_size and WPM model.
Args:
p: model params.
vocab_size: size of the vocabulary.
wpm_model: file name prefix pointing to a wordpiece model.
Returns:
Model params updated with the vocab size and wpm model.
"""
p = super(MTDecoderV1, cls).UpdateTargetVocabSize(p, vocab_size)
p.emb.vocab_size = vocab_size
return p
@base_layer.initializer
def __init__(self, params):
super(MTDecoderV1, self).__init__(params)
p = self.params
assert p.emb.vocab_size == p.softmax.num_classes
with tf.variable_scope(p.name):
if p.cc_schedule is None:
self.cc_schedule = None
else:
self.CreateChild('cc_schedule', p.cc_schedule)
if py_utils.use_tpu():
emb_device = self.cluster.WorkerDeviceInModelSplit(0)
else:
emb_device = ''
with tf.device(emb_device):
self.CreateChild('emb', p.emb)
p.attention.dtype = p.dtype
p.attention.source_dim = p.source_dim
p.attention.query_dim = p.rnn_cell_dim
p.attention.packed_input = p.packed_input
if p.attention.params_init is None:
p.attention.params_init = py_utils.WeightInit.Gaussian(
1. / math.sqrt(p.attention.source_dim + p.attention.query_dim),
seed=p.random_seed)
atten_params = p.attention.Copy()
params = p.atten_rnn_cell_tpl.Copy()
params.name = 'atten_rnn'
params.dtype = p.dtype
params.reset_cell_state = p.packed_input
params.num_input_nodes = p.emb.embedding_dim + p.attention.source_dim
params.num_output_nodes = p.rnn_cell_dim
atten_rnn_cell = params.Copy()
params = p.atten_rnn_cls.Params()
params.name = 'frnn_with_atten'
params.dtype = p.dtype
params.cell = atten_rnn_cell
params.attention = atten_params
params.output_prev_atten_ctx = p.use_prev_atten_ctx
params.packed_input = p.packed_input
params.use_zero_atten_state = p.use_zero_atten_state
params.atten_context_dim = p.attention.source_dim
self.CreateChild('frnn_with_atten', params)
# TODO(zhifengc): Avoid this?
self._atten = self.frnn_with_atten.attention
rnn_layers_params = []
for i in range(1, p.rnn_layers):
params = p.rnn_cell_tpl.Copy()
params.name = 'rnn%d' % i
params.dtype = p.dtype
params.num_input_nodes = p.rnn_cell_dim + p.attention.source_dim
params.num_output_nodes = p.rnn_cell_dim
params.reset_cell_state = p.packed_input
rnn_cell_p = params
params = model_helper.CreateUnidirectionalRNNParams(
self.params, rnn_cell_p)
params.name = 'frnn%d' % i
params.packed_input = p.packed_input
rnn_layers_params.append(params)
self.CreateChildren('frnn', rnn_layers_params)
p.softmax.dtype = p.dtype
if p.feed_attention_context_vec_to_softmax:
p.softmax.input_dim = p.rnn_cell_dim + p.attention.source_dim
else:
p.softmax.input_dim = p.rnn_cell_dim
self.CreateChild('softmax', p.softmax)
def ApplyDropout(self, x_in):
p = self.params
assert 0 <= p.dropout_prob < 1.0
if self.do_eval or p.dropout_prob == 0.0:
return x_in
else:
return tf.nn.dropout(x_in, rate=p.dropout_prob)
def ApplyClipping(self, theta, x):
if self.cc_schedule:
return self.cc_schedule.ApplyClipping(theta.cc_schedule, x)
else:
return x
@py_utils.NameScopeDecorator('MTDecoderV1/ComputePredictions')
def ComputePredictions(self, theta, encoder_outputs, targets):
"""Decodes `targets` given encoded source.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder. Expected to contain:
encoded - source encoding, of shape [time, batch, depth].
padding - source encoding's padding, of shape [time, batch].
segment_id - (optional) source segment id, of shape [time, batch].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [batch, time].
Returns:
A `.NestedMap` containing information about the decoding process. At a
minimum, this should contain:
softmax_input: Tensor of shape [time, batch, params.softmax.input_dim].
attention: `.NestedMap` of attention distributions of shape [batch,
time, source_len].
source_enc_len: Lengths of source sentences. Tensor of shape [batch].
"""
p = self.params
source_paddings = encoder_outputs.padding
time, batch = py_utils.GetShape(source_paddings, 2)
source_encs = py_utils.HasShape(encoder_outputs.encoded,
[time, batch, p.source_dim])
with tf.name_scope(p.name):
target_ids = tf.transpose(targets.ids)
target_paddings = py_utils.HasRank(targets.paddings, 2)
target_paddings = tf.expand_dims(tf.transpose(target_paddings), 2)
if p.packed_input:
target_segment_id = tf.expand_dims(tf.transpose(targets.segment_ids), 2)
else:
target_segment_id = tf.zeros_like(target_paddings)
if py_utils.use_tpu():
emb_device = self.cluster.WorkerDeviceInModelSplit(0)
else:
emb_device = ''
with tf.device(emb_device):
inputs = self.emb.EmbLookup(theta.emb, target_ids)
inputs = self.ApplyClipping(theta, inputs)
summary_utils.histogram('input_emb', inputs)
inputs = self.ApplyDropout(inputs)
self._emb_out = inputs
# Layer 0 intertwines with attention.
(accumulated_states, _,
side_info) = self.frnn_with_atten.AccumulateStates(
theta.frnn_with_atten,
source_encs,
source_paddings,
inputs,
target_paddings,
src_segment_id=getattr(encoder_outputs, 'segment_id', None),
segment_id=target_segment_id)
(atten_ctxs, xs, atten_probs) = self.frnn_with_atten.PostProcessStates(
accumulated_states, side_info)
self._AddAttenProbsSummary(source_paddings, targets, [atten_probs])
atten_ctxs = self.ApplyClipping(theta, atten_ctxs)
summary_utils.histogram('atten_ctxs', atten_ctxs)
for i, (layer, layer_theta) in enumerate(zip(self.frnn, theta.frnn)):
# Forward through Layer-(i + 1) because Layer-0 handled before.
ys, _ = layer.FProp(
layer_theta,
tf.concat([xs, atten_ctxs], 2),
target_paddings,
segment_id=target_segment_id)
ys = self.ApplyDropout(ys)
if 1 + i >= p.residual_start:
xs += ys # Residual skip
xs = self.ApplyClipping(theta, xs)
else:
xs = ys
summary_utils.histogram('layer_out_%s' % i, xs)
if p.feed_attention_context_vec_to_softmax:
xs = tf.concat([xs, atten_ctxs], 2)
# Get intermediate attention information
atten_states = accumulated_states.atten_state
if isinstance(atten_states, py_utils.NestedMap):
additional_atten_probs = sorted(
[(name, tensor)
for name, tensor in atten_states.FlattenItems()
if name.endswith('probs')])
else:
additional_atten_probs = []
attention_map = py_utils.NestedMap(probs=accumulated_states.atten_probs)
attention_map.update(additional_atten_probs)
# Transpose attention probs from [target_length, batch, source_length]
# to [batch, target_length, source_length]
def _TransposeAttentions(x):
return tf.transpose(x, [1, 0, 2])
attention_map = attention_map.Transform(_TransposeAttentions)
if isinstance(source_paddings, tf.Tensor):
source_enc_len = tf.reduce_sum(1 - source_paddings, axis=0)
return py_utils.NestedMap(
softmax_input=xs,
attention=attention_map,
source_enc_len=source_enc_len)
def AddExtraDecodingInfo(self, encoder_outputs, targets):
"""Adds extra decoding information to encoded_outputs.
Args:
encoder_outputs: a NestedMap computed by encoder.
targets: a NestedMap containing target input fields.
Returns:
encoder_outputs with extra information used for decoding.
"""
p = self.params
if p.init_step_ids:
encoder_outputs['init_step_ids'] = targets.ids[:, 0]
return encoder_outputs
@py_utils.NameScopeDecorator('MTDecoderV1/InitDecoder')
def _InitDecoder(self, theta, encoder_outputs, num_hyps):
"""Returns initial decoder states.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder.
num_hyps: Scalar Tensor of type int, Number of hypothesis maintained in
beam search, equal to beam_size * num_hyps_per_beam.
Returns:
Tuple of initial model states. Also inserts 'packed_src' to
'encoder_outputs'.
"""
p = self.params
source_paddings = encoder_outputs.padding
time, batch = py_utils.GetShape(source_paddings, 2)
source_encs = py_utils.HasShape(encoder_outputs.encoded,
[time, batch, p.source_dim])
rnn_states = [
self.frnn_with_atten.cell.zero_state(theta.frnn_with_atten.cell,
num_hyps)
]
for layer, layer_theta in zip(self.frnn, theta.frnn):
rnn_states.append(layer.rnn_cell.zero_state(layer_theta, num_hyps))
if p.use_zero_atten_state:
encoder_outputs.packed_src = self._atten.InitForSourcePacked(
theta.frnn_with_atten.atten, source_encs, source_encs,
source_paddings)
s_seq_len = tf.shape(source_encs)[0]
context_dim = tf.shape(source_encs)[2]
atten_context = tf.zeros([num_hyps, context_dim], dtype=source_encs.dtype)
atten_states = self._atten.ZeroAttentionState(s_seq_len, num_hyps)
atten_probs = tf.zeros([num_hyps, s_seq_len], dtype=source_encs.dtype)
else:
encoder_outputs.packed_src = self._atten.InitForSourcePacked(
theta.frnn_with_atten.atten, source_encs, source_encs,
source_paddings)
src_seq_len = tf.shape(source_encs)[0]
zero_atten_state = self._atten.ZeroAttentionState(src_seq_len, num_hyps)
(atten_context, atten_probs,
atten_states) = self._atten.ComputeContextVectorWithSource(
theta.frnn_with_atten.atten,
encoder_outputs.packed_src,
tf.zeros([num_hyps, p.rnn_cell_dim], dtype=py_utils.FPropDtype(p)),
attention_state=zero_atten_state)
assert atten_states is not None
return rnn_states, atten_context, atten_probs, atten_states
@py_utils.NameScopeDecorator('MTDecoderV1/DecodeStep')
def _DecodeStep(self, theta, encoder_outputs, embs, step_paddings,
prev_atten_context, rnn_states, prev_atten_states):
"""Decode one step."""
p = self.params
new_rnn_states = []
new_rnn_states_0, _ = self.frnn_with_atten.cell.FProp(
theta.frnn_with_atten.cell, rnn_states[0],
py_utils.NestedMap(
act=[tf.concat([embs, prev_atten_context], 1)],
padding=step_paddings,
reset_mask=tf.ones_like(step_paddings)))
new_rnn_states.append(new_rnn_states_0)
rnn_out = self.frnn_with_atten.cell.GetOutput(new_rnn_states_0)
cur_atten_context, atten_probs, atten_states = (
self._atten.ComputeContextVectorWithSource(
theta.frnn_with_atten.atten,
encoder_outputs.packed_src,
rnn_out,
attention_state=prev_atten_states))
assert atten_states is not None
if p.use_prev_atten_ctx:
atten_context = prev_atten_context
else:
atten_context = cur_atten_context
for i, (layer, layer_theta) in enumerate(zip(self.frnn, theta.frnn)):
new_rnn_states_i, _ = layer.rnn_cell.FProp(
layer_theta.cell, rnn_states[1 + i],
py_utils.NestedMap(
act=[tf.concat([rnn_out, atten_context], 1)],
padding=step_paddings,
reset_mask=tf.ones_like(step_paddings)))
new_rnn_states.append(new_rnn_states_i)
new_rnn_out = layer.rnn_cell.GetOutput(new_rnn_states_i)
if 1 + i >= p.residual_start:
rnn_out += new_rnn_out
rnn_out = self.ApplyClipping(theta, rnn_out)
else:
rnn_out = new_rnn_out
# Concatenating the atten_context vector to the rnn output before the
# softmax may improve quality.
if p.feed_attention_context_vec_to_softmax:
step_out = tf.concat([rnn_out, atten_context], 1)
else:
step_out = rnn_out
return (cur_atten_context, atten_probs, new_rnn_states, step_out,
atten_states)
def _GetAttentionInitState(self):
"""Gets the attention initialization state.
It is valid to call this after `_InitDecoder()`. Inference subclasses use
this to split computation across subgraph boundaries.
Returns:
`.NestedMap` of attention source states.
"""
return self._atten.GetInitializationSourceState()
def _SetAttentionInitState(self, new_init_state):
"""Sets the attention initialization state.
Args:
new_init_state: `.NestedMap` compatible with that returned from
`_GetAttentionInitState`.
"""
self._atten.SetInitializationSourceState(new_init_state)
def _InitBeamSearchStateCallback(self, theta, encoder_outputs,
num_hyps_per_beam):
"""Returns initial beams search states.
Args:
theta: a NestedMap of parameters.
encoder_outputs: a NestedMap computed by encoder.
num_hyps_per_beam: An int, the number of hyps to keep per source sentence.
Returns:
A tuple (initial_results, states).
initial_results: a `.NestedMap` of initial results.
atten_probs:
The initial attention probs, of shape [tgt_batch, src_len].
states: a `.NestedMap` of initial model states.
rnn_states:
Initial state of the RNN.
atten_context:
Initial attention context vector.
atten_states:
Initial attention state.
"""
p = self.params
num_beams = py_utils.GetShape(encoder_outputs.padding)[1]
num_hyps = num_beams * num_hyps_per_beam
rnn_states, init_atten_context, atten_probs, atten_states = (
self._InitDecoder(theta, encoder_outputs, num_hyps))
initial_results = py_utils.NestedMap(
log_probs=tf.zeros([num_hyps, p.softmax.num_classes],
dtype=py_utils.FPropDtype(p)),
atten_probs=atten_probs)
if p.init_step_ids and hasattr(encoder_outputs, 'init_step_ids'):
initial_results['step_ids'] = tf.expand_dims(
self._ExpandToNumHyps(encoder_outputs.init_step_ids,
num_hyps_per_beam), 1)
return initial_results, py_utils.NestedMap({
'time_step': tf.constant(0),
'rnn_states': rnn_states,
'atten_context': init_atten_context,
'atten_probs': atten_probs,
'atten_states': atten_states,
})
@py_utils.NameScopeDecorator('MTDecoderV1/PreBeamSearchStepCallback')
def _PreBeamSearchStepCallback(self, theta, encoder_outputs, step_ids, states,
num_hyps_per_beam):
"""Returns logits for sampling ids and the next model states.
Args:
theta: a NestedMap of parameters.
encoder_outputs: a NestedMap computed by encoder.
step_ids: A tensor of shape [tgt_batch, 1].
states: A `.NestedMap` of tensors representing states that the clients
would like to keep track of for each of the active hyps.
num_hyps_per_beam: Beam size.
Returns:
A tuple (results, out_states).
results: A `.NestedMap` of beam search results.
atten_probs:
The updated attention probs, of shape [tgt_batch, src_len].
log_probs:
Log prob for each of the tokens in the target vocab. This is of shape
[tgt_batch, vocab_size].
out_states: A `.NestedMap`. The updated states.
rnn_states:
Last state of the RNN.
atten_context:
Updated attention context vector.
atten_states:
Updated attention states.
"""
p = self.params
prev_rnn_states = states['rnn_states']
prev_atten_context = states['atten_context']
prev_atten_probs = states['atten_probs']
prev_atten_states = states['atten_states']
step_paddings = tf.zeros(py_utils.GetShape(step_ids), dtype=p.dtype)
embs = self.emb.EmbLookup(theta.emb, tf.reshape(step_ids, [-1]))
embs = self.ApplyClipping(theta, embs)
atten_context, atten_probs, rnn_states, step_out, atten_states = (
self._DecodeStep(theta, encoder_outputs, embs, step_paddings,
prev_atten_context, prev_rnn_states,
prev_atten_states))
atten_probs = tf.reshape(atten_probs, tf.shape(prev_atten_probs))
logits = self.softmax.Logits(theta.softmax, [step_out])
log_probs = self.fns.qlogsoftmax(
logits, qmin=p.qlogsoftmax_range_min, qmax=0.0)
if p.use_prev_atten_ctx:
cur_atten_probs = prev_atten_probs
else:
cur_atten_probs = atten_probs
bs_results = py_utils.NestedMap({
'atten_probs': cur_atten_probs, # the probs exposed to beam search
'log_probs': log_probs,
})
new_states = py_utils.NestedMap({
'time_step': states.time_step + 1,
'rnn_states': rnn_states,
'atten_context': atten_context,
'atten_probs': atten_probs, # the updated attention probs
'atten_states': atten_states,
})
return bs_results, new_states
def _PostBeamSearchStepCallback(self, theta, encoder_outputs, new_step_ids,
states):
# There is nothing to do here.
return states
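The three callbacks above follow a generic init/pre/post protocol that the beam-search loop drives: initialize states once, then repeatedly score the vocabulary given the current states and advance them. A minimal pure-Python sketch of a greedy (beam size 1) driver for this protocol; the function name and dict-based states here are illustrative, not the lingvo beam-search API:

```python
def greedy_decode(theta, encoder_outputs, init_cb, pre_cb, post_cb,
                  start_id, eos_id, max_steps):
  """Drives the init/pre/post callback protocol greedily (beam size 1)."""
  _, states = init_cb(theta, encoder_outputs, 1)
  step_id, decoded = start_id, []
  for _ in range(max_steps):
    # Pre-step: score the vocabulary given the current prefix states.
    results, states = pre_cb(theta, encoder_outputs, step_id, states, 1)
    # Greedy choice: argmax over the per-token log probs.
    step_id = max(range(len(results['log_probs'])),
                  key=results['log_probs'].__getitem__)
    # Post-step: let the model react to the chosen ids (often a no-op).
    states = post_cb(theta, encoder_outputs, step_id, states)
    if step_id == eos_id:
      break
    decoded.append(step_id)
  return decoded
```

A real beam search additionally keeps `num_hyps_per_beam` hypotheses alive per source sentence and reorders states after each step, but the callback contract is the same.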
class TransformerDecoder(MTBaseDecoder):
"""Transformer decoder.
Implements the decoder of Transformer model:
https://arxiv.org/abs/1706.03762.
"""
@classmethod
def Params(cls):
p = super(TransformerDecoder, cls).Params()
p.Define('token_emb', layers.EmbeddingLayer.Params(),
'Token embedding layer params.')
p.Define('position_emb', layers.PositionalEmbeddingLayer.Params(),
'Position embedding layer params.')
p.Define('source_dim', 1024, 'Dimension of encoder outputs.')
p.Define('model_dim', 1024, 'Model dimension that applies to embedding '
'layers and all Transformer layers.')
p.Define('num_trans_layers', 6, 'Number of Transformer layers.')
p.Define(
'trans_tpl', layers_with_attention.TransformerLayer.Params(),
'Transformer layer params. '
' Can be a list. num_trans_layers should be divisible by '
'len(trans_tpl).')
p.Define('input_dropout_prob', 0.0, 'Prob at which we do input dropout.')
p.Define(
'is_transparent', False, 'If set, expects a tensor of shape '
'[time, batch, source_dim, num_trans_layers] as source encodings.')
p.Define(
'add_multiheaded_attention_scalar_summary', False,
'If set, will include scalar summaries for multi-headed attention'
' to visualize the sparsity statistics of attention weights.')
# TODO(miachen): Extend this to more general logic of adding multiple
# embedding fields.
p.Define('task_emb', None, 'Task embedding layer params.')
p.Define(
'init_step_ids', False,
'Initializes beam search with the first target id instead of <s>. '
'Use this when the decoder has a target language token instead of '
'the <s> token at time step 0. Make sure training is done in a '
'similar manner.')
# MASS pretraining related (https://github.com/microsoft/MASS)
p.Define(
'use_lang_dependent_atten', False, 'If True, attention between '
'encoder and decoder is language dependent.')
# Default config for the token embedding.
p.token_emb.vocab_size = 32000
p.token_emb.embedding_dim = p.model_dim
p.token_emb.max_num_shards = 16
p.token_emb.params_init = py_utils.WeightInit.Gaussian(
1.0 / math.sqrt(p.token_emb.embedding_dim))
p.token_emb.scale_sqrt_depth = True
# Default config for the position embedding.
p.position_emb.embedding_dim = p.model_dim
# Default config for the transformer layers.
p.trans_tpl.source_dim = p.model_dim
p.trans_tpl.tr_atten_tpl.source_dim = p.model_dim
p.trans_tpl.tr_atten_tpl.num_attention_heads = 8
p.trans_tpl.tr_fflayer_tpl.input_dim = p.model_dim
p.trans_tpl.tr_fflayer_tpl.hidden_dim = 2048
# Default config for beam search.
p.target_seq_len = 300
p.beam_search.length_normalization = 0.5
p.beam_search.coverage_penalty = 0.0
p.beam_search.batch_major_state = False
return p
@base_layer.initializer
def __init__(self, params):
super(TransformerDecoder, self).__init__(params)
p = self.params
if p.softmax.cls == layers.SharedSoftmaxLayer:
self._token_emb_vocab_size = p.softmax.num_classes
self._token_emb_dim = p.model_dim
self._share_sm_emb = True
else:
self._token_emb_vocab_size = p.token_emb.vocab_size
self._token_emb_dim = p.token_emb.embedding_dim
self._share_sm_emb = False
assert self._token_emb_vocab_size == p.softmax.num_classes
assert self._token_emb_dim == p.position_emb.embedding_dim
if p.model_dim != self._token_emb_dim:
tf.logging.warning(
'token_emb.embedding_dim != model_dim (%s vs. %s), '
'creating a projection!', self._token_emb_dim, p.model_dim)
proj_p = layers.ProjectionLayer.Params().Copy()
proj_p.name = 'emb_proj'
proj_p.input_dim = p.token_emb.embedding_dim
proj_p.output_dim = p.model_dim
self.CreateChild('emb_proj', proj_p)
if p.use_lang_dependent_atten and p.task_emb:
p.trans_tpl.num_aux_atten_post_proj = p.task_emb.vocab_size
p.softmax.input_dim = p.model_dim
if self._share_sm_emb:
# Taking shared emb/softmax layer out of the decoder variable scope so
# that it can also be shared by encoder if needed.
with tf.variable_scope('shared_emb', reuse=tf.AUTO_REUSE):
self.CreateChild('softmax', p.softmax)
with tf.variable_scope(p.name):
if not self._share_sm_emb:
self.CreateChild('token_emb', p.token_emb)
self.CreateChild('position_emb', p.position_emb)
if p.task_emb:
assert p.task_emb.embedding_dim == self._token_emb_dim
self.CreateChild('task_emb', p.task_emb)
dropout_tpl = layers.DropoutLayer.Params()
dropout_tpl.keep_prob = (1.0 - p.input_dropout_prob)
self.CreateChild('input_dropout', dropout_tpl)
params_trans_layers = []
denom = 1
if isinstance(p.trans_tpl, list):
denom = len(p.trans_tpl)
assert p.num_trans_layers % denom == 0
for i in range(p.num_trans_layers // denom):
if isinstance(p.trans_tpl, list):
for q in p.trans_tpl:
params = q.Copy()
params_trans_layers.append(params)
else:
params = p.trans_tpl.Copy()
params_trans_layers.append(params)
for i, params in enumerate(params_trans_layers):
params.name = 'trans_layer_%d' % i
params.packed_input = p.packed_input
params.has_aux_atten = True
params.mask_self_atten = True
self.CreateChildren('trans', params_trans_layers)
if not self._share_sm_emb:
self.CreateChild('softmax', p.softmax)
def _RemoveEOSProbs(self, p, probs, source_enc_len):
"""Remove the attention probs on EOS symbol and renormalize.
Args:
p: decoder params.
probs: attention probs matrix; float [batch, target_len, source_len].
source_enc_len: source encoder length; int [batch].
Returns:
probs with the value on the last real token (the EOS token) replaced by
0 and renormalized so that the final dim (src_len) sums to 1 again;
float [batch, target_len, source_len].
"""
batch = py_utils.GetShape(probs)[0]
source_enc_len = py_utils.HasShape(source_enc_len, [batch])
# Build a [batch, target_len] matrix of -1 values to scatter onto the
# EOS positions.
target_len = py_utils.GetShape(probs)[1]
replacements = tf.ones([py_utils.GetShape(probs)[0], target_len],
dtype=py_utils.FPropDtype(p)) * (-1)
index_0 = tf.reshape(tf.range(batch), shape=[batch, 1, 1])
index_0 *= tf.ones(shape=[batch, target_len, 1], dtype=tf.int32)
index_1 = tf.ones(shape=[batch, 1], dtype=tf.int32)
index_1 *= tf.expand_dims(tf.range(target_len), 0)
index_1 = tf.expand_dims(index_1, -1)
index_2 = tf.reshape(source_enc_len, shape=[batch, 1, 1]) - 1 # Note the -1
index_2 = tf.cast(index_2, tf.int32)
index_2 *= tf.ones(shape=[batch, target_len, 1], dtype=tf.int32)
index = tf.concat([index_0, index_1, index_2], axis=2)
# tf.scatter_nd writes the -1 replacements at the EOS positions and 0
# elsewhere; adding 1 yields a mask that is 0 at the EOS positions and 1
# everywhere else.
updates = tf.scatter_nd(
index, updates=replacements, shape=py_utils.GetShape(probs))
updates += 1
res = probs * updates
# Normalize so that probs sum to 1.
# Add eps to the sum to handle the case where all probs except the last
# one are 0. The attention probs then do not sum to 1, but this still
# seems better than distributing them evenly.
s = tf.reduce_sum(res, axis=2, keepdims=True)
epsilon = tf.constant(value=1e-6, dtype=py_utils.FPropDtype(p))
s += epsilon
res /= s
return res
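The scatter/renormalize logic in `_RemoveEOSProbs` can be illustrated with a standalone NumPy sketch (not part of the class; written with an explicit loop for clarity rather than `scatter_nd`-style indexing): zero out the attention prob on the last real (EOS) token of each source sequence, then renormalize the source dimension.

```python
import numpy as np

def remove_eos_probs_np(probs, source_enc_len, eps=1e-6):
  """probs: [batch, target_len, source_len]; source_enc_len: [batch]."""
  probs = probs.copy()
  for b in range(probs.shape[0]):
    eos_idx = int(source_enc_len[b]) - 1  # last non-padded position
    probs[b, :, eos_idx] = 0.0
  # Renormalize so the source dimension sums to 1 again; eps guards
  # against rows where the EOS token held all of the attention mass.
  s = probs.sum(axis=2, keepdims=True) + eps
  return probs / s
```

For `probs = [[[0.2, 0.3, 0.5]]]` with `source_enc_len = [3]`, the EOS prob 0.5 is dropped and the remainder rescales to approximately `[0.4, 0.6, 0.0]`.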
def _FProp(self, theta, encoder_outputs, targets):
"""Decodes `targets` given encoded source.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder. Expected to contain:
encoded - source encoding. When `p.is_transparent` is False, it is a
tensor of shape [time, batch, depth]. When `p.is_transparent`
is True, it is a tensor of shape
[time, batch, depth, num_trans_layers] if `self.do_eval` is
True, and a list of `num_trans_layers` tensors of shape
[time, batch, depth] if `self.do_eval` is False.
padding - source encoding's padding, of shape [time, batch].
segment_id - source segment id, of shape [time, batch].
targets: A dict of string to tensors representing the targets one tries
to predict. Each tensor in targets is of shape [batch, time].
Returns:
A `.NestedMap` containing output of last decoder layer and attention probs
- softmax_input: Tensor of shape [time, batch, params.softmax.input_dim].
- attention: `.NestedMap` of attention distributions of shape
[batch, target_length, source_length].
"""
p = self.params
source_encs = encoder_outputs.encoded
source_paddings = encoder_outputs.padding
src_segment_id = getattr(encoder_outputs, 'segment_id', None)
time, batch = py_utils.GetShape(source_paddings, 2)
if p.is_transparent:
if self.do_eval:
source_encs = py_utils.HasShape(
source_encs, [time, batch, p.source_dim, p.num_trans_layers])
source_encs = tf.unstack(source_encs, axis=3)
else:
assert isinstance(source_encs, list)
assert len(source_encs) == p.num_trans_layers
for i in range(p.num_trans_layers):
source_encs[i] = py_utils.HasShape(source_encs[i],
[time, batch, p.source_dim])
else:
source_encs = py_utils.HasShape(source_encs, [time, batch, p.source_dim])
source_encs = [source_encs] * p.num_trans_layers
with tf.name_scope(p.name):
# [batch, time]
target_ids = targets.ids
# [time, batch]
target_paddings = tf.transpose(targets.paddings)
target_segment_pos = None
target_segment_id = None
if p.packed_input:
target_segment_id = tf.transpose(targets.segment_ids)
target_segment_pos = targets.segment_pos
assert src_segment_id is not None, ('Need to provide src_segment_id '
'for packed input.')
# Embedding layer
# [batch, time, model_dim]
if not self._share_sm_emb:
token_embs = self.token_emb.EmbLookup(theta.token_emb, target_ids)
else:
token_embs = self.softmax.EmbLookup(theta.softmax, target_ids)
target_time = py_utils.GetShape(target_ids)[1]
# [1, time, model_dim]
if p.packed_input:
posit_embs = self.position_emb.FPropWithPosition(
theta.position_emb, target_segment_pos)
else:
posit_embs = tf.expand_dims(
self.position_emb.FProp(theta.position_emb, target_time), 0)
# [time, batch, model_dim]
input_embs = token_embs + posit_embs
atten_idx = None
if p.task_emb:
if p.use_lang_dependent_atten:
atten_idx = targets.task_ids
# Works for both packed and unpacked inputs.
atten_idx = tf.reshape(tf.transpose(atten_idx), [-1])
input_embs += self.task_emb.EmbLookup(theta.task_emb, targets.task_ids)
if p.model_dim != self._token_emb_dim:
input_embs = self.emb_proj.FProp(theta.emb_proj, input_embs)
input_embs = tf.transpose(input_embs, [1, 0, 2])
input_embs = self.input_dropout.FProp(theta.input_dropout, input_embs)
if not p.packed_input:
src_enc_len = tf.reduce_sum(1 - source_paddings, axis=0)
num_hyps_per_beam = tf.div(
py_utils.GetShape(target_paddings)[1],
py_utils.GetShape(source_paddings)[1])
src_enc_len = self._ExpandToNumHyps(src_enc_len, num_hyps_per_beam)
layer_in = input_embs
per_layer_attn_probs = []
for i, (layer, layer_theta) in enumerate(zip(self.trans, theta.trans)):
# [time, batch, model_dim]
layer_out, probs = layer.FProp(
layer_theta,
layer_in,
target_paddings,
source_encs[i],
source_paddings,
source_segment_id=target_segment_id,
aux_segment_id=src_segment_id,
atten_idx=atten_idx)
layer_in = layer_out
pl_probs = tf.transpose(probs, [1, 0, 2])
if p.packed_input:
# For packed inputs we are currently not removing the EOS token.
per_layer_attn_probs.append(pl_probs)
else:
# Remove attention weight on last (EOS) token and re-normalize
# so that last dimension sums to 1. See b/129097156.
# Original probs shape: [trg time, batch, src time]
norma_atten_probs_3d = self._RemoveEOSProbs(p, pl_probs, src_enc_len)
per_layer_attn_probs.append(norma_atten_probs_3d)
# per_layer_attn_probs shape: [batch, trg time, src time]
self._AddAttenProbsSummary(source_paddings, targets, per_layer_attn_probs)
# Aggregate per-layer attention probs.
aggregated_atten_probs = (
tf.math.add_n(per_layer_attn_probs) / len(per_layer_attn_probs))
attention_map = py_utils.NestedMap(probs=aggregated_atten_probs)
return py_utils.NestedMap(
softmax_input=layer_out, attention=attention_map)
def AddExtraDecodingInfo(self, encoder_outputs, targets):
"""Adds extra decoding information to encoded_outputs.
Args:
encoder_outputs: a NestedMap computed by encoder.
targets: a NestedMap containing target input fields.
Returns:
encoder_outputs with extra information used for decoding.
"""
p = self.params
if p.task_emb:
encoder_outputs['target_task_ids'] = targets.task_ids[:, 0]
if p.init_step_ids:
encoder_outputs['init_step_ids'] = targets.ids[:, 0]
return encoder_outputs
def ExtendStep(self, theta, encoder_outputs, new_ids, t, prefix_states):
"""Extend prefix as represented by `prefix_states` by one more step.
This function is expected to be called during fast decoding of Transformer
models.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder, containing:
- encoded: source encoding, of shape [time, batch, depth]. Can be [time,
bs, depth, num_trans_layers] if is_transparent is set.
- padding: source encoding's padding, of shape [time, batch].
new_ids: new input ids, of shape [batch].
t: a scalar, the current time step, 0-based.
prefix_states: a `.NestedMap` representing the prefix that has already
been decoded.
Returns:
A tuple (last_decoder_out, prefix_states, atten_probs), where
last_decoder_out is the output of the last decoder layer of
shape [batch, model_dim], `prefix_states` contains the updated prefix states,
and atten_probs contains attention in shape [batch, src_len] for the
given target position.
"""
p = self.params
source_paddings = encoder_outputs.padding
time, batch = py_utils.GetShape(source_paddings, 2)
if p.is_transparent:
source_encs = py_utils.HasShape(
encoder_outputs.encoded,
[time, batch, p.source_dim, p.num_trans_layers])
source_encs = tf.unstack(source_encs, axis=3)
else:
source_encs = py_utils.HasShape(encoder_outputs.encoded,
[time, batch, p.source_dim])
source_encs = [source_encs] * p.num_trans_layers
with tf.name_scope(p.name):
# Embedding layer
# [batch, time, model_dim]
if not self._share_sm_emb:
token_embs = self.token_emb.EmbLookup(theta.token_emb, new_ids)
else:
token_embs = self.softmax.EmbLookup(theta.softmax, new_ids)
# [time, model_dim]
posit_embs = tf.slice(
self.position_emb.FProp(theta.position_emb, p.target_seq_len), [t, 0],
[1, p.model_dim])
input_embs = token_embs + posit_embs
# Infer num_hyps_per_beam: new_ids has orig_batch_size * num_hyps_per_beam
# entries, while source_paddings has orig_batch_size columns.
num_hyps_per_beam = tf.div(
py_utils.GetShape(new_ids)[0],
py_utils.GetShape(source_paddings)[1])
atten_idx = None
if p.task_emb:
task_ids = self._ExpandToNumHyps(encoder_outputs.target_task_ids,
num_hyps_per_beam)
if p.use_lang_dependent_atten:
atten_idx = task_ids
input_embs += self.task_emb.EmbLookup(theta.task_emb, task_ids)
if p.model_dim != self._token_emb_dim:
input_embs = self.emb_proj.FProp(theta.emb_proj, input_embs)
input_embs = self.input_dropout.FProp(theta.input_dropout, input_embs)
# Make a copy of the input.
out_prefix_states = prefix_states.Pack(prefix_states.Flatten())
layer_in = input_embs
# Infer true source encoder length from the padding.
src_enc_len = tf.reduce_sum(1 - source_paddings, axis=0)
# Need to expand src_enc_len to reflect multiple hypotheses.
src_enc_len = self._ExpandToNumHyps(src_enc_len, num_hyps_per_beam)
atten_probs = []
for i, (layer, layer_theta) in enumerate(zip(self.trans, theta.trans)):
# [time, batch, model_dim]
layer_prefix_states = prefix_states['layer_%i' % i]
layer_out, probs, updated_prefix_states = layer.ExtendStep(
layer_theta,
layer_in,
layer_prefix_states,
source_encs[i],
source_paddings,
t if p.beam_search.name == 'tpu_beam_search' else None,
atten_idx=atten_idx)
out_prefix_states['layer_%i' % i] = updated_prefix_states
layer_in = layer_out
# Enforce shape: [batch, src_len]
probs = tf.squeeze(probs)
# Remove attention weight on last (EOS) token and re-normalize
# so that last dimension sums to 1. See b/129097156.
probs_3d = tf.expand_dims(probs, axis=1)
probs_3d = self._RemoveEOSProbs(p, probs_3d, src_enc_len)
probs = tf.squeeze(probs_3d, axis=1)
atten_probs.append(probs)
# Aggregate per-layer attention probs.
aggregated_atten_probs = tf.math.add_n(atten_probs) / len(atten_probs)
return layer_out, out_prefix_states, aggregated_atten_probs
def ComputePredictions(self, theta, encoder_outputs, targets):
"""Decodes `targets` given encoded source.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder. Expected to contain:
encoded - source encoding, of shape [time, batch, depth]. Can be [time,
batch, depth, num_layers] if is_transparent is set.
padding - source encoding's padding, of shape [time, batch].
segment_id - source segment id, of shape [time, batch].
targets: A dict of string to tensors representing the targets one tries
to predict. Each tensor in targets is of shape [batch, time].
Returns:
A `.NestedMap` containing output of last decoder layer and attention probs
- softmax_input: Tensor of shape [time, batch, params.softmax.input_dim].
- attention: `.NestedMap` of attention distributions of shape
[batch, time, source_len].
"""
return self._FProp(theta, encoder_outputs, targets)
def SampleSequenceDecode(self, encoder_outputs):
"""Decode via sampling from softmax at each step.
Args:
encoder_outputs: the outputs of the encoder.
Returns:
BeamSearchDecodeOutput, same as what BeamSearchDecode returns.
"""
p = self.params
non_tpu = p.beam_search.name != 'tpu_beam_search'
def InitCallback(theta, encoder_outputs, num_hyps_per_beam=1):
"""Wrapper for _InitBeamSearchStateCallback for sequence sampler.
The main change is to ensure state tensors have fixed shapes.
Args:
theta: A `.NestedMap` object containing weights' values of this layer
and its children layers.
encoder_outputs: a NestedMap computed by encoder.
num_hyps_per_beam: An int, the number of hyps to keep per source sentence.
Returns:
A NestedMap of
- initial_results: a `.NestedMap` of initial results.
- states: a `.NestedMap` of initial model states.
"""
init_results, states = self._InitBeamSearchStateCallback(
theta, encoder_outputs, num_hyps_per_beam)
if non_tpu:
prefix_states = states['prefix_states']
for layer in range(p.num_trans_layers):
key = prefix_states['layer_%d' % layer]['key']
value = prefix_states['layer_%d' % layer]['value']
bs = key.shape[1]
atten_dim = key.shape[2]
zeros = tf.zeros([p.target_seq_len, bs, atten_dim],
dtype=py_utils.FPropDtype(p))
prefix_states['layer_%d' % layer]['key'] = tf.concat([key, zeros], 0)
prefix_states['layer_%d' % layer]['value'] = tf.concat([value, zeros],
0)
return init_results, states
def PreBeamSearchCallback(theta,
encoder_outputs,
step_ids,
states,
num_hyps_per_beam=1):
"""Wrapper for _PreBeamSearchStepCallback for sequence sampler.
The main change is to ensure state tensors have fixed shapes.
Args:
theta: A `.NestedMap` object containing weights' values of this layer
and its children layers.
encoder_outputs: a NestedMap computed by encoder.
step_ids: A tensor of shape [tgt_batch, 1].
states: A `.NestedMap` of tensors representing states that the clients
would like to keep track of for each of the active hyps.
num_hyps_per_beam: Beam size.
Returns:
A NestedMap of
- results: A `.NestedMap` of beam search results.
- out_states: A `.NestedMap`. The updated states.
"""
if non_tpu:
# Strip off paddings.
prefix_states = states['prefix_states']
target_time = states.time_step
for layer in range(p.num_trans_layers):
key = prefix_states['layer_%d' % layer]['key']
val = prefix_states['layer_%d' % layer]['value']
prefix_states['layer_%d' % layer]['key'] = tf.slice(
key, [0, 0, 0], [target_time, -1, -1])
prefix_states['layer_%d' % layer]['value'] = tf.slice(
val, [0, 0, 0], [target_time, -1, -1])
bs_results, new_states = self._PreBeamSearchStepCallback(
theta, encoder_outputs, step_ids, states, num_hyps_per_beam)
if non_tpu:
# Add back paddings (to maintain paddings shape).
bs = tf.shape(new_states.prefix_states['layer_0']['key'])[1]
dim = tf.shape(new_states.prefix_states['layer_0']['key'])[2]
pad = tf.zeros([p.target_seq_len - new_states.time_step, bs, dim],
dtype=py_utils.FPropDtype(p))
for layer in range(p.num_trans_layers):
key = new_states.prefix_states['layer_%d' % layer]['key']
val = new_states.prefix_states['layer_%d' % layer]['value']
new_states.prefix_states['layer_%d' % layer]['key'] = tf.concat(
[key, pad], axis=0)
new_states.prefix_states['layer_%d' % layer]['value'] = tf.concat(
[val, pad], axis=0)
return bs_results, new_states
random_seed = tf.random.uniform(
shape=[], maxval=(2**31 - 1), dtype=tf.int32, seed=p.random_seed)
sample = self.target_sequence_sampler.Sample(
self.theta, encoder_outputs, random_seed, InitCallback,
PreBeamSearchCallback, self._PostBeamSearchStepCallback)
bs = tf.shape(sample.ids)[0]
# Only need to make sure topk_hyps has the right shape
# [bs, num_hyps_per_beam], where num_hyps_per_beam=1 for sampling.
# TODO(yuancao): Support sampling multiple sequences and remove
# num_hyps_per_beam constraint.
assert self.params.beam_search.num_hyps_per_beam == 1
sample.topk_hyps = tf.zeros([bs, 1], dtype=tf.string)
sample.topk_ids = sample.ids
weights = 1 - sample.paddings
sample.topk_lens = tf.cast(tf.reduce_sum(weights, axis=1), dtype=tf.int32)
sample.topk_scores = tf.reduce_sum(
tf.math.log(tf.reduce_max(tf.nn.softmax(sample.logits), axis=2)) *
weights,
axis=1)
return sample
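The slice-and-pad bookkeeping in the wrappers above keeps the attention key/value caches at a static length of `target_seq_len` rows, which is required when shapes must stay fixed across loop iterations. A hypothetical NumPy sketch of one cache update step (the function name and layout are illustrative, not the lingvo implementation):

```python
import numpy as np

def step_cache(cache, new_row, t, max_len):
  """One fixed-shape cache update.

  cache: [max_len, batch, dim] buffer whose first t rows are valid.
  new_row: [1, batch, dim] entry for step t.
  Returns a buffer of the same static shape with t+1 valid rows.
  """
  valid = cache[:t]                                   # strip padding
  valid = np.concatenate([valid, new_row], axis=0)    # append step t
  pad = np.zeros((max_len - (t + 1),) + cache.shape[1:], cache.dtype)
  return np.concatenate([valid, pad], axis=0)         # pad back
```

Because the output shape never depends on `t`, the same buffer can flow through a shape-invariant decoding loop.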
def _InitBeamSearchStateCallback(self, theta, encoder_outputs,
num_hyps_per_beam):
"""Returns initial beams search states.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder.
num_hyps_per_beam: An int, the number of hyps to keep per source sentence.
Returns:
A tuple (initial_results, states).
initial_results: a `.NestedMap` of initial results.
atten_probs:
The initial attention probs, of shape [tgt_batch, src_len].
states: a `.NestedMap` of initial model states.
source_encs:
A tensor of shape [src_batch, src_len, source_dim].
source_paddings:
A tensor of shape [src_batch, src_len].
target_ids:
Initial empty list of decoded ids. [num_hyps, 0].
"""
p = self.params
source_encs = encoder_outputs.encoded
num_hyps = py_utils.GetShape(source_encs)[1] * num_hyps_per_beam
source_len = py_utils.GetShape(source_encs)[0]
# Dummy attention probs
atten_probs = tf.ones([num_hyps, source_len]) / tf.cast(
source_len, tf.float32)
initial_results = py_utils.NestedMap(
log_probs=tf.zeros([num_hyps, p.softmax.num_classes],
dtype=py_utils.FPropDtype(p)),
atten_probs=atten_probs)
if p.init_step_ids:
initial_results['step_ids'] = tf.expand_dims(
self._ExpandToNumHyps(encoder_outputs.init_step_ids,
num_hyps_per_beam), 1)
batch_size = num_hyps
if isinstance(p.trans_tpl, list):
atten_hidden_dim = p.trans_tpl[0].tr_atten_tpl.atten_hidden_dim
assert [tpl.tr_atten_tpl.atten_hidden_dim for tpl in p.trans_tpl
].count(atten_hidden_dim) == len(
p.trans_tpl), 'atten_hidden_dim must match'
else:
atten_hidden_dim = p.trans_tpl.tr_atten_tpl.atten_hidden_dim
if not atten_hidden_dim:
atten_hidden_dim = p.model_dim
if p.beam_search.name == 'tpu_beam_search':
seq_len = p.target_seq_len
else:
seq_len = 0
prefix_states = py_utils.NestedMap()
for layer in range(p.num_trans_layers):
prefix_states['layer_%d' % layer] = py_utils.NestedMap({
'key':
tf.zeros([seq_len, batch_size, atten_hidden_dim],
dtype=py_utils.FPropDtype(p)),
'value':
tf.zeros([seq_len, batch_size, atten_hidden_dim],
dtype=py_utils.FPropDtype(p)),
})
return initial_results, py_utils.NestedMap({
'prefix_states': prefix_states,
'time_step': tf.constant(0)
})
def _PreBeamSearchStepCallback(self, theta, encoder_outputs, step_ids, states,
num_hyps_per_beam):
"""Returns logits for sampling ids and the next model states.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: a NestedMap computed by encoder.
step_ids: A tensor of shape [tgt_batch, 1].
states: A `.NestedMap` of tensors representing states that the clients
would like to keep track of for each of the active hyps.
num_hyps_per_beam: Beam size.
Returns:
A tuple (results, out_states).
results: A `.NestedMap` of beam search results.
atten_probs:
The updated attention probs, of shape [tgt_batch, src_len].
log_probs:
Log prob for each of the tokens in the target vocab. This is of
shape [tgt_batch, vocab_size].
out_states: A `.NestedMap`. The updated states.
source_encs:
A tensor of shape [src_batch, src_len, source_dim].
source_paddings:
A tensor of shape [src_batch, src_len].
target_ids:
Updated list of decoded ids, of shape [num_hyps, num decoded ids].
"""
p = self.params
target_time = states.time_step
prefix_states = states.prefix_states
new_states = states.Pack(states.Flatten())
layer_out, updated_prefix_states, atten_probs = self.ExtendStep(
theta, encoder_outputs, tf.squeeze(step_ids, 1), target_time,
prefix_states)
new_states.prefix_states = updated_prefix_states
new_states.time_step = target_time + 1
softmax_input = tf.reshape(layer_out, [-1, p.softmax.input_dim])
logits = self.softmax.Logits(theta.softmax, [softmax_input])
num_hyps = py_utils.GetShape(step_ids)[0]
# [time * batch, num_classes] -> [time, batch, num_classes]
logits = tf.reshape(logits, (-1, num_hyps, p.softmax.num_classes))
# [time, batch, num_classes] -> [batch, time, num_classes]
logits = tf.transpose(logits, (1, 0, 2))
# Only return logits for the last ids
log_probs = tf.nn.log_softmax(tf.squeeze(logits, axis=1))
bs_results = py_utils.NestedMap({
'atten_probs': atten_probs,
'log_probs': log_probs,
})
return bs_results, new_states
def _PostBeamSearchStepCallback(self, theta, encoder_outputs, new_step_ids,
states):
# There is nothing to do here.
return states
def _AddAttenProbsScalarSummary(self, source_paddings, targets, atten_probs):
"""Add scalar summary of multi-headed transformer attention probs.
This summary is primarily used to show statistics of the multi-headed
attention that reveals potential sparsity related properties. The
multi-headed attention probability tensors are exposed by
`MultiHeadedAttention.ComputeContextVectorWithSource` with the name
`multi_headed_atten_prob`. The following statistics are summarized:
- 1_v_2: margin of the largest value vs. the 2nd largest
- 1_v_3: similar, but vs the 3rd largest
- mean: mean of the attention probs. NOTE: the sequences in a mini-batch
are not always of the same length. The attention probability for the
padded time index in target sequences are removed. However, the padding
for the source sequences are left unchanged. As a result, the atten
probs vectors will have some extra zero entries, so the mean calculated
here will be smaller than the true mean.
- source_padding_ratio: as explained above, the source paddings are not
handled when computing the mean. This summary shows the average ratio
of time-steps that are padded values in the source sequences, to give
a reference of roughly how much the mean summarized above should be
adjusted.
- 1_v_mean: margin of the largest value vs the mean value.
- sum: the sum of the attention prob vectors. Should always be 1, for sanity
check only.
The quantities above are computed for each sequence in the mini-batch, each
valid (target) sequence index, and each attention head, and then the
average value is reported to the tensorboard as a scalar summary.
Args:
source_paddings: source padding, of shape [src_len, src_batch].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [tgt_batch, tgt_len].
atten_probs: a list of attention probs, each element is of shape [tgt_len,
tgt_batch, src_len].
"""
default_graph = tf.get_default_graph()
# looks like fprop/wmt14_en_de_transformer/tower_0_0/dec
name_scope = default_graph.get_name_scope()
# NOTE: shapes
# source_paddings: [src_len, src_batch]
# targets.paddings: [tgt_batch, tgt_len].
source_time = tf.shape(source_paddings)[0]
source_batch = tf.shape(source_paddings)[1]
target_time = tf.shape(targets.paddings)[1]
target_batch = tf.shape(targets.paddings)[0]
num_heads = self.trans[0].self_atten.params.num_attention_heads
with tf.control_dependencies([tf.assert_equal(source_batch, target_batch)]):
target_batch = tf.identity(target_batch)
source_padding_ratio = tf.cast(
tf.reduce_sum(source_paddings, axis=0), tf.float32)
source_padding_ratio /= tf.cast(tf.shape(source_paddings)[0], tf.float32)
summary_utils.scalar('source_padding_ratio',
tf.reduce_mean(source_padding_ratio))
for i in range(len(atten_probs)):
suffix = '_{}'.format(i) if i > 0 else ''
# Tensor exported from MultiHeadedAttention.ComputeContextVectorWithSource
# shape [target_time * batch_size, num_heads, source_time]
try:
mha_probs = default_graph.get_tensor_by_name(
name_scope + ('/aux_atten{}/MultiHeadedAttention/'
'ComputeContextVectorWithSource/'
'multi_headed_atten_prob:0').format(suffix))
except KeyError:
# no such tensor found, stop here
return
mha_probs = tf.reshape(
mha_probs, (target_time, target_batch, num_heads, source_time))
# remove time padding from target_time
# (tgt_t, batch, n_heads, src_t) => (n_valid, n_heads, src_t)
# explicit reshape is used here to give masks static ndims, otherwise
# tf.boolean_mask will fail
masks = tf.reshape(
tf.equal(targets.paddings, 0), (target_time, target_batch))
mha_probs = tf.boolean_mask(mha_probs, masks)
# note we did not remove invalid entries according to source_paddings,
# because the result will no longer be a rectangular tensor, just
# remember when interpreting some statistics like mean, there are some
# padded zero entries due to non-uniform sequence lengths
# (n_valid, n_heads, src_t) => (n_valid*n_heads, src_t)
mha_probs = tf.reshape(mha_probs, (-1, tf.shape(mha_probs)[-1]))
probs_top3, _ = tf.math.top_k(mha_probs, k=3)
probs_mean = tf.math.reduce_mean(mha_probs, axis=1)
probs_sum = tf.math.reduce_sum(mha_probs, axis=1) # sanity check
margins_12 = tf.reduce_mean(probs_top3[:, 0] - probs_top3[:, 1])
margins_13 = tf.reduce_mean(probs_top3[:, 0] - probs_top3[:, 2])
margins_1m = tf.reduce_mean(probs_top3[:, 0] - probs_mean)
summary_utils.scalar('1_v_2/atten{}'.format(i), margins_12)
summary_utils.scalar('1_v_3/atten{}'.format(i), margins_13)
summary_utils.scalar('1_v_mean/atten{}'.format(i), margins_1m)
summary_utils.scalar('mean/atten{}'.format(i), tf.reduce_mean(probs_mean))
summary_utils.scalar('sum/atten{}'.format(i), tf.reduce_mean(probs_sum))
def _AddAttenProbsSummary(self, source_paddings, targets, atten_probs):
"""Add summary of attention probs.
Args:
source_paddings: source padding, of shape [src_len, src_batch].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [tgt_batch, tgt_len].
atten_probs: a list of attention probs, each element is of shape [tgt_len,
tgt_batch, src_len].
"""
super(TransformerDecoder,
self)._AddAttenProbsSummary(source_paddings, targets, atten_probs)
if self.cluster.add_summary and self.params.add_multiheaded_attention_scalar_summary:
self._AddAttenProbsScalarSummary(source_paddings, targets, atten_probs)
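As a hedged, framework-free sketch (toy code, not part of lingvo), the per-vector statistics summarized by `_AddAttenProbsScalarSummary` can be illustrated on a single attention-probability vector:

```python
# Toy illustration of the summarized attention statistics; plain Python,
# operating on one attention-probability vector.
def atten_margin_stats(probs):
    top3 = sorted(probs, reverse=True)[:3]
    mean = sum(probs) / len(probs)
    return {
        '1_v_2': top3[0] - top3[1],    # margin of largest vs. 2nd largest
        '1_v_3': top3[0] - top3[2],    # margin of largest vs. 3rd largest
        '1_v_mean': top3[0] - mean,    # margin of largest vs. mean
        'sum': sum(probs),             # sanity check, should be ~1.0
    }

stats = atten_margin_stats([0.6, 0.25, 0.1, 0.05])
```

A sharply peaked vector yields large margins; note that zero entries introduced by source padding drag the mean down, exactly as the docstring warns.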
class InsertionDecoder(base_decoder.BaseBeamSearchDecoder):
"""Basic Insertion decoder for MT (or any symbol based sequence).
References:
KERMIT: https://arxiv.org/pdf/1906.01604.pdf
Insertion Transformer: https://arxiv.org/pdf/1902.03249.pdf
"""
@classmethod
def Params(cls):
p = super(InsertionDecoder, cls).Params()
p.Define('token_emb', layers.EmbeddingLayer.Params(),
'Token embedding layer params.')
p.Define('position_emb', layers.PositionalEmbeddingLayer.Params(),
'Position embedding layer params.')
p.Define(
'model_dim', 1024, 'Model dimension that applies to embedding '
'layers and all Transformer layers.')
p.Define('num_trans_layers', 6, 'Number of Transformer layers.')
p.Define('trans_tpl', layers_with_attention.TransformerLayer.Params(),
'Transformer layer params.')
p.Define('softmax', layers.SimpleFullSoftmax.Params(), 'Softmax params.')
p.Define('input_dropout_prob', 0.0, 'Prob at which we do input dropout.')
# Default config for the token embeddings.
p.token_emb.vocab_size = 32000 * 2
p.token_emb.embedding_dim = p.model_dim
p.token_emb.max_num_shards = 16
p.token_emb.params_init = py_utils.WeightInit.Gaussian(
1.0 / math.sqrt(p.token_emb.embedding_dim))
p.token_emb.scale_sqrt_depth = True
# Default config for the position embeddings.
p.position_emb.embedding_dim = p.model_dim
# Default config for the transformer layers.
p.trans_tpl.source_dim = p.model_dim
p.trans_tpl.tr_atten_tpl.source_dim = p.model_dim
p.trans_tpl.tr_atten_tpl.num_attention_heads = 8
p.trans_tpl.tr_fflayer_tpl.input_dim = p.model_dim
p.trans_tpl.tr_fflayer_tpl.hidden_dim = 4096
# Default config for the softmax.
p.softmax.num_classes = 32000
p.softmax.num_shards = 8
p.target_seq_len = 300
return p
@classmethod
def UpdateTargetVocabSize(cls, p, vocab_size, wpm_model=None):
"""Sets the vocab size in the params.
Args:
p: model params.
vocab_size: size of the vocabulary.
wpm_model: file name prefix pointing to a wordpiece model.
Returns:
Model params updated with the vocab size and wpm model.
"""
p.softmax.num_classes = vocab_size
return p
@base_layer.initializer
def __init__(self, params):
super(InsertionDecoder, self).__init__(params)
p = self.params
assert p.token_emb.vocab_size % p.softmax.num_classes == 0
assert p.token_emb.embedding_dim == p.position_emb.embedding_dim
assert p.token_emb.embedding_dim == p.model_dim
with tf.variable_scope(p.name):
self.CreateChild('token_emb', p.token_emb)
self.CreateChild('position_emb', p.position_emb)
dropout_tpl = layers.DropoutLayer.Params()
dropout_tpl.keep_prob = (1.0 - p.input_dropout_prob)
self.CreateChild('input_dropout', dropout_tpl)
params_trans_layers = []
for i in range(p.num_trans_layers):
params = p.trans_tpl.Copy()
params.name = 'trans_layer_%d' % i
params.packed_input = p.packed_input
params.has_aux_atten = False
params.mask_self_atten = True
params_trans_layers.append(params)
self.CreateChildren('trans', params_trans_layers)
p.softmax.input_dim = p.model_dim
self.CreateChild('softmax', p.softmax)
def ComputePredictions(self, theta, encoder_outputs, targets):
"""Compute 1-step of the insertion iteration.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: This should be None.
targets: A `.NestedMap`.
- ids: The target ids of shape [batch_size, time_dim].
- paddings: The target paddings of shape [batch_size, time_dim].
Returns:
A `.NestedMap`.
- outputs: The contextualized output vectors of shape
[batch_size, time_dim, model_dim].
"""
p = self.params
# TODO(williamchan): Enable cross-attention.
assert encoder_outputs is None
with tf.name_scope(p.name):
# [batch, time]
target_ids = targets.ids
# [time, batch]
target_paddings = tf.transpose(targets.paddings)
# Embedding layer
# [batch, time, model_dim]
token_embs = self.token_emb.EmbLookup(theta.token_emb, target_ids)
target_time = py_utils.GetShape(target_ids)[1]
# [1, time, model_dim]
posit_embs = tf.expand_dims(
self.position_emb.FProp(theta.position_emb, target_time), 0)
# [batch, time, model_dim]
input_embs = token_embs + posit_embs
# [time, batch, model_dim]
input_embs = tf.transpose(input_embs, [1, 0, 2])
input_embs = self.input_dropout.FProp(theta.input_dropout, input_embs)
layer_in = input_embs
for layer, layer_theta in zip(self.trans, theta.trans):
# [time, batch, model_dim]
layer_out, _ = layer.FProp(layer_theta, layer_in, target_paddings)
layer_in = layer_out
return py_utils.NestedMap(outputs=layer_out)
def ComputeLoss(self, theta, predictions, targets):
# pyformat: disable
"""Returns the insertion loss.
Args:
theta: A `.NestedMap` object capturing decoder model parameters.
predictions: A `.NestedMap` describing the decoding process, requiring
.outputs: Tensor of shape [time, batch, params.softmax.input_dim].
targets: A `.NestedMap`.
- target_indices: A Tensor capturing the relevant insertion tokens to
tf.gather_nd the log-probs.
- target_weights: A Tensor capturing the relevant insertion tokens'
weights.
Returns:
Two dicts.
- A map from metric name (a python string) to a tuple (value, weight).
Both value and weight are scalar Tensors.
- A map from name to arbitrary tensors, where the first dimension must
be the batch index.
"""
# pyformat: enable
p = self.params
batch_size = py_utils.GetShape(predictions.outputs)[0]
state = tf.reshape(predictions.outputs, [-1, p.softmax.input_dim])
logits = self.softmax.Logits(theta.softmax, state)
logits = tf.reshape(
logits,
tf.concat([
py_utils.GetShape(predictions.outputs)[:-1],
[p.softmax.num_classes]
], 0))
log_probs = tf.nn.log_softmax(logits)
# `target_indices` are in the form [batch, time, vocab], whereas `logits`
# are in the form [time, batch, vocab]. We need to swap the columns.
target_indices = tf.concat([
predictions.tgt.target_indices[:, 1:2],
predictions.tgt.target_indices[:, 0:1],
predictions.tgt.target_indices[:, 2:3],
], 1)
loss = tf.reduce_sum(
tf.gather_nd(log_probs, target_indices) *
predictions.tgt.target_weights)
loss_weight = tf.cast(batch_size, tf.float32)
return ({
'loss': (loss, loss_weight)
}, {
'log_probs': log_probs,
'logits': logits
})
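A minimal, self-contained sketch (toy code, not the lingvo implementation) of the column swap performed on `target_indices` above, where gather indices arrive as (batch, time, vocab) triples but the log-probs are laid out (time, batch, vocab):

```python
# Toy sketch of the index-column swap: exchange the first two columns of
# each (batch, time, vocab) triple so it indexes a [time, batch, vocab]
# tensor instead.
def swap_batch_time(indices):
    return [(t, b, v) for (b, t, v) in indices]

swapped = swap_batch_time([(0, 3, 7), (1, 2, 5)])
```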
class TransformerBatchMajorDecoder(MTBaseDecoder):
"""Transformer decoder with batch major implementation.
Implements the decoder of Transformer model:
https://arxiv.org/abs/1706.03762.
"""
@classmethod
def Params(cls):
p = super(TransformerBatchMajorDecoder, cls).Params()
p.Define('token_emb', layers.EmbeddingLayer.Params(),
'Token embedding layer params.')
p.Define('shared_emb', None, 'Embedding shared with softmax.')
p.Define('position_emb', layers.PositionalEmbeddingLayer.Params(),
'Position embedding layer params.')
p.Define('source_dim', 1024, 'Dimension of encoder outputs.')
p.Define(
'model_dim', 1024, 'Model dimension that applies to embedding '
'layers and all Transformer layers.')
p.Define('num_trans_layers', 6, 'Number of Transformer layers.')
p.Define('trans_decoder_tpl',
batch_major_attention.TransformerDecoderLayer.Params(),
'Transformer layer params.')
p.Define('input_dropout_prob', 0.0, 'Prob at which we do input dropout.')
p.Define('input_dropout_tpl', layers.DropoutLayer.Params(),
'Input dropout layer params.')
p.Define('final_layer_norm', False,
'Whether or not to apply layer norm after transformer stack.')
p.Define('use_fused_layernorm', False, 'Whether to use fused layernorm.')
p.Define('use_fast_softmax', False,
'Whether or not to use a faster softmax with label smoothing.')
p.Define(
'input_data_format', 'TBC', 'The data format of input features: '
'TBC for [time, batch, feature_dim], '
'BTC for [batch, time, feature_dim].')
p.Define(
'prediction_data_format', 'TBC',
'The data format of predictions and per-example losses: '
'TBC for [time, batch, ...], '
'BTC for [batch, time, ...].')
# Default config for the token embedding.
p.token_emb.vocab_size = 32000
p.token_emb.embedding_dim = p.model_dim
p.token_emb.max_num_shards = 16
p.token_emb.params_init = py_utils.WeightInit.Gaussian(
1.0 / math.sqrt(p.token_emb.embedding_dim))
p.token_emb.scale_sqrt_depth = True
# Default config for the position embedding.
p.position_emb.embedding_dim = p.model_dim
# Default config for the transformer decoder layers.
p.trans_decoder_tpl.input_dim = p.model_dim
p.trans_decoder_tpl.tr_atten_tpl.input_dim = p.model_dim
p.trans_decoder_tpl.tr_atten_tpl.num_heads = 8
p.trans_decoder_tpl.tr_fflayer_tpl.input_dim = p.model_dim
p.trans_decoder_tpl.tr_fflayer_tpl.hidden_dim = 2048
# Default config for beam search.
p.target_seq_len = 300
p.beam_search.length_normalization = 0.5
p.beam_search.coverage_penalty = 0.0
p.beam_search.batch_major_state = False
p.beam_search.batch_major_compute = True
p.beam_search.short_seq_limit = 40
return p
@base_layer.initializer
def __init__(self, params):
super(TransformerBatchMajorDecoder, self).__init__(params)
p = self.params
if p.shared_emb:
with tf.variable_scope('shared_emb', reuse=tf.AUTO_REUSE):
self.CreateChild('softmax', p.shared_emb)
with tf.variable_scope(p.name):
if not p.shared_emb:
self.CreateChild('token_emb', p.token_emb)
self.CreateChild('position_emb', p.position_emb)
dropout_tpl = p.input_dropout_tpl.Copy()
dropout_tpl.keep_prob = (1.0 - p.input_dropout_prob)
self.CreateChild('input_dropout', dropout_tpl)
params_trans_layers = []
for i in range(p.num_trans_layers):
params = p.trans_decoder_tpl.Copy()
params.name = 'decoder_trans_layer_%d' % i
params_trans_layers.append(params)
self.CreateChildren('decoder_trans', params_trans_layers)
p.softmax.input_dim = p.model_dim
if not p.shared_emb:
self.CreateChild('softmax', p.softmax)
if p.final_layer_norm:
layer_norm_p = layers.LayerNorm.Params().Set(
name='final_ln',
input_dim=p.model_dim,
use_fused_layernorm=p.use_fused_layernorm,
fprop_dtype=p.input_dropout_tpl.fprop_dtype)
self.CreateChild('final_ln', layer_norm_p)
def _MaybeTransposeEncoderOutputs(self, encoder_outputs, target_data_format):
p = self.params
if p.input_data_format == target_data_format:
return encoder_outputs
transposed = py_utils.NestedMap(
encoded=tf.transpose(encoder_outputs.encoded, [1, 0, 2]),
padding=tf.transpose(encoder_outputs.padding))
if getattr(encoder_outputs, 'segment_id', None) is None:
transposed.segment_id = None
else:
transposed.segment_id = tf.transpose(encoder_outputs.segment_id)
return transposed
def _MaybeTransposeTargets(self, targets):
p = self.params
if p.prediction_data_format == 'BTC':
return targets
transposed = py_utils.NestedMap()
for k, v in targets.items():
if v is not None:
with tf.name_scope('transpose_%s' % k):
v = tf.transpose(py_utils.HasShape(v, [-1, -1]))
transposed[k] = v
return transposed
def _FProp(self, theta, encoder_outputs, targets):
"""Decodes `targets` given encoded source.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: A `.NestedMap` object computed by encoder.
* encoded - Source encoding of shape [source_time, source_batch, dim] or
[source_batch, source_time, dim], depending on p.input_data_format.
* paddings - Source encoding's padding of shape [source_time, source_batch]
or [source_batch, source_time].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [batch, target_time].
Returns:
softmax_input: Tensor of shape [target_time, batch, dim].
"""
p = self.params
# [batch, source_time, dim]
encoder_out_bm = self._MaybeTransposeEncoderOutputs(encoder_outputs, 'BTC')
aux_vec = encoder_out_bm.encoded
aux_paddings = encoder_out_bm.padding
aux_segment_id = getattr(encoder_out_bm, 'segment_id', None)
with tf.name_scope(p.name):
# [batch, target_time]
target_ids = targets.ids
target_paddings = targets.paddings
target_time = py_utils.GetShape(target_ids)[1]
target_segment_pos = None
target_segment_id = None
if p.packed_input:
target_segment_id = targets.segment_ids
target_segment_pos = targets.segment_pos
assert aux_segment_id is not None, ('Need to provide aux_segment_id '
'for packed input.')
# Embedding layer
# [batch, target_time, dim]
if not p.shared_emb:
token_embs = self.token_emb.EmbLookup(theta.token_emb, target_ids)
else:
token_embs = self.softmax.EmbLookup(theta.softmax, target_ids)
# [1, target_time, dim]
if p.packed_input:
posit_embs = self.position_emb.FPropWithPosition(
theta.position_emb, target_segment_pos)
else:
posit_embs = tf.expand_dims(
self.position_emb.FProp(theta.position_emb, target_time), 0)
# [batch, target_time, dim]
input_embs = token_embs + tf.cast(posit_embs, token_embs.dtype)
if p.input_dropout_tpl.fprop_dtype:
input_embs = tf.cast(input_embs, p.input_dropout_tpl.fprop_dtype)
target_paddings = tf.cast(target_paddings,
p.input_dropout_tpl.fprop_dtype)
input_embs = self.input_dropout.FProp(theta.input_dropout, input_embs)
layer_in = input_embs
# Explicitly set the input shape of the Transformer layers, to avoid
# unknown-shape errors raised by tf.einsum on non-TPU devices.
batch, _, dim = py_utils.GetShape(aux_vec, 3)
layer_in = tf.reshape(layer_in, [batch, target_time, dim])
if p.packed_input:
segment_mask = batch_major_attention.SegmentMask(
target_segment_id, target_segment_id, dtype=layer_in.dtype)
causal_padding = tf.expand_dims(
tf.tile(
tf.expand_dims(
batch_major_attention.CausalPadding(
target_time, dtype=layer_in.dtype), 0), [batch, 1, 1]),
1)
causal_mask = causal_padding * segment_mask.dtype.max * tf.constant(
-0.7, dtype=segment_mask.dtype)
segment_mask += causal_mask
aux_segment_mask = batch_major_attention.SegmentMask(
target_segment_id, aux_segment_id, dtype=layer_in.dtype)
for layer, layer_theta in zip(self.decoder_trans, theta.decoder_trans):
# [batch, target_time, dim]
layer_out, _ = layer.FProp(
layer_theta,
layer_in,
target_paddings,
aux_vec,
aux_paddings,
segment_mask=segment_mask if p.packed_input else None,
aux_segment_mask=aux_segment_mask if p.packed_input else None)
layer_in = layer_out
if p.final_layer_norm:
layer_out = self.final_ln.FProp(theta.final_ln, layer_out)
if p.prediction_data_format == 'TBC':
# Transpose the softmax_input to match the input requirement of
# ComputePredictions.
layer_out = tf.transpose(layer_out, [1, 0, 2])
return layer_out
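The causal masking used on the packed-input path above can be sketched in plain Python (a toy illustration, not the `batch_major_attention` helper itself):

```python
# Toy sketch of the causal padding combined with the segment mask above:
# entry (i, j) is 1.0 when position j lies in position i's future, else 0.0.
# The real code scales this by a large negative constant and adds it to the
# segment mask so attention to future steps is suppressed.
def causal_padding(target_time):
    return [[0.0 if j <= i else 1.0 for j in range(target_time)]
            for i in range(target_time)]

mask = causal_padding(3)
```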
def ExtendStep(self,
theta,
encoder_outputs,
new_ids,
time_step,
prefix_states,
use_short_seq_opt=False):
"""Extend prefix as represented by `prefix_states` by one more step.
This function is expected to be called during fast decoding of Transformer
models.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: A `.NestedMap` object computed by encoder.
* encoded - Source encoding of shape [source_time, source_batch, dim] or
[source_batch, source_time, dim], depending on p.input_data_format.
* paddings - Source encoding's padding of shape [source_time, source_batch]
or [source_batch, source_time].
new_ids: New input ids, of shape [target_batch, 1].
time_step: A scalar, the current decode step, 0-based.
prefix_states: A `.NestedMap` representing the previous decoded states.
key - [target_time, target_batch, num_heads, dim_per_head]. value -
[target_time, target_batch, num_heads, dim_per_head].
use_short_seq_opt: A bool, whether using short sequence optimization.
Returns:
last_decoder_out: The last decoder layer of shape [target_batch, dim].
updated_prefix_states: A `.NestedMap` representing the updated states.
key - [target_time, target_batch, num_heads, dim_per_head].
value - [target_time, target_batch, num_heads, dim_per_head].
"""
p = self.params
encoder_out_bm = self._MaybeTransposeEncoderOutputs(encoder_outputs, 'BTC')
# [source_batch, source_time, dim]
aux_vec = encoder_out_bm.encoded
# [source_batch, source_time]
aux_paddings = encoder_out_bm.padding
with tf.name_scope(p.name):
# Embedding layer
# [target_batch, 1, dim]
if not p.shared_emb:
token_embs = self.token_emb.EmbLookup(theta.token_emb, new_ids)
else:
token_embs = self.softmax.EmbLookup(theta.softmax, new_ids)
# [1, 1, dim]
if isinstance(time_step, tf.Tensor):
time_step_t = tf.reshape(time_step, [1, 1])
elif isinstance(time_step, six.integer_types):
time_step_t = tf.constant([[time_step]], dtype=tf.int32)
else:
raise ValueError('Unexpected input type `%s` for `time_step`.' %
type(time_step))
posit_embs = self.position_emb.FPropWithPosition(theta.position_emb,
time_step_t)
# [target_batch, 1, dim]
input_embs = token_embs + tf.cast(posit_embs, token_embs.dtype)
if p.input_dropout_tpl.fprop_dtype:
input_embs = tf.cast(input_embs, p.input_dropout_tpl.fprop_dtype)
# Make a copy of the input.
updated_prefix_states = prefix_states.DeepCopy()
input_embs = self.input_dropout.FProp(theta.input_dropout, input_embs)
layer_in = input_embs
for i, (layer, layer_theta) in enumerate(
zip(self.decoder_trans, theta.decoder_trans)):
# [target_batch, 1, dim]
layer_out, updated_states = layer.ExtendStep(
layer_theta, layer_in, aux_vec, aux_paddings,
prefix_states['layer_%i' % i], time_step, use_short_seq_opt)
updated_prefix_states['layer_%i' % i] = updated_states
layer_in = layer_out
# [target_batch, dim]
last_decoder_out = tf.squeeze(layer_out, 1)
if p.final_layer_norm:
last_decoder_out = self.final_ln.FProp(theta.final_ln, last_decoder_out)
return last_decoder_out, updated_prefix_states
def ComputePredictions(self, theta, encoder_outputs, targets):
"""Decodes `targets` given encoded source.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: A `.NestedMap` object computed by encoder.
* encoded - Source encoding of shape [source_time, source_batch, dim] or
[source_batch, source_time, dim], depending on p.input_data_format.
* paddings - Source encoding's padding of shape [source_time, source_batch]
or [source_batch, source_time].
targets: A dict of string to tensors representing the targets one tries to
predict. Each tensor in targets is of shape [batch, target_time].
Returns:
Output of the last decoder layer, of shape [target_time, batch, dim].
"""
return self._FProp(theta, encoder_outputs, targets)
def _FPropFastSoftmax(self,
theta,
softmax_input,
target_labels,
target_weights,
time_axis=0):
"""Computes cross-entropy loss with label smoothing.
Compared to _FPropSoftmax, this version is faster because it removes the
data-formatting overhead and the bias of the linear projection. A
normalizing constant is also subtracted from the xentropy result for
better-calibrated loss values.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
softmax_input: A tensor of shape [time, batch, p.softmax.input_dim].
target_labels: A matrix of tf.int32. [time, batch].
target_weights: A matrix of params.dtype. [time, batch].
time_axis: If 0, the inputs are time-major: [time, batch, ...]; if 1, the
inputs are batch-major: [batch, time, ...].
Returns:
A tuple (metrics, per_example_tensors).
metrics:
A dictionary containing metrics for the xent loss and prediction
accuracy.
per_example_tensors:
A dictionary of per-example tensors.
"""
p = self.params
assert p.label_smoothing is not None
assert p.per_word_avg_loss
softmax_input = tf.reshape(softmax_input, [-1, p.softmax.input_dim])
logits = self.softmax.SimpleLogits(theta.softmax, softmax_input)
logits = tf.cast(logits, tf.float32)
high_confidence = 1.0 - p.label_smoothing.uncertainty
low_confidence = p.label_smoothing.uncertainty / tf.cast(
p.label_smoothing.num_classes - 1, tf.float32)
normalizing = -(
high_confidence * tf.math.log(high_confidence) +
tf.cast(p.softmax.num_classes - 1, tf.float32) * low_confidence *
tf.math.log(low_confidence + 1e-20))
target_labels = tf.reshape(target_labels, [-1])
soft_targets = tf.one_hot(
tf.cast(target_labels, tf.int32),
depth=p.softmax.num_classes,
on_value=high_confidence,
off_value=low_confidence)
xentropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=soft_targets)
xent = xentropy - normalizing
target_weights_shape = py_utils.GetShape(target_weights)
orig_target_weights = target_weights
target_weights = tf.cast(tf.reshape(target_weights, [-1]), xent.dtype)
total_xent = tf.reduce_sum(xent * target_weights)
total_weights = tf.reduce_sum(target_weights)
final_loss = total_xent / total_weights
loss_weight = total_weights
metrics = {
'loss': (final_loss, loss_weight),
'log_pplx': (final_loss, loss_weight),
}
per_example_tensors = {}
if p.per_example_tensors:
per_example_tensors['per_example_loss'] = tf.reshape(
xent, target_weights_shape)
per_example_tensors['per_sequence_loss'] = tf.reduce_sum(
per_example_tensors['per_example_loss'] * orig_target_weights,
axis=time_axis)
per_example_tensors['loss'] = per_example_tensors['per_sequence_loss']
per_example_tensors['logits'] = tf.reshape(
logits, tf.concat([target_weights_shape, [-1]], 0))
per_example_tensors['log_probs'] = tf.reshape(
tf.nn.log_softmax(logits), tf.concat([target_weights_shape, [-1]], 0))
# NOTE: tf.argmax is not implemented for the JF backend, see b/36093673
# Skip the fraction_of_correct_next_step_preds during training.
if self.do_eval:
correct_preds = tf.cast(
tf.equal(
tf.cast(tf.reshape(tf.argmax(logits, 1), [-1]), tf.int32),
tf.reshape(target_labels, [-1])), p.dtype)
correct_next_preds = tf.reduce_sum(
correct_preds * tf.reshape(tf.cast(target_weights, p.dtype), [-1]))
num_preds = tf.reduce_sum(tf.cast(target_weights, p.dtype))
accuracy = tf.identity(
correct_next_preds / num_preds,
name='fraction_of_correct_next_step_preds')
metrics['fraction_of_correct_next_step_preds'] = (accuracy, num_preds)
return metrics, per_example_tensors
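The label-smoothing constants used in `_FPropFastSoftmax` can be computed in isolation; this is a hedged toy sketch (not lingvo code) showing the smoothed target distribution and the normalizing constant subtracted from the raw cross entropy:

```python
import math

# Toy sketch of the label-smoothing constants: the smoothed target puts
# `high` probability mass on the gold label, `low` on every other class,
# and `normalizing` is the entropy-like constant subtracted from the raw
# cross entropy so a perfect prediction scores a loss near zero.
def smoothing_constants(uncertainty, num_classes):
    high = 1.0 - uncertainty
    low = uncertainty / (num_classes - 1)
    normalizing = -(high * math.log(high) +
                    (num_classes - 1) * low * math.log(low + 1e-20))
    return high, low, normalizing

high, low, norm = smoothing_constants(0.1, 5)
```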
def ComputeLoss(self, theta, predictions, targets):
"""Populates a metrics dictionary based on the output of ComputePredictions.
Args:
theta: Nested map describing decoder model parameters.
predictions: NestedMap describing the decoding process, requiring:
.softmax_input: Tensor of shape [time, batch, params.softmax.input_dim].
targets: NestedMap describing the target sequences.
Returns:
Two dicts.
- A map from metric name (a python string) to a tuple (value, weight).
Both value and weight are scalar Tensors.
- A map from name to arbitrary tensors, where the first dimension must
be the batch index.
"""
p = self.params
targets = self._MaybeTransposeTargets(targets)
if isinstance(predictions, py_utils.NestedMap):
predictions = predictions.softmax_input
time_axis = {'TBC': 0, 'BTC': 1}.get(p.prediction_data_format)
if p.use_fast_softmax:
return self._FPropFastSoftmax(
theta,
predictions,
targets.labels,
targets.weights,
time_axis=time_axis)
else:
return self._FPropSoftmax(
theta,
predictions,
targets.labels,
targets.weights,
targets.paddings,
targets.get('segment_ids', None),
time_axis=time_axis)
def _InitBeamSearchStateCallback(self, theta, encoder_outputs,
num_hyps_per_beam):
"""Returns initial beams search states.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: A `.NestedMap` object computed by encoder.
* encoded - Source encoding of shape [source_time, source_batch, dim] or
[source_batch, source_time, dim], depending on p.input_data_format.
* paddings - Source encoding's padding of shape [source_time, source_batch]
or [source_batch, source_time].
num_hyps_per_beam: An int, the number of hyps to keep per source sentence.
Returns:
initial_results: A `.NestedMap` of initial beam search results.
log_probs - Log prob for each of the tokens in the target vocab,
of shape [target_batch, vocab_size].
atten_probs - The updated attention probs, of shape
[target_batch, source_time].
states: A `.NestedMap` of initial model states.
prefix_states - A `.NestedMap` representing the empty decoded states.
key - [target_time, target_batch, num_heads, dim_per_head].
value - [target_time, target_batch, num_heads, dim_per_head].
time_step - A scalar, the initial decode step (0).
"""
p = self.params
# [source_batch, source_time, dim]
encoder_out_bm = self._MaybeTransposeEncoderOutputs(encoder_outputs, 'BTC')
aux_vec = encoder_out_bm.encoded
target_batch = py_utils.GetShape(aux_vec)[0] * num_hyps_per_beam
source_time = py_utils.GetShape(aux_vec)[1]
target_time = p.target_seq_len
log_probs = tf.zeros([target_batch, p.softmax.num_classes],
dtype=py_utils.FPropDtype(p))
# Dummy attention probs
atten_probs = (
tf.ones([target_batch, source_time], dtype=py_utils.FPropDtype(p)) /
tf.cast(source_time, py_utils.FPropDtype(p)))
initial_results = py_utils.NestedMap(
log_probs=log_probs, atten_probs=atten_probs)
dim = p.trans_decoder_tpl.tr_atten_tpl.hidden_dim
if not dim:
dim = p.model_dim
num_heads = p.trans_decoder_tpl.tr_atten_tpl.num_heads
# If per-head dim is less than 128, make the cached shape 128 to avoid
# padding and more efficient interpolation in beamsearch.
if dim // num_heads < 128 and dim % 128 == 0:
num_heads = dim // 128
def _GenStates():
return py_utils.NestedMap({
'key':
inplace_ops.empty(
[target_time, target_batch, num_heads, dim // num_heads],
dtype=py_utils.FPropDtype(p.trans_decoder_tpl),
init=True),
'value':
inplace_ops.empty(
[target_time, target_batch, num_heads, dim // num_heads],
dtype=py_utils.FPropDtype(p.trans_decoder_tpl),
init=True),
})
prefix_states = py_utils.NestedMap({
'layer_%d' % layer: _GenStates() for layer in range(p.num_trans_layers)
})
return initial_results, py_utils.NestedMap({
'prefix_states': prefix_states,
'time_step': tf.constant(0)
})
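The cache-shape arithmetic in `_InitBeamSearchStateCallback` can be isolated as a small sketch (toy code, with the 128-wide threshold taken from the comment above):

```python
# Toy sketch of the cache-shape trick: when the per-head dim is below 128
# and the model dim divides 128 evenly, the key/value cache is laid out as
# fewer, 128-wide "heads" to avoid padding and allow more efficient
# interpolation during beam search.
def cache_layout(dim, num_heads):
    if dim // num_heads < 128 and dim % 128 == 0:
        num_heads = dim // 128
    return num_heads, dim // num_heads

layout = cache_layout(1024, 16)
```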
def _PreBeamSearchStepCallback(self,
theta,
encoder_outputs,
new_ids,
states,
num_hyps_per_beam,
use_short_seq_opt=False):
"""Returns logits for sampling ids and the next model states.
Args:
theta: A `.NestedMap` object containing weights' values of this layer and
its children layers.
encoder_outputs: A `.NestedMap` object computed by encoder.
* encoded - Source encoding of shape [source_time, source_batch, dim] or
[source_batch, source_time, dim], depending on p.input_data_format.
* paddings - Source encoding's padding of shape [source_time, source_batch]
or [source_batch, source_time].
new_ids: A tensor of shape [target_batch, 1].
states: A `.NestedMap` of tensors representing states that the clients
would like to keep track of for each of the active hyps.
* prefix_states - A `.NestedMap` representing the previous decoded states.
key - [target_time, target_batch, num_heads, dim_per_head].
value - [target_time, target_batch, num_heads, dim_per_head].
* time_step - A scalar, the current decode step, 0-based.
num_hyps_per_beam: A scalar, beam size.
use_short_seq_opt: A bool, whether using short sequence optimization.
Returns:
bs_results: A `.NestedMap` of beam search results.
log_probs - Log prob for each of the tokens in the target vocab,
of shape [target_batch, vocab_size].
atten_probs - The updated attention probs, of shape
[target_batch, source_time].
new_states: A `.NestedMap` object. The updated states.
prefix_states - A `.NestedMap` representing the updated decoded states.
key - [target_time, target_batch, num_heads, dim_per_head].
value - [target_time, target_batch, num_heads, dim_per_head].
time_step - A scalar, the current decode step, 0-based.
"""
p = self.params
# [source_batch, source_time, dim]
encoder_out_bm = self._MaybeTransposeEncoderOutputs(encoder_outputs, 'BTC')
target_batch = py_utils.GetShape(new_ids)[0]
source_batch = target_batch // num_hyps_per_beam
new_states = states.Pack(states.Flatten())
time_step = states.time_step
prefix_states = states.prefix_states
# The inputs are ordered as num_hyps_per_beam by num_beams,
# which needs to be transposed for the layer computation.
# [num_hyps_per_beam, source_batch, 1]
new_ids = tf.reshape(new_ids, [num_hyps_per_beam, source_batch, 1])
# [source_batch, num_hyps_per_beam, 1]
new_ids = tf.transpose(new_ids, [1, 0, 2])
# [source_batch * num_hyps_per_beam, 1]
new_ids = tf.reshape(new_ids, [-1, 1])
softmax_input, updated_prefix_states = self.ExtendStep(
theta, encoder_outputs, new_ids, time_step, prefix_states,
use_short_seq_opt)
# Transpose the outputs as num_beams by num_hyps_per_beam to match the
# beam search requirement.
# [source_batch, num_hyps_per_beam, dim]
softmax_input = tf.reshape(softmax_input,
[source_batch, num_hyps_per_beam, -1])
# [num_hyps_per_beam, source_batch, dim]
softmax_input = tf.transpose(softmax_input, [1, 0, 2])
# [num_hyps_per_beam * source_batch, dim]
softmax_input = tf.reshape(softmax_input, [target_batch, -1])
# [target_batch, vocab_size]
logits = self.softmax.Logits(theta.softmax, [softmax_input])
# Only return logits for the last ids
log_probs = tf.nn.log_softmax(logits)
# Dummy attention probs
source_time = py_utils.GetShape(encoder_out_bm.padding)[1]
atten_probs = (
tf.ones([target_batch, source_time], dtype=py_utils.FPropDtype(p)) /
tf.cast(source_time, py_utils.FPropDtype(p)))
bs_results = py_utils.NestedMap({
'log_probs': log_probs,
'atten_probs': atten_probs,
})
new_states.prefix_states = updated_prefix_states
new_states.time_step = time_step + 1
return bs_results, new_states
def _PostBeamSearchStepCallback(self, theta, encoder_outputs, new_step_ids,
states):
# There is nothing to do here.
return states
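The step callback above reorders hypotheses between a hyp-major layout (`[num_hyps_per_beam, source_batch]`) and a batch-major layout via reshape/transpose. A minimal pure-Python sketch of the same index permutation (no TensorFlow; function names are illustrative, not part of the original code):

```python
def hyps_major_to_batch_major(flat, num_hyps_per_beam, source_batch):
    # Input order: index = h * source_batch + b (all of hyp 0, then hyp 1, ...).
    # Output order: index = b * num_hyps_per_beam + h (all hyps of beam 0, ...).
    out = [None] * len(flat)
    for h in range(num_hyps_per_beam):
        for b in range(source_batch):
            out[b * num_hyps_per_beam + h] = flat[h * source_batch + b]
    return out

def batch_major_to_hyps_major(flat, num_hyps_per_beam, source_batch):
    # Inverse permutation of the function above.
    out = [None] * len(flat)
    for b in range(source_batch):
        for h in range(num_hyps_per_beam):
            out[h * source_batch + b] = flat[b * num_hyps_per_beam + h]
    return out
```

This is exactly what the `tf.reshape` / `tf.transpose` pairs compute on the flattened batch dimension.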
| [
"vbittorf@google.com"
] | vbittorf@google.com |
dbb210648c4d90f7249728ed0cec7c1512ae0bec | 52b5773617a1b972a905de4d692540d26ff74926 | /.history/stringMethods_20200707100427.py | cea2c06286512296eee3554a7f49dc894e2a3569 | [] | no_license | MaryanneNjeri/pythonModules | 56f54bf098ae58ea069bf33f11ae94fa8eedcabc | f4e56b1e4dda2349267af634a46f6b9df6686020 | refs/heads/master | 2022-12-16T02:59:19.896129 | 2020-09-11T12:05:22 | 2020-09-11T12:05:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 132 | py | def array(arr):
    newArr = []
    for i in range(len(arr)):
        # Assumed intent: parse each "[...]" string into a list of ints.
        b = [int(n) for n in arr[i].strip('[]').split(',')]
        newArr.append(b)
    print(newArr)


array(["[6,7,5]", "[1,8]"])
"mary.jereh@gmail.com"
] | mary.jereh@gmail.com |
b0852967afd53f043a2055024b36f50275138912 | 134ff3c0719d4c0022eb0fb7c859bdbff5ca34b2 | /apps/jobbrowser/src/jobbrowser/models.py | f45e5ec181000982b3479e860d0125504c6cc3dd | [
"Apache-2.0"
] | permissive | civascu/hue | 22637f13a4cfc557716557661523131b6ac16da4 | 82f2de44789ff5a981ed725175bae7944832d1e9 | refs/heads/master | 2020-03-31T01:50:39.449966 | 2010-07-21T01:05:50 | 2010-07-21T01:07:15 | 788,284 | 0 | 0 | Apache-2.0 | 2019-02-04T07:03:12 | 2010-07-21T07:34:27 | Python | UTF-8 | Python | false | false | 19,479 | py | #!/usr/bin/env python
# Licensed to Cloudera, Inc. under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Cloudera, Inc. licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from desktop.lib.view_util import format_time_diff
from hadoop import job_tracker
from hadoop import confparse
from urlparse import urlparse, urlunparse
import datetime
import logging
import lxml.html
import re
import urllib2
import hadoop.api.jobtracker.ttypes as ttypes
LOGGER = logging.getLogger(__name__)
class JobLinkage(object):
"""
A thin representation of a job, without much of the details.
Its purpose is to wrap a JobID to allow us to get further
information from Hadoop, without instantiating a full Job object
(which requires talking to Hadoop).
"""
def __init__(self, jobtracker, jobid):
"""
JobLinkage(jobtracker, jobid) -> JobLinkage
The jobid is the jobid string (not the thrift jobid)
"""
self._jobtracker = jobtracker
self.jobId = jobid
self.jobId_short = "_".join(jobid.split("_")[-2:])
def get_task(self, task_id):
"""Retrieve a TaskInProgress from hadoop."""
ttask = self._jobtracker.get_task(
self._jobtracker.thriftjobid_from_string(self.jobId),
self._jobtracker.thrifttaskid_from_string(task_id))
return Task(ttask, self._jobtracker)
class Job(JobLinkage):
"""
Creates a Job instance pulled from the job tracker Thrift interface.
"""
def __getitem__(self, item):
"""
For backwards-compatibility, resolve job["foo"] as job.foo
"""
return getattr(self, item)
@staticmethod
def from_id(jt, jobid):
"""
Returns a Job instance given a job tracker interface and an id. The job tracker interface is typically
located in request.jt.
"""
thriftjob = jt.get_job(jt.thriftjobid_from_string(jobid))
if not thriftjob:
raise Exception("could not find job with id %s" % jobid)
return Job(jt, thriftjob)
@staticmethod
def from_thriftjob(jt, thriftjob):
"""
Returns a Job instance given a job tracker interface and a thriftjob object returned from that job tracker interface.
The job tracker interface is typically located in request.jt
"""
return Job(jt, thriftjob)
def __init__(self, jt, thriftJob):
"""
Returns a Job instance given a job tracker interface and a thriftjob object returned from that
job tracker interface. The job tracker interface is typically located in request.jt
"""
JobLinkage.__init__(self, jt, thriftJob.jobID.asString)
self.jt = jt
self.job = thriftJob
self.tasks = []
if self.job.tasks is not None:
self.tasks = TaskList.from_thriftTaskList(self.job.tasks, jt)
self.task_map = dict( (task.taskId, task) for task in self.tasks )
self._counters = None
self._conf_keys = None
self._full_job_conf = None
self._init_attributes()
@property
def counters(self):
if self._counters is None:
rollups = self.jt.get_job_counter_rollups(self.job.jobID)
# We get back a structure with counter lists for maps, reduces, and total
# and we need to invert this
def aggregate_counters(ctrs_from_jt, key, target):
for group in ctrs_from_jt.groups:
if group.name not in target:
target[group.name] = {
'name': group.name,
'displayName': group.displayName,
'counters': {}
}
agg_counters = target[group.name]['counters']
for counter in group.counters.itervalues():
if counter.name not in agg_counters:
agg_counters[counter.name] = {
'name': counter.name,
'displayName': counter.displayName,
}
agg_counters[counter.name][key] = counter.value
self._counters = {}
aggregate_counters(rollups.mapCounters, "map", self._counters)
aggregate_counters(rollups.reduceCounters, "reduce", self._counters)
aggregate_counters(rollups.jobCounters, "total", self._counters)
return self._counters
@property
def conf_keys(self):
if self._conf_keys is None:
self._initialize_conf_keys()
return self._conf_keys
@property
def full_job_conf(self):
if self._full_job_conf is None:
self._initialize_conf_keys()
return self._full_job_conf
def _init_attributes(self):
self.queueName = self.job.profile.queueName
self.jobName = self.job.profile.name
self.user = self.job.profile.user
self.mapProgress = self.job.status.mapProgress
self.reduceProgress = self.job.status.reduceProgress
self.setupProgress = self.job.status.setupProgress
self.cleanupProgress = self.job.status.cleanupProgress
if self.job.desiredMaps == 0:
maps_percent_complete = 0
else:
maps_percent_complete = int(round(float(self.job.finishedMaps)/self.job.desiredMaps*100))
self.desiredMaps = self.job.desiredMaps
if self.job.desiredReduces == 0:
reduces_percent_complete = 0
else:
reduces_percent_complete = int(round(float(self.job.finishedReduces)/self.job.desiredReduces*100))
self.desiredReduces = self.job.desiredReduces
self.maps_percent_complete = maps_percent_complete
self.finishedMaps = self.job.finishedMaps
self.finishedReduces = self.job.finishedReduces
self.reduces_percent_complete = reduces_percent_complete
self.startTimeMs = self.job.startTime
self.startTimeFormatted = format_unixtime_ms(self.job.startTime)
self.launchTimeMs = self.job.launchTime
self.launchTimeFormatted = format_unixtime_ms(self.job.launchTime)
self.finishTimeMs = self.job.finishTime
self.finishTimeFormatted = format_unixtime_ms(self.job.finishTime)
self.status = self.job.status.runStateAsString
self.priority = self.job.priorityAsString
self.jobFile = self.job.profile.jobFile
finishTime = self.job.finishTime
if finishTime == 0:
finishTime = datetime.datetime.now()
else:
finishTime = datetime.datetime.fromtimestamp(finishTime/1000)
self.duration = finishTime - datetime.datetime.fromtimestamp(self.job.startTime/1000)
self.durationFormatted = format_time_diff(datetime.datetime.fromtimestamp(self.job.startTime/1000), finishTime)
def kill(self):
self.jt.kill_job(self.job.jobID)
def get_task(self, id):
try:
return self.task_map[id]
    except KeyError:
return JobLinkage.get_task(self, id)
def filter_tasks(self, task_types=None, task_states=None, task_text=None):
"""
Filters the tasks of the job.
Pass in task_type and task_state as sets; None for "all".
task_text is used to search in the state, mostRecentState, and the ID.
"""
assert task_types is None or job_tracker.VALID_TASK_TYPES.issuperset(task_types)
assert task_states is None or job_tracker.VALID_TASK_STATES.issuperset(task_states)
def is_good_match(t):
if task_types is not None:
if t.task.taskID.taskTypeAsString.lower() not in task_types:
return False
if task_states is not None:
if t.state.lower() not in task_states:
return False
if task_text is not None:
tt_lower = task_text.lower()
if tt_lower not in t.state.lower() and tt_lower not in t.mostRecentState.lower() and tt_lower not in t.task.taskID.asString.lower():
return False
return True
return [ t for t in self.tasks if is_good_match(t) ]
def _initialize_conf_keys(self):
conf_keys = [
'mapred.mapper.class',
'mapred.reducer.class',
'mapred.input.format.class',
'mapred.output.format.class',
'mapred.input.dir',
'mapred.output.dir',
]
jobconf = get_jobconf(self.jt, self.jobId)
self._full_job_conf = jobconf
self._conf_keys = {}
for k, v in jobconf.iteritems():
if k in conf_keys:
self._conf_keys[dots_to_camel_case(k)] = v
class TaskList(object):
@staticmethod
def select(jt, jobid, task_types, task_states, text, count, offset):
"""
select(jt, jobid, task_types, task_states, text, count, offset) -> TaskList
Retrieve a TaskList from Hadoop according to the given criteria.
task_types is a set of job_tracker.VALID_TASK_TYPES. A value to None means everything.
task_states is a set of job_tracker.VALID_TASK_STATES. A value to None means everything.
"""
assert task_types is None or job_tracker.VALID_TASK_TYPES.issuperset(task_types)
assert task_states is None or job_tracker.VALID_TASK_STATES.issuperset(task_states)
if task_types is None:
task_types = job_tracker.VALID_TASK_TYPES
if task_states is None:
task_states = job_tracker.VALID_TASK_STATES
tjobid = jt.thriftjobid_from_string(jobid)
thrift_list = jt.get_task_list(tjobid, task_types, task_states, text, count, offset)
return TaskList.from_thriftTaskList(thrift_list, jt)
@staticmethod
def from_thriftTaskList(thrift_task_list, jobtracker):
"""TaskList.from_thriftTaskList(thrift_task_list, jobtracker) -> TaskList
"""
if thrift_task_list is None:
return None
return TaskList(thrift_task_list, jobtracker)
def __init__(self, tasklist, jobtracker):
self.__tasklist = tasklist # The thrift task list
self.__jt = jobtracker
self.__init_attributes()
def __init_attributes(self):
self.__tasksSoFar = [ Task(t, self.__jt) for t in self.__tasklist.tasks ]
self.__nTotalTasks = self.__tasklist.numTotalTasks
def __iter__(self):
return self.__tasksSoFar.__iter__()
def __len__(self):
return len(self.__tasksSoFar)
def __getitem__(self, key):
return self.__tasksSoFar[key]
@property
def tasks(self):
return self.__tasksSoFar
@property
def numTotalTasks(self):
return self.__nTotalTasks
class Task(object):
def __getitem__(self, item):
"""
    For backwards-compatibility, resolve task["foo"] as task.foo
"""
return getattr(self, item)
def __init__(self, task, jt):
self.task = task
self.jt = jt
self._init_attributes()
self.attempt_map = {}
for id, attempt in self.task.taskStatuses.iteritems():
ta = TaskAttempt(attempt, task=self)
self.attempt_map[id] = ta
@property
def attempts(self):
return self.attempt_map.values()
def _init_attributes(self):
self.taskType = self.task.taskID.taskTypeAsString
self.taskId = self.task.taskID.asString
self.taskId_short = "_".join(self.taskId.split("_")[-2:])
self.startTimeMs = self.task.startTime
self.startTimeFormatted = format_unixtime_ms(self.task.startTime)
self.execStartTimeMs = self.task.execStartTime
self.execStartTimeFormatted = format_unixtime_ms(self.task.execStartTime)
self.execFinishTimeMs = self.task.execFinishTime
self.execFinishTimeFormatted = format_unixtime_ms(self.task.execFinishTime)
self.state = self.task.state
assert self.state in job_tracker.VALID_TASK_STATES
self.progress = self.task.progress
self.taskId = self.task.taskID.asString
self.jobId = self.task.taskID.jobID.asString
self.taskAttemptIds = self.task.taskStatuses.keys()
self.mostRecentState = self.task.mostRecentState
self.diagnosticMap = self.task.taskDiagnosticData
self.counters = self.task.counters
self.failed = self.task.failed
self.complete = self.task.complete
def get_attempt(self, id):
"""
Returns a TaskAttempt for a given id.
"""
return self.attempt_map[id]
class TaskAttempt(object):
def __getitem__(self, item):
"""
For backwards-compatibility, resolve task["foo"] as task.foo.
"""
return getattr(self, item)
def __init__(self, task_attempt, task):
assert task_attempt is not None
self.task_attempt = task_attempt
self.task = task
    self._init_attributes()
def _init_attributes(self):
self.taskType = self.task_attempt.taskID.taskID.taskTypeAsString
self.attemptId = self.task_attempt.taskID.asString
self.attemptId_short = "_".join(self.attemptId.split("_")[-2:])
self.startTimeMs = self.task_attempt.startTime
self.startTimeFormatted = format_unixtime_ms(self.task_attempt.startTime)
self.finishTimeMs = self.task_attempt.finishTime
self.finishTimeFormatted = format_unixtime_ms(self.task_attempt.finishTime)
self.state = self.task_attempt.stateAsString.lower()
self.taskTrackerId = self.task_attempt.taskTracker
self.phase = self.task_attempt.phaseAsString
self.progress = self.task_attempt.progress
self.outputSize = self.task_attempt.outputSize
self.shuffleFinishTimeMs = self.task_attempt.shuffleFinishTime
self.shuffleFinishTimeFormatted = format_unixtime_ms(self.task_attempt.shuffleFinishTime)
self.sortFinishTimeMs = self.task_attempt.sortFinishTime
self.sortFinishTimeFormatted = format_unixtime_ms(self.task_attempt.sortFinishTime)
self.mapFinishTimeMs = self.task_attempt.mapFinishTime # DO NOT USE, NOT VALID IN 0.20
self.mapFinishTimeFormatted = format_unixtime_ms(self.task_attempt.mapFinishTime)
self.counters = self.task_attempt.counters
def get_tracker(self):
try:
tracker = Tracker.from_name(self.task.jt, self.taskTrackerId)
return tracker
except ttypes.TaskTrackerNotFoundException, e:
LOGGER.warn("Tracker %s not found: %s" % (self.taskTrackerId, e))
all_trackers = self.task.jt.all_task_trackers()
for t in all_trackers.trackers:
LOGGER.debug("Available tracker: %s" % t.trackerName)
raise ttypes.TaskTrackerNotFoundException(
"Cannot lookup TaskTracker '%s'" % (self.taskTrackerId,))
def get_task_log(self):
"""
get_task_log(task_id) -> (stdout_text, stderr_text, syslog_text)
Retrieve the task log from the TaskTracker, at this url:
http://<tracker_host>:<port>/tasklog?taskid=<attempt_id>
Optional query string:
&filter=<source> : where <source> is 'syslog', 'stdout', or 'stderr'.
&start=<offset> : specify the start offset of the log section, when using a filter.
&end=<offset> : specify the end offset of the log section, when using a filter.
"""
tracker = self.get_tracker()
url = urlunparse(('http',
'%s:%s' % (tracker.host, tracker.httpPort),
'tasklog',
None,
'taskid=%s' % (self.attemptId,),
None))
LOGGER.info('Retrieving %s' % (url,))
try:
data = urllib2.urlopen(url)
except urllib2.URLError:
raise urllib2.URLError("Cannot retrieve logs from TaskTracker '%s'" % (self.taskTrackerId,))
et = lxml.html.parse(data)
log_sections = et.findall('body/pre')
if len(log_sections) != 3:
LOGGER.warn('Error parsing task attempt log for %s at "%s". Found %d (not 3) log sections' %
(self.attemptId, url, len(log_sections)))
err = "Hue encountered an error while retrieving logs from '%s'" % (url,)
return (err, err, err)
return [ section.text for section in log_sections ]
class Tracker(object):
def __getitem__(self, item):
"""
    For backwards-compatibility, resolve tracker["foo"] as tracker.foo.
"""
return getattr(self, item)
@staticmethod
def from_name(jt, trackername):
return Tracker(jt.task_tracker(trackername))
def __init__(self, thrifttracker):
self.tracker = thrifttracker
    self._init_attributes()
def _init_attributes(self):
self.trackerId = self.tracker.trackerName
self.httpPort = self.tracker.httpPort
self.host = self.tracker.host
self.lastSeenMs = self.tracker.lastSeen
self.lastSeenFormatted = format_unixtime_ms(self.tracker.lastSeen)
self.totalVirtualMemory = self.tracker.totalVirtualMemory
self.totalPhysicalMemory = self.tracker.totalPhysicalMemory
self.availableSpace = self.tracker.availableSpace
self.failureCount = self.tracker.failureCount
self.mapCount = self.tracker.mapCount
self.reduceCount = self.tracker.reduceCount
self.maxMapTasks = self.tracker.maxMapTasks
self.maxReduceTasks = self.tracker.maxReduceTasks
self.taskReports = self.tracker.taskReports
class Cluster(object):
def __getitem__(self, item):
"""
    For backwards-compatibility, resolve cluster["foo"] as cluster.foo
"""
return getattr(self, item)
def __init__(self, jt):
self.status = jt.cluster_status()
    self._init_attributes()
def _init_attributes(self):
self.mapTasksInProgress = self.status.mapTasks
self.reduceTasksInProgress = self.status.reduceTasks
self.maxMapTasks = self.status.maxMapTasks
self.maxReduceTasks = self.status.maxReduceTasks
self.usedHeapMemory = self.status.usedMemory
self.maxHeapMemory = self.status.maxMemory
self.clusterStartTimeMs = self.status.startTime
self.clusterStartTimeFormatted = format_unixtime_ms(self.status.startTime)
self.identifier = self.status.identifier
self.taskTrackerExpiryInterval = self.status.taskTrackerExpiryInterval
self.totalJobSubmissions = self.status.totalSubmissions
self.state = self.status.stateAsString
self.numActiveTrackers = self.status.numActiveTrackers
self.activeTrackerNames = self.status.activeTrackerNames
self.numBlackListedTrackers = self.status.numBlacklistedTrackers
self.blacklistedTrackerNames = self.status.blacklistedTrackerNames
self.hostname = self.status.hostname
self.httpPort = self.status.httpPort
# self.currentTimeMs = curtime
# self.currentTimeFormatted = format_unixtime_ms(curtime)
def get_jobconf(jt, jobid):
"""
  Returns a dict-like ConfParse representation of the jobconf for the job
  corresponding to jobid.
"""
jid = jt.thriftjobid_from_string(jobid)
  # This will throw if the jobconf can't be found
xml_data = jt.get_job_xml(jid)
return confparse.ConfParse(xml_data)
def format_unixtime_ms(unixtime):
"""
Format a unix timestamp in ms to a human readable string
"""
if unixtime:
return str(datetime.datetime.fromtimestamp(unixtime/1000).strftime("%x %X %Z"))
else:
return ""
DOTS = re.compile(r"\.([a-z])")
def dots_to_camel_case(dots):
"""
Takes a string delimited with periods and returns a camel-case string.
  Example: dots_to_camel_case("foo.bar.baz") returns "fooBarBaz"
"""
def return_upper(match):
return match.groups()[0].upper()
return str(DOTS.sub(return_upper, dots))
def get_path(hdfs_url):
"""
Returns the path component of an HDFS url.
"""
# urlparse is lame, and only "uses_netloc" for a certain
# set of protocols. So we replace hdfs with gopher:
if hdfs_url.startswith("hdfs://"):
gopher_url = "gopher://" + hdfs_url[7:]
path = urlparse(gopher_url)[2] # path
return path
else:
return hdfs_url
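A self-contained Python 3 re-implementation of the two small helpers above, for illustration only (the original file targets Python 2, where `urlparse` only split the netloc for known schemes, hence the gopher swap):

```python
import re
from urllib.parse import urlparse

DOTS = re.compile(r"\.([a-z])")

def dots_to_camel_case(dots):
    # "mapred.input.dir" -> "mapredInputDir": uppercase each letter after a dot.
    return DOTS.sub(lambda m: m.group(1).upper(), dots)

def get_path(hdfs_url):
    # Same scheme-swap workaround as the original; harmless in Python 3,
    # where urlparse splits the netloc for any scheme followed by "//".
    if hdfs_url.startswith("hdfs://"):
        return urlparse("gopher://" + hdfs_url[7:]).path
    return hdfs_url
```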
| [
"bcwalrus@cloudera.com"
] | bcwalrus@cloudera.com |
a69b18877b22eeed94b6afd73ed99375d2f964fb | 43378f262acb3bbf6af8d4c0dc30d149fa5ba302 | /hello/migrations/0004_question_choice4.py | 880832f360284e679052b9a1931b4fdf9fded106 | [] | no_license | c-bata/django-squash-squashed-migratoins | 570d68de550f89ad710968a9c3f9cb353cba91a6 | 292a8d72d6eded7663a7e79ba94e1e3876c1250c | refs/heads/master | 2021-05-18T00:54:29.764796 | 2020-03-29T10:35:13 | 2020-03-29T10:35:16 | 251,034,327 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 393 | py | # Generated by Django 3.1 on 2020-03-29 10:35
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('hello', '0003_question_choice3'),
]
operations = [
migrations.AddField(
model_name='question',
name='choice4',
field=models.CharField(default='', max_length=20),
),
]
| [
"contact@c-bata.link"
] | contact@c-bata.link |
f0439f14153f366ba24c74df955485bb042f9030 | 9cbd523cdedc727f62c887612e8ae2c25c909964 | /tests/UI_test/functional/smoke_test_remote_parallel/test_TID_048.py | 5fc487754f89ef89b5248b32bade46944f6dc4fc | [] | no_license | louiscklaw/QA_test_scripts | 8a71d0bed99fae3b0dac4cd9414b3e34dcf5beed | 58b73594332053272d8dce2c812c93297259c782 | refs/heads/master | 2023-01-27T15:48:29.477848 | 2020-12-06T10:05:19 | 2020-12-06T10:05:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 630 | py | import os,sys
from pprint import pprint
import random
from time import sleep
sys.path.append(os.path.dirname(__file__))
from path_config import *
from urls import *
from steps import *
from pages.config import *
from jp import *
from urls import *
from setupLocalChrome import *
from test_TID_046 import *
def test_TID_048(json_metadata, table_num=41, food_quantity=5):
# clear before test
(r_browser, c_browser) = tour_TID_046(json_metadata, table_num, food_quantity)
check_TID_032.run_check(json_metadata, r_browser)
check_TID_048.run_check(json_metadata, r_browser, table_num)
return (r_browser, c_browser)
| [
"louiscklaw@gmail.com"
] | louiscklaw@gmail.com |
adb8edcc5d7786f61e95e57ac2b102bbbfebd784 | 3ea99519e25ec1bb605947a94b7a5ceb79b2870a | /modern_python/modernpython/lib/python3.6/site-packages/mypy/test/testinfer.py | 5a1475e15009fe67c361fb7640b73957a433ca8c | [] | no_license | tech-cow/spazzatura | 437c7502a0654a3d3db2fd1e96ce2e3e506243c0 | 45fc0932186d2ef0c5044745a23507a692cfcc26 | refs/heads/master | 2022-09-01T12:01:11.309768 | 2018-11-15T04:32:03 | 2018-11-15T04:32:03 | 130,414,653 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,344 | py | """Test cases for type inference helper functions."""
from typing import List, Optional, Tuple, Union
from mypy.test.helpers import Suite, assert_equal
from mypy.checkexpr import map_actuals_to_formals
from mypy.nodes import ARG_POS, ARG_OPT, ARG_STAR, ARG_STAR2, ARG_NAMED
from mypy.types import AnyType, TupleType, Type, TypeOfAny
from mypy.test.typefixture import TypeFixture
class MapActualsToFormalsSuite(Suite):
"""Test cases for checkexpr.map_actuals_to_formals."""
def test_basic(self) -> None:
self.assert_map([], [], [])
def test_positional_only(self) -> None:
self.assert_map([ARG_POS],
[ARG_POS],
[[0]])
self.assert_map([ARG_POS, ARG_POS],
[ARG_POS, ARG_POS],
[[0], [1]])
def test_optional(self) -> None:
self.assert_map([],
[ARG_OPT],
[[]])
self.assert_map([ARG_POS],
[ARG_OPT],
[[0]])
self.assert_map([ARG_POS],
[ARG_OPT, ARG_OPT],
[[0], []])
def test_callee_star(self) -> None:
self.assert_map([],
[ARG_STAR],
[[]])
self.assert_map([ARG_POS],
[ARG_STAR],
[[0]])
self.assert_map([ARG_POS, ARG_POS],
[ARG_STAR],
[[0, 1]])
def test_caller_star(self) -> None:
self.assert_map([ARG_STAR],
[ARG_STAR],
[[0]])
self.assert_map([ARG_POS, ARG_STAR],
[ARG_STAR],
[[0, 1]])
self.assert_map([ARG_STAR],
[ARG_POS, ARG_STAR],
[[0], [0]])
self.assert_map([ARG_STAR],
[ARG_OPT, ARG_STAR],
[[0], [0]])
def test_too_many_caller_args(self) -> None:
self.assert_map([ARG_POS],
[],
[])
self.assert_map([ARG_STAR],
[],
[])
self.assert_map([ARG_STAR],
[ARG_POS],
[[0]])
def test_tuple_star(self) -> None:
any_type = AnyType(TypeOfAny.special_form)
self.assert_vararg_map(
[ARG_STAR],
[ARG_POS],
[[0]],
self.tuple(any_type))
self.assert_vararg_map(
[ARG_STAR],
[ARG_POS, ARG_POS],
[[0], [0]],
self.tuple(any_type, any_type))
self.assert_vararg_map(
[ARG_STAR],
[ARG_POS, ARG_OPT, ARG_OPT],
[[0], [0], []],
self.tuple(any_type, any_type))
def tuple(self, *args: Type) -> TupleType:
return TupleType(list(args), TypeFixture().std_tuple)
def test_named_args(self) -> None:
self.assert_map(
['x'],
[(ARG_POS, 'x')],
[[0]])
self.assert_map(
['y', 'x'],
[(ARG_POS, 'x'), (ARG_POS, 'y')],
[[1], [0]])
def test_some_named_args(self) -> None:
self.assert_map(
['y'],
[(ARG_OPT, 'x'), (ARG_OPT, 'y'), (ARG_OPT, 'z')],
[[], [0], []])
def test_missing_named_arg(self) -> None:
self.assert_map(
['y'],
[(ARG_OPT, 'x')],
[[]])
def test_duplicate_named_arg(self) -> None:
self.assert_map(
['x', 'x'],
[(ARG_OPT, 'x')],
[[0, 1]])
def test_varargs_and_bare_asterisk(self) -> None:
self.assert_map(
[ARG_STAR],
[ARG_STAR, (ARG_NAMED, 'x')],
[[0], []])
self.assert_map(
[ARG_STAR, 'x'],
[ARG_STAR, (ARG_NAMED, 'x')],
[[0], [1]])
def test_keyword_varargs(self) -> None:
self.assert_map(
['x'],
[ARG_STAR2],
[[0]])
self.assert_map(
['x', ARG_STAR2],
[ARG_STAR2],
[[0, 1]])
self.assert_map(
['x', ARG_STAR2],
[(ARG_POS, 'x'), ARG_STAR2],
[[0], [1]])
self.assert_map(
[ARG_POS, ARG_STAR2],
[(ARG_POS, 'x'), ARG_STAR2],
[[0], [1]])
def test_both_kinds_of_varargs(self) -> None:
self.assert_map(
[ARG_STAR, ARG_STAR2],
[(ARG_POS, 'x'), (ARG_POS, 'y')],
[[0, 1], [0, 1]])
def test_special_cases(self) -> None:
self.assert_map([ARG_STAR],
[ARG_STAR, ARG_STAR2],
[[0], []])
self.assert_map([ARG_STAR, ARG_STAR2],
[ARG_STAR, ARG_STAR2],
[[0], [1]])
self.assert_map([ARG_STAR2],
[(ARG_POS, 'x'), ARG_STAR2],
[[0], [0]])
self.assert_map([ARG_STAR2],
[ARG_STAR2],
[[0]])
def assert_map(self,
caller_kinds_: List[Union[int, str]],
callee_kinds_: List[Union[int, Tuple[int, str]]],
expected: List[List[int]],
) -> None:
caller_kinds, caller_names = expand_caller_kinds(caller_kinds_)
callee_kinds, callee_names = expand_callee_kinds(callee_kinds_)
result = map_actuals_to_formals(
caller_kinds,
caller_names,
callee_kinds,
callee_names,
lambda i: AnyType(TypeOfAny.special_form))
assert_equal(result, expected)
def assert_vararg_map(self,
caller_kinds: List[int],
callee_kinds: List[int],
expected: List[List[int]],
vararg_type: Type,
) -> None:
result = map_actuals_to_formals(
caller_kinds,
[],
callee_kinds,
[],
lambda i: vararg_type)
assert_equal(result, expected)
def expand_caller_kinds(kinds_or_names: List[Union[int, str]]
) -> Tuple[List[int], List[Optional[str]]]:
kinds = []
names = [] # type: List[Optional[str]]
for k in kinds_or_names:
if isinstance(k, str):
kinds.append(ARG_NAMED)
names.append(k)
else:
kinds.append(k)
names.append(None)
return kinds, names
def expand_callee_kinds(kinds_and_names: List[Union[int, Tuple[int, str]]]
) -> Tuple[List[int], List[Optional[str]]]:
kinds = []
names = [] # type: List[Optional[str]]
for v in kinds_and_names:
if isinstance(v, tuple):
kinds.append(v[0])
names.append(v[1])
else:
kinds.append(v)
names.append(None)
return kinds, names
| [
"yuzhoujr@yuzhou-7480.internal.synopsys.com"
] | yuzhoujr@yuzhou-7480.internal.synopsys.com |
bc9f5067a043260d80975c5066c32c9c519df9e1 | 50008b3b7fb7e14f793e92f5b27bf302112a3cb4 | /recipes/Python/577200_Make_unique_file_name/recipe-577200.py | a59a9b0faba37a4fd185f0bfaaf2d8ab21caa915 | [
"MIT"
] | permissive | betty29/code-1 | db56807e19ac9cfe711b41d475a322c168cfdca6 | d097ca0ad6a6aee2180d32dce6a3322621f655fd | refs/heads/master | 2023-03-14T08:15:47.492844 | 2021-02-24T15:39:59 | 2021-02-24T15:39:59 | 341,878,663 | 0 | 0 | MIT | 2021-02-24T15:40:00 | 2021-02-24T11:31:15 | Python | UTF-8 | Python | false | false | 848 | py | '''
function for making unique non-existent file name
with saving source file extension
'''
import os
import sys
__author__ = 'Denis Barmenkov <denis.barmenkov@gmail.com>'
__source__ = 'http://code.activestate.com/recipes/577200-make-unique-file-name/'
def add_unique_postfix(fn):
if not os.path.exists(fn):
return fn
path, name = os.path.split(fn)
name, ext = os.path.splitext(name)
make_fn = lambda i: os.path.join(path, '%s(%d)%s' % (name, i, ext))
for i in xrange(2, sys.maxint):
uni_fn = make_fn(i)
if not os.path.exists(uni_fn):
return uni_fn
return None
def demo():
script_path = sys.argv[0]
print 'script file: %s' % script_path
fn_unique = add_unique_postfix(script_path)
print 'with unique postfix: %s' % fn_unique
if __name__ == '__main__':
demo()
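A Python 3 sketch of the same idea (`xrange` and `sys.maxint` are gone in Python 3; `itertools.count` provides the unbounded counter). The injectable `exists` hook is an assumption added here for testability, not part of the original recipe:

```python
import itertools
import os

def add_unique_postfix(fn, exists=os.path.exists):
    # `exists` defaults to the real filesystem check; pass a stub for tests.
    if not exists(fn):
        return fn
    path, name = os.path.split(fn)
    name, ext = os.path.splitext(name)
    for i in itertools.count(2):
        # Try "name(2).ext", "name(3).ext", ... until one is free.
        candidate = os.path.join(path, '%s(%d)%s' % (name, i, ext))
        if not exists(candidate):
            return candidate
```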
| [
"betty@qburst.com"
] | betty@qburst.com |
830d974e14289df0ff87ee234e5405ffecc86e1c | d4255d83be93caace42fb3cbaedead298b62a5bf | /blog/tests.py | 3fea63a4536d3e6f14ef7291416b5b4b29814e0f | [] | no_license | Code-Institute-Submissions/Stream3-GIT | 6d56f02d8cbb6686b631b9bae18a4220a3e981eb | 4f55a070c2494c96961c0cbe6357e82db200be84 | refs/heads/master | 2021-01-20T02:07:14.160267 | 2017-04-25T01:29:07 | 2017-04-25T01:29:07 | 89,374,275 | 0 | 0 | null | 2017-04-25T15:04:36 | 2017-04-25T15:04:36 | null | UTF-8 | Python | false | false | 61 | py | from django.test import TestCase
#from .models import Post
| [
"alinechribeiro@gmail.com"
] | alinechribeiro@gmail.com |
0b186051c425659027a1a7dad9b073ba965873b2 | 7e45c50b01863103d540d156a03437b64b2896b3 | /tests/console/commands/test_check.py | 0df223b7498638f3c2c9eb9fc07ec8bd78c68878 | [
"MIT",
"LGPL-3.0-only",
"LGPL-2.1-only",
"BSD-4-Clause",
"GPL-2.0-only",
"Apache-2.0",
"BSD-2-Clause",
"GPL-3.0-or-later",
"LGPL-2.1-or-later",
"LGPL-3.0-or-later",
"BSD-3-Clause",
"LicenseRef-scancode-free-unknown",
"GPL-2.0-or-later",
"GPL-3.0-only"
] | permissive | AhmedRedaAmin/poetry | e7ac5ecc332da13cb9768ca286d5f49aec01750d | 5ba06bb44201cace7461f245e5f6440a168426ab | refs/heads/master | 2020-04-01T02:16:10.826116 | 2018-10-12T15:33:26 | 2018-10-12T15:33:26 | 152,771,840 | 0 | 0 | MIT | 2018-10-12T15:31:10 | 2018-10-12T15:31:09 | null | UTF-8 | Python | false | false | 268 | py | from cleo.testers import CommandTester
def test_check(app):
command = app.find("check")
tester = CommandTester(command)
tester.execute([("command", command.get_name())])
expected = """\
All set!
"""
assert tester.get_display(True) == expected
| [
"sebastien@eustace.io"
] | sebastien@eustace.io |
fb6b2ba693feea30a693f1c62fecef65ca6856b2 | 6858b0e8da83676634e6208829ada13d1ea46bd1 | /armada_uninstaller.py | 3d6e4e7ef0b143dddc54821bc486b1748cd3b80d | [
"Apache-2.0",
"LicenseRef-scancode-warranty-disclaimer"
] | permissive | iVerb/armada-pipeline | 452045da1b9dfc85c5d0bb4350feeee2061f761d | 9f0d0fd7c23fe382ca9c9ea1d44fcbb3dd5cbf01 | refs/heads/master | 2023-05-02T00:52:19.209982 | 2021-05-14T14:57:06 | 2021-05-14T14:57:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 17,920 | py | """
Module for prep asset popup.
"""
import os
import sys
import platform
import subprocess
import requests
import json
from Qt import QtCore
from Qt import QtWidgets
from Qt import QtGui
from core import definitions
from core import resource
from core import path_maker
import utilsa
logging = utilsa.Logger('armada')
FULL, MOUNT, STRUCTURE = (1, 2, 3)
class ArmadaUninstaller(QtWidgets.QDialog):
"""Downloads armada-pipeline release from GitHub repo
"""
# Signal vars
enter_pressed = QtCore.Signal(str)
enter_signal_str = "returnPressed"
esc_pressed = QtCore.Signal(str)
esc_signal_str = "escPressed"
download_complete = QtCore.Signal()
def __init__(self, setup=FULL):
"""
Args:
setup: What part of setup is the user entering into?
"""
super(ArmadaUninstaller, self).__init__()
self.logger = logging.getLogger('menu.' + self.__class__.__name__)
self.logger.info('Setup starting...')
self.setup = setup
self.setObjectName('armada_Installer')
self.armada_root_path = definitions.ROOT_PATH
# self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
self.setWindowTitle('Armada Pipeline Uninstaller')
self.setWindowIcon(resource.icon('armada_logo', 'png'))
self.setAttribute(QtCore.Qt.WA_DeleteOnClose)
self.installEventFilter(self)
self.setStyleSheet(resource.style_sheet('setup'))
self.setFixedSize(1000, 500)
self.sizeHint()
# GUI ------------------------------
pixmap_banner = resource.pixmap(name='banner_setup', scope='help')
self.lbl_banner = QtWidgets.QLabel()
self.lbl_banner.setPixmap(pixmap_banner)
self.cb_style_sheet = """
QCheckBox::indicator:checked:disabled {{
image: url({0}/resources/icon/checkbox_unchecked.svg);
background: #29dff7;
}}
QCheckBox::indicator:unchecked:disabled{{
image: url({0}/resources/icon/checkbox_unchecked.svg);
}}
""".format(self.armada_root_path)
self.cb_s0_install = QtWidgets.QCheckBox('Uninstall Armada Pipeline')
self.cb_s0_install.setStyleSheet(self.cb_style_sheet)
self.cb_s0_install.setEnabled(False)
self.cb_s1_download = QtWidgets.QCheckBox('Uninstalling')
self.cb_s1_download.setStyleSheet(self.cb_style_sheet)
self.cb_s1_download.setEnabled(False)
self.cb_s2_complete = QtWidgets.QCheckBox('Uninstallation Complete')
self.cb_s2_complete.setStyleSheet(self.cb_style_sheet)
self.cb_s2_complete.setEnabled(False)
self.cb_delete_local_settings = QtWidgets.QCheckBox("Remove Armada's local settings?")
self.lbl_title = QtWidgets.QLabel()
# self.lbl_title.setSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.MinimumExpanding)
# self.lbl_title.setMinimumHeight(400)
self.lbl_title.setStyleSheet("""
QLabel {
font-size: 30px;
font-family: Roboto;
color: #FFFFFF;
}""")
self.lbl_full_path = QtWidgets.QLabel()
self.lbl_full_path.setText("Full path:")
self.lbl_full_path.setStyleSheet(resource.style_sheet('setup'))
self.le_full_path = QtWidgets.QLabel()
serifFont = QtGui.QFont("Roboto", 10, QtGui.QFont.StyleItalic)
self.le_full_path.setFont(serifFont)
# self.le_full_path.setText('{0}/Armada Pipeline/armada_pipeline_{1}_win10'.format(self.le_install_dir.text(), self.armada_version))
self.le_full_path.setWordWrap(True)
self.btn_install_browse = QtWidgets.QPushButton("Browse")
self.btn_install_browse.setMinimumWidth(100)
self.task_description = QtWidgets.QLabel()
self.progress_bar = QtWidgets.QProgressBar()
self.progress_bar.setMinimum(0)
self.progress_bar.setMaximum(100)
self.progress_bar.setAlignment(QtCore.Qt.AlignCenter)
self.btn_left = QtWidgets.QPushButton("Cancel")
btn_left_retain = self.btn_left.sizePolicy()
btn_left_retain.setRetainSizeWhenHidden(True)
self.btn_left.setSizePolicy(btn_left_retain)
self.btn_left.setStyleSheet("""
QPushButton{
background-color:#636363;
height: 30px;
}
QPushButton:hover{
background: #369593;
}
QPushButton:hover:pressed{
background: #2e7a78;
}
QPushButton:pressed{
background: #2a615f;
}
QPushButton:disabled{
background: #3b3b3b;
}
""")
		self.btn_right = QtWidgets.QPushButton("Uninstall")
self.btn_right.setStyleSheet("""
QPushButton{
background-color:#636363;
height: 30px;
border-style: solid;
border-width: 3px;
border-color: #369593;
}
QPushButton:hover{
background: #369593;
}
QPushButton:hover:pressed{
background: #2e7a78;
border-style: solid;
border-width: 3px;
border-color: #2e7a78;
}
QPushButton:pressed{
background: #2a615f;
}
QPushButton:disabled{
background: #3b3b3b;
border-style: solid;
border-width: 0px;
border-color: #4abdbb;
border-radius: 0px;
}
""")
self.btn_right.setDisabled(True)
self.lbl_description = QtWidgets.QTextBrowser()
self.lbl_description.setReadOnly(True)
self.lbl_description.setOpenExternalLinks(True)
self.lbl_description.setStyleSheet("""
QTextEdit {
background-color: #262626;
color: #FFFFFF;
font: 14px "Roboto-thin";
border: 0px;
}""")
# State machine ------------------
self.state_machine = QtCore.QStateMachine()
self.s0_install = QtCore.QState()
self.s1_download = QtCore.QState()
self.s2_complete = QtCore.QState()
# Entry point for setup
# Transitions
self.trans_s0_s1 = self.s0_install.addTransition(self.btn_right.clicked, self.s1_download)
self.trans_s1_s2 = self.s1_download.addTransition(self.btn_right.clicked, self.s2_complete)
# Add states
self.state_machine.addState(self.s0_install)
self.state_machine.addState(self.s1_download)
self.state_machine.addState(self.s2_complete)
self.state_machine.setInitialState(self.s0_install)
# Connections
self.s0_install.entered.connect(self.on_s0_install_entered)
self.s1_download.entered.connect(self.on_uninstall_pressed)
self.s1_download.entered.connect(self.on_s1_download_entered)
self.s2_complete.entered.connect(self.on_s2_complete_entered)
# Properties
self.s0_install.assignProperty(self.btn_left, "text", "Cancel")
		self.s0_install.assignProperty(self.btn_right, "text", "Uninstall")
self.s1_download.assignProperty(self.btn_right, "text", "Next")
self.s2_complete.assignProperty(self.btn_right, "text", "Set Sail!")
self.state_machine.start()
# Layout ---------------------------
self.steps_layout = QtWidgets.QVBoxLayout()
self.steps_layout.addWidget(self.lbl_banner, 0, QtCore.Qt.AlignCenter | QtCore.Qt.AlignTop)
self.steps_layout.addWidget(self.cb_s0_install, 0, QtCore.Qt.AlignCenter)
self.steps_layout.addWidget(self.cb_s1_download, 0, QtCore.Qt.AlignCenter)
self.steps_layout.addWidget(self.cb_s2_complete, 0, QtCore.Qt.AlignCenter)
self.steps_layout.setContentsMargins(30, 30, 30, 100)
self.title_layout = QtWidgets.QHBoxLayout()
self.title_layout.addWidget(self.lbl_title)
# self.title_layout.setSizeConstraint(QtWidgets.QLayout.SetMinimumSize)
self.title_layout.setAlignment(QtCore.Qt.AlignCenter)
self.title_layout.setContentsMargins(20, 20, 20, 20)
self.full_path_layout = QtWidgets.QHBoxLayout()
self.full_path_layout.addWidget(self.cb_delete_local_settings, 0, QtCore.Qt.AlignLeft)
self.full_path_layout.addWidget(self.le_full_path, 1)
self.full_path_layout.setContentsMargins(0, 20, 0, 20)
# Structure layout
self.description_layout = QtWidgets.QHBoxLayout()
self.description_layout.addWidget(self.lbl_description, 1, QtCore.Qt.AlignTop)
self.description_layout.setContentsMargins(0, 0, 0, 0)
self.button_layout = QtWidgets.QHBoxLayout()
self.button_layout.addWidget(self.btn_left)
self.button_layout.addWidget(self.btn_right)
self.button_layout.setAlignment(QtCore.Qt.AlignBottom)
self.button_layout.setContentsMargins(20, 20, 20, 20)
self.info_layout = QtWidgets.QVBoxLayout()
self.info_layout.addLayout(self.description_layout)
self.info_layout.addLayout(self.full_path_layout)
self.info_layout.setContentsMargins(30, 30, 30, 30)
self.user_layout = QtWidgets.QVBoxLayout()
self.user_layout.addLayout(self.title_layout)
self.user_layout.addLayout(self.info_layout)
self.user_layout.addWidget(self.task_description)
self.user_layout.addWidget(self.progress_bar)
self.user_layout.addLayout(self.button_layout, QtCore.Qt.AlignBottom)
self.main_layout = QtWidgets.QHBoxLayout()
self.main_layout.addLayout(self.steps_layout)
self.main_layout.addLayout(self.user_layout)
self.setLayout(self.main_layout)
# Connections
self.btn_install_browse.clicked.connect(self.on_browse_pressed)
self.esc_pressed.connect(self.on_cancel_pressed)
# Wait for user input
self.exec_()
def setProgress(self, value):
# print('progress value = {}'.format(value))
if value > 100:
value = 100
self.progress_bar.setValue(value)
def on_le_mount_text_changed(self, text):
"""
Remove banned characters from name string
"""
self.le_full_path.setText('{0}/Armada Pipeline'.format(self.le_install_dir.text()))
# Check if path exists
if os.path.exists(text):
self.btn_right.setEnabled(True)
else:
self.btn_right.setEnabled(False)
def on_browse_pressed(self):
self.file_dialog = QtWidgets.QFileDialog(self, directory=self.le_install_dir.text())
self.file_dialog.setFileMode(self.file_dialog.Directory)
path = self.file_dialog.getExistingDirectory(self, "Choose install directory")
if path == "":
pass
else:
self.le_install_dir.setText(path)
def on_s0_install_entered(self):
# Steps
self.cb_s0_style = """
QCheckBox::indicator:checked:disabled {{
image: url({0}/resources/icon/checkbox_unchecked.svg);
background: #29dff7;
}}
QCheckBox::indicator:unchecked:disabled{{
image: url({0}/resources/icon/checkbox_unchecked.svg);
}}
""".format(self.armada_root_path)
self.cb_s0_install.setChecked(True)
self.cb_s0_install.setStyleSheet(self.cb_s0_style)
self.cb_s2_complete.setChecked(False)
self.cb_s2_complete.setStyleSheet(self.cb_s0_style)
self.lbl_description.clear()
self.lbl_description.setHtml("""<p>Your project files are safe and will not be touched during uninstallation!</p>
<br></br>
<br></br>
<p>Would you like to remove Armada's local settings as well?</p>""")
self.lbl_description.setFixedHeight(int(self.lbl_description.document().size().height()))
self.lbl_title.setText('Uninstall Armada Pipeline')
try:
self.btn_right.clicked.disconnect(self.on_accept_pressed)
self.enter_pressed.disconnect(self.on_accept_pressed)
except:
pass
# S0
self.enter_pressed.connect(self.on_accept_pressed)
self.btn_left.clicked.connect(self.on_cancel_pressed)
# Global gui update
self.btn_right.setDisabled(False)
self.adjustSize()
def on_s1_download_entered(self):
# Steps
self.cb_s1_style = """
QCheckBox::indicator:checked:disabled {{
image: url({0}/resources/icon/checkbox_unchecked.svg);
background: #3693f6;
}}
QCheckBox::indicator:unchecked:disabled{{
image: url({0}/resources/icon/checkbox_unchecked.svg);
}}
""".format(self.armada_root_path)
self.cb_s1_download.setChecked(True)
self.cb_s1_download.setStyleSheet(self.cb_s1_style)
self.lbl_description.clear()
		self.lbl_title.setText('Uninstalling')
# Hide install path gui
self.lbl_install_dir.hide()
self.le_install_dir.hide()
self.btn_install_browse.hide()
self.lbl_full_path.hide()
self.le_full_path.hide()
self.install_dir_layout.setContentsMargins(0, 0, 0, 0)
self.lbl_armada_ver.hide()
self.cb_version_numbers.hide()
self.armada_version_layout.setContentsMargins(0, 0, 0, 0)
# S0
self.btn_left.hide()
self.adjustSize()
def on_s2_complete_entered(self):
# Steps
self.cb_s2_style = """
QCheckBox::indicator:checked:disabled {{
image: url({0}/resources/icon/checkbox_unchecked.svg);
background: #de6cff;
}}
QCheckBox::indicator:unchecked:disabled{{
image: url({0}/resources/icon/checkbox_unchecked.svg);
}}
""".format(self.armada_root_path)
self.cb_s2_complete.setChecked(True)
self.cb_s2_complete.setStyleSheet(self.cb_s2_style)
# Show mount gui
self.lbl_install_dir.hide()
self.le_install_dir.hide()
self.btn_install_browse.hide()
self.lbl_full_path.hide()
self.le_full_path.hide()
self.install_dir_layout.setContentsMargins(0, 0, 0, 0)
self.lbl_armada_ver.hide()
self.cb_version_numbers.hide()
self.armada_version_layout.setContentsMargins(0, 0, 0, 0)
self.lbl_description.clear()
		self.lbl_description.setFixedHeight(int(self.lbl_description.document().size().height()))
self.lbl_description.setHtml("""
<p>You're ready to shove off! Bon voyage!<br>
</br>
<br></br>
<br></br>
Armada Pipeline v{0} was successfully installed in:</p>
<blockquote><i>{1}</i></blockquote>""".format(self.cb_version_numbers.currentText(), self.le_full_path.text()))
self.install_dir_layout.setContentsMargins(0, 0, 0, 0)
		self.lbl_title.setText('Uninstallation Complete')
self.progress_bar.hide()
self.task_description.hide()
# Global gui update
self.btn_right.setDisabled(False)
self.btn_right.clicked.connect(self.on_accept_pressed)
self.enter_pressed.connect(self.on_accept_pressed)
self.adjustSize()
def on_cancel_pressed(self):
"""Cancel button pressed
"""
import sys
sys.exit()
def on_uninstall_pressed(self):
print('uninstalling')
# Root path stuff
if getattr(sys, 'frozen', False):
# If the application is run as a bundle, the pyInstaller bootloader
# extends the sys module by a flag frozen=True and sets the app
# path into variable _MEIPASS'.
print('frozen')
ROOT_PATH = sys._MEIPASS.replace("\\", '/')
else:
# application_path = os.path.dirname(os.path.abspath(__file__))
print('not frozen')
ROOT_PATH = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..')).replace("\\", '/')
self.btn_left.setDisabled(True)
self.btn_right.setDisabled(True)
self.thread = DownloadThread(self, ROOT_PATH)
self.thread.update_gui.connect(self.on_update_gui)
self.thread.update_progress.connect(self.setProgress)
self.thread.set_extracted_dir.connect(self.on_set_extracted)
self.thread.start()
def on_set_extracted(self, str):
print('Extracted directory = {}'.format(str))
self.extracted_directory = str
self.btn_right.setDisabled(False)
def on_update_gui(self, text):
self.task_description.setText(text)
def on_accept_pressed(self):
"""Run Armada after installation
"""
install_dir = self.le_install_dir.text()
print(install_dir)
# from pyshortcuts import make_shortcut
#
# make_shortcut('/home/user/bin/myapp.py', name='MyApp',
# icon='/home/user/icons/myicon.ico', startmenu=True, desktop=True)
# Path defaults
if platform.system().lower() in ['windows']:
armada_exe = 'armada_pipeline.exe'
elif platform.system().lower() in ['darwin']:
armada_exe = 'armada_pipeline'
subprocess.Popen(os.path.join(self.extracted_directory, armada_exe))
self.close()
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Return:
self.enter_pressed.emit(self.enter_signal_str)
return True
if event.key() == QtCore.Qt.Key_Escape:
self.esc_pressed.emit(self.esc_signal_str)
return True
else:
super(ArmadaUninstaller, self).keyPressEvent(event)
def closeEvent(self, event):
self.deleteLater()
import urllib
import urllib.request
class DownloadThread(QtCore.QThread):
update_gui = QtCore.Signal(str)
update_progress = QtCore.Signal(float)
set_extracted_dir = QtCore.Signal(str)
def __init__(self, url, tmp_file_name, save_path, le_full_path, ):
super(DownloadThread, self).__init__()
self.url = url
self.tmp_file_name = tmp_file_name
self.save_path = save_path
self.le_full_path = le_full_path
def run(self):
# Set the text to the current task
self.update_gui.emit("Uninstalling...")
# Download data
u = urllib.request.urlopen(self.url)
meta = u.info()
file_size = int(meta.get('Content-Length'))
params = meta.get('Content-Disposition')
filename = params.split('; filename=')[1]
f = open(self.save_path, 'wb')
downloaded_bytes = 0
block_size = 1024 * 8
while True:
buffer = u.read(block_size)
if not buffer:
break
f.write(buffer)
downloaded_bytes += block_size
self.update_progress.emit(float(downloaded_bytes) / file_size * 100)
f.close()
# unzip
self.update_gui.emit("Swabbin' the decks...")
import zipfile
zf = zipfile.ZipFile(self.save_path)
uncompress_size = sum((file.file_size for file in zf.infolist()))
extracted_size = 0
if platform.system().lower() in ['windows']:
for file in zf.infolist():
extracted_size += file.file_size
percentage = extracted_size * 100 / uncompress_size
self.update_progress.emit(percentage)
zf.extract(file.filename, self.le_full_path)
elif platform.system().lower() in ['darwin']:
for file in zf.infolist():
extracted_size += file.file_size
percentage = extracted_size * 100 / uncompress_size
self.update_progress.emit(percentage)
f = os.path.join(self.le_full_path, file.filename)
zf.extract(file, self.le_full_path)
subprocess.call(['chmod', 'u+x', f])
zf.close()
# Rename unzipped folder
try:
os.rename(self.save_path.rpartition('.zip')[0], os.path.join(self.le_full_path, filename.rpartition('.zip')[0]))
self.set_extracted_dir.emit(os.path.join(self.le_full_path, filename.rpartition('.zip')[0]).replace('\\', '/'))
except FileExistsError as e:
os.remove(self.save_path)
os.remove(self.save_path.rpartition('.zip')[0])
raise FileExistsError('')
# Clean up by deleting zip file
os.remove(self.save_path)
self.update_gui.emit("Complete!")
return
if __name__ == "__main__":
# Run Armada launcher
app = QtWidgets.QApplication(sys.argv)
# QtGui.QFontDatabase.addApplicationFont('resources/fonts/Roboto/Roboto-Thin.ttf')
window = ArmadaUninstaller()
sys.exit(app.exec_())
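The progress reporting in `DownloadThread.run` above reads the response in fixed-size blocks and emits a running percentage; because the counter advances by a full `block_size` even for a short final block, the raw value can overshoot 100 and is capped (as `setProgress` does). The same pattern can be exercised without Qt or a network, using an in-memory stream; the helper name, payload, and block size below are illustrative assumptions:

```python
import io


def chunked_progress(stream, total_size, block_size=8):
    """Read `stream` in blocks, collecting a capped percentage per block.

    Mirrors the loop in DownloadThread.run: the byte counter advances by
    block_size even when the final read is short, so the raw percentage can
    exceed 100 and is capped here (the GUI caps it in setProgress).
    """
    downloaded = 0
    progress = []
    while True:
        buffer = stream.read(block_size)
        if not buffer:
            break
        downloaded += block_size
        progress.append(min(float(downloaded) / total_size * 100, 100.0))
    return progress


steps = chunked_progress(io.BytesIO(b"x" * 20), total_size=20)
```

With a 20-byte payload and 8-byte blocks the third (4-byte) read overshoots and is capped at 100.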
| [
"borbs727@gmail.com"
] | borbs727@gmail.com |
81cf2206986eae587556c8ed802ef919b41191b3 | 966efb6db04789f795474ee5047c497ce3c8c9dd | /100/q37.py | 03e23cb87cfcad1d7b43088c8295fbcd6d6391c9 | [] | no_license | gitmengzh/100-Python-exercises | 43b52ced1688fc30da61025183bcbc7d9f63446f | 00746148cececfed4beb2cd29a983a382aa419c8 | refs/heads/master | 2020-07-06T08:16:40.539517 | 2019-10-01T13:23:56 | 2019-10-01T13:23:56 | 202,952,305 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 218 | py | '''
Define a function that builds a list of the squares of 1-20, then print the first five values of the list.
'''
def printList5():
l = []
for i in range(1,21):
l.append(i**2)
print(l[:5])
test = printList5()
| [
"mengzh1618@gmail.com"
] | mengzh1618@gmail.com |
3ab390a92b158a2faf4ee829165e0ad9cf072fec | dc8a337ea1d8a285577d33e5cfd4dbbe846ee1a0 | /src/main/scala/contest/155/SmallestStringWithSwaps.py | ae3db42c83edd3b2678ee0651eb398bec50107b6 | [] | no_license | joestalker1/leetcode | 8a5cdda17abd33c3eef859732f75d7bec77a9d0e | ae392ddbc7eb56cb814b9e9715043c98a89a6314 | refs/heads/master | 2023-04-13T22:09:54.407864 | 2023-04-09T19:22:54 | 2023-04-09T19:22:54 | 131,803,943 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,179 | py | from collections import defaultdict
class Solution:
def smallestStringWithSwaps(self, s, pairs):
if not s:
return None
if len(pairs) == 0:
return s
def find(parent, i):
if parent[i] != i:
p = find(parent, parent[i])
parent[i] = p
return parent[i]
def union(parent, i, j):
p1 = find(parent, i)
p2 = find(parent, j)
if p1 != p2:
parent[p1] = p2
parent = [i for i in range(len(s))]
for i,j in pairs:
union(parent, i, j)
chars = defaultdict(list)
for i in range(len(s)):
chars[find(parent, i)].append(s[i])
for k in chars:
chars[k].sort()
res = []
for i in range(len(s)):
res.append(chars[find(parent, i)].pop(0))
return ''.join(res)
sol = Solution()
print(sol.smallestStringWithSwaps("udyyek", [[3,3],[3,0],[5,1],[3,1],[3,4],[3,5]]))#"deykuy"
#print(sol.smallestStringWithSwaps(s = "dcab", pairs = [[0,3],[1,2],[0,2]]))
#print(sol.smallestStringWithSwaps(s = "dcab", pairs = [[0,3],[1,2]]))
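The grouping above is classic union-find: indices joined by any chain of pairs form one component, and each component's characters are sorted independently. A compact, standalone re-derivation of the same idea (useful for cross-checking the test cases above; it sorts descending and pops from the end to avoid the O(n) `pop(0)`):

```python
from collections import defaultdict


def smallest_string(s, pairs):
    parent = list(range(len(s)))

    def find(i):
        # Iterative find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in pairs:
        parent[find(i)] = find(j)

    groups = defaultdict(list)
    for i in range(len(s)):
        groups[find(i)].append(s[i])
    for g in groups.values():
        g.sort(reverse=True)  # pop() then yields the smallest remaining char

    return ''.join(groups[find(i)].pop() for i in range(len(s)))
```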
| [
"stalker.comp@gmail.com"
] | stalker.comp@gmail.com |
34f7f8f4141f68a3303100cff04e36a6121b6fd0 | cc0caf0362909490377a44b08a726dca2d093c4f | /principal_planb.py | a757ef7fb2bf186aca27ade6fb3d85c822c8aaa4 | [] | no_license | stefifm/Testing | bf334f97425ac4463e86e39a5bf97061827214c8 | 4a4cf4f93f050fe12244235774448a46f9a226db | refs/heads/master | 2023-01-04T12:25:16.951844 | 2020-11-03T00:24:13 | 2020-11-03T00:24:13 | 294,291,961 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 736 | py | import planb
print("Este es el plan b")
def principal():
n = 16
participantes = []
planb.carga_automatica(participantes)
planb.orden_sort(participantes)
planb.mostrar_participantes(participantes)
    # generate the matchups (bracket draw)
print("OCTAVOS\n")
planb.octavos(participantes)
print("INTENTO DE CUARTOS\n")
cuartos = planb.ganadores(participantes)
planb.cruces(cuartos)
print("\nINTENTO DE SEMIFINAL\n")
semis = planb.ganadores(cuartos)
planb.cruces(semis)
print("\nINTENTO DE FINAL\n")
final = planb.ganadores(semis)
pri, seg = planb.final(final)
print("El primero es:",pri)
print("El segundo es:",seg)
if __name__ == "__main__":
principal() | [
"bruerastefania@gmail.com"
] | bruerastefania@gmail.com |
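The `planb` module used by the `principal()` driver above is not shown, but the driver's shape — octavos, then repeated `ganadores`/`cruces` rounds down to a final — is a standard single-elimination reduction. A hedged stand-in (all names here are ours; the winner of each pair is picked deterministically as its first member, purely for illustration):

```python
def play_rounds(players):
    """Halve the field each round until one player remains.

    Winner selection is a stand-in: the first player of each pair advances.
    """
    rounds = []
    while len(players) > 1:
        players = [pair[0] for pair in zip(players[0::2], players[1::2])]
        rounds.append(list(players))
    return rounds


history = play_rounds(list(range(1, 17)))  # 16 entrants, as in principal()
```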
dadac39624c61550b9c4d7a21b0dbee6e168b988 | dd4d1a61ec680a86d4b569490bf2a898ea0d7557 | /appengine/predator/analysis/culprit.py | 5b0046ceb5e8bcbfbe29b14926c7099efe54fc84 | [
"BSD-3-Clause"
] | permissive | mcgreevy/chromium-infra | f1a68914b47bcbe3cd8a424f43741dd74fedddf4 | 09064105713603f7bf75c772e8354800a1bfa256 | refs/heads/master | 2022-10-29T23:21:46.894543 | 2017-05-16T06:22:50 | 2017-05-16T06:22:50 | 91,423,078 | 1 | 1 | BSD-3-Clause | 2022-10-01T18:48:03 | 2017-05-16T06:23:34 | Python | UTF-8 | Python | false | false | 4,439 | py | # Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from collections import namedtuple
class Culprit(namedtuple('Culprit',
['project', 'components', 'cls', 'regression_range', 'algorithm'])):
"""The result of successfully identifying the culprit of a crash report.
  That is, this is what ``Predator.FindCulprit`` returns. It encapsulates
  all the information Predator discovered during its various analyses.
Args:
project (str): the most-suspected project
components (list of str): the suspected crbug components.
cls (list of ??): the suspected CLs.
regression_range (tuple): a pair of the last-good and first-bad versions.
algorithm (str): What algorithm was used to produce this object.
"""
__slots__ = ()
@property
def fields(self):
return self._fields
# TODO(http://crbug/644476): better name for this method.
def ToDicts(self):
"""Convert this object to a pair of anonymous dicts for JSON.
Returns:
(analysis_result_dict, tag_dict)
The analysis result is a dict like below:
{
# Indicate if Findit found any suspects_cls, project,
# components or regression_range.
"found": true,
"suspected_project": "chromium-v8", # Which project is most suspected.
"feedback_url": "https://.."
"suspected_cls": [
{
"revision": "commit-hash",
"url": "https://chromium.googlesource.com/chromium/src/+/...",
"review_url": "https://codereview.chromium.org/issue-number",
"project_path": "third_party/pdfium",
"author": "who@chromium.org",
"time": "2015-08-17 03:38:16",
"reason": "a plain string with '\n' as line break to expla..."
"reason": [('MinDistance', 1, 'minimum distance is 0.'),
('TopFrame', 0.9, 'top frame is2nd frame.')],
"changed_files": [
{"file": "file_name1.cc",
"blame_url": "https://...",
"info": "minimum distance (LOC) 0, frame #2"},
{"file": "file_name2.cc",
"blame_url": "https://...",
"info": "minimum distance (LOC) 20, frame #4"},
...
],
"confidence": 0.60
},
...,
],
"regression_range": [ # Detected regression range.
"53.0.2765.0",
"53.0.2766.0"
],
"suspected_components": [ # A list of crbug components to file bugs.
"Blink>JavaScript"
]
}
The code review url might not always be available, because not all
commits go through code review. In that case, commit url should
be used instead.
The tag dict are allowed key/value pairs to tag the analysis result
for query and monitoring purpose on Findit side. For allowed keys,
please refer to crash_analysis.py and fracas_crash_analysis.py:
For results with normal culprit-finding algorithm: {
'found_suspects': True,
'has_regression_range': True,
'solution': 'core_algorithm',
}
For results using git blame without a regression range: {
'found_suspects': True,
'has_regression_range': False,
'solution': 'blame',
}
If nothing is found: {
'found_suspects': False,
}
"""
result = {}
result['found'] = (
bool(self.project) or
bool(self.components) or
bool(self.cls) or
bool(self.regression_range))
if self.regression_range:
result['regression_range'] = self.regression_range
if self.project:
result['suspected_project'] = self.project
if self.components:
result['suspected_components'] = self.components
if self.cls:
result['suspected_cls'] = [cl.ToDict() for cl in self.cls]
tags = {
'found_suspects': bool(self.cls),
'has_regression_range': bool(self.regression_range),
'found_project': bool(self.project),
'found_components': bool(self.components),
'solution': self.algorithm,
}
return result, tags
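The ``found`` flag built in ``ToDicts`` is simply the disjunction of the four findings, and the tags mirror their truthiness. That contract can be checked against a bare namedtuple without the rest of the class (the sample values below are invented for illustration):

```python
from collections import namedtuple

CulpritTuple = namedtuple(
    'CulpritTuple',
    ['project', 'components', 'cls', 'regression_range', 'algorithm'])

c = CulpritTuple(project='chromium-v8',
                 components=['Blink>JavaScript'],
                 cls=[],  # no suspected CLs in this sample
                 regression_range=('53.0.2765.0', '53.0.2766.0'),
                 algorithm='core_algorithm')

# Same logic as ToDicts: found iff any of the four findings is non-empty.
found = (bool(c.project) or bool(c.components) or
         bool(c.cls) or bool(c.regression_range))
tags = {'found_suspects': bool(c.cls),
        'has_regression_range': bool(c.regression_range)}
```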
| [
"commit-bot@chromium.org"
] | commit-bot@chromium.org |
f1b4f05f3d60bb61c3fb15df319d3bfab9891807 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p00002/s318942277.py | e8ba2d4572b2599ea579aebe12436eac026105c3 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 442 | py | # -*-coding:utf-8-*-
def get_input():
while True:
try:
yield "".join(input())
except EOFError:
break
if __name__=="__main__":
array = list(get_input())
for i in range(len(array)):
temp = array[i].split()
a = int(temp[0])
b = int(temp[1])
ans = a + b
print(len(str(ans))) | [
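Counting digits with `len(str(a + b))`, as above, handles carries for free; an arithmetic alternative via `math.log10` would need special-casing around exact powers of ten. A quick check of the string approach (the helper name is ours):

```python
def digits_of_sum(a, b):
    # Number of decimal digits in a + b, for non-negative a and b.
    return len(str(a + b))
```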
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
4accc9e2844547831a443a791d21841b2e5915b5 | ba03b99d73886349d66883b8c328b8eff805772d | /307 range sum query - mutable.py | 49e74e5d9b3363ce8e56bcc3cd1106b2f2c1ff0a | [] | no_license | liyi0206/leetcode-python | 38cc33eb74b006e7e6609eda86e1ae8d5e278247 | 2c4a54070b20d2fe33b81d889ad0ad0c6aa5fb5c | refs/heads/master | 2016-09-12T22:54:09.622652 | 2016-05-26T05:20:44 | 2016-05-26T05:20:44 | 59,178,964 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,510 | py | class NumArray(object):
def __init__(self, nums):
"""
initialize your data structure here.
:type nums: List[int]
"""
self.nums,self.n =nums,len(nums)
        # self.sums is a Fenwick (binary indexed) tree: sums[x] holds the sum
        # of a block of nums ending at 1-based index x; x + lowbit(x) is the
        # next index whose block also covers x.
self.sums =[0]*(self.n+1)
for i in xrange(self.n):
self.add(i+1,nums[i]) # update self.sums
def update(self, i, val):
"""
:type i: int
:type val: int
:rtype: int
"""
self.add(i+1,val-self.nums[i]) # update self.sums
self.nums[i]=val
def sumRange(self, i, j):
"""
sum of elements nums[i..j], inclusive.
:type i: int
:type j: int
:rtype: int
"""
if not self.nums: return 0 # edge case
return self.sum(j+1)-self.sum(i)
### UTILS ###
def lowbit(self,x):
return x&(-x)
def add(self,x,val): # for update, idx ++lowbit, sums[idx]+=delta_val
while x<=self.n: # stop rule x<=n
self.sums[x]+=val
x+=self.lowbit(x)
def sum(self,x): # for sumRange, idx --lowbit, res+=sums[idx]
res=0 # stop rule x>0
while x>0:
res+=self.sums[x]
x-=self.lowbit(x)
return res
nums=[1,3,5]
numArray = NumArray(nums)
print numArray.sumRange(0,2) #9
numArray.update(1,2)
print numArray.sumRange(0,2) #8 | [
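Both traversals above hinge on `lowbit(x) = x & (-x)`, which isolates the lowest set bit in two's complement: stepping by `+lowbit` climbs the update chain, while `-lowbit` walks the prefix-sum chain. A quick sanity check of the trick:

```python
def lowbit(x):
    # x & -x keeps only the lowest set bit of x (two's complement identity).
    return x & (-x)


# Walk the prefix-sum chain for x = 13 (binary 1101): 13 -> 12 -> 8 -> stop.
chain = []
x = 13
while x > 0:
    chain.append(x)
    x -= lowbit(x)
```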
"ly.protegee@gmail.com"
] | ly.protegee@gmail.com |
edc8f7043c8ab86364b5bdf6be0c590f63897936 | a5f0e7c09c36bb2fc91f95e5f3ec7f95c0ed305e | /cafe_backend/apps/users/migrations/0014_auto_20190713_0401.py | 06098044ac3a7f73d18794b9f487be6e535f1f9f | [] | no_license | ecmascriptguru/cafe_backend | e703047c7f04d68596f76dcbff06828afbf5cc68 | 0c4152692d68e951481b39f0789bc58e94e0d20c | refs/heads/master | 2022-10-26T00:31:50.070430 | 2020-06-18T15:30:02 | 2020-06-18T15:30:02 | 184,465,639 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 403 | py | # Generated by Django 2.0.9 on 2019-07-12 20:01
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('users', '0013_employee'),
]
operations = [
migrations.AlterModelOptions(
name='table',
options={'ordering': ('user__first_name',), 'verbose_name': 'Table', 'verbose_name_plural': 'Tables'},
),
]
| [
"ecmascript.guru@gmail.com"
] | ecmascript.guru@gmail.com |
07640491735fee876a9190454f0ed96d02020b80 | b1b734ab75a6fe114733d3c0b8ca5046d54b407d | /third_party/onnx/setup.py | 51a580599ef8f409d1f55de017ef5580fe47b707 | [
"MIT",
"BSD-3-Clause",
"LicenseRef-scancode-generic-cla",
"BSD-2-Clause",
"Apache-2.0"
] | permissive | waybarrios/video_nonlocal_net_caffe2 | 754fea2b96318d677144f16faadf59cb6b00189b | b19c2ac3ddc1836d90d7d0fccb60d710c017253e | refs/heads/master | 2020-04-20T03:15:12.286080 | 2019-01-31T20:44:01 | 2019-01-31T20:44:01 | 168,593,110 | 0 | 0 | Apache-2.0 | 2019-01-31T20:40:40 | 2019-01-31T20:40:39 | null | UTF-8 | Python | false | false | 15,409 | py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from distutils.spawn import find_executable
from distutils import sysconfig, dep_util, log
import setuptools
import setuptools.command.build_py
import setuptools.command.develop
import setuptools.command.build_ext
import platform
import fnmatch
from collections import namedtuple
import os
import hashlib
import shutil
import subprocess
import sys
import tempfile
from textwrap import dedent
from tools.ninja_builder import ninja_build_ext
import glob
import json
try:
import ninja # noqa
WITH_NINJA = True
except ImportError:
WITH_NINJA = False
TOP_DIR = os.path.realpath(os.path.dirname(__file__))
SRC_DIR = os.path.join(TOP_DIR, 'onnx')
TP_DIR = os.path.join(TOP_DIR, 'third_party')
PROTOC = find_executable('protoc')
DEFAULT_ONNX_NAMESPACE = 'onnx'
ONNX_ML = bool(os.getenv('ONNX_ML') == '1')
ONNX_NAMESPACE = os.getenv('ONNX_NAMESPACE', DEFAULT_ONNX_NAMESPACE)
install_requires = ['six']
setup_requires = []
tests_require = []
################################################################################
# Version
################################################################################
try:
git_version = subprocess.check_output(['git', 'rev-parse', 'HEAD'],
cwd=TOP_DIR).decode('ascii').strip()
except (OSError, subprocess.CalledProcessError):
git_version = None
with open(os.path.join(TOP_DIR, 'VERSION_NUMBER')) as version_file:
VersionInfo = namedtuple('VersionInfo', ['version', 'git_version'])(
version=version_file.read().strip(),
git_version=git_version
)
################################################################################
# Utilities
################################################################################
def die(msg):
log.error(msg)
sys.exit(1)
def true_or_die(b, msg):
if not b:
die(msg)
return b
def recursive_glob(directory, pattern):
return [os.path.join(dirpath, f)
for dirpath, dirnames, files in os.walk(directory)
for f in fnmatch.filter(files, pattern)]
# https://stackoverflow.com/a/3431838/2143581
def md5(fname):
hash_md5 = hashlib.md5()
with open(fname, 'rb') as f:
for chunk in iter(lambda: f.read(4096), b''):
hash_md5.update(chunk)
return hash_md5.hexdigest()
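The `iter(callable, sentinel)` form used in `md5` above keeps memory flat by hashing 4 KB at a time; the same pattern works on any binary stream. A small equivalence check against one-shot hashing, entirely in memory (the helper name is ours):

```python
import hashlib
import io


def md5_stream(fobj, block_size=4096):
    # Chunked hashing, same shape as md5() above but over an open stream.
    h = hashlib.md5()
    for chunk in iter(lambda: fobj.read(block_size), b''):
        h.update(chunk)
    return h.hexdigest()


payload = b'x' * 10000  # spans multiple 4096-byte blocks
```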
################################################################################
# Pre Check
################################################################################
true_or_die(PROTOC, 'Could not find "protoc" executable!')
################################################################################
# Dependencies
################################################################################
class Dependency(object):
def __init__(self):
self.include_dirs = []
self.libraries = []
class Python(Dependency):
def __init__(self):
super(Python, self).__init__()
self.include_dirs = [sysconfig.get_python_inc()]
class Protobuf(Dependency):
def __init__(self):
super(Protobuf, self).__init__()
# TODO: allow user specify protobuf include_dirs libraries with flags
use_conda = os.getenv('CONDA_PREFIX') and platform.system() == 'Windows'
libs = []
if os.getenv('PROTOBUF_LIBDIR'):
libs.append(os.path.join(os.getenv('PROTOBUF_LIBDIR'), "libprotobuf"))
elif use_conda:
libs.append(os.path.join(os.getenv('CONDA_PREFIX'), "Library", "lib", "libprotobuf"))
else:
libs.append("protobuf")
includes = []
if os.getenv('PROTOBUF_INCDIR'):
includes.append(os.path.join(os.getenv('PROTOBUF_INCDIR')))
elif use_conda:
includes.append(os.path.join(os.getenv('CONDA_PREFIX'), "Library", "Include"))
else:
print("Warning: Environment Variable PROTOBUF_INCDIR or CONDA_PREFIX is not set, which may cause protobuf including folder error.")
self.libraries = libs
self.include_dirs = includes
class Pybind11(Dependency):
def __init__(self):
super(Pybind11, self).__init__()
self.include_dirs = [os.path.join(TP_DIR, 'pybind11', 'include')]
################################################################################
# Customized commands
################################################################################
class ONNXCommand(setuptools.Command):
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
class build_proto_in(ONNXCommand):
def run(self):
tmp_dir = tempfile.mkdtemp()
gen_script = os.path.join(SRC_DIR, 'gen_proto.py')
stems = ['onnx', 'onnx-operators']
in_files = [gen_script]
out_files = []
need_rename = (ONNX_NAMESPACE != DEFAULT_ONNX_NAMESPACE)
for stem in stems:
in_files.append(
os.path.join(SRC_DIR, '{}.in.proto'.format(stem)))
if ONNX_ML:
proto_base = '{}_{}-ml'.format(stem,
ONNX_NAMESPACE) if need_rename else '{}-ml'.format(stem)
if need_rename:
out_files.append(os.path.join(SRC_DIR, '{}-ml.pb.h'.format(stem)))
else:
proto_base = '{}_{}'.format(stem, ONNX_NAMESPACE) if need_rename else stem
if need_rename:
out_files.append(os.path.join(SRC_DIR, '{}.pb.h'.format(stem)))
out_files.extend([
os.path.join(SRC_DIR, '{}_pb.py'.format(stem.replace('-', '_'))),
os.path.join(SRC_DIR, '{}.proto'.format(proto_base)),
os.path.join(SRC_DIR, '{}.proto3'.format(proto_base)),
])
log.info('compiling *.in.proto to temp dir {}'.format(tmp_dir))
command_list = [
sys.executable, gen_script,
'-p', ONNX_NAMESPACE,
'-o', tmp_dir
]
if ONNX_ML:
command_list.append('--ml')
subprocess.check_call(command_list + stems)
for out_f in out_files:
tmp_f = os.path.join(tmp_dir, os.path.basename(out_f))
if os.path.exists(out_f) and md5(out_f) == md5(tmp_f):
log.info("Skip updating {} since it's the same.".format(out_f))
continue
log.info("Copying {} to {}".format(tmp_f, out_f))
shutil.copyfile(tmp_f, out_f)
shutil.rmtree(tmp_dir)
class build_proto(ONNXCommand):
def run(self):
self.run_command('build_proto_in')
stems = ['onnx', 'onnx-operators']
need_rename = (ONNX_NAMESPACE != DEFAULT_ONNX_NAMESPACE)
for stem in stems:
if ONNX_ML:
proto_base = '{}_{}-ml'.format(stem,
ONNX_NAMESPACE) if need_rename else '{}-ml'.format(stem)
else:
proto_base = '{}_{}'.format(stem, ONNX_NAMESPACE) if need_rename else stem
proto = os.path.join(SRC_DIR, '{}.proto'.format(proto_base))
pb2 = "{}_{}".format(stem.replace('-', '_'), ONNX_NAMESPACE.replace('-',
'_')) if need_rename else stem.replace('-', '_')
if ONNX_ML:
pb2 += "_ml"
outputs = [
os.path.join(SRC_DIR, '{}.pb.cc'.format(proto_base)),
os.path.join(SRC_DIR, '{}.pb.h'.format(proto_base)),
os.path.join(SRC_DIR, '{}_pb2.py'.format(pb2)),
os.path.join(SRC_DIR, '{}_pb.py'.format(stem.replace('-', '_'))),
]
if ONNX_ML:
outputs.append(os.path.join(SRC_DIR, '{}-ml.pb.h'.format(stem)))
else:
outputs.append(os.path.join(SRC_DIR, '{}.pb.h'.format(stem)))
if self.force or any(dep_util.newer(proto, o) for o in outputs):
log.info('compiling {}'.format(proto))
subprocess.check_call([
PROTOC,
'--proto_path', SRC_DIR,
'--python_out', SRC_DIR,
'--cpp_out', SRC_DIR,
proto
])
class create_version(ONNXCommand):
def run(self):
with open(os.path.join(SRC_DIR, 'version.py'), 'w') as f:
f.write(dedent('''\
# This file is generated by setup.py. DO NOT EDIT!
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
version = '{version}'
git_version = '{git_version}'
'''.format(**dict(VersionInfo._asdict()))))
class build_py(setuptools.command.build_py.build_py):
def run(self):
self.run_command('create_version')
self.run_command('build_proto')
return setuptools.command.build_py.build_py.run(self)
class develop(setuptools.command.develop.develop):
def run(self):
self.run_command('create_version')
setuptools.command.develop.develop.run(self)
self.create_compile_commands()
def create_compile_commands(self):
def load(filename):
with open(filename) as f:
return json.load(f)
ninja_files = glob.glob('build/*_compile_commands.json')
all_commands = [entry for f in ninja_files for entry in load(f)]
with open('compile_commands.json', 'w') as f:
json.dump(all_commands, f, indent=2)
build_ext_parent = ninja_build_ext if WITH_NINJA \
else setuptools.command.build_ext.build_ext
class build_ext(build_ext_parent):
def run(self):
self.run_command('build_proto')
for ext in self.extensions:
ext.pre_run()
return setuptools.command.build_ext.build_ext.run(self)
cmdclass = {
'build_proto': build_proto,
'build_proto_in': build_proto_in,
'create_version': create_version,
'build_py': build_py,
'develop': develop,
'build_ext': build_ext,
}
################################################################################
# Extensions
################################################################################
class ONNXExtension(setuptools.Extension):
def pre_run(self):
pass
def create_extension(ExtType, name, sources, dependencies, extra_link_args, extra_objects, define_macros):
include_dirs = sum([dep.include_dirs for dep in dependencies], [TOP_DIR])
libraries = sum([dep.libraries for dep in dependencies], [])
extra_compile_args = ['-std=c++11']
if sys.platform == 'darwin':
extra_compile_args.append('-stdlib=libc++')
if os.getenv('CONDA_PREFIX'):
include_dirs.append(os.path.join(os.getenv('CONDA_PREFIX'), "include"))
if platform.system() == 'Windows':
extra_compile_args.append('/MT')
return ExtType(
name=name,
define_macros=define_macros,
sources=sources,
include_dirs=include_dirs,
libraries=libraries,
extra_compile_args=extra_compile_args,
extra_objects=extra_objects,
extra_link_args=extra_link_args,
language='c++',
)
class ONNXCpp2PyExtension(setuptools.Extension):
def pre_run(self):
self.sources = recursive_glob(SRC_DIR, '*.cc')
need_rename = (ONNX_NAMESPACE != DEFAULT_ONNX_NAMESPACE)
original_onnx = [
os.path.join(SRC_DIR, "onnx.pb.cc"),
os.path.join(SRC_DIR, "onnx-operators.pb.cc"),
]
original_onnx_ml = [
os.path.join(SRC_DIR, "onnx-ml.pb.cc"),
os.path.join(SRC_DIR, "onnx-operators-ml.pb.cc"),
]
if ONNX_ML:
# Remove onnx.pb.cc, onnx-operators.pb.cc from sources.
sources_filter = original_onnx
if need_rename:
sources_filter.extend(original_onnx_ml)
else:
# Remove onnx-ml.pb.cc, onnx-operators-ml.pb.cc from sources.
sources_filter = original_onnx_ml
if need_rename:
sources_filter.extend(original_onnx)
for source_filter in sources_filter:
if source_filter in self.sources:
self.sources.remove(source_filter)
cpp2py_deps = [Pybind11(), Python()]
cpp2py_link_args = []
cpp2py_extra_objects = []
build_for_release = os.getenv('ONNX_BINARY_BUILD')
if build_for_release and platform.system() == 'Linux':
# Cribbed from PyTorch
# get path of libstdc++ and link manually.
# for reasons unknown, -static-libstdc++ doesn't fully link some symbols
CXXNAME = os.getenv('CXX', 'g++')
path = subprocess.check_output([CXXNAME, '-print-file-name=libstdc++.a'])
path = path[:-1]
if type(path) != str: # python 3
path = path.decode(sys.stdout.encoding)
cpp2py_link_args += [path]
# Hard coded look for the static libraries from Conda
assert os.getenv('CONDA_PREFIX')
cpp2py_extra_objects.extend([os.path.join(os.getenv('CONDA_PREFIX'), 'lib', 'libprotobuf.a'),
os.path.join(os.getenv('CONDA_PREFIX'), 'lib', 'libprotobuf-lite.a')])
else:
cpp2py_deps.append(Protobuf())
define_macros = [('ONNX_NAMESPACE', ONNX_NAMESPACE)]
if ONNX_ML:
define_macros.append(('ONNX_ML', '1'))
ext_modules = [
create_extension(ONNXCpp2PyExtension,
str('onnx.onnx_cpp2py_export'),
sources=[], # sources will be propagated in pre_run
dependencies=cpp2py_deps,
extra_link_args=cpp2py_link_args,
extra_objects=cpp2py_extra_objects,
define_macros=define_macros)
]
################################################################################
# Packages
################################################################################
# no need to do fancy stuff so far
packages = setuptools.find_packages()
install_requires.extend(['protobuf', 'numpy'])
################################################################################
# Test
################################################################################
setup_requires.append('pytest-runner')
tests_require.append('pytest-cov')
tests_require.append('nbval')
tests_require.append('tabulate')
################################################################################
# Final
################################################################################
setuptools.setup(
name="onnx",
version=VersionInfo.version,
description="Open Neural Network Exchange",
ext_modules=ext_modules,
cmdclass=cmdclass,
packages=packages,
include_package_data=True,
install_requires=install_requires,
setup_requires=setup_requires,
tests_require=tests_require,
author='bddppq',
author_email='jbai@fb.com',
url='https://github.com/onnx/onnx',
entry_points={
'console_scripts': [
'check-model = onnx.bin.checker:check_model',
'check-node = onnx.bin.checker:check_node',
'backend-test-tools = onnx.backend.test.cmd_tools:main',
]
},
)
| [
"gemfield@civilnet.cn"
] | gemfield@civilnet.cn |
ea12358f5a23a570db2306e5474ccf2056c99a16 | 86a26119af259e3858cb5e57ea2e41e3b25c5fa7 | /Python Project/Employee_Home.py | 621e7d19f6311f44bdc6037990004706ce2ea09c | [] | no_license | deshmukhshweta/project2 | 747ca7972a7bfdc4aed20dbb4ee3f6d2f009ca83 | 8bf07454d259456dc616e7283c266b35fe7b870d | refs/heads/master | 2020-04-19T09:57:05.541157 | 2019-01-29T09:27:01 | 2019-01-29T09:27:01 | 168,125,342 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,602 | py | #! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# GUI module generated by PAGE version 4.13
# In conjunction with Tcl version 8.6
# May 31, 2018 12:17:17 AM
import sys
try:
from Tkinter import *
except ImportError:
from tkinter import *
try:
import ttk
py3 = False
except ImportError:
import tkinter.ttk as ttk
py3 = True
import Employee_Home_support
def vp_start_gui():
'''Starting point when module is the main routine.'''
global val, w, root
root = Tk()
top = Employee_Home (root)
Employee_Home_support.init(root, top)
root.mainloop()
w = None
def create_Employee_Home(root, *args, **kwargs):
'''Starting point when module is imported by another program.'''
global w, w_win, rt
rt = root
w = Toplevel (root)
top = Employee_Home (w)
Employee_Home_support.init(w, top, *args, **kwargs)
return (w, top)
def destroy_Employee_Home():
global w
w.destroy()
w = None
class Employee_Home:
def __init__(self, top=None):
'''This class configures and populates the toplevel window.
top is the toplevel containing window.'''
_bgcolor = '#d9d9d9' # X11 color: 'gray85'
_fgcolor = '#000000' # X11 color: 'black'
_compcolor = '#d9d9d9' # X11 color: 'gray85'
_ana1color = '#d9d9d9' # X11 color: 'gray85'
_ana2color = '#d9d9d9' # X11 color: 'gray85'
font9 = "-family {Segoe UI} -size 14 -weight bold -slant roman" \
" -underline 0 -overstrike 0"
top.geometry("273x498+429+126")
top.title("Employee Home")
top.configure(background="#d89ed0")
top.configure(highlightbackground="#d9d9d9")
top.configure(highlightcolor="black")
self.Frame1 = Frame(top)
self.Frame1.place(relx=0.0, rely=0.0, relheight=0.99, relwidth=1.01)
self.Frame1.configure(relief=GROOVE)
self.Frame1.configure(borderwidth="5")
self.Frame1.configure(relief=GROOVE)
self.Frame1.configure(background="#9ea0d8")
self.Frame1.configure(highlightbackground="#d9d9d9")
self.Frame1.configure(highlightcolor="black")
self.Frame1.configure(width=275)
self.Label1 = Label(self.Frame1)
self.Label1.place(relx=0.07, rely=0.02, height=31, width=224)
self.Label1.configure(activebackground="#f9f9f9")
self.Label1.configure(activeforeground="black")
self.Label1.configure(background="#9ea0d8")
self.Label1.configure(disabledforeground="#a3a3a3")
self.Label1.configure(font=font9)
self.Label1.configure(foreground="#000000")
self.Label1.configure(highlightbackground="#d9d9d9")
self.Label1.configure(highlightcolor="black")
self.Label1.configure(text='''Employee Home''')
self.Button1 = Button(self.Frame1)
self.Button1.place(relx=0.11, rely=0.16, height=34, width=207)
self.Button1.configure(activebackground="#d9d9d9")
self.Button1.configure(activeforeground="#000000")
self.Button1.configure(background="#9ea0d8")
self.Button1.configure(command=Employee_Home_support.admin_stocker)
self.Button1.configure(disabledforeground="#a3a3a3")
self.Button1.configure(font=font9)
self.Button1.configure(foreground="#000000")
self.Button1.configure(highlightbackground="#d9d9d9")
self.Button1.configure(highlightcolor="#000000")
self.Button1.configure(pady="0")
self.Button1.configure(text='''Stocker''')
self.Button1_1 = Button(self.Frame1)
self.Button1_1.place(relx=0.11, rely=0.32, height=34, width=207)
self.Button1_1.configure(activebackground="#d9d9d9")
self.Button1_1.configure(activeforeground="#000000")
self.Button1_1.configure(background="#9ea0d8")
self.Button1_1.configure(command=Employee_Home_support.admin_dispatcher)
self.Button1_1.configure(disabledforeground="#a3a3a3")
self.Button1_1.configure(font=font9)
self.Button1_1.configure(foreground="#000000")
self.Button1_1.configure(highlightbackground="#d9d9d9")
self.Button1_1.configure(highlightcolor="black")
self.Button1_1.configure(pady="0")
self.Button1_1.configure(text='''Dispatcher''')
self.Button1_2 = Button(self.Frame1)
self.Button1_2.place(relx=0.11, rely=0.48, height=34, width=207)
self.Button1_2.configure(activebackground="#d9d9d9")
self.Button1_2.configure(activeforeground="#000000")
self.Button1_2.configure(background="#9ea0d8")
self.Button1_2.configure(command=Employee_Home_support.admin_product)
self.Button1_2.configure(disabledforeground="#a3a3a3")
self.Button1_2.configure(font=font9)
self.Button1_2.configure(foreground="#000000")
self.Button1_2.configure(highlightbackground="#d9d9d9")
self.Button1_2.configure(highlightcolor="black")
self.Button1_2.configure(pady="0")
self.Button1_2.configure(text='''Product''')
self.Button1_3 = Button(self.Frame1)
self.Button1_3.place(relx=0.13, rely=0.65, height=34, width=207)
self.Button1_3.configure(activebackground="#d9d9d9")
self.Button1_3.configure(activeforeground="#000000")
self.Button1_3.configure(background="#9ea0d8")
self.Button1_3.configure(command=Employee_Home_support.admin_sales)
self.Button1_3.configure(disabledforeground="#a3a3a3")
self.Button1_3.configure(font=font9)
self.Button1_3.configure(foreground="#000000")
self.Button1_3.configure(highlightbackground="#d9d9d9")
self.Button1_3.configure(highlightcolor="black")
self.Button1_3.configure(pady="0")
self.Button1_3.configure(text='''Sales''')
self.Button1_4 = Button(self.Frame1)
self.Button1_4.place(relx=0.13, rely=0.81, height=34, width=207)
self.Button1_4.configure(activebackground="#d9d9d9")
self.Button1_4.configure(activeforeground="#000000")
self.Button1_4.configure(background="#9ea0d8")
self.Button1_4.configure(command=Employee_Home_support.admin_logout)
self.Button1_4.configure(disabledforeground="#a3a3a3")
self.Button1_4.configure(font=font9)
self.Button1_4.configure(foreground="#000000")
self.Button1_4.configure(highlightbackground="#d9d9d9")
self.Button1_4.configure(highlightcolor="black")
self.Button1_4.configure(pady="0")
self.Button1_4.configure(text='''Logout''')
if __name__ == '__main__':
vp_start_gui()
| [
"123deshmukhshweta@gmail.com"
] | 123deshmukhshweta@gmail.com |
ee7eee828cc5606cf575a5d449139425023499f8 | 71c7683331a9037fda7254b3a7b1ffddd6a4c4c8 | /Phys/Bs2MuMuParams/python/Bs2MuMuParams/BDTparam_BDTpaper_summer13.py | fa010b43f4c369d81fe645db70f00488b6e74255 | [] | no_license | pseyfert-cern-gitlab-backup/Urania | edc58ba4271089e55900f8bb4a5909e9e9c12d35 | 1b1c353ed5f1b45b3605990f60f49881b9785efd | refs/heads/master | 2021-05-18T13:33:22.732970 | 2017-12-15T14:42:04 | 2017-12-15T14:42:04 | 251,259,622 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 18,128 | py | #--------------------------------------
#
# Parameters of the BDT BsMuMu analysis
#
#--------------------------------------
#DONE
# mass reso
# mass mean
# CBTrans
# CBExpo
# signal BDT
# justine factor
# misID numbers
from math import *
from errors import *
import code
import alphaparam_summer13_2011 as alpha11
import alphaparam_spring13_2012 as alpha12
from alphaparam_summer13 import *
import BDTparam_BDTpaper_spring13_2011 as bdt2011
import BDTparam_BDTpaper_spring13_2012 as bdt2012
#===============================================
#
# 2011+2012 datasets as one dataset - summer 2013 analysis
# with BDT12
#
#===============================================
lumi_S20r1 = 1018. #970.7
lumi_S20 = 2028.2
#------------------------------------------------
# Parameters for the toys
#------------------------------------------------
BDT_binning_8 = [0., 0.25, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.]
BDT_binning = BDT_binning_8
def average_bybinsize( list, binning = BDT_binning ):
    """
    Average `list` (one EVal per BDT bin) weighted by bin width.
    """
if len(list) != len(binning)-1:
raise ValueError
bin_size = []
for i in range(len(binning)-1):
bin_size.append( float(binning[i+1]-binning[i]) )
list = map(lambda x,y : x*y, list, bin_size )
list = map(lambda x : x.get_value(), list)
#print bin_size
#print list
#print float(binning[-1]-binning[0])
return sum( list ) / float(binning[-1]-binning[0])
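For context, a minimal float-only sketch of the bin-width weighting that `average_bybinsize` performs; the real function operates on `EVal` error-carrying values from the `errors` module, which are not shown in this file, so plain floats stand in for them here:

```python
# Hypothetical float-only sketch of average_bybinsize; the actual code
# propagates EVal uncertainties through the same arithmetic.
def weighted_average(values, binning):
    """Average one value per bin, weighted by the width of each bin."""
    if len(values) != len(binning) - 1:
        raise ValueError("need exactly one value per bin")
    widths = [binning[i + 1] - binning[i] for i in range(len(values))]
    total = sum(v * w for v, w in zip(values, widths))
    return total / (binning[-1] - binning[0])

# e.g. two bins of widths 0.25 and 0.75:
# weighted_average([2.0, 4.0], [0.0, 0.25, 1.0]) -> 3.5
```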
#------------------------------------------------------
# DLL cut correction
# Barbara
#------------------------------------------------------
# https://groups.cern.ch/group/bsmumu-authors/Lists/Archive/Flat.aspx?RootFolder=%2Fgroup%2Fbsmumu-authors%2FLists%2FArchive%2FBDT%20corrections%20for%20PID%20cut&FolderCTID=0x01200200FDDA4DAA3D364E4F8BB386765CF56DC5
DLLCor1 = ((bdt2011.DLLCor1*lumi_S20r1)+(bdt2012.DLLCor1*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor2 = ((bdt2011.DLLCor2*lumi_S20r1)+(bdt2012.DLLCor2*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor3 = ((bdt2011.DLLCor3*lumi_S20r1)+(bdt2012.DLLCor3*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor4 = ((bdt2011.DLLCor4*lumi_S20r1)+(bdt2012.DLLCor4*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor5 = ((bdt2011.DLLCor5*lumi_S20r1)+(bdt2012.DLLCor5*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor6 = ((bdt2011.DLLCor6*lumi_S20r1)+(bdt2012.DLLCor6*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor7 = ((bdt2011.DLLCor7*lumi_S20r1)+(bdt2012.DLLCor7*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor8 = ((bdt2011.DLLCor8*lumi_S20r1)+(bdt2012.DLLCor8*lumi_S20)) / ( lumi_S20r1 + lumi_S20 )
DLLCor = [DLLCor1, DLLCor2, DLLCor3, DLLCor4, DLLCor5, DLLCor6, DLLCor7, DLLCor8]
DLLCor = map(lambda x : EVal(x,0.),DLLCor)
DLLCor_ave = average_bybinsize( DLLCor )
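The repeated `(x11*lumi_S20r1 + x12*lumi_S20)/(lumi_S20r1 + lumi_S20)` pattern above is a luminosity-weighted average of the 2011 and 2012 values. A plain-float sketch of that combination (the actual code carries `EVal` uncertainties through the arithmetic; luminosity values copied from this file):

```python
# Hypothetical helper mirroring the luminosity weighting used throughout
# this script; units are as given in the file.
LUMI_2011 = 1018.0   # lumi_S20r1
LUMI_2012 = 2028.2   # lumi_S20

def lumi_weighted(v2011, v2012, l1=LUMI_2011, l2=LUMI_2012):
    """Combine per-year values weighted by integrated luminosity."""
    return (v2011 * l1 + v2012 * l2) / (l1 + l2)
```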
#------------------------------------------------------
#
# PDF signal calibration
#
#------------------------------------------------------
# Diegos numbers
# not used anymore
# Justine Calibration
# not used anymore
# LNF calibration
# not used anymore
#------------------------------------------------------
# Zueri calibration
#------------------------------------------------------
ZFrac1 = ((bdt2011.ZFrac1*lumi_S20r1)+(bdt2012.ZFrac1*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac2 = ((bdt2011.ZFrac2*lumi_S20r1)+(bdt2012.ZFrac2*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac3 = ((bdt2011.ZFrac3*lumi_S20r1)+(bdt2012.ZFrac3*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac4 = ((bdt2011.ZFrac4*lumi_S20r1)+(bdt2012.ZFrac4*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac5 = ((bdt2011.ZFrac5*lumi_S20r1)+(bdt2012.ZFrac5*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac6 = ((bdt2011.ZFrac6*lumi_S20r1)+(bdt2012.ZFrac6*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac7 = ((bdt2011.ZFrac7*lumi_S20r1)+(bdt2012.ZFrac7*lumi_S20))/(lumi_S20r1+lumi_S20)
ZFrac8 = ((bdt2011.ZFrac8*lumi_S20r1)+(bdt2012.ZFrac8*lumi_S20))/(lumi_S20r1+lumi_S20)
[ZFrac1, ZFrac2, ZFrac3, ZFrac4, ZFrac5, ZFrac6, ZFrac7, ZFrac8] = map(lambda x: x.compress_errors(), [ZFrac1, ZFrac2, ZFrac3, ZFrac4, ZFrac5, ZFrac6, ZFrac7, ZFrac8])
#------------------------------------------------------
# time-dependant acceptance correction 'Mathieu'
#------------------------------------------------------
TimeAccbin1 = EVal( 0.967734301051, 0.000264372373692) # Mathieu 20130610
TimeAccbin2 = EVal( 0.98105170664 , 0.000348119031836) # Mathieu 20130610
TimeAccbin3 = EVal( 0.988194423339, 0.000431279272493) # Mathieu 20130610
TimeAccbin4 = EVal( 0.994875378244, 0.000491803785874) # Mathieu 20130610
TimeAccbin5 = EVal( 1.0007148427 , 0.000526551225211) # Mathieu 20130610
TimeAccbin6 = EVal( 1.00967244812 , 0.000567601916876) # Mathieu 20130610
TimeAccbin7 = EVal( 1.02838141777 , 0.000658379806371) # Mathieu 20130610
TimeAccbin8 = EVal( 1.08063251538 , 0.000897922729864) # Mathieu 20130610
TimeAcc = [TimeAccbin1, TimeAccbin2, TimeAccbin3, TimeAccbin4, TimeAccbin5, TimeAccbin6, TimeAccbin7, TimeAccbin8]
# average by bin size
TimeAcc_ave = average_bybinsize( TimeAcc )
#print 'TimeAcc_ave', TimeAcc_ave
TimeAcc_corr = map(lambda x: x/TimeAcc_ave, TimeAcc )
TimeAcc1 = TimeAcc_corr[0]
TimeAcc2 = TimeAcc_corr[1]
TimeAcc3 = TimeAcc_corr[2]
TimeAcc4 = TimeAcc_corr[3]
TimeAcc5 = TimeAcc_corr[4]
TimeAcc6 = TimeAcc_corr[5]
TimeAcc7 = TimeAcc_corr[6]
TimeAcc8 = TimeAcc_corr[7]
#------------------------------------------------------
# trigger bias correction on the GL or 'Justine' correction
#------------------------------------------------------
Justine1 = ((bdt2011.Justine1*lumi_S20r1)+(bdt2012.Justine1*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine2 = ((bdt2011.Justine2*lumi_S20r1)+(bdt2012.Justine2*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine3 = ((bdt2011.Justine3*lumi_S20r1)+(bdt2012.Justine3*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine4 = ((bdt2011.Justine4*lumi_S20r1)+(bdt2012.Justine4*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine5 = ((bdt2011.Justine5*lumi_S20r1)+(bdt2012.Justine5*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine6 = ((bdt2011.Justine6*lumi_S20r1)+(bdt2012.Justine6*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine7 = ((bdt2011.Justine7*lumi_S20r1)+(bdt2012.Justine7*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine8 = ((bdt2011.Justine8*lumi_S20r1)+(bdt2012.Justine8*lumi_S20))/(lumi_S20r1+lumi_S20)
Justine = [Justine1, Justine2, Justine3, Justine4, Justine5, Justine6, Justine7, Justine8]
# Jose- Compute the Justine average by bin size, and the correction factor
Justine_ave = average_bybinsize( Justine )
Justine_corr = map(lambda x: x/Justine_ave, Justine )
# Jose- Compute the Justine average by bin size, and the correction factor with DLL cuts
Justine_DLL = map(lambda x,y: x/y, Justine, DLLCor)
Justine_DLL_ave = average_bybinsize( Justine_DLL )
Justine_DLL_corr = map(lambda x: x/Justine_DLL_ave, Justine_DLL )
J = Justine_DLL_corr
J_ave1 = J[0] # EVal(-9999.,-9999.) # requirement of the toys ##DIEGO::CHECK And that's what will be passed to the toys under the JUSTINE labels
J_ave2 = J[1] #
J_ave3 = J[2] #
J_ave4 = J[3] #
J_ave5 = J[4] #
J_ave6 = J[5] #
J_ave7 = J[6] #
J_ave8 = J[7] #
#code.interact(local=locals())
#------------------------------------------------------
# Mass measurements
#------------------------------------------------------
#https://groups.cern.ch/group/bsmumu-authors/Lists/Archive/Flat.aspx?RootFolder=%2Fgroup%2Fbsmumu-authors%2FLists%2FArchive%2FMass%20average&FolderCTID=0x01200200FDDA4DAA3D364E4F8BB386765CF56DC5
MassMeanBs = EVal(5371.85,[0.17, 0.19]) # Christian 20130624
MassMeanBd = EVal(5284.90,[0.10, 0.20]) # Christian 20130624
#https://groups.cern.ch/group/bsmumu-authors/Lists/Archive/Flat.aspx?RootFolder=%2Fgroup%2Fbsmumu-authors%2FLists%2FArchive%2FMass%20average&FolderCTID=0x01200200FDDA4DAA3D364E4F8BB386765CF56DC5
MassResoBs = EVal(23.24,[0.08,0.44]) # Christian 20130624
MassResoBd = EVal(22.83,[0.07,0.42]) # Christian 20130624
#https://groups.cern.ch/group/bsmumu-authors/Lists/Archive/Flat.aspx?RootFolder=%2Fgroup%2Fbsmumu-authors%2FLists%2FArchive%2FMass%20average&FolderCTID=0x01200200FDDA4DAA3D364E4F8BB386765CF56DC5
CBTrans = EVal(2.065,[0.005,0.010]) # Christian 20130624
CBExpo = EVal(1.118,[0.013,0.038]) # Christian 20130624
#------------------------------------------------------
# PDF for the bkg: k-coefficient of the exp fit
#------------------------------------------------------
# https://groups.cern.ch/group/bsmumu-authors/Lists/Archive/Flat.aspx?RootFolder=%2Fgroup%2Fbsmumu-authors%2FLists%2FArchive%2Freference%20blind%20fit%20input%20for%20toys%20reference%20values&FolderCTID=0x01200200FDDA4DAA3D364E4F8BB386765CF56DC5
# exponents
BkgMassk1 = EValAsym( -6.9394e-04, +1.38e-05,-1.38e-05) # Ale 20130705
BkgMassk2 = EValAsym( -4.8992e-04, +1.00e-04,-9.98e-05) # Ale 20130705
BkgMassk3 = EValAsym( -4.7006e-04, +2.15e-04,-2.13e-04) # Ale 20130705
BkgMassk4 = EValAsym( -8.6918e-04, +3.48e-04,-3.43e-04) # Ale 20130705
BkgMassk5 = EValAsym( -3.3349e-04, +7.10e-04,-6.20e-04) # Ale 20130705
BkgMassk6 = EValAsym( +6.0893e-05, +1.12e-03,-9.28e-04) # Ale 20130705
BkgMassk7 = EValAsym( -4.1794e-04, +1.61e-03,-1.37e-03) # Ale 20130705
BkgMassk8 = EValAsym( -4.1794e-04, +1.61e-03,-1.37e-03) # Ale 20130705
# total number of events in the (used to fit) bkg sidebands
SbGL1 = 43244
SbGL2 = 1104
SbGL3 = 226
SbGL4 = 113
SbGL5 = 58
SbGL6 = 28
SbGL7 = 13
SbGL8 = 8
FracCombBin1 = EValAsym(1.2484 ,0.00052911 ,-0.0002264 ) # Ale 20130705
FracCombBin2 = EValAsym(1.2074 ,0.012774 ,-0.0092682 ) # Ale 20130705
FracCombBin3 = EValAsym(1.1542 ,0.038088 ,-0.028229 ) # Ale 20130705
FracCombBin4 = EValAsym(1.0731 ,0.074398 ,-0.058035 ) # Ale 20130705
FracCombBin5 = EValAsym(0.86539 ,0.14557 ,-0.12325 ) # Ale 20130705
FracCombBin6 = EValAsym(0.72936 ,0.20483 ,-0.15522 ) # Ale 20130705
FracCombBin7 = EValAsym(0.64307 ,0.32284 ,-0.1993 ) # Ale 20130705
FracCombBin8 = EValAsym(0.31486 ,0.46038 ,-0.2083 ) # Ale 20130705
# additional systematics
SystBkgBin1 = 0. # Marco 150212
SystBkgBin2 = 0. # Marco 250112
SystBkgBin3 = 0. # Marco 250112
SystBkgBin4 = 0. # Marco 250112
SystBkgBin5 = 0. # Marco 250112
SystBkgBin6 = 0. # Marco 250112
SystBkgBin7 = 0. # Marco 250112
SystBkgBin8 = 0. # Marco 250112
#------------------------------------------------------
# MisID Bkg
#------------------------------------------------------
def compute_nbhh_misid(dmisid,factors,justines):
#factors = map(lambda x: EVal(x,0.),factors)
bin_TIS = [TisTot,Tis2,Tis3,Tis4,Tis5,Tis6,Tis7,Tis8]
bin_Justine = justines[:]
bin_Justine[0] = EVal(1.,0.)
bin_bhhmm = map(lambda ntis,jus: ntis/(BdRatE_trg*jus),bin_TIS,bin_Justine)
Ntot = bin_bhhmm[0]
Nsum = reduce(lambda x,y:x+y,bin_bhhmm[1:])
bin_bhhmm[0] = Ntot-Nsum
bin_bhhmm_id = map(lambda f,n:f*n*dmisid,bin_bhhmm,factors)
Nbhh_id = reduce(lambda x,y:x+y,bin_bhhmm_id)
Nbhh_id2 = Ntot*dmisid
def format(eval):
return '%4.3f'%val(eval)+'+- %4.3f'%err(eval)
print ' NBhh misid total ',format(Nbhh_id),' in ave ',format(Nbhh_id2)
print ' NBhh misid bins ',str(map(lambda x:format(x),bin_bhhmm_id))
return Nbhh_id,bin_bhhmm_id
#------------------------------------------------------
# values if NO DLL cut
#------------------------------------------------------
# NOT USED ANYMORE
#------------------------------------------------------
# values WITH DLL cut
#------------------------------------------------------
BhhYield_S20r1 = EVal(20143., 572.) #EVal(19264., 550.) # CHE mail subject B->hh yields
BhhYield_S20 = EVal(49653., 507.) # CHE mail subject B->hh yields
#
BhhMisID_DLL_S20r1 = EVal(1.1e-5,[0.037e-5]) # Fatima 20130626
BhhMisID_DLL_S20r1.add_error(0.1e-5) # diff with no DeltaM cut
BhhMisID_DLL_S20r1.add_relative_error(1/17.) # trigger on probe
BhhMisID_DLL_S20r1= BhhMisID_DLL_S20r1.compress_errors()
BhhMisID_DLL_S20 = EVal(1.2e-5,0.036e-5) # Fatima 20130626
BhhMisID_DLL_S20 .add_error(0.1e-5) # diff with no DeltaM cut
BhhMisID_DLL_S20 = BhhMisID_DLL_S20.compress_errors()
MisIDGlobalFactor_S20r1 = BhhMisID_DLL_S20r1 / alpha11.BdRatE_trg
MisIDGlobalFactor_S20 = BhhMisID_DLL_S20 / alpha12.BdRatE_trg
# to be passed to table
MisIDTotalYield = MisIDGlobalFactor_S20r1*BhhYield_S20r1 + MisIDGlobalFactor_S20 * BhhYield_S20
print MisIDTotalYield
probs_S20r1 = bdt2011.probs
probs_S20 = bdt2012.probs
BhhMisID_DLL_factors_S20r1 = map(lambda x: x/BhhMisID_DLL_S20r1, probs_S20r1)
BhhMisID_DLL_factors_S20 = map(lambda x: x/BhhMisID_DLL_S20 , probs_S20)
BhhMisID_DLL_factors = map(lambda p11, p12: ((p11 * lumi_S20r1) + (p12 * lumi_S20)) / ( lumi_S20r1 + lumi_S20 ), BhhMisID_DLL_factors_S20r1, BhhMisID_DLL_factors_S20)
print BhhMisID_DLL_factors
#BhhMisID_DLL_factors = map(lambda x: x/BhhMisID_DLL, probs)
BhhMisID_DLL_factors_ave = average_bybinsize( BhhMisID_DLL_factors )
BhhMisID_DLL_factors_corr = map( lambda x : x/BhhMisID_DLL_factors_ave, BhhMisID_DLL_factors )
BhhMisID_DLL_factors_corr = map( lambda x,y: x/y, Justine_corr, BhhMisID_DLL_factors_corr )
BhhMisID_DLL_factors_corr_ave = average_bybinsize( BhhMisID_DLL_factors_corr )
BhhMisID_DLL_factors_corr = map( lambda x : x/BhhMisID_DLL_factors_corr_ave, BhhMisID_DLL_factors_corr )
def compute_nbhh_misid(BhhYield, BmmE_trg, TISE_trg, BdE_HLT2MC, BhhMisID_DLL, MisIDGlobalFactor, BhhMisID_DLL_factors_corr, BDT_frac):
print 'inputs:'
print 'BhhYield', BhhYield
print 'BmmE_trg', BmmE_trg
print 'TISE_trg', TISE_trg
print 'BdE_HLT2MC', BdE_HLT2MC
print 'BhhMisID_DLL', BhhMisID_DLL
print 'MisIDGlobalFactor', MisIDGlobalFactor, '=', BmmE_trg * BhhMisID_DLL/ (TISE_trg*BdE_HLT2MC), '(BmmE_trg * BhhMisID_DLL/ (TISE_trg*BdE_HLT2MC)'
print ' NBhh misid total ', BhhYield * MisIDGlobalFactor
print ' should be equal to ', BhhYield *BmmE_trg * BhhMisID_DLL/ (TISE_trg*BdE_HLT2MC)
BDT_frac_corr = map(lambda bdt, misIDcorr: bdt/misIDcorr, BDT_frac, BhhMisID_DLL_factors_corr)
for i in range(len(BDT_frac)):
print 'bin', i+1, ': ', BhhYield *BmmE_trg * BhhMisID_DLL/ (TISE_trg*BdE_HLT2MC) * BDT_frac_corr[i]
print 'BDT>0.8 : ', BhhYield *BmmE_trg * BhhMisID_DLL/ (TISE_trg*BdE_HLT2MC) * (BDT_frac_corr[-2] + BDT_frac_corr[-1])
def compute_nbhh_misid_combdataset(MisIDTotalYield, BhhMisID_DLL_factors_corr, BDT_frac):
print 'inputs:'
print 'MisIDTotalYield', MisIDTotalYield
BDT_frac_corr = map(lambda bdt, misIDcorr: bdt/misIDcorr, BDT_frac, BhhMisID_DLL_factors_corr)
for i in range(len(BDT_frac)):
print 'bin', i+1, ': ', MisIDTotalYield * BDT_frac_corr[i]
print 'BDT>0.8 : ', MisIDTotalYield * (BDT_frac_corr[-2] + BDT_frac_corr[-1])
compute_nbhh_misid_combdataset(MisIDTotalYield, BhhMisID_DLL_factors_corr, [ZFrac1, ZFrac2, ZFrac3, ZFrac4, ZFrac5, ZFrac6, ZFrac7, ZFrac8])
## Mis_ave = average_bybinsize( BhhMisID_DLL_factors_corr )
## BhhMisID_DLL_factors_corr = map(lambda x : x/Mis_ave, BhhMisID_DLL_factors_corr)
## BkgPeakNcan,BkgPeakNcanlist = compute_nbhh_misid(BhhMisID_DLL,BhhMisID_DLL_factors,Justine_DLL_corr)
## print ' BkgPeakNcan ',BkgPeakNcan
## # MisIDfBDTBin = BhhMisID_factors_corr ## DIEGO,MARCO -CHECK
## Mis = map(lambda x,y: x/y, Justine, )
## Mis_ave = average_bybinsize( Mis )
## BhhMisID_DLL_factors_corr = map(lambda x : x/Mis_ave, Mis)
MisIDfBDTBin1 = BhhMisID_DLL_factors_corr[0] #EVal(-999.,-999.)
MisIDfBDTBin2 = BhhMisID_DLL_factors_corr[1] #
MisIDfBDTBin3 = BhhMisID_DLL_factors_corr[2] #
MisIDfBDTBin4 = BhhMisID_DLL_factors_corr[3] #
MisIDfBDTBin5 = BhhMisID_DLL_factors_corr[4] #
MisIDfBDTBin6 = BhhMisID_DLL_factors_corr[5] #
MisIDfBDTBin7 = BhhMisID_DLL_factors_corr[6] #
MisIDfBDTBin8 = BhhMisID_DLL_factors_corr[7] #
# fraction of the peaking bkg in the Bd, Bs 60 MeV mass windows
fpeakBd = EValAsym(0.48 ,0.2 ,0.08 ) # Diego 270112
fpeakBs = EValAsym(0.088,0.03,0.021) # Diego 270112
# Measured Bs BR
BRMeasuredBs = EValAsym(0.8e-9,1.8e-9,1.3e-9)
#------------------------------------------------------
# Sidebands definition
#------------------------------------------------------
BlindWidth = 60 # Marco 210112
BMassT0 = 4900. # Marco 210112
BMassBlind0 = val(MassMeanBd)-BlindWidth # Marco 210112
BMassBlind1 = val(MassMeanBs)+BlindWidth # Marco 210112
BMassT1 = 6000. # Marco 210112
GL1MassSb1 = BMassT0 # Marco 210112
GL1MassSb2 = BMassBlind0 # Marco 210112
GL1MassSb3 = BMassBlind1 # Marco 210112
GL1MassSb4 = BMassT1 # Marco 210112
GL2MassSb1 = BMassT0 # Marco 210112
GL2MassSb2 = BMassBlind0 # Marco 210112
GL2MassSb3 = BMassBlind1 # Marco 210112
GL2MassSb4 = BMassT1 # Marco 210112
GL3MassSb1 = BMassT0 # Marco 210112
GL3MassSb2 = BMassBlind0 # Marco 210112
GL3MassSb3 = BMassBlind1 # Marco 210112
GL3MassSb4 = BMassT1 # Marco 210112
GL4MassSb1 = BMassT0 # Marco 210112
GL4MassSb2 = BMassBlind0 # Marco 210112
GL4MassSb3 = BMassBlind1 # Marco 210112
GL4MassSb4 = BMassT1 # Marco 210112
GL5MassSb1 = BMassT0 # Marco 210112
GL5MassSb2 = BMassBlind0 # Marco 210112
GL5MassSb3 = BMassBlind1 # Marco 210112
GL5MassSb4 = BMassT1 # Marco 210112
GL6MassSb1 = BMassT0 # Marco 210112
GL6MassSb2 = BMassBlind0 # Marco 210112
GL6MassSb3 = BMassBlind1 # Marco 210112
GL6MassSb4 = BMassT1 # Marco 210112
GL7MassSb1 = BMassT0 # Marco 210112
GL7MassSb2 = BMassBlind0 # Marco 210112
GL7MassSb3 = BMassBlind1 # Marco 210112
GL7MassSb4 = BMassT1 # Marco 210112
GL8MassSb1 = BMassT0 # Marco 210112
GL8MassSb2 = BMassBlind0 # Marco 210112
GL8MassSb3 = BMassBlind1 # Marco 210112
GL8MassSb4 = BMassT1 # Marco 210112
| [
"liblhcb@cern.ch"
] | liblhcb@cern.ch |