repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
bgxavier/nova | nova/tests/unit/virt/hyperv/test_base.py | 24 | 1073 | # Copyright 2014 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import test
class HyperVBaseTestCase(test.NoDBTestCase):
def setUp(self):
super(HyperVBaseTestCase, self).setUp()
wmi_patcher = mock.patch('__builtin__.wmi', create=True)
platform_patcher = mock.patch('sys.platform', 'win32')
platform_patcher.start()
wmi_patcher.start()
self.addCleanup(wmi_patcher.stop)
self.addCleanup(platform_patcher.stop)
| apache-2.0 |
MalloyPower/parsing-python | front-end/testsuite-python-lib/Python-3.5.0/Lib/pipes.py | 172 | 8916 | """Conversion pipeline templates.
The problem:
------------
Suppose you have some data that you want to convert to another format,
such as from GIF image format to PPM image format. Maybe the
conversion involves several steps (e.g. piping it through compress or
uuencode). Some of the conversion steps may require that their input
is a disk file, others may be able to read standard input; similar for
their output. The input to the entire conversion may also be read
from a disk file or from an open file, and similar for its output.
The module lets you construct a pipeline template by sticking one or
more conversion steps together. It will take care of creating and
removing temporary files if they are necessary to hold intermediate
data. You can then use the template to do conversions from many
different sources to many different destinations. The temporary
file names used are different each time the template is used.
The templates are objects so you can create templates for many
different conversion steps and store them in a dictionary, for
instance.
Directions:
-----------
To create a template:
t = Template()
To add a conversion step to a template:
t.append(command, kind)
where kind is a string of two characters: the first is '-' if the
command reads its standard input or 'f' if it requires a file; the
second likewise for the output. The command must be valid /bin/sh
syntax. If input or output files are required, they are passed as
$IN and $OUT; otherwise, it must be possible to use the command in
a pipeline.
To add a conversion step at the beginning:
t.prepend(command, kind)
To convert a file to another file using a template:
sts = t.copy(infile, outfile)
If infile or outfile are the empty string, standard input is read or
standard output is written, respectively. The return value is the
exit status of the conversion pipeline.
To open a file for reading or writing through a conversion pipeline:
fp = t.open(file, mode)
where mode is 'r' to read the file, or 'w' to write it -- just like
for the built-in function open() or for os.popen().
To create a new template object initialized to a given one:
t2 = t.clone()
""" # '
import re
import os
import tempfile
# we import the quote function rather than the module for backward compat
# (quote used to be an undocumented but used function in pipes)
from shlex import quote
__all__ = ["Template"]
# Conversion step kinds
FILEIN_FILEOUT = 'ff' # Must read & write real files
STDIN_FILEOUT = '-f' # Must write a real file
FILEIN_STDOUT = 'f-' # Must read a real file
STDIN_STDOUT = '--' # Normal pipeline element
SOURCE = '.-' # Must be first, writes stdout
SINK = '-.' # Must be last, reads stdin
stepkinds = [FILEIN_FILEOUT, STDIN_FILEOUT, FILEIN_STDOUT, STDIN_STDOUT, \
SOURCE, SINK]
class Template:
"""Class representing a pipeline template."""
def __init__(self):
"""Template() returns a fresh pipeline template."""
self.debugging = 0
self.reset()
def __repr__(self):
"""t.__repr__() implements repr(t)."""
return '<Template instance, steps=%r>' % (self.steps,)
def reset(self):
"""t.reset() restores a pipeline template to its initial state."""
self.steps = []
def clone(self):
"""t.clone() returns a new pipeline template with identical
initial state as the current one."""
t = Template()
t.steps = self.steps[:]
t.debugging = self.debugging
return t
def debug(self, flag):
"""t.debug(flag) turns debugging on or off."""
self.debugging = flag
def append(self, cmd, kind):
"""t.append(cmd, kind) adds a new step at the end."""
if type(cmd) is not type(''):
raise TypeError('Template.append: cmd must be a string')
if kind not in stepkinds:
raise ValueError('Template.append: bad kind %r' % (kind,))
if kind == SOURCE:
raise ValueError('Template.append: SOURCE can only be prepended')
if self.steps and self.steps[-1][1] == SINK:
raise ValueError('Template.append: already ends with SINK')
if kind[0] == 'f' and not re.search(r'\$IN\b', cmd):
raise ValueError('Template.append: missing $IN in cmd')
if kind[1] == 'f' and not re.search(r'\$OUT\b', cmd):
raise ValueError('Template.append: missing $OUT in cmd')
self.steps.append((cmd, kind))
def prepend(self, cmd, kind):
"""t.prepend(cmd, kind) adds a new step at the front."""
if type(cmd) is not type(''):
raise TypeError('Template.prepend: cmd must be a string')
if kind not in stepkinds:
raise ValueError('Template.prepend: bad kind %r' % (kind,))
if kind == SINK:
raise ValueError('Template.prepend: SINK can only be appended')
if self.steps and self.steps[0][1] == SOURCE:
raise ValueError('Template.prepend: already begins with SOURCE')
if kind[0] == 'f' and not re.search(r'\$IN\b', cmd):
raise ValueError('Template.prepend: missing $IN in cmd')
if kind[1] == 'f' and not re.search(r'\$OUT\b', cmd):
raise ValueError('Template.prepend: missing $OUT in cmd')
self.steps.insert(0, (cmd, kind))
def open(self, file, rw):
"""t.open(file, rw) returns a pipe or file object open for
reading or writing; the file is the other end of the pipeline."""
if rw == 'r':
return self.open_r(file)
if rw == 'w':
return self.open_w(file)
raise ValueError('Template.open: rw must be \'r\' or \'w\', not %r'
% (rw,))
def open_r(self, file):
"""t.open_r(file) and t.open_w(file) implement
t.open(file, 'r') and t.open(file, 'w') respectively."""
if not self.steps:
return open(file, 'r')
if self.steps[-1][1] == SINK:
raise ValueError('Template.open_r: pipeline ends with SINK')
cmd = self.makepipeline(file, '')
return os.popen(cmd, 'r')
def open_w(self, file):
if not self.steps:
return open(file, 'w')
if self.steps[0][1] == SOURCE:
raise ValueError('Template.open_w: pipeline begins with SOURCE')
cmd = self.makepipeline('', file)
return os.popen(cmd, 'w')
def copy(self, infile, outfile):
return os.system(self.makepipeline(infile, outfile))
def makepipeline(self, infile, outfile):
cmd = makepipeline(infile, self.steps, outfile)
if self.debugging:
print(cmd)
cmd = 'set -x; ' + cmd
return cmd
def makepipeline(infile, steps, outfile):
# Build a list with for each command:
# [input filename or '', command string, kind, output filename or '']
list = []
for cmd, kind in steps:
list.append(['', cmd, kind, ''])
#
# Make sure there is at least one step
#
if not list:
list.append(['', 'cat', '--', ''])
#
# Take care of the input and output ends
#
[cmd, kind] = list[0][1:3]
if kind[0] == 'f' and not infile:
list.insert(0, ['', 'cat', '--', ''])
list[0][0] = infile
#
[cmd, kind] = list[-1][1:3]
if kind[1] == 'f' and not outfile:
list.append(['', 'cat', '--', ''])
list[-1][-1] = outfile
#
# Invent temporary files to connect stages that need files
#
garbage = []
for i in range(1, len(list)):
lkind = list[i-1][2]
rkind = list[i][2]
if lkind[1] == 'f' or rkind[0] == 'f':
(fd, temp) = tempfile.mkstemp()
os.close(fd)
garbage.append(temp)
list[i-1][-1] = list[i][0] = temp
#
for item in list:
[inf, cmd, kind, outf] = item
if kind[1] == 'f':
cmd = 'OUT=' + quote(outf) + '; ' + cmd
if kind[0] == 'f':
cmd = 'IN=' + quote(inf) + '; ' + cmd
if kind[0] == '-' and inf:
cmd = cmd + ' <' + quote(inf)
if kind[1] == '-' and outf:
cmd = cmd + ' >' + quote(outf)
item[1] = cmd
#
cmdlist = list[0][1]
for item in list[1:]:
[cmd, kind] = item[1:3]
if item[0] == '':
if 'f' in kind:
cmd = '{ ' + cmd + '; }'
cmdlist = cmdlist + ' |\n' + cmd
else:
cmdlist = cmdlist + '\n' + cmd
#
if garbage:
rmcmd = 'rm -f'
for file in garbage:
rmcmd = rmcmd + ' ' + quote(file)
trapcmd = 'trap ' + quote(rmcmd + '; exit') + ' 1 2 3 13 14 15'
cmdlist = trapcmd + '\n' + cmdlist + '\n' + rmcmd
#
return cmdlist
| mit |
koniiiik/django | django/shortcuts.py | 117 | 5429 | """
This module collects helper functions and classes that "span" multiple levels
of MVC. In other words, these functions/classes introduce controlled coupling
for convenience's sake.
"""
from django.http import (
Http404, HttpResponse, HttpResponsePermanentRedirect, HttpResponseRedirect,
)
from django.template import loader
from django.urls import NoReverseMatch, reverse
from django.utils import six
from django.utils.encoding import force_text
from django.utils.functional import Promise
def render_to_response(template_name, context=None, content_type=None, status=None, using=None):
"""
Returns a HttpResponse whose content is filled with the result of calling
django.template.loader.render_to_string() with the passed arguments.
"""
content = loader.render_to_string(template_name, context, using=using)
return HttpResponse(content, content_type, status)
def render(request, template_name, context=None, content_type=None, status=None, using=None):
"""
Returns a HttpResponse whose content is filled with the result of calling
django.template.loader.render_to_string() with the passed arguments.
"""
content = loader.render_to_string(template_name, context, request, using=using)
return HttpResponse(content, content_type, status)
def redirect(to, *args, **kwargs):
"""
Returns an HttpResponseRedirect to the appropriate URL for the arguments
passed.
The arguments could be:
* A model: the model's `get_absolute_url()` function will be called.
* A view name, possibly with arguments: `urls.reverse()` will be used
to reverse-resolve the name.
* A URL, which will be used as-is for the redirect location.
By default issues a temporary redirect; pass permanent=True to issue a
permanent redirect
"""
if kwargs.pop('permanent', False):
redirect_class = HttpResponsePermanentRedirect
else:
redirect_class = HttpResponseRedirect
return redirect_class(resolve_url(to, *args, **kwargs))
def _get_queryset(klass):
"""
Return a QuerySet or a Manager.
Duck typing in action: any class with a `get()` method (for
get_object_or_404) or a `filter()` method (for get_list_or_404) might do
the job.
"""
# If it is a model class or anything else with ._default_manager
if hasattr(klass, '_default_manager'):
return klass._default_manager.all()
return klass
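A minimal sketch of the duck typing described above, using hypothetical stand-in classes rather than real Django models:

```python
# Hypothetical stand-ins (not real Django classes) showing the duck typing
# that _get_queryset() relies on: anything with a ._default_manager
# attribute is treated as a model class; everything else passes through.
class FakeManager:
    def all(self):
        return ['queryset']

class FakeModel:
    _default_manager = FakeManager()

def get_queryset_sketch(klass):
    # mirrors _get_queryset() above
    if hasattr(klass, '_default_manager'):
        return klass._default_manager.all()
    return klass

assert get_queryset_sketch(FakeModel) == ['queryset']
assert get_queryset_sketch([1, 2]) == [1, 2]
```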
def get_object_or_404(klass, *args, **kwargs):
"""
Uses get() to return an object, or raises a Http404 exception if the object
does not exist.
klass may be a Model, Manager, or QuerySet object. All other passed
arguments and keyword arguments are used in the get() query.
Note: Like with get(), a MultipleObjectsReturned will be raised if more than one
object is found.
"""
queryset = _get_queryset(klass)
try:
return queryset.get(*args, **kwargs)
except AttributeError:
klass__name = klass.__name__ if isinstance(klass, type) else klass.__class__.__name__
raise ValueError(
"First argument to get_object_or_404() must be a Model, Manager, "
"or QuerySet, not '%s'." % klass__name
)
except queryset.model.DoesNotExist:
raise Http404('No %s matches the given query.' % queryset.model._meta.object_name)
def get_list_or_404(klass, *args, **kwargs):
"""
Uses filter() to return a list of objects, or raise a Http404 exception if
the list is empty.
klass may be a Model, Manager, or QuerySet object. All other passed
arguments and keyword arguments are used in the filter() query.
"""
queryset = _get_queryset(klass)
try:
obj_list = list(queryset.filter(*args, **kwargs))
except AttributeError:
klass__name = klass.__name__ if isinstance(klass, type) else klass.__class__.__name__
raise ValueError(
"First argument to get_list_or_404() must be a Model, Manager, or "
"QuerySet, not '%s'." % klass__name
)
if not obj_list:
raise Http404('No %s matches the given query.' % queryset.model._meta.object_name)
return obj_list
def resolve_url(to, *args, **kwargs):
"""
Return a URL appropriate for the arguments passed.
The arguments could be:
* A model: the model's `get_absolute_url()` function will be called.
* A view name, possibly with arguments: `urls.reverse()` will be used
to reverse-resolve the name.
* A URL, which will be returned as-is.
"""
# If it's a model, use get_absolute_url()
if hasattr(to, 'get_absolute_url'):
return to.get_absolute_url()
if isinstance(to, Promise):
# Expand the lazy instance, as it can cause issues when it is passed
# further to some Python functions like urlparse.
to = force_text(to)
if isinstance(to, six.string_types):
# Handle relative URLs
if to.startswith(('./', '../')):
return to
# Next try a reverse URL resolution.
try:
return reverse(to, args=args, kwargs=kwargs)
except NoReverseMatch:
# If this is a callable, re-raise.
if callable(to):
raise
# If this doesn't "feel" like a URL, re-raise.
if '/' not in to and '.' not in to:
raise
# Finally, fall back and assume it's a URL
return to
| bsd-3-clause |
klmitch/neutron | neutron/agent/linux/dibbler.py | 8 | 6893 | # Copyright 2015 Cisco Systems
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
import jinja2
from oslo_config import cfg
from oslo_log import log as logging
import six
from neutron.agent.linux import external_process
from neutron.agent.linux import pd
from neutron.agent.linux import pd_driver
from neutron.agent.linux import utils
from neutron.common import constants
from neutron.common import utils as common_utils
LOG = logging.getLogger(__name__)
PD_SERVICE_NAME = 'dibbler'
CONFIG_TEMPLATE = jinja2.Template("""
# Config for dibbler-client.
# Use enterprise number based duid
duid-type duid-en {{ enterprise_number }} {{ va_id }}
# 8 (Debug) is most verbose. 7 (Info) is usually the best option
log-level 8
# No automatic downlink address assignment
downlink-prefix-ifaces "none"
# Use script to notify l3_agent of assigned prefix
script {{ script_path }}
# Ask for prefix over the external gateway interface
iface {{ interface_name }} {
# Bind to generated LLA
bind-to-address {{ bind_address }}
# ask for address
pd 1
}
""")
# The first line must be #!/usr/bin/env bash
SCRIPT_TEMPLATE = jinja2.Template("""#!/usr/bin/env bash
exec neutron-pd-notify $1 {{ prefix_path }} {{ l3_agent_pid }}
""")
class PDDibbler(pd_driver.PDDriverBase):
def __init__(self, router_id, subnet_id, ri_ifname):
super(PDDibbler, self).__init__(router_id, subnet_id, ri_ifname)
self.requestor_id = "%s:%s:%s" % (self.router_id,
self.subnet_id,
self.ri_ifname)
self.dibbler_client_working_area = "%s/%s" % (cfg.CONF.pd_confs,
self.requestor_id)
self.prefix_path = "%s/prefix" % self.dibbler_client_working_area
self.pid_path = "%s/client.pid" % self.dibbler_client_working_area
self.converted_subnet_id = self.subnet_id.replace('-', '')
def _is_dibbler_client_running(self):
return utils.get_value_from_file(self.pid_path)
def _generate_dibbler_conf(self, ex_gw_ifname, lla):
dcwa = self.dibbler_client_working_area
script_path = utils.get_conf_file_name(dcwa, 'notify', 'sh', True)
buf = six.StringIO()
buf.write('%s' % SCRIPT_TEMPLATE.render(
prefix_path=self.prefix_path,
l3_agent_pid=os.getpid()))
common_utils.replace_file(script_path, buf.getvalue())
os.chmod(script_path, 0o744)
dibbler_conf = utils.get_conf_file_name(dcwa, 'client', 'conf', False)
buf = six.StringIO()
buf.write('%s' % CONFIG_TEMPLATE.render(
enterprise_number=cfg.CONF.vendor_pen,
va_id='0x%s' % self.converted_subnet_id,
script_path='"%s/notify.sh"' % dcwa,
interface_name='"%s"' % ex_gw_ifname,
bind_address='%s' % lla))
common_utils.replace_file(dibbler_conf, buf.getvalue())
return dcwa
def _spawn_dibbler(self, pmon, router_ns, dibbler_conf):
def callback(pid_file):
dibbler_cmd = ['dibbler-client',
'start',
'-w', '%s' % dibbler_conf]
return dibbler_cmd
pm = external_process.ProcessManager(
uuid=self.requestor_id,
default_cmd_callback=callback,
namespace=router_ns,
service=PD_SERVICE_NAME,
conf=cfg.CONF,
pid_file=self.pid_path)
pm.enable(reload_cfg=False)
pmon.register(uuid=self.requestor_id,
service_name=PD_SERVICE_NAME,
monitored_process=pm)
def enable(self, pmon, router_ns, ex_gw_ifname, lla):
LOG.debug("Enable IPv6 PD for router %s subnet %s ri_ifname %s",
self.router_id, self.subnet_id, self.ri_ifname)
if not self._is_dibbler_client_running():
dibbler_conf = self._generate_dibbler_conf(ex_gw_ifname, lla)
self._spawn_dibbler(pmon, router_ns, dibbler_conf)
LOG.debug("dibbler client enabled for router %s subnet %s"
" ri_ifname %s",
self.router_id, self.subnet_id, self.ri_ifname)
def disable(self, pmon, router_ns):
LOG.debug("Disable IPv6 PD for router %s subnet %s ri_ifname %s",
self.router_id, self.subnet_id, self.ri_ifname)
dcwa = self.dibbler_client_working_area
def callback(pid_file):
dibbler_cmd = ['dibbler-client',
'stop',
'-w', '%s' % dcwa]
return dibbler_cmd
pmon.unregister(uuid=self.requestor_id,
service_name=PD_SERVICE_NAME)
pm = external_process.ProcessManager(
uuid=self.requestor_id,
namespace=router_ns,
service=PD_SERVICE_NAME,
conf=cfg.CONF,
pid_file=self.pid_path)
pm.disable(get_stop_command=callback)
shutil.rmtree(dcwa, ignore_errors=True)
LOG.debug("dibbler client disabled for router %s subnet %s "
"ri_ifname %s",
self.router_id, self.subnet_id, self.ri_ifname)
def get_prefix(self):
prefix = utils.get_value_from_file(self.prefix_path)
if not prefix:
prefix = constants.PROVISIONAL_IPV6_PD_PREFIX
return prefix
@staticmethod
def get_sync_data():
try:
requestor_ids = os.listdir(cfg.CONF.pd_confs)
except OSError:
return []
sync_data = []
requestors = (r.split(':') for r in requestor_ids if r.count(':') == 2)
for router_id, subnet_id, ri_ifname in requestors:
pd_info = pd.PDInfo()
pd_info.router_id = router_id
pd_info.subnet_id = subnet_id
pd_info.ri_ifname = ri_ifname
pd_info.driver = PDDibbler(router_id, subnet_id, ri_ifname)
pd_info.client_started = (
pd_info.driver._is_dibbler_client_running())
pd_info.prefix = pd_info.driver.get_prefix()
sync_data.append(pd_info)
return sync_data
| apache-2.0 |
HyperBaton/ansible | lib/ansible/modules/network/avi/avi_microservicegroup.py | 28 | 3952 | #!/usr/bin/python
#
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
#
# Copyright: (c) 2017 Gaurav Rastogi, <grastogi@avinetworks.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_microservicegroup
author: Gaurav Rastogi (@grastogi23) <grastogi@avinetworks.com>
short_description: Module for setup of MicroServiceGroup Avi RESTful Object
description:
- This module is used to configure MicroServiceGroup object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent", "present"]
avi_api_update_method:
description:
- Default method for object update is HTTP PUT.
- Setting to patch will override that behavior to use HTTP PATCH.
version_added: "2.5"
default: put
choices: ["put", "patch"]
avi_api_patch_op:
description:
- Patch operation to use when using avi_api_update_method as patch.
version_added: "2.5"
choices: ["add", "replace", "delete"]
created_by:
description:
- Creator name.
description:
description:
- User defined description for the object.
name:
description:
- Name of the microservice group.
required: true
service_refs:
description:
- Configure microservice(es).
- It is a reference to an object of type microservice.
tenant_ref:
description:
- It is a reference to an object of type tenant.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the microservice group.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Create a Microservice Group that can be used for setting up Network security policy
avi_microservicegroup:
controller: '{{ controller }}'
username: '{{ username }}'
password: '{{ password }}'
description: Group created by my Secure My App UI.
name: vs-msg-marketing
tenant_ref: admin
"""
RETURN = '''
obj:
description: MicroServiceGroup (api/microservicegroup) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.network.avi.avi import (
avi_common_argument_spec, avi_ansible_api, HAS_AVI)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
avi_api_update_method=dict(default='put',
choices=['put', 'patch']),
avi_api_patch_op=dict(choices=['add', 'replace', 'delete']),
created_by=dict(type='str',),
description=dict(type='str',),
name=dict(type='str', required=True),
service_refs=dict(type='list',),
tenant_ref=dict(type='str',),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) or requests is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'microservicegroup',
set([]))
if __name__ == '__main__':
main()
| gpl-3.0 |
gcode-mirror/audacity | lib-src/lv2/lv2/plugins/eg01-amp.lv2/waflib/extras/doxygen.py | 105 | 4815 | #! /usr/bin/env python
# encoding: utf-8
# WARNING! Do not edit! http://waf.googlecode.com/git/docs/wafbook/single.html#_obtaining_the_waf_file
from fnmatch import fnmatchcase
import os,os.path,re,stat
from waflib import Task,Utils,Node,Logs
from waflib.TaskGen import feature
DOXY_STR='${DOXYGEN} - '
DOXY_FMTS='html latex man rft xml'.split()
DOXY_FILE_PATTERNS='*.'+' *.'.join('''
c cc cxx cpp c++ java ii ixx ipp i++ inl h hh hxx hpp h++ idl odl cs php php3
inc m mm py f90
'''.split())
re_rl=re.compile('\\\\\r*\n',re.MULTILINE)
re_nl=re.compile('\r*\n',re.M)
def parse_doxy(txt):
tbl={}
txt=re_rl.sub('',txt)
lines=re_nl.split(txt)
for x in lines:
x=x.strip()
if not x or x.startswith('#')or x.find('=')<0:
continue
if x.find('+=')>=0:
tmp=x.split('+=')
key=tmp[0].strip()
if key in tbl:
tbl[key]+=' '+'+='.join(tmp[1:]).strip()
else:
tbl[key]='+='.join(tmp[1:]).strip()
else:
tmp=x.split('=')
tbl[tmp[0].strip()]='='.join(tmp[1:]).strip()
return tbl
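The Doxyfile parser above can be exercised in isolation. Here it is reproduced with a small sample fragment (the keys are illustrative, not from a real project), showing how `+=` lines accumulate onto an existing key:

```python
import re

# regexes reproduced from the doxygen tool above
re_rl = re.compile(r'\\\r*\n', re.MULTILINE)
re_nl = re.compile(r'\r*\n', re.M)

def parse_doxy(txt):
    # reproduced from the waf doxygen tool above
    tbl = {}
    txt = re_rl.sub('', txt)          # join backslash-continued lines
    for x in re_nl.split(txt):
        x = x.strip()
        if not x or x.startswith('#') or x.find('=') < 0:
            continue
        if x.find('+=') >= 0:
            tmp = x.split('+=')
            key = tmp[0].strip()
            if key in tbl:
                tbl[key] += ' ' + '+='.join(tmp[1:]).strip()
            else:
                tbl[key] = '+='.join(tmp[1:]).strip()
        else:
            tmp = x.split('=')
            tbl[tmp[0].strip()] = '='.join(tmp[1:]).strip()
    return tbl

conf = parse_doxy("INPUT = src\nINPUT += include\n# a comment\nQUIET=YES\n")
assert conf['INPUT'] == 'src include'
assert conf['QUIET'] == 'YES'
```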
class doxygen(Task.Task):
vars=['DOXYGEN','DOXYFLAGS']
color='BLUE'
def runnable_status(self):
'''
self.pars are populated in runnable_status - because this function is being
run *before* both self.pars "consumers" - scan() and run()
set output_dir (node) for the output
'''
for x in self.run_after:
if not x.hasrun:
return Task.ASK_LATER
if not getattr(self,'pars',None):
txt=self.inputs[0].read()
self.pars=parse_doxy(txt)
if not self.pars.get('OUTPUT_DIRECTORY'):
self.pars['OUTPUT_DIRECTORY']=self.inputs[0].parent.get_bld().abspath()
self.doxy_inputs=getattr(self,'doxy_inputs',[])
if not self.pars.get('INPUT'):
self.doxy_inputs.append(self.inputs[0].parent)
else:
for i in self.pars.get('INPUT').split():
if os.path.isabs(i):
node=self.generator.bld.root.find_node(i)
else:
node=self.generator.path.find_node(i)
if not node:
self.generator.bld.fatal('Could not find the doxygen input %r'%i)
self.doxy_inputs.append(node)
if not getattr(self,'output_dir',None):
bld=self.generator.bld
self.output_dir=bld.root.find_dir(self.pars['OUTPUT_DIRECTORY'])
if not self.output_dir:
self.output_dir=bld.path.find_or_declare(self.pars['OUTPUT_DIRECTORY'])
self.signature()
return Task.Task.runnable_status(self)
def scan(self):
if self.pars.get('RECURSIVE')=='YES':
Logs.warn("Doxygen RECURSIVE dependencies are not supported")
exclude_patterns=self.pars.get('EXCLUDE_PATTERNS','').split()
file_patterns=self.pars.get('FILE_PATTERNS','').split()
if not file_patterns:
file_patterns=DOXY_FILE_PATTERNS
nodes=[]
names=[]
for node in self.doxy_inputs:
if os.path.isdir(node.abspath()):
for m in node.ant_glob(file_patterns):
nodes.append(m)
else:
nodes.append(node)
return(nodes,names)
def run(self):
dct=self.pars.copy()
dct['INPUT']=' '.join([x.abspath()for x in self.doxy_inputs])
code='\n'.join(['%s = %s'%(x,dct[x])for x in self.pars])
code=code
cmd=Utils.subst_vars(DOXY_STR,self.env)
env=self.env.env or None
proc=Utils.subprocess.Popen(cmd,shell=True,stdin=Utils.subprocess.PIPE,env=env,cwd=self.generator.bld.path.get_bld().abspath())
proc.communicate(code)
return proc.returncode
def post_run(self):
nodes=self.output_dir.ant_glob('**/*',quiet=True)
for x in nodes:
x.sig=Utils.h_file(x.abspath())
self.outputs+=nodes
return Task.Task.post_run(self)
class tar(Task.Task):
run_str='${TAR} ${TAROPTS} ${TGT} ${SRC}'
color='RED'
after=['doxygen']
def runnable_status(self):
for x in getattr(self,'input_tasks',[]):
if not x.hasrun:
return Task.ASK_LATER
if not getattr(self,'tar_done_adding',None):
self.tar_done_adding=True
for x in getattr(self,'input_tasks',[]):
self.set_inputs(x.outputs)
if not self.inputs:
return Task.SKIP_ME
return Task.Task.runnable_status(self)
def __str__(self):
tgt_str=' '.join([a.nice_path(self.env)for a in self.outputs])
return'%s: %s\n'%(self.__class__.__name__,tgt_str)
@feature('doxygen')
def process_doxy(self):
if not getattr(self,'doxyfile',None):
self.generator.bld.fatal('no doxyfile??')
node=self.doxyfile
if not isinstance(node,Node.Node):
node=self.path.find_resource(node)
if not node:
raise ValueError('doxygen file not found')
dsk=self.create_task('doxygen',node)
if getattr(self,'doxy_tar',None):
tsk=self.create_task('tar')
tsk.input_tasks=[dsk]
tsk.set_outputs(self.path.find_or_declare(self.doxy_tar))
if self.doxy_tar.endswith('bz2'):
tsk.env['TAROPTS']=['cjf']
elif self.doxy_tar.endswith('gz'):
tsk.env['TAROPTS']=['czf']
else:
tsk.env['TAROPTS']=['cf']
def configure(conf):
conf.find_program('doxygen',var='DOXYGEN')
conf.find_program('tar',var='TAR')
| gpl-2.0 |
stinebuu/nest-simulator | pynest/examples/clopath_synapse_spike_pairing.py | 12 | 5804 | # -*- coding: utf-8 -*-
#
# clopath_synapse_spike_pairing.py
#
# This file is part of NEST.
#
# Copyright (C) 2004 The NEST Initiative
#
# NEST is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# NEST is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with NEST. If not, see <http://www.gnu.org/licenses/>.
"""
Clopath Rule: Spike pairing experiment
----------------------------------------
This script simulates one ``aeif_psc_delta_clopath`` neuron that is connected with
a Clopath connection [1]_. The synapse receives pairs of a pre- and a postsynaptic
spikes that are separated by either 10 ms (pre before post) or -10 ms (post
before pre). The change of the synaptic weight is measured after five of such
pairs. This experiment is repeated five times with different rates of the
sequence of the spike pairs: 10Hz, 20Hz, 30Hz, 40Hz, and 50Hz.
References
~~~~~~~~~~~
.. [1] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010). Connectivity reflects coding:
a model of voltage-based STDP with homeostasis.
Nature Neuroscience 13:3, 344--352
"""
import numpy as np
import matplotlib.pyplot as plt
import nest
##############################################################################
# First we specify the neuron parameters. To enable voltage dependent
# prefactor ``A_LTD(u_bar_bar)`` add ``A_LTD_const: False`` to the dictionary.
nrn_params = {'V_m': -70.6,
'E_L': -70.6,
'C_m': 281.0,
'theta_minus': -70.6,
'theta_plus': -45.3,
'A_LTD': 14.0e-5,
'A_LTP': 8.0e-5,
'tau_minus': 10.0,
'tau_plus': 7.0,
'delay_u_bars': 4.0,
'a': 4.0,
'b': 0.0805,
'V_reset': -70.6 + 21.0,
'V_clamp': 33.0,
't_clamp': 2.0,
't_ref': 0.0,
}
##############################################################################
# Hardcoded spike times of presynaptic spike generator
spike_times_pre = [
# Presynaptic spike before the postsynaptic
[20., 120., 220., 320., 420.],
[20., 70., 120., 170., 220.],
[20., 53.3, 86.7, 120., 153.3],
[20., 45., 70., 95., 120.],
[20., 40., 60., 80., 100.],
# Presynaptic spike after the postsynaptic
[120., 220., 320., 420., 520., 620.],
[70., 120., 170., 220., 270., 320.],
[53.3, 86.6, 120., 153.3, 186.6, 220.],
[45., 70., 95., 120., 145., 170.],
[40., 60., 80., 100., 120., 140.]]
##############################################################################
# Hardcoded spike times of postsynaptic spike generator
spike_times_post = [
[10., 110., 210., 310., 410.],
[10., 60., 110., 160., 210.],
[10., 43.3, 76.7, 110., 143.3],
[10., 35., 60., 85., 110.],
[10., 30., 50., 70., 90.],
[130., 230., 330., 430., 530., 630.],
[80., 130., 180., 230., 280., 330.],
[63.3, 96.6, 130., 163.3, 196.6, 230.],
[55., 80., 105., 130., 155., 180.],
[50., 70., 90., 110., 130., 150.]]
init_w = 0.5
syn_weights = []
resolution = 0.1
##############################################################################
# Loop over pairs of spike trains
for (s_t_pre, s_t_post) in zip(spike_times_pre, spike_times_post):
nest.ResetKernel()
nest.SetKernelStatus({"resolution": resolution})
# Create one neuron
nrn = nest.Create("aeif_psc_delta_clopath", 1, nrn_params)
# We need a parrot neuron since spike generators can only
# be connected with static connections
prrt_nrn = nest.Create("parrot_neuron", 1)
# Create and connect spike generators
spike_gen_pre = nest.Create("spike_generator", 1, {
"spike_times": s_t_pre})
nest.Connect(spike_gen_pre, prrt_nrn,
syn_spec={"delay": resolution})
spike_gen_post = nest.Create("spike_generator", 1, {
"spike_times": s_t_post})
nest.Connect(spike_gen_post, nrn, syn_spec={
"delay": resolution, "weight": 80.0})
# Create weight recorder
wr = nest.Create('weight_recorder', 1)
# Create Clopath connection with weight recorder
nest.CopyModel("clopath_synapse", "clopath_synapse_rec",
{"weight_recorder": wr})
syn_dict = {"synapse_model": "clopath_synapse_rec",
"weight": init_w, "delay": resolution}
nest.Connect(prrt_nrn, nrn, syn_spec=syn_dict)
# Simulation
simulation_time = (10.0 + max(s_t_pre[-1], s_t_post[-1]))
nest.Simulate(simulation_time)
# Extract and save synaptic weights
weights = wr.get("events", "weights")
syn_weights.append(weights[-1])
syn_weights = np.array(syn_weights)
# scaling of the weights so that they are comparable to [1]
syn_weights = 100.0*15.0*(syn_weights - init_w)/init_w + 100.0
# Plot results
fig1, axA = plt.subplots(1, sharex=False)
axA.plot([10., 20., 30., 40., 50.], syn_weights[5:], color='b', lw=2.5, ls='-',
label="pre-post pairing")
axA.plot([10., 20., 30., 40., 50.], syn_weights[:5], color='g', lw=2.5, ls='-',
label="post-pre pairing")
axA.set_ylabel("normalized weight change")
axA.set_xlabel("rho (Hz)")
axA.legend()
axA.set_title("synaptic weight")
plt.show()
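As a quick sanity check on the scaling step above, a toy computation (the raw final weights are invented, not taken from a simulation) shows how the formula maps an unchanged weight onto the 100 % baseline:

```python
# Toy check of the weight normalization used above; raw_weights are made up.
init_w = 0.5
raw_weights = [0.5, 0.55, 0.45]  # hypothetical final synaptic weights

normalized = [100.0 * 15.0 * (w - init_w) / init_w + 100.0 for w in raw_weights]
# an unchanged weight stays at the 100 % baseline; a +10 % change maps to 250
```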
| gpl-2.0 |
theseyi/WhereHows | wherehows-etl/src/main/resources/jython/requests/packages/urllib3/util/connection.py | 353 | 3380 | from __future__ import absolute_import
import socket
try:
from select import poll, POLLIN
except ImportError: # `poll` doesn't exist on OSX and other platforms
poll = False
try:
from select import select
except ImportError: # `select` doesn't exist on AppEngine.
select = False
def is_connection_dropped(conn): # Platform-specific
"""
Returns True if the connection is dropped and should be closed.
:param conn:
:class:`httplib.HTTPConnection` object.
Note: For platforms like AppEngine, this will always return ``False`` to
let the platform handle connection recycling transparently for us.
"""
sock = getattr(conn, 'sock', False)
if sock is False: # Platform-specific: AppEngine
return False
if sock is None: # Connection already closed (such as by httplib).
return True
if not poll:
if not select: # Platform-specific: AppEngine
return False
try:
return select([sock], [], [], 0.0)[0]
except socket.error:
return True
# This version is better on platforms that support it.
p = poll()
p.register(sock, POLLIN)
for (fno, ev) in p.poll(0.0):
        if fno == sock.fileno():
            # Either data is buffered (bad), or the connection is dropped.
            return True
    return False
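The `select` fallback above treats readability as a drop signal because a closed peer leaves an EOF pending on the socket. A minimal self-contained check of that behaviour (assumes a platform where `socket.socketpair` is available):

```python
import select
import socket

a, b = socket.socketpair()
b.close()  # peer side goes away; an EOF is now pending on `a`
readable, _, _ = select.select([a], [], [], 0.0)
dropped = a in readable  # readable with no data queued means the peer hung up
a.close()
```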
# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
# This is the only addition urllib3 makes to this function.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
raise err
raise socket.error("getaddrinfo returns an empty list")
def _set_socket_options(sock, options):
if options is None:
return
for opt in options:
sock.setsockopt(*opt)
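The `socket_options` mechanism above expects a list of `setsockopt` argument tuples, applied in order before connecting. A short usage sketch (the particular option chosen is just an example):

```python
import socket

options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]  # e.g. disable Nagle
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for opt in options:  # same loop as _set_socket_options above
    sock.setsockopt(*opt)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```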
| apache-2.0 |
CTSRD-SOAAP/chromium-42.0.2311.135 | breakpad/src/tools/python/filter_syms.py | 76 | 8032 | #!/usr/bin/env python
# Copyright (c) 2012 Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Normalizes and de-duplicates paths within Breakpad symbol files.
When using DWARF for storing debug symbols, some file information will be
stored relative to the current working directory of the current compilation
unit, and may be further relativized based upon how the file was #included.
This helper can be used to parse the Breakpad symbol file generated from such
DWARF files and normalize and de-duplicate the FILE records found within,
updating any references to the FILE records in the other record types.
"""
import macpath
import ntpath
import optparse
import os
import posixpath
import sys
class BreakpadParseError(Exception):
"""Unsupported Breakpad symbol record exception class."""
pass
class SymbolFileParser(object):
"""Parser for Breakpad symbol files.
The format of these files is documented at
https://code.google.com/p/google-breakpad/wiki/SymbolFiles
"""
def __init__(self, input_stream, output_stream, ignored_prefixes=None,
path_handler=os.path):
"""Inits a SymbolFileParser to read symbol records from |input_stream| and
write the processed output to |output_stream|.
|ignored_prefixes| contains a list of optional path prefixes that
should be stripped from the final, normalized path outputs.
For example, if the Breakpad symbol file had all paths starting with a
common prefix, such as:
FILE 1 /b/build/src/foo.cc
FILE 2 /b/build/src/bar.cc
Then adding "/b/build/src" as an ignored prefix would result in an output
file that contained:
FILE 1 foo.cc
FILE 2 bar.cc
Note that |ignored_prefixes| does not necessarily contain file system
    paths, as the contents of the DWARF DW_AT_comp_dir attribute are dependent
upon the host system and compiler, and may contain additional information
such as hostname or compiler version.
"""
self.unique_files = {}
self.duplicate_files = {}
self.input_stream = input_stream
self.output_stream = output_stream
self.ignored_prefixes = ignored_prefixes or []
self.path_handler = path_handler
def Process(self):
"""Processes the Breakpad symbol file."""
for line in self.input_stream:
parsed = self._ParseRecord(line.rstrip())
if parsed:
self.output_stream.write(parsed + '\n')
def _ParseRecord(self, record):
"""Parses a single Breakpad symbol record - a single line from the symbol
file.
Returns:
The modified string to write to the output file, or None if no line
should be written.
"""
record_type = record.partition(' ')[0]
if record_type == 'FILE':
return self._ParseFileRecord(record)
elif self._IsLineRecord(record_type):
return self._ParseLineRecord(record)
else:
# Simply pass the record through unaltered.
return record
def _NormalizePath(self, path):
"""Normalizes a file path to its canonical form.
As this may not execute on the machine or file system originally
responsible for compilation, it may be necessary to further correct paths
for symlinks, junctions, or other such file system indirections.
Returns:
      A unique, canonical representation for the file path.
"""
return self.path_handler.normpath(path)
def _AdjustPath(self, path):
"""Adjusts the supplied path after performing path de-duplication.
This may be used to perform secondary adjustments, such as removing a
common prefix, such as "/D/build", or replacing the file system path with
information from the version control system.
Returns:
The actual path to use when writing the FILE record.
"""
return path[len(filter(path.startswith,
self.ignored_prefixes + [''])[0]):]
def _ParseFileRecord(self, file_record):
"""Parses and corrects a FILE record."""
file_info = file_record[5:].split(' ', 3)
if len(file_info) > 2:
raise BreakpadParseError('Unsupported FILE record: ' + file_record)
file_index = int(file_info[0])
file_name = self._NormalizePath(file_info[1])
existing_file_index = self.unique_files.get(file_name)
if existing_file_index is None:
self.unique_files[file_name] = file_index
file_info[1] = self._AdjustPath(file_name)
return 'FILE ' + ' '.join(file_info)
else:
self.duplicate_files[file_index] = existing_file_index
return None
def _IsLineRecord(self, record_type):
"""Determines if the current record type is a Line record"""
try:
line = int(record_type, 16)
except (ValueError, TypeError):
return False
return True
def _ParseLineRecord(self, line_record):
"""Parses and corrects a Line record."""
line_info = line_record.split(' ', 5)
if len(line_info) > 4:
raise BreakpadParseError('Unsupported Line record: ' + line_record)
file_index = int(line_info[3])
line_info[3] = str(self.duplicate_files.get(file_index, file_index))
return ' '.join(line_info)
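The interplay of `_ParseFileRecord` and `_ParseLineRecord` can be seen in a condensed Python 3 sketch of the same de-duplication idea (toy records; this is not the class above, just the core bookkeeping):

```python
# FILE records with duplicate paths are dropped and their indices remapped;
# line records then have their file index rewritten through the remap table.
unique_files, duplicate_files, output = {}, {}, []
records = [
    "FILE 1 /b/src/foo.cc",
    "FILE 2 /b/src/foo.cc",  # duplicate path: dropped, index 2 remapped to 1
    "1000 10 42 2",          # line record that references FILE 2
]
for record in records:
    if record.startswith("FILE "):
        index, name = record[5:].split(" ", 1)
        if name in unique_files:
            duplicate_files[index] = unique_files[name]  # remember the remap
        else:
            unique_files[name] = index
            output.append(record)
    else:
        address, size, line, file_index = record.split(" ")
        file_index = duplicate_files.get(file_index, file_index)
        output.append(" ".join([address, size, line, file_index]))
```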
def main():
option_parser = optparse.OptionParser()
option_parser.add_option("-p", "--prefix",
action="append", dest="prefixes", type="string",
default=[],
help="A path prefix that should be removed from "
"all FILE lines. May be repeated to specify "
"multiple prefixes.")
option_parser.add_option("-t", "--path_type",
action="store", type="choice", dest="path_handler",
choices=['win32', 'posix'],
help="Indicates how file paths should be "
"interpreted. The default is to treat paths "
"the same as the OS running Python (eg: "
"os.path)")
options, args = option_parser.parse_args()
if args:
option_parser.error('Unknown argument: %s' % args)
path_handler = { 'win32': ntpath,
'posix': posixpath }.get(options.path_handler, os.path)
try:
symbol_parser = SymbolFileParser(sys.stdin, sys.stdout, options.prefixes,
path_handler)
symbol_parser.Process()
except BreakpadParseError, e:
print >> sys.stderr, 'Got an error while processing symbol file'
print >> sys.stderr, str(e)
return 1
return 0
if __name__ == '__main__':
sys.exit(main())
| bsd-3-clause |
tmuelle2/phantomjs | src/qt/qtbase/src/3rdparty/freetype/src/tools/docmaker/utils.py | 515 | 3063 | # Utils (c) 2002, 2004, 2007, 2008 David Turner <david@freetype.org>
#
import string, sys, os, glob
# current output directory
#
output_dir = None
# This function is used to sort the index. It is a simple lexicographical
# sort, except that it places capital letters before lowercase ones.
#
def index_sort( s1, s2 ):
if not s1:
return -1
if not s2:
return 1
l1 = len( s1 )
l2 = len( s2 )
m1 = string.lower( s1 )
m2 = string.lower( s2 )
for i in range( l1 ):
if i >= l2 or m1[i] > m2[i]:
return 1
if m1[i] < m2[i]:
return -1
if s1[i] < s2[i]:
return -1
if s1[i] > s2[i]:
return 1
if l2 > l1:
return -1
return 0
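index_sort is a Python 2 cmp-style comparator; under Python 3 the same ordering could be obtained through functools.cmp_to_key. A behaviour-equivalent sketch:

```python
from functools import cmp_to_key

def index_cmp(s1, s2):
    # mirrors index_sort above: case-insensitive, capitals before lowercase
    if not s1:
        return -1
    if not s2:
        return 1
    for c1, c2, r1, r2 in zip(s1.lower(), s2.lower(), s1, s2):
        if c1 != c2:
            return -1 if c1 < c2 else 1
        if r1 != r2:
            return -1 if r1 < r2 else 1
    return (len(s1) > len(s2)) - (len(s1) < len(s2))

ordered = sorted(["beta", "Alpha", "alpha"], key=cmp_to_key(index_cmp))
# "Alpha" sorts before "alpha": capitals win ties
```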
# Sort input_list, placing the elements of order_list in front.
#
def sort_order_list( input_list, order_list ):
new_list = order_list[:]
for id in input_list:
if not id in order_list:
new_list.append( id )
return new_list
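sort_order_list simply front-loads the ids from order_list and appends the rest in input order; a self-contained sketch of that behaviour:

```python
def sort_order_list(input_list, order_list):
    # keep order_list first, then append the remaining input ids in order
    new_list = order_list[:]
    for item in input_list:
        if item not in order_list:
            new_list.append(item)
    return new_list

merged = sort_order_list(["api", "misc", "intro"], ["intro", "api"])
# ids named in the order list come first; "misc" is appended at the end
```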
# Open the standard output to a given project documentation file. Use
# "output_dir" to determine the filename location if necessary and save the
# old stdout in a tuple that is returned by this function.
#
def open_output( filename ):
global output_dir
if output_dir and output_dir != "":
filename = output_dir + os.sep + filename
old_stdout = sys.stdout
new_file = open( filename, "w" )
sys.stdout = new_file
return ( new_file, old_stdout )
# Close the output that was returned by "close_output".
#
def close_output( output ):
output[0].close()
sys.stdout = output[1]
# Check output directory.
#
def check_output():
global output_dir
if output_dir:
if output_dir != "":
if not os.path.isdir( output_dir ):
sys.stderr.write( "argument" + " '" + output_dir + "' " + \
"is not a valid directory" )
sys.exit( 2 )
else:
output_dir = None
def file_exists( pathname ):
"""checks that a given file exists"""
result = 1
try:
file = open( pathname, "r" )
file.close()
except:
result = None
sys.stderr.write( pathname + " couldn't be accessed\n" )
return result
def make_file_list( args = None ):
"""builds a list of input files from command-line arguments"""
file_list = []
# sys.stderr.write( repr( sys.argv[1 :] ) + '\n' )
if not args:
args = sys.argv[1 :]
for pathname in args:
if string.find( pathname, '*' ) >= 0:
newpath = glob.glob( pathname )
newpath.sort() # sort files -- this is important because
# of the order of files
else:
newpath = [pathname]
file_list.extend( newpath )
if len( file_list ) == 0:
file_list = None
else:
# now filter the file list to remove non-existing ones
file_list = filter( file_exists, file_list )
return file_list
# eof
| bsd-3-clause |
ravindrapanda/tensorflow | tensorflow/python/estimator/canned/head_test.py | 4 | 151483 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for head.py."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import six
from tensorflow.core.framework import summary_pb2
from tensorflow.python.estimator import model_fn
from tensorflow.python.estimator.canned import dnn_testing_utils
from tensorflow.python.estimator.canned import head as head_lib
from tensorflow.python.estimator.canned import metric_keys
from tensorflow.python.estimator.canned import prediction_keys
from tensorflow.python.estimator.inputs import numpy_io
from tensorflow.python.feature_column import feature_column as feature_column_lib
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import string_ops
from tensorflow.python.ops.losses import losses
from tensorflow.python.platform import test
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.training import monitored_session
from tensorflow.python.training import queue_runner_impl
_DEFAULT_SERVING_KEY = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
def _initialize_variables(test_case, scaffold):
scaffold.finalize()
test_case.assertIsNone(scaffold.init_feed_dict)
test_case.assertIsNone(scaffold.init_fn)
scaffold.init_op.run()
scaffold.ready_for_local_init_op.eval()
scaffold.local_init_op.run()
scaffold.ready_op.eval()
test_case.assertIsNotNone(scaffold.saver)
def _assert_simple_summaries(test_case, expected_summaries, summary_str,
tol=1e-6):
"""Assert summary the specified simple values.
Args:
test_case: test case.
expected_summaries: Dict of expected tags and simple values.
summary_str: Serialized `summary_pb2.Summary`.
    tol: Tolerance used for both relative and absolute comparisons.
"""
summary = summary_pb2.Summary()
summary.ParseFromString(summary_str)
test_case.assertAllClose(expected_summaries, {
v.tag: v.simple_value for v in summary.value
}, rtol=tol, atol=tol)
def _assert_no_hooks(test_case, spec):
test_case.assertAllEqual([], spec.training_chief_hooks)
test_case.assertAllEqual([], spec.training_hooks)
def _sigmoid(logits):
return 1 / (1 + np.exp(-logits))
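The hard-coded `expected_probabilities` values used in the tests below are plain softmax outputs; a quick standalone sketch reproducing the numbers for logits `[1., 0., 0.]`:

```python
import math

def softmax(logits):
    # standard softmax: exponentiate, then normalize to sum to 1
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1., 0., 0.])
# probs is approximately [0.576117, 0.2119416, 0.2119416],
# the values hand-written in test_predict
```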
class MultiClassHeadWithSoftmaxCrossEntropyLoss(test.TestCase):
def setUp(self):
ops.reset_default_graph()
def test_n_classes_is_none(self):
with self.assertRaisesRegexp(ValueError, 'n_classes must be > 2'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=None)
def test_n_classes_is_2(self):
with self.assertRaisesRegexp(ValueError, 'n_classes must be > 2'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=2)
def test_invalid_loss_reduction(self):
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: invalid_loss_reduction'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_reduction='invalid_loss_reduction')
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: none'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_reduction=losses.Reduction.NONE)
def test_loss_fn_arg_labels_missing(self):
def _loss_fn(logits):
del logits # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: labels\. '
r'Given arguments: \(\'logits\',\)'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
def test_loss_fn_arg_logits_missing(self):
def _loss_fn(labels):
del labels # unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: logits\. '
r'Given arguments: \(\'labels\',\)'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
def test_loss_fn_arg_features_ok(self):
def _loss_fn(labels, logits, features):
del labels, logits, features # Unused
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
def test_loss_fn_arg_invalid(self):
def _loss_fn(labels, logits, name=None):
del labels, logits, name # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn has unexpected args: \[\'name\'\]'):
head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
def test_invalid_logits_shape(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
# Logits should be shape (batch_size, 3).
logits_2x2 = np.array(((45., 44.), (41., 42.),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'logits shape'):
head.create_estimator_spec(
features={'x': np.array(((30.,), (42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_2x2)
# Dynamic shape.
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
spec = head.create_estimator_spec(
features={'x': np.array(((30.,), (42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_placeholder)
with self.test_session():
with self.assertRaisesRegexp(errors.OpError, 'logits shape'):
spec.predictions[prediction_keys.PredictionKeys.PROBABILITIES].eval({
logits_placeholder: logits_2x2
})
def test_invalid_labels_shape(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
# Logits should be shape (batch_size, 3).
# Labels should be shape (batch_size, 1).
labels_2x2 = np.array(((45, 44), (41, 42),), dtype=np.int)
logits_2x3 = np.array(((1., 2., 3.), (1., 2., 3.),))
features = {'x': np.array(((42.,),))}
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Mismatched label shape'):
head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits_2x3,
labels=labels_2x2)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.int64)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 1\] \[labels_shape: \] \[2 2\]'):
training_loss.eval({
logits_placeholder: logits_2x3,
labels_placeholder: labels_2x2
})
def test_invalid_labels_type(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
# Logits should be shape (batch_size, 3).
# Labels should be shape (batch_size, 1).
labels_2x1 = np.array(((1.,), (1.,),))
logits_2x3 = np.array(((1., 2., 3.), (1., 2., 3.),))
features = {'x': np.array(((42.,),))}
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Labels dtype'):
head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits_2x3,
labels=labels_2x1)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.float32)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
with self.assertRaisesRegexp(ValueError, 'Labels dtype'):
head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)
def test_invalid_labels_values(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
labels_2x1_with_large_id = np.array(((45,), (1,),), dtype=np.int)
labels_2x1_with_negative_id = np.array(((-5,), (1,),), dtype=np.int)
logits_2x3 = np.array(((1., 2., 4.), (1., 2., 3.),))
labels_placeholder = array_ops.placeholder(dtype=dtypes.int64)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
training_loss = head.create_loss(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesOpError('Label IDs must < n_classes'):
training_loss.eval({
labels_placeholder: labels_2x1_with_large_id,
logits_placeholder: logits_2x3
})
with self.test_session():
with self.assertRaisesOpError('Label IDs must >= 0'):
training_loss.eval({
labels_placeholder: labels_2x1_with_negative_id,
logits_placeholder: logits_2x3
})
def test_invalid_labels_sparse_tensor(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
labels_2x1 = sparse_tensor.SparseTensor(
values=['english', 'italian'],
indices=[[0, 0], [1, 0]],
dense_shape=[2, 1])
logits_2x3 = np.array(((1., 2., 4.), (1., 2., 3.),))
with self.assertRaisesRegexp(
ValueError, 'SparseTensor labels are not supported.'):
head.create_loss(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.EVAL,
logits=logits_2x3,
labels=labels_2x1)
def test_incompatible_labels_shape(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
# Logits should be shape (batch_size, 3).
# Labels should be shape (batch_size, 1).
# Here batch sizes are different.
values_3x1 = np.array(((1,), (1,), (1,),))
values_2x3 = np.array(((1., 2., 3.), (1., 2., 3.),))
features = {'x': values_2x3}
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Dimensions must be equal'):
head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=values_2x3,
labels=values_3x1)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.int64)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 1\] \[labels_shape: \] \[3 1\]'):
training_loss.eval({
labels_placeholder: values_3x1,
logits_placeholder: values_2x3
})
def test_name(self):
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, name='foo')
self.assertEqual('foo', head.name)
def test_predict(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
self.assertEqual(n_classes, head.logits_dimension)
logits = [[1., 0., 0.], [0., 0., 1.]]
expected_probabilities = [[0.576117, 0.2119416, 0.2119416],
[0.2119416, 0.2119416, 0.576117]]
expected_class_ids = [[0], [2]]
expected_classes = [[b'0'], [b'2']]
expected_export_classes = [[b'0', b'1', b'2']] * 2
spec = head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
self.assertItemsEqual(
(_DEFAULT_SERVING_KEY, 'predict', 'classification'),
spec.export_outputs.keys())
# Assert predictions and export_outputs.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
predictions = sess.run(spec.predictions)
self.assertAllClose(logits,
predictions[prediction_keys.PredictionKeys.LOGITS])
self.assertAllClose(
expected_probabilities,
predictions[prediction_keys.PredictionKeys.PROBABILITIES])
self.assertAllClose(expected_class_ids,
predictions[prediction_keys.PredictionKeys.CLASS_IDS])
self.assertAllEqual(expected_classes,
predictions[prediction_keys.PredictionKeys.CLASSES])
self.assertAllClose(
expected_probabilities,
sess.run(spec.export_outputs[_DEFAULT_SERVING_KEY].scores))
self.assertAllEqual(
expected_export_classes,
sess.run(spec.export_outputs[_DEFAULT_SERVING_KEY].classes))
def test_predict_with_vocabulary_list(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, label_vocabulary=['aang', 'iroh', 'zuko'])
logits = [[1., 0., 0.], [0., 0., 1.]]
expected_classes = [[b'aang'], [b'zuko']]
expected_export_classes = [[b'aang', b'iroh', b'zuko']] * 2
spec = head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertAllEqual(
expected_classes,
sess.run(spec.predictions[prediction_keys.PredictionKeys.CLASSES]))
self.assertAllEqual(
expected_export_classes,
sess.run(spec.export_outputs[_DEFAULT_SERVING_KEY].classes))
def test_weight_should_not_impact_prediction(self):
n_classes = 3
logits = [[1., 0., 0.], [0., 0., 1.]]
expected_probabilities = [[0.576117, 0.2119416, 0.2119416],
[0.2119416, 0.2119416, 0.576117]]
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, weight_column='label_weights')
weights_2x1 = [[1.], [2.]]
spec = head.create_estimator_spec(
features={
'x': np.array(((42,),), dtype=np.int32),
'label_weights': weights_2x1,
},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
predictions = sess.run(spec.predictions)
self.assertAllClose(logits,
predictions[prediction_keys.PredictionKeys.LOGITS])
self.assertAllClose(
expected_probabilities,
predictions[prediction_keys.PredictionKeys.PROBABILITIES])
def test_eval_create_loss(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
    # loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
    expected_training_loss = 10.
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_eval_create_loss_loss_fn(self):
"""Tests head.create_loss for eval mode and custom loss_fn."""
loss = np.array([[1.], [2.]], dtype=np.float32)
logits_input = np.array([[-10., 10., 0.], [-15., 10., 0]], dtype=np.float32)
labels_input = np.array([[1], [2]], dtype=np.int64)
def _loss_fn(labels, logits):
check_labels = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(labels, labels_input)),
data=[labels])
check_logits = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(logits, logits_input)),
data=[logits])
with ops.control_dependencies([check_labels, check_logits]):
return constant_op.constant(loss)
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits_input,
labels=labels_input)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(np.sum(loss), actual_training_loss.eval())
def test_eval_create_loss_loss_fn_wrong_shape(self):
"""Tests custom loss_fn that returns Tensor of unexpected shape."""
loss = np.array([1., 2.], dtype=np.float32)
def _loss_fn(labels, logits):
del labels, logits # Unused
return constant_op.constant(loss)
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_fn=_loss_fn)
logits = np.array([[-10., 10., 0.], [-15., 10., 0.]], dtype=np.float32)
labels = np.array([[1], [2]], dtype=np.int64)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[loss_fn must return Tensor of shape \[D0, D1, ... DN, 1\]\. \] '
r'\[logits_shape: \] \[2 3\] \[loss_shape: \] \[2\]'):
actual_training_loss.eval()
def test_eval_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3)
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32),
labels=None)
def test_eval(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
expected_loss = 10.
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_loss / 2,
keys.ACCURACY: 0.5, # 1 of 2 labels is correct.
}
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, and metrics.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval()
for k in value_ops},
rtol=tol,
atol=tol)
def test_eval_metric_ops_with_head_name(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, name='some_multiclass_head')
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
expected_metric_keys = [
'{}/some_multiclass_head'.format(metric_keys.MetricKeys.LOSS_MEAN),
'{}/some_multiclass_head'.format(metric_keys.MetricKeys.ACCURACY)
]
self.assertItemsEqual(expected_metric_keys, spec.eval_metric_ops.keys())
def test_eval_with_regularization_losses(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(10, 0) / 2 = 5.
expected_unregularized_loss = 5.
expected_regularized_loss = (
expected_unregularized_loss + expected_regularization_loss)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels,
regularization_losses=regularization_losses)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_unregularized_loss,
keys.LOSS_REGULARIZATION: expected_regularization_loss,
keys.ACCURACY: 0.5, # 1 of 2 labels is correct.
}
# Assert predictions, loss, and metrics.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_regularized_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval()
for k in value_ops},
rtol=tol,
atol=tol)
def test_eval_with_label_vocabulary_create_loss(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, label_vocabulary=['aang', 'iroh', 'zuko'])
logits = [[10., 0, 0], [0, 10, 0]]
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
    # unreduced_loss = cross_entropy(labels, logits) = [10, 0].
    # training_loss = sum(unreduced_loss) = 10.
expected_training_loss = 10.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_eval_with_label_vocabulary(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, label_vocabulary=['aang', 'iroh', 'zuko'])
logits = [[10., 0, 0], [0, 10, 0]]
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
# loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
expected_loss = 10.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_loss / 2,
keys.ACCURACY: 0.5, # 1 of 2 labels is correct.
}
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops},
rtol=tol, atol=tol)
def test_weighted_multi_example_eval(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, weight_column='label_weights')
# Create estimator spec.
logits = np.array(((10, 0, 0), (0, 10, 0), (0, 0, 10),), dtype=np.float32)
labels = np.array(((1,), (2,), (2,)), dtype=np.int64)
weights_3x1 = np.array(((1.,), (2.,), (3.,)), dtype=np.float64)
# loss = sum(cross_entropy(labels, logits) * [1, 2, 3])
# = sum([10, 10, 0] * [1, 2, 3]) = 30
expected_loss = 30.
spec = head.create_estimator_spec(
features={
'x': np.array(((42,),), dtype=np.int32),
'label_weights': weights_3x1,
},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_loss / np.sum(weights_3x1),
        # Weighted accuracy: only the third example (weight 3.) is predicted
        # correctly, so accuracy = 3. / (1. + 2. + 3.) = 0.5.
keys.ACCURACY: 0.5,
}
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert loss, and metrics.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops},
rtol=tol, atol=tol)
def test_train_create_loss(self):
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# unreduced_loss = cross_entropy(labels, logits) = [10, 0].
expected_unreduced_loss = [[10.], [0.]]
# Weights default to 1.
expected_weights = 1.
# training_loss = 1 * 10 + 1 * 0
expected_training_loss = 10.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
tol = 1e-2
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(expected_weights, actual_weights)
def test_train_create_loss_loss_reduction(self):
"""Tests create_loss with loss_reduction."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, loss_reduction=losses.Reduction.SUM_BY_NONZERO_WEIGHTS)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# unreduced_loss = cross_entropy(labels, logits) = [10, 0].
expected_unreduced_loss = [[10.], [0.]]
# Weights default to 1.
expected_weights = 1.
    # training_loss = (1 * 10 + 1 * 0) / num_nonzero_weights = 10 / 2 = 5.
expected_training_loss = 10. / 2.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
tol = 1e-2
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(expected_weights, actual_weights)
def test_train_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3)
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.TRAIN,
logits=np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32),
labels=None,
train_op_fn=_no_op_train_fn)
def test_train(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(n_classes)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
expected_train_result = 'my_train_op'
def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
# loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
expected_loss = 10.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
self.assertIsNotNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
metric_keys.MetricKeys.LOSS_MEAN: expected_loss / 2,
}, summary_str, tol)
def test_train_summaries_with_head_name(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, name='some_multiclass_head')
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
# loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
expected_loss = 10.
features = {'x': np.array(((42,),), dtype=np.int32)}
def _train_op_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
summary_str = sess.run(spec.scaffold.summary_op)
_assert_simple_summaries(self, {
'{}/some_multiclass_head'.format(metric_keys.MetricKeys.LOSS):
expected_loss,
'{}/some_multiclass_head'.format(metric_keys.MetricKeys.LOSS_MEAN):
expected_loss / 2,
}, summary_str, tol)
def test_train_with_regularization_losses(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
expected_train_result = 'my_train_op'
def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(10, 0) / 2 = 5.
# loss = unregularized_loss + regularization_loss = 7.
expected_loss = 7.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn,
regularization_losses=regularization_losses)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
metric_keys.MetricKeys.LOSS_REGULARIZATION: (
expected_regularization_loss),
}, summary_str, tol)
def test_train_one_dim_create_loss(self):
"""Tests create_loss with 1D labels and weights (shape [batch_size])."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='label_weights')
logits = np.array(((10, 0, 0), (0, 10, 0), (0, 0, 10),), dtype=np.float32)
labels_rank_1 = np.array((1, 2, 2,), dtype=np.int64)
weights_rank_1 = np.array((1., 2., 3.,), dtype=np.float64)
features = {
'x': np.array(((42,),), dtype=np.float32),
'label_weights': weights_rank_1
}
# unreduced_loss = cross_entropy(labels, logits) = [10, 10, 0].
expected_unreduced_loss = [[10.], [10.], [0.]]
# weights are reshaped to [3, 1] to match logits.
expected_weights = [[1.], [2.], [3.]]
# training_loss = 1 * 10 + 2 * 10 + 3 * 0 = 30.
expected_training_loss = 30.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1)
tol = 1e-2
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(expected_weights, actual_weights.eval())
def test_train_one_dim(self):
"""Tests train with 1D labels and weights (shape [batch_size])."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='label_weights')
logits = np.array(((10, 0, 0), (0, 10, 0), (0, 0, 10),), dtype=np.float32)
labels_rank_1 = np.array((1, 2, 2,), dtype=np.int64)
weights_rank_1 = np.array((1., 2., 3.,), dtype=np.float64)
self.assertEqual((3,), labels_rank_1.shape)
self.assertEqual((3,), weights_rank_1.shape)
expected_train_result = 'my_train_op'
def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
# loss = sum(cross_entropy(labels, logits) * [1, 2, 3])
# = sum([10, 10, 0] * [1, 2, 3]) = 30
expected_loss = 30.
features = {
'x': np.array(((42,),), dtype=np.float32),
'label_weights': weights_rank_1
}
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1,
train_op_fn=_train_op_fn)
self.assertIsNotNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
metric_keys.MetricKeys.LOSS_MEAN: (
expected_loss / np.sum(weights_rank_1)),
}, summary_str, tol)
def test_train_with_vocabulary_create_loss(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, label_vocabulary=['aang', 'iroh', 'zuko'])
logits = [[10., 0, 0], [0, 10, 0]]
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
    # unreduced_loss = cross_entropy(labels, logits) = [10, 0].
    # training_loss = sum(unreduced_loss) = 10.
expected_training_loss = 10.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_train_with_vocabulary(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, label_vocabulary=['aang', 'iroh', 'zuko'])
logits = [[10., 0, 0], [0, 10, 0]]
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
def _train_op_fn(loss):
del loss
return control_flow_ops.no_op()
# loss = sum(cross_entropy(labels, logits)) = sum(10, 0) = 10.
expected_loss = 10.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
loss = sess.run(spec.loss)
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
def test_weighted_multi_example_train(self):
n_classes = 3
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes, weight_column='label_weights')
# Create estimator spec.
logits = np.array(((10, 0, 0), (0, 10, 0), (0, 0, 10),), dtype=np.float32)
labels = np.array(((1,), (2,), (2,)), dtype=np.int64)
weights_3x1 = np.array(((1.,), (2.,), (3.,)), dtype=np.float64)
expected_train_result = 'my_train_op'
# loss = sum(cross_entropy(labels, logits) * [1, 2, 3])
# = sum([10, 10, 0] * [1, 2, 3]) = 30
expected_loss = 30.
def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
spec = head.create_estimator_spec(
features={
'x': np.array(((42,),), dtype=np.float32),
'label_weights': weights_3x1,
},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
self.assertIsNotNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss mean = sum(cross_entropy(labels, logits) * [1,2,3]) / (1+2+3)
# = sum([10, 10, 0] * [1, 2, 3]) / 6 = 30 / 6
metric_keys.MetricKeys.LOSS_MEAN:
expected_loss / np.sum(weights_3x1),
}, summary_str, tol)
def test_multi_dim_weighted_train_create_loss(self):
"""Logits of shape [2, 2, 2], labels [2, 2, 1], weights [2, 2]."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='weights')
logits = np.array([[[10, 0, 0], [12, 0, 0]],
[[0, 10, 0], [0, 15, 0]]], dtype=np.float32)
labels = np.array([[[0], [1]], [[1], [2]]], dtype=np.int64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
# unreduced_loss = cross_entropy(labels, logits) = [[0, 12], [0, 15]].
expected_unreduced_loss = [[[0.], [12.]], [[0.], [15.]]]
# weights are reshaped to [2, 2, 1] to match logits.
expected_weights = [[[1.], [1.5]], [[2.], [2.5]]]
# training_loss = 1*0 + 1.5*12 + 2*0 + 2.5*15 = 55.5
expected_training_loss = 55.5
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
tol = 1e-2
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(), rtol=tol, atol=tol)
self.assertAllClose(expected_weights, actual_weights.eval())
def test_multi_dim_weighted_train(self):
"""Logits of shape [2, 2, 2], labels [2, 2, 1], weights [2, 2]."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='weights')
logits = np.array([[[10, 0, 0], [12, 0, 0]],
[[0, 10, 0], [0, 15, 0]]], dtype=np.float32)
labels = np.array([[[0], [1]], [[1], [2]]], dtype=np.int64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
expected_train_result = 'my_train_op'
def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
# loss = cross_entropy(labels, logits) = [[0, 12], [0, 15]].
# weighted_sum_loss = 1*0 + 1.5*12 + 2*0 + 2.5*15 = 55.5
expected_loss = 55.5
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
loss, train_result = sess.run((spec.loss, spec.train_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)
def test_multi_dim_train_weights_wrong_inner_dim(self):
"""Logits of shape [2, 2, 2], labels [2, 2, 1], weights [2, 1]."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='weights')
logits = np.array([[[10, 0, 0], [12, 0, 0]],
[[0, 10, 0], [0, 15, 0]]], dtype=np.float32)
labels = np.array([[[0], [1]], [[1], [2]]], dtype=np.int64)
weights = np.array([[1.], [2.]], dtype=np.float32)
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \] \[2 2 3\] \[weights_shape: \] \[2 1\]'):
spec.loss.eval()
def test_multi_dim_train_weights_wrong_outer_dim(self):
"""Logits of shape [2, 2, 2], labels [2, 2, 1], weights [2, 2, 3]."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='weights')
logits = np.array([[[10, 0, 0], [12, 0, 0]],
[[0, 10, 0], [0, 15, 0]]], dtype=np.float32)
labels = np.array([[[0], [1]], [[1], [2]]], dtype=np.int64)
weights = np.array([[[1., 1.1, 1.2], [1.5, 1.6, 1.7]],
[[2., 2.1, 2.2], [2.5, 2.6, 2.7]]])
weights_placeholder = array_ops.placeholder(dtype=dtypes.float32)
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features={'weights': weights_placeholder},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \]\s\[2 2 3\]\s\[weights_shape: \]\s\[2 2 3\]'):
spec.loss.eval({weights_placeholder: weights})
def test_multi_dim_weighted_eval(self):
"""Logits of shape [2, 2, 2], labels [2, 2, 1], weights [2, 2]."""
head = head_lib._multi_class_head_with_softmax_cross_entropy_loss(
n_classes=3, weight_column='weights')
logits = np.array([[[10, 0, 0], [12, 0, 0]],
[[0, 10, 0], [0, 15, 0]]], dtype=np.float32)
labels = np.array([[[0], [1]], [[1], [2]]], dtype=np.int64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
# loss = cross_entropy(labels, logits) = [[0, 12], [0, 15]].
# weighted_sum_loss = 1*0 + 1.5*12 + 2*0 + 2.5*15 = 55.5
expected_loss = 55.5
# Create estimator spec.
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_loss / np.sum(weights),
keys.ACCURACY: (1.*1. + 1.5*0. + 2.*1. + 2.5*0.) / np.sum(weights),
}
# Assert predictions, loss, and metrics.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops},
rtol=tol, atol=tol)
class BinaryLogisticHeadWithSigmoidCrossEntropyLossTest(test.TestCase):
def setUp(self):
ops.reset_default_graph()
def test_threshold_too_small(self):
with self.assertRaisesRegexp(ValueError, r'thresholds not in \(0, 1\)'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
thresholds=(0., 0.5))
def test_threshold_too_large(self):
with self.assertRaisesRegexp(ValueError, r'thresholds not in \(0, 1\)'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
thresholds=(0.5, 1.))
def test_invalid_loss_reduction(self):
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: invalid_loss_reduction'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_reduction='invalid_loss_reduction')
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: none'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_reduction=losses.Reduction.NONE)
def test_loss_fn_arg_labels_missing(self):
def _loss_fn(logits):
del logits # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: labels\. '
r'Given arguments: \(\'logits\',\)'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
def test_loss_fn_arg_logits_missing(self):
def _loss_fn(labels):
      del labels  # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: logits\. '
r'Given arguments: \(\'labels\',\)'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
def test_loss_fn_arg_features_ok(self):
def _loss_fn(labels, logits, features):
del labels, logits, features # Unused
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
def test_loss_fn_arg_invalid(self):
def _loss_fn(labels, logits, name=None):
del labels, logits, name # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn has unexpected args: \[\'name\'\]'):
head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
def test_invalid_logits_shape(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
self.assertEqual(1, head.logits_dimension)
# Logits should be shape (batch_size, 1).
logits_2x2 = np.array(((45., 44.), (41., 42.),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'logits shape'):
head.create_estimator_spec(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_2x2)
# Dynamic shape.
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
spec = head.create_estimator_spec(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_placeholder)
with self.test_session():
with self.assertRaisesRegexp(errors.OpError, 'logits shape'):
spec.predictions[prediction_keys.PredictionKeys.PROBABILITIES].eval({
logits_placeholder: logits_2x2
})
def test_invalid_labels_shape(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
self.assertEqual(1, head.logits_dimension)
# Labels and logits should be shape (batch_size, 1).
labels_2x2 = np.array(((45., 44.), (41., 42.),))
logits_2x1 = np.array(((45.,), (41.,),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Mismatched label shape'):
head.create_loss(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.EVAL,
logits=logits_2x1,
labels=labels_2x2)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.float32)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
training_loss = head.create_loss(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 1\] \[labels_shape: \] \[2 2\]'):
training_loss.eval({
logits_placeholder: logits_2x1,
labels_placeholder: labels_2x2
})
def test_incompatible_labels_shape(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
self.assertEqual(1, head.logits_dimension)
# Both logits and labels should be shape (batch_size, 1).
values_2x1 = np.array(((0.,), (1.,),))
values_3x1 = np.array(((0.,), (1.,), (0.,),))
# Static shape.
with self.assertRaisesRegexp(
ValueError, 'logits and labels must have the same shape'):
head.create_loss(
features={'x': values_2x1},
mode=model_fn.ModeKeys.EVAL,
logits=values_2x1,
labels=values_3x1)
with self.assertRaisesRegexp(
ValueError, 'logits and labels must have the same shape'):
head.create_loss(
features={'x': values_2x1},
mode=model_fn.ModeKeys.EVAL,
logits=values_3x1,
labels=values_2x1)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.float32)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
training_loss = head.create_loss(
features={'x': values_2x1},
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[3 1\] \[labels_shape: \] \[2 1\]'):
training_loss.eval({
labels_placeholder: values_2x1,
logits_placeholder: values_3x1
})
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 1\] \[labels_shape: \] \[3 1\]'):
training_loss.eval({
labels_placeholder: values_3x1,
logits_placeholder: values_2x1
})
def test_name(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
name='foo')
self.assertEqual('foo', head.name)
def test_predict(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = [[0.3], [-0.4]]
expected_logistics = [[0.574443], [0.401312]]
expected_probabilities = [[0.425557, 0.574443], [0.598688, 0.401312]]
expected_class_ids = [[1], [0]]
expected_classes = [[b'1'], [b'0']]
expected_export_classes = [[b'0', b'1']] * 2
spec = head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
# Assert spec contains expected tensors.
self.assertIsNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNone(spec.train_op)
self.assertItemsEqual(('classification', 'regression', 'predict',
_DEFAULT_SERVING_KEY), spec.export_outputs.keys())
_assert_no_hooks(self, spec)
# Assert predictions.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
predictions = sess.run(spec.predictions)
self.assertAllClose(logits,
predictions[prediction_keys.PredictionKeys.LOGITS])
self.assertAllClose(expected_logistics,
predictions[prediction_keys.PredictionKeys.LOGISTIC])
self.assertAllClose(
expected_probabilities,
predictions[prediction_keys.PredictionKeys.PROBABILITIES])
self.assertAllClose(expected_class_ids,
predictions[prediction_keys.PredictionKeys.CLASS_IDS])
self.assertAllEqual(expected_classes,
predictions[prediction_keys.PredictionKeys.CLASSES])
self.assertAllClose(
expected_probabilities,
sess.run(spec.export_outputs[_DEFAULT_SERVING_KEY].scores))
self.assertAllEqual(
expected_export_classes,
sess.run(spec.export_outputs[_DEFAULT_SERVING_KEY].classes))
self.assertAllClose(expected_logistics,
sess.run(spec.export_outputs['regression'].value))
def test_predict_with_vocabulary_list(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
label_vocabulary=['aang', 'iroh'])
logits = [[1.], [0.]]
expected_classes = [[b'iroh'], [b'aang']]
spec = head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertAllEqual(
expected_classes,
sess.run(spec.predictions[prediction_keys.PredictionKeys.CLASSES]))
def test_eval_create_loss(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
# loss = cross_entropy(labels, logits) = [0, 41].
expected_training_loss = 41.
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
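
# Sketch (not part of the original test suite): a plain-NumPy cross-entropy,
# used nowhere above, included only to show how the expected losses in the
# comments are derived. It assumes the module-level numpy import (np) and uses
# the numerically stable form of
# -labels*log(sigmoid(logits)) - (1 - labels)*log(sigmoid(-logits)).
def _np_sigmoid_xent(labels, logits):
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(labels, dtype=np.float64)
    # max(x, 0) - x*z + log(1 + exp(-|x|)) avoids log(0) for extreme logits.
    return np.maximum(x, 0.) - x * z + np.log1p(np.exp(-np.abs(x)))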
def test_eval_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=np.array(((45,), (-41,),), dtype=np.float32),
labels=None)
def test_eval(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
# loss = sum(cross_entropy(labels, logits)) = sum(0, 41) = 41
# loss_mean = loss/2 = 41./2 = 20.5
keys.LOSS_MEAN: 20.5,
keys.ACCURACY: 1./2,
keys.PREDICTION_MEAN: 1./2,
keys.LABEL_MEAN: 2./2,
keys.ACCURACY_BASELINE: 2./2,
keys.AUC: 0.,
keys.AUC_PR: 1.,
}
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(41., loss)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops})
def test_eval_metric_ops_with_head_name(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
name='some_binary_head')
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
expected_metric_keys = [
'{}/some_binary_head'.format(metric_keys.MetricKeys.LOSS_MEAN),
'{}/some_binary_head'.format(metric_keys.MetricKeys.ACCURACY),
'{}/some_binary_head'.format(metric_keys.MetricKeys.PREDICTION_MEAN),
'{}/some_binary_head'.format(metric_keys.MetricKeys.LABEL_MEAN),
'{}/some_binary_head'.format(metric_keys.MetricKeys.ACCURACY_BASELINE),
'{}/some_binary_head'.format(metric_keys.MetricKeys.AUC),
'{}/some_binary_head'.format(metric_keys.MetricKeys.AUC_PR)
]
self.assertItemsEqual(expected_metric_keys, spec.eval_metric_ops.keys())
def test_eval_with_regularization_losses(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(0, 41) / 2 = 20.5
expected_unregularized_loss = 20.5
expected_regularized_loss = (
expected_unregularized_loss + expected_regularization_loss)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels,
regularization_losses=regularization_losses)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_unregularized_loss,
keys.LOSS_REGULARIZATION: expected_regularization_loss,
keys.ACCURACY: 1./2,
keys.PREDICTION_MEAN: 1./2,
keys.LABEL_MEAN: 2./2,
keys.ACCURACY_BASELINE: 2./2,
keys.AUC: 0.,
keys.AUC_PR: 1.,
}
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_regularized_loss, loss)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops})
def test_eval_with_vocabulary_list_create_loss(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
label_vocabulary=['aang', 'iroh'])
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(41., training_loss.eval())
def test_eval_with_vocabulary_list(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
label_vocabulary=['aang', 'iroh'])
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = [[b'iroh'], [b'iroh']]
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
sess.run(update_ops)
self.assertAllClose(1. / 2,
value_ops[metric_keys.MetricKeys.ACCURACY].eval())
def test_eval_with_thresholds_create_loss(self):
thresholds = [0.25, 0.5, 0.75]
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
thresholds=thresholds)
logits = np.array(((-1,), (1,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
# probabilities[i] = 1/(1 + exp(-logits[i])) =>
# probabilities = [1/(1 + exp(1)), 1/(1 + exp(-1))] = [0.269, 0.731]
    # loss = -ln(probabilities[i]) (both labels are 1) = [-ln(0.269), -ln(0.731)]
# = [1.31304389, 0.31334182]
# weighted sum loss = 1.62638571
expected_training_loss = 1.62638571
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_eval_with_thresholds(self):
thresholds = [0.25, 0.5, 0.75]
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
thresholds=thresholds)
logits = np.array(((-1,), (1,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
# probabilities[i] = 1/(1 + exp(-logits[i])) =>
# probabilities = [1/(1 + exp(1)), 1/(1 + exp(-1))] = [0.269, 0.731]
    # loss = -sum(ln(probabilities[i])) (both labels are 1) = -ln(0.269) -ln(0.731)
# = 1.62652338
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: 1.62652338 / 2.,
keys.ACCURACY: 1./2,
keys.PREDICTION_MEAN: 1./2,
keys.LABEL_MEAN: 2./2,
keys.ACCURACY_BASELINE: 2./2,
keys.AUC: 0.,
keys.AUC_PR: 1.,
keys.ACCURACY_AT_THRESHOLD % thresholds[0]: 1.,
keys.PRECISION_AT_THRESHOLD % thresholds[0]: 1.,
keys.RECALL_AT_THRESHOLD % thresholds[0]: 1.,
keys.ACCURACY_AT_THRESHOLD % thresholds[1]: .5,
keys.PRECISION_AT_THRESHOLD % thresholds[1]: 1.,
keys.RECALL_AT_THRESHOLD % thresholds[1]: .5,
keys.ACCURACY_AT_THRESHOLD % thresholds[2]: 0.,
keys.PRECISION_AT_THRESHOLD % thresholds[2]: 0.,
keys.RECALL_AT_THRESHOLD % thresholds[2]: 0.,
}
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(1.62652338, loss)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
      self.assertAllClose(
          expected_metrics, {k: value_ops[k].eval() for k in value_ops},
          rtol=tol, atol=tol)
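
# Worked example (not part of the original suite) of where the threshold
# metrics in test_eval_with_thresholds come from: probabilities are
# [0.269, 0.731], both labels are 1, and a prediction counts as positive iff
# probability > threshold:
#   threshold 0.25 -> predictions [1, 1]: accuracy 1., precision 1., recall 1.
#   threshold 0.5  -> predictions [0, 1]: accuracy .5, precision 1., recall .5
#   threshold 0.75 -> predictions [0, 0]: accuracy 0., precision 0., recall 0.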
def test_train_create_loss(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.float64)
features = {'x': np.array(((42,),), dtype=np.float32)}
# unreduced_loss = cross_entropy(labels, logits) = [0, 41]
expected_unreduced_loss = [[0.], [41.]]
# weights default to 1.
expected_weights = 1.
# training loss = 1 * 0 + 1 * 41
expected_training_loss = 41.
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights)
def test_train_create_loss_loss_reduction(self):
"""Tests create_loss with loss_reduction."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_reduction=losses.Reduction.SUM_BY_NONZERO_WEIGHTS)
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.float64)
features = {'x': np.array(((42,),), dtype=np.float32)}
# unreduced_loss = cross_entropy(labels, logits) = [0, 41]
expected_unreduced_loss = [[0.], [41.]]
# weights default to 1.
expected_weights = 1.
# training loss = (1 * 0 + 1 * 41) / num_nonzero_weights
expected_training_loss = 41. / 2.
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights)
def test_eval_create_loss_loss_fn(self):
"""Tests head.create_loss for eval mode and custom loss_fn."""
loss = np.array([[1.], [2.]], dtype=np.float32)
logits_input = np.array([[-10.], [10.]], dtype=np.float32)
labels_input = np.array([[1], [0]], dtype=np.int64)
def _loss_fn(labels, logits):
check_labels = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(labels, labels_input)),
data=[labels])
check_logits = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(logits, logits_input)),
data=[logits])
with ops.control_dependencies([check_labels, check_logits]):
return constant_op.constant(loss)
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits_input,
labels=labels_input)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(np.sum(loss), actual_training_loss.eval())
def test_eval_create_loss_loss_fn_wrong_shape(self):
"""Tests custom loss_fn that returns Tensor of unexpected shape."""
loss = np.array([1., 2.], dtype=np.float32)
def _loss_fn(labels, logits):
del labels, logits # Unused
return constant_op.constant(loss)
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_fn=_loss_fn)
logits = np.array([[-10.], [10.]], dtype=np.float32)
labels = np.array([[1], [0]], dtype=np.int64)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[loss_fn must return Tensor of shape \[D0, D1, ... DN, 1\]\. \] '
r'\[logits_shape: \] \[2 1\] \[loss_shape: \] \[2\]'):
actual_training_loss.eval()
def test_train_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.TRAIN,
logits=np.array(((45,), (-41,),), dtype=np.float32),
labels=None,
train_op_fn=_no_op_train_fn)
def test_train(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.float64)
expected_train_result = b'my_train_op'
features = {'x': np.array(((42,),), dtype=np.float32)}
# loss = sum(cross_entropy(labels, logits)) = sum(0, 41) = 41
expected_loss = 41.
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/2 = 41/2 = 20.5
metric_keys.MetricKeys.LOSS_MEAN: 20.5,
}, summary_str)
def test_train_summaries_with_head_name(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
name='some_binary_head')
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.float64)
features = {'x': np.array(((42,),), dtype=np.float32)}
# loss = sum(cross_entropy(labels, logits)) = sum(0, 41) = 41
expected_loss = 41.
def _train_op_fn(loss):
del loss
return control_flow_ops.no_op()
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
summary_str = sess.run(spec.scaffold.summary_op)
_assert_simple_summaries(
self,
{
'{}/some_binary_head'.format(metric_keys.MetricKeys.LOSS):
expected_loss,
# loss_mean = loss/2 = 41/2 = 20.5
'{}/some_binary_head'.format(metric_keys.MetricKeys.LOSS_MEAN):
20.5,
},
summary_str)
def test_train_with_regularization_losses(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.float64)
expected_train_result = b'my_train_op'
features = {'x': np.array(((42,),), dtype=np.float32)}
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(0, 41) / 2 = 20.5
    # loss = unregularized_loss + regularization_loss = 20.5 + 2. = 22.5
expected_loss = 22.5
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn,
regularization_losses=regularization_losses)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
metric_keys.MetricKeys.LOSS_REGULARIZATION: (
expected_regularization_loss),
}, summary_str)
def test_float_labels_train_create_loss(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array([[0.5], [-0.3]], dtype=np.float32)
labels = np.array([[0.8], [0.4]], dtype=np.float32)
features = {'x': np.array([[42]], dtype=np.float32)}
# loss = cross_entropy(labels, logits)
    # = -label[i]*log(sigmoid(logit[i])) -(1-label[i])*log(sigmoid(-logit[i]))
# = [-0.8 * log(sigmoid(0.5)) -0.2 * log(sigmoid(-0.5)),
# -0.4 * log(sigmoid(-0.3)) -0.6 * log(sigmoid(0.3))]
# = [0.57407698418, 0.67435524446]
# weighted sum loss = 0.57407698418 + 0.67435524446
expected_training_loss = 1.24843222864
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_float_labels_train(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array([[0.5], [-0.3]], dtype=np.float32)
labels = np.array([[0.8], [0.4]], dtype=np.float32)
expected_train_result = b'my_train_op'
features = {'x': np.array([[42]], dtype=np.float32)}
# loss = sum(cross_entropy(labels, logits))
    # = sum(-label[i]*log(sigmoid(logit[i])) -(1-label[i])*log(sigmoid(-logit[i])))
# = -0.8 * log(sigmoid(0.5)) -0.2 * log(sigmoid(-0.5))
# -0.4 * log(sigmoid(-0.3)) -0.6 * log(sigmoid(0.3))
# = 1.2484322
expected_loss = 1.2484322
def _train_op_fn(loss):
with ops.control_dependencies((dnn_testing_utils.assert_close(
math_ops.to_float(expected_loss), math_ops.to_float(loss)),)):
return constant_op.constant(expected_train_result)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
loss, train_result = sess.run((spec.loss, spec.train_op))
self.assertAlmostEqual(expected_loss, loss, delta=1.e-5)
self.assertEqual(expected_train_result, train_result)
def test_float_labels_eval_create_loss(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array([[0.5], [-0.3]], dtype=np.float32)
labels = np.array([[0.8], [0.4]], dtype=np.float32)
features = {'x': np.array([[42]], dtype=np.float32)}
# loss = cross_entropy(labels, logits)
    # = -label[i]*log(sigmoid(logit[i])) -(1-label[i])*log(sigmoid(-logit[i]))
# = [-0.8 * log(sigmoid(0.5)) -0.2 * log(sigmoid(-0.5)),
# -0.4 * log(sigmoid(-0.3)) -0.6 * log(sigmoid(0.3))]
# = [0.57407698418, 0.67435524446]
# weighted sum loss = 0.57407698418 + 0.67435524446
expected_training_loss = 1.24843222864
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(), rtol=1e-2, atol=1e-2)
def test_float_labels_eval(self):
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss()
logits = np.array([[0.5], [-0.3]], dtype=np.float32)
labels = np.array([[0.8], [0.4]], dtype=np.float32)
features = {'x': np.array([[42]], dtype=np.float32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
# loss = sum(cross_entropy(labels, logits))
    # = sum(-label[i]*log(sigmoid(logit[i])) -(1-label[i])*log(sigmoid(-logit[i])))
# = -0.8 * log(sigmoid(0.5)) -0.2 * log(sigmoid(-0.5))
# -0.4 * log(sigmoid(-0.3)) -0.6 * log(sigmoid(0.3))
# = 1.2484322
expected_loss = 1.2484322
# Assert loss.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAlmostEqual(expected_loss, loss, delta=1.e-5)
self.assertAlmostEqual(
expected_loss / 2., metrics[metric_keys.MetricKeys.LOSS_MEAN])
def test_weighted_multi_example_predict(self):
"""3 examples, 1 batch."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='label_weights')
# Create estimator spec.
logits = np.array(((45,), (-41,), (44,)), dtype=np.int32)
spec = head.create_estimator_spec(
features={
'x': np.array(((42,), (43,), (44,)), dtype=np.int32),
'label_weights': np.array(((1.,), (.1,), (1.5,)), dtype=np.float32),
},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
predictions = sess.run(spec.predictions)
self.assertAllClose(
logits.astype(np.float32),
predictions[prediction_keys.PredictionKeys.LOGITS])
self.assertAllClose(
_sigmoid(logits).astype(np.float32),
predictions[prediction_keys.PredictionKeys.LOGISTIC])
      self.assertAllClose(
          [[0., 1.], [1., 0.], [0., 1.]],
          predictions[prediction_keys.PredictionKeys.PROBABILITIES])
self.assertAllClose([[1], [0], [1]],
predictions[prediction_keys.PredictionKeys.CLASS_IDS])
self.assertAllEqual([[b'1'], [b'0'], [b'1']],
predictions[prediction_keys.PredictionKeys.CLASSES])
def test_weighted_multi_example_eval(self):
"""3 examples, 1 batch."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='label_weights')
# Create estimator spec.
logits = np.array(((45,), (-41,), (44,)), dtype=np.int32)
spec = head.create_estimator_spec(
features={
'x': np.array(((42,), (43,), (44,)), dtype=np.int32),
'label_weights': np.array(((1.,), (.1,), (1.5,)), dtype=np.float32),
},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=np.array(((1,), (1,), (0,)), dtype=np.int32))
# label_mean = (1*1 + .1*1 + 1.5*0)/(1 + .1 + 1.5) = 1.1/2.6
# = .42307692307
expected_label_mean = .42307692307
keys = metric_keys.MetricKeys
expected_metrics = {
# losses = label_weights*cross_entropy(labels, logits)
        # = (1*0, .1*41, 1.5*44) = (0, 4.1, 66)
        # loss = sum(losses) = 0 + 4.1 + 66 = 70.1
# loss_mean = loss/sum(label_weights) = 70.1/(1 + .1 + 1.5)
# = 70.1/2.6 = 26.9615384615
keys.LOSS_MEAN: 26.9615384615,
# accuracy = (1*1 + .1*0 + 1.5*0)/(1 + .1 + 1.5) = 1/2.6 = .38461538461
keys.ACCURACY: .38461538461,
# prediction_mean = (1*1 + .1*0 + 1.5*1)/(1 + .1 + 1.5) = 2.5/2.6
# = .96153846153
keys.PREDICTION_MEAN: .96153846153,
keys.LABEL_MEAN: expected_label_mean,
keys.ACCURACY_BASELINE: 1 - expected_label_mean,
keys.AUC: .45454565,
keys.AUC_PR: .6737757325172424,
}
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(70.1, loss)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops})
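
# Sketch (not part of the original test suite): the weighted-loss arithmetic
# behind the expected values above. LOSS sums weights*cross_entropy and
# LOSS_MEAN divides by the weight sum; assumes the module-level numpy import
# (np). For labels (1, 1, 0), logits (45, -41, 44), weights (1, .1, 1.5) this
# reproduces loss 70.1 and loss_mean 70.1/2.6.
def _np_weighted_loss_and_mean(labels, logits, weights):
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(labels, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    # Numerically stable per-example sigmoid cross-entropy, scaled by weights.
    losses = w * (np.maximum(x, 0.) - x * z + np.log1p(np.exp(-np.abs(x))))
    return np.sum(losses), np.sum(losses) / np.sum(w)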
def test_train_one_dim_create_loss(self):
"""Tests create_loss with 1D labels and weights (shape [batch_size])."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='label_weights')
# Create estimator spec.
logits = np.array(((45,), (-41,), (44,)), dtype=np.float32)
labels_rank_1 = np.array((1., 1., 0.,))
weights_rank_1 = np.array(((1., .1, 1.5,)), dtype=np.float64)
features = {
'x': np.array(((42.,), (43.,), (44.,)), dtype=np.float32),
'label_weights': weights_rank_1,
}
# unreduced_loss = cross_entropy(labels, logits) = [0, 41, 44]
expected_unreduced_loss = [[0.], [41.], [44.]]
# weights are reshaped to [3, 1] to match logits.
expected_weights = [[1.], [.1], [1.5]]
# training loss = 1 * 0 + .1 * 41 + 1.5 * 44
expected_training_loss = 70.1
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(),
rtol=1e-2, atol=1e-2)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(),
rtol=1e-2, atol=1e-2)
self.assertAllClose(expected_weights, actual_weights.eval())
def test_train_one_dim(self):
"""Tests train with 1D labels and weights (shape [batch_size])."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='label_weights')
# Create estimator spec.
logits = np.array(((45,), (-41,), (44,)), dtype=np.float32)
labels_rank_1 = np.array((1., 1., 0.,))
weights_rank_1 = np.array(((1., .1, 1.5,)), dtype=np.float64)
self.assertEqual((3,), labels_rank_1.shape)
self.assertEqual((3,), weights_rank_1.shape)
features = {
'x': np.array(((42.,), (43.,), (44.,)), dtype=np.float32),
'label_weights': weights_rank_1,
}
expected_train_result = b'my_train_op'
# losses = label_weights*cross_entropy(labels, logits)
    # = (1*0, .1*41, 1.5*44) = (0, 4.1, 66)
    # loss = sum(losses) = 0 + 4.1 + 66 = 70.1
expected_loss = 70.1
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1,
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertIsNotNone(spec.train_op)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((
spec.loss, spec.train_op, spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/sum(label_weights) = 70.1/(1 + .1 + 1.5)
# = 70.1/2.6 = 26.9615384615
metric_keys.MetricKeys.LOSS_MEAN: 26.9615384615,
}, summary_str)
def test_weighted_multi_example_train(self):
"""3 examples, 1 batch."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='label_weights')
# Create estimator spec.
logits = np.array(((45,), (-41,), (44,)), dtype=np.float32)
expected_train_result = b'my_train_op'
# losses = label_weights*cross_entropy(labels, logits)
    # = (1*0, .1*41, 1.5*44) = (0, 4.1, 66)
    # loss = sum(losses) = 0 + 4.1 + 66 = 70.1
expected_loss = 70.1
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features={
'x': np.array(((42.,), (43.,), (44.,)), dtype=np.float32),
'label_weights': np.array(((1.,), (.1,), (1.5,)), dtype=np.float64),
},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=np.array(((1.,), (1.,), (0.,))),
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
self.assertIsNotNone(spec.loss)
self.assertIsNotNone(spec.train_op)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
loss, train_result, summary_str = sess.run((
spec.loss, spec.train_op, spec.scaffold.summary_op))
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/sum(label_weights) = 70.1/(1 + .1 + 1.5)
# = 70.1/2.6 = 26.9615384615
metric_keys.MetricKeys.LOSS_MEAN: 26.9615384615,
}, summary_str)
def test_multi_dim_weighted_train_create_loss(self):
"""Logits and labels of shape [2, 2, 1], weights [2, 2]."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='weights')
logits = np.array([[[10], [-10]], [[12], [-12]]], dtype=np.float32)
labels = np.array([[[0], [0]], [[1], [1]]], dtype=np.float64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
# unreduced_loss = cross_entropy(labels, logits) = [[10, 0], [0, 12]].
expected_unreduced_loss = [[[10.], [0.]], [[0.], [12.]]]
# Weights are reshaped to [2, 2, 1] to match logits.
expected_weights = [[[1.], [1.5]], [[2.], [2.5]]]
# training_loss = 1*10 + 1.5*0 + 2*0 + 2.5*12 = 40
expected_training_loss = 40.
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
tol = 1e-2
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(
expected_training_loss, training_loss.eval(),
rtol=tol, atol=tol)
self.assertAllClose(
expected_unreduced_loss, unreduced_loss.eval(),
rtol=tol, atol=tol)
self.assertAllClose(expected_weights, actual_weights.eval())
def test_multi_dim_weighted_train(self):
"""Logits and labels of shape [2, 2, 1], weights [2, 2]."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='weights')
logits = np.array([[[10], [-10]], [[12], [-12]]], dtype=np.float32)
labels = np.array([[[0], [0]], [[1], [1]]], dtype=np.float64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
# loss = cross_entropy(labels, logits) = [[10, 0], [0, 12]].
# weighted_sum_loss = 1*10 + 1.5*0 + 2*0 + 2.5*12 = 40
expected_loss = 40.
expected_train_result = 'my_train_op'

    def _train_op_fn(loss):
return string_ops.string_join(
[constant_op.constant(expected_train_result),
string_ops.as_string(loss, precision=2)])
# Create estimator spec.
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert predictions, loss, train_op, and summaries.
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
loss, train_result = sess.run((spec.loss, spec.train_op))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
self.assertEqual(
six.b('{0:s}{1:.2f}'.format(expected_train_result, expected_loss)),
train_result)

  def test_multi_dim_train_weights_wrong_inner_dim(self):
"""Logits and labels of shape [2, 2, 1], weights [2, 1]."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='weights')
logits = np.array([[[10], [-10]], [[12], [-12]]], dtype=np.float32)
labels = np.array([[[0], [0]], [[1], [1]]], dtype=np.float64)
weights = np.array([[1.], [2.]], dtype=np.float32)

    def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \] \[2 2 1\] \[weights_shape: \] \[2 1\]'):
spec.loss.eval()

  def test_multi_dim_train_weights_wrong_outer_dim(self):
"""Logits and labels of shape [2, 2, 1], weights [2, 2, 2]."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='weights')
logits = np.array([[[10], [-10]], [[12], [-12]]], dtype=np.float32)
labels = np.array([[[0], [0]], [[1], [1]]], dtype=np.float64)
weights_placeholder = array_ops.placeholder(dtype=dtypes.float32)

    def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features={'weights': weights_placeholder},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \]\s\[2 2 1\]\s\[weights_shape: \]\s\[2 2 2\]'):
spec.loss.eval({
weights_placeholder: np.array([[[1., 1.1], [1.5, 1.6]],
[[2., 2.1], [2.5, 2.6]]])})

  def test_multi_dim_weighted_eval(self):
"""Logits and labels of shape [2, 2, 1], weights [2, 2]."""
head = head_lib._binary_logistic_head_with_sigmoid_cross_entropy_loss(
weight_column='weights')
logits = np.array([[[10], [-10]], [[12], [-12]]], dtype=np.float32)
labels = np.array([[[0], [0]], [[1], [1]]], dtype=np.float64)
weights = np.array([[1., 1.5], [2., 2.5]], dtype=np.float32)
# loss = cross_entropy(labels, logits) = [[10, 0], [0, 12]].
# weighted_sum_loss = 1*10 + 1.5*0 + 2*0 + 2.5*12 = 40
expected_loss = 40.
# Create estimator spec.
spec = head.create_estimator_spec(
features={'weights': weights},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_loss / np.sum(weights),
keys.ACCURACY: (1.*0. + 1.5*1. + 2.*1. + 2.5*0.) / np.sum(weights),
keys.PREDICTION_MEAN: (1.*1 + 1.5*0 + 2.*1 + 2.5*0) / np.sum(weights),
keys.LABEL_MEAN: (1.*0 + 1.5*0 + 2.*1 + 2.5*1) / np.sum(weights),
keys.ACCURACY_BASELINE: (1.*0 + 1.5*0 + 2.*1 + 2.5*1) / np.sum(weights),
# We cannot reliably calculate AUC with only 4 data points, but the
# values should not change because of backwards-compatibility.
keys.AUC: 0.5222,
keys.AUC_PR: 0.7341,
}
tol = 1e-2
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
loss, metrics = sess.run((spec.loss, update_ops))
self.assertAllClose(expected_loss, loss, rtol=tol, atol=tol)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics, rtol=tol, atol=tol)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops},
rtol=tol, atol=tol)
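Because the logits are saturated, the thresholded predictions are effectively [1, 0, 1, 0], and the weighted eval metrics asserted above reduce to simple weighted averages. A standalone numpy sketch of that arithmetic (editor's addition, not part of the test suite):

```python
import numpy as np

weights = np.array([1., 1.5, 2., 2.5])
labels = np.array([0., 0., 1., 1.])
preds = np.array([1., 0., 1., 0.])  # sigmoid(+/-10), sigmoid(+/-12), thresholded
accuracy = np.sum(weights * (preds == labels)) / np.sum(weights)   # 3.5/7
prediction_mean = np.sum(weights * preds) / np.sum(weights)        # 3/7
label_mean = np.sum(weights * labels) / np.sum(weights)            # 4.5/7
```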


class RegressionHeadWithMeanSquaredErrorLossTest(test.TestCase):

  def setUp(self):
ops.reset_default_graph()

  def test_invalid_label_dimension(self):
with self.assertRaisesRegexp(ValueError, r'Invalid label_dimension'):
head_lib._regression_head_with_mean_squared_error_loss(label_dimension=-1)
with self.assertRaisesRegexp(ValueError, r'Invalid label_dimension'):
head_lib._regression_head_with_mean_squared_error_loss(label_dimension=0)

  def test_invalid_loss_reduction(self):
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: invalid_loss_reduction'):
head_lib._regression_head_with_mean_squared_error_loss(
loss_reduction='invalid_loss_reduction')
with self.assertRaisesRegexp(
ValueError, r'Invalid loss_reduction: none'):
head_lib._regression_head_with_mean_squared_error_loss(
loss_reduction=losses.Reduction.NONE)

  def test_loss_fn_arg_labels_missing(self):
def _loss_fn(logits):
del logits # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: labels\. '
r'Given arguments: \(\'logits\',\)'):
head_lib._regression_head_with_mean_squared_error_loss(loss_fn=_loss_fn)

  def test_loss_fn_arg_logits_missing(self):
def _loss_fn(labels):
      del labels  # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn must contain argument: logits\. '
r'Given arguments: \(\'labels\',\)'):
head_lib._regression_head_with_mean_squared_error_loss(loss_fn=_loss_fn)

  def test_loss_fn_arg_features_ok(self):
def _loss_fn(labels, logits, features):
del labels, logits, features # Unused
head_lib._regression_head_with_mean_squared_error_loss(loss_fn=_loss_fn)

  def test_loss_fn_arg_invalid(self):
def _loss_fn(labels, logits, name=None):
del labels, logits, name # Unused
with self.assertRaisesRegexp(
ValueError,
r'loss_fn has unexpected args: \[\'name\'\]'):
head_lib._regression_head_with_mean_squared_error_loss(loss_fn=_loss_fn)

  def test_invalid_logits(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
label_dimension=3)
self.assertEqual(3, head.logits_dimension)
logits_1d = np.array(((45.,), (41.,),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'logits shape'):
head.create_estimator_spec(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_1d)
# Dynamic shape.
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
spec = head.create_estimator_spec(
features={'x': np.array(((42.,),))},
mode=model_fn.ModeKeys.PREDICT,
logits=logits_placeholder)
with self.test_session():
with self.assertRaisesRegexp(errors.OpError, 'logits shape'):
spec.predictions[prediction_keys.PredictionKeys.PREDICTIONS].eval({
logits_placeholder: logits_1d
})

  def test_incompatible_labels_eval(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
label_dimension=3)
self.assertEqual(3, head.logits_dimension)
values_3d = np.array(((45., 46., 47.), (41., 42., 43.),))
values_1d = np.array(((43.,), (44.,),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Mismatched label shape'):
head.create_loss(
features={'x': values_1d},
mode=model_fn.ModeKeys.EVAL,
logits=values_3d,
labels=values_1d)
with self.assertRaisesRegexp(ValueError, 'logits shape'):
head.create_estimator_spec(
features={'x': values_3d}, labels=values_3d,
mode=model_fn.ModeKeys.EVAL, logits=values_1d, train_op_fn=None)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.float32)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
spec = head.create_estimator_spec(
features={'x': values_1d},
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)
with self.test_session():
with self.assertRaisesRegexp(errors.OpError, 'logits shape'):
spec.loss.eval({
labels_placeholder: values_3d,
logits_placeholder: values_1d
})
training_loss = head.create_loss(
features={'x': values_1d},
mode=model_fn.ModeKeys.EVAL,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 3\] \[labels_shape: \] \[2 1\]'):
training_loss.eval({
labels_placeholder: values_1d,
logits_placeholder: values_3d
})

  def test_incompatible_labels_train(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
label_dimension=3)
self.assertEqual(3, head.logits_dimension)
values_3d = np.array(((45., 46., 47.), (41., 42., 43.),))
values_1d = np.array(((43.,), (44.,),))
# Static shape.
with self.assertRaisesRegexp(ValueError, 'Mismatched label shape'):
head.create_loss(
features={'x': values_1d},
mode=model_fn.ModeKeys.TRAIN,
logits=values_3d,
labels=values_1d)
with self.assertRaisesRegexp(ValueError, 'logits shape'):
head.create_estimator_spec(
features={'x': values_3d},
mode=model_fn.ModeKeys.TRAIN,
logits=values_1d,
labels=values_3d,
train_op_fn=lambda x: x)
# Dynamic shape.
labels_placeholder = array_ops.placeholder(dtype=dtypes.float32)
logits_placeholder = array_ops.placeholder(dtype=dtypes.float32)
spec = head.create_estimator_spec(
features={'x': values_1d},
mode=model_fn.ModeKeys.TRAIN,
logits=logits_placeholder,
labels=labels_placeholder,
train_op_fn=lambda x: x)
with self.test_session():
with self.assertRaisesRegexp(errors.OpError, 'logits shape'):
spec.loss.eval({
labels_placeholder: values_3d,
logits_placeholder: values_1d
})
training_loss = head.create_loss(
features={'x': values_1d},
mode=model_fn.ModeKeys.TRAIN,
logits=logits_placeholder,
labels=labels_placeholder)[0]
with self.test_session():
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[expected_labels_shape: \] \[2 3\] \[labels_shape: \] \[2 1\]'):
training_loss.eval({
labels_placeholder: values_1d,
logits_placeholder: values_3d
})

  def test_name(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
name='foo')
self.assertEqual('foo', head.name)

  def test_predict(self):
head = head_lib._regression_head_with_mean_squared_error_loss()
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,),), dtype=np.int32)
spec = head.create_estimator_spec(
features={'x': np.array(((42.,),), dtype=np.int32)},
mode=model_fn.ModeKeys.PREDICT,
logits=logits)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertIsNone(spec.loss)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNone(spec.train_op)
self.assertItemsEqual(
(signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,
'predict',
'regression'),
spec.export_outputs.keys())
_assert_no_hooks(self, spec)
# Assert predictions.
with self.test_session():
_initialize_variables(self, spec.scaffold)
self.assertAllClose(logits, spec.predictions[prediction_key].eval())

  def test_eval_create_loss(self):
head = head_lib._regression_head_with_mean_squared_error_loss()
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
      # loss = [(43-45)^2, (44-41)^2] = [4, 9]
self.assertAllClose(13., training_loss.eval())
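The expected value follows directly from the unreduced squared errors; a quick standalone check (editor's sketch, not part of the test suite):

```python
import numpy as np

logits = np.array([[45.], [41.]])
labels = np.array([[43.], [44.]])
per_example = np.square(labels - logits)    # [[4.], [9.]]
training_loss = float(np.sum(per_example))  # 4 + 9 = 13
```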

  def test_eval_create_loss_loss_fn(self):
"""Tests head.create_loss for eval mode and custom loss_fn."""
loss = np.array([[0., 1.], [2., 3.]], dtype=np.float32)
logits_input = np.array([[-1., 1.], [-2., 2.]], dtype=np.float32)
labels_input = np.array([[1., 0.], [2., -1.]], dtype=np.float32)

    def _loss_fn(labels, logits):
check_labels = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(labels, labels_input)),
data=[labels])
check_logits = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(logits, logits_input)),
data=[logits])
with ops.control_dependencies([check_labels, check_logits]):
return constant_op.constant(loss)
head = head_lib._regression_head_with_mean_squared_error_loss(
label_dimension=2, loss_fn=_loss_fn)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits_input,
labels=labels_input)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(np.sum(loss), actual_training_loss.eval())

  def test_eval_create_loss_loss_fn_wrong_shape(self):
"""Tests custom loss_fn that returns Tensor of unexpected shape."""
loss = np.array([[1.], [2.]], dtype=np.float32)

    def _loss_fn(labels, logits):
del labels, logits # Unused
return constant_op.constant(loss)
head = head_lib._regression_head_with_mean_squared_error_loss(
label_dimension=2, loss_fn=_loss_fn)
logits = np.array([[-1., 1.], [-2., 2.]], dtype=np.float32)
labels = np.array([[1., 0.], [2., -1.]], dtype=np.float32)
actual_training_loss = head.create_loss(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[loss_fn must return Tensor of shape \[D0, D1, ... DN, 2\]\. \] '
r'\[logits_shape: \] \[2 2\] \[loss_shape: \] \[2 1\]'):
actual_training_loss.eval()

  def test_eval_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._regression_head_with_mean_squared_error_loss()
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.EVAL,
logits=np.array(((45,), (41,),), dtype=np.float32),
labels=None)

  def test_eval(self):
head = head_lib._regression_head_with_mean_squared_error_loss()
self.assertEqual(1, head.logits_dimension)
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertItemsEqual(
(metric_keys.MetricKeys.LOSS_MEAN,), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
loss_mean_value_op, loss_mean_update_op = spec.eval_metric_ops[
metric_keys.MetricKeys.LOSS_MEAN]
predictions, loss, loss_mean = sess.run((
spec.predictions[prediction_key], spec.loss, loss_mean_update_op))
self.assertAllClose(logits, predictions)
# loss = (43-45)^2 + (44-41)^2 = 4+9 = 13
self.assertAllClose(13., loss)
# loss_mean = loss/2 = 13/2 = 6.5
expected_loss_mean = 6.5
# Check results of both update (in `loss_mean`) and value ops.
self.assertAllClose(expected_loss_mean, loss_mean)
self.assertAllClose(expected_loss_mean, loss_mean_value_op.eval())

  def test_eval_metric_ops_with_head_name_for_regression(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
name='some_regression_head')
logits = np.array(((1,), (9,)), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
expected_metric_keys = [
'{}/some_regression_head'.format(metric_keys.MetricKeys.LOSS_MEAN),
]
self.assertItemsEqual(expected_metric_keys, spec.eval_metric_ops.keys())

  def test_eval_with_regularization_losses(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
self.assertEqual(1, head.logits_dimension)
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = ((43-45)^2 + (44-41)^2) / batch_size
# = (4 + 9) / 2 = 6.5
expected_unregularized_loss = 6.5
expected_regularized_loss = (
expected_unregularized_loss + expected_regularization_loss)
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels,
regularization_losses=regularization_losses)
keys = metric_keys.MetricKeys
expected_metrics = {
keys.LOSS_MEAN: expected_unregularized_loss,
keys.LOSS_REGULARIZATION: expected_regularization_loss,
}
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
value_ops = {k: spec.eval_metric_ops[k][0] for k in spec.eval_metric_ops}
update_ops = {k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
predictions, loss, metrics = sess.run((
spec.predictions[prediction_key], spec.loss, update_ops))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_regularized_loss, loss)
# Check results of both update (in `metrics`) and value ops.
self.assertAllClose(expected_metrics, metrics)
self.assertAllClose(
expected_metrics, {k: value_ops[k].eval() for k in value_ops})

  def test_train_create_loss(self):
head = head_lib._regression_head_with_mean_squared_error_loss()
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
    # unreduced_loss = [(43-45)^2, (44-41)^2] = [4, 9]
expected_unreduced_loss = [[4.], [9.]]
# weights default to 1.
expected_weights = 1
# training_loss = 1 * 4 + 1 * 9 = 13
expected_training_loss = 13.
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights)

  def test_train_create_loss_loss_reduction(self):
"""Tests create_loss with loss_reduction."""
head = head_lib._regression_head_with_mean_squared_error_loss(
loss_reduction=losses.Reduction.SUM_BY_NONZERO_WEIGHTS)
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43,), (44,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
    # unreduced_loss = [(43-45)^2, (44-41)^2] = [4, 9]
expected_unreduced_loss = [[4.], [9.]]
# weights default to 1.
expected_weights = 1
# training_loss = (1 * 4 + 1 * 9) / num_nonzero_weights
expected_training_loss = 13. / 2.
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights)

  def test_train_labels_none(self):
"""Tests that error is raised when labels is None."""
head = head_lib._regression_head_with_mean_squared_error_loss()

    def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
with self.assertRaisesRegexp(
ValueError, r'You must provide a labels Tensor\. Given: None\.'):
head.create_estimator_spec(
features={'x': np.array(((42,),), dtype=np.int32)},
mode=model_fn.ModeKeys.TRAIN,
logits=np.array(((45,), (41,),), dtype=np.float32),
labels=None,
train_op_fn=_no_op_train_fn)

  def test_train(self):
head = head_lib._regression_head_with_mean_squared_error_loss()
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43.,), (44.,),), dtype=np.float64)
expected_train_result = b'my_train_op'
features = {'x': np.array(((42.,),), dtype=np.float32)}
# loss = (43-45)^2 + (44-41)^2 = 4 + 9 = 13
expected_loss = 13

    def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
predictions, loss, train_result, summary_str = sess.run((
spec.predictions[prediction_key], spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/2 = 13/2 = 6.5
metric_keys.MetricKeys.LOSS_MEAN: 6.5,
}, summary_str)

  def test_train_summaries_with_head_name(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
name='some_regression_head')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43.,), (44.,),), dtype=np.float64)
features = {'x': np.array(((42.,),), dtype=np.float32)}
# loss = (43-45)^2 + (44-41)^2 = 4 + 9 = 13
expected_loss = 13

    def _train_op_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
summary_str = sess.run(spec.scaffold.summary_op)
_assert_simple_summaries(
self,
{
'{}/some_regression_head'.format(metric_keys.MetricKeys.LOSS):
expected_loss,
# loss_mean = loss/2 = 13/2 = 6.5
'{}/some_regression_head'
.format(metric_keys.MetricKeys.LOSS_MEAN):
6.5,
},
summary_str)

  def test_train_with_regularization_losses(self):
head = head_lib._regression_head_with_mean_squared_error_loss(
loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE)
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,),), dtype=np.float32)
labels = np.array(((43.,), (44.,),), dtype=np.float64)
expected_train_result = b'my_train_op'
features = {'x': np.array(((42.,),), dtype=np.float32)}
regularization_losses = [1.5, 0.5]
expected_regularization_loss = 2.
# unregularized_loss = ((43-45)^2 + (44-41)^2) / batch_size
# = (4 + 9) / 2 = 6.5
# loss = unregularized_loss + regularization_loss = 8.5
expected_loss = 8.5

    def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn,
regularization_losses=regularization_losses)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
predictions, loss, train_result, summary_str = sess.run((
spec.predictions[prediction_key], spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
metric_keys.MetricKeys.LOSS_REGULARIZATION: (
expected_regularization_loss),
}, summary_str)

  def test_weighted_multi_example_eval(self):
"""1d label, 3 examples, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,), (44,)), dtype=np.int32)
spec = head.create_estimator_spec(
features={
'x': np.array(((42,), (43,), (44,)), dtype=np.int32),
'label_weights': np.array(((1.,), (.1,), (1.5,)), dtype=np.float32),
},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=np.array(((35,), (42,), (45,)), dtype=np.int32))
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertItemsEqual(
(metric_keys.MetricKeys.LOSS_MEAN,), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
loss_mean_value_op, loss_mean_update_op = spec.eval_metric_ops[
metric_keys.MetricKeys.LOSS_MEAN]
predictions, loss, loss_mean = sess.run((
spec.predictions[prediction_key], spec.loss, loss_mean_update_op))
self.assertAllClose(logits, predictions)
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
self.assertAllClose(101.6, loss)
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.0769231
expected_loss_mean = 39.0769231
# Check results of both update (in `loss_mean`) and value ops.
self.assertAllClose(expected_loss_mean, loss_mean)
self.assertAllClose(expected_loss_mean, loss_mean_value_op.eval())
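The weighted loss and loss_mean asserted above are just a weighted sum of squared errors and its normalization by the weight total; a standalone numpy sketch (editor's addition, not part of the test suite):

```python
import numpy as np

logits = np.array([[45.], [41.], [44.]])
labels = np.array([[35.], [42.], [45.]])
weights = np.array([[1.], [.1], [1.5]])
loss = float(np.sum(weights * np.square(labels - logits)))  # 100 + .1 + 1.5
loss_mean = loss / float(np.sum(weights))                   # 101.6 / 2.6
```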

  def test_weight_with_numeric_column(self):
"""1d label, 3 examples, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column=feature_column_lib.numeric_column(
'label_weights', normalizer_fn=lambda x: x + 1.))
# Create estimator spec.
logits = np.array(((45,), (41,), (44,)), dtype=np.int32)
spec = head.create_estimator_spec(
features={
'x':
np.array(((42,), (43,), (44,)), dtype=np.int32),
'label_weights':
np.array(((0.,), (-0.9,), (0.5,)), dtype=np.float32),
},
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=np.array(((35,), (42,), (45,)), dtype=np.int32))
# Assert loss.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
loss = sess.run(spec.loss)
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
self.assertAllClose(101.6, loss)

  def test_weighted_multi_example_train(self):
"""1d label, 3 examples, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,), (44,)), dtype=np.float32)
expected_train_result = b'my_train_op'
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
expected_loss = 101.6

    def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features={
'x': np.array(((42,), (43,), (44,)), dtype=np.float32),
'label_weights': np.array(((1.,), (.1,), (1.5,)), dtype=np.float64),
},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=np.array(((35.,), (42.,), (45.,)), dtype=np.float32),
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
predictions, loss, train_result, summary_str = sess.run((
spec.predictions[prediction_key], spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.0769231
metric_keys.MetricKeys.LOSS_MEAN: 39.0769231,
}, summary_str)

  def test_train_one_dim_create_loss(self):
"""Tests create_loss with 1D labels and weights (shape [batch_size])."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
logits = np.array(((45,), (41,), (44,)), dtype=np.float32)
x_feature_rank_1 = np.array((42., 43., 44.,), dtype=np.float32)
weight_rank_1 = np.array((1., .1, 1.5,), dtype=np.float64)
labels_rank_1 = np.array((35., 42., 45.,))
# unreduced_loss = [(35-45)^2, (42-41)^2, (45-44)^2] = [100, 1, 1].
expected_unreduced_loss = [[100.], [1.], [1.]]
# weights are reshaped to [3, 1] to match logits.
expected_weights = [[1.], [.1], [1.5]]
    # training_loss = 1 * 100 + .1 * 1 + 1.5 * 1 = 101.6
expected_training_loss = 101.6
features = {'x': x_feature_rank_1, 'label_weights': weight_rank_1}
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights.eval())

  def test_train_one_dim(self):
"""Tests train with 1D labels and weights (shape [batch_size])."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45,), (41,), (44,)), dtype=np.float32)
expected_train_result = b'my_train_op'
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
expected_loss = 101.6

    def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
x_feature_rank_1 = np.array((42., 43., 44.,), dtype=np.float32)
weight_rank_1 = np.array((1., .1, 1.5,), dtype=np.float64)
labels_rank_1 = np.array((35., 42., 45.,))
features = {'x': x_feature_rank_1, 'label_weights': weight_rank_1}
self.assertEqual((3,), x_feature_rank_1.shape)
self.assertEqual((3,), weight_rank_1.shape)
self.assertEqual((3,), labels_rank_1.shape)
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels_rank_1,
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
predictions, loss, train_result, summary_str = sess.run((
spec.predictions[prediction_key], spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.0769231
metric_keys.MetricKeys.LOSS_MEAN: 39.0769231,
}, summary_str)
def test_weighted_multi_value_eval_create_loss(self):
"""3d label, 1 example, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
logits = np.array(((45., 41., 44.),))
labels = np.array(((35., 42., 45.),))
features = {
'x': np.array(((42., 43., 44.),)),
'label_weights': np.array(((1., .1, 1.5),))
}
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
# loss = [(35-45)^2, (42-41)^2, (45-44)^2] = [100, 1, 1].
# weighted sum loss = 1 * 100 + .1 * 1 + 1.5 * 1 = 101.6
self.assertAllClose(101.6, training_loss.eval())
def test_weighted_multi_value_eval(self):
"""3d label, 1 example, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
self.assertEqual(3, head.logits_dimension)
logits = np.array(((45., 41., 44.),))
labels = np.array(((35., 42., 45.),))
features = {
'x': np.array(((42., 43., 44.),)),
'label_weights': np.array(((1., .1, 1.5),))
}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.EVAL,
logits=logits,
labels=labels)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertItemsEqual(
(metric_keys.MetricKeys.LOSS_MEAN,), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Assert predictions, loss, and metrics.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNone(spec.scaffold.summary_op)
loss_mean_value_op, loss_mean_update_op = spec.eval_metric_ops[
metric_keys.MetricKeys.LOSS_MEAN]
predictions, loss, loss_mean = sess.run((
spec.predictions[prediction_key], spec.loss, loss_mean_update_op))
self.assertAllClose(logits, predictions)
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
self.assertAllClose(101.6, loss)
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.076923
expected_loss_mean = 39.076923
# Check results of both update (in `loss_mean`) and value ops.
self.assertAllClose(expected_loss_mean, loss_mean)
self.assertAllClose(expected_loss_mean, loss_mean_value_op.eval())
def test_weighted_multi_value_train_create_loss(self):
"""3d label, 1 example, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
logits = np.array(((45., 41., 44.),))
labels = np.array(((35., 42., 45.),))
features = {
'x': np.array(((42., 43., 44.),)),
'label_weights': np.array(((1., .1, 1.5),))
}
# Create loss.
training_loss = head.create_loss(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)[0]
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
# loss = [(35-45)^2, (42-41)^2, (45-44)^2] = [100, 1, 1].
# weighted sum loss = 1 * 100 + .1 * 1 + 1.5 * 1 = 101.6
self.assertAllClose(101.6, training_loss.eval())
def test_weighted_multi_value_train(self):
"""3d label, 1 example, 1 batch."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
self.assertEqual(3, head.logits_dimension)
logits = np.array(((45., 41., 44.),))
labels = np.array(((35., 42., 45.),))
expected_train_result = b'my_train_op'
# loss = 1*(35-45)^2 + .1*(42-41)^2 + 1.5*(45-44)^2 = 100+.1+1.5 = 101.6
expected_loss = 101.6
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
features = {
'x': np.array(((42., 43., 44.),)),
'label_weights': np.array(((1., .1, 1.5),)),
}
# Create estimator spec.
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
# Assert spec contains expected tensors.
prediction_key = prediction_keys.PredictionKeys.PREDICTIONS
self.assertItemsEqual((prediction_key,), spec.predictions.keys())
self.assertEqual(dtypes.float32, spec.predictions[prediction_key].dtype)
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertEqual({}, spec.eval_metric_ops)
self.assertIsNotNone(spec.train_op)
self.assertIsNone(spec.export_outputs)
_assert_no_hooks(self, spec)
# Evaluate predictions, loss, train_op, and summaries.
with self.test_session() as sess:
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
predictions, loss, train_result, summary_str = sess.run((
spec.predictions[prediction_key], spec.loss, spec.train_op,
spec.scaffold.summary_op))
self.assertAllClose(logits, predictions)
self.assertAllClose(expected_loss, loss)
self.assertEqual(expected_train_result, train_result)
_assert_simple_summaries(self, {
metric_keys.MetricKeys.LOSS: expected_loss,
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.076923
metric_keys.MetricKeys.LOSS_MEAN: 39.076923,
}, summary_str)
def test_weighted_multi_batch_eval(self):
"""1d label, 1 example, 3 batches."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45.,), (41.,), (44.,)))
input_fn = numpy_io.numpy_input_fn(
x={
'x': np.array(((42.,), (43.,), (44.,))),
'label_weights': np.array(((1.,), (.1,), (1.5,))),
# 'logits' is not a feature, but we use `numpy_input_fn` to make a
# batched version of it, and pop it off before passing to
# `create_estimator_spec`.
'logits': logits,
},
y=np.array(((35.,), (42.,), (45.,))),
batch_size=1,
num_epochs=1,
shuffle=False)
batched_features, batched_labels = input_fn()
batched_logits = batched_features.pop('logits')
spec = head.create_estimator_spec(
features=batched_features,
mode=model_fn.ModeKeys.EVAL,
logits=batched_logits,
labels=batched_labels,
train_op_fn=None)
# losses = [1*(35-45)^2, .1*(42-41)^2, 1.5*(45-44)^2] = [100, .1, 1.5]
# loss = sum(losses) = 100+.1+1.5 = 101.6
# loss_mean = loss/(1+.1+1.5) = 101.6/2.6 = 39.076923
expected_metrics = {metric_keys.MetricKeys.LOSS_MEAN: 39.076923}
# Assert spec contains expected tensors.
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertItemsEqual(expected_metrics.keys(), spec.eval_metric_ops.keys())
self.assertIsNone(spec.train_op)
_assert_no_hooks(self, spec)
with self.test_session() as sess:
# Finalize graph and initialize variables.
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
queue_runner_impl.start_queue_runners()
# Run tensors for `steps` steps.
steps = len(logits)
results = tuple([
sess.run((
spec.loss,
# The `[1]` gives us the metric update op.
{k: spec.eval_metric_ops[k][1] for k in spec.eval_metric_ops}
)) for _ in range(steps)
])
# Assert losses and metrics.
self.assertAllClose((100, .1, 1.5), [r[0] for r in results])
# For metrics, check results of both update (in `results`) and value ops.
# Note: we only check the result of the last step for streaming metrics.
self.assertAllClose(expected_metrics, results[steps - 1][1])
self.assertAllClose(expected_metrics, {
k: spec.eval_metric_ops[k][0].eval() for k in spec.eval_metric_ops
})
def test_weighted_multi_batch_train(self):
"""1d label, 1 example, 3 batches."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights')
self.assertEqual(1, head.logits_dimension)
# Create estimator spec.
logits = np.array(((45.,), (41.,), (44.,)))
input_fn = numpy_io.numpy_input_fn(
x={
'x': np.array(((42.,), (43.,), (44.,))),
'label_weights': np.array(((1.,), (.1,), (1.5,))),
# 'logits' is not a feature, but we use `numpy_input_fn` to make a
# batched version of it, and pop it off before passing to
# `create_estimator_spec`.
'logits': logits,
},
y=np.array(((35.,), (42.,), (45.,))),
batch_size=1,
num_epochs=1,
shuffle=False)
batched_features, batched_labels = input_fn()
batched_logits = batched_features.pop('logits')
spec = head.create_estimator_spec(
features=batched_features,
mode=model_fn.ModeKeys.TRAIN,
logits=batched_logits,
labels=batched_labels,
train_op_fn=lambda loss: loss * -7.)
# Assert spec contains expected tensors.
self.assertEqual(dtypes.float32, spec.loss.dtype)
self.assertIsNotNone(spec.train_op)
with self.test_session() as sess:
# Finalize graph and initialize variables.
_initialize_variables(self, spec.scaffold)
self.assertIsNotNone(spec.scaffold.summary_op)
queue_runner_impl.start_queue_runners()
results = tuple([
sess.run((spec.loss, spec.train_op)) for _ in range(len(logits))
])
# losses = [1*(35-45)^2, .1*(42-41)^2, 1.5*(45-44)^2] = [100, .1, 1.5]
expected_losses = np.array((100, .1, 1.5))
self.assertAllClose(expected_losses, [r[0] for r in results])
self.assertAllClose(expected_losses * -7., [r[1] for r in results])
def test_multi_dim_weighted_train_create_loss(self):
"""Logits, labels of shape [2, 2, 3], weight shape [2, 2]."""
label_dimension = 3
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=label_dimension)
logits = np.array([[[00., 01., 02.], [10., 11., 12.]],
[[20., 21., 22.], [30., 31., 32.]]])
labels = np.array([[[01., 02., 03.], [12., 13., 14.]],
[[23., 24., 25.], [34., 35., 36.]]])
weights = np.array([[1., 1.5], [2., 2.5]])
expected_unreduced_loss = [[[1., 1., 1.], [4., 4., 4.]],
[[9., 9., 9.], [16., 16., 16.]]]
expected_training_loss = np.sum(
np.array([[[1. * x for x in [1., 1., 1.]],
[1.5 * x for x in [4., 4., 4.]]],
[[2. * x for x in [9., 9., 9.]],
[2.5 * x for x in [16., 16., 16.]]]]))
# Weights are expanded to [2, 2, 1] to match logits.
expected_weights = [[[1.], [1.5]], [[2.], [2.5]]]
# Create loss.
training_loss, unreduced_loss, actual_weights, _ = head.create_loss(
features={'label_weights': weights},
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_training_loss, training_loss.eval())
self.assertAllClose(expected_unreduced_loss, unreduced_loss.eval())
self.assertAllClose(expected_weights, actual_weights.eval())
def test_multi_dim_weighted_train(self):
"""Logits, labels of shape [2, 2, 3], weight shape [2, 2]."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
logits = np.array([[[00., 01., 02.], [10., 11., 12.]],
[[20., 21., 22.], [30., 31., 32.]]])
labels = np.array([[[01., 02., 03.], [12., 13., 14.]],
[[23., 24., 25.], [34., 35., 36.]]])
expected_train_result = b'my_train_op'
features = {
'label_weights': np.array([[1., 1.5], [2., 2.5]]),
}
    # loss = 1*3*1^2 + 1.5*3*2^2 + 2*3*3^2 + 2.5*3*4^2 = 195
expected_loss = 195.
# Create estimator spec.
def _train_op_fn(loss):
with ops.control_dependencies((check_ops.assert_equal(
math_ops.to_float(expected_loss), math_ops.to_float(loss),
name='assert_loss'),)):
return constant_op.constant(expected_train_result)
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_train_op_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
self.assertAllClose(expected_loss, spec.loss.eval())
def test_multi_dim_train_weights_wrong_inner_dim(self):
"""Logits, labels of shape [2, 2, 3], weight shape [2, 1]."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
logits = np.array([[[00., 01., 02.], [10., 11., 12.]],
[[20., 21., 22.], [30., 31., 32.]]])
labels = np.array([[[01., 02., 03.], [12., 13., 14.]],
[[23., 24., 25.], [34., 35., 36.]]])
features = {
'label_weights': np.array([[1.], [2]]),
}
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \] \[2 2 3\] \[weights_shape: \] \[2 1\]'):
spec.loss.eval()
def test_multi_dim_train_weights_wrong_outer_dim(self):
"""Logits, labels of shape [2, 2, 3], weight shape [2, 2, 2]."""
head = head_lib._regression_head_with_mean_squared_error_loss(
weight_column='label_weights', label_dimension=3)
logits = np.array([[[00., 01., 02.], [10., 11., 12.]],
[[20., 21., 22.], [30., 31., 32.]]])
labels = np.array([[[01., 02., 03.], [12., 13., 14.]],
[[23., 24., 25.], [34., 35., 36.]]])
weights_placeholder = array_ops.placeholder(dtype=dtypes.float32)
features = {
'label_weights': weights_placeholder,
}
def _no_op_train_fn(loss):
del loss
return control_flow_ops.no_op()
spec = head.create_estimator_spec(
features=features,
mode=model_fn.ModeKeys.TRAIN,
logits=logits,
labels=labels,
train_op_fn=_no_op_train_fn)
with self.test_session():
_initialize_variables(self, monitored_session.Scaffold())
with self.assertRaisesRegexp(
errors.InvalidArgumentError,
r'\[logits_shape: \]\s\[2 2 3\]\s\[weights_shape: \]\s\[2 2 2\]'):
spec.loss.eval({
weights_placeholder: np.array([[[1., 1.1], [1.5, 1.6]],
[[2., 2.1], [2.5, 2.6]]])})
if __name__ == '__main__':
test.main()
| apache-2.0 |
postlund/home-assistant | homeassistant/components/bitcoin/sensor.py | 1 | 6392 | """Bitcoin information service that uses blockchain.com."""
from datetime import timedelta
import logging
from blockchain import exchangerates, statistics
import voluptuous as vol
from homeassistant.components.sensor import PLATFORM_SCHEMA
from homeassistant.const import ATTR_ATTRIBUTION, CONF_CURRENCY, CONF_DISPLAY_OPTIONS
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.entity import Entity
_LOGGER = logging.getLogger(__name__)
ATTRIBUTION = "Data provided by blockchain.com"
DEFAULT_CURRENCY = "USD"
ICON = "mdi:currency-btc"
SCAN_INTERVAL = timedelta(minutes=5)
OPTION_TYPES = {
"exchangerate": ["Exchange rate (1 BTC)", None],
"trade_volume_btc": ["Trade volume", "BTC"],
"miners_revenue_usd": ["Miners revenue", "USD"],
"btc_mined": ["Mined", "BTC"],
"trade_volume_usd": ["Trade volume", "USD"],
"difficulty": ["Difficulty", None],
"minutes_between_blocks": ["Time between Blocks", "min"],
"number_of_transactions": ["No. of Transactions", None],
"hash_rate": ["Hash rate", "PH/s"],
"timestamp": ["Timestamp", None],
"mined_blocks": ["Mined Blocks", None],
"blocks_size": ["Block size", None],
"total_fees_btc": ["Total fees", "BTC"],
"total_btc_sent": ["Total sent", "BTC"],
"estimated_btc_sent": ["Estimated sent", "BTC"],
"total_btc": ["Total", "BTC"],
"total_blocks": ["Total Blocks", None],
"next_retarget": ["Next retarget", None],
"estimated_transaction_volume_usd": ["Est. Transaction volume", "USD"],
"miners_revenue_btc": ["Miners revenue", "BTC"],
"market_price_usd": ["Market price", "USD"],
}
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_DISPLAY_OPTIONS, default=[]): vol.All(
cv.ensure_list, [vol.In(OPTION_TYPES)]
),
vol.Optional(CONF_CURRENCY, default=DEFAULT_CURRENCY): cv.string,
}
)
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the Bitcoin sensors."""
currency = config.get(CONF_CURRENCY)
if currency not in exchangerates.get_ticker():
_LOGGER.warning("Currency %s is not available. Using USD", currency)
currency = DEFAULT_CURRENCY
data = BitcoinData()
dev = []
for variable in config[CONF_DISPLAY_OPTIONS]:
dev.append(BitcoinSensor(data, variable, currency))
add_entities(dev, True)
class BitcoinSensor(Entity):
"""Representation of a Bitcoin sensor."""
def __init__(self, data, option_type, currency):
"""Initialize the sensor."""
self.data = data
self._name = OPTION_TYPES[option_type][0]
self._unit_of_measurement = OPTION_TYPES[option_type][1]
self._currency = currency
self.type = option_type
self._state = None
@property
def name(self):
"""Return the name of the sensor."""
return self._name
@property
def state(self):
"""Return the state of the sensor."""
return self._state
@property
def unit_of_measurement(self):
"""Return the unit the value is expressed in."""
return self._unit_of_measurement
@property
def icon(self):
"""Return the icon to use in the frontend, if any."""
return ICON
@property
def device_state_attributes(self):
"""Return the state attributes of the sensor."""
return {ATTR_ATTRIBUTION: ATTRIBUTION}
def update(self):
"""Get the latest data and updates the states."""
self.data.update()
stats = self.data.stats
ticker = self.data.ticker
if self.type == "exchangerate":
self._state = ticker[self._currency].p15min
self._unit_of_measurement = self._currency
elif self.type == "trade_volume_btc":
self._state = "{0:.1f}".format(stats.trade_volume_btc)
elif self.type == "miners_revenue_usd":
self._state = "{0:.0f}".format(stats.miners_revenue_usd)
elif self.type == "btc_mined":
self._state = "{}".format(stats.btc_mined * 0.00000001)
elif self.type == "trade_volume_usd":
self._state = "{0:.1f}".format(stats.trade_volume_usd)
elif self.type == "difficulty":
self._state = "{0:.0f}".format(stats.difficulty)
elif self.type == "minutes_between_blocks":
self._state = "{0:.2f}".format(stats.minutes_between_blocks)
elif self.type == "number_of_transactions":
self._state = "{}".format(stats.number_of_transactions)
elif self.type == "hash_rate":
self._state = "{0:.1f}".format(stats.hash_rate * 0.000001)
elif self.type == "timestamp":
self._state = stats.timestamp
elif self.type == "mined_blocks":
self._state = "{}".format(stats.mined_blocks)
elif self.type == "blocks_size":
self._state = "{0:.1f}".format(stats.blocks_size)
elif self.type == "total_fees_btc":
self._state = "{0:.2f}".format(stats.total_fees_btc * 0.00000001)
elif self.type == "total_btc_sent":
self._state = "{0:.2f}".format(stats.total_btc_sent * 0.00000001)
elif self.type == "estimated_btc_sent":
self._state = "{0:.2f}".format(stats.estimated_btc_sent * 0.00000001)
elif self.type == "total_btc":
self._state = "{0:.2f}".format(stats.total_btc * 0.00000001)
elif self.type == "total_blocks":
self._state = "{0:.0f}".format(stats.total_blocks)
elif self.type == "next_retarget":
self._state = "{0:.2f}".format(stats.next_retarget)
elif self.type == "estimated_transaction_volume_usd":
self._state = "{0:.2f}".format(stats.estimated_transaction_volume_usd)
elif self.type == "miners_revenue_btc":
self._state = "{0:.1f}".format(stats.miners_revenue_btc * 0.00000001)
elif self.type == "market_price_usd":
self._state = "{0:.2f}".format(stats.market_price_usd)
class BitcoinData:
"""Get the latest data and update the states."""
def __init__(self):
"""Initialize the data object."""
self.stats = None
self.ticker = None
def update(self):
"""Get the latest data from blockchain.com."""
self.stats = statistics.get()
self.ticker = exchangerates.get_ticker()
| apache-2.0 |
Elite-Kernels/HTC-10 | tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/EventClass.py | 4653 | 3596 | # EventClass.py
#
# This is a library defining some event type classes, which could
# be used by other scripts to analyze the perf samples.
#
# Currently there are just a few classes defined as examples:
# PerfEvent is the base class for all perf event samples, PebsEvent
# is a HW-based Intel x86 PEBS event, and users can add more SW/HW
# event classes based on their requirements.
import struct
# Event types, user could add more here
EVTYPE_GENERIC = 0
EVTYPE_PEBS = 1 # Basic PEBS event
EVTYPE_PEBS_LL = 2 # PEBS event with load latency info
EVTYPE_IBS = 3
#
# Currently we don't have a good way to tell the event type other than by
# the size of the raw buffer: a raw PEBS event with load latency data is
# 176 bytes, while a pure PEBS event is 144 bytes.
#
def create_event(name, comm, dso, symbol, raw_buf):
if (len(raw_buf) == 144):
event = PebsEvent(name, comm, dso, symbol, raw_buf)
elif (len(raw_buf) == 176):
event = PebsNHM(name, comm, dso, symbol, raw_buf)
else:
event = PerfEvent(name, comm, dso, symbol, raw_buf)
return event
class PerfEvent(object):
event_num = 0
def __init__(self, name, comm, dso, symbol, raw_buf, ev_type=EVTYPE_GENERIC):
self.name = name
self.comm = comm
self.dso = dso
self.symbol = symbol
self.raw_buf = raw_buf
self.ev_type = ev_type
PerfEvent.event_num += 1
def show(self):
print "PMU event: name=%12s, symbol=%24s, comm=%8s, dso=%12s" % (self.name, self.symbol, self.comm, self.dso)
#
# Basic Intel PEBS (Precise Event-based Sampling) event, whose raw buffer
# contains the context info when that event happened: the EFLAGS and
# linear IP info, as well as all the registers.
#
class PebsEvent(PerfEvent):
pebs_num = 0
def __init__(self, name, comm, dso, symbol, raw_buf, ev_type=EVTYPE_PEBS):
tmp_buf=raw_buf[0:80]
flags, ip, ax, bx, cx, dx, si, di, bp, sp = struct.unpack('QQQQQQQQQQ', tmp_buf)
self.flags = flags
self.ip = ip
self.ax = ax
self.bx = bx
self.cx = cx
self.dx = dx
self.si = si
self.di = di
self.bp = bp
self.sp = sp
PerfEvent.__init__(self, name, comm, dso, symbol, raw_buf, ev_type)
PebsEvent.pebs_num += 1
del tmp_buf
#
# Intel Nehalem and Westmere support PEBS plus Load Latency info, which lies
# in the four 64-bit words written after the PEBS data:
# Status: records the IA32_PERF_GLOBAL_STATUS register value
# DLA: Data Linear Address (EIP)
# DSE: Data Source Encoding, where the latency happens, hit or miss
# in L1/L2/L3 or IO operations
# LAT: the actual latency in cycles
#
class PebsNHM(PebsEvent):
pebs_nhm_num = 0
def __init__(self, name, comm, dso, symbol, raw_buf, ev_type=EVTYPE_PEBS_LL):
tmp_buf=raw_buf[144:176]
status, dla, dse, lat = struct.unpack('QQQQ', tmp_buf)
self.status = status
self.dla = dla
self.dse = dse
self.lat = lat
PebsEvent.__init__(self, name, comm, dso, symbol, raw_buf, ev_type)
PebsNHM.pebs_nhm_num += 1
del tmp_buf
| gpl-2.0 |
theflofly/tensorflow | tensorflow/python/debug/lib/common.py | 79 | 3077 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Common values and methods for TensorFlow Debugger."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import json
GRPC_URL_PREFIX = "grpc://"
# A key for a Session.run() call.
RunKey = collections.namedtuple("RunKey", ["feed_names", "fetch_names"])
def get_graph_element_name(elem):
"""Obtain the name or string representation of a graph element.
If the graph element has the attribute "name", return name. Otherwise, return
a __str__ representation of the graph element. Certain graph elements, such as
`SparseTensor`s, do not have the attribute "name".
Args:
elem: The graph element in question.
Returns:
If the attribute 'name' is available, return the name. Otherwise, return
    str(elem).
"""
return elem.name if hasattr(elem, "name") else str(elem)
def get_flattened_names(feeds_or_fetches):
"""Get a flattened list of the names in run() call feeds or fetches.
Args:
    feeds_or_fetches: Feeds or fetches of the `Session.run()` call. It may be
a Tensor, an Operation or a Variable. It may also be nested lists, tuples
or dicts. See doc of `Session.run()` for more details.
Returns:
(list of str) A flattened list of fetch names from `feeds_or_fetches`.
"""
lines = []
if isinstance(feeds_or_fetches, (list, tuple)):
for item in feeds_or_fetches:
lines.extend(get_flattened_names(item))
elif isinstance(feeds_or_fetches, dict):
for key in feeds_or_fetches:
lines.extend(get_flattened_names(feeds_or_fetches[key]))
else:
# This ought to be a Tensor, an Operation or a Variable, for which the name
# attribute should be available. (Bottom-out condition of the recursion.)
lines.append(get_graph_element_name(feeds_or_fetches))
return lines
def get_run_key(feed_dict, fetches):
"""Summarize the names of feeds and fetches as a RunKey JSON string.
Args:
feed_dict: The feed_dict given to the `Session.run()` call.
fetches: The fetches from the `Session.run()` call.
Returns:
    A JSON Array consisting of two items. The first item is a flattened
Array of the names of the feeds. The second item is a flattened Array of
the names of the fetches.
"""
return json.dumps(RunKey(get_flattened_names(feed_dict),
get_flattened_names(fetches)))
| apache-2.0 |
robcore/machinex_kernelv2 | scripts/gcc-wrapper.py | 234 | 4095 | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of The Linux Foundation nor
# the names of its contributors may be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Invoke gcc, looking for warnings, and causing a failure if there are
# non-whitelisted warnings.
import errno
import re
import os
import sys
import subprocess
# Note that gcc uses unicode, which may depend on the locale. TODO:
# force LANG to be set to en_US.UTF-8 to get consistent warnings.
allowed_warnings = set([
"alignment.c:327",
"mmu.c:602",
"return_address.c:62",
"swab.h:49",
"SemaLambda.cpp:946",
"CGObjCGNU.cpp:1414",
"BugReporter.h:146",
"RegionStore.cpp:1904",
"SymbolManager.cpp:484",
"RewriteObjCFoundationAPI.cpp:737",
"RewriteObjCFoundationAPI.cpp:696",
"CommentParser.cpp:394",
"CommentParser.cpp:391",
"CommentParser.cpp:356",
"LegalizeDAG.cpp:3646",
"IRBuilder.h:844",
"DataLayout.cpp:193",
"transport.c:653",
"xt_socket.c:307",
"xt_socket.c:161",
"inet_hashtables.h:356",
"xc4000.c:1049",
"xc4000.c:1063",
"f_qdss.c:586",
"mipi_tc358764_dsi2lvds.c:746",
"dynamic_debug.h:75",
"hci_conn.c:407",
"f_qdss.c:740",
"mipi_novatek.c:569",
"swab.h:34",
])
# Capture the name of the object file, so we can find it.
ofile = None
warning_re = re.compile(r'''(.*/|)([^/]+\.[a-z]+:\d+):(\d+:)? warning:''')
def interpret_warning(line):
"""Decode the message from gcc. The messages we care about have a filename, and a warning"""
line = line.rstrip('\n')
m = warning_re.match(line)
if m and m.group(2) not in allowed_warnings:
print "error, forbidden warning:", m.group(2)
# If there is a warning, remove any object if it exists.
if ofile:
try:
os.remove(ofile)
except OSError:
pass
sys.exit(1)
def run_gcc():
args = sys.argv[1:]
# Look for -o
try:
i = args.index('-o')
global ofile
ofile = args[i+1]
except (ValueError, IndexError):
pass
compiler = sys.argv[0]
try:
proc = subprocess.Popen(args, stderr=subprocess.PIPE)
for line in proc.stderr:
print line,
interpret_warning(line)
result = proc.wait()
except OSError as e:
result = e.errno
if result == errno.ENOENT:
print args[0] + ':',e.strerror
print 'Is your PATH set correctly?'
else:
print ' '.join(args), str(e)
return result
if __name__ == '__main__':
status = run_gcc()
sys.exit(status)
| gpl-2.0 |
naphthalene/fabric-bolt | fabric_bolt/projects/migrations/0005_auto__add_field_stage_project.py | 18 | 3869 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'Stage.project'
db.add_column(u'projects_stage', 'project',
self.gf('django.db.models.fields.related.ForeignKey')(default=0, to=orm['projects.Project']),
keep_default=False)
def backwards(self, orm):
# Deleting field 'Stage.project'
db.delete_column(u'projects_stage', 'project_id')
models = {
u'projects.configuration': {
'Meta': {'object_name': 'Configuration'},
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['projects.Project']"}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '500'})
},
u'projects.deployment': {
'Meta': {'object_name': 'Deployment'},
'comments': ('django.db.models.fields.TextField', [], {}),
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'stage': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['projects.Stage']"})
},
u'projects.project': {
'Meta': {'object_name': 'Project'},
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'number_of_deployments': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['projects.ProjectType']", 'null': 'True', 'blank': 'True'})
},
u'projects.projecttype': {
'Meta': {'object_name': 'ProjectType'},
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
u'projects.stage': {
'Meta': {'object_name': 'Stage'},
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['projects.Project']"})
}
}
complete_apps = ['projects'] | mit |
habibun/craft | web/assets/jquery-file-upload/server/gae-python/main.py | 223 | 5173 | # -*- coding: utf-8 -*-
#
# jQuery File Upload Plugin GAE Python Example 2.0
# https://github.com/blueimp/jQuery-File-Upload
#
# Copyright 2011, Sebastian Tschan
# https://blueimp.net
#
# Licensed under the MIT license:
# http://www.opensource.org/licenses/MIT
#
from __future__ import with_statement
from google.appengine.api import files, images
from google.appengine.ext import blobstore, deferred
from google.appengine.ext.webapp import blobstore_handlers
import json, re, urllib, webapp2
WEBSITE = 'http://blueimp.github.com/jQuery-File-Upload/'
MIN_FILE_SIZE = 1 # bytes
MAX_FILE_SIZE = 5000000 # bytes
IMAGE_TYPES = re.compile('image/(gif|p?jpeg|(x-)?png)')
ACCEPT_FILE_TYPES = IMAGE_TYPES
THUMBNAIL_MODIFICATOR = '=s80' # max width / height
EXPIRATION_TIME = 300 # seconds
def cleanup(blob_keys):
blobstore.delete(blob_keys)
class UploadHandler(webapp2.RequestHandler):
def initialize(self, request, response):
super(UploadHandler, self).initialize(request, response)
self.response.headers['Access-Control-Allow-Origin'] = '*'
self.response.headers[
'Access-Control-Allow-Methods'
] = 'OPTIONS, HEAD, GET, POST, PUT, DELETE'
def validate(self, file):
if file['size'] < MIN_FILE_SIZE:
file['error'] = 'File is too small'
elif file['size'] > MAX_FILE_SIZE:
file['error'] = 'File is too big'
elif not ACCEPT_FILE_TYPES.match(file['type']):
file['error'] = 'Filetype not allowed'
else:
return True
return False
def get_file_size(self, file):
file.seek(0, 2) # Seek to the end of the file
size = file.tell() # Get the position of EOF
file.seek(0) # Reset the file position to the beginning
return size
def write_blob(self, data, info):
blob = files.blobstore.create(
mime_type=info['type'],
_blobinfo_uploaded_filename=info['name']
)
with files.open(blob, 'a') as f:
f.write(data)
files.finalize(blob)
return files.blobstore.get_blob_key(blob)
def handle_upload(self):
results = []
blob_keys = []
for name, fieldStorage in self.request.POST.items():
if type(fieldStorage) is unicode:
continue
result = {}
result['name'] = re.sub(r'^.*\\', '',
fieldStorage.filename)
result['type'] = fieldStorage.type
result['size'] = self.get_file_size(fieldStorage.file)
if self.validate(result):
blob_key = str(
self.write_blob(fieldStorage.value, result)
)
blob_keys.append(blob_key)
result['delete_type'] = 'DELETE'
result['delete_url'] = self.request.host_url +\
'/?key=' + urllib.quote(blob_key, '')
if (IMAGE_TYPES.match(result['type'])):
try:
result['url'] = images.get_serving_url(
blob_key,
secure_url=self.request.host_url\
.startswith('https')
)
result['thumbnail_url'] = result['url'] +\
THUMBNAIL_MODIFICATOR
except: # Could not get an image serving url
pass
if not 'url' in result:
result['url'] = self.request.host_url +\
'/' + blob_key + '/' + urllib.quote(
result['name'].encode('utf-8'), '')
results.append(result)
deferred.defer(
cleanup,
blob_keys,
_countdown=EXPIRATION_TIME
)
return results
def options(self):
pass
def head(self):
pass
def get(self):
self.redirect(WEBSITE)
def post(self):
if (self.request.get('_method') == 'DELETE'):
return self.delete()
result = {'files': self.handle_upload()}
s = json.dumps(result, separators=(',',':'))
redirect = self.request.get('redirect')
if redirect:
return self.redirect(str(
redirect.replace('%s', urllib.quote(s, ''), 1)
))
        if 'application/json' in self.request.headers.get('Accept', ''):
self.response.headers['Content-Type'] = 'application/json'
self.response.write(s)
def delete(self):
blobstore.delete(self.request.get('key') or '')
class DownloadHandler(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, key, filename):
if not blobstore.get(key):
self.error(404)
else:
# Cache for the expiration time:
self.response.headers['Cache-Control'] =\
'public,max-age=%d' % EXPIRATION_TIME
self.send_blob(key, save_as=filename)
app = webapp2.WSGIApplication(
[
('/', UploadHandler),
('/([^/]+)/([^/]+)', DownloadHandler)
],
debug=True
) | mit |
sbidoul/odoo | openerp/report/render/html2html/__init__.py | 381 | 1091 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from html2html import parseString
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
jenalgit/django | django/db/models/__init__.py | 239 | 1679 | from functools import wraps
from django.core.exceptions import ObjectDoesNotExist # NOQA
from django.db.models import signals # NOQA
from django.db.models.aggregates import * # NOQA
from django.db.models.deletion import ( # NOQA
CASCADE, DO_NOTHING, PROTECT, SET, SET_DEFAULT, SET_NULL, ProtectedError,
)
from django.db.models.expressions import ( # NOQA
F, Case, Expression, ExpressionWrapper, Func, Value, When,
)
from django.db.models.fields import * # NOQA
from django.db.models.fields.files import FileField, ImageField # NOQA
from django.db.models.fields.proxy import OrderWrt # NOQA
from django.db.models.fields.subclassing import SubfieldBase # NOQA
from django.db.models.lookups import Lookup, Transform # NOQA
from django.db.models.manager import Manager # NOQA
from django.db.models.query import Q, Prefetch, QuerySet # NOQA
# Imports that would create circular imports if sorted
from django.db.models.base import Model # NOQA isort:skip
from django.db.models.fields.related import ( # NOQA isort:skip
ForeignKey, ForeignObject, OneToOneField, ManyToManyField,
ManyToOneRel, ManyToManyRel, OneToOneRel,
)
def permalink(func):
"""
Decorator that calls urlresolvers.reverse() to return a URL using
parameters returned by the decorated function "func".
"func" should be a function that returns a tuple in one of the
following formats:
(viewname, viewargs)
(viewname, viewargs, viewkwargs)
"""
from django.core.urlresolvers import reverse
@wraps(func)
def inner(*args, **kwargs):
bits = func(*args, **kwargs)
return reverse(bits[0], None, *bits[1:3])
return inner
| bsd-3-clause |
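The `permalink` decorator above unpacks a `(viewname, viewargs[, viewkwargs])` tuple and forwards it to `reverse()`. A minimal Django-free sketch of that wrapping pattern — `reverse` here is a hypothetical stand-in for `django.core.urlresolvers.reverse`, and the view name and slug are made up for illustration:

```python
from functools import wraps

def reverse(viewname, urlconf=None, args=(), kwargs=None):
    # Hypothetical stand-in: joins the view name and positional args into a path.
    return "/" + "/".join([viewname] + [str(a) for a in args or ()]) + "/"

def permalink(func):
    @wraps(func)
    def inner(*args, **kwargs):
        bits = func(*args, **kwargs)
        # bits[1:3] covers both the 2-tuple and 3-tuple return formats.
        return reverse(bits[0], None, *bits[1:3])
    return inner

@permalink
def get_absolute_url():
    return ("article-detail", (2024, "hello-world"))

print(get_absolute_url())  # -> /article-detail/2024/hello-world/
```

The same call shape works whether or not the decorated function supplies the optional `viewkwargs` element.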
chauhanhardik/populo_2 | common/lib/xmodule/xmodule/library_content_module.py | 46 | 24808 | # -*- coding: utf-8 -*-
"""
LibraryContent: The XBlock used to include blocks from a library in a course.
"""
import json
from lxml import etree
from copy import copy
from capa.responsetypes import registry
from gettext import ngettext
from lazy import lazy
from .mako_module import MakoModuleDescriptor
from opaque_keys.edx.locator import LibraryLocator
import random
from webob import Response
from xblock.core import XBlock
from xblock.fields import Scope, String, List, Integer, Boolean
from xblock.fragment import Fragment
from xmodule.validation import StudioValidationMessage, StudioValidation
from xmodule.x_module import XModule, STUDENT_VIEW
from xmodule.studio_editable import StudioEditableModule, StudioEditableDescriptor
from .xml_module import XmlDescriptor
from pkg_resources import resource_string # pylint: disable=no-name-in-module
# Make '_' a no-op so we can scrape strings
_ = lambda text: text
ANY_CAPA_TYPE_VALUE = 'any'
def _get_human_name(problem_class):
"""
Get the human-friendly name for a problem type.
"""
return getattr(problem_class, 'human_name', problem_class.__name__)
def _get_capa_types():
"""
Gets capa types tags and labels
"""
capa_types = {tag: _get_human_name(registry.get_class_for_tag(tag)) for tag in registry.registered_tags()}
return [{'value': ANY_CAPA_TYPE_VALUE, 'display_name': _('Any Type')}] + sorted([
{'value': capa_type, 'display_name': caption}
for capa_type, caption in capa_types.items()
], key=lambda item: item.get('display_name'))
class LibraryContentFields(object):
"""
Fields for the LibraryContentModule.
Separated out for now because they need to be added to the module and the
descriptor.
"""
# Please note the display_name of each field below is used in
# common/test/acceptance/pages/studio/library.py:StudioLibraryContentXBlockEditModal
# to locate input elements - keep synchronized
display_name = String(
display_name=_("Display Name"),
help=_("Display name for this module"),
default="Randomized Content Block",
scope=Scope.settings,
)
source_library_id = String(
display_name=_("Library"),
help=_("Select the library from which you want to draw content."),
scope=Scope.settings,
values_provider=lambda instance: instance.source_library_values(),
)
source_library_version = String(
# This is a hidden field that stores the version of source_library when we last pulled content from it
display_name=_("Library Version"),
scope=Scope.settings,
)
mode = String(
display_name=_("Mode"),
help=_("Determines how content is drawn from the library"),
default="random",
values=[
{"display_name": _("Choose n at random"), "value": "random"}
# Future addition: Choose a new random set of n every time the student refreshes the block, for self tests
# Future addition: manually selected blocks
],
scope=Scope.settings,
)
max_count = Integer(
display_name=_("Count"),
help=_("Enter the number of components to display to each student."),
default=1,
scope=Scope.settings,
)
capa_type = String(
display_name=_("Problem Type"),
help=_('Choose a problem type to fetch from the library. If "Any Type" is selected no filtering is applied.'),
default=ANY_CAPA_TYPE_VALUE,
values=_get_capa_types(),
scope=Scope.settings,
)
filters = String(default="") # TBD
has_score = Boolean(
display_name=_("Scored"),
help=_("Set this value to True if this module is either a graded assignment or a practice problem."),
default=False,
scope=Scope.settings,
)
selected = List(
# This is a list of (block_type, block_id) tuples used to record
# which random/first set of matching blocks was selected per user
default=[],
scope=Scope.user_state,
)
has_children = True
@property
def source_library_key(self):
"""
Convenience method to get the library ID as a LibraryLocator and not just a string
"""
return LibraryLocator.from_string(self.source_library_id)
#pylint: disable=abstract-method
@XBlock.wants('library_tools') # Only needed in studio
class LibraryContentModule(LibraryContentFields, XModule, StudioEditableModule):
"""
An XBlock whose children are chosen dynamically from a content library.
Can be used to create randomized assessments among other things.
Note: technically, all matching blocks from the content library are added
as children of this block, but only a subset of those children are shown to
any particular student.
"""
def _publish_event(self, event_name, result, **kwargs):
""" Helper method to publish an event for analytics purposes """
event_data = {
"location": unicode(self.location),
"result": result,
"previous_count": getattr(self, "_last_event_result_count", len(self.selected)),
"max_count": self.max_count,
}
event_data.update(kwargs)
self.runtime.publish(self, "edx.librarycontentblock.content.{}".format(event_name), event_data)
self._last_event_result_count = len(result) # pylint: disable=attribute-defined-outside-init
def selected_children(self):
"""
Returns a set() of block_ids indicating which of the possible children
have been selected to display to the current user.
This reads and updates the "selected" field, which has user_state scope.
Note: self.selected and the return value contain block_ids. To get
actual BlockUsageLocators, it is necessary to use self.children,
because the block_ids alone do not specify the block type.
"""
if hasattr(self, "_selected_set"):
# Already done:
return self._selected_set # pylint: disable=access-member-before-definition
selected = set(tuple(k) for k in self.selected) # set of (block_type, block_id) tuples assigned to this student
lib_tools = self.runtime.service(self, 'library_tools')
format_block_keys = lambda keys: lib_tools.create_block_analytics_summary(self.location.course_key, keys)
# Determine which of our children we will show:
valid_block_keys = set([(c.block_type, c.block_id) for c in self.children]) # pylint: disable=no-member
# Remove any selected blocks that are no longer valid:
invalid_block_keys = (selected - valid_block_keys)
if invalid_block_keys:
selected -= invalid_block_keys
# Publish an event for analytics purposes:
# reason "invalid" means deleted from library or a different library is now being used.
self._publish_event(
"removed",
result=format_block_keys(selected),
removed=format_block_keys(invalid_block_keys),
reason="invalid"
)
# If max_count has been decreased, we may have to drop some previously selected blocks:
overlimit_block_keys = set()
while len(selected) > self.max_count:
overlimit_block_keys.add(selected.pop())
if overlimit_block_keys:
# Publish an event for analytics purposes:
self._publish_event(
"removed",
result=format_block_keys(selected),
removed=format_block_keys(overlimit_block_keys),
reason="overlimit"
)
# Do we have enough blocks now?
num_to_add = self.max_count - len(selected)
if num_to_add > 0:
added_block_keys = None
# We need to select [more] blocks to display to this user:
pool = valid_block_keys - selected
if self.mode == "random":
num_to_add = min(len(pool), num_to_add)
added_block_keys = set(random.sample(pool, num_to_add))
# We now have the correct n random children to show for this user.
else:
raise NotImplementedError("Unsupported mode.")
selected |= added_block_keys
if added_block_keys:
# Publish an event for analytics purposes:
self._publish_event(
"assigned",
result=format_block_keys(selected),
added=format_block_keys(added_block_keys)
)
# Save our selections to the user state, to ensure consistency:
self.selected = list(selected) # TODO: this doesn't save from the LMS "Progress" page.
# Cache the results
self._selected_set = selected # pylint: disable=attribute-defined-outside-init
return selected
def _get_selected_child_blocks(self):
"""
Generator returning XBlock instances of the children selected for the
current user.
"""
for block_type, block_id in self.selected_children():
yield self.runtime.get_block(self.location.course_key.make_usage_key(block_type, block_id))
def student_view(self, context):
fragment = Fragment()
contents = []
child_context = {} if not context else copy(context)
for child in self._get_selected_child_blocks():
for displayable in child.displayable_items():
rendered_child = displayable.render(STUDENT_VIEW, child_context)
fragment.add_frag_resources(rendered_child)
contents.append({
'id': displayable.location.to_deprecated_string(),
'content': rendered_child.content,
})
fragment.add_content(self.system.render_template('vert_module.html', {
'items': contents,
'xblock_context': context,
}))
return fragment
def validate(self):
"""
Validates the state of this Library Content Module Instance.
"""
return self.descriptor.validate()
def author_view(self, context):
"""
Renders the Studio views.
Normal studio view: If block is properly configured, displays library status summary
Studio container view: displays a preview of all possible children.
"""
fragment = Fragment()
root_xblock = context.get('root_xblock')
is_root = root_xblock and root_xblock.location == self.location
if is_root:
# User has clicked the "View" link. Show a preview of all possible children:
if self.children: # pylint: disable=no-member
fragment.add_content(self.system.render_template("library-block-author-preview-header.html", {
'max_count': self.max_count,
'display_name': self.display_name or self.url_name,
}))
context['can_edit_visibility'] = False
self.render_children(context, fragment, can_reorder=False, can_add=False)
# else: When shown on a unit page, don't show any sort of preview -
# just the status of this block in the validation area.
# The following JS is used to make the "Update now" button work on the unit page and the container view:
fragment.add_javascript_url(self.runtime.local_resource_url(self, 'public/js/library_content_edit.js'))
fragment.initialize_js('LibraryContentAuthorView')
return fragment
def get_child_descriptors(self):
"""
Return only the subset of our children relevant to the current student.
"""
return list(self._get_selected_child_blocks())
@XBlock.wants('user')
@XBlock.wants('library_tools') # Only needed in studio
@XBlock.wants('studio_user_permissions') # Only available in studio
class LibraryContentDescriptor(LibraryContentFields, MakoModuleDescriptor, XmlDescriptor, StudioEditableDescriptor):
"""
Descriptor class for LibraryContentModule XBlock.
"""
module_class = LibraryContentModule
mako_template = 'widgets/metadata-edit.html'
js = {'coffee': [resource_string(__name__, 'js/src/vertical/edit.coffee')]}
js_module_name = "VerticalDescriptor"
show_in_read_only_mode = True
@property
def non_editable_metadata_fields(self):
non_editable_fields = super(LibraryContentDescriptor, self).non_editable_metadata_fields
# The only supported mode is currently 'random'.
# Add the mode field to non_editable_metadata_fields so that it doesn't
# render in the edit form.
non_editable_fields.extend([LibraryContentFields.mode, LibraryContentFields.source_library_version])
return non_editable_fields
@lazy
def tools(self):
"""
Grab the library tools service or raise an error.
"""
return self.runtime.service(self, 'library_tools')
def get_user_id(self):
"""
Get the ID of the current user.
"""
user_service = self.runtime.service(self, 'user')
if user_service:
# May be None when creating bok choy test fixtures
user_id = user_service.get_current_user().opt_attrs.get('edx-platform.user_id', None)
else:
user_id = None
return user_id
@XBlock.handler
def refresh_children(self, request=None, suffix=None): # pylint: disable=unused-argument
"""
Refresh children:
This method is to be used when any of the libraries that this block
references have been updated. It will re-fetch all matching blocks from
the libraries, and copy them as children of this block. The children
will be given new block_ids, but the definition ID used should be the
exact same definition ID used in the library.
        This method will update this block's 'source_library_version' field to store
the version number of the libraries used, so we easily determine if
this block is up to date or not.
"""
user_perms = self.runtime.service(self, 'studio_user_permissions')
user_id = self.get_user_id()
if not self.tools:
return Response("Library Tools unavailable in current runtime.", status=400)
self.tools.update_children(self, user_id, user_perms)
return Response()
# Copy over any overridden settings the course author may have applied to the blocks.
def _copy_overrides(self, store, user_id, source, dest):
"""
Copy any overrides the user has made on blocks in this library.
"""
for field in source.fields.itervalues():
if field.scope == Scope.settings and field.is_set_on(source):
setattr(dest, field.name, field.read_from(source))
if source.has_children:
source_children = [self.runtime.get_block(source_key) for source_key in source.children]
dest_children = [self.runtime.get_block(dest_key) for dest_key in dest.children]
for source_child, dest_child in zip(source_children, dest_children):
self._copy_overrides(store, user_id, source_child, dest_child)
store.update_item(dest, user_id)
def studio_post_duplicate(self, store, source_block):
"""
Used by the studio after basic duplication of a source block. We handle the children
ourselves, because we have to properly reference the library upstream and set the overrides.
Otherwise we'll end up losing data on the next refresh.
"""
# The first task will be to refresh our copy of the library to generate the children.
# We must do this at the currently set version of the library block. Otherwise we may not have
# exactly the same children-- someone may be duplicating an out of date block, after all.
user_id = self.get_user_id()
user_perms = self.runtime.service(self, 'studio_user_permissions')
if not self.tools:
raise RuntimeError("Library tools unavailable, duplication will not be sane!")
self.tools.update_children(self, user_id, user_perms, version=self.source_library_version)
self._copy_overrides(store, user_id, source_block, self)
# Children have been handled.
return True
def _validate_library_version(self, validation, lib_tools, version, library_key):
"""
Validates library version
"""
latest_version = lib_tools.get_library_version(library_key)
if latest_version is not None:
if version is None or version != unicode(latest_version):
validation.set_summary(
StudioValidationMessage(
StudioValidationMessage.WARNING,
_(u'This component is out of date. The library has new content.'),
# TODO: change this to action_runtime_event='...' once the unit page supports that feature.
# See https://openedx.atlassian.net/browse/TNL-993
action_class='library-update-btn',
# Translators: {refresh_icon} placeholder is substituted to "↻" (without double quotes)
action_label=_(u"{refresh_icon} Update now.").format(refresh_icon=u"↻")
)
)
return False
else:
validation.set_summary(
StudioValidationMessage(
StudioValidationMessage.ERROR,
_(u'Library is invalid, corrupt, or has been deleted.'),
action_class='edit-button',
action_label=_(u"Edit Library List.")
)
)
return False
return True
def _set_validation_error_if_empty(self, validation, summary):
""" Helper method to only set validation summary if it's empty """
if validation.empty:
validation.set_summary(summary)
def validate(self):
"""
Validates the state of this Library Content Module Instance. This
is the override of the general XBlock method, and it will also ask
its superclass to validate.
"""
validation = super(LibraryContentDescriptor, self).validate()
if not isinstance(validation, StudioValidation):
validation = StudioValidation.copy(validation)
library_tools = self.runtime.service(self, "library_tools")
if not (library_tools and library_tools.can_use_library_content(self)):
validation.set_summary(
StudioValidationMessage(
StudioValidationMessage.ERROR,
_(
u"This course does not support content libraries. "
u"Contact your system administrator for more information."
)
)
)
return validation
if not self.source_library_id:
validation.set_summary(
StudioValidationMessage(
StudioValidationMessage.NOT_CONFIGURED,
_(u"A library has not yet been selected."),
action_class='edit-button',
action_label=_(u"Select a Library.")
)
)
return validation
lib_tools = self.runtime.service(self, 'library_tools')
self._validate_library_version(validation, lib_tools, self.source_library_version, self.source_library_key)
# Note: we assume refresh_children() has been called
# since the last time fields like source_library_id or capa_types were changed.
matching_children_count = len(self.children) # pylint: disable=no-member
if matching_children_count == 0:
self._set_validation_error_if_empty(
validation,
StudioValidationMessage(
StudioValidationMessage.WARNING,
_(u'There are no matching problem types in the specified libraries.'),
action_class='edit-button',
action_label=_(u"Select another problem type.")
)
)
if matching_children_count < self.max_count:
self._set_validation_error_if_empty(
validation,
StudioValidationMessage(
StudioValidationMessage.WARNING,
(
ngettext(
u'The specified library is configured to fetch {count} problem, ',
u'The specified library is configured to fetch {count} problems, ',
self.max_count
) +
ngettext(
u'but there is only {actual} matching problem.',
u'but there are only {actual} matching problems.',
matching_children_count
)
).format(count=self.max_count, actual=matching_children_count),
action_class='edit-button',
action_label=_(u"Edit the library configuration.")
)
)
return validation
def source_library_values(self):
"""
Return a list of possible values for self.source_library_id
"""
lib_tools = self.runtime.service(self, 'library_tools')
user_perms = self.runtime.service(self, 'studio_user_permissions')
all_libraries = lib_tools.list_available_libraries()
if user_perms:
all_libraries = [
(key, name) for key, name in all_libraries
if user_perms.can_read(key) or self.source_library_id == unicode(key)
]
all_libraries.sort(key=lambda entry: entry[1]) # Sort by name
if self.source_library_id and self.source_library_key not in [entry[0] for entry in all_libraries]:
all_libraries.append((self.source_library_id, _(u"Invalid Library")))
all_libraries = [(u"", _("No Library Selected"))] + all_libraries
values = [{"display_name": name, "value": unicode(key)} for key, name in all_libraries]
return values
def editor_saved(self, user, old_metadata, old_content):
"""
If source_library_id or capa_type has been edited, refresh_children automatically.
"""
old_source_library_id = old_metadata.get('source_library_id', [])
if (old_source_library_id != self.source_library_id or
old_metadata.get('capa_type', ANY_CAPA_TYPE_VALUE) != self.capa_type):
try:
self.refresh_children()
except ValueError:
pass # The validation area will display an error message, no need to do anything now.
def has_dynamic_children(self):
"""
Inform the runtime that our children vary per-user.
See get_child_descriptors() above
"""
return True
def get_content_titles(self):
"""
Returns list of friendly titles for our selected children only; without
        this, all possible children's titles would be seen in the sequence bar in
the LMS.
This overwrites the get_content_titles method included in x_module by default.
"""
titles = []
for child in self._xmodule.get_child_descriptors():
titles.extend(child.get_content_titles())
return titles
@classmethod
def definition_from_xml(cls, xml_object, system):
children = [
# pylint: disable=no-member
system.process_xml(etree.tostring(child)).scope_ids.usage_id
for child in xml_object.getchildren()
]
definition = {
attr_name: json.loads(attr_value)
            for attr_name, attr_value in xml_object.attrib.items()
}
return definition, children
def definition_to_xml(self, resource_fs):
""" Exports Library Content Module to XML """
# pylint: disable=no-member
xml_object = etree.Element('library_content')
for child in self.get_children():
self.runtime.add_block_as_child_node(child, xml_object)
# Set node attributes based on our fields.
for field_name, field in self.fields.iteritems():
if field_name in ('children', 'parent', 'content'):
continue
if field.is_set_on(self):
xml_object.set(field_name, unicode(field.read_from(self)))
return xml_object
| agpl-3.0 |
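The per-user logic in `selected_children()` above reduces to an invariant: discard selections that are no longer valid children, shed extras if `max_count` was lowered, then top up with a random sample from the remaining pool. A self-contained sketch of just that invariant, with the XBlock machinery stripped out (function and variable names are illustrative, not part of the module's API):

```python
import random

def select_blocks(valid, selected, max_count, rng=random):
    """Drop invalid picks, trim any overflow, then top up with a random sample."""
    selected = set(selected) & valid                 # forget blocks no longer in the library
    while len(selected) > max_count:                 # max_count was lowered: shed extras
        selected.pop()
    pool = valid - selected
    need = min(len(pool), max_count - len(selected))
    selected |= set(rng.sample(sorted(pool), need))  # sorted() gives the sampler a stable sequence
    return selected

valid = {("problem", str(i)) for i in range(5)}
picks = select_blocks(valid, {("problem", "0"), ("video", "9")}, 3, random.Random(42))
```

Here the stale `("video", "9")` key is dropped, the surviving pick is kept, and two more blocks are drawn at random to reach `max_count`.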
taxpon/sverchok | nodes/object_nodes/scene_raycast2.py | 3 | 2516 | # ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import bpy
from sverchok.node_tree import SverchCustomTreeNode
from sverchok.data_structure import (updateNode, match_long_repeat)
class SvSCNRayCastNodeMK2(bpy.types.Node, SverchCustomTreeNode):
''' RayCast Scene '''
bl_idname = 'SvSCNRayCastNodeMK2'
bl_label = 'Scene Raycast MK2' #new is nonsense name
bl_icon = 'OUTLINER_OB_EMPTY'
def sv_init(self, context):
si,so = self.inputs.new,self.outputs.new
si('VerticesSocket', 'origin').use_prop = True
si('VerticesSocket', 'direction').use_prop = True
so('VerticesSocket', "HitP")
so('VerticesSocket', "HitNorm")
so('StringsSocket', "Succes")
so('StringsSocket', "FaceIndex")
so("SvObjectSocket", "Objects")
so("MatrixSocket", "hited object matrix")
def process(self):
P,N,S,I,O,M = self.outputs
rc = []
st = self.inputs['origin'].sv_get()[0]
en = self.inputs['direction'].sv_get()[0]
st, en = match_long_repeat([st, en])
for i,i2 in zip(st,en):
rc.append(bpy.context.scene.ray_cast(i, i2))
if P.is_linked:
P.sv_set([[i[1][:] for i in rc]])
if N.is_linked:
N.sv_set([[i[2][:] for i in rc]])
if S.is_linked:
S.sv_set([[i[0] for i in rc]])
if I.is_linked:
I.sv_set([[i[3] for i in rc]])
if O.is_linked:
O.sv_set([i[4] for i in rc])
if M.is_linked:
M.sv_set([[v[:] for v in i[5]] for i in rc])
def update_socket(self, context):
self.update()
def register():
bpy.utils.register_class(SvSCNRayCastNodeMK2)
def unregister():
bpy.utils.unregister_class(SvSCNRayCastNodeMK2)
| gpl-3.0 |
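The raycast node above pairs its origin and direction lists with `match_long_repeat` before zipping them. A sketch of that helper's assumed semantics — shorter lists are padded by repeating their last element until all lists share the longest length (this is an illustration, not sverchok's actual implementation):

```python
def match_long_repeat(lists):
    """Pad each list to the length of the longest one by repeating its last item."""
    longest = max(len(lst) for lst in lists)
    return [lst + [lst[-1]] * (longest - len(lst)) for lst in lists]

# One shared origin cast along three directions:
starts, directions = match_long_repeat(
    [[(0, 0, 0)], [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
)
```

After matching, `zip(starts, directions)` yields one (origin, direction) pair per ray, which is what the node's `for i, i2 in zip(st, en)` loop relies on.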
2014c2g9/c2g9 | wsgi/static/Brython2.1.0-20140419-113919/Lib/test/regrtest.py | 718 | 65317 | #! /usr/bin/python3.3
"""
Usage:
python -m test [options] [test_name1 [test_name2 ...]]
python path/to/Lib/test/regrtest.py [options] [test_name1 [test_name2 ...]]
If no arguments or options are provided, finds all files matching
the pattern "test_*" in the Lib/test subdirectory and runs
them in alphabetical order (but see -M and -u, below, for exceptions).
For more rigorous testing, it is useful to use the following
command line:
python -E -Wd -m test [options] [test_name1 ...]
Options:
-h/--help -- print this text and exit
--timeout TIMEOUT
-- dump the traceback and exit if a test takes more
than TIMEOUT seconds; disabled if TIMEOUT is negative
or equals to zero
--wait -- wait for user input, e.g., allow a debugger to be attached
Verbosity
-v/--verbose -- run tests in verbose mode with output to stdout
-w/--verbose2 -- re-run failed tests in verbose mode
-W/--verbose3 -- display test output on failure
-d/--debug -- print traceback for failed tests
-q/--quiet -- no output unless one or more tests fail
-o/--slow -- print the slowest 10 tests
--header -- print header with interpreter info
Selecting tests
-r/--randomize -- randomize test execution order (see below)
--randseed -- pass a random seed to reproduce a previous random run
-f/--fromfile -- read names of tests to run from a file (see below)
-x/--exclude -- arguments are tests to *exclude*
-s/--single -- single step through a set of tests (see below)
-S/--start START -- resume an aborted run, starting at test START (see below)
-m/--match PAT -- match test cases and methods with glob pattern PAT
-G/--failfast -- fail as soon as a test fails (only with -v or -W)
-u/--use RES1,RES2,...
-- specify which special resource intensive tests to run
-M/--memlimit LIMIT
-- run very large memory-consuming tests
--testdir DIR
-- execute test files in the specified directory (instead
of the Python stdlib test suite)
Special runs
-l/--findleaks -- if GC is available detect tests that leak memory
-L/--runleaks -- run the leaks(1) command just before exit
-R/--huntrleaks RUNCOUNTS
-- search for reference leaks (needs debug build, v. slow)
-j/--multiprocess PROCESSES
-- run PROCESSES processes at once
-T/--coverage -- turn on code coverage tracing using the trace module
-D/--coverdir DIRECTORY
-- Directory where coverage files are put
-N/--nocoverdir -- Put coverage files alongside modules
-t/--threshold THRESHOLD
-- call gc.set_threshold(THRESHOLD)
-n/--nowindows -- suppress error message boxes on Windows
-F/--forever -- run the specified tests in a loop, until an error happens
Additional Option Details:
-r randomizes test execution order. You can use --randseed=int to provide a
int seed value for the randomizer; this is useful for reproducing troublesome
test orders.
-s On the first invocation of regrtest using -s, the first test file found
or the first test file given on the command line is run, and the name of
the next test is recorded in a file named pynexttest. If run from the
Python build directory, pynexttest is located in the 'build' subdirectory,
otherwise it is located in tempfile.gettempdir(). On subsequent runs,
the test in pynexttest is run, and the next test is written to pynexttest.
When the last test has been run, pynexttest is deleted. In this way it
is possible to single step through the test files. This is useful when
doing memory analysis on the Python interpreter, a process that tends to
consume too many resources to run the full regression test non-stop.
-S is used to continue running tests after an aborted run. It will
maintain the order of a standard run (i.e., this assumes -r is not used).
This is useful after the tests have prematurely stopped for some external
reason and you want to start running from where you left off rather
than starting from the beginning.
-f reads the names of tests from the file given as f's argument, one
or more test names per line. Whitespace is ignored. Blank lines and
lines beginning with '#' are ignored. This is especially useful for
whittling down failures involving interactions among tests.
-L causes the leaks(1) command to be run just before exit if it exists.
leaks(1) is available on Mac OS X and presumably on some other
FreeBSD-derived systems.
-R runs each test several times and examines sys.gettotalrefcount() to
see if the test appears to be leaking references. The argument should
be of the form stab:run:fname where 'stab' is the number of times the
test is run to let gettotalrefcount settle down, 'run' is the number
of times further it is run and 'fname' is the name of the file the
reports are written to. These parameters all have defaults (5, 4 and
"reflog.txt" respectively), and the minimal invocation is '-R :'.
-M runs tests that require an exorbitant amount of memory. These tests
typically try to ascertain that containers keep working when containing more than
2 billion objects, which only works on 64-bit systems. There are also some
tests that try to exhaust the address space of the process, which only makes
sense on 32-bit systems with at least 2Gb of memory. The passed-in memlimit,
which is a string in the form of '2.5Gb', determines how much memory the
tests will limit themselves to (but they may go slightly over). The number
shouldn't be more memory than the machine has (including swap memory). You
should also keep in mind that swap memory is generally much, much slower
than RAM, and setting memlimit to all available RAM or higher will heavily
tax the machine. On the other hand, it is no use running these tests with a
limit of less than 2.5Gb, and many require more than 20Gb. Tests that expect
to use more than memlimit memory will be skipped. The big-memory tests
generally run very, very long.
-u is used to specify which special resource intensive tests to run,
such as those requiring large file support or network connectivity.
The argument is a comma-separated list of words indicating the
resources to test. Currently only the following are defined:
all - Enable all special resources.
none - Disable all special resources (this is the default).
audio - Tests that use the audio device. (There are known
cases of broken audio drivers that can crash Python or
even the Linux kernel.)
curses - Tests that use curses and will modify the terminal's
state and output modes.
largefile - It is okay to run some test that may create huge
files. These tests can take a long time and may
consume >2GB of disk space temporarily.
network - It is okay to run tests that use external network
resource, e.g. testing SSL support for sockets.
decimal - Test the decimal module against a large suite that
verifies compliance with standards.
cpu - Used for certain CPU-heavy tests.
subprocess Run all tests for the subprocess module.
urlfetch - It is okay to download files required on testing.
gui - Run tests that require a running GUI.
To enable all resources except one, use '-uall,-<resource>'. For
example, to run all the tests except for the gui tests, give the
option '-uall,-gui'.
"""
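The '-uall,-<resource>' semantics described above can be sketched in isolation (this mirrors the option-parsing loop that appears later in main(); the function name is illustrative, not part of regrtest):

```python
RESOURCE_NAMES = ('audio', 'curses', 'largefile', 'network',
                  'decimal', 'cpu', 'subprocess', 'urlfetch', 'gui')

def parse_use(arg, use_resources=None):
    """Expand a -u/--use argument such as 'all,-gui' into a resource list."""
    use_resources = [] if use_resources is None else list(use_resources)
    for r in (x.lower() for x in arg.split(',')):
        if r == 'all':
            use_resources[:] = RESOURCE_NAMES
            continue
        if r == 'none':
            del use_resources[:]
            continue
        remove = r.startswith('-')
        if remove:
            r = r[1:]
        if r not in RESOURCE_NAMES:
            raise ValueError('Invalid -u/--use option: ' + arg)
        if remove:
            if r in use_resources:
                use_resources.remove(r)
        elif r not in use_resources:
            use_resources.append(r)
    return use_resources

print(parse_use('all,-gui'))   # every resource except 'gui'
```

Entries are processed left to right, so 'all,-gui' first enables everything, then removes 'gui'.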
# We import importlib *ASAP* in order to test #15386
import importlib
import builtins
import faulthandler
import getopt
import io
import json
import logging
import os
import platform
import random
import re
import shutil
import signal
import sys
import sysconfig
import tempfile
import time
import traceback
import unittest
import warnings
from inspect import isabstract
try:
import threading
except ImportError:
threading = None
try:
import multiprocessing.process
except ImportError:
multiprocessing = None
# Some times __path__ and __file__ are not absolute (e.g. while running from
# Lib/) and, if we change the CWD to run the tests in a temporary dir, some
# imports might fail. This affects only the modules imported before os.chdir().
# These modules are searched first in sys.path[0] (so '' -- the CWD) and if
# they are found in the CWD their __file__ and __path__ will be relative (this
# happens before the chdir). All the modules imported after the chdir, are
# not found in the CWD, and since the other paths in sys.path[1:] are absolute
# (site.py absolutize them), the __file__ and __path__ will be absolute too.
# Therefore it is necessary to absolutize manually the __file__ and __path__ of
# the packages to prevent later imports to fail when the CWD is different.
for module in sys.modules.values():
if hasattr(module, '__path__'):
module.__path__ = [os.path.abspath(path) for path in module.__path__]
if hasattr(module, '__file__'):
module.__file__ = os.path.abspath(module.__file__)
# MacOSX (a.k.a. Darwin) has a default stack size that is too small
# for deeply recursive regular expressions. We see this as crashes in
# the Python test suite when running test_re.py and test_sre.py. The
# fix is to set the stack limit to 2048.
# This approach may also be useful for other Unixy platforms that
# suffer from small default stack limits.
if sys.platform == 'darwin':
try:
import resource
except ImportError:
pass
else:
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
newsoft = min(hard, max(soft, 1024*2048))
resource.setrlimit(resource.RLIMIT_STACK, (newsoft, hard))
# Test result constants.
PASSED = 1
FAILED = 0
ENV_CHANGED = -1
SKIPPED = -2
RESOURCE_DENIED = -3
INTERRUPTED = -4
CHILD_ERROR = -5 # error in a child process
from test import support
RESOURCE_NAMES = ('audio', 'curses', 'largefile', 'network',
'decimal', 'cpu', 'subprocess', 'urlfetch', 'gui')
TEMPDIR = os.path.abspath(tempfile.gettempdir())
def usage(msg):
print(msg, file=sys.stderr)
print("Use --help for usage", file=sys.stderr)
sys.exit(2)
def main(tests=None, testdir=None, verbose=0, quiet=False,
exclude=False, single=0, randomize=False, fromfile=None,
findleaks=False, use_resources=None, trace=False, coverdir='coverage',
runleaks=False, huntrleaks=False, verbose2=False, print_slow=False,
random_seed=None, use_mp=None, verbose3=False, forever=False,
header=False, failfast=False, match_tests=None):
"""Execute a test suite.
This also parses command-line options and modifies its behavior
accordingly.
tests -- a list of strings containing test names (optional)
testdir -- the directory in which to look for tests (optional)
Users other than the Python test suite will certainly want to
specify testdir; if it's omitted, the directory containing the
Python test suite is searched for.
If the tests argument is omitted, the tests listed on the
command-line will be used. If that's empty, too, then all *.py
files beginning with test_ will be used.
The other default arguments (verbose, quiet, exclude,
single, randomize, findleaks, use_resources, trace, coverdir,
print_slow, and random_seed) allow programmers calling main()
directly to set the values that would normally be set by flags
on the command line.
"""
# Display the Python traceback on fatal errors (e.g. segfault)
faulthandler.enable(all_threads=True)
# Display the Python traceback on SIGALRM or SIGUSR1 signal
signals = []
if hasattr(signal, 'SIGALRM'):
signals.append(signal.SIGALRM)
if hasattr(signal, 'SIGUSR1'):
signals.append(signal.SIGUSR1)
for signum in signals:
faulthandler.register(signum, chain=True)
replace_stdout()
support.record_original_stdout(sys.stdout)
try:
opts, args = getopt.getopt(sys.argv[1:], 'hvqxsoS:rf:lu:t:TD:NLR:FdwWM:nj:Gm:',
['help', 'verbose', 'verbose2', 'verbose3', 'quiet',
'exclude', 'single', 'slow', 'randomize', 'fromfile=', 'findleaks',
'use=', 'threshold=', 'coverdir=', 'nocoverdir',
'runleaks', 'huntrleaks=', 'memlimit=', 'randseed=',
'multiprocess=', 'coverage', 'slaveargs=', 'forever', 'debug',
'start=', 'nowindows', 'header', 'testdir=', 'timeout=', 'wait',
'failfast', 'match=', 'next='])
except getopt.error as msg:
usage(msg)
# Defaults
if random_seed is None:
random_seed = random.randrange(10000000)
if use_resources is None:
use_resources = []
debug = False
start = None
timeout = None
for o, a in opts:
if o in ('-h', '--help'):
print(__doc__)
return
elif o in ('-v', '--verbose'):
verbose += 1
elif o in ('-w', '--verbose2'):
verbose2 = True
elif o in ('-d', '--debug'):
debug = True
elif o in ('-W', '--verbose3'):
verbose3 = True
elif o in ('-G', '--failfast'):
failfast = True
elif o in ('-q', '--quiet'):
            quiet = True
            verbose = 0
elif o in ('-x', '--exclude'):
exclude = True
elif o in ('-S', '--start'):
start = a
elif o in ('-s', '--single'):
single = 1
elif o == '--next':
single = int(a)
elif o in ('-o', '--slow'):
print_slow = True
elif o in ('-r', '--randomize'):
randomize = True
elif o == '--randseed':
random_seed = int(a)
elif o in ('-f', '--fromfile'):
fromfile = a
elif o in ('-m', '--match'):
match_tests = a
elif o in ('-l', '--findleaks'):
findleaks = True
elif o in ('-L', '--runleaks'):
runleaks = True
elif o in ('-t', '--threshold'):
import gc
gc.set_threshold(int(a))
elif o in ('-T', '--coverage'):
trace = True
elif o in ('-D', '--coverdir'):
            # CWD is replaced with a temporary dir before calling main(), so
            # we need to join it with the saved CWD so it goes where the user expects.
coverdir = os.path.join(support.SAVEDCWD, a)
elif o in ('-N', '--nocoverdir'):
coverdir = None
elif o in ('-R', '--huntrleaks'):
huntrleaks = a.split(':')
if len(huntrleaks) not in (2, 3):
print(a, huntrleaks)
usage('-R takes 2 or 3 colon-separated arguments')
if not huntrleaks[0]:
huntrleaks[0] = 5
else:
huntrleaks[0] = int(huntrleaks[0])
if not huntrleaks[1]:
huntrleaks[1] = 4
else:
huntrleaks[1] = int(huntrleaks[1])
if len(huntrleaks) == 2 or not huntrleaks[2]:
huntrleaks[2:] = ["reflog.txt"]
# Avoid false positives due to various caches
# filling slowly with random data:
warm_caches()
elif o in ('-M', '--memlimit'):
support.set_memlimit(a)
elif o in ('-u', '--use'):
u = [x.lower() for x in a.split(',')]
for r in u:
if r == 'all':
use_resources[:] = RESOURCE_NAMES
continue
if r == 'none':
del use_resources[:]
continue
remove = False
if r[0] == '-':
remove = True
r = r[1:]
if r not in RESOURCE_NAMES:
usage('Invalid -u/--use option: ' + a)
if remove:
if r in use_resources:
use_resources.remove(r)
elif r not in use_resources:
use_resources.append(r)
elif o in ('-n', '--nowindows'):
import msvcrt
msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS|
msvcrt.SEM_NOALIGNMENTFAULTEXCEPT|
msvcrt.SEM_NOGPFAULTERRORBOX|
msvcrt.SEM_NOOPENFILEERRORBOX)
try:
msvcrt.CrtSetReportMode
except AttributeError:
# release build
pass
else:
for m in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]:
msvcrt.CrtSetReportMode(m, msvcrt.CRTDBG_MODE_FILE)
msvcrt.CrtSetReportFile(m, msvcrt.CRTDBG_FILE_STDERR)
elif o in ('-F', '--forever'):
forever = True
elif o in ('-j', '--multiprocess'):
use_mp = int(a)
if use_mp <= 0:
try:
import multiprocessing
# Use all cores + extras for tests that like to sleep
use_mp = 2 + multiprocessing.cpu_count()
except (ImportError, NotImplementedError):
use_mp = 3
if use_mp == 1:
use_mp = None
elif o == '--header':
header = True
elif o == '--slaveargs':
args, kwargs = json.loads(a)
try:
result = runtest(*args, **kwargs)
except KeyboardInterrupt:
result = INTERRUPTED, ''
except BaseException as e:
traceback.print_exc()
result = CHILD_ERROR, str(e)
sys.stdout.flush()
print() # Force a newline (just in case)
print(json.dumps(result))
sys.exit(0)
elif o == '--testdir':
# CWD is replaced with a temporary dir before calling main(), so we
# join it with the saved CWD so it ends up where the user expects.
testdir = os.path.join(support.SAVEDCWD, a)
elif o == '--timeout':
if hasattr(faulthandler, 'dump_tracebacks_later'):
timeout = float(a)
if timeout <= 0:
timeout = None
else:
print("Warning: The timeout option requires "
"faulthandler.dump_tracebacks_later")
timeout = None
elif o == '--wait':
            input("Press Enter to continue...")
else:
print(("No handler for option {}. Please report this as a bug "
"at http://bugs.python.org.").format(o), file=sys.stderr)
sys.exit(1)
if single and fromfile:
usage("-s and -f don't go together!")
if use_mp and trace:
usage("-T and -j don't go together!")
if use_mp and findleaks:
usage("-l and -j don't go together!")
if use_mp and support.max_memuse:
usage("-M and -j don't go together!")
if failfast and not (verbose or verbose3):
usage("-G/--failfast needs either -v or -W")
good = []
bad = []
skipped = []
resource_denieds = []
environment_changed = []
interrupted = False
if findleaks:
try:
import gc
except ImportError:
print('No GC available, disabling findleaks.')
findleaks = False
else:
# Uncomment the line below to report garbage that is not
# freeable by reference counting alone. By default only
# garbage that is not collectable by the GC is reported.
#gc.set_debug(gc.DEBUG_SAVEALL)
found_garbage = []
if single:
filename = os.path.join(TEMPDIR, 'pynexttest')
try:
fp = open(filename, 'r')
next_test = fp.read().strip()
tests = [next_test]
fp.close()
except IOError:
pass
if fromfile:
tests = []
fp = open(os.path.join(support.SAVEDCWD, fromfile))
count_pat = re.compile(r'\[\s*\d+/\s*\d+\]')
for line in fp:
line = count_pat.sub('', line)
guts = line.split() # assuming no test has whitespace in its name
if guts and not guts[0].startswith('#'):
tests.extend(guts)
fp.close()
# Strip .py extensions.
removepy(args)
removepy(tests)
stdtests = STDTESTS[:]
nottests = NOTTESTS.copy()
if exclude:
for arg in args:
if arg in stdtests:
stdtests.remove(arg)
nottests.add(arg)
args = []
# For a partial run, we do not need to clutter the output.
if verbose or header or not (quiet or single != 1 or tests or args):
# Print basic platform information
print("==", platform.python_implementation(), *sys.version.split())
print("== ", platform.platform(aliased=True),
"%s-endian" % sys.byteorder)
print("== ", os.getcwd())
print("Testing with flags:", sys.flags)
# if testdir is set, then we are not running the python tests suite, so
# don't add default tests to be executed or skipped (pass empty values)
if testdir:
alltests = findtests(testdir, list(), set())
else:
alltests = findtests(testdir, stdtests, nottests)
selected = tests or args or alltests
if single:
first_selected = selected[0]
index_selected = alltests.index(first_selected)
if index_selected + single > len(alltests):
single = len(alltests) - index_selected
selected = alltests[index_selected:index_selected+single]
try:
next_single_test = alltests[index_selected+single]
except IndexError:
next_single_test = None
# Remove all the selected tests that precede start if it's set.
if start:
try:
del selected[:selected.index(start)]
except ValueError:
print("Couldn't find starting test (%s), using all tests" % start)
if randomize:
random.seed(random_seed)
print("Using random seed", random_seed)
random.shuffle(selected)
if trace:
import trace, tempfile
tracer = trace.Trace(ignoredirs=[sys.base_prefix, sys.base_exec_prefix,
tempfile.gettempdir()],
trace=False, count=True)
test_times = []
support.verbose = verbose # Tell tests to be moderately quiet
support.use_resources = use_resources
    # Take a snapshot: in Python 3, .keys() returns a live view, which would
    # make the later "module not in save_modules" check always false.
    save_modules = set(sys.modules)
def accumulate_result(test, result):
ok, test_time = result
test_times.append((test_time, test))
if ok == PASSED:
good.append(test)
elif ok == FAILED:
bad.append(test)
elif ok == ENV_CHANGED:
environment_changed.append(test)
elif ok == SKIPPED:
skipped.append(test)
elif ok == RESOURCE_DENIED:
skipped.append(test)
resource_denieds.append(test)
if forever:
def test_forever(tests=list(selected)):
while True:
for test in tests:
yield test
if bad:
return
tests = test_forever()
test_count = ''
test_count_width = 3
else:
tests = iter(selected)
test_count = '/{}'.format(len(selected))
test_count_width = len(test_count) - 1
if use_mp:
try:
from threading import Thread
except ImportError:
print("Multiprocess option requires thread support")
sys.exit(2)
from queue import Queue
from subprocess import Popen, PIPE
debug_output_pat = re.compile(r"\[\d+ refs\]$")
output = Queue()
pending = MultiprocessTests(tests)
opt_args = support.args_from_interpreter_flags()
base_cmd = [sys.executable] + opt_args + ['-m', 'test.regrtest']
def work():
# A worker thread.
try:
while True:
try:
test = next(pending)
except StopIteration:
output.put((None, None, None, None))
return
args_tuple = (
(test, verbose, quiet),
dict(huntrleaks=huntrleaks, use_resources=use_resources,
debug=debug, output_on_failure=verbose3,
timeout=timeout, failfast=failfast,
match_tests=match_tests)
)
# -E is needed by some tests, e.g. test_import
# Running the child from the same working directory ensures
# that TEMPDIR for the child is the same when
# sysconfig.is_python_build() is true. See issue 15300.
popen = Popen(base_cmd + ['--slaveargs', json.dumps(args_tuple)],
stdout=PIPE, stderr=PIPE,
universal_newlines=True,
close_fds=(os.name != 'nt'),
cwd=support.SAVEDCWD)
stdout, stderr = popen.communicate()
retcode = popen.wait()
# Strip last refcount output line if it exists, since it
# comes from the shutdown of the interpreter in the subcommand.
stderr = debug_output_pat.sub("", stderr)
stdout, _, result = stdout.strip().rpartition("\n")
if retcode != 0:
result = (CHILD_ERROR, "Exit code %s" % retcode)
output.put((test, stdout.rstrip(), stderr.rstrip(), result))
return
if not result:
output.put((None, None, None, None))
return
result = json.loads(result)
output.put((test, stdout.rstrip(), stderr.rstrip(), result))
except BaseException:
output.put((None, None, None, None))
raise
workers = [Thread(target=work) for i in range(use_mp)]
for worker in workers:
worker.start()
finished = 0
test_index = 1
try:
while finished < use_mp:
test, stdout, stderr, result = output.get()
if test is None:
finished += 1
continue
accumulate_result(test, result)
if not quiet:
fmt = "[{1:{0}}{2}/{3}] {4}" if bad else "[{1:{0}}{2}] {4}"
print(fmt.format(
test_count_width, test_index, test_count,
len(bad), test))
if stdout:
print(stdout)
if stderr:
print(stderr, file=sys.stderr)
sys.stdout.flush()
sys.stderr.flush()
if result[0] == INTERRUPTED:
raise KeyboardInterrupt
if result[0] == CHILD_ERROR:
raise Exception("Child error on {}: {}".format(test, result[1]))
test_index += 1
except KeyboardInterrupt:
interrupted = True
pending.interrupted = True
for worker in workers:
worker.join()
else:
for test_index, test in enumerate(tests, 1):
if not quiet:
fmt = "[{1:{0}}{2}/{3}] {4}" if bad else "[{1:{0}}{2}] {4}"
print(fmt.format(
test_count_width, test_index, test_count, len(bad), test))
sys.stdout.flush()
if trace:
                # If we're tracing code coverage, then we don't exit with status
                # based on a false return value from main.
tracer.runctx('runtest(test, verbose, quiet, timeout=timeout)',
globals=globals(), locals=vars())
else:
try:
result = runtest(test, verbose, quiet, huntrleaks, debug,
output_on_failure=verbose3,
timeout=timeout, failfast=failfast,
match_tests=match_tests)
accumulate_result(test, result)
except KeyboardInterrupt:
interrupted = True
break
except:
raise
if findleaks:
gc.collect()
if gc.garbage:
print("Warning: test created", len(gc.garbage), end=' ')
print("uncollectable object(s).")
# move the uncollectable objects somewhere so we don't see
# them again
found_garbage.extend(gc.garbage)
del gc.garbage[:]
            # Unload the newly imported modules (best-effort finalization).
            # Iterate over a copy: support.unload() removes entries from
            # sys.modules, which would otherwise raise RuntimeError for
            # mutating the dict while iterating its keys() view.
            for module in list(sys.modules):
                if module not in save_modules and module.startswith("test."):
                    support.unload(module)
if interrupted:
# print a newline after ^C
print()
print("Test suite interrupted by signal SIGINT.")
omitted = set(selected) - set(good) - set(bad) - set(skipped)
print(count(len(omitted), "test"), "omitted:")
printlist(omitted)
if good and not quiet:
if not bad and not skipped and not interrupted and len(good) > 1:
print("All", end=' ')
print(count(len(good), "test"), "OK.")
if print_slow:
test_times.sort(reverse=True)
print("10 slowest tests:")
for time, test in test_times[:10]:
print("%s: %.1fs" % (test, time))
if bad:
bad = sorted(set(bad) - set(environment_changed))
if bad:
print(count(len(bad), "test"), "failed:")
printlist(bad)
if environment_changed:
print("{} altered the execution environment:".format(
count(len(environment_changed), "test")))
printlist(environment_changed)
if skipped and not quiet:
print(count(len(skipped), "test"), "skipped:")
printlist(skipped)
e = _ExpectedSkips()
plat = sys.platform
if e.isvalid():
surprise = set(skipped) - e.getexpected() - set(resource_denieds)
if surprise:
                print(count(len(surprise), "skip"),
                      "unexpected on", plat + ":")
printlist(surprise)
else:
print("Those skips are all expected on", plat + ".")
else:
print("Ask someone to teach regrtest.py about which tests are")
print("expected to get skipped on", plat + ".")
if verbose2 and bad:
print("Re-running failed tests in verbose mode")
for test in bad:
print("Re-running test %r in verbose mode" % test)
sys.stdout.flush()
try:
verbose = True
ok = runtest(test, True, quiet, huntrleaks, debug, timeout=timeout)
except KeyboardInterrupt:
# print a newline separate from the ^C
print()
break
except:
raise
if single:
if next_single_test:
with open(filename, 'w') as fp:
fp.write(next_single_test + '\n')
else:
os.unlink(filename)
if trace:
r = tracer.results()
r.write_results(show_missing=True, summary=True, coverdir=coverdir)
if runleaks:
os.system("leaks %d" % os.getpid())
sys.exit(len(bad) > 0 or interrupted)
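The -R argument handling inside main() (a 'stab:run:fname' triple with defaults 5, 4 and "reflog.txt") can be sketched as a standalone function (the name is illustrative; regrtest does this inline):

```python
def parse_huntrleaks(a):
    """Parse -R's 'stab:run:fname' argument with regrtest's defaults."""
    parts = a.split(':')
    if len(parts) not in (2, 3):
        raise ValueError('-R takes 2 or 3 colon-separated arguments')
    # Empty fields fall back to the documented defaults.
    nwarmup = int(parts[0]) if parts[0] else 5
    ntracked = int(parts[1]) if parts[1] else 4
    fname = parts[2] if len(parts) == 3 and parts[2] else "reflog.txt"
    return nwarmup, ntracked, fname

print(parse_huntrleaks(':'))         # the minimal invocation '-R :'
print(parse_huntrleaks('3:2:x.txt'))
```

This shows why '-R :' is the minimal invocation: both fields are empty, so every default applies.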
# small set of tests to determine if we have a basically functioning interpreter
# (i.e. if any of these fail, then anything else is likely to follow)
STDTESTS = [
'test_grammar',
'test_opcodes',
'test_dict',
'test_builtin',
'test_exceptions',
'test_types',
'test_unittest',
'test_doctest',
'test_doctest2',
'test_support'
]
# set of tests that we don't want to be executed when using regrtest
NOTTESTS = set()
def findtests(testdir=None, stdtests=STDTESTS, nottests=NOTTESTS):
"""Return a list of all applicable test modules."""
testdir = findtestdir(testdir)
names = os.listdir(testdir)
tests = []
others = set(stdtests) | nottests
for name in names:
mod, ext = os.path.splitext(name)
if mod[:5] == "test_" and ext in (".py", "") and mod not in others:
tests.append(mod)
return stdtests + sorted(tests)
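The filename filter in findtests() boils down to: keep modules named test_* with a .py (or no) extension, excluding both stdtests and nottests, then prepend stdtests in their fixed order. A self-contained sketch of that filter (the helper name is illustrative):

```python
import os.path

def select_tests(names, stdtests=(), nottests=()):
    """Mimic findtests()'s filter over a list of directory entries."""
    others = set(stdtests) | set(nottests)
    tests = []
    for name in names:
        mod, ext = os.path.splitext(name)
        if mod.startswith("test_") and ext in (".py", "") and mod not in others:
            tests.append(mod)
    # stdtests keep their hand-written order; the rest are sorted.
    return list(stdtests) + sorted(tests)

print(select_tests(["test_os.py", "README", "test_grammar.py", "helper.py"],
                   stdtests=["test_grammar"]))  # ['test_grammar', 'test_os']
```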
# We do not use a generator so multiple threads can call next().
class MultiprocessTests(object):
"""A thread-safe iterator over tests for multiprocess mode."""
def __init__(self, tests):
self.interrupted = False
self.lock = threading.Lock()
self.tests = tests
def __iter__(self):
return self
def __next__(self):
with self.lock:
if self.interrupted:
raise StopIteration('tests interrupted')
return next(self.tests)
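A minimal usage sketch of the thread-safe iterator pattern used by MultiprocessTests: a lock-guarded __next__ guarantees that concurrent workers never receive the same item twice (class and worker names here are illustrative, not from regrtest):

```python
import threading

class SafeIter:
    """Lock-protected iterator wrapper, as in MultiprocessTests."""
    def __init__(self, items):
        self.lock = threading.Lock()
        self.it = iter(items)
    def __iter__(self):
        return self
    def __next__(self):
        with self.lock:
            return next(self.it)

pending = SafeIter(range(100))
seen = []
seen_lock = threading.Lock()

def worker():
    while True:
        try:
            item = next(pending)
        except StopIteration:
            return
        with seen_lock:
            seen.append(item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(seen), len(set(seen)))  # each item consumed exactly once
```

A plain generator would not be safe here: resuming the same generator frame from two threads at once raises ValueError, which is exactly why the class comment says a generator is avoided.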
def replace_stdout():
    """Set stdout's encoding error handler to backslashreplace (matching
    stderr's handler) to avoid UnicodeEncodeError when printing a traceback."""
import atexit
stdout = sys.stdout
sys.stdout = open(stdout.fileno(), 'w',
encoding=stdout.encoding,
errors="backslashreplace",
closefd=False,
newline='\n')
def restore_stdout():
sys.stdout.close()
sys.stdout = stdout
atexit.register(restore_stdout)
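The effect of the backslashreplace error handler installed above can be demonstrated in isolation: characters the target encoding cannot represent are emitted as backslash escape sequences instead of raising UnicodeEncodeError.

```python
text = "snowman: \u2603"

# Strict ASCII encoding fails on the non-ASCII character...
try:
    text.encode("ascii")
    failed = False
except UnicodeEncodeError:
    failed = True

# ...while backslashreplace degrades gracefully.
escaped = text.encode("ascii", errors="backslashreplace")
print(failed, escaped)  # True b'snowman: \\u2603'
```

This matters for regrtest because a traceback may quote arbitrary test data; with this handler, printing it can never itself crash with an encoding error.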
def runtest(test, verbose, quiet,
huntrleaks=False, debug=False, use_resources=None,
output_on_failure=False, failfast=False, match_tests=None,
timeout=None):
"""Run a single test.
test -- the name of the test
verbose -- if true, print more messages
quiet -- if true, don't print 'skipped' messages (probably redundant)
huntrleaks -- run multiple times to test for leaks; requires a debug
build; a triple corresponding to -R's three arguments
output_on_failure -- if true, display test output on failure
timeout -- dump the traceback and exit if a test takes more than
timeout seconds
Returns one of the test result constants:
INTERRUPTED KeyboardInterrupt when run under -j
RESOURCE_DENIED test skipped because resource denied
SKIPPED test skipped for some other reason
ENV_CHANGED test failed because it changed the execution environment
FAILED test failed
PASSED test passed
"""
if use_resources is not None:
support.use_resources = use_resources
use_timeout = (timeout is not None)
if use_timeout:
faulthandler.dump_tracebacks_later(timeout, exit=True)
try:
support.match_tests = match_tests
if failfast:
support.failfast = True
if output_on_failure:
support.verbose = True
# Reuse the same instance to all calls to runtest(). Some
# tests keep a reference to sys.stdout or sys.stderr
# (eg. test_argparse).
if runtest.stringio is None:
stream = io.StringIO()
runtest.stringio = stream
else:
stream = runtest.stringio
stream.seek(0)
stream.truncate()
orig_stdout = sys.stdout
orig_stderr = sys.stderr
try:
sys.stdout = stream
sys.stderr = stream
result = runtest_inner(test, verbose, quiet, huntrleaks,
debug, display_failure=False)
if result[0] == FAILED:
output = stream.getvalue()
orig_stderr.write(output)
orig_stderr.flush()
finally:
sys.stdout = orig_stdout
sys.stderr = orig_stderr
else:
support.verbose = verbose # Tell tests to be moderately quiet
result = runtest_inner(test, verbose, quiet, huntrleaks, debug,
display_failure=not verbose)
return result
finally:
if use_timeout:
faulthandler.cancel_dump_tracebacks_later()
cleanup_test_droppings(test, verbose)
runtest.stringio = None
# Unit tests are supposed to leave the execution environment unchanged
# once they complete. But sometimes tests have bugs, especially when
# tests fail, and the changes to environment go on to mess up other
# tests. This can cause issues with buildbot stability, since tests
# are run in random order and so problems may appear to come and go.
# There are a few things we can save and restore to mitigate this, and
# the following context manager handles this task.
class saved_test_environment:
"""Save bits of the test environment and restore them at block exit.
with saved_test_environment(testname, verbose, quiet):
#stuff
Unless quiet is True, a warning is printed to stderr if any of
the saved items was changed by the test. The attribute 'changed'
is initially False, but is set to True if a change is detected.
If verbose is more than 1, the before and after state of changed
items is also printed.
"""
changed = False
def __init__(self, testname, verbose=0, quiet=False):
self.testname = testname
self.verbose = verbose
self.quiet = quiet
# To add things to save and restore, add a name XXX to the resources list
# and add corresponding get_XXX/restore_XXX functions. get_XXX should
# return the value to be saved and compared against a second call to the
# get function when test execution completes. restore_XXX should accept
# the saved value and restore the resource using it. It will be called if
# and only if a change in the value is detected.
#
# Note: XXX will have any '.' replaced with '_' characters when determining
# the corresponding method names.
resources = ('sys.argv', 'cwd', 'sys.stdin', 'sys.stdout', 'sys.stderr',
'os.environ', 'sys.path', 'sys.path_hooks', '__import__',
'warnings.filters', 'asyncore.socket_map',
'logging._handlers', 'logging._handlerList', 'sys.gettrace',
'sys.warnoptions', 'threading._dangling',
'multiprocessing.process._dangling',
'sysconfig._CONFIG_VARS', 'sysconfig._INSTALL_SCHEMES',
'support.TESTFN',
)
def get_sys_argv(self):
return id(sys.argv), sys.argv, sys.argv[:]
def restore_sys_argv(self, saved_argv):
sys.argv = saved_argv[1]
sys.argv[:] = saved_argv[2]
def get_cwd(self):
return os.getcwd()
def restore_cwd(self, saved_cwd):
os.chdir(saved_cwd)
def get_sys_stdout(self):
return sys.stdout
def restore_sys_stdout(self, saved_stdout):
sys.stdout = saved_stdout
def get_sys_stderr(self):
return sys.stderr
def restore_sys_stderr(self, saved_stderr):
sys.stderr = saved_stderr
def get_sys_stdin(self):
return sys.stdin
def restore_sys_stdin(self, saved_stdin):
sys.stdin = saved_stdin
def get_os_environ(self):
return id(os.environ), os.environ, dict(os.environ)
def restore_os_environ(self, saved_environ):
os.environ = saved_environ[1]
os.environ.clear()
os.environ.update(saved_environ[2])
def get_sys_path(self):
return id(sys.path), sys.path, sys.path[:]
def restore_sys_path(self, saved_path):
sys.path = saved_path[1]
sys.path[:] = saved_path[2]
def get_sys_path_hooks(self):
return id(sys.path_hooks), sys.path_hooks, sys.path_hooks[:]
def restore_sys_path_hooks(self, saved_hooks):
sys.path_hooks = saved_hooks[1]
sys.path_hooks[:] = saved_hooks[2]
def get_sys_gettrace(self):
return sys.gettrace()
def restore_sys_gettrace(self, trace_fxn):
sys.settrace(trace_fxn)
def get___import__(self):
return builtins.__import__
def restore___import__(self, import_):
builtins.__import__ = import_
def get_warnings_filters(self):
return id(warnings.filters), warnings.filters, warnings.filters[:]
def restore_warnings_filters(self, saved_filters):
warnings.filters = saved_filters[1]
warnings.filters[:] = saved_filters[2]
def get_asyncore_socket_map(self):
asyncore = sys.modules.get('asyncore')
# XXX Making a copy keeps objects alive until __exit__ gets called.
        return asyncore.socket_map.copy() if asyncore else {}
def restore_asyncore_socket_map(self, saved_map):
asyncore = sys.modules.get('asyncore')
if asyncore is not None:
asyncore.close_all(ignore_all=True)
asyncore.socket_map.update(saved_map)
def get_shutil_archive_formats(self):
# we could call get_archives_formats() but that only returns the
# registry keys; we want to check the values too (the functions that
# are registered)
return shutil._ARCHIVE_FORMATS, shutil._ARCHIVE_FORMATS.copy()
def restore_shutil_archive_formats(self, saved):
shutil._ARCHIVE_FORMATS = saved[0]
shutil._ARCHIVE_FORMATS.clear()
shutil._ARCHIVE_FORMATS.update(saved[1])
def get_shutil_unpack_formats(self):
return shutil._UNPACK_FORMATS, shutil._UNPACK_FORMATS.copy()
def restore_shutil_unpack_formats(self, saved):
shutil._UNPACK_FORMATS = saved[0]
shutil._UNPACK_FORMATS.clear()
shutil._UNPACK_FORMATS.update(saved[1])
def get_logging__handlers(self):
# _handlers is a WeakValueDictionary
return id(logging._handlers), logging._handlers, logging._handlers.copy()
def restore_logging__handlers(self, saved_handlers):
# Can't easily revert the logging state
pass
def get_logging__handlerList(self):
# _handlerList is a list of weakrefs to handlers
return id(logging._handlerList), logging._handlerList, logging._handlerList[:]
def restore_logging__handlerList(self, saved_handlerList):
# Can't easily revert the logging state
pass
def get_sys_warnoptions(self):
return id(sys.warnoptions), sys.warnoptions, sys.warnoptions[:]
def restore_sys_warnoptions(self, saved_options):
sys.warnoptions = saved_options[1]
sys.warnoptions[:] = saved_options[2]
# Controlling dangling references to Thread objects can make it easier
# to track reference leaks.
def get_threading__dangling(self):
if not threading:
return None
# This copies the weakrefs without making any strong reference
return threading._dangling.copy()
def restore_threading__dangling(self, saved):
if not threading:
return
threading._dangling.clear()
threading._dangling.update(saved)
# Same for Process objects
def get_multiprocessing_process__dangling(self):
if not multiprocessing:
return None
# This copies the weakrefs without making any strong reference
return multiprocessing.process._dangling.copy()
def restore_multiprocessing_process__dangling(self, saved):
if not multiprocessing:
return
multiprocessing.process._dangling.clear()
multiprocessing.process._dangling.update(saved)
def get_sysconfig__CONFIG_VARS(self):
# make sure the dict is initialized
sysconfig.get_config_var('prefix')
return (id(sysconfig._CONFIG_VARS), sysconfig._CONFIG_VARS,
dict(sysconfig._CONFIG_VARS))
def restore_sysconfig__CONFIG_VARS(self, saved):
sysconfig._CONFIG_VARS = saved[1]
sysconfig._CONFIG_VARS.clear()
sysconfig._CONFIG_VARS.update(saved[2])
def get_sysconfig__INSTALL_SCHEMES(self):
return (id(sysconfig._INSTALL_SCHEMES), sysconfig._INSTALL_SCHEMES,
sysconfig._INSTALL_SCHEMES.copy())
def restore_sysconfig__INSTALL_SCHEMES(self, saved):
sysconfig._INSTALL_SCHEMES = saved[1]
sysconfig._INSTALL_SCHEMES.clear()
sysconfig._INSTALL_SCHEMES.update(saved[2])
def get_support_TESTFN(self):
if os.path.isfile(support.TESTFN):
result = 'f'
elif os.path.isdir(support.TESTFN):
result = 'd'
else:
result = None
return result
def restore_support_TESTFN(self, saved_value):
if saved_value is None:
if os.path.isfile(support.TESTFN):
os.unlink(support.TESTFN)
elif os.path.isdir(support.TESTFN):
shutil.rmtree(support.TESTFN)
def resource_info(self):
for name in self.resources:
method_suffix = name.replace('.', '_')
get_name = 'get_' + method_suffix
restore_name = 'restore_' + method_suffix
yield name, getattr(self, get_name), getattr(self, restore_name)
def __enter__(self):
self.saved_values = dict((name, get()) for name, get, restore
in self.resource_info())
return self
def __exit__(self, exc_type, exc_val, exc_tb):
saved_values = self.saved_values
del self.saved_values
for name, get, restore in self.resource_info():
current = get()
original = saved_values.pop(name)
# Check for changes to the resource's value
if current != original:
self.changed = True
restore(original)
if not self.quiet:
print("Warning -- {} was modified by {}".format(
name, self.testname),
file=sys.stderr)
if self.verbose > 1:
print(" Before: {}\n After: {} ".format(
original, current),
file=sys.stderr)
return False
def runtest_inner(test, verbose, quiet,
huntrleaks=False, debug=False, display_failure=True):
support.unload(test)
test_time = 0.0
refleak = False # True if the test leaked references.
try:
if test.startswith('test.'):
abstest = test
else:
# Always import it from the test package
abstest = 'test.' + test
with saved_test_environment(test, verbose, quiet) as environment:
start_time = time.time()
the_package = __import__(abstest, globals(), locals(), [])
the_module = getattr(the_package, test)
# If the test has a test_main, that will run the appropriate
# tests. If not, use normal unittest test loading.
test_runner = getattr(the_module, "test_main", None)
if test_runner is None:
tests = unittest.TestLoader().loadTestsFromModule(the_module)
test_runner = lambda: support.run_unittest(tests)
test_runner()
if huntrleaks:
refleak = dash_R(the_module, test, test_runner,
huntrleaks)
test_time = time.time() - start_time
except support.ResourceDenied as msg:
if not quiet:
print(test, "skipped --", msg)
sys.stdout.flush()
return RESOURCE_DENIED, test_time
except unittest.SkipTest as msg:
if not quiet:
print(test, "skipped --", msg)
sys.stdout.flush()
return SKIPPED, test_time
except KeyboardInterrupt:
raise
except support.TestFailed as msg:
if display_failure:
print("test", test, "failed --", msg, file=sys.stderr)
else:
print("test", test, "failed", file=sys.stderr)
sys.stderr.flush()
return FAILED, test_time
except:
msg = traceback.format_exc()
print("test", test, "crashed --", msg, file=sys.stderr)
sys.stderr.flush()
return FAILED, test_time
else:
if refleak:
return FAILED, test_time
if environment.changed:
return ENV_CHANGED, test_time
return PASSED, test_time
def cleanup_test_droppings(testname, verbose):
import shutil
import stat
import gc
# First kill any dangling references to open files etc.
# This can also issue some ResourceWarnings which would otherwise get
# triggered during the following test run, and possibly produce failures.
gc.collect()
# Try to clean up junk commonly left behind. While tests shouldn't leave
# any files or directories behind, when a test fails that can be tedious
# for it to arrange. The consequences can be especially nasty on Windows,
# since if a test leaves a file open, it cannot be deleted by name (while
# there's nothing we can do about that here either, we can display the
# name of the offending test, which is a real help).
for name in (support.TESTFN,
"db_home",
):
if not os.path.exists(name):
continue
if os.path.isdir(name):
kind, nuker = "directory", shutil.rmtree
elif os.path.isfile(name):
kind, nuker = "file", os.unlink
else:
raise SystemError("os.path says %r exists but is neither "
"directory nor file" % name)
if verbose:
print("%r left behind %s %r" % (testname, kind, name))
try:
# if we have chmod, fix possible permissions problems
# that might prevent cleanup
if (hasattr(os, 'chmod')):
os.chmod(name, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
nuker(name)
except Exception as msg:
print(("%r left behind %s %r and it couldn't be "
"removed: %s" % (testname, kind, name, msg)), file=sys.stderr)
def dash_R(the_module, test, indirect_test, huntrleaks):
"""Run a test multiple times, looking for reference leaks.
Returns:
False if the test didn't leak references; True if we detected refleaks.
"""
# This code is hackish and inelegant, but it seems to do the job.
import copyreg
import collections.abc
if not hasattr(sys, 'gettotalrefcount'):
raise Exception("Tracking reference leaks requires a debug build "
"of Python")
# Save current values for dash_R_cleanup() to restore.
fs = warnings.filters[:]
ps = copyreg.dispatch_table.copy()
pic = sys.path_importer_cache.copy()
try:
import zipimport
except ImportError:
zdc = None # Run unmodified on platforms without zipimport support
else:
zdc = zipimport._zip_directory_cache.copy()
abcs = {}
for abc in [getattr(collections.abc, a) for a in collections.abc.__all__]:
if not isabstract(abc):
continue
for obj in abc.__subclasses__() + [abc]:
abcs[obj] = obj._abc_registry.copy()
if indirect_test:
def run_the_test():
indirect_test()
else:
def run_the_test():
del sys.modules[the_module.__name__]
exec('import ' + the_module.__name__)
deltas = []
nwarmup, ntracked, fname = huntrleaks
fname = os.path.join(support.SAVEDCWD, fname)
repcount = nwarmup + ntracked
print("beginning", repcount, "repetitions", file=sys.stderr)
print(("1234567890"*(repcount//10 + 1))[:repcount], file=sys.stderr)
sys.stderr.flush()
dash_R_cleanup(fs, ps, pic, zdc, abcs)
for i in range(repcount):
rc_before = sys.gettotalrefcount()
run_the_test()
sys.stderr.write('.')
sys.stderr.flush()
dash_R_cleanup(fs, ps, pic, zdc, abcs)
rc_after = sys.gettotalrefcount()
if i >= nwarmup:
deltas.append(rc_after - rc_before)
print(file=sys.stderr)
if any(deltas):
msg = '%s leaked %s references, sum=%s' % (test, deltas, sum(deltas))
print(msg, file=sys.stderr)
sys.stderr.flush()
with open(fname, "a") as refrep:
print(msg, file=refrep)
refrep.flush()
return True
return False
def dash_R_cleanup(fs, ps, pic, zdc, abcs):
import gc, copyreg
import _strptime, linecache
import urllib.parse, urllib.request, mimetypes, doctest
import struct, filecmp, collections.abc
from distutils.dir_util import _path_created
from weakref import WeakSet
# Clear the warnings registry, so they can be displayed again
for mod in sys.modules.values():
if hasattr(mod, '__warningregistry__'):
del mod.__warningregistry__
# Restore some original values.
warnings.filters[:] = fs
copyreg.dispatch_table.clear()
copyreg.dispatch_table.update(ps)
sys.path_importer_cache.clear()
sys.path_importer_cache.update(pic)
try:
import zipimport
except ImportError:
pass # Run unmodified on platforms without zipimport support
else:
zipimport._zip_directory_cache.clear()
zipimport._zip_directory_cache.update(zdc)
# clear type cache
sys._clear_type_cache()
# Clear ABC registries, restoring previously saved ABC registries.
for abc in [getattr(collections.abc, a) for a in collections.abc.__all__]:
if not isabstract(abc):
continue
for obj in abc.__subclasses__() + [abc]:
obj._abc_registry = abcs.get(obj, WeakSet()).copy()
obj._abc_cache.clear()
obj._abc_negative_cache.clear()
# Flush standard output, so that buffered data is sent to the OS and
# associated Python objects are reclaimed.
for stream in (sys.stdout, sys.stderr, sys.__stdout__, sys.__stderr__):
if stream is not None:
stream.flush()
# Clear assorted module caches.
_path_created.clear()
re.purge()
_strptime._regex_cache.clear()
urllib.parse.clear_cache()
urllib.request.urlcleanup()
linecache.clearcache()
mimetypes._default_mime_types()
filecmp._cache.clear()
struct._clearcache()
doctest.master = None
try:
import ctypes
except ImportError:
# Don't worry about resetting the cache if ctypes is not supported
pass
else:
ctypes._reset_cache()
# Collect cyclic trash.
gc.collect()
def warm_caches():
# char cache
s = bytes(range(256))
for i in range(256):
s[i:i+1]
# unicode cache
x = [chr(i) for i in range(256)]
# int cache
x = list(range(-5, 257))
def findtestdir(path=None):
return path or os.path.dirname(__file__) or os.curdir
def removepy(names):
if not names:
return
for idx, name in enumerate(names):
basename, ext = os.path.splitext(name)
if ext == '.py':
names[idx] = basename
def count(n, word):
if n == 1:
return "%d %s" % (n, word)
else:
return "%d %ss" % (n, word)
def printlist(x, width=70, indent=4):
"""Print the elements of iterable x to stdout.
Optional arg width (default 70) is the maximum line length.
Optional arg indent (default 4) is the number of blanks with which to
begin each line.
"""
from textwrap import fill
blanks = ' ' * indent
# Print the sorted list: 'x' may be a '--random' list or a set()
print(fill(' '.join(str(elt) for elt in sorted(x)), width,
initial_indent=blanks, subsequent_indent=blanks))
# Map sys.platform to a string containing the basenames of tests
# expected to be skipped on that platform.
#
# Special cases:
# test_pep277
# The _ExpectedSkips constructor adds this to the set of expected
# skips if not os.path.supports_unicode_filenames.
# test_timeout
# Controlled by test_timeout.skip_expected. Requires the network
# resource and a socket module.
#
# Tests that are expected to be skipped everywhere except on one platform
# are also handled separately.
_expectations = (
('win32',
"""
test__locale
test_crypt
test_curses
test_dbm
test_devpoll
test_fcntl
test_fork1
test_epoll
test_dbm_gnu
test_dbm_ndbm
test_grp
test_ioctl
test_largefile
test_kqueue
test_openpty
test_ossaudiodev
test_pipes
test_poll
test_posix
test_pty
test_pwd
test_resource
test_signal
test_syslog
test_threadsignals
test_wait3
test_wait4
"""),
('linux',
"""
test_curses
test_devpoll
test_largefile
test_kqueue
test_ossaudiodev
"""),
('unixware',
"""
test_epoll
test_largefile
test_kqueue
test_minidom
test_openpty
test_pyexpat
test_sax
test_sundry
"""),
('openunix',
"""
test_epoll
test_largefile
test_kqueue
test_minidom
test_openpty
test_pyexpat
test_sax
test_sundry
"""),
('sco_sv',
"""
test_asynchat
test_fork1
test_epoll
test_gettext
test_largefile
test_locale
test_kqueue
test_minidom
test_openpty
test_pyexpat
test_queue
test_sax
test_sundry
test_thread
test_threaded_import
test_threadedtempfile
test_threading
"""),
('darwin',
"""
test__locale
test_curses
test_devpoll
test_epoll
test_dbm_gnu
test_gdb
test_largefile
test_locale
test_minidom
test_ossaudiodev
test_poll
"""),
('sunos',
"""
test_curses
test_dbm
test_epoll
test_kqueue
test_dbm_gnu
test_gzip
test_openpty
test_zipfile
test_zlib
"""),
('hp-ux',
"""
test_curses
test_epoll
test_dbm_gnu
test_gzip
test_largefile
test_locale
test_kqueue
test_minidom
test_openpty
test_pyexpat
test_sax
test_zipfile
test_zlib
"""),
('cygwin',
"""
test_curses
test_dbm
test_devpoll
test_epoll
test_ioctl
test_kqueue
test_largefile
test_locale
test_ossaudiodev
test_socketserver
"""),
('os2emx',
"""
test_audioop
test_curses
test_epoll
test_kqueue
test_largefile
test_mmap
test_openpty
test_ossaudiodev
test_pty
test_resource
test_signal
"""),
('freebsd',
"""
test_devpoll
test_epoll
test_dbm_gnu
test_locale
test_ossaudiodev
test_pep277
test_pty
test_socketserver
test_tcl
test_tk
test_ttk_guionly
test_ttk_textonly
test_timeout
test_urllibnet
test_multiprocessing
"""),
('aix',
"""
test_bz2
test_epoll
test_dbm_gnu
test_gzip
test_kqueue
test_ossaudiodev
test_tcl
test_tk
test_ttk_guionly
test_ttk_textonly
test_zipimport
test_zlib
"""),
('openbsd',
"""
test_ctypes
test_devpoll
test_epoll
test_dbm_gnu
test_locale
test_normalization
test_ossaudiodev
test_pep277
test_tcl
test_tk
test_ttk_guionly
test_ttk_textonly
test_multiprocessing
"""),
('netbsd',
"""
test_ctypes
test_curses
test_devpoll
test_epoll
test_dbm_gnu
test_locale
test_ossaudiodev
test_pep277
test_tcl
test_tk
test_ttk_guionly
test_ttk_textonly
test_multiprocessing
"""),
)
class _ExpectedSkips:
def __init__(self):
import os.path
from test import test_timeout
self.valid = False
expected = None
for item in _expectations:
if sys.platform.startswith(item[0]):
expected = item[1]
break
if expected is not None:
self.expected = set(expected.split())
# These are broken tests, for now skipped on every platform.
# XXX Fix these!
self.expected.add('test_nis')
# expected to be skipped on every platform, even Linux
if not os.path.supports_unicode_filenames:
self.expected.add('test_pep277')
# doctest, profile and cProfile tests fail when the codec for the
# fs encoding isn't built in because PyUnicode_Decode() adds two
# calls into Python.
encs = ("utf-8", "latin-1", "ascii", "mbcs", "utf-16", "utf-32")
if sys.getfilesystemencoding().lower() not in encs:
self.expected.add('test_profile')
self.expected.add('test_cProfile')
self.expected.add('test_doctest')
if test_timeout.skip_expected:
self.expected.add('test_timeout')
if sys.platform != "win32":
# test_sqlite is only reliable on Windows where the library
# is distributed with Python
WIN_ONLY = {"test_unicode_file", "test_winreg",
"test_winsound", "test_startfile",
"test_sqlite", "test_msilib"}
self.expected |= WIN_ONLY
if sys.platform != 'sunos5':
self.expected.add('test_nis')
if support.python_is_optimized():
self.expected.add("test_gdb")
self.valid = True
def isvalid(self):
"Return true iff _ExpectedSkips knows about the current platform."
return self.valid
def getexpected(self):
"""Return set of test names we expect to skip on current platform.
self.isvalid() must be true.
"""
assert self.isvalid()
return self.expected
def _make_temp_dir_for_build(TEMPDIR):
# When tests are run from the Python build directory, it is best practice
# to keep the test files in a subfolder. It eases the cleanup of leftover
# files using command "make distclean".
if sysconfig.is_python_build():
TEMPDIR = os.path.join(sysconfig.get_config_var('srcdir'), 'build')
TEMPDIR = os.path.abspath(TEMPDIR)
try:
os.mkdir(TEMPDIR)
except FileExistsError:
pass
# Define a writable temp dir that will be used as cwd while running
# the tests. The name of the dir includes the pid to allow parallel
# testing (see the -j option).
TESTCWD = 'test_python_{}'.format(os.getpid())
TESTCWD = os.path.join(TEMPDIR, TESTCWD)
return TEMPDIR, TESTCWD
if __name__ == '__main__':
# Remove regrtest.py's own directory from the module search path. Despite
# the elimination of implicit relative imports, this is still needed to
# ensure that submodules of the test package do not inappropriately appear
# as top-level modules even when people (or buildbots!) invoke regrtest.py
# directly instead of using the -m switch
mydir = os.path.abspath(os.path.normpath(os.path.dirname(sys.argv[0])))
i = len(sys.path)
while i >= 0:
i -= 1
if os.path.abspath(os.path.normpath(sys.path[i])) == mydir:
del sys.path[i]
# findtestdir() gets the dirname out of __file__, so we have to make it
# absolute before changing the working directory.
# For example __file__ may be relative when running trace or profile.
# See issue #9323.
__file__ = os.path.abspath(__file__)
# sanity check
assert __file__ == os.path.abspath(sys.argv[0])
TEMPDIR, TESTCWD = _make_temp_dir_for_build(TEMPDIR)
# Run the tests in a context manager that temporary changes the CWD to a
# temporary and writable directory. If it's not possible to create or
# change the CWD, the original CWD will be used. The original CWD is
# available from support.SAVEDCWD.
with support.temp_cwd(TESTCWD, quiet=True):
main()
| gpl-2.0 |
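The `saved_test_environment` class above snapshots global resources on entry and restores them on exit, discovering paired `get_*`/`restore_*` methods by name. A minimal standalone sketch of that pattern (class and method names here are illustrative, not taken from regrtest; it tracks only `sys.argv`):

```python
import sys

class SavedEnvironment:
    """Context manager that snapshots named resources on entry and,
    regrtest-style, flags and restores them on exit if they changed."""
    resources = ('sys_argv',)
    changed = False

    def get_sys_argv(self):
        # (id, live object, shallow copy) mirrors regrtest's triples
        return id(sys.argv), sys.argv, sys.argv[:]

    def restore_sys_argv(self, saved):
        sys.argv = saved[1]
        sys.argv[:] = saved[2]

    def resource_info(self):
        # Discover get_*/restore_* pairs by naming convention
        for name in self.resources:
            yield (name,
                   getattr(self, 'get_' + name),
                   getattr(self, 'restore_' + name))

    def __enter__(self):
        self.saved = {name: get() for name, get, _ in self.resource_info()}
        return self

    def __exit__(self, *exc):
        for name, get, restore in self.resource_info():
            original = self.saved.pop(name)
            if get() != original:
                self.changed = True
                restore(original)
        return False
```

Because the snapshot triple includes a shallow copy, in-place mutation of the live object is detected even though its `id` is unchanged, which is exactly why regrtest stores `(id(obj), obj, obj[:])` rather than just the object.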
orgito/ansible | lib/ansible/modules/network/vyos/vyos_command.py | 41 | 7417 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: vyos_command
version_added: "2.2"
author: "Nathaniel Case (@qalthos)"
short_description: Run one or more commands on VyOS devices
description:
- The command module allows running one or more commands on remote
devices running VyOS. This module can also be introspected
to validate key parameters before returning successfully. If the
conditional statements are not met in the wait period, the task
fails.
- Certain C(show) commands in VyOS produce many lines of output and
use a custom pager that can cause this module to hang. If the
value of the environment variable C(ANSIBLE_VYOS_TERMINAL_LENGTH)
is not set, the default number of 10000 is used.
extends_documentation_fragment: vyos
options:
commands:
description:
- The ordered set of commands to execute on the remote device
running VyOS. The output from the command execution is
returned to the playbook. If the I(wait_for) argument is
provided, the module is not returned until the condition is
satisfied or the number of retries has been exceeded.
required: true
wait_for:
description:
- Specifies what to evaluate from the output of the command
and what conditionals to apply. This argument will cause
the task to wait for a particular conditional to be true
before moving forward. If the conditional is not true
by the configured I(retries), the task fails. See examples.
aliases: ['waitfor']
match:
description:
- The I(match) argument is used in conjunction with the
I(wait_for) argument to specify the match policy. Valid
values are C(all) or C(any). If the value is set to C(all)
then all conditionals in the wait_for must be satisfied. If
the value is set to C(any) then only one of the values must be
satisfied.
default: all
choices: ['any', 'all']
retries:
description:
- Specifies the number of retries a command should be tried
before it is considered failed. The command is run on the
target device every retry and evaluated against the I(wait_for)
conditionals.
default: 10
interval:
description:
- Configures the interval in seconds to wait between I(retries)
of the command. If the command does not pass the specified
conditions, the interval indicates how long to wait before
trying the command again.
default: 1
notes:
- Tested against VYOS 1.1.7
- Running C(show system boot-messages all) will cause the module to hang since
VyOS is using a custom pager setting to display the output of that command.
- If a command sent to the device requires answering a prompt, it is possible
to pass a dict containing I(command), I(answer) and I(prompt). See examples.
"""
EXAMPLES = """
tasks:
- name: show configuration on ethernet devices eth0 and eth1
vyos_command:
commands:
- show interfaces ethernet {{ item }}
with_items:
- eth0
- eth1
- name: run multiple commands and check if version output contains specific version string
vyos_command:
commands:
- show version
- show hardware cpu
wait_for:
- "result[0] contains 'VyOS 1.1.7'"
- name: run command that requires answering a prompt
vyos_command:
commands:
- command: 'rollback 1'
prompt: 'Proceed with reboot? [confirm][y]'
answer: y
"""
RETURN = """
stdout:
description: The set of responses from the commands
returned: always apart from low level errors (such as action plugin)
type: list
sample: ['...', '...']
stdout_lines:
description: The value of stdout split into a list
returned: always
type: list
sample: [['...', '...'], ['...'], ['...']]
failed_conditions:
description: The list of conditionals that have failed
returned: failed
type: list
sample: ['...', '...']
warnings:
description: The list of warnings (if any) generated by module based on arguments
returned: always
type: list
sample: ['...', '...']
"""
import time
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.common.parsing import Conditional
from ansible.module_utils.network.common.utils import transform_commands, to_lines
from ansible.module_utils.network.vyos.vyos import run_commands
from ansible.module_utils.network.vyos.vyos import vyos_argument_spec
def parse_commands(module, warnings):
commands = transform_commands(module)
if module.check_mode:
for item in list(commands):
if not item['command'].startswith('show'):
warnings.append(
'Only show commands are supported when using check mode, not '
'executing %s' % item['command']
)
commands.remove(item)
return commands
def main():
spec = dict(
commands=dict(type='list', required=True),
wait_for=dict(type='list', aliases=['waitfor']),
match=dict(default='all', choices=['all', 'any']),
retries=dict(default=10, type='int'),
interval=dict(default=1, type='int')
)
spec.update(vyos_argument_spec)
module = AnsibleModule(argument_spec=spec, supports_check_mode=True)
warnings = list()
result = {'changed': False, 'warnings': warnings}
commands = parse_commands(module, warnings)
wait_for = module.params['wait_for'] or list()
try:
conditionals = [Conditional(c) for c in wait_for]
except AttributeError as exc:
module.fail_json(msg=to_text(exc))
retries = module.params['retries']
interval = module.params['interval']
match = module.params['match']
for _ in range(retries):
responses = run_commands(module, commands)
for item in list(conditionals):
if item(responses):
if match == 'any':
conditionals = list()
break
conditionals.remove(item)
if not conditionals:
break
time.sleep(interval)
if conditionals:
failed_conditions = [item.raw for item in conditionals]
msg = 'One or more conditional statements have not been satisfied'
module.fail_json(msg=msg, failed_conditions=failed_conditions)
result.update({
'stdout': responses,
'stdout_lines': list(to_lines(responses)),
})
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
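The heart of `vyos_command.main()` is the retry loop: re-run the commands, evaluate each remaining conditional against the responses, and stop once all (or, with `match=any`, one) are satisfied. A dependency-free sketch of that loop follows; the function name and the fake device are hypothetical, and real `Conditional` objects parse strings like `"result[0] contains 'VyOS 1.1.7'"` rather than being plain callables:

```python
import time

def wait_for(run, conditionals, retries=10, interval=0, match='all'):
    """Re-run `run()` until the conditionals pass, mirroring the retry
    loop in vyos_command.main(). Returns the list of failed checks."""
    conditionals = list(conditionals)
    for _ in range(retries):
        responses = run()
        for cond in list(conditionals):
            if cond(responses):
                if match == 'any':
                    return []          # one success is enough
                conditionals.remove(cond)
        if not conditionals:
            break
        time.sleep(interval)
    return conditionals                # non-empty means failure

# Example: a fake device that only reports the version on its third poll
calls = {'n': 0}
def fake_run():
    calls['n'] += 1
    return ['VyOS 1.1.7' if calls['n'] >= 3 else 'booting']

failed = wait_for(fake_run, [lambda r: '1.1.7' in r[0]], retries=5)
```

As in the module, a non-empty return value maps to `fail_json` with the raw text of each unsatisfied conditional.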
ryfeus/lambda-packs | Tensorflow/source/tensorflow/contrib/keras/api/keras/initializers/__init__.py | 74 | 2387 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras built-in initializers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Initializer functions / callable classes.
from tensorflow.python.keras._impl.keras.initializers import Constant
from tensorflow.python.keras._impl.keras.initializers import Identity
from tensorflow.python.keras._impl.keras.initializers import Initializer
from tensorflow.python.keras._impl.keras.initializers import Ones
from tensorflow.python.keras._impl.keras.initializers import Orthogonal
from tensorflow.python.keras._impl.keras.initializers import RandomNormal
from tensorflow.python.keras._impl.keras.initializers import RandomUniform
from tensorflow.python.keras._impl.keras.initializers import TruncatedNormal
from tensorflow.python.keras._impl.keras.initializers import VarianceScaling
from tensorflow.python.keras._impl.keras.initializers import Zeros
# Functional interface.
# pylint: disable=g-bad-import-order
from tensorflow.python.keras._impl.keras.initializers import glorot_normal
from tensorflow.python.keras._impl.keras.initializers import glorot_uniform
from tensorflow.python.keras._impl.keras.initializers import he_normal
from tensorflow.python.keras._impl.keras.initializers import he_uniform
from tensorflow.python.keras._impl.keras.initializers import lecun_normal
from tensorflow.python.keras._impl.keras.initializers import lecun_uniform
# Auxiliary utils.
from tensorflow.python.keras._impl.keras.initializers import deserialize
from tensorflow.python.keras._impl.keras.initializers import serialize
from tensorflow.python.keras._impl.keras.initializers import get
del absolute_import
del division
del print_function
| mit |
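The initializers package above exposes, alongside the classes, a functional interface (`get`, `serialize`, `deserialize`) that resolves an identifier — a string name, a config dict, or an instance — to an initializer object. A much-simplified sketch of that dispatch pattern (this is not TensorFlow's actual implementation; the registry, decorator, and `Zeros` class here are illustrative stand-ins):

```python
# Simplified name-based registry mimicking the shape of
# keras.initializers.get()/serialize()/deserialize().
_REGISTRY = {}

def register(name):
    def deco(cls):
        _REGISTRY[name] = cls
        return cls
    return deco

@register('zeros')
class Zeros:
    def __call__(self, shape):
        # Return a nested list of zeros with the requested 2-D shape
        return [[0.0] * shape[1] for _ in range(shape[0])]

def serialize(obj):
    # Real Keras also captures the instance's config kwargs here
    return {'class_name': type(obj).__name__.lower(), 'config': {}}

def deserialize(config):
    return _REGISTRY[config['class_name']](**config['config'])

def get(identifier):
    if isinstance(identifier, str):
        return _REGISTRY[identifier]()
    if isinstance(identifier, dict):
        return deserialize(identifier)
    return identifier          # already an initializer instance

init = get('zeros')
```

The three-way `get` is what lets Keras layers accept `kernel_initializer='zeros'`, a config dict from a saved model, or a ready-made object interchangeably.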
xxd3vin/spp-sdk | opt/Python27/Lib/site-packages/numpy/core/memmap.py | 22 | 9611 | __all__ = ['memmap']
import warnings
from numeric import uint8, ndarray, dtype
import sys
from numpy.compat import asbytes
dtypedescr = dtype
valid_filemodes = ["r", "c", "r+", "w+"]
writeable_filemodes = ["r+","w+"]
mode_equivalents = {
"readonly":"r",
"copyonwrite":"c",
"readwrite":"r+",
"write":"w+"
}
class memmap(ndarray):
"""
Create a memory-map to an array stored in a *binary* file on disk.
Memory-mapped files are used for accessing small segments of large files
    on disk, without reading the entire file into memory. NumPy's
    memmaps are array-like objects. This differs from Python's ``mmap``
module, which uses file-like objects.
Parameters
----------
filename : str or file-like object
The file name or file object to be used as the array data buffer.
dtype : data-type, optional
The data-type used to interpret the file contents.
Default is `uint8`.
mode : {'r+', 'r', 'w+', 'c'}, optional
The file is opened in this mode:
+------+-------------------------------------------------------------+
| 'r' | Open existing file for reading only. |
+------+-------------------------------------------------------------+
| 'r+' | Open existing file for reading and writing. |
+------+-------------------------------------------------------------+
| 'w+' | Create or overwrite existing file for reading and writing. |
+------+-------------------------------------------------------------+
| 'c' | Copy-on-write: assignments affect data in memory, but |
| | changes are not saved to disk. The file on disk is |
| | read-only. |
+------+-------------------------------------------------------------+
Default is 'r+'.
offset : int, optional
In the file, array data starts at this offset. Since `offset` is
measured in bytes, it should be a multiple of the byte-size of
`dtype`. Requires ``shape=None``. The default is 0.
shape : tuple, optional
The desired shape of the array. By default, the returned array will be
1-D with the number of elements determined by file size and data-type.
order : {'C', 'F'}, optional
Specify the order of the ndarray memory layout: C (row-major) or
Fortran (column-major). This only has an effect if the shape is
greater than 1-D. The default order is 'C'.
Attributes
----------
filename : str
Path to the mapped file.
offset : int
Offset position in the file.
mode : str
File mode.
Methods
-------
close
Close the memmap file.
flush
Flush any changes in memory to file on disk.
When you delete a memmap object, flush is called first to write
changes to disk before removing the object.
Notes
-----
The memmap object can be used anywhere an ndarray is accepted.
Given a memmap ``fp``, ``isinstance(fp, numpy.ndarray)`` returns
``True``.
Memory-mapped arrays use the Python memory-map object which
(prior to Python 2.5) does not allow files to be larger than a
certain size depending on the platform. This size is always < 2GB
even on 64-bit systems.
Examples
--------
>>> data = np.arange(12, dtype='float32')
>>> data.resize((3,4))
This example uses a temporary file so that doctest doesn't write
files to your directory. You would use a 'normal' filename.
>>> from tempfile import mkdtemp
>>> import os.path as path
>>> filename = path.join(mkdtemp(), 'newfile.dat')
Create a memmap with dtype and shape that matches our data:
>>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4))
>>> fp
memmap([[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]], dtype=float32)
Write data to memmap array:
>>> fp[:] = data[:]
>>> fp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fp.filename == path.abspath(filename)
True
Deletion flushes memory changes to disk before removing the object:
>>> del fp
Load the memmap and verify data was stored:
>>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> newfp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Read-only memmap:
>>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> fpr.flags.writeable
False
Copy-on-write memmap:
>>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4))
>>> fpc.flags.writeable
True
It's possible to assign to copy-on-write array, but values are only
written into the memory copy of the array, and not written to disk:
>>> fpc
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fpc[0,:] = 0
>>> fpc
memmap([[ 0., 0., 0., 0.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
File on disk is unchanged:
>>> fpr
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Offset into a memmap:
>>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16)
>>> fpo
memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32)
"""
__array_priority__ = -100.0
def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0,
shape=None, order='C'):
# Import here to minimize 'import numpy' overhead
import mmap
import os.path
try:
mode = mode_equivalents[mode]
except KeyError:
if mode not in valid_filemodes:
raise ValueError("mode must be one of %s" % \
(valid_filemodes + mode_equivalents.keys()))
if hasattr(filename,'read'):
fid = filename
else:
fid = open(filename, (mode == 'c' and 'r' or mode)+'b')
if (mode == 'w+') and shape is None:
raise ValueError, "shape must be given"
fid.seek(0, 2)
flen = fid.tell()
descr = dtypedescr(dtype)
_dbytes = descr.itemsize
if shape is None:
bytes = flen - offset
if (bytes % _dbytes):
fid.close()
raise ValueError, "Size of available data is not a "\
"multiple of data-type size."
size = bytes // _dbytes
shape = (size,)
else:
if not isinstance(shape, tuple):
shape = (shape,)
size = 1
for k in shape:
size *= k
bytes = long(offset + size*_dbytes)
if mode == 'w+' or (mode == 'r+' and flen < bytes):
fid.seek(bytes - 1, 0)
fid.write(asbytes('\0'))
fid.flush()
if mode == 'c':
acc = mmap.ACCESS_COPY
elif mode == 'r':
acc = mmap.ACCESS_READ
else:
acc = mmap.ACCESS_WRITE
if sys.version_info[:2] >= (2,6):
# The offset keyword in mmap.mmap needs Python >= 2.6
start = offset - offset % mmap.ALLOCATIONGRANULARITY
bytes -= start
offset -= start
mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)
else:
mm = mmap.mmap(fid.fileno(), bytes, access=acc)
self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
offset=offset, order=order)
self._mmap = mm
self.offset = offset
self.mode = mode
if isinstance(filename, basestring):
self.filename = os.path.abspath(filename)
elif hasattr(filename, "name"):
self.filename = os.path.abspath(filename.name)
return self
def __array_finalize__(self, obj):
if hasattr(obj, '_mmap'):
self._mmap = obj._mmap
self.filename = obj.filename
self.offset = obj.offset
self.mode = obj.mode
else:
self._mmap = None
def flush(self):
"""
Write any changes in the array to the file on disk.
For further information, see `memmap`.
Parameters
----------
None
See Also
--------
memmap
"""
if self._mmap is not None:
self._mmap.flush()
def _close(self):
"""Close the memmap file. Only do this when deleting the object."""
if self.base is self._mmap:
# The python mmap probably causes flush on close, but
# we put this here for safety
self._mmap.flush()
self._mmap.close()
self._mmap = None
def __del__(self):
# We first check if we are the owner of the mmap, rather than
# a view, so deleting a view does not call _close
# on the parent mmap
if self._mmap is self.base:
try:
# First run tell() to see whether file is open
self._mmap.tell()
except ValueError:
pass
else:
self._close()
| mit |
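The offset handling in the `__new__` method above rounds the requested offset down to a multiple of `mmap.ALLOCATIONGRANULARITY` before mapping, then indexes past the padding. That alignment trick can be sketched in isolation with only the standard library; the helper name and the demo file are illustrative, not part of numpy's API:

```python
import mmap
import os
import tempfile

def map_with_offset(path, offset, length):
    """Map `length` bytes starting at `offset`, aligning the mmap start
    down to ALLOCATIONGRANULARITY as mmap.mmap requires."""
    start = offset - offset % mmap.ALLOCATIONGRANULARITY
    adjust = offset - start  # padding between the aligned start and the data
    with open(path, "rb") as f:
        # mmap duplicates the descriptor, so closing f afterwards is fine.
        mm = mmap.mmap(f.fileno(), adjust + length,
                       access=mmap.ACCESS_READ, offset=start)
    return mm, adjust

# Write a small file and read 4 bytes starting at byte offset 16.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(bytes(range(64)))
mm, adjust = map_with_offset(path, 16, 4)
data = mm[adjust:adjust + 4]
mm.close()
os.remove(path)
print(list(data))  # [16, 17, 18, 19]
```

Because 16 is smaller than the allocation granularity, the map actually starts at byte 0 and `adjust` compensates — the same reason the original code subtracts `start` from both `bytes` and `offset`.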
malkavi/Flexget | flexget/tests/api_tests/test_variables_api.py | 3 | 1727 | from flexget.components.variables.variables import Variables
from flexget.manager import Session
from flexget.utils import json
class TestVariablesAPI:
config = 'tasks: {}'
variables_dict = {'test_variable_db': True}
def test_variables_get(self, api_client):
with Session() as session:
s = Variables(variables=self.variables_dict)
session.add(s)
rsp = api_client.get('/variables/')
assert rsp.status_code == 200, 'Response code is %s' % rsp.status_code
assert json.loads(rsp.get_data(as_text=True)) == self.variables_dict
def test_variables_put(self, api_client):
rsp = api_client.get('/variables/')
assert rsp.status_code == 200, 'Response code is %s' % rsp.status_code
assert json.loads(rsp.get_data(as_text=True)) == {}
rsp = api_client.json_put('/variables/', data=json.dumps(self.variables_dict))
assert rsp.status_code == 201, 'Response code is %s' % rsp.status_code
assert json.loads(rsp.get_data(as_text=True)) == self.variables_dict
rsp = api_client.get('/variables/')
assert rsp.status_code == 200, 'Response code is %s' % rsp.status_code
assert json.loads(rsp.get_data(as_text=True)) == self.variables_dict
def test_variables_patch(self, api_client):
data = {'a': 'b', 'c': 'd'}
api_client.json_put('/variables/', data=json.dumps(data))
new_data = {'a': [1, 2, 3], 'foo': 'bar'}
rsp = api_client.json_patch('/variables/', data=json.dumps(new_data))
assert rsp.status_code == 200, 'Response code is %s' % rsp.status_code
assert json.loads(rsp.get_data(as_text=True)) == {'a': [1, 2, 3], 'foo': 'bar', 'c': 'd'}
| mit |
chouseknecht/ansible | test/units/modules/network/f5/test_bigip_profile_oneconnect.py | 22 | 3675 | # -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
import sys
if sys.version_info < (2, 7):
pytestmark = pytest.mark.skip("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
try:
from library.modules.bigip_profile_oneconnect import ApiParameters
from library.modules.bigip_profile_oneconnect import ModuleParameters
from library.modules.bigip_profile_oneconnect import ModuleManager
from library.modules.bigip_profile_oneconnect import ArgumentSpec
# In Ansible 2.8, Ansible changed import paths.
from test.units.compat import unittest
from test.units.compat.mock import Mock
from test.units.modules.utils import set_module_args
except ImportError:
from ansible.modules.network.f5.bigip_profile_oneconnect import ApiParameters
from ansible.modules.network.f5.bigip_profile_oneconnect import ModuleParameters
from ansible.modules.network.f5.bigip_profile_oneconnect import ModuleManager
from ansible.modules.network.f5.bigip_profile_oneconnect import ArgumentSpec
# Ansible 2.8 imports
from units.compat import unittest
from units.compat.mock import Mock
from units.modules.utils import set_module_args
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_module_parameters(self):
args = dict(
name='foo',
parent='bar',
maximum_size=100,
maximum_age=200,
maximum_reuse=300,
idle_timeout_override=20,
limit_type='strict'
)
p = ModuleParameters(params=args)
assert p.name == 'foo'
assert p.parent == '/Common/bar'
assert p.maximum_size == 100
assert p.maximum_age == 200
assert p.maximum_reuse == 300
assert p.idle_timeout_override == 20
assert p.limit_type == 'strict'
def test_api_parameters(self):
args = load_fixture('load_ltm_profile_oneconnect_1.json')
p = ApiParameters(params=args)
assert p.name == 'oneconnect'
assert p.maximum_reuse == 1000
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
def test_create(self, *args):
# Configure the arguments that would be sent to the Ansible module
set_module_args(dict(
name='foo',
parent='bar',
maximum_reuse=1000,
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode
)
mm = ModuleManager(module=module)
# Override methods to force specific logic in the module to happen
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['maximum_reuse'] == 1000
| gpl-3.0 |
meteorcloudy/bazel | src/create_embedded_tools.py | 1 | 5999 | # pylint: disable=g-direct-third-party-import
# pylint: disable=g-bad-file-header
# Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http:#www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the embedded_tools.zip that is part of the Bazel binary."""
import contextlib
import fnmatch
import os
import os.path
import re
import sys
import zipfile
from src.create_embedded_tools_lib import copy_tar_to_zip
from src.create_embedded_tools_lib import copy_zip_to_zip
from src.create_embedded_tools_lib import is_executable
output_paths = [
('*tools/jdk/BUILD.tools', lambda x: 'tools/jdk/BUILD'),
('*tools/build_defs/repo/BUILD.repo',
lambda x: 'tools/build_defs/repo/BUILD'),
('*tools/platforms/BUILD.tools', lambda x: 'platforms/BUILD'),
('*tools/platforms/*', lambda x: 'platforms/' + os.path.basename(x)),
('*tools/cpp/BUILD.tools', lambda x: 'tools/cpp/BUILD'),
('*tools/cpp/runfiles/generated_*',
lambda x: 'tools/cpp/runfiles/' + os.path.basename(x)[len('generated_'):]),
('*BUILD.java_langtools', lambda x: 'third_party/java/jdk/langtools/BUILD'),
('*launcher.exe', lambda x: 'tools/launcher/launcher.exe'),
('*def_parser.exe', lambda x: 'tools/def_parser/def_parser.exe'),
('*zipper.exe', lambda x: 'tools/zip/zipper/zipper.exe'),
('*zipper', lambda x: 'tools/zip/zipper/zipper'),
('*src/objc_tools/*',
lambda x: 'tools/objc/precomp_' + os.path.basename(x)),
('*xcode*StdRedirect.dylib', lambda x: 'tools/objc/StdRedirect.dylib'),
('*xcode*make_hashed_objlist.py',
lambda x: 'tools/objc/make_hashed_objlist.py'),
('*xcode*realpath', lambda x: 'tools/objc/realpath'),
('*xcode*xcode-locator', lambda x: 'tools/objc/xcode-locator'),
('*src/tools/xcode/*', lambda x: 'tools/objc/' + os.path.basename(x)),
# --experimental_sibling_repository_layout=false
('*external/openjdk_*/file/*.tar.gz', lambda x: 'jdk.tar.gz'),
('*external/openjdk_*/file/*.zip', lambda x: 'jdk.zip'),
# --experimental_sibling_repository_layout=true
('*openjdk_*/file/*.tar.gz', lambda x: 'jdk.tar.gz'),
('*openjdk_*/file/*.zip', lambda x: 'jdk.zip'),
('*src/minimal_jdk.tar.gz', lambda x: 'jdk.tar.gz'),
('*src/minimal_jdk.zip', lambda x: 'jdk.zip'),
('*.bzl.tools', lambda x: x[:-6]),
('*', lambda x: re.sub(r'^.*bazel-out/[^/]*/bin/', '', x, count=1)),
]
def get_output_path(path):
for pattern, transformer in output_paths:
if fnmatch.fnmatch(path.replace('\\', '/'), pattern):
# BUILD.tools are stored as BUILD files.
return transformer(path).replace('/BUILD.tools', '/BUILD')
def get_input_files(argsfile):
"""Returns a dict of archive_file to input_file.
This describes the files that should be put into the generated archive.
Args:
argsfile: The file containing the list of input files.
Raises:
ValueError: When two input files map to the same output file.
"""
with open(argsfile, 'r') as f:
input_files = sorted(set(x.strip() for x in f.readlines()))
result = {}
for input_file in input_files:
# If we have both a BUILD and a BUILD.tools file, take the latter only.
if (os.path.basename(input_file) == 'BUILD' and
input_file + '.tools' in input_files):
continue
# It's an error to have two files map to the same output file, because the
# result is hard to predict and can easily be wrong.
output_path = get_output_path(input_file)
if output_path in result:
raise ValueError(
'Duplicate output file: Both {} and {} map to {}'.format(
result[output_path], input_file, output_path))
result[output_path] = input_file
return result
def copy_jdk_into_archive(output_zip, archive_file, input_file):
"""Extract the JDK and adds it to the archive under jdk/*."""
def _replace_dirname(filename):
# Rename the first folder to 'jdk', because Bazel looks for a
# bundled JDK in the embedded tools using that folder name.
return 'jdk/' + '/'.join(filename.split('/')[1:])
# The JDK is special - it's extracted instead of copied.
if archive_file.endswith('.tar.gz'):
copy_tar_to_zip(output_zip, input_file, _replace_dirname)
elif archive_file.endswith('.zip'):
copy_zip_to_zip(output_zip, input_file, _replace_dirname)
def main():
output_zip = os.path.join(os.getcwd(), sys.argv[1])
input_files = get_input_files(sys.argv[2])
# Copy all the input_files into output_zip.
# Adding contextlib.closing to be python 2.6 (for centos 6.7) compatible
with contextlib.closing(
zipfile.ZipFile(output_zip, 'w', zipfile.ZIP_DEFLATED)) as output_zip:
zipinfo = zipfile.ZipInfo('WORKSPACE', (1980, 1, 1, 0, 0, 0))
zipinfo.external_attr = 0o644 << 16
output_zip.writestr(zipinfo, 'workspace(name = "bazel_tools")\n')
# By sorting the file list, the resulting ZIP file will be reproducible and
# deterministic.
for archive_file, input_file in sorted(input_files.items()):
if os.path.basename(archive_file) in ('jdk.tar.gz', 'jdk.zip'):
copy_jdk_into_archive(output_zip, archive_file, input_file)
else:
zipinfo = zipfile.ZipInfo(archive_file, (1980, 1, 1, 0, 0, 0))
zipinfo.external_attr = 0o755 << 16 if is_executable(
input_file) else 0o644 << 16
zipinfo.compress_type = zipfile.ZIP_DEFLATED
with open(input_file, 'rb') as f:
output_zip.writestr(zipinfo, f.read())
if __name__ == '__main__':
main()
| apache-2.0 |
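The `output_paths` table in the file above is a first-match-wins dispatch: each candidate path is tested against glob patterns in order, and the first matching transformer decides the archive name, with a `'*'` catch-all at the end. A minimal stand-alone sketch of that idea, using made-up rules rather than Bazel's real table:

```python
import fnmatch
import os

# Hypothetical (pattern, transformer) rules, checked in order; the final
# catch-all pattern guarantees every path receives some mapping.
rules = [
    ('*tools/jdk/BUILD.tools', lambda p: 'tools/jdk/BUILD'),
    ('*zipper', lambda p: 'tools/zip/zipper/' + os.path.basename(p)),
    ('*', lambda p: p.split('bin/', 1)[-1]),
]

def map_path(path):
    # Normalize Windows separators before matching, as the original does.
    for pattern, transform in rules:
        if fnmatch.fnmatch(path.replace('\\', '/'), pattern):
            return transform(path)

print(map_path('src/tools/jdk/BUILD.tools'))    # tools/jdk/BUILD
print(map_path('bazel-out/k8/bin/foo/zipper'))  # tools/zip/zipper/zipper
print(map_path('bazel-out/k8/bin/other/file'))  # other/file
```

Ordering matters: swapping the catch-all to the front would shadow every specific rule, which is why the original list ends, rather than begins, with `'*'`.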
klickagent/phantomjs | src/breakpad/src/third_party/protobuf/protobuf/python/google/protobuf/internal/containers.py | 261 | 9573 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains container classes to represent different protocol buffer types.
This file defines container classes which represent categories of protocol
buffer field types which need extra maintenance. Currently these categories
are:
- Repeated scalar fields - These are all repeated fields which aren't
composite (e.g. they are of simple types like int32, string, etc).
- Repeated composite fields - Repeated fields which are composite. This
includes groups and nested messages.
"""
__author__ = 'petar@google.com (Petar Petrov)'
class BaseContainer(object):
"""Base container class."""
# Minimizes memory usage and disallows assignment to other attributes.
__slots__ = ['_message_listener', '_values']
def __init__(self, message_listener):
"""
Args:
message_listener: A MessageListener implementation.
The RepeatedScalarFieldContainer will call this object's
Modified() method when it is modified.
"""
self._message_listener = message_listener
self._values = []
def __getitem__(self, key):
"""Retrieves item by the specified key."""
return self._values[key]
def __len__(self):
"""Returns the number of elements in the container."""
return len(self._values)
def __ne__(self, other):
"""Checks if another instance isn't equal to this one."""
# The concrete classes should define __eq__.
return not self == other
def __hash__(self):
raise TypeError('unhashable object')
def __repr__(self):
return repr(self._values)
def sort(self, sort_function=cmp):
self._values.sort(sort_function)
class RepeatedScalarFieldContainer(BaseContainer):
"""Simple, type-checked, list-like container for holding repeated scalars."""
# Disallows assignment to other attributes.
__slots__ = ['_type_checker']
def __init__(self, message_listener, type_checker):
"""
Args:
message_listener: A MessageListener implementation.
The RepeatedScalarFieldContainer will call this object's
Modified() method when it is modified.
type_checker: A type_checkers.ValueChecker instance to run on elements
inserted into this container.
"""
super(RepeatedScalarFieldContainer, self).__init__(message_listener)
self._type_checker = type_checker
def append(self, value):
"""Appends an item to the list. Similar to list.append()."""
self._type_checker.CheckValue(value)
self._values.append(value)
if not self._message_listener.dirty:
self._message_listener.Modified()
def insert(self, key, value):
"""Inserts the item at the specified position. Similar to list.insert()."""
self._type_checker.CheckValue(value)
self._values.insert(key, value)
if not self._message_listener.dirty:
self._message_listener.Modified()
def extend(self, elem_seq):
"""Extends by appending the given sequence. Similar to list.extend()."""
if not elem_seq:
return
new_values = []
for elem in elem_seq:
self._type_checker.CheckValue(elem)
new_values.append(elem)
self._values.extend(new_values)
self._message_listener.Modified()
def MergeFrom(self, other):
"""Appends the contents of another repeated field of the same type to this
one. We do not check the types of the individual fields.
"""
self._values.extend(other._values)
self._message_listener.Modified()
def remove(self, elem):
"""Removes an item from the list. Similar to list.remove()."""
self._values.remove(elem)
self._message_listener.Modified()
def __setitem__(self, key, value):
"""Sets the item on the specified position."""
self._type_checker.CheckValue(value)
self._values[key] = value
self._message_listener.Modified()
def __getslice__(self, start, stop):
"""Retrieves the subset of items from between the specified indices."""
return self._values[start:stop]
def __setslice__(self, start, stop, values):
"""Sets the subset of items from between the specified indices."""
new_values = []
for value in values:
self._type_checker.CheckValue(value)
new_values.append(value)
self._values[start:stop] = new_values
self._message_listener.Modified()
def __delitem__(self, key):
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __delslice__(self, start, stop):
"""Deletes the subset of items from between the specified indices."""
del self._values[start:stop]
self._message_listener.Modified()
def __eq__(self, other):
"""Compares the current instance with another one."""
if self is other:
return True
# Special case for the same type which should be common and fast.
if isinstance(other, self.__class__):
return other._values == self._values
# We are presumably comparing against some other sequence type.
return other == self._values
class RepeatedCompositeFieldContainer(BaseContainer):
"""Simple, list-like container for holding repeated composite fields."""
# Disallows assignment to other attributes.
__slots__ = ['_message_descriptor']
def __init__(self, message_listener, message_descriptor):
"""
    Note that we pass in a descriptor instead of the generated class directly,
since at the time we construct a _RepeatedCompositeFieldContainer we
haven't yet necessarily initialized the type that will be contained in the
container.
Args:
message_listener: A MessageListener implementation.
The RepeatedCompositeFieldContainer will call this object's
Modified() method when it is modified.
message_descriptor: A Descriptor instance describing the protocol type
that should be present in this container. We'll use the
_concrete_class field of this descriptor when the client calls add().
"""
super(RepeatedCompositeFieldContainer, self).__init__(message_listener)
self._message_descriptor = message_descriptor
def add(self, **kwargs):
"""Adds a new element at the end of the list and returns it. Keyword
arguments may be used to initialize the element.
"""
new_element = self._message_descriptor._concrete_class(**kwargs)
new_element._SetListener(self._message_listener)
self._values.append(new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
return new_element
def extend(self, elem_seq):
"""Extends by appending the given sequence of elements of the same type
as this one, copying each individual message.
"""
message_class = self._message_descriptor._concrete_class
listener = self._message_listener
values = self._values
for message in elem_seq:
new_element = message_class()
new_element._SetListener(listener)
new_element.MergeFrom(message)
values.append(new_element)
listener.Modified()
def MergeFrom(self, other):
"""Appends the contents of another repeated field of the same type to this
one, copying each individual message.
"""
self.extend(other._values)
def __getslice__(self, start, stop):
"""Retrieves the subset of items from between the specified indices."""
return self._values[start:stop]
def __delitem__(self, key):
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __delslice__(self, start, stop):
"""Deletes the subset of items from between the specified indices."""
del self._values[start:stop]
self._message_listener.Modified()
def __eq__(self, other):
"""Compares the current instance with another one."""
if self is other:
return True
if not isinstance(other, self.__class__):
raise TypeError('Can only compare repeated composite fields against '
'other repeated composite fields.')
return self._values == other._values
| bsd-3-clause |
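The core idea of `RepeatedScalarFieldContainer` above — a list that type-checks every insertion and notifies a listener when mutated — can be sketched without any protobuf machinery. The class and method names below are illustrative stand-ins, not protobuf's API:

```python
class IntChecker:
    """Stand-in for a type_checkers.ValueChecker: rejects non-ints."""
    def check(self, value):
        if not isinstance(value, int):
            raise TypeError('expected int, got %r' % (value,))

class CheckedList:
    def __init__(self, checker, on_modified):
        self._checker = checker
        self._on_modified = on_modified
        self._values = []

    def append(self, value):
        self._checker.check(value)
        self._values.append(value)
        self._on_modified()

    def extend(self, seq):
        checked = []
        for item in seq:               # validate everything first, so a
            self._checker.check(item)  # failure leaves the list unchanged
            checked.append(item)
        self._values.extend(checked)
        self._on_modified()

    def __len__(self):
        return len(self._values)

    def __getitem__(self, key):
        return self._values[key]

events = []
lst = CheckedList(IntChecker(), lambda: events.append('modified'))
lst.append(1)
lst.extend([2, 3])
print(list(lst), events)  # [1, 2, 3] ['modified', 'modified']
```

The validate-then-commit pattern in `extend` mirrors the original's `new_values` staging list: a bad element raises before any state changes, so partial extensions never reach the container.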
daafgo/CourseBuilder-Xapi | modules/dashboard/messages.py | 3 | 5528 | # Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Messages used in the dashboard."""
__author__ = 'John Orr (jorr@google.com)'
from common import safe_dom
ABOUT_THE_COURSE_DESCRIPTION = safe_dom.assemble_text_message("""
This information is configured by an administrator from the Admin pages.
""", None)
ADMIN_PREFERENCES_DESCRIPTION = safe_dom.assemble_text_message("""
Preferences settings for individual course admins.
""", None)
ADMINISTERED_COURSES_DESCRIPTION = safe_dom.assemble_text_message("""
Courses for which you have administrator privileges
""", None)
ASSESSMENT_EDITOR_DESCRIPTION = safe_dom.assemble_text_message(
None, 'https://code.google.com/p/course-builder/wiki/CreateAssessments')
ASSETS_DESCRIPTION = safe_dom.assemble_text_message("""
These are all the assets for your course. You can upload new images and
documents here, after which you can use them in your lessons and activities.
You may create, edit, and delete activities and assessments from the Outline
page. All other assets must be edited by an administrator.
""", None)
ASSIGNMENTS_MENU_DESCRIPTION = safe_dom.assemble_text_message("""
Select a peer-reviewed assignment and enter a student's email address to view
their assignment submission and any associated reviews.
""", None)
CONTENTS_OF_THE_COURSE_DESCRIPTION = safe_dom.assemble_text_message("""
The course.yaml file contains all course-level settings. It can be
modified from other settings sub-tabs, or directly edited in its
raw form here.
""", 'https://code.google.com/p/course-builder/wiki/CourseSettings')
COURSE_ADMIN_DESCRIPTION = safe_dom.assemble_text_message("""
Admin settings for users who are course authors but not
site administrators.
""", None)
COURSE_OUTLINE_DESCRIPTION = safe_dom.assemble_text_message(
'Build, organize and preview your course here.',
'https://code.google.com/p/course-builder/wiki/Dashboard#Outline')
COURSE_OUTLINE_EDITOR_DESCRIPTION = safe_dom.assemble_text_message("""
Click up/down arrows to re-order units, or lessons within units. To move a
lesson between units, edit that lesson from the outline page and change its
parent unit.
""", None)
COURSE_TEMPLATE_DESCRIPTION = safe_dom.assemble_text_message("""
The course_template.yaml file provides default values for course settings.
These values are not dynamically editable, but you can override them
by editing your course.yaml file directly, or by changing settings in
the other Settings sub-tabs.
You can also change the default settings for all courses by editing
the course_template.yaml file on disk and re-pushing CourseBuilder to
AppEngine. Changing the defaults in the file will not erase or
override any course-specific settings you may have made.
""", None)
DATA_FILES_DESCRIPTION = safe_dom.assemble_text_message("""
The lesson.csv file contains the contents of your lesson. The unit.csv file
contains the course related content shown on the homepage. These files are
located in your Course Builder installation. Edit them directly with an editor
like Notepad++. Be careful, some editors will add extra characters, which may
prevent the uploading of these files.
""", 'https://code.google.com/p/course-builder/wiki/Dashboard#Outline')
EDIT_SETTINGS_DESCRIPTION = safe_dom.assemble_text_message("""
The course.yaml file contains many course settings.
""", 'https://code.google.com/p/course-builder/wiki/CourseSettings')
EDIT_HTML_HOOK_DESCRIPTION = safe_dom.assemble_text_message("""
HTML hooks are snippets of HTML code that are inserted at different points on
the pages of a course. Editing these snippets here permits you to make
global changes to these items.
""", 'https://code.google.com/p/course-builder/wiki/Dashboard#Outline')
IMPORT_COURSE_DESCRIPTION = safe_dom.assemble_text_message("""
Import the contents of another course into this course. Both courses must be on
the same Google App Engine instance.
""", None)
LINK_EDITOR_DESCRIPTION = safe_dom.assemble_text_message("""
Links will appear in your outline and will take students directly to the URL.
""", None)
PAGES_DESCRIPTION = safe_dom.assemble_text_message(
None, 'https://code.google.com/p/course-builder/wiki/Dashboard#Outline')
ROLES_DESCRIPTION = """
Manage the different roles associated with your course.
A role binds a set of permissions to a set of users. The role editor allows you
to assign any of the permissions currently registered by the enabled modules.
"""
SETTINGS_DESCRIPTION = safe_dom.assemble_text_message(
None, 'https://code.google.com/p/course-builder/wiki/Dashboard#Settings')
UNIT_EDITOR_DESCRIPTION = safe_dom.assemble_text_message("""
Units contain lessons and activities.
""", 'https://code.google.com/p/course-builder/wiki/Dashboard#Outline')
UPLOAD_ASSET_DESCRIPTION = safe_dom.assemble_text_message("""
Choose a file to upload to this Google App Engine instance. Learn more about
file storage and hosting.
""", 'https://code.google.com/p/course-builder/wiki/Dashboard#Assets')
| apache-2.0 |
ahh2131/mchisel | test/extended_id_test.py | 3 | 1208 | from pymclevel import BoundingBox
from pymclevel.schematic import MCSchematic
from pymclevel import MCInfdevOldLevel
from templevel import TempLevel
__author__ = 'Rio'
def test_schematic_extended_ids():
s = MCSchematic(shape=(1, 1, 5))
s.Blocks[0,0,0] = 2048
temp = TempLevel("schematic", createFunc=s.saveToFile)
s = temp.level
assert s.Blocks[0,0,0] == 2048
def alpha_test_level():
temp = TempLevel("alpha", createFunc=lambda f: MCInfdevOldLevel(f, create=True))
level = temp.level
level.createChunk(0, 0)
for x in range(0, 10):
level.setBlockAt(x, 2, 5, 2048)
level.saveInPlace()
level.close()
level = MCInfdevOldLevel(filename=level.filename)
return level
def testExport():
level = alpha_test_level()
for size in [(16, 16, 16),
(15, 16, 16),
(15, 16, 15),
(15, 15, 15),
]:
schem = level.extractSchematic(BoundingBox((0, 0, 0), size))
schem = TempLevel("schem", createFunc=lambda f: schem.saveToFile(f)).level
assert (schem.Blocks > 255).any()
def testAlphaIDs():
level = alpha_test_level()
assert level.blockAt(0,2,5) == 2048
| isc |
CHT5/program-y | src/test/parser/template/nodes/test_condtype1.py | 3 | 4310 | import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.word import TemplateWordNode
from programy.parser.template.nodes.condtype1 import TemplateType1ConditionNode
from programy.dialog import Question
from test.parser.template.base import TemplateTestsBaseClass
class TemplateType1ConditionNodeTests(TemplateTestsBaseClass):
def test_node_global_match(self):
root = TemplateNode()
self.assertIsNotNone(root)
self.assertIsNotNone(root.children)
self.assertEqual(len(root.children), 0)
node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
self.assertIsNotNone(node)
node.append(TemplateWordNode("Hello"))
root.append(node)
self.assertEqual(len(root.children), 1)
self.bot.conversation(self.clientid)._predicates['name1'] = "value1"
result = root.resolve(self.bot, self.clientid)
self.assertIsNotNone(result)
self.assertEqual(result, "Hello")
def test_node_global_nomatch(self):
root = TemplateNode()
self.assertIsNotNone(root)
self.assertIsNotNone(root.children)
self.assertEqual(len(root.children), 0)
node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
self.assertIsNotNone(node)
node.append(TemplateWordNode("Hello"))
root.append(node)
self.assertEqual(len(root.children), 1)
self.bot.conversation(self.clientid)._predicates['name1'] = "value2"
result = root.resolve(self.bot, self.clientid)
self.assertIsNotNone(result)
self.assertEqual(result, "")
def test_node_local_match(self):
root = TemplateNode()
self.assertIsNotNone(root)
self.assertIsNotNone(root.children)
self.assertEqual(len(root.children), 0)
node = TemplateType1ConditionNode("var1", TemplateWordNode("value1"), local=True)
self.assertIsNotNone(node)
node.append(TemplateWordNode("Hello"))
root.append(node)
self.assertEqual(len(root.children), 1)
question = Question.create_from_text("Hello")
self.bot.conversation(self.clientid).record_dialog(question)
self.bot.conversation(self.clientid).current_question().set_predicate("var1", "value1")
result = root.resolve(self.bot, self.clientid)
self.assertIsNotNone(result)
self.assertEqual(result, "Hello")
def test_node_local_nomatch(self):
root = TemplateNode()
self.assertIsNotNone(root)
self.assertIsNotNone(root.children)
self.assertEqual(len(root.children), 0)
node = TemplateType1ConditionNode("var1", TemplateWordNode("value1"), local=True)
self.assertIsNotNone(node)
node.append(TemplateWordNode("Hello"))
root.append(node)
self.assertEqual(len(root.children), 1)
question = Question.create_from_text("Hello")
self.bot.conversation(self.clientid).record_dialog(question)
self.bot.conversation(self.clientid).current_question().set_predicate("var1", "value2")
result = root.resolve(self.bot, self.clientid)
self.assertIsNotNone(result)
self.assertEqual(result, "")
def test_to_xml_global(self):
root = TemplateNode()
node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
node.append(TemplateWordNode("Hello"))
root.append(node)
xml = root.xml_tree(self.bot, self.clientid)
self.assertIsNotNone(xml)
xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
self.assertEqual('<template><condition name="name1"><value>value1</value>Hello</condition></template>', xml_str)
def test_to_xml_local(self):
root = TemplateNode()
node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=True)
node.append(TemplateWordNode("Hello"))
root.append(node)
xml = root.xml_tree(self.bot, self.clientid)
self.assertIsNotNone(xml)
xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
self.assertEqual('<template><condition var="name1"><value>value1</value>Hello</condition></template>', xml_str)
| mit |
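The truncated test fragment above exercises AIML-style `condition` template nodes, which resolve to their children's text only when a stored predicate matches, and to the empty string otherwise. A minimal pure-Python sketch of that matching rule (class and method names here are illustrative, not the tested program's API):

```python
class ConditionNode:
    """Renders its children only when the stored predicate value matches."""

    def __init__(self, name, value, children):
        self.name = name          # predicate name, e.g. "var1"
        self.value = value        # value required for a match
        self.children = children  # text fragments to emit on a match

    def resolve(self, predicates):
        # A non-matching condition resolves to the empty string, mirroring
        # the assertEqual(result, "") checks in the tests above.
        if predicates.get(self.name) == self.value:
            return " ".join(self.children)
        return ""


node = ConditionNode("var1", "value1", ["Hello"])
print(node.resolve({"var1": "value1"}))        # -> Hello
print(repr(node.resolve({"var1": "value2"})))  # -> ''
```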
zimmerst/phoshare | appledata/applexml.py | 7 | 5245 | '''Reads iPhoto or iTunes XML data files'''
# Copyright 2010 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import unicodedata
from xml import sax
import tilutil.systemutils as su
#APPLE_BASE = time.mktime((2001, 1, 1, 0, 0, 0, 0, 0, -1))
APPLE_BASE = 978307200 # 2001/1/1
def getappletime(value):
'''Converts a numeric Apple time stamp into a date and time'''
try:
return datetime.datetime.fromtimestamp(APPLE_BASE + float(value))
except ValueError, _e:
# bad time stamp in database, default to "now"
return datetime.datetime.now()
class AppleXMLResolver(sax.handler.EntityResolver): #IGNORE:W0232
'''Helper to deal with XML entity resolving'''
def __init__(self):
pass
def resolveEntity(self, _publicId, systemId): #IGNORE:C0103
'''Simple schema, resolve all entities to just the systemId'''
return systemId
class AppleXMLHandler(sax.handler.ContentHandler):
'''Parses an Apple XML file, as generated by iPhoto and iTunes'''
def __init__(self):
sax.handler.ContentHandler.__init__(self)
self.chars = ""
self.key = None
self.parse_stack = []
self.top_node = None
self._parsingdata = False
def add_object(self, xml_object):
'''Adds an object to the current container, which can be a list or a
map.
'''
current_top = self.parse_stack[-1]
if isinstance(current_top, list):
current_top.append(xml_object)
else:
current_top[self.key] = xml_object
def startElement(self, name, _attributes): #IGNORE:C0103
'''Handles the start of an XML element'''
self._parsingdata = False
if name in ("key", "date", "string", "integer", "real", "false",
"true"):
self.chars = None
elif name == "dict":
new_dict = {}
self.add_object(new_dict)
self.parse_stack.append(new_dict)
self.chars = None
elif name == "array":
new_array = []
self.add_object(new_array)
self.parse_stack.append(new_array)
self.chars = None
elif name == "plist":
self.parse_stack.append([])
self.chars = None
elif name == "data":
self.chars = None
self._parsingdata = True
else:
print "unrecognized element in XML data: " + name
def characters(self, data):
'''Process a character string from the SAX parser.'''
# if we are inside a <data> element, we need to strip the characters.
# Here is a typical <data> element:
# <data>
# AQEAAwAAAAIAAAAZAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# AAAAAA==
# </data>
if self._parsingdata:
data = data.strip()
if not self.chars:
self.chars = data
else:
self.chars += data
def endElement(self, name): #IGNORE:C0103
'''callback for the end of a parsed XML element'''
if name == "key":
self.key = self.chars
elif name == 'string':
self.add_object(
unicodedata.normalize("NFC", su.unicode_string(self.chars)))
elif name in ("integer", "real", "date"):
self.add_object(self.chars)
elif name == "true":
self.add_object(True)
elif name == "false":
self.add_object(False)
elif name == "data":
self.add_object(self.chars)
elif name == "dict" or name == "array":
self.parse_stack.pop()
elif name == "plist":
self.top_node = self.parse_stack.pop()
else:
print "unrecognized element in XML data: " + name
self.chars = None
def gettopnode(self):
'''Returns the root of the parsed data tree'''
return self.top_node[0]
def read_applexml(filename):
'''Reads the named file, and parses it as an Apple XML file. Returns the
top node.'''
parser = sax.make_parser()
handler = AppleXMLHandler()
parser.setContentHandler(handler)
parser.setEntityResolver(AppleXMLResolver())
parser.parse(filename)
return handler.gettopnode()
def read_applexml_string(data):
'''Parses the data as Apple XML format. Returns the top node.'''
parser = sax.make_parser()
handler = AppleXMLHandler()
parser.setContentHandler(handler)
parser.setEntityResolver(AppleXMLResolver())
    # xml.sax parsers have no parseString() method; wrap the data in a
    # file-like object and use parse() instead.
    import StringIO
    parser.parse(StringIO.StringIO(data))
return handler.gettopnode()
| apache-2.0 |
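`getappletime` above converts Apple's timestamps, which count seconds from 2001-01-01 rather than the Unix epoch of 1970-01-01; `APPLE_BASE = 978307200` is exactly that offset. A quick Python 3 check of the conversion (the file itself is Python 2; this sketch uses UTC to stay timezone-independent):

```python
import datetime

APPLE_BASE = 978307200  # seconds between 1970-01-01 and 2001-01-01 (UTC)

def get_apple_time(value):
    """Convert an Apple timestamp (seconds since 2001) to a UTC datetime."""
    try:
        return datetime.datetime.utcfromtimestamp(APPLE_BASE + float(value))
    except ValueError:
        # Bad timestamp in the database: fall back to "now",
        # as the original module does.
        return datetime.datetime.utcnow()

print(get_apple_time(0))  # -> 2001-01-01 00:00:00
```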
quamilek/django | django/contrib/messages/storage/session.py | 478 | 1714 | import json
from django.contrib.messages.storage.base import BaseStorage
from django.contrib.messages.storage.cookie import (
MessageDecoder, MessageEncoder,
)
from django.utils import six
class SessionStorage(BaseStorage):
"""
Stores messages in the session (that is, django.contrib.sessions).
"""
session_key = '_messages'
def __init__(self, request, *args, **kwargs):
assert hasattr(request, 'session'), "The session-based temporary "\
"message storage requires session middleware to be installed, "\
"and come before the message middleware in the "\
"MIDDLEWARE_CLASSES list."
super(SessionStorage, self).__init__(request, *args, **kwargs)
def _get(self, *args, **kwargs):
"""
Retrieves a list of messages from the request's session. This storage
always stores everything it is given, so return True for the
all_retrieved flag.
"""
return self.deserialize_messages(self.request.session.get(self.session_key)), True
def _store(self, messages, response, *args, **kwargs):
"""
Stores a list of messages to the request's session.
"""
if messages:
self.request.session[self.session_key] = self.serialize_messages(messages)
else:
self.request.session.pop(self.session_key, None)
return []
def serialize_messages(self, messages):
encoder = MessageEncoder(separators=(',', ':'))
return encoder.encode(messages)
def deserialize_messages(self, data):
if data and isinstance(data, six.string_types):
return json.loads(data, cls=MessageDecoder)
return data
| bsd-3-clause |
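`serialize_messages` above constructs its `MessageEncoder` with `separators=(',', ':')`, which drops the whitespace `json.dumps` normally inserts after commas and colons and so keeps the session payload a little smaller. The effect in isolation:

```python
import json

messages = [["info", "Saved."], ["error", "Name required."]]

default = json.dumps(messages)                        # ", " separators
compact = json.dumps(messages, separators=(',', ':'))  # no whitespace

print(default)  # -> [["info", "Saved."], ["error", "Name required."]]
print(compact)  # -> [["info","Saved."],["error","Name required."]]
print(len(compact) < len(default))  # -> True
```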
levenlabs/ansible | lib/ansible/executor/task_result.py | 12 | 2757 | # (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.parsing.dataloader import DataLoader
class TaskResult:
'''
    This class is responsible for interpreting the resulting data
from an executed task, and provides helper methods for determining
the result of a given task.
'''
def __init__(self, host, task, return_data):
self._host = host
self._task = task
if isinstance(return_data, dict):
self._result = return_data.copy()
else:
self._result = DataLoader().load(return_data)
def is_changed(self):
return self._check_key('changed')
def is_skipped(self):
# loop results
if 'results' in self._result and self._task.loop:
results = self._result['results']
# Loop tasks are only considered skipped if all items were skipped.
# some squashed results (eg, yum) are not dicts and can't be skipped individually
if results and all(isinstance(res, dict) and res.get('skipped', False) for res in results):
return True
# regular tasks and squashed non-dict results
return self._result.get('skipped', False)
def is_failed(self):
if 'failed_when_result' in self._result or \
'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:
return self._check_key('failed_when_result')
else:
return self._check_key('failed') or self._result.get('rc', 0) != 0
def is_unreachable(self):
return self._check_key('unreachable')
def _check_key(self, key):
if 'results' in self._result and self._task.loop:
flag = False
for res in self._result.get('results', []):
if isinstance(res, dict):
flag |= res.get(key, False)
return flag
else:
return self._result.get(key, False)
| gpl-3.0 |
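`_check_key` above ORs a boolean flag across every per-item result of a looped task, so the task counts as changed/failed/unreachable if any item was, while non-dict (squashed) items are skipped. The same aggregation as a standalone sketch:

```python
def check_key(result, key):
    """Return True if `key` is truthy in the result or in any loop item."""
    if 'results' in result:
        flag = False
        for res in result['results']:
            # Squashed results (e.g. from yum) may not be dicts; skip them.
            if isinstance(res, dict):
                flag |= res.get(key, False)
        return flag
    return result.get(key, False)


looped = {'results': [{'changed': False}, {'changed': True}, 'squashed']}
print(check_key(looped, 'changed'))            # -> True
print(check_key({'failed': False}, 'failed'))  # -> False
```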
NiHab/py-jr | src/dictionaries/lookup.py | 1 | 2512 | #@PydevCodeAnalysisIgnore
from lxml import etree as ET
import dictionaries.jmdict
import dictionaries.freq
import dictionaries.deinflecter
from librepo import Result
from copy import deepcopy
class Lookup(object):
'''
Combines and abstracts dict, freq and deinflecter
'''
def __init__(self):
self.fr = dictionaries.freq.Freq()
self.di = dictionaries.jmdict.JMDict()
self.de = dictionaries.deinflecter.Deinflecter()
self.de.load("../data/inflections")
self.di.load("../data/JMdict_e", "../data/transform.xml")
self.fr.load("../data/freq")
def lookup(self, word, asHtml = False):
deinfs = self.de.deinflectWord(word, returnWord=True)
hits = []
addednodes = {}
for deinf in deinfs:
for res in self.di[deinf.deinflected]:
if not (res in addednodes):
#Deep-copy so we can modify
copied = deepcopy(res)
#Add conjugation info
conjugationode = ET.SubElement(copied, "app_infl")
conjugationode.text = deinf.type
#Add frequency
fnode = ET.SubElement(copied, "freq")
freq = -1
#Get the highest freq of all possible readings / writings for a node
for el in copied.findall("k_ele/keb") + copied.findall("r_ele/reb"):
nfreq = self.fr[el.text]
freq = nfreq if nfreq > freq else freq
fnode.text = str(freq)
#Done
hits += [copied]
addednodes[res] = copied
else:
#Node we already added can be reached via different conjugation, add new conjugation info
cpy = addednodes[res]
conjugationode = ET.SubElement(cpy, "app_infl")
conjugationode.text = deinf.type
resultxmlnode = ET.Element('result')
resultxmlnode.extend(hits)
if asHtml:
return (str(self.di.toHTML(resultxmlnode)), len(hits))
else:
return (resultxmlnode, len(hits))
| gpl-2.0 |
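`Lookup.lookup` above deep-copies each matching dictionary `Element` before annotating it with `app_infl` and `freq` children, so the cached nodes in the dictionary are never mutated. The same pattern in isolation with Python 3's `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET
from copy import deepcopy

original = ET.fromstring('<entry><keb>word</keb></entry>')

annotated = deepcopy(original)           # work on a copy, not the cached node
freq = ET.SubElement(annotated, 'freq')  # add derived data to the copy only
freq.text = '42'

print(len(original))   # -> 1  (cached node untouched)
print(len(annotated))  # -> 2
```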
beni55/txZMQ | txzmq/test/test_router_dealer.py | 1 | 2395 | """
Tests for L{txzmq.router_dealer}.
"""
from twisted.internet import defer, reactor
from twisted.trial import unittest
from txzmq.connection import ZmqEndpoint, ZmqEndpointType
from txzmq.factory import ZmqFactory
from txzmq.router_dealer import ZmqRouterConnection, ZmqDealerConnection
class ZmqTestRouterConnection(ZmqRouterConnection):
message_count = 0
def gotMessage(self, senderId, message):
assert senderId == 'dealer'
if message == 'stop':
reactor.callLater(0, self.sendMsg, 'dealer', 'exit')
else:
self.message_count += 1
self.sendMsg('dealer', 'event')
for _ in xrange(2):
reactor.callLater(0, self.sendMsg, 'dealer', 'event')
class ZmqTestDealerConnection(ZmqDealerConnection):
message_count = 0
def gotMessage(self, message):
if message == 'event':
self.message_count += 1
elif message == 'exit':
self.d.callback(None)
else:
assert False, "received unexpected message: %r" % (message,)
class ZmqRouterDealerTwoFactoryConnectionTestCase(unittest.TestCase):
"""
Test case for L{txzmq.req_rep} with ROUTER/DEALER in two factories.
"""
REQUEST_COUNT = 10000
def setUp(self):
self.factory1 = ZmqFactory()
dealer_endpoint = ZmqEndpoint(ZmqEndpointType.connect, "ipc://#7")
self.dealer = ZmqTestDealerConnection(self.factory1, dealer_endpoint,
identity='dealer')
self.dealer.d = defer.Deferred()
self.factory2 = ZmqFactory()
router_endpoint = ZmqEndpoint(ZmqEndpointType.bind, "ipc://#7")
self.router = ZmqTestRouterConnection(self.factory2, router_endpoint,
identity='router')
def tearDown(self):
self.factory2.shutdown()
self.factory1.shutdown()
def test_start(self):
for _ in xrange(self.REQUEST_COUNT):
reactor.callLater(0, self.dealer.sendMsg, 'req')
reactor.callLater(0, self.dealer.sendMsg, 'stop')
def checkResults(_):
self.failUnlessEqual(self.dealer.message_count,
3 * self.REQUEST_COUNT)
self.failUnlessEqual(self.router.message_count, self.REQUEST_COUNT)
return self.dealer.d.addCallback(checkResults)
| gpl-2.0 |
scorphus/scrapy | scrapy/link.py | 56 | 1253 | """
This module defines the Link object used in Link extractors.
For actual link extractors implementation see scrapy.linkextractors, or
its documentation in: docs/topics/link-extractors.rst
"""
import six
class Link(object):
"""Link objects represent an extracted link by the LinkExtractor."""
__slots__ = ['url', 'text', 'fragment', 'nofollow']
def __init__(self, url, text='', fragment='', nofollow=False):
if isinstance(url, six.text_type):
import warnings
warnings.warn("Do not instantiate Link objects with unicode urls. "
"Assuming utf-8 encoding (which could be wrong)")
url = url.encode('utf-8')
self.url = url
self.text = text
self.fragment = fragment
self.nofollow = nofollow
def __eq__(self, other):
return self.url == other.url and self.text == other.text and \
self.fragment == other.fragment and self.nofollow == other.nofollow
def __hash__(self):
return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)
def __repr__(self):
return 'Link(url=%r, text=%r, fragment=%r, nofollow=%r)' % \
(self.url, self.text, self.fragment, self.nofollow)
| bsd-3-clause |
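The `Link` class above pairs `__eq__` with a matching `__hash__` (an XOR of the field hashes) so that duplicate links collapse correctly in sets and dicts. A stripped-down sketch of that contract (`MiniLink` is an illustrative stand-in, not Scrapy's class):

```python
class MiniLink:
    __slots__ = ['url', 'nofollow']

    def __init__(self, url, nofollow=False):
        self.url = url
        self.nofollow = nofollow

    def __eq__(self, other):
        return self.url == other.url and self.nofollow == other.nofollow

    def __hash__(self):
        # Objects that compare equal must hash equal, or set/dict
        # deduplication silently breaks.
        return hash(self.url) ^ hash(self.nofollow)


links = {MiniLink('http://a'), MiniLink('http://a'), MiniLink('http://b')}
print(len(links))  # -> 2
```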
natj/bender | runs/out/reds.py | 1 | 3348 | import numpy as np
import matplotlib as mpl
from pylab import *
from matplotlib import cm
from matplotlib.colors import LogNorm
mpl.rcParams['image.cmap'] = 'inferno'
mpl.rc('font', family='serif')
mpl.rc('xtick', labelsize='small')
mpl.rc('ytick', labelsize='small')
gs = GridSpec(1, 3)
gs.update(hspace = 0.3)
#Construct output xy image plane from img object
##################################################
x_span = 11.0
y_span = 11.0
x_bins = 500
y_bins = 500
xs = np.linspace(-x_span, x_span, x_bins)
ys = np.linspace(-y_span, y_span, y_bins)
##################################################
# plot values on image plane
def trans(mat):
return np.flipud(mat.T)
#return mat
def detrans(mat):
return np.flipud(mat).T
def clean_image(mat):
#mask all 0.0 elements and transpose
mat_masked = np.ma.masked_where(mat == 0, mat)
return trans(mat_masked)
#read redshift array
fname = "reds_f600pbbr15m1.4i45.csv"
data = np.genfromtxt(fname, delimiter=',')
redshift = np.reshape(data, (x_bins, y_bins) )
redshift = clean_image(redshift)
##################################################
fname2 = 'reds_f600_bb_r15_m1.4_i45.csv'
data2 = np.genfromtxt(fname2, delimiter=',')
redshift2 = np.reshape(data2, (x_bins, y_bins) )
redshift2 = clean_image(redshift2)
# other settings for imshow
extent=( xs[0], xs[-1], ys[0], ys[-1] )
interpolation = 'nearest'
###################################################
ax = subplot(gs[0])
ax.minorticks_on()
cax = ax.imshow(redshift, interpolation=interpolation, origin='lower', extent=extent,
cmap=cm.get_cmap('coolwarm_r'))
ax.contour(redshift, 20, hold='on', colors='w',
origin='lower', extent=extent)
###################################################
ax = subplot(gs[1])
ax.minorticks_on()
cax = ax.imshow(redshift2, interpolation=interpolation, origin='lower', extent=extent,
cmap=cm.get_cmap('coolwarm_r'))
ax.contour(redshift2, 20, hold='on', colors='w',
origin='lower', extent=extent)
ax = subplot(gs[2])
ax.minorticks_on()
###################################################
# relative error
relerr = np.zeros(( x_bins, y_bins))
for i, x in enumerate(xs):
for j, y in enumerate(ys):
val1 = redshift[i,j]
val2 = redshift2[i,j]
errval = 0.0
if not(val2 == 0.0):
errval = np.abs( (val2 - val1)/val2 )
#errval = np.log10( np.abs((val2 - val1)/val2) )
relerr[i,j] = errval
relerr = np.ma.masked_where(relerr == 0, relerr)
#emin = -0.02
#emax = 0.02
print "min :",np.min(relerr)
print "max :",np.max(relerr)
#emin = -3.0
#emax = 1.0
emin = 1.0e-4
emax = 1.0e-1
cax = ax.imshow(relerr,
interpolation=interpolation,
origin='lower', extent=extent,
cmap=cm.get_cmap('inferno_r'),
norm=LogNorm(emin, emax)
#vmin = emin,
#vmax = emax,
)
levels = np.linspace(emin, emax, 10)
#levels = np.array( [1.0e-3, 5.0e-3, 1.0e-2, 5.0e-2, 1.0e-1, 5.0e-1, 1.0e0] )
#levels = np.array( [1.0e-2, 2.0e-2 ] )
levels = np.array( [1.0e-3, 5.0e-3] )
ax.contour(relerr,
levels,
hold='on',
           linestyles='dashed',
colors='r',
origin='lower',
extent=extent,
vmin = emin,
vmax = emax
)
colorbar(cax)
show()
savefig('reds.pdf')
| mit |
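The nested loop in `reds.py` above computes a relative error `|(val2 - val1)/val2|` per cell, leaving 0.0 wherever the reference value is zero (those cells are then masked out of the plot). The same rule without the numpy/matplotlib machinery:

```python
def relative_error(a, b):
    """Elementwise |(b - a) / b|, with 0.0 where the reference b is zero."""
    out = []
    for v1, v2 in zip(a, b):
        out.append(abs((v2 - v1) / v2) if v2 != 0.0 else 0.0)
    return out


print(relative_error([1.0, 2.0, 3.0], [1.1, 2.0, 0.0]))
```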
IndonesiaX/edx-platform | common/djangoapps/cors_csrf/tests/test_views.py | 150 | 2397 | """Tests for cross-domain request views. """
import json
from django.test import TestCase
from django.core.urlresolvers import reverse, NoReverseMatch
import ddt
from config_models.models import cache
from cors_csrf.models import XDomainProxyConfiguration
@ddt.ddt
class XDomainProxyTest(TestCase):
"""Tests for the xdomain proxy end-point. """
def setUp(self):
"""Clear model-based config cache. """
super(XDomainProxyTest, self).setUp()
try:
self.url = reverse('xdomain_proxy')
except NoReverseMatch:
self.skipTest('xdomain_proxy URL is not configured')
cache.clear()
def test_xdomain_proxy_disabled(self):
self._configure(False)
response = self._load_page()
self.assertEqual(response.status_code, 404)
@ddt.data(None, [' '], [' ', ' '])
def test_xdomain_proxy_enabled_no_whitelist(self, whitelist):
self._configure(True, whitelist=whitelist)
response = self._load_page()
self.assertEqual(response.status_code, 404)
@ddt.data(
(['example.com'], ['example.com']),
(['example.com', 'sub.example.com'], ['example.com', 'sub.example.com']),
([' example.com '], ['example.com']),
([' ', 'example.com'], ['example.com']),
)
@ddt.unpack
def test_xdomain_proxy_enabled_with_whitelist(self, whitelist, expected_whitelist):
self._configure(True, whitelist=whitelist)
response = self._load_page()
self._check_whitelist(response, expected_whitelist)
def _configure(self, is_enabled, whitelist=None):
"""Enable or disable the end-point and configure the whitelist. """
config = XDomainProxyConfiguration.current()
config.enabled = is_enabled
if whitelist:
config.whitelist = "\n".join(whitelist)
config.save()
cache.clear()
def _load_page(self):
"""Load the end-point. """
return self.client.get(reverse('xdomain_proxy'))
def _check_whitelist(self, response, expected_whitelist):
"""Verify that the domain whitelist is rendered on the page. """
rendered_whitelist = json.dumps({
domain: '*'
for domain in expected_whitelist
})
self.assertContains(response, 'xdomain.min.js')
self.assertContains(response, rendered_whitelist)
| agpl-3.0 |
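The whitelist cases above expect `' example.com '` to normalize to `['example.com']` and all-blank input to count as no whitelist at all. A sketch of that normalization step (the helper name is illustrative; the real logic lives inside the view):

```python
def parse_whitelist(raw):
    """Split a newline-separated domain whitelist, dropping blank entries."""
    if not raw:
        return []
    domains = [line.strip() for line in raw.split('\n')]
    return [d for d in domains if d]


print(parse_whitelist('  example.com  \n \n sub.example.com'))
# -> ['example.com', 'sub.example.com']
print(parse_whitelist(' \n '))  # -> []
```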
bottompawn/kbengine | kbe/src/lib/python/Lib/test/test_pkg.py | 84 | 9759 | # Test packages (dotted-name import)
import sys
import os
import tempfile
import textwrap
import unittest
from test import support
# Helpers to create and destroy hierarchies.
def cleanout(root):
names = os.listdir(root)
for name in names:
fullname = os.path.join(root, name)
if os.path.isdir(fullname) and not os.path.islink(fullname):
cleanout(fullname)
else:
os.remove(fullname)
os.rmdir(root)
def fixdir(lst):
if "__builtins__" in lst:
lst.remove("__builtins__")
if "__initializing__" in lst:
lst.remove("__initializing__")
return lst
# XXX Things to test
#
# import package without __init__
# import package with __init__
# __init__ importing submodule
# __init__ importing global module
# __init__ defining variables
# submodule importing other submodule
# submodule importing global module
# submodule import submodule via global name
# from package import submodule
# from package import subpackage
# from package import variable (defined in __init__)
# from package import * (defined in __init__)
class TestPkg(unittest.TestCase):
def setUp(self):
self.root = None
self.pkgname = None
self.syspath = list(sys.path)
self.modules_before = support.modules_setup()
def tearDown(self):
sys.path[:] = self.syspath
support.modules_cleanup(*self.modules_before)
if self.root: # Only clean if the test was actually run
cleanout(self.root)
# delete all modules concerning the tested hierarchy
if self.pkgname:
modules = [name for name in sys.modules
if self.pkgname in name.split('.')]
for name in modules:
del sys.modules[name]
def run_code(self, code):
exec(textwrap.dedent(code), globals(), {"self": self})
def mkhier(self, descr):
root = tempfile.mkdtemp()
sys.path.insert(0, root)
if not os.path.isdir(root):
os.mkdir(root)
for name, contents in descr:
comps = name.split()
fullname = root
for c in comps:
fullname = os.path.join(fullname, c)
if contents is None:
os.mkdir(fullname)
else:
f = open(fullname, "w")
f.write(contents)
if contents and contents[-1] != '\n':
f.write('\n')
f.close()
self.root = root
# package name is the name of the first item
self.pkgname = descr[0][0]
def test_1(self):
hier = [("t1", None), ("t1 __init__.py", "")]
self.mkhier(hier)
import t1
def test_2(self):
hier = [
("t2", None),
("t2 __init__.py", "'doc for t2'"),
("t2 sub", None),
("t2 sub __init__.py", ""),
("t2 sub subsub", None),
("t2 sub subsub __init__.py", "spam = 1"),
]
self.mkhier(hier)
import t2.sub
import t2.sub.subsub
self.assertEqual(t2.__name__, "t2")
self.assertEqual(t2.sub.__name__, "t2.sub")
self.assertEqual(t2.sub.subsub.__name__, "t2.sub.subsub")
# This exec crap is needed because Py3k forbids 'import *' outside
# of module-scope and __import__() is insufficient for what we need.
s = """
import t2
from t2 import *
self.assertEqual(dir(), ['self', 'sub', 't2'])
"""
self.run_code(s)
from t2 import sub
from t2.sub import subsub
from t2.sub.subsub import spam
self.assertEqual(sub.__name__, "t2.sub")
self.assertEqual(subsub.__name__, "t2.sub.subsub")
self.assertEqual(sub.subsub.__name__, "t2.sub.subsub")
for name in ['spam', 'sub', 'subsub', 't2']:
            self.assertTrue(locals()[name], "Failed to import %s" % name)
import t2.sub
import t2.sub.subsub
self.assertEqual(t2.__name__, "t2")
self.assertEqual(t2.sub.__name__, "t2.sub")
self.assertEqual(t2.sub.subsub.__name__, "t2.sub.subsub")
s = """
from t2 import *
self.assertTrue(dir(), ['self', 'sub'])
"""
self.run_code(s)
def test_3(self):
hier = [
("t3", None),
("t3 __init__.py", ""),
("t3 sub", None),
("t3 sub __init__.py", ""),
("t3 sub subsub", None),
("t3 sub subsub __init__.py", "spam = 1"),
]
self.mkhier(hier)
import t3.sub.subsub
self.assertEqual(t3.__name__, "t3")
self.assertEqual(t3.sub.__name__, "t3.sub")
self.assertEqual(t3.sub.subsub.__name__, "t3.sub.subsub")
def test_4(self):
hier = [
("t4.py", "raise RuntimeError('Shouldnt load t4.py')"),
("t4", None),
("t4 __init__.py", ""),
("t4 sub.py", "raise RuntimeError('Shouldnt load sub.py')"),
("t4 sub", None),
("t4 sub __init__.py", ""),
("t4 sub subsub.py",
"raise RuntimeError('Shouldnt load subsub.py')"),
("t4 sub subsub", None),
("t4 sub subsub __init__.py", "spam = 1"),
]
self.mkhier(hier)
s = """
from t4.sub.subsub import *
self.assertEqual(spam, 1)
"""
self.run_code(s)
def test_5(self):
hier = [
("t5", None),
("t5 __init__.py", "import t5.foo"),
("t5 string.py", "spam = 1"),
("t5 foo.py",
"from . import string; assert string.spam == 1"),
]
self.mkhier(hier)
import t5
s = """
from t5 import *
self.assertEqual(dir(), ['foo', 'self', 'string', 't5'])
"""
self.run_code(s)
import t5
self.assertEqual(fixdir(dir(t5)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__path__', '__spec__',
'foo', 'string', 't5'])
self.assertEqual(fixdir(dir(t5.foo)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__spec__', 'string'])
self.assertEqual(fixdir(dir(t5.string)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__spec__', 'spam'])
def test_6(self):
hier = [
("t6", None),
("t6 __init__.py",
"__all__ = ['spam', 'ham', 'eggs']"),
("t6 spam.py", ""),
("t6 ham.py", ""),
("t6 eggs.py", ""),
]
self.mkhier(hier)
import t6
self.assertEqual(fixdir(dir(t6)),
['__all__', '__cached__', '__doc__', '__file__',
'__loader__', '__name__', '__package__', '__path__',
'__spec__'])
s = """
import t6
from t6 import *
self.assertEqual(fixdir(dir(t6)),
['__all__', '__cached__', '__doc__', '__file__',
'__loader__', '__name__', '__package__',
'__path__', '__spec__', 'eggs', 'ham', 'spam'])
self.assertEqual(dir(), ['eggs', 'ham', 'self', 'spam', 't6'])
"""
self.run_code(s)
def test_7(self):
hier = [
("t7.py", ""),
("t7", None),
("t7 __init__.py", ""),
("t7 sub.py",
"raise RuntimeError('Shouldnt load sub.py')"),
("t7 sub", None),
("t7 sub __init__.py", ""),
            ("t7 sub subsub.py",
"raise RuntimeError('Shouldnt load subsub.py')"),
("t7 sub subsub", None),
("t7 sub subsub __init__.py",
"spam = 1"),
]
self.mkhier(hier)
t7, sub, subsub = None, None, None
import t7 as tas
self.assertEqual(fixdir(dir(tas)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__path__', '__spec__'])
self.assertFalse(t7)
from t7 import sub as subpar
self.assertEqual(fixdir(dir(subpar)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__path__', '__spec__'])
self.assertFalse(t7)
self.assertFalse(sub)
from t7.sub import subsub as subsubsub
self.assertEqual(fixdir(dir(subsubsub)),
['__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__path__', '__spec__',
'spam'])
self.assertFalse(t7)
self.assertFalse(sub)
self.assertFalse(subsub)
from t7.sub.subsub import spam as ham
self.assertEqual(ham, 1)
self.assertFalse(t7)
self.assertFalse(sub)
self.assertFalse(subsub)
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_8(self):
hier = [
("t8", None),
("t8 __init__"+os.extsep+"py", "'doc for t8'"),
]
self.mkhier(hier)
import t8
self.assertEqual(t8.__doc__, "doc for t8")
def test_main():
support.run_unittest(__name__)
if __name__ == "__main__":
test_main()
| lgpl-3.0 |
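`mkhier` above encodes a directory tree as space-separated names: `"t2 sub __init__.py"` means `t2/sub/__init__.py`, and a `None` body means "create a directory". The path-assembly step in isolation:

```python
import os.path

def to_path(root, name):
    """Turn a space-separated hierarchy entry into a filesystem path."""
    fullname = root
    for comp in name.split():
        fullname = os.path.join(fullname, comp)
    return fullname


# On POSIX this prints /tmp/pkgtest/t2/sub/__init__.py
print(to_path('/tmp/pkgtest', 't2 sub __init__.py'))
```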
duramato/CouchPotatoServer | libs/xmpp/auth.py | 196 | 15633 | ## auth.py
##
## Copyright (C) 2003-2005 Alexey "Snake" Nezhdanov
##
## This program is free software; you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation; either version 2, or (at your option)
## any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
# $Id: auth.py,v 1.41 2008/09/13 21:45:21 normanr Exp $
"""
Provides library with all Non-SASL and SASL authentication mechanisms.
Can be used both for client and transport authentication.
"""
from protocol import *
from client import PlugIn
import sha,base64,random,dispatcher,re
import md5
def HH(some): return md5.new(some).hexdigest()
def H(some): return md5.new(some).digest()
def C(some): return ':'.join(some)
class NonSASL(PlugIn):
""" Implements old Non-SASL (JEP-0078) authentication used in jabberd1.4 and transport authentication."""
def __init__(self,user,password,resource):
""" Caches username, password and resource for auth. """
PlugIn.__init__(self)
self.DBG_LINE='gen_auth'
self.user=user
self.password=password
self.resource=resource
def plugin(self,owner):
""" Determine the best auth method (digest/0k/plain) and use it for auth.
Returns used method name on success. Used internally. """
if not self.resource: return self.authComponent(owner)
self.DEBUG('Querying server about possible auth methods','start')
resp=owner.Dispatcher.SendAndWaitForResponse(Iq('get',NS_AUTH,payload=[Node('username',payload=[self.user])]))
if not isResultNode(resp):
self.DEBUG('No result node arrived! Aborting...','error')
return
iq=Iq(typ='set',node=resp)
query=iq.getTag('query')
query.setTagData('username',self.user)
query.setTagData('resource',self.resource)
if query.getTag('digest'):
self.DEBUG("Performing digest authentication",'ok')
query.setTagData('digest',sha.new(owner.Dispatcher.Stream._document_attrs['id']+self.password).hexdigest())
if query.getTag('password'): query.delChild('password')
method='digest'
elif query.getTag('token'):
token=query.getTagData('token')
seq=query.getTagData('sequence')
self.DEBUG("Performing zero-k authentication",'ok')
hash = sha.new(sha.new(self.password).hexdigest()+token).hexdigest()
for foo in xrange(int(seq)): hash = sha.new(hash).hexdigest()
query.setTagData('hash',hash)
method='0k'
else:
            self.DEBUG("Secure methods unsupported, performing plain text authentication",'warn')
query.setTagData('password',self.password)
method='plain'
resp=owner.Dispatcher.SendAndWaitForResponse(iq)
if isResultNode(resp):
            self.DEBUG('Successfully authenticated with remote host.','ok')
owner.User=self.user
owner.Resource=self.resource
owner._registered_name=owner.User+'@'+owner.Server+'/'+owner.Resource
return method
self.DEBUG('Authentication failed!','error')
def authComponent(self,owner):
""" Authenticate component. Send handshake stanza and wait for result. Returns "ok" on success. """
self.handshake=0
owner.send(Node(NS_COMPONENT_ACCEPT+' handshake',payload=[sha.new(owner.Dispatcher.Stream._document_attrs['id']+self.password).hexdigest()]))
owner.RegisterHandler('handshake',self.handshakeHandler,xmlns=NS_COMPONENT_ACCEPT)
while not self.handshake:
self.DEBUG("waiting on handshake",'notify')
owner.Process(1)
owner._registered_name=self.user
if self.handshake+1: return 'ok'
def handshakeHandler(self,disp,stanza):
""" Handler for registering in dispatcher for accepting transport authentication. """
if stanza.getName()=='handshake': self.handshake=1
else: self.handshake=-1
class SASL(PlugIn):
""" Implements SASL authentication. """
def __init__(self,username,password):
PlugIn.__init__(self)
self.username=username
self.password=password
def plugin(self,owner):
if not self._owner.Dispatcher.Stream._document_attrs.has_key('version'): self.startsasl='not-supported'
elif self._owner.Dispatcher.Stream.features:
try: self.FeaturesHandler(self._owner.Dispatcher,self._owner.Dispatcher.Stream.features)
except NodeProcessed: pass
else: self.startsasl=None
def auth(self):
""" Start authentication. Result can be obtained via "SASL.startsasl" attribute and will be
            either "success" or "failure". Note that successful auth will take at least
two Dispatcher.Process() calls. """
if self.startsasl: pass
elif self._owner.Dispatcher.Stream.features:
try: self.FeaturesHandler(self._owner.Dispatcher,self._owner.Dispatcher.Stream.features)
except NodeProcessed: pass
else: self._owner.RegisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
def plugout(self):
""" Remove SASL handlers from owner's dispatcher. Used internally. """
if self._owner.__dict__.has_key('features'): self._owner.UnregisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
if self._owner.__dict__.has_key('challenge'): self._owner.UnregisterHandler('challenge',self.SASLHandler,xmlns=NS_SASL)
if self._owner.__dict__.has_key('failure'): self._owner.UnregisterHandler('failure',self.SASLHandler,xmlns=NS_SASL)
if self._owner.__dict__.has_key('success'): self._owner.UnregisterHandler('success',self.SASLHandler,xmlns=NS_SASL)
def FeaturesHandler(self,conn,feats):
""" Used to determine if server supports SASL auth. Used internally. """
if not feats.getTag('mechanisms',namespace=NS_SASL):
self.startsasl='not-supported'
self.DEBUG('SASL not supported by server','error')
return
mecs=[]
for mec in feats.getTag('mechanisms',namespace=NS_SASL).getTags('mechanism'):
mecs.append(mec.getData())
self._owner.RegisterHandler('challenge',self.SASLHandler,xmlns=NS_SASL)
self._owner.RegisterHandler('failure',self.SASLHandler,xmlns=NS_SASL)
self._owner.RegisterHandler('success',self.SASLHandler,xmlns=NS_SASL)
if "ANONYMOUS" in mecs and self.username == None:
node=Node('auth',attrs={'xmlns':NS_SASL,'mechanism':'ANONYMOUS'})
elif "DIGEST-MD5" in mecs:
node=Node('auth',attrs={'xmlns':NS_SASL,'mechanism':'DIGEST-MD5'})
elif "PLAIN" in mecs:
sasl_data='%s\x00%s\x00%s'%(self.username+'@'+self._owner.Server,self.username,self.password)
node=Node('auth',attrs={'xmlns':NS_SASL,'mechanism':'PLAIN'},payload=[base64.encodestring(sasl_data).replace('\r','').replace('\n','')])
else:
self.startsasl='failure'
            self.DEBUG('I can only use DIGEST-MD5 and PLAIN mechanisms.','error')
return
self.startsasl='in-process'
self._owner.send(node.__str__())
raise NodeProcessed
def SASLHandler(self,conn,challenge):
""" Perform next SASL auth step. Used internally. """
if challenge.getNamespace()<>NS_SASL: return
if challenge.getName()=='failure':
self.startsasl='failure'
try: reason=challenge.getChildren()[0]
except: reason=challenge
            self.DEBUG('Failed SASL authentication: %s'%reason,'error')
raise NodeProcessed
elif challenge.getName()=='success':
self.startsasl='success'
self.DEBUG('Successfully authenticated with remote server.','ok')
handlers=self._owner.Dispatcher.dumpHandlers()
self._owner.Dispatcher.PlugOut()
dispatcher.Dispatcher().PlugIn(self._owner)
self._owner.Dispatcher.restoreHandlers(handlers)
self._owner.User=self.username
raise NodeProcessed
        ########################################
incoming_data=challenge.getData()
chal={}
data=base64.decodestring(incoming_data)
self.DEBUG('Got challenge:'+data,'ok')
for pair in re.findall('(\w+\s*=\s*(?:(?:"[^"]+")|(?:[^,]+)))',data):
key,value=[x.strip() for x in pair.split('=', 1)]
if value[:1]=='"' and value[-1:]=='"': value=value[1:-1]
chal[key]=value
if chal.has_key('qop') and 'auth' in [x.strip() for x in chal['qop'].split(',')]:
resp={}
resp['username']=self.username
resp['realm']=self._owner.Server
resp['nonce']=chal['nonce']
cnonce=''
for i in range(7):
cnonce+=hex(int(random.random()*65536*4096))[2:]
resp['cnonce']=cnonce
resp['nc']=('00000001')
resp['qop']='auth'
resp['digest-uri']='xmpp/'+self._owner.Server
A1=C([H(C([resp['username'],resp['realm'],self.password])),resp['nonce'],resp['cnonce']])
A2=C(['AUTHENTICATE',resp['digest-uri']])
response= HH(C([HH(A1),resp['nonce'],resp['nc'],resp['cnonce'],resp['qop'],HH(A2)]))
resp['response']=response
resp['charset']='utf-8'
sasl_data=''
for key in ['charset','username','realm','nonce','nc','cnonce','digest-uri','response','qop']:
if key in ['nc','qop','response','charset']: sasl_data+="%s=%s,"%(key,resp[key])
else: sasl_data+='%s="%s",'%(key,resp[key])
        ########################################
node=Node('response',attrs={'xmlns':NS_SASL},payload=[base64.encodestring(sasl_data[:-1]).replace('\r','').replace('\n','')])
self._owner.send(node.__str__())
elif chal.has_key('rspauth'): self._owner.send(Node('response',attrs={'xmlns':NS_SASL}).__str__())
else:
self.startsasl='failure'
            self.DEBUG('Failed SASL authentication: unknown challenge','error')
raise NodeProcessed
class Bind(PlugIn):
    """ Bind some JID to the current connection to let the router know of our location."""
def __init__(self):
PlugIn.__init__(self)
self.DBG_LINE='bind'
self.bound=None
def plugin(self,owner):
""" Start resource binding, if allowed at this time. Used internally. """
if self._owner.Dispatcher.Stream.features:
try: self.FeaturesHandler(self._owner.Dispatcher,self._owner.Dispatcher.Stream.features)
except NodeProcessed: pass
else: self._owner.RegisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
def plugout(self):
""" Remove Bind handler from owner's dispatcher. Used internally. """
self._owner.UnregisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
def FeaturesHandler(self,conn,feats):
""" Determine if server supports resource binding and set some internal attributes accordingly. """
if not feats.getTag('bind',namespace=NS_BIND):
self.bound='failure'
            self.DEBUG('Server did not request binding.','error')
return
if feats.getTag('session',namespace=NS_SESSION): self.session=1
else: self.session=-1
self.bound=[]
def Bind(self,resource=None):
""" Perform binding. Use provided resource name or random (if not provided). """
while self.bound is None and self._owner.Process(1): pass
if resource: resource=[Node('resource',payload=[resource])]
else: resource=[]
resp=self._owner.SendAndWaitForResponse(Protocol('iq',typ='set',payload=[Node('bind',attrs={'xmlns':NS_BIND},payload=resource)]))
if isResultNode(resp):
self.bound.append(resp.getTag('bind').getTagData('jid'))
self.DEBUG('Successfully bound %s.'%self.bound[-1],'ok')
jid=JID(resp.getTag('bind').getTagData('jid'))
self._owner.User=jid.getNode()
self._owner.Resource=jid.getResource()
resp=self._owner.SendAndWaitForResponse(Protocol('iq',typ='set',payload=[Node('session',attrs={'xmlns':NS_SESSION})]))
if isResultNode(resp):
self.DEBUG('Successfully opened session.','ok')
self.session=1
return 'ok'
else:
self.DEBUG('Session open failed.','error')
self.session=0
elif resp: self.DEBUG('Binding failed: %s.'%resp.getTag('error'),'error')
else:
self.DEBUG('Binding failed: timeout expired.','error')
return ''
class ComponentBind(PlugIn):
    """ ComponentBind some JID to the current connection to let the router know of our location."""
def __init__(self, sasl):
PlugIn.__init__(self)
self.DBG_LINE='bind'
self.bound=None
self.needsUnregister=None
self.sasl = sasl
def plugin(self,owner):
""" Start resource binding, if allowed at this time. Used internally. """
if not self.sasl:
self.bound=[]
return
if self._owner.Dispatcher.Stream.features:
try: self.FeaturesHandler(self._owner.Dispatcher,self._owner.Dispatcher.Stream.features)
except NodeProcessed: pass
else:
self._owner.RegisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
self.needsUnregister=1
def plugout(self):
""" Remove ComponentBind handler from owner's dispatcher. Used internally. """
if self.needsUnregister:
self._owner.UnregisterHandler('features',self.FeaturesHandler,xmlns=NS_STREAMS)
def FeaturesHandler(self,conn,feats):
""" Determine if server supports resource binding and set some internal attributes accordingly. """
if not feats.getTag('bind',namespace=NS_BIND):
self.bound='failure'
            self.DEBUG('Server did not request binding.','error')
return
if feats.getTag('session',namespace=NS_SESSION): self.session=1
else: self.session=-1
self.bound=[]
    def Bind(self,domain=None):
        """ Perform binding using the provided domain name. """
while self.bound is None and self._owner.Process(1): pass
if self.sasl:
xmlns = NS_COMPONENT_1
else:
xmlns = None
self.bindresponse = None
ttl = dispatcher.DefaultTimeout
self._owner.RegisterHandler('bind',self.BindHandler,xmlns=xmlns)
self._owner.send(Protocol('bind',attrs={'name':domain},xmlns=NS_COMPONENT_1))
while self.bindresponse is None and self._owner.Process(1) and ttl > 0: ttl-=1
self._owner.UnregisterHandler('bind',self.BindHandler,xmlns=xmlns)
resp=self.bindresponse
if resp and resp.getAttr('error'):
self.DEBUG('Binding failed: %s.'%resp.getAttr('error'),'error')
elif resp:
self.DEBUG('Successfully bound.','ok')
return 'ok'
else:
self.DEBUG('Binding failed: timeout expired.','error')
return ''
def BindHandler(self,conn,bind):
self.bindresponse = bind
| gpl-3.0 |
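The `SASLHandler` in the file above assembles its DIGEST-MD5 `response` value from helper functions `H`, `HH` and `C` defined elsewhere in the same module. A self-contained Python 3 sketch of that derivation is below; the helper bodies are assumptions based on RFC 2831, not copied from the file, and the function name `digest_md5_response` is illustrative:

```python
import hashlib

def H(data):
    # Raw MD5 digest (bytes in, bytes out)
    return hashlib.md5(data).digest()

def HH(data):
    # Hex-encoded MD5 digest as ASCII bytes
    return hashlib.md5(data).hexdigest().encode('ascii')

def C(parts):
    # Colon-join, as RFC 2831 specifies for A1/A2
    return b':'.join(parts)

def digest_md5_response(username, realm, password, nonce, cnonce,
                        nc=b'00000001', qop=b'auth', digest_uri=b''):
    # Same shape as the handler above: A1 mixes the hashed credentials
    # with the server nonce and client cnonce; A2 names the operation.
    a1 = C([H(C([username, realm, password])), nonce, cnonce])
    a2 = C([b'AUTHENTICATE', digest_uri])
    return HH(C([HH(a1), nonce, nc, cnonce, qop, HH(a2)]))
```

The result is a 32-character lowercase hex string, which the handler base64-wraps into the `<response/>` stanza.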
robmcmullen/peppy | peppy/vfs/itools/datatypes.py | 1 | 9299 | # -*- coding: UTF-8 -*-
# Copyright (C) 2004-2007 Juan David Ibáñez Palomar <jdavid@itaapy.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
class DataType(object):
default = None
def __init__(self, **kw):
for key in kw:
setattr(self, key, kw[key])
@staticmethod
def decode(data):
"""Deserializes the given byte string to a value with a type."""
raise NotImplementedError
@staticmethod
def encode(value):
"""Serializes the given value to a byte string."""
raise NotImplementedError
@staticmethod
def is_valid(value):
"""Checks whether the given value is valid.
For example, for a natural number the value will be an integer,
and this method will check that it is not a negative number.
"""
return True
import re, time, datetime, calendar, decimal, mimetypes
from copy import deepcopy
# Import from itools
from peppy.vfs.itools.uri import get_reference
#from itools.i18n import has_language
def has_language(stuff):
return False
#from base import DataType
def is_datatype(type, base_type):
"""
    Returns True if 'type' is a subclass or an instance of 'base_type'.
"""
try:
if issubclass(type, base_type):
return True
except TypeError:
pass
if isinstance(type, base_type):
return True
return False
class Integer(DataType):
@staticmethod
def decode(value):
if not value:
return None
return int(value)
@staticmethod
def encode(value):
if value is None:
return ''
return str(value)
class Decimal(DataType):
@staticmethod
def decode(value):
if not value:
return None
return decimal.Decimal(value)
@staticmethod
def encode(value):
if value is None:
return ''
return str(value)
class Unicode(DataType):
default = u''
@staticmethod
def decode(value, encoding='UTF-8'):
return unicode(value, encoding)
@staticmethod
def encode(value, encoding='UTF-8'):
return value.encode(encoding)
class String(DataType):
@staticmethod
def decode(value):
return value
@staticmethod
def encode(value):
return value
class Boolean(DataType):
default = False
@staticmethod
def decode(value):
return bool(int(value))
@staticmethod
def encode(value):
if value is True:
return '1'
elif value is False:
return '0'
else:
raise ValueError, 'value is not a boolean'
class URI(DataType):
@staticmethod
def decode(value):
return get_reference(value)
@staticmethod
def encode(value):
return str(value)
class Email(String):
@staticmethod
def is_valid(value):
expr = "^[0-9a-z]+[_\.0-9a-z-'+]*@([0-9a-z][0-9a-z-]+\.)+[a-z]{2,4}$"
return re.match(expr, value.lower()) is not None
class FileName(DataType):
"""
    A filename is a tuple consisting of a name, a type and a language.
XXX We should extend this to add the character encoding
"""
@staticmethod
def decode(data):
data = data.split('.')
# XXX The encoding (UTF-8, etc.)
n = len(data)
if n == 1:
return data[0], None, None
elif n == 2:
if '.%s' % data[-1].lower() in mimetypes.types_map:
name, type = data
return name, type, None
elif has_language(data[-1]):
name, language = data
return name, None, language
else:
return '.'.join(data), None, None
else:
# Default values
type = encoding = language = None
# The language
if '.%s' % data[-1].lower() in mimetypes.encodings_map:
encoding = data[-1]
data = data[:-1]
elif has_language(data[-1]):
language = data[-1]
data = data[:-1]
# The type
if '.%s' % data[-1].lower() in mimetypes.types_map:
type = data[-1]
data = data[:-1]
if encoding is not None:
type = '%s.%s' % (type, encoding)
# The name
name = '.'.join(data)
return name, type, language
@staticmethod
def encode(value):
name, type, language = value
if type is not None:
name = name + '.' + type
if language is not None:
name = name + '.' + language
return name
class HTTPDate(DataType):
# XXX As specified by RFC 1945 (HTTP 1.0), should check HTTP 1.1
# XXX The '%a', '%A' and '%b' format variables depend on the locale
# (that's what the Python docs say), so what happens if the locale
# in the server is not in English?
@staticmethod
def decode(data):
formats = [
# RFC-1123 (updates RFC-822, which uses two-digits years)
'%a, %d %b %Y %H:%M:%S GMT',
# RFC-850
'%A, %d-%b-%y %H:%M:%S GMT',
# ANSI C's asctime() format
'%a %b %d %H:%M:%S %Y',
# Non-Standard formats, sent by some clients
# Variation of RFC-1123, uses full day name (sent by Netscape 4)
'%A, %d %b %Y %H:%M:%S GMT',
# Variation of RFC-850, uses full month name and full year
            # (unknown sender)
'%A, %d-%B-%Y %H:%M:%S GMT',
]
for format in formats:
try:
tm = time.strptime(data, format)
except ValueError:
pass
else:
break
else:
raise ValueError, 'date "%s" is not an HTTP-Date' % data
return datetime.datetime.utcfromtimestamp(calendar.timegm(tm))
@staticmethod
def encode(mtime):
tm = time.gmtime(mtime)
return time.strftime('%a, %d %b %Y %H:%M:%S GMT', tm)
class QName(DataType):
@staticmethod
def decode(data):
if ':' in data:
return tuple(data.split(':', 1))
return None, data
@staticmethod
def encode(value):
if value[0] is None:
return value[1]
return '%s:%s' % value
class Tokens(DataType):
@staticmethod
def decode(data):
return tuple(data.split())
@staticmethod
def encode(value):
return ' '.join(value)
class Enumerate(String):
is_enumerate = True
options = []
@classmethod
def get_options(cls):
"""Returns a list of dictionaries in the format
[{'name': <str>, 'value': <unicode>}, ...]
The default implementation returns a copy of the "options" class
attribute. Both the list and the dictionaries may be modified
afterwards.
"""
return deepcopy(cls.options)
@classmethod
def is_valid(cls, name):
"""Returns True if the given name is part of this Enumerate's options.
"""
for option in cls.get_options():
if name == option['name']:
return True
return False
@classmethod
def get_namespace(cls, name):
"""Extends the options with information about which one is matching the
given name.
"""
options = cls.get_options()
if isinstance(name, list):
for option in options:
option['selected'] = option['name'] in name
else:
for option in options:
option['selected'] = option['name'] == name
return options
@classmethod
def get_value(cls, name, default=None):
"""Returns the value matching the given name, or the default value.
"""
for option in cls.get_options():
if option['name'] == name:
return option['value']
return default
############################################################################
# Medium decoder/encoders (not for values)
class XML(object):
@staticmethod
def encode(value):
        return value.replace('&', '&amp;').replace('<', '&lt;')
@staticmethod
def decode(value):
        return value.replace('&amp;', '&').replace('&lt;', '<')
class XMLAttribute(object):
@staticmethod
def encode(value):
        value = value.replace('&', '&amp;').replace('<', '&lt;')
        return value.replace('"', '&quot;')
@staticmethod
def decode(value):
        value = value.replace('&amp;', '&').replace('<', '&lt;')
        return value.replace('&quot;', '"')
| gpl-2.0 |
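The `QName` codec in the file above has no external dependencies, so its behavior is easy to exercise standalone. A Python 3 sketch of the same decode/encode pair (the standalone function names are illustrative):

```python
def qname_decode(data):
    # Split an XML qualified name into (prefix, local-name);
    # names without a prefix decode to (None, name).
    if ':' in data:
        prefix, local = data.split(':', 1)
        return prefix, local
    return None, data

def qname_encode(value):
    # Inverse of qname_decode: drop a None prefix, otherwise rejoin with ':'.
    prefix, local = value
    return local if prefix is None else '%s:%s' % (prefix, local)
```

Note that `split(':', 1)` keeps any further colons inside the local part, so the pair round-trips cleanly.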
staslev/incubator-beam | sdks/python/apache_beam/runners/direct/watermark_manager.py | 5 | 10299 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Manages watermarks of PCollections and AppliedPTransforms."""
from __future__ import absolute_import
import threading
from apache_beam import pipeline
from apache_beam import pvalue
from apache_beam.runners.direct.util import TimerFiring
from apache_beam.utils.timestamp import MAX_TIMESTAMP
from apache_beam.utils.timestamp import MIN_TIMESTAMP
from apache_beam.utils.timestamp import TIME_GRANULARITY
class WatermarkManager(object):
"""For internal use only; no backwards-compatibility guarantees.
Tracks and updates watermarks for all AppliedPTransforms."""
WATERMARK_POS_INF = MAX_TIMESTAMP
WATERMARK_NEG_INF = MIN_TIMESTAMP
def __init__(self, clock, root_transforms, value_to_consumers,
transform_keyed_states):
self._clock = clock # processing time clock
self._root_transforms = root_transforms
self._value_to_consumers = value_to_consumers
self._transform_keyed_states = transform_keyed_states
# AppliedPTransform -> TransformWatermarks
self._transform_to_watermarks = {}
for root_transform in root_transforms:
self._transform_to_watermarks[root_transform] = _TransformWatermarks(
self._clock, transform_keyed_states[root_transform], root_transform)
for consumers in value_to_consumers.values():
for consumer in consumers:
self._transform_to_watermarks[consumer] = _TransformWatermarks(
self._clock, transform_keyed_states[consumer], consumer)
for consumers in value_to_consumers.values():
for consumer in consumers:
self._update_input_transform_watermarks(consumer)
def _update_input_transform_watermarks(self, applied_ptransform):
assert isinstance(applied_ptransform, pipeline.AppliedPTransform)
input_transform_watermarks = []
for input_pvalue in applied_ptransform.inputs:
assert input_pvalue.producer or isinstance(input_pvalue, pvalue.PBegin)
if input_pvalue.producer:
input_transform_watermarks.append(
self.get_watermarks(input_pvalue.producer))
self._transform_to_watermarks[
applied_ptransform].update_input_transform_watermarks(
input_transform_watermarks)
def get_watermarks(self, applied_ptransform):
"""Gets the input and output watermarks for an AppliedPTransform.
If the applied_ptransform has not processed any elements, return a
watermark with minimum value.
Args:
applied_ptransform: AppliedPTransform to get the watermarks for.
Returns:
A snapshot (TransformWatermarks) of the input watermark and output
watermark for the provided transform.
"""
# TODO(altay): Composite transforms should have a composite watermark. Until
# then they are represented by their last transform.
while applied_ptransform.parts:
applied_ptransform = applied_ptransform.parts[-1]
return self._transform_to_watermarks[applied_ptransform]
def update_watermarks(self, completed_committed_bundle, applied_ptransform,
completed_timers, outputs, unprocessed_bundles,
keyed_earliest_holds):
assert isinstance(applied_ptransform, pipeline.AppliedPTransform)
self._update_pending(
completed_committed_bundle, applied_ptransform, completed_timers,
outputs, unprocessed_bundles)
tw = self.get_watermarks(applied_ptransform)
tw.hold(keyed_earliest_holds)
self._refresh_watermarks(applied_ptransform)
def _update_pending(self, input_committed_bundle, applied_ptransform,
completed_timers, output_committed_bundles,
unprocessed_bundles):
"""Updated list of pending bundles for the given AppliedPTransform."""
# Update pending elements. Filter out empty bundles. They do not impact
# watermarks and should not trigger downstream execution.
for output in output_committed_bundles:
if output.has_elements():
if output.pcollection in self._value_to_consumers:
consumers = self._value_to_consumers[output.pcollection]
for consumer in consumers:
consumer_tw = self._transform_to_watermarks[consumer]
consumer_tw.add_pending(output)
completed_tw = self._transform_to_watermarks[applied_ptransform]
completed_tw.update_timers(completed_timers)
for unprocessed_bundle in unprocessed_bundles:
completed_tw.add_pending(unprocessed_bundle)
assert input_committed_bundle or applied_ptransform in self._root_transforms
if input_committed_bundle and input_committed_bundle.has_elements():
completed_tw.remove_pending(input_committed_bundle)
def _refresh_watermarks(self, applied_ptransform):
assert isinstance(applied_ptransform, pipeline.AppliedPTransform)
tw = self.get_watermarks(applied_ptransform)
if tw.refresh():
for pval in applied_ptransform.outputs.values():
if isinstance(pval, pvalue.DoOutputsTuple):
pvals = (v for v in pval)
else:
pvals = (pval,)
for v in pvals:
if v in self._value_to_consumers: # If there are downstream consumers
consumers = self._value_to_consumers[v]
for consumer in consumers:
self._refresh_watermarks(consumer)
def extract_fired_timers(self):
all_timers = []
for applied_ptransform, tw in self._transform_to_watermarks.iteritems():
fired_timers = tw.extract_fired_timers()
if fired_timers:
all_timers.append((applied_ptransform, fired_timers))
return all_timers
class _TransformWatermarks(object):
"""Tracks input and output watermarks for an AppliedPTransform."""
def __init__(self, clock, keyed_states, transform):
self._clock = clock
self._keyed_states = keyed_states
self._input_transform_watermarks = []
self._input_watermark = WatermarkManager.WATERMARK_NEG_INF
self._output_watermark = WatermarkManager.WATERMARK_NEG_INF
self._keyed_earliest_holds = {}
self._pending = set() # Scheduled bundles targeted for this transform.
self._fired_timers = set()
self._lock = threading.Lock()
self._label = str(transform)
def update_input_transform_watermarks(self, input_transform_watermarks):
with self._lock:
self._input_transform_watermarks = input_transform_watermarks
def update_timers(self, completed_timers):
with self._lock:
for timer_firing in completed_timers:
self._fired_timers.remove(timer_firing)
@property
def input_watermark(self):
with self._lock:
return self._input_watermark
@property
def output_watermark(self):
with self._lock:
return self._output_watermark
def hold(self, keyed_earliest_holds):
with self._lock:
for key, hold_value in keyed_earliest_holds.iteritems():
self._keyed_earliest_holds[key] = hold_value
if (hold_value is None or
hold_value == WatermarkManager.WATERMARK_POS_INF):
del self._keyed_earliest_holds[key]
def add_pending(self, pending):
with self._lock:
self._pending.add(pending)
def remove_pending(self, completed):
with self._lock:
# Ignore repeated removes. This will happen if a transform has a repeated
# input.
if completed in self._pending:
self._pending.remove(completed)
def refresh(self):
with self._lock:
min_pending_timestamp = WatermarkManager.WATERMARK_POS_INF
has_pending_elements = False
for input_bundle in self._pending:
# TODO(ccy): we can have the Bundle class keep track of the minimum
# timestamp so we don't have to do an iteration here.
for wv in input_bundle.get_elements_iterable():
has_pending_elements = True
if wv.timestamp < min_pending_timestamp:
min_pending_timestamp = wv.timestamp
# If there is a pending element with a certain timestamp, we can at most
# advance our watermark to the maximum timestamp less than that
# timestamp.
pending_holder = WatermarkManager.WATERMARK_POS_INF
if has_pending_elements:
pending_holder = min_pending_timestamp - TIME_GRANULARITY
input_watermarks = [
tw.output_watermark for tw in self._input_transform_watermarks]
input_watermarks.append(WatermarkManager.WATERMARK_POS_INF)
producer_watermark = min(input_watermarks)
self._input_watermark = max(self._input_watermark,
min(pending_holder, producer_watermark))
earliest_hold = WatermarkManager.WATERMARK_POS_INF
for hold in self._keyed_earliest_holds.values():
if hold < earliest_hold:
earliest_hold = hold
new_output_watermark = min(self._input_watermark, earliest_hold)
advanced = new_output_watermark > self._output_watermark
self._output_watermark = new_output_watermark
return advanced
@property
def synchronized_processing_output_time(self):
return self._clock.time()
def extract_fired_timers(self):
with self._lock:
if self._fired_timers:
return False
fired_timers = []
for encoded_key, state in self._keyed_states.iteritems():
timers = state.get_timers(watermark=self._input_watermark)
for expired in timers:
window, (name, time_domain, timestamp) = expired
fired_timers.append(
TimerFiring(encoded_key, window, name, time_domain, timestamp))
self._fired_timers.update(fired_timers)
return fired_timers
| apache-2.0 |
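The core of `_TransformWatermarks.refresh` in the file above is a pure min/max rule. A simplified standalone sketch of that rule, using plain floats in place of Beam timestamps (the granularity constant is a stand-in, not Beam's actual `TIME_GRANULARITY`):

```python
MAX_TS = float('inf')
MIN_TS = float('-inf')
TIME_GRANULARITY = 1e-3  # stand-in value for illustration only

def advance_input_watermark(current, pending_timestamps, producer_watermarks):
    # The watermark may advance at most to just before the earliest
    # pending element, and never past the slowest upstream producer;
    # the outer max() keeps it monotonically non-decreasing.
    pending_holder = MAX_TS
    if pending_timestamps:
        pending_holder = min(pending_timestamps) - TIME_GRANULARITY
    producer_watermark = min(list(producer_watermarks) + [MAX_TS])
    return max(current, min(pending_holder, producer_watermark))
```

With no pending elements and no upstream producers (the root-transform case), the watermark jumps straight to positive infinity, which is what lets downstream transforms eventually fire their timers.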
cgar/servo | tests/wpt/web-platform-tests/conformance-checkers/tools/ins-del-datetime.py | 107 | 8420 | # -*- coding: utf-8 -*-
import os
ccdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
template = """<!DOCTYPE html>
<meta charset=utf-8>
"""
errors = {
"date-year-0000": "0000-12-09",
"date-month-00": "2002-00-15",
"date-month-13": "2002-13-15",
"date-0005-02-29": "0005-02-29",
"date-1969-02-29": "1969-02-29",
"date-1900-02-29": "1900-02-29",
"date-2100-02-29": "2100-02-29",
"date-2200-02-29": "2200-02-29",
"date-2014-02-29": "2014-02-29",
"date-day-04-31": "2002-04-31",
"date-day-06-31": "2002-06-31",
"date-day-09-31": "2002-09-31",
"date-day-11-31": "2002-11-31",
"date-day-01-32": "2002-01-32",
"date-day-03-32": "2002-03-32",
"date-day-05-32": "2002-05-32",
"date-day-07-32": "2002-07-32",
"date-day-08-32": "2002-08-32",
"date-day-10-32": "2002-10-32",
"date-day-12-32": "2002-12-32",
"date-iso8601-YYYYMMDD-no-hyphen": "20020929",
"date-leading-whitespace": " 2002-09-29",
"date-trailing-whitespace": "2002-09-29 ",
"date-month-one-digit": "2002-9-29",
"date-month-three-digits": "2002-011-29",
"date-year-three-digits": "782-09-29",
"date-day-one-digit": "2002-09-9",
"date-day-three-digits": "2002-11-009",
"date-day-missing-separator": "2014-0220",
"date-month-missing-separator": "201402-20",
"date-non-ascii-digit": "2002-09-29",
"date-trailing-U+0000": "2002-09-29�",
"date-trailing-pile-of-poo": "2002-09-29💩",
"date-wrong-day-separator": "2014-02:20",
"date-wrong-month-separator": "2014:02-20",
"date-year-negative": "-2002-09-29",
"date-leading-bom": "2002-09-29",
"global-date-and-time-60-minutes": "2011-11-12T00:60:00+08:00",
"global-date-and-time-60-seconds": "2011-11-12T00:00:60+08:00",
"global-date-and-time-2400": "2011-11-12T24:00:00+08:00",
"global-date-and-time-space-before-timezone": "2011-11-12T06:54:39 08:00",
"global-date-and-time-hour-one-digit": "2011-11-12T6:54:39-08:00",
"global-date-and-time-hour-three-digits": "2011-11-12T016:54:39-08:00",
"global-date-and-time-minutes-one-digit": "2011-11-12T16:4:39-08:00",
"global-date-and-time-minutes-three-digits": "2011-11-12T16:354:39-08:00",
"global-date-and-time-seconds-one-digit": "2011-11-12T16:54:9-08:00",
"global-date-and-time-seconds-three-digits": "2011-11-12T16:54:039-08:00",
"global-date-and-time-timezone-with-seconds": "2011-11-12T06:54:39-08:00:00",
"global-date-and-time-timezone-60-minutes": "2011-11-12T06:54:39-08:60",
"global-date-and-time-timezone-one-digit-hour": "2011-11-12T06:54:39-5:00",
"global-date-and-time-timezone-one-digit-minute": "2011-11-12T06:54:39-05:0",
"global-date-and-time-timezone-three-digit-hour": "2011-11-12T06:54:39-005:00",
"global-date-and-time-timezone-three-digit-minute": "2011-11-12T06:54:39-05:000",
"global-date-and-time-nbsp": "2011-11-12 14:54Z",
"global-date-and-time-missing-minutes-separator": "2011-11-12T1454Z",
"global-date-and-time-missing-seconds-separator": "2011-11-12T14:5439Z",
"global-date-and-time-wrong-minutes-separator": "2011-11-12T14-54Z",
"global-date-and-time-wrong-seconds-separator": "2011-11-12T14:54-39Z",
"global-date-and-time-lowercase-z": "2011-11-12T14:54z",
"global-date-and-time-with-both-T-and-space": "2011-11-12T 14:54Z",
"global-date-and-time-zero-digit-fraction": "2011-11-12T06:54:39.-08:00",
"global-date-and-time-four-digit-fraction": "2011-11-12T06:54:39.9291-08:00",
"global-date-and-time-bad-fraction-separator": "2011-11-12T14:54:39,929+0000",
"global-date-and-time-timezone-non-T-character": "2011-11-12+14:54Z",
"global-date-and-time-timezone-lowercase-t": "2011-11-12t14:54Z",
"global-date-and-time-timezone-multiple-spaces": "2011-11-12 14:54Z",
"global-date-and-time-timezone-offset-space-start": "2011-11-12T06:54:39.929 08:00",
"global-date-and-time-timezone-offset-colon-start": "2011-11-12T06:54:39.929:08:00",
"global-date-and-time-timezone-plus-2400": "2011-11-12T06:54:39-24:00",
"global-date-and-time-timezone-minus-2400": "2011-11-12T06:54:39-24:00",
"global-date-and-time-timezone-iso8601-two-digit": "2011-11-12T06:54:39-08",
"global-date-and-time-iso8601-hhmmss-no-colon": "2011-11-12T145439Z",
"global-date-and-time-iso8601-hhmm-no-colon": "2011-11-12T1454Z",
"global-date-and-time-iso8601-hh": "2011-11-12T14Z",
"year": "2006",
"yearless-date": "07-15",
"month": "2011-11",
"week": "2011-W46",
"time": "14:54:39",
"local-date-and-time": "2011-11-12T14:54",
"duration-P-form": "PT4H18M3S",
"duration-time-component": "4h 18m 3s",
}
warnings = {
"global-date-and-time-timezone-plus-1500": "2011-11-12T00:00:00+1500",
"global-date-and-time-timezone-minus-1300": "2011-11-12T00:00:00-1300",
"global-date-and-time-timezone-minutes-15": "2011-11-12T00:00:00+08:15",
"date-0214-09-29": "0214-09-29",
"date-20014-09-29": "20014-09-29",
"date-0004-02-29": "0004-02-29",
"date-year-five-digits": "12014-09-29",
}
non_errors = {
"date": "2002-09-29",
"date-2000-02-29": "2000-02-29",
"date-2400-02-29": "2400-02-29",
"date-1968-02-29": "1968-02-29",
"date-1900-02-28": "1900-02-28",
"date-2100-02-28": "2100-02-28",
"date-2200-02-28": "2200-02-28",
"date-2014-02-28": "2014-02-28",
"date-day-01-31": "2002-01-31",
"date-day-03-31": "2002-03-31",
"date-day-05-31": "2002-05-31",
"date-day-07-31": "2002-07-31",
"date-day-08-31": "2002-08-31",
"date-day-10-31": "2002-10-31",
"date-day-12-31": "2002-12-31",
"date-day-04-30": "2002-04-30",
"date-day-06-30": "2002-06-30",
"date-day-09-30": "2002-09-30",
"date-day-11-30": "2002-11-30",
"global-date-and-time-no-seconds": "2011-11-12T14:54Z",
"global-date-and-time-with-seconds": "2011-11-12T14:54:39+0000",
"global-date-and-time-with-one-digit-fraction": "2011-11-12T06:54:39.9-08:00",
"global-date-and-time-with-two-digit-fraction": "2011-11-12T06:54:39.92+07:00",
"global-date-and-time-with-three-digit-fraction": "2011-11-12T06:54:39.929-06:00",
"global-date-and-time-space": "2011-11-12 14:54Z",
"global-date-and-time-timezone": "2011-11-12T06:54:39+0900",
"global-date-and-time-timezone-30": "2011-11-12T06:54:39-0830",
"global-date-and-time-timezone-45": "2011-11-12T06:54:39-0845",
"global-date-and-time-timezone-with-colon": "2011-11-12T06:54:39-08:00",
"global-date-and-time-timezone-without-colon": "2011-11-12T06:54:39-0800",
}
for key in errors.keys():
error = errors[key]
template_ins = template
template_del = template
template_ins += '<title>%s</title>\n' % key
template_del += '<title>%s</title>\n' % key
template_ins += '<ins datetime="%s"></ins>' % errors[key]
template_del += '<del datetime="%s"></del>' % errors[key]
ins_file = open(os.path.join(ccdir, "html/elements/ins/%s-novalid.html" % key), 'wb')
ins_file.write(template_ins)
ins_file.close()
del_file = open(os.path.join(ccdir, "html/elements/del/%s-novalid.html" % key), 'wb')
del_file.write(template_del)
del_file.close()
for key in warnings.keys():
non_error = warnings[key]
template_ins = template
template_del = template
template_ins += '<title>%s</title>\n' % key
template_del += '<title>%s</title>\n' % key
template_ins += '<ins datetime="%s"></ins>' % warnings[key]
template_del += '<del datetime="%s"></del>' % warnings[key]
ins_file = open(os.path.join(ccdir, "html/elements/ins/%s-haswarn.html" % key), 'wb')
ins_file.write(template_ins)
ins_file.close()
del_file = open(os.path.join(ccdir, "html/elements/del/%s-haswarn.html" % key), 'wb')
del_file.write(template_del)
del_file.close()
ins_file = open(os.path.join(ccdir, "html/elements/ins/datetime-isvalid.html"), 'wb')
del_file = open(os.path.join(ccdir, "html/elements/del/datetime-isvalid.html"), 'wb')
ins_file.write(template + '<title>valid datetime</title>\n')
del_file.write(template + '<title>valid datetime</title>\n')
for key in non_errors.keys():
non_error = non_errors[key]
ins_file.write('<ins datetime="%s"></ins> <!-- %s -->\n' % (non_errors[key], key))
del_file.write('<del datetime="%s"></del> <!-- %s -->\n' % (non_errors[key], key))
ins_file.close()
del_file.close()
# vim: ts=4:sw=4
| mpl-2.0 |
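The generator above encodes which `datetime` strings a conformance checker should accept or reject. A rough standalone validator consistent with its date cases (an illustrative re-implementation, not the checker's actual code):

```python
import re

def is_valid_html_date(s):
    # YYYY-MM-DD with a four-or-more-digit year, month 01-12,
    # and a day that exists in that month (Gregorian leap rule).
    m = re.match(r'^(\d{4,})-(\d{2})-(\d{2})$', s)
    if not m:
        return False
    year, month, day = (int(g) for g in m.groups())
    if year == 0 or not 1 <= month <= 12:
        return False
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    if month == 2 and year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        days_in_month = 29
    return 1 <= day <= days_in_month
```

This accepts five-digit years like `12014-09-29`, which the generator treats as a warning rather than an error, so it tracks the error/non-error split rather than the warning set.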
Altair3/Tanks | docs/conf.py | 20 | 6455 | # -*- coding: utf-8 -*-
#
# BZRobots documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 24 12:37:33 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
sys.path.append('.')
sys.path.append('../')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo', 'sphinx.ext.coverage']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'BZRobots'
copyright = u'2009, Jared Forsyth, Andrew McNabb'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'BZRobotsdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'BZRobots.tex', u'BZRobots Documentation',
u'Jared Forsyth, Andrew McNabb', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
| gpl-3.0 |
tapanagupta/mi-instrument | mi/idk/result_set.py | 2 | 15901 | #!/usr/bin/env python
"""
@file coi-services/mi/idk/result_set.py
@author Bill French
@brief Read a result set file and use the data to verify
data particles.
Usage:
from mi.core.log import log
rs = ResultSet(result_set_file_path)
    if rs.verify(particles):
        log.info("Particle verified")
    else:
        log.error("Particle validation failed")
log.error(rs.report())
Result Set File Format:
result files are yml formatted files with a header and data section.
the data is stored in record elements with the key being the parameter name.
- two special fields are internal_timestamp and _index.
- internal timestamp can be input in text string or ntp float format
e.g.
# Result data for verifying particles. Comments are ignored.
header:
particle_object: CtdpfParserDataParticleKey
particle_type: ctdpf_parsed
data:
- _index: 1
internal_timestamp: 07/26/2013 21:01:03
temperature: 4.1870
conductivity: 10.5914
pressure: 161.06
oxygen: 2693.0
- _index: 2
internal_timestamp: 07/26/2013 21:01:04
temperature: 4.1872
conductivity: 10.5414
pressure: 161.16
oxygen: 2693.1
If a driver returns multiple particle types, the particle type must be specified in each particle
header:
particle_object: 'MULTIPLE'
particle_type: 'MULTIPLE'
data:
- _index: 1
particle_object: CtdpfParser1DataParticleKey
particle_type: ctdpf_parsed_1
internal_timestamp: 07/26/2013 21:01:03
temperature: 4.1870
conductivity: 10.5914
pressure: 161.06
oxygen: 2693.0
- _index: 2
particle_object: CtdpfParser2DataParticleKey
particle_type: ctdpf_parsed_2
internal_timestamp: 07/26/2013 21:01:04
temperature: 4.1872
conductivity: 10.5414
pressure: 161.16
oxygen: 2693.1
"""
__author__ = 'Bill French'
__license__ = 'Apache 2.0'
import re
import yaml
import ntplib
import time
from dateutil import parser
from mi.core.instrument.data_particle import DataParticle
from mi.core.log import get_logger ; log = get_logger()
DATE_PATTERN = r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z?$'
DATE_MATCHER = re.compile(DATE_PATTERN)
class ResultSet(object):
"""
Result Set object
Read result set files and compare to parsed particles.
"""
def __init__(self, result_file_path):
self.yaml = dict()
log.debug("read result file: %s" % result_file_path)
stream = file(result_file_path, 'r')
result_set = yaml.load(stream)
self._set_result_set(result_set)
self._clear_report()
def verify(self, particles):
"""
Verify particles passed in against result set read
in the ctor.
Ensure:
- Verify particles as a set
- Verify individual particle data
store verification result in the object and
return success or failure.
        @param particles: list of particles to verify.
@return True if verification successful, False otherwise
"""
self._clear_report()
result = True
if self._verify_set(particles):
result = self._verify_particles(particles)
else:
result = False
if not result:
log.error("Failed verification: \n%s", self.report())
return result
def report(self):
"""
Return an ascii formatted verification failure report.
@return string report
"""
if len(self._report):
return "\n".join(self._report)
else:
return None
###
# Helpers
###
def _add_to_report(self, messages, indent = 0):
"""
Add a message to the report buffer, pass an indent factor to
indent message in the ascii report.
"""
if not isinstance(messages, list): messages = [messages]
for message in messages:
ind = ""
for i in range(0, indent):
ind += " "
self._report.append("%s%s" %(ind, message))
log.warn(message)
def _clear_report(self):
"""
        Clear the report buffer.
"""
self._report = []
def _set_result_set(self, result_set):
"""
Take data from yaml file and store it in internal objects for
verifying data. Raise an exception on error.
"""
log.trace("Parsing result set header: %s", result_set)
self._result_set_header = result_set.get("header")
if not self._result_set_header: raise IOError("Missing result set header")
log.trace("Header: %s", self._result_set_header)
        if self._result_set_header.get("particle_object") is None:
            raise IOError("header.particle_object not defined")
        if self._result_set_header.get("particle_type") is None:
            raise IOError("header.particle_type not defined")
self._result_set_data = {}
data = result_set.get("data")
if not data: raise IOError("Missing result set data")
for particle in data:
index = particle.get("_index")
if index is None:
log.error("Particle definition missing _index: %s", particle)
raise IOError("Particle definition missing _index")
if self._result_set_data.get(index) is not None:
log.error("Duplicate particle definition for _index %s: %s", index, particle)
raise IOError("Duplicate definition found for index: %s"% index)
self._result_set_data[index] = particle
log.trace("Result set data: %s", self._result_set_data)
def _verify_set(self, particles):
"""
Verify the particles as a set match what we expect.
- All particles are of the expected type
- Check particle count
"""
errors = []
if len(self._result_set_data) != len(particles):
errors.append("result set records != particles to verify (%d != %d)" %
(len(self._result_set_data), len(particles)))
# if this driver returns multiple particle classes, type checking happens
# for each particle in _get_particle_data_errors
if self._result_set_header.get("particle_object") != 'MULTIPLE' and \
self._result_set_header.get("particle_type") != 'MULTIPLE':
for particle in particles:
if not self._verify_particle_type(particle):
log.error("particle type mismatch: %s", particle)
errors.append('particle type mismatch')
if len(errors):
self._add_to_report("Header verification failure")
self._add_to_report(errors, 1)
return False
return True
def _verify_particles(self, particles):
"""
Verify data in the particles individually.
- Verify order based on _index
- Verify parameter data values
- Verify there are extra or missing parameters
"""
result = True
index = 1
for particle in particles:
particle_def = self._result_set_data.get(index)
errors = []
# No particle definition, we fail
if particle_def is None:
errors.append("no particle result defined for index %d" % index)
# Otherwise lets do some validation
else:
errors += self._get_particle_header_errors(particle, particle_def)
errors += self._get_particle_data_errors(particle, particle_def)
if len(errors):
self._add_to_report("Failed particle validation for index %d" % index)
self._add_to_report(errors, 1)
result = False
index += 1
return result
def _verify_particle_type(self, particle):
"""
Verify that the object is a DataParticle and is the
correct type.
"""
if isinstance(particle, dict):
return True
expected = self._result_set_header['particle_object']
cls = particle.__class__.__name__
if not issubclass(particle.__class__, DataParticle):
log.error("type not a data particle")
if expected != cls:
log.error("type mismatch: %s != %s", expected, cls)
return False
return True
def _get_particle_header_errors(self, particle, particle_def):
"""
Verify all parameters defined in the header:
- Stream type
- Internal timestamp
"""
errors = []
particle_dict = self._particle_as_dict(particle)
particle_timestamp = particle_dict.get('internal_timestamp')
expected_time = particle_def.get('internal_timestamp')
allow_diff = .000001
# Verify the timestamp
if particle_timestamp and not expected_time:
errors.append("particle_timestamp defined in particle, but not expected")
elif not particle_timestamp and expected_time:
errors.append("particle_timestamp expected, but not defined in particle")
elif particle_timestamp:
if isinstance(expected_time, basestring):
expected = self._string_to_ntp_date_time(expected_time)
else:
                # if not a string, timestamp should already be in ntp
expected = expected_time
ts_diff = abs(particle_timestamp - expected)
log.debug("verify timestamp: abs(%s - %s) = %s", expected, particle_timestamp, ts_diff)
if ts_diff > allow_diff:
errors.append("expected internal_timestamp mismatch, %.9f != %.9f (%.9f)" %
(expected, particle_timestamp, ts_diff))
# verify the stream name, unless multiple are returned, type checking is done
# in get_particle_data_errors if so
particle_stream = particle_dict['stream_name']
if self._result_set_header['particle_type'] != 'MULTIPLE':
expected_stream = self._result_set_header['particle_type']
if particle_stream != expected_stream:
errors.append("expected stream name mismatch: %s != %s" %
(expected_stream, particle_stream))
return errors
def _get_particle_data_errors(self, particle, particle_def):
"""
Verify that all data parameters are present and have the
expected value
"""
errors = []
particle_dict = self._particle_as_dict(particle)
log.debug("Particle to test: %s", particle_dict)
log.debug("Particle definition: %s", particle_def)
particle_values = particle_dict['values']
# particle object and particle type keys will only be present for drivers
# returning multiple particle types
if 'particle_object' in particle_def:
expected_object = particle_def.get('particle_object')
expected_type = particle_def.get('particle_type', None)
# particle is either a class or dictionary, if it is a
# dictionary there is no class to compare
if not isinstance(particle, dict):
# particle is an actual class, check that the class matches
cls = particle.__class__.__name__
if not issubclass(particle.__class__, DataParticle):
errors.append("Particle class %s is not a subclass of DataParticle" %
particle.__class__)
if expected_object != cls:
errors.append("Class mismatch, expected: %s, received: %s" %
(expected_object, cls))
particle_stream = particle_dict['stream_name']
if particle_stream != expected_type:
log.debug("Stream type mismatch, expected: %s, received: %s" % (expected_type, particle_stream))
errors.append("Stream type mismatch, expected: %s, received: %s" % (expected_type, particle_stream))
expected_keys = []
for (key, value) in particle_def.items():
if(key not in ['_index', '_new_sequence', 'internal_timestamp', 'particle_object', 'particle_type']):
expected_keys.append(key)
particle_keys = []
pv = {}
for value in particle_values:
particle_keys.append(value['value_id'])
pv[value['value_id']] = value['value']
if sorted(expected_keys) != sorted(particle_keys):
errors.append("expected / particle keys mismatch: %s != %s" %
(sorted(expected_keys), sorted(particle_keys)))
else:
for key in expected_keys:
expected_value = particle_def[key]
particle_value = pv[key]
log.debug("Verify value for '%s'", key)
e = self._verify_value(expected_value, particle_value)
if e:
errors.append("'%s' %s" % (key, e))
return errors
def _verify_value(self, expected_value, particle_value):
"""
Verify a value matches what we expect. If the expected value (from the yaml)
is a dict then we expect the value to be in a 'value' field. Otherwise just
use the parameter as a raw value.
when passing a dict you can specify a 'round' factor.
"""
if isinstance(expected_value, dict):
ex_value = expected_value['value']
round_factor = expected_value.get('round')
else:
ex_value = expected_value
round_factor = None
if ex_value is None:
log.debug("No value to compare, ignoring")
return None
if round_factor is not None and particle_value is not None:
particle_value = round(particle_value, round_factor)
log.debug("rounded value to %s", particle_value)
if ex_value != particle_value:
return "value mismatch, %s != %s (decimals may be rounded)" % (ex_value, particle_value)
return None
def _string_to_ntp_date_time(self, datestr):
"""
Extract an ntp date from a ISO8601 formatted date string.
        @param datestr: an ISO8601 formatted string containing date information
@retval an ntp date number (seconds since jan 1 1900)
@throws InstrumentParameterException if datestr cannot be formatted to
a date.
"""
if not isinstance(datestr, basestring):
raise IOError('Value %s is not a string.' % str(datestr))
if not DATE_MATCHER.match(datestr):
raise ValueError("date string not in ISO8601 format YYYY-MM-DDTHH:MM:SS.SSSSZ")
try:
# This assumes input date string are in UTC (=GMT)
if datestr[-1:] != 'Z':
datestr += 'Z'
# the parsed date time represents a GMT time, but strftime
# does not take timezone into account, so these are seconds from the
# local start of 1970
local_sec = float(parser.parse(datestr).strftime("%s.%f"))
# remove the local time zone to convert to gmt (seconds since gmt jan 1 1970)
gmt_sec = local_sec - time.timezone
# convert to ntp (seconds since gmt jan 1 1900)
timestamp = ntplib.system_to_ntp_time(gmt_sec)
except ValueError as e:
raise ValueError('Value %s could not be formatted to a date. %s' % (str(datestr), e))
log.debug("converting time string '%s', unix_ts: %s ntp: %s", datestr, gmt_sec, timestamp)
return timestamp
def _particle_as_dict(self, particle):
if isinstance(particle, dict):
return particle
return particle.generate_dict()
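The `_string_to_ntp_date_time` method above relies on `dateutil` and `ntplib`. As a hedged, stdlib-only sketch of the same arithmetic (the names below are illustrative and not part of this module), the conversion is a UTC parse followed by a shift from the Unix epoch (1970) to the NTP epoch (1900):

```python
# Illustrative, stdlib-only sketch of the conversion performed by
# _string_to_ntp_date_time: parse a UTC ISO8601 string, then add the
# 1900-to-1970 epoch offset that ntplib applies.
import calendar
import time

NTP_DELTA = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def iso8601_to_ntp(datestr):
    # Assumes UTC input with a trailing 'Z' and no fractional seconds,
    # e.g. "2013-07-26T21:01:03Z"; the real method is more permissive.
    parsed = time.strptime(datestr, "%Y-%m-%dT%H:%M:%SZ")
    return calendar.timegm(parsed) + NTP_DELTA

print(iso8601_to_ntp("1970-01-01T00:00:00Z"))  # -> 2208988800
```

Unlike the method above, this sketch sidesteps the local-timezone round trip entirely by using `calendar.timegm`, which interprets the struct as UTC.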
| bsd-2-clause |
HesselTjeerdsma/Cyber-Physical-Pacman-Game | test programs/test/test/lib/python2.7/site-packages/wheel/paths.py | 70 | 1129 | """
Installation paths.
Map the .data/ subdirectory names to install paths.
"""
import distutils.command.install as install
import distutils.dist as dist
import os.path
import sys
def get_install_command(name):
# late binding due to potential monkeypatching
d = dist.Distribution({'name': name})
i = install.install(d)
i.finalize_options()
return i
def get_install_paths(name):
"""
Return the (distutils) install paths for the named dist.
A dict with ('purelib', 'platlib', 'headers', 'scripts', 'data') keys.
"""
paths = {}
i = get_install_command(name)
for key in install.SCHEME_KEYS:
paths[key] = getattr(i, 'install_' + key)
# pip uses a similar path as an alternative to the system's (read-only)
# include directory:
if hasattr(sys, 'real_prefix'): # virtualenv
paths['headers'] = os.path.join(sys.prefix,
'include',
'site',
'python' + sys.version[:3],
name)
return paths
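A hedged aside: `distutils`, which this module builds on, was removed from the standard library in Python 3.12. The `sysconfig` module exposes comparable install-scheme paths; this sketch only inspects the keys and is not a drop-in replacement for `get_install_paths()`:

```python
# Hedged sketch: sysconfig's install paths cover the same scheme keys
# this module reads from distutils (purelib, platlib, scripts, data).
import sysconfig

paths = sysconfig.get_paths()
print({k: bool(paths.get(k)) for k in ("purelib", "platlib", "scripts", "data")})
```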
| apache-2.0 |
weolar/miniblink49 | v8_5_1/tools/release/test_search_related_commits.py | 45 | 8270 | #!/usr/bin/env python
# Copyright 2015 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from collections import namedtuple
from os import path
import search_related_commits
import shutil
from subprocess import Popen, PIPE, check_call
import unittest
TEST_CONFIG = {
"GIT_REPO": "/tmp/test-v8-search-related-commits",
}
class TestSearchRelatedCommits(unittest.TestCase):
base_dir = TEST_CONFIG["GIT_REPO"]
def _execute_git(self, git_args):
fullCommand = ["git", "-C", self.base_dir] + git_args
p = Popen(args=fullCommand, stdin=PIPE,
stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
rc = p.returncode
if rc != 0:
raise Exception(err)
return output
def setUp(self):
if path.exists(self.base_dir):
shutil.rmtree(self.base_dir)
check_call(["git", "init", self.base_dir])
# Initial commit
message = """[turbofan] Sanitize language mode for javascript operators.
R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28059}"""
self._make_empty_commit(message)
message = """[crankshaft] Do some stuff
R=hablich@chromium.org
Review URL: https://codereview.chromium.org/1084243007
Cr-Commit-Position: refs/heads/master@{#28030}"""
self._make_empty_commit(message)
def tearDown(self):
if path.exists(self.base_dir):
shutil.rmtree(self.base_dir)
def _assert_correct_standard_result(
self, result, all_commits, hash_of_first_commit):
self.assertEqual(len(result), 1, "Master commit not found")
self.assertTrue(
result.get(hash_of_first_commit),
"Master commit is wrong")
self.assertEqual(
len(result[hash_of_first_commit]),
1,
"Child commit not found")
self.assertEqual(
all_commits[2],
result[hash_of_first_commit][0],
"Child commit wrong")
def _get_commits(self):
commits = self._execute_git(
["log", "--format=%H", "--reverse"]).splitlines()
return commits
def _make_empty_commit(self, message):
self._execute_git(["commit", "--allow-empty", "-m", message])
def testSearchByCommitPosition(self):
message = """Revert of some stuff.
> Cr-Commit-Position: refs/heads/master@{#28059}
R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28088}"""
self._make_empty_commit(message)
commits = self._get_commits()
hash_of_first_commit = commits[0]
result = search_related_commits.search_all_related_commits(
self.base_dir, hash_of_first_commit, "HEAD", None)
self._assert_correct_standard_result(result, commits, hash_of_first_commit)
def testSearchByTitle(self):
message = """Revert of some stuff.
> [turbofan] Sanitize language mode for javascript operators.
> Cr-Commit-Position: refs/heads/master@{#289}
R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28088}"""
self._make_empty_commit(message)
commits = self._get_commits()
hash_of_first_commit = commits[0]
result = search_related_commits.search_all_related_commits(
self.base_dir, hash_of_first_commit, "HEAD", None)
self._assert_correct_standard_result(result, commits, hash_of_first_commit)
def testSearchByHash(self):
commits = self._get_commits()
hash_of_first_commit = commits[0]
message = """Revert of some stuff.
> [turbofan] Sanitize language mode for javascript operators.
> Reverting """ + hash_of_first_commit + """
> R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28088}"""
self._make_empty_commit(message)
#Fetch again for an update
commits = self._get_commits()
hash_of_first_commit = commits[0]
result = search_related_commits.search_all_related_commits(
self.base_dir,
hash_of_first_commit,
"HEAD",
None)
self._assert_correct_standard_result(result, commits, hash_of_first_commit)
def testConsiderSeparator(self):
commits = self._get_commits()
hash_of_first_commit = commits[0]
# Related commits happen before separator so it is not a hit
message = """Revert of some stuff: Not a hit
> [turbofan] Sanitize language mode for javascript operators.
> Reverting """ + hash_of_first_commit + """
> R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28088}"""
self._make_empty_commit(message)
# Related commits happen before and after separator so it is a hit
commit_pos_of_master = "27088"
message = """Implement awesome feature: Master commit
Review URL: https://codereview.chromium.org/1084243235
Cr-Commit-Position: refs/heads/master@{#""" + commit_pos_of_master + "}"
self._make_empty_commit(message)
# Separator commit
message = """Commit which is the origin of the branch
Review URL: https://codereview.chromium.org/1084243456
Cr-Commit-Position: refs/heads/master@{#28173}"""
self._make_empty_commit(message)
# Filler commit
message = "Some unrelated commit: Not a hit"
self._make_empty_commit(message)
# Related commit after separator: a hit
message = "Patch r" + commit_pos_of_master +""" done
Review URL: https://codereview.chromium.org/1084243235
Cr-Commit-Position: refs/heads/master@{#29567}"""
self._make_empty_commit(message)
#Fetch again for an update
commits = self._get_commits()
hash_of_first_commit = commits[0]
hash_of_hit = commits[3]
hash_of_separator = commits[4]
hash_of_child_hit = commits[6]
result = search_related_commits.search_all_related_commits(
self.base_dir,
hash_of_first_commit,
"HEAD",
hash_of_separator)
self.assertTrue(result.get(hash_of_hit), "Hit not found")
self.assertEqual(len(result), 1, "More than one hit found")
self.assertEqual(
len(result.get(hash_of_hit)),
1,
"More than one child hit found")
self.assertEqual(
result.get(hash_of_hit)[0],
hash_of_child_hit,
"Wrong commit found")
def testPrettyPrint(self):
message = """Revert of some stuff.
> [turbofan] Sanitize language mode for javascript operators.
> Cr-Commit-Position: refs/heads/master@{#289}
R=mstarzinger@chromium.org
Review URL: https://codereview.chromium.org/1084243005
Cr-Commit-Position: refs/heads/master@{#28088}"""
self._make_empty_commit(message)
commits = self._get_commits()
hash_of_first_commit = commits[0]
OptionsStruct = namedtuple(
"OptionsStruct",
"git_dir of until all prettyprint separator verbose")
options = OptionsStruct(
git_dir= self.base_dir,
of= [hash_of_first_commit],
until= [commits[2]],
all= True,
prettyprint= True,
separator = None,
verbose=False)
output = []
for current_line in search_related_commits.main(options):
output.append(current_line)
self.assertIs(len(output), 2, "Not exactly two entries written")
self.assertTrue(output[0].startswith("+"), "Master entry not marked with +")
self.assertTrue(output[1].startswith("| "), "Child entry not marked with |")
def testNothingFound(self):
commits = self._get_commits()
self._execute_git(["commit", "--allow-empty", "-m", "A"])
self._execute_git(["commit", "--allow-empty", "-m", "B"])
self._execute_git(["commit", "--allow-empty", "-m", "C"])
self._execute_git(["commit", "--allow-empty", "-m", "D"])
hash_of_first_commit = commits[0]
result = search_related_commits.search_all_related_commits(
self.base_dir,
hash_of_first_commit,
"HEAD",
None)
self.assertEqual(len(result), 0, "Results found where none should be.")
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
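The `_execute_git` helper above wraps `Popen` with a return-code check. A hedged, self-contained sketch of that pattern, using `echo` (assumed present on a POSIX system) instead of `git` so it runs without a repository:

```python
# Minimal sketch of the _execute_git pattern: run a command, capture
# stdout/stderr, and raise if the exit code is non-zero.
from subprocess import PIPE, Popen

def run_checked(args):
    p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    output, err = p.communicate()
    if p.returncode != 0:
        raise Exception(err)
    return output

print(run_checked(["echo", "hello"]).strip())
```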
| apache-2.0 |
beppec56/xbmc | tools/EventClients/Clients/PS3 BD Remote/ps3_remote.py | 138 | 6987 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (C) 2008-2013 Team XBMC
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# This is a quick port of brandonj's PS3 remote script to use the event server
# for sending input events.
#
# The original script and documentation regarding the remote can be found at:
# http://forum.kodi.tv/showthread.php?tid=28765
#
#
# TODO:
# 1. Send keepalive ping at least once every 60 seconds to prevent timeouts
# 2. Permanent pairing
# 3. Detect if Kodi has been restarted (non trivial until broadcasting is
# implemented, until then maybe the HELO packet could be used instead of
# PING as keepalive
#
import sys
try:
# try loading modules from source directory
sys.path.append("../../lib/python")
from xbmcclient import *
from ps3.keymaps import keymap_remote as g_keymap # look here to change the keymapping
from bt.bt import *
ICON_PATH = "../../icons/"
except:
# fallback to system wide modules
from kodi.xbmcclient import *
from kodi.ps3.keymaps import keymap_remote as g_keymap # look here to change the keymapping
from kodi.bt.bt import *
from kodi.defs import *
import os
import time
xbmc = None
bticon = ICON_PATH + "/bluetooth.png"
def get_remote_address(remote, target_name = "BD Remote Control"):
global xbmc
target_connected = False
target_address = None
while target_connected is False:
xbmc.send_notification("Action Required!",
"Hold Start+Enter on your remote.",
bticon)
print "Searching for %s" % target_name
print "(Hold Start + Enter on remote to make it discoverable)"
time.sleep(2)
if not target_address:
try:
nearby_devices = bt_discover_devices()
except Exception, e:
print "Error performing bluetooth discovery"
print str(e)
xbmc.send_notification("Error", "Unable to find devices.", bticon)
time.sleep(5)
continue
for bdaddr in nearby_devices:
bname = bt_lookup_name( bdaddr )
addr = bt_lookup_addr ( bdaddr )
print "%s (%s) in range" % (bname,addr)
if target_name == bname:
target_address = addr
break
if target_address is not None:
print "Found %s with address %s" % (target_name, target_address)
xbmc.send_notification("Found Device",
"Pairing %s, please wait." % target_name,
bticon)
print "Attempting to pair with remote"
try:
remote.connect((target_address,19))
target_connected = True
print "Remote Paired.\a"
                xbmc.send_notification("Pairing Successful",
"Your remote was successfully "\
"paired and is ready to be used.",
bticon)
except:
del remote
remote = bt_create_socket()
target_address = None
xbmc.send_notification("Pairing Failed",
"An error occurred while attempting to "\
"pair.", bticon)
print "ERROR - Could Not Connect. Trying again..."
time.sleep(2)
else:
xbmc.send_notification("Error", "No remotes were found.", bticon)
print "Could not find BD Remote Control. Trying again..."
time.sleep(2)
return (remote,target_address)
def usage():
print """
PS3 Blu-Ray Remote Control Client for XBMC v0.1
Usage: ps3_remote.py <address> [port]
address => address of system that XBMC is running on
("localhost" if it is this machine)
port => port to send packets to
(default 9777)
"""
def process_keys(remote, xbmc):
"""
Return codes:
0 - key was processed normally
2 - socket read timeout
3 - PS and then Skip Plus was pressed (sequentially)
4 - PS and then Skip Minus was pressed (sequentially)
FIXME: move to enums
"""
done = 0
try:
xbmc.previous_key
except:
xbmc.previous_key = ""
xbmc.connect()
datalen = 0
try:
data = remote.recv(1024)
datalen = len(data)
except Exception, e:
if str(e)=="timed out":
return 2
time.sleep(2)
# some other read exception occured, so raise it
raise e
if datalen == 13:
keycode = data.encode("hex")[10:12]
if keycode == "ff":
xbmc.release_button()
return done
try:
# if the user presses the PS button followed by skip + or skip -
# return different codes.
if xbmc.previous_key == "43":
xbmc.previous_key = keycode
if keycode == "31": # skip +
return 3
elif keycode == "30": # skip -
return 4
# save previous key press
xbmc.previous_key = keycode
if g_keymap[keycode]:
xbmc.send_remote_button(g_keymap[keycode])
except Exception, e:
print "Unknown data: %s" % str(e)
return done
def main():
global xbmc, bticon
host = "127.0.0.1"
port = 9777
if len(sys.argv)>1:
try:
host = sys.argv[1]
port = sys.argv[2]
except:
pass
else:
return usage()
loop_forever = True
xbmc = XBMCClient("PS3 Bluetooth Remote",
icon_file=bticon)
while loop_forever is True:
target_connected = False
remote = bt_create_socket()
xbmc.connect(host, port)
(remote,target_address) = get_remote_address(remote)
while True:
if process_keys(remote, xbmc):
break
print "Disconnected."
try:
remote.close()
except:
print "Cannot close."
if __name__=="__main__":
main()
| gpl-2.0 |
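The `process_keys` routine above recovers the keycode by hex-encoding the 13-byte HID report and slicing characters 10–12 (i.e. byte 5). A standalone Python 3 sketch of that extraction — `bytes.hex()` replaces the Python 2 `str.encode("hex")`, and the sample report below is illustrative, not a captured packet:

```python
def extract_keycode(report: bytes) -> str:
    """Return the two hex digits at byte offset 5 (hex chars 10-12),
    mirroring data.encode("hex")[10:12] in process_keys() above."""
    if len(report) != 13:
        raise ValueError("expected a 13-byte HID report")
    return report.hex()[10:12]

# Illustrative report: byte 5 carries the keycode (0x43 maps to the PS
# button in the keymap above); a keycode of 0xff signals button release.
sample = bytes([0xa1, 0x01, 0x00, 0x00, 0x00, 0x43, 0x00,
                0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
print(extract_keycode(sample))  # -> 43
```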
neurodata/NeuroDataViz | python/examples/example.py | 2 | 1114 | from __future__ import print_function
import numpy as np
import ndviz
a = np.zeros((3, 100, 100, 100), dtype=np.uint8)
ix, iy, iz = np.meshgrid(* [np.linspace(0, 1, n) for n in a.shape[1:]], indexing='ij')
a[0, :, :, :] = np.abs(np.sin(4 * (ix + iy))) * 255
a[1, :, :, :] = np.abs(np.sin(4 * (iy + iz))) * 255
a[2, :, :, :] = np.abs(np.sin(4 * (ix + iz))) * 255
b = np.cast[np.uint32](np.floor(np.sqrt((ix - 0.5)**2 + (iy - 0.5)**2 + (iz - 0.5)**2) * 10))
b = np.pad(b, 1, 'constant')
viewer = ndviz.Viewer()
with viewer.txn() as s:
s.voxel_size = [10, 10, 10]
s.layers.append(
name='a',
layer=ndviz.LocalVolume(
data=a,
# offset is in nm, not voxels
offset=(200, 300, 150),
voxel_size=s.voxel_size,
),
shader="""
void main() {
emitRGB(vec3(toNormalized(getDataValue(0)),
toNormalized(getDataValue(1)),
toNormalized(getDataValue(2))));
}
""")
s.layers.append(
name='b', layer=ndviz.LocalVolume(
data=b,
voxel_size=s.voxel_size,
))
print(viewer)
| apache-2.0 |
djhaskin987/git-cola | cola/i18n.py | 11 | 2812 | """i18n and l10n support for git-cola"""
from __future__ import division, absolute_import, unicode_literals
import gettext as _gettext
import os
import sys
from cola import compat
from cola import core
from cola import resources
_null_translation = _gettext.NullTranslations()
# Python 3 compat
if not hasattr(_null_translation, 'ugettext'):
_null_translation.ugettext = _null_translation.gettext
_null_translation.ungettext = _null_translation.ngettext
_translation = _null_translation
def gettext(s):
txt = _translation.ugettext(s)
if txt[-6:-4] == '@@': # handle @@verb / @@noun
txt = txt[:-6]
return txt
def ngettext(s, p, n):
return _translation.ungettext(s, p, n)
def N_(s):
return gettext(s)
def install(locale):
global _translation
if sys.platform == 'win32':
_check_win32_locale()
if locale:
compat.setenv('LANGUAGE', locale)
compat.setenv('LANG', locale)
compat.setenv('LC_MESSAGES', locale)
_install_custom_language()
_gettext.textdomain('messages')
_translation = _gettext.translation('git-cola',
localedir=_get_locale_dir(),
fallback=True)
# Python 3 compat
if not hasattr(_translation, 'ugettext'):
_translation.ugettext = _translation.gettext
_translation.ungettext = _translation.ngettext
def uninstall():
global _translation
_translation = _null_translation
def _get_locale_dir():
return resources.prefix('share', 'locale')
def _install_custom_language():
"""Allow a custom language to be set in ~/.config/git-cola/language"""
lang_file = resources.config_home('language')
if not core.exists(lang_file):
return
try:
lang = core.read(lang_file).strip()
except:
return
if lang:
compat.setenv('LANGUAGE', lang)
def _check_win32_locale():
for i in ('LANGUAGE','LC_ALL','LC_MESSAGES','LANG'):
if os.environ.get(i):
break
else:
lang = None
import locale
try:
import ctypes
except ImportError:
# use only user's default locale
lang = locale.getdefaultlocale()[0]
else:
# using ctypes to determine all locales
lcid_user = ctypes.windll.kernel32.GetUserDefaultLCID()
lcid_system = ctypes.windll.kernel32.GetSystemDefaultLCID()
if lcid_user != lcid_system:
lcid = [lcid_user, lcid_system]
else:
lcid = [lcid_user]
lang = [locale.windows_locale.get(i) for i in lcid]
lang = ':'.join([i for i in lang if i])
# set lang code for gettext
if lang:
compat.setenv('LANGUAGE', lang)
| gpl-2.0 |
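The `gettext()` wrapper above strips a trailing `@@verb`/`@@noun` disambiguation marker from msgids that came back untranslated. A minimal sketch of that rule in isolation (assuming, as the wrapper does, that the marker is always `@@` followed by exactly four characters):

```python
def strip_context_marker(msgid):
    """Drop a trailing '@@xxxx' disambiguation marker, mirroring the
    txt[-6:-4] == '@@' check in the gettext() wrapper above."""
    if msgid[-6:-4] == '@@':
        return msgid[:-6]
    return msgid

print(strip_context_marker("Stage@@verb"))  # -> Stage
print(strip_context_marker("Stage"))        # -> Stage
```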
hzlf/openbroadcast | website/apps/spf/util/match.py | 1 | 6519 | import os
import time
import re
import pprint
import json
import datetime
from django.conf import settings
import musicbrainzngs
from obp_legacy.models import *
from spf.models import Match, Request
import logging
log = logging.getLogger(__name__)
class MediaMatch(object):
def __init__(self):
log = logging.getLogger('util.migrator.__init__')
musicbrainzngs.set_useragent("Example music app", "0.1", "http://example.com/music")
#musicbrainzngs.set_hostname("mb.anorg.net")
#musicbrainzngs.set_rate_limit(limit_or_interval=False)
self.pp = pprint.PrettyPrinter(indent=4)
#self.pp.pprint = lambda d: None
def match(self, obj):
log = logging.getLogger('util.match.match')
log.info('matching: %s' % obj.title)
for r in obj.results_mb:
print '--'
#print r
mb_id = r['id']
print mb_id
match, created = Match.objects.get_or_create(request=obj, mb_id=mb_id)
includes = [
'artists',
'releases',
'artist-credits',
'release-rels',
'release-group-rels',
'artist-rels',
'annotation',
'discids',
'label-rels',
'work-rels',
'recording-rels',
'media',
'isrcs',
]
try:
mr = musicbrainzngs.get_recording_by_id(id=mb_id, includes=includes)
mr = mr['recording']
# self.pp.pprint(mr)
match.title = mr['title']
# match data as json
match.results_mb = mr
match.artist = mr['artist-credit-phrase']
if 'length' in mr:
match.duration = mr['length']
# compose credits string
if 'artist-credit' in mr:
credits = ''
for c in mr['artist-credit']:
try:
astr = c['artist']['name']
credits += astr + "\n"
except:
pass
match.artist_credits = credits
# compose secondary credits string
if 'artist-relation-list' in mr:
credits = ''
for c in mr['artist-relation-list']:
try:
astr = c['artist']['name']
astr = '%s - [%s: %s]' % (astr, c['type'], ', '.join(c['attribute-list']))
credits += astr + "\n"
except Exception, e:
print e
pass
match.artist_credits_secondary = credits
if 'isrc-list' in mr:
self.pp.pprint(mr['isrc-list'])
try:
isrcs = "\n".join(mr['isrc-list'])
match.isrc_list = isrcs
except Exception, e:
print e
pass
# compose release string
if 'release-list' in mr:
releases = ''
for r in mr['release-list']:
"""
print '*******************************'
self.pp.pprint(r)
print '*******************************'
"""
includes = ['labels','release-rels', 'work-rels']
#mr = mr['recording']
try:
pass
except:
pass
try:
rstr = r['title']
rstr = '%s - [%s | %s]' % (rstr, r['country'], r['date'])
if 'medium-list' in r:
try:
rstr += ' - %s - Track# %s' % (r['medium-list'][0]['format'], r['medium-list'][0]['track-list'][0]['number'])
except:
pass
try:
tstr = ''
if 'label-info-list' in mrel:
lil = mrel['label-info-list'][0]
self.pp.pprint(lil)
if 'label' in lil:
tstr += ' %s ' % lil['label']['name']
if 'label-code' in lil['label']:
tstr += '(%s) ' % lil['label']['label-code']
if 'catalog-number' in lil:
tstr += 'catno: %s' % lil['catalog-number']
print '****'
print tstr
rstr += ' [ ' + tstr + ' ] '
except Exception, e:
print e
pass
try:
mrel = musicbrainzngs.get_release_by_id(id=r['id'], includes=includes)
mrel = mrel['release']
# self.pp.pprint(mrel)
rstr += ' [barcode: %s]' % (mrel['barcode'])
except:
pass
releases += rstr + "\n"
except Exception, e:
print e
pass
match.release_list = releases
# compose ISWC list
if 'work-relation-list' in mr:
try:
iswcs = "\n".join(mr['work-relation-list'][0]['work']['iswc-list'])
match.iswc_list = iswcs
except:
pass
match.status = 1
except Exception, e:
print 'GOT ERROR!!!: '
print e
print
match.status = 99
match.save()
| gpl-3.0 |
joshim5/TALE_Toolbox | TALE_Toolbox/public/views.py | 1 | 1320 | # -*- coding: utf-8 -*-
"""Public section, including homepage and signup."""
from flask import (Blueprint, request, render_template, flash, url_for,
redirect, session, make_response)
from TALE_Toolbox.utils import flash_errors
from TALE_Toolbox.computations import ReferenceSequenceGenerator
#from TALE_Toolbox.computations import generate_genbank
blueprint = Blueprint('public', __name__, static_folder="../static")
@blueprint.route("/", methods=["GET", "POST"])
def home():
return render_template("public/home.html")
@blueprint.route("/about/")
def about():
return render_template("public/about.html")
@blueprint.route("/generate/")
def generate():
sequence = request.args.get('sequence')
g_monomer = request.args.get('g_monomer')
backbone = request.args.get('backbone')
generator = ReferenceSequenceGenerator(sequence, g_monomer, backbone)
genbank = generator.generate_genbank()
response = make_response(genbank)
# Sequence with TF or Nuc appended to the name, e.g. "TALE_Nuc_TGAACAGATGC.gb"
filename = "TALE_Nuc_"
if backbone == "TALETF":
filename = "TALE_TF_"
filename = filename + sequence + ".gb"
response.headers["Content-Disposition"] = "attachment; filename=" + filename
response.status_code = 200
return response
| apache-2.0 |
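The download-naming logic in `generate()` above can be factored into a small pure function, which also makes it easy to test. This sketch mirrors the rule as written (a `TALETF` backbone selects the `TALE_TF_` prefix, anything else falls back to `TALE_Nuc_`):

```python
def genbank_filename(sequence, backbone):
    """Mirror the attachment-naming rule used in generate() above."""
    prefix = "TALE_TF_" if backbone == "TALETF" else "TALE_Nuc_"
    return prefix + sequence + ".gb"

print(genbank_filename("TGAACAGATGC", "TALETF"))  # -> TALE_TF_TGAACAGATGC.gb
print(genbank_filename("TGAACAGATGC", "other"))   # -> TALE_Nuc_TGAACAGATGC.gb
```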
Thraxis/pymedusa | lib/imdb/parser/http/bsouplxml/html.py | 143 | 1175 | """
parser.http.bsouplxml.html module (imdb.parser.http package).
This module adapts the beautifulsoup interface to lxml.html module.
Copyright 2008 H. Turgut Uyar <uyar@tekir.org>
2008 Davide Alberani <da@erlug.linux.it>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
import _bsoup as BeautifulSoup
def fromstring(html_string):
"""Return a DOM representation of the string."""
return BeautifulSoup.BeautifulSoup(html_string,
convertEntities=BeautifulSoup.BeautifulSoup.HTML_ENTITIES
).findChild(True)
| gpl-3.0 |
ewandor/home-assistant | homeassistant/components/binary_sensor/android_ip_webcam.py | 20 | 1894 | """
Support for IP Webcam binary sensors.
For more details about this platform, please refer to the documentation at
https://home-assistant.io/components/binary_sensor.android_ip_webcam/
"""
import asyncio
from homeassistant.components.binary_sensor import BinarySensorDevice
from homeassistant.components.android_ip_webcam import (
KEY_MAP, DATA_IP_WEBCAM, AndroidIPCamEntity, CONF_HOST, CONF_NAME)
DEPENDENCIES = ['android_ip_webcam']
@asyncio.coroutine
def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
"""Set up the IP Webcam binary sensors."""
if discovery_info is None:
return
host = discovery_info[CONF_HOST]
name = discovery_info[CONF_NAME]
ipcam = hass.data[DATA_IP_WEBCAM][host]
async_add_devices(
[IPWebcamBinarySensor(name, host, ipcam, 'motion_active')], True)
class IPWebcamBinarySensor(AndroidIPCamEntity, BinarySensorDevice):
"""Representation of an IP Webcam binary sensor."""
def __init__(self, name, host, ipcam, sensor):
"""Initialize the binary sensor."""
super().__init__(host, ipcam)
self._sensor = sensor
self._mapped_name = KEY_MAP.get(self._sensor, self._sensor)
self._name = '{} {}'.format(name, self._mapped_name)
self._state = None
self._unit = None
@property
def name(self):
"""Return the name of the binary sensor, if any."""
return self._name
@property
def is_on(self):
"""Return true if the binary sensor is on."""
return self._state
@asyncio.coroutine
def async_update(self):
"""Retrieve latest state."""
state, _ = self._ipcam.export_sensor(self._sensor)
self._state = state == 1.0
@property
def device_class(self):
"""Return the class of this device, from component DEVICE_CLASSES."""
return 'motion'
| apache-2.0 |
markhice/ghost-casper | node_modules_bak/grunt-docker/node_modules/docker/node_modules/pygmentize-bundled/vendor/pygments/pygments/styles/perldoc.py | 364 | 2175 | # -*- coding: utf-8 -*-
"""
pygments.styles.perldoc
~~~~~~~~~~~~~~~~~~~~~~~
Style similar to the style used in the `perldoc`_ code blocks.
.. _perldoc: http://perldoc.perl.org/
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace
class PerldocStyle(Style):
"""
Style similar to the style used in the perldoc code blocks.
"""
background_color = '#eeeedd'
default_style = ''
styles = {
Whitespace: '#bbbbbb',
Comment: '#228B22',
Comment.Preproc: '#1e889b',
Comment.Special: '#8B008B bold',
String: '#CD5555',
String.Heredoc: '#1c7e71 italic',
String.Other: '#cb6c20',
String.Regex: '#1c7e71',
Number: '#B452CD',
Operator.Word: '#8B008B',
Keyword: '#8B008B bold',
Keyword.Type: '#a7a7a7',
Name.Class: '#008b45 bold',
Name.Exception: '#008b45 bold',
Name.Function: '#008b45',
Name.Namespace: '#008b45 underline',
Name.Variable: '#00688B',
Name.Constant: '#00688B',
Name.Decorator: '#707a7c',
Name.Tag: '#8B008B bold',
Name.Attribute: '#658b00',
Name.Builtin: '#658b00',
Generic.Heading: 'bold #000080',
Generic.Subheading: 'bold #800080',
Generic.Deleted: '#aa0000',
Generic.Inserted: '#00aa00',
Generic.Error: '#aa0000',
Generic.Emph: 'italic',
Generic.Strong: 'bold',
Generic.Prompt: '#555555',
Generic.Output: '#888888',
Generic.Traceback: '#aa0000',
Error: 'bg:#e3d2d2 #a61717'
}
| mit |
noxora/flask-base | flask/lib/python3.4/site-packages/werkzeug/debug/repr.py | 254 | 9354 | # -*- coding: utf-8 -*-
"""
werkzeug.debug.repr
~~~~~~~~~~~~~~~~~~~
This module implements object representations for debugging purposes.
Unlike the default repr these reprs expose a lot more information and
produce HTML instead of ASCII.
Together with the CSS and JavaScript files of the debugger this gives
a colorful and more compact output.
:copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD.
"""
import sys
import re
import codecs
from traceback import format_exception_only
try:
from collections import deque
except ImportError: # pragma: no cover
deque = None
from werkzeug.utils import escape
from werkzeug._compat import iteritems, PY2, text_type, integer_types, \
string_types
missing = object()
_paragraph_re = re.compile(r'(?:\r\n|\r|\n){2,}')
RegexType = type(_paragraph_re)
HELP_HTML = '''\
<div class=box>
<h3>%(title)s</h3>
<pre class=help>%(text)s</pre>
</div>\
'''
OBJECT_DUMP_HTML = '''\
<div class=box>
<h3>%(title)s</h3>
%(repr)s
<table>%(items)s</table>
</div>\
'''
def debug_repr(obj):
"""Creates a debug repr of an object as HTML unicode string."""
return DebugReprGenerator().repr(obj)
def dump(obj=missing):
"""Print the object details to stdout._write (for the interactive
console of the web debugger).
"""
gen = DebugReprGenerator()
if obj is missing:
rv = gen.dump_locals(sys._getframe(1).f_locals)
else:
rv = gen.dump_object(obj)
sys.stdout._write(rv)
class _Helper(object):
"""Displays an HTML version of the normal help, for the interactive
debugger only because it requires a patched sys.stdout.
"""
def __repr__(self):
return 'Type help(object) for help about object.'
def __call__(self, topic=None):
if topic is None:
sys.stdout._write('<span class=help>%s</span>' % repr(self))
return
import pydoc
pydoc.help(topic)
rv = sys.stdout.reset()
if isinstance(rv, bytes):
rv = rv.decode('utf-8', 'ignore')
paragraphs = _paragraph_re.split(rv)
if len(paragraphs) > 1:
title = paragraphs[0]
text = '\n\n'.join(paragraphs[1:])
else: # pragma: no cover
title = 'Help'
text = paragraphs[0]
sys.stdout._write(HELP_HTML % {'title': title, 'text': text})
helper = _Helper()
def _add_subclass_info(inner, obj, base):
if isinstance(base, tuple):
for base in base:
if type(obj) is base:
return inner
elif type(obj) is base:
return inner
module = ''
if obj.__class__.__module__ not in ('__builtin__', 'exceptions'):
module = '<span class="module">%s.</span>' % obj.__class__.__module__
return '%s%s(%s)' % (module, obj.__class__.__name__, inner)
class DebugReprGenerator(object):
def __init__(self):
self._stack = []
def _sequence_repr_maker(left, right, base=object(), limit=8):
def proxy(self, obj, recursive):
if recursive:
return _add_subclass_info(left + '...' + right, obj, base)
buf = [left]
have_extended_section = False
for idx, item in enumerate(obj):
if idx:
buf.append(', ')
if idx == limit:
buf.append('<span class="extended">')
have_extended_section = True
buf.append(self.repr(item))
if have_extended_section:
buf.append('</span>')
buf.append(right)
return _add_subclass_info(u''.join(buf), obj, base)
return proxy
list_repr = _sequence_repr_maker('[', ']', list)
tuple_repr = _sequence_repr_maker('(', ')', tuple)
set_repr = _sequence_repr_maker('set([', '])', set)
frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset)
if deque is not None:
deque_repr = _sequence_repr_maker('<span class="module">collections.'
'</span>deque([', '])', deque)
del _sequence_repr_maker
def regex_repr(self, obj):
pattern = repr(obj.pattern)
if PY2:
pattern = pattern.decode('string-escape', 'ignore')
else:
pattern = codecs.decode(pattern, 'unicode-escape', 'ignore')
if pattern[:1] == 'u':
pattern = 'ur' + pattern[1:]
else:
pattern = 'r' + pattern
return u're.compile(<span class="string regex">%s</span>)' % pattern
def string_repr(self, obj, limit=70):
buf = ['<span class="string">']
escaped = escape(obj)
a = repr(escaped[:limit])
b = repr(escaped[limit:])
if isinstance(obj, text_type) and PY2:
buf.append('u')
a = a[1:]
b = b[1:]
if b != "''":
buf.extend((a[:-1], '<span class="extended">', b[1:], '</span>'))
else:
buf.append(a)
buf.append('</span>')
return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))
def dict_repr(self, d, recursive, limit=5):
if recursive:
return _add_subclass_info(u'{...}', d, dict)
buf = ['{']
have_extended_section = False
for idx, (key, value) in enumerate(iteritems(d)):
if idx:
buf.append(', ')
if idx == limit - 1:
buf.append('<span class="extended">')
have_extended_section = True
buf.append('<span class="pair"><span class="key">%s</span>: '
'<span class="value">%s</span></span>' %
(self.repr(key), self.repr(value)))
if have_extended_section:
buf.append('</span>')
buf.append('}')
return _add_subclass_info(u''.join(buf), d, dict)
def object_repr(self, obj):
r = repr(obj)
if PY2:
r = r.decode('utf-8', 'replace')
return u'<span class="object">%s</span>' % escape(r)
def dispatch_repr(self, obj, recursive):
if obj is helper:
return u'<span class="help">%r</span>' % helper
if isinstance(obj, (integer_types, float, complex)):
return u'<span class="number">%r</span>' % obj
if isinstance(obj, string_types):
return self.string_repr(obj)
if isinstance(obj, RegexType):
return self.regex_repr(obj)
if isinstance(obj, list):
return self.list_repr(obj, recursive)
if isinstance(obj, tuple):
return self.tuple_repr(obj, recursive)
if isinstance(obj, set):
return self.set_repr(obj, recursive)
if isinstance(obj, frozenset):
return self.frozenset_repr(obj, recursive)
if isinstance(obj, dict):
return self.dict_repr(obj, recursive)
if deque is not None and isinstance(obj, deque):
return self.deque_repr(obj, recursive)
return self.object_repr(obj)
def fallback_repr(self):
try:
info = ''.join(format_exception_only(*sys.exc_info()[:2]))
except Exception: # pragma: no cover
info = '?'
if PY2:
info = info.decode('utf-8', 'ignore')
return u'<span class="brokenrepr"><broken repr (%s)>' \
u'</span>' % escape(info.strip())
def repr(self, obj):
recursive = False
for item in self._stack:
if item is obj:
recursive = True
break
self._stack.append(obj)
try:
try:
return self.dispatch_repr(obj, recursive)
except Exception:
return self.fallback_repr()
finally:
self._stack.pop()
def dump_object(self, obj):
repr = items = None
if isinstance(obj, dict):
title = 'Contents of'
items = []
for key, value in iteritems(obj):
if not isinstance(key, string_types):
items = None
break
items.append((key, self.repr(value)))
if items is None:
items = []
repr = self.repr(obj)
for key in dir(obj):
try:
items.append((key, self.repr(getattr(obj, key))))
except Exception:
pass
title = 'Details for'
title += ' ' + object.__repr__(obj)[1:-1]
return self.render_object_dump(items, title, repr)
def dump_locals(self, d):
items = [(key, self.repr(value)) for key, value in d.items()]
return self.render_object_dump(items, 'Local variables in frame')
def render_object_dump(self, items, title, repr=None):
html_items = []
for key, value in items:
html_items.append('<tr><th>%s<td><pre class=repr>%s</pre>' %
(escape(key), value))
if not html_items:
html_items.append('<tr><td><em>Nothing</em>')
return OBJECT_DUMP_HTML % {
'title': escape(title),
'repr': repr and '<pre class=repr>%s</pre>' % repr or '',
'items': '\n'.join(html_items)
}
| mit |
agry/NGECore2 | scripts/mobiles/talus/lost_aqualish_marksman.py | 2 | 1662 | import sys
from services.spawn import MobileTemplate
from services.spawn import WeaponTemplate
from resources.datatables import WeaponType
from resources.datatables import Difficulty
from resources.datatables import Options
from java.util import Vector
def addTemplate(core):
mobileTemplate = MobileTemplate()
mobileTemplate.setCreatureName('lost_aqualish_marksman')
mobileTemplate.setLevel(43)
mobileTemplate.setDifficulty(Difficulty.NORMAL)
mobileTemplate.setMinSpawnDistance(4)
mobileTemplate.setMaxSpawnDistance(8)
mobileTemplate.setDeathblow(False)
mobileTemplate.setScale(1)
mobileTemplate.setSocialGroup("lost aqualish")
mobileTemplate.setAssistRange(6)
mobileTemplate.setStalker(True)
mobileTemplate.setOptionsBitmask(Options.AGGRESSIVE | Options.ATTACKABLE)
templates = Vector()
templates.add('object/mobile/shared_dressed_lost_aqualish_marksman_male_01.iff')
templates.add('object/mobile/shared_dressed_lost_aqualish_marksman_female_01.iff')
mobileTemplate.setTemplates(templates)
weaponTemplates = Vector()
weapontemplate = WeaponTemplate('object/weapon/ranged/rifle/shared_rifle_t21.iff', WeaponType.CARBINE, 1.0, 15, 'energy')
weaponTemplates.add(weapontemplate)
mobileTemplate.setWeaponTemplateVector(weaponTemplates)
attacks = Vector()
mobileTemplate.setDefaultAttack('rangedShot')
mobileTemplate.setAttacks(attacks)
lootPoolNames_1 = ['Junk']
lootPoolChances_1 = [100]
lootGroupChance_1 = 100
mobileTemplate.addToLootGroups(lootPoolNames_1,lootPoolChances_1,lootGroupChance_1)
core.spawnService.addMobileTemplate('lost_aqualish_marksman', mobileTemplate)
return | lgpl-3.0 |
pcabido/socorro | socorro/processor/breakpad_pipe_to_json.py | 10 | 9304 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
"""This module provides a function that will translate Minidump Stackwalk pipe
dump into a json format.
{
# "status": string, // OK | ERROR_* | SYMBOL_SUPPLIER_INTERRUPTED
"system_info": {
"os": string,
"os_ver": string,
"cpu_arch": string, // x86 | amd64 | arm | ppc | sparc
"cpu_info": string,
"cpu_count": int
},
"crash_info": {
"type": string,
"crash_address": string, // 0x[[:xdigit:]]+
"crashing_thread": int // | null
}
"main_module": int, // index into modules
"modules": [
// zero or more
{
"base_addr": string, // 0x[[:xdigit:]]+
"debug_file": string,
"debug_id": string, // [[:xdigit:]]{33}
"end_addr": string, // 0x[[:xdigit:]]+
"filename": string,
"version": string
}
],
"thread_count": int,
"threads": [
// for i in range(thread_count)
{
"frame_count": int,
"frames": [
// for i in range(frame_count)
{
"frame": int,
"module": string, // optional
"function": string, // optional
"file": string, // optional
"line": int, // optional
"offset": string, // 0x[[:xdigit:]]+
"module_offset": string, // 0x[[:xdigit:]]+ , optional
"function_offset": string // 0x[[:xdigit:]]+ , optional
}
]
}
],
// repeated here for ease of searching
// (max 10 frames)
"crashing_thread": {
"threads_index": int,
"total_frames": int,
"frames": [
// for i in range(length)
{
// as per "frames" entries from "threads" above
}
]
}
}
"""
from socorro.lib.util import DotDict
#==============================================================================
class DotDictWithPut(DotDict):
#--------------------------------------------------------------------------
def put_if_not_none(self, key, value):
if value is not None and value != '':
self[key] = value
#------------------------------------------------------------------------------
def pipe_dump_to_json_dump(pipe_dump_iterable):
"""given a list (or any iterable) of strings representing a MDSW pipe dump,
this function will convert it into a json format."""
json_dump = DotDict()
crashing_thread = None
module_counter = 0
thread_counter = 0
for a_line in pipe_dump_iterable:
parts = a_line.split('|')
if parts[0] == 'OS':
_extract_OS_info(parts, json_dump)
elif parts[0] == 'CPU':
_extract_CPU_info(parts, json_dump)
elif parts[0] == 'Crash':
crashing_thread = _extract_crash_info(parts, json_dump)
elif parts[0] == 'Module':
_extract_module_info(parts, json_dump, module_counter)
module_counter += 1
else:
try:
thread_number = int(parts[0])
except (ValueError, IndexError):
continue # unknown line type, ignore it
_extract_frame_info(parts, json_dump)
try:
json_dump.thread_count = len(json_dump.threads)
except KeyError: # no threads were ever found, 'threads' key was not made
json_dump.thread_count = 0
if crashing_thread is not None:
crashing_thread_frames = DotDict()
crashing_thread_frames.threads_index = crashing_thread
crashing_thread_frames.total_frames = \
len(json_dump.threads[crashing_thread].frames)
crashing_thread_frames.frames = \
json_dump.threads[crashing_thread].frames[:10]
json_dump.crashing_thread = crashing_thread_frames
return json_dump
#------------------------------------------------------------------------------
def _get(indexable_container, index, default):
"""like 'get' on a dict, but it works on lists, too"""
try:
return indexable_container[index]
except (IndexError, KeyError):
return default
#------------------------------------------------------------------------------
def _get_int(indexable_container, index, default):
"""try to get an int from an indexable container. If that fails
return the default"""
try:
return int(indexable_container[index])
# exceptions separated to make case coverage clearer
except (IndexError, KeyError):
# item not found in the container
return default
except ValueError:
# conversion to integer has failed
return default
#------------------------------------------------------------------------------
def _extract_OS_info(os_line, json_dump):
"""given a pipe dump OS line, extract the parts and put them in their
proper location within the json_dump"""
system_info = DotDictWithPut()
system_info.put_if_not_none('os', _get(os_line, 1, None))
system_info.put_if_not_none('os_ver', _get(os_line, 2, None))
if 'system_info' in json_dump:
json_dump.system_info.update(system_info)
else:
json_dump.system_info = system_info
#------------------------------------------------------------------------------
def _extract_CPU_info(cpu_line, json_dump):
"""given a pipe dump CPU line, extract the parts and put them in their
proper location within the json_dump"""
system_info = DotDictWithPut()
system_info.put_if_not_none('cpu_arch', _get(cpu_line, 1, None))
system_info.put_if_not_none('cpu_info', _get(cpu_line, 2, None))
system_info.put_if_not_none('cpu_count', _get_int(cpu_line, 3, None))
if 'system_info' in json_dump:
json_dump.system_info.update(system_info)
else:
json_dump.system_info = system_info
#------------------------------------------------------------------------------
def _extract_crash_info(crash_line, json_dump):
"""given a pipe dump CRASH line, extract the parts and put them in their
proper location within the json_dump"""
crash_info = DotDictWithPut()
crash_info.put_if_not_none('type', _get(crash_line, 1, None))
crash_info.put_if_not_none('crash_address', _get(crash_line, 2, None))
crash_info.put_if_not_none('crashing_thread', _get_int(crash_line, 3, None))
json_dump.crash_info = crash_info
return crash_info.get('crashing_thread', None)
#------------------------------------------------------------------------------
def _extract_module_info(module_line, json_dump, module_counter):
"""given a pipe dump Module line, extract the parts and put them in their
proper location within the json_dump"""
module = DotDictWithPut()
module.put_if_not_none('filename', _get(module_line, 1, None))
module.put_if_not_none('version', _get(module_line, 2, None))
module.put_if_not_none('debug_file', _get(module_line, 3, None))
module.put_if_not_none('debug_id', _get(module_line, 4, None))
module.put_if_not_none('base_addr', _get(module_line, 5, None))
module.put_if_not_none('end_addr', _get(module_line, 6, None))
is_main_module = _get_int(module_line, 7, 0)
if is_main_module:
json_dump.main_module = module_counter
if 'modules' not in json_dump:
json_dump.modules = []
json_dump.modules.append(module)
#------------------------------------------------------------------------------
def _extract_frame_info(frame_line, json_dump):
"""given a pipe dump Frame line, extract the parts and put them in their
proper location within the json_dump"""
if 'threads' not in json_dump:
json_dump.threads = []
thread_number = _get_int(frame_line, 0, None)
if thread_number is None:
return
if thread_number >=len(json_dump.threads):
# threads are supposed to arrive in order. We've not seen this thread
# before, fill in a new entry in the 'threads' section of the json_dump
# making sure that intervening missing threads have empty thread data
for i in range(thread_number - len(json_dump.threads) + 1):
thread = DotDict()
thread.frame_count = 0
thread.frames = []
json_dump.threads.append(thread)
# collect frame info from the pipe dump line
tmp_frame = _get_int(frame_line, 1, None)
tmp_module = _get(frame_line, 2, None)
tmp_function = _get(frame_line, 3, None)
tmp_file = _get(frame_line, 4, None)
tmp_line = _get_int(frame_line, 5, None)
tmp_offset = _get(frame_line, 6, None)
frame = DotDictWithPut()
frame.put_if_not_none('frame', tmp_frame)
frame.put_if_not_none('module', tmp_module)
frame.put_if_not_none('function', tmp_function)
frame.put_if_not_none('file', tmp_file)
frame.put_if_not_none('line', tmp_line)
if tmp_file and tmp_line is not None:
# skip offset entirely
pass
elif not tmp_file and tmp_function:
frame.function_offset = tmp_offset
elif not tmp_function and tmp_module:
frame.module_offset = tmp_offset
else:
frame.offset = tmp_offset
# save the frame info into the json
json_dump.threads[thread_number].frames.append(frame)
json_dump.threads[thread_number].frame_count += 1
| mpl-2.0 |
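A quick illustration of the transformation performed above — feeding a couple of synthetic pipe-dump lines through the same `'|'`-splitting rules. This standalone sketch reimplements only the OS/CPU handling (with a plain dict instead of `DotDict`), not the full module:

```python
def parse_pipe_header(lines):
    """Minimal re-implementation of the OS/CPU line handling above:
    split each line on '|' and file the fields under 'system_info'."""
    dump = {"system_info": {}}
    for line in lines:
        parts = line.split("|")
        if parts[0] == "OS":
            dump["system_info"]["os"] = parts[1]
            dump["system_info"]["os_ver"] = parts[2]
        elif parts[0] == "CPU":
            dump["system_info"]["cpu_arch"] = parts[1]
            dump["system_info"]["cpu_info"] = parts[2]
            dump["system_info"]["cpu_count"] = int(parts[3])
    return dump

info = parse_pipe_header([
    "OS|Windows NT|6.1.7601 Service Pack 1",
    "CPU|x86|GenuineIntel family 6 model 42|8",
])
print(info["system_info"]["cpu_count"])  # -> 8
```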
ssteo/moviepy | moviepy/video/io/ImageSequenceClip.py | 1 | 4955 | import os
import numpy as np
from ..VideoClip import VideoClip
from imageio import imread
class ImageSequenceClip(VideoClip):
"""
A VideoClip made from a series of images.
Parameters
-----------
sequence
Can be one of these:
- The name of a folder (containing only pictures). The pictures
will be considered in alphanumerical order.
- A list of names of image files. In this case you can choose to
load the pictures in memory pictures
- A list of Numpy arrays representing images. In this last case,
masks are not supported currently.
fps
Number of picture frames to read per second. Instead, you can provide
the duration of each image with durations (see below)
durations
List of the duration of each picture.
with_mask
Should the alpha layer of PNG images be considered as a mask ?
ismask
Will this sequence of pictures be used as an animated mask.
Notes
------
If your sequence is made of image files, the only image kept in
memory is the last image read.
"""
def __init__(self, sequence, fps=None, durations=None, with_mask=True,
ismask=False, load_images=False):
# CODE WRITTEN AS IT CAME, MAY BE IMPROVED IN THE FUTURE
if (fps is None) and (durations is None):
raise ValueError("Please provide either 'fps' or 'durations'.")
VideoClip.__init__(self, ismask=ismask)
# Parse the data
fromfiles = True
if isinstance(sequence, list):
if isinstance(sequence[0], str):
if load_images:
sequence = [imread(f) for f in sequence]
fromfiles = False
else:
fromfiles= True
else:
# sequence is already a list of numpy arrays
fromfiles = False
else:
# sequence is a folder name, make it a list of files:
fromfiles = True
sequence = sorted([os.path.join(sequence, f)
for f in os.listdir(sequence)])
#check that all the images are of the same size
if isinstance(sequence[0], str):
size = imread(sequence[0]).shape
else:
size = sequence[0].shape
for image in sequence:
image1 = image
if isinstance(image, str):
image1 = imread(image)
if size != image1.shape:
raise Exception("Moviepy: ImageSequenceClip requires all images to be the same size")
self.fps = fps
if fps is not None:
durations = [1.0 / fps for _ in sequence]
self.images_starts = [1.0 * i / fps - np.finfo(np.float32).eps for i in range(len(sequence))]
else:
self.images_starts = [0]+list(np.cumsum(durations))
self.durations = durations
self.duration = sum(durations)
self.end = self.duration
self.sequence = sequence
def find_image_index(t):
return max([i for i in range(len(self.sequence))
if self.images_starts[i]<=t])
if fromfiles:
self.lastindex = None
self.lastimage = None
def make_frame(t):
index = find_image_index(t)
if index != self.lastindex:
self.lastimage = imread(self.sequence[index])[:,:,:3]
self.lastindex = index
return self.lastimage
if with_mask and (imread(self.sequence[0]).shape[2] == 4):
self.mask = VideoClip(ismask=True)
self.mask.lastindex = None
self.mask.lastimage = None
def mask_make_frame(t):
index = find_image_index(t)
if index != self.mask.lastindex:
frame = imread(self.sequence[index])[:,:,3]
self.mask.lastimage = frame.astype(float)/255
self.mask.lastindex = index
return self.mask.lastimage
self.mask.make_frame = mask_make_frame
self.mask.size = mask_make_frame(0).shape[:2][::-1]
else:
def make_frame(t):
index = find_image_index(t)
return self.sequence[index][:,:,:3]
if with_mask and (self.sequence[0].shape[2] == 4):
self.mask = VideoClip(ismask=True)
def mask_make_frame(t):
index = find_image_index(t)
return 1.0*self.sequence[index][:,:,3]/255
self.mask.make_frame = mask_make_frame
self.mask.size = mask_make_frame(0).shape[:2][::-1]
self.make_frame = make_frame
self.size = make_frame(0).shape[:2][::-1]
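The timing logic above (images_starts and find_image_index) can be sketched independently of moviepy. This is an editor's illustration, not part of the original file: each image starts at the cumulative sum of the previous durations, and a time t selects the last image whose start does not exceed t.

```python
# Editor's illustration (independent of moviepy) of how ImageSequenceClip
# maps a time t to a frame index.

def images_starts(durations):
    # cumulative start time of each image
    starts, total = [], 0.0
    for d in durations:
        starts.append(total)
        total += d
    return starts

def find_image_index(starts, t):
    # index of the last image whose start time does not exceed t
    return max(i for i, s in enumerate(starts) if s <= t)

starts = images_starts([0.5, 0.5, 1.0])   # -> [0.0, 0.5, 1.0]
```

The real class additionally subtracts a float32 epsilon from each start so that frame boundaries round down consistently.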
| mit |
geekzoo/linux | linux-3.14.31/tools/perf/scripts/python/futex-contention.py | 11261 | 1486 | # futex contention
# (c) 2010, Arnaldo Carvalho de Melo <acme@redhat.com>
# Licensed under the terms of the GNU GPL License version 2
#
# Translation of:
#
# http://sourceware.org/systemtap/wiki/WSFutexContention
#
# to perf python scripting.
#
# Measures futex contention
import os, sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from Util import *
thread_thislock = {}
thread_blocktime = {}
lock_waits = {} # long-lived stats on (tid,lock) blockage elapsed time
process_names = {} # long-lived pid-to-execname mapping
def syscalls__sys_enter_futex(event, ctxt, cpu, s, ns, tid, comm,
nr, uaddr, op, val, utime, uaddr2, val3):
cmd = op & FUTEX_CMD_MASK
if cmd != FUTEX_WAIT:
return # we don't care about originators of WAKE events
process_names[tid] = comm
thread_thislock[tid] = uaddr
thread_blocktime[tid] = nsecs(s, ns)
def syscalls__sys_exit_futex(event, ctxt, cpu, s, ns, tid, comm,
nr, ret):
if tid in thread_blocktime:
elapsed = nsecs(s, ns) - thread_blocktime[tid]
add_stats(lock_waits, (tid, thread_thislock[tid]), elapsed)
del thread_blocktime[tid]
del thread_thislock[tid]
def trace_begin():
print "Press control+C to stop and show the summary"
def trace_end():
for (tid, lock) in lock_waits:
min, max, avg, count = lock_waits[tid, lock]
print "%s[%d] lock %x contended %d times, %d avg ns" % \
(process_names[tid], tid, lock, count, avg)
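The per-(tid, lock) statistics printed above come from add_stats in perf's Util module. A standalone sketch of such a (min, max, avg, count) accumulator is shown below; the function body and the running-mean rule are the editor's assumptions, and perf's actual Util.add_stats may average differently.

```python
# Editor's sketch of a (min, max, running-avg, count) accumulator in the
# spirit of perf's Util.add_stats; not copied from perf.
def add_stats(stats, key, value):
    if key not in stats:
        stats[key] = (value, value, value, 1)
    else:
        lo, hi, avg, count = stats[key]
        new_count = count + 1
        new_avg = (avg * count + value) / float(new_count)  # running mean
        stats[key] = (min(lo, value), max(hi, value), new_avg, new_count)

waits = {}
add_stats(waits, (123, 0xdead), 1000)   # first wait: 1000 ns
add_stats(waits, (123, 0xdead), 3000)   # second wait: 3000 ns
# waits[(123, 0xdead)] -> (1000, 3000, 2000.0, 2)
```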
| gpl-2.0 |
mascot6699/Hackapi-Demo | src/accounts/forms.py | 69 | 3356 | from __future__ import unicode_literals
from django.contrib.auth.forms import AuthenticationForm
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Div, Submit, HTML, Button, Row, Field
from crispy_forms.bootstrap import AppendedText, PrependedText, FormActions
from authtools import forms as authtoolsforms
from django.contrib.auth import forms as authforms
from django.core.urlresolvers import reverse
class LoginForm(AuthenticationForm):
remember_me = forms.BooleanField(required=False, initial=False)
def __init__(self, *args, **kwargs):
super(LoginForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.fields["username"].widget.input_type = "email" # ugly hack
self.helper.layout = Layout(
Field('username', placeholder="Enter Email", autofocus=""),
Field('password', placeholder="Enter Password"),
HTML('<a href="{}">Forgot Password?</a>'.format(
reverse("accounts:password-reset"))),
Field('remember_me'),
Submit('sign_in', 'Log in',
css_class="btn btn-lg btn-primary btn-block"),
)
class SignupForm(authtoolsforms.UserCreationForm):
def __init__(self, *args, **kwargs):
super(SignupForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.fields["email"].widget.input_type = "email" # ugly hack
self.helper.layout = Layout(
Field('email', placeholder="Enter Email", autofocus=""),
Field('name', placeholder="Enter Full Name"),
Field('password1', placeholder="Enter Password"),
Field('password2', placeholder="Re-enter Password"),
Submit('sign_up', 'Sign up', css_class="btn-warning"),
)
class PasswordChangeForm(authforms.PasswordChangeForm):
def __init__(self, *args, **kwargs):
super(PasswordChangeForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.layout = Layout(
Field('old_password', placeholder="Enter old password",
autofocus=""),
Field('new_password1', placeholder="Enter new password"),
Field('new_password2', placeholder="Enter new password (again)"),
Submit('pass_change', 'Change Password', css_class="btn-warning"),
)
class PasswordResetForm(authtoolsforms.FriendlyPasswordResetForm):
def __init__(self, *args, **kwargs):
super(PasswordResetForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.layout = Layout(
Field('email', placeholder="Enter email",
autofocus=""),
Submit('pass_reset', 'Reset Password', css_class="btn-warning"),
)
class SetPasswordForm(authforms.SetPasswordForm):
def __init__(self, *args, **kwargs):
super(SetPasswordForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.layout = Layout(
Field('new_password1', placeholder="Enter new password",
autofocus=""),
Field('new_password2', placeholder="Enter new password (again)"),
Submit('pass_change', 'Change Password', css_class="btn-warning"),
)
| mit |
bdaroz/the-blue-alliance | tests/test_fms_api_event_list_parser.py | 4 | 20655 | import datetime
import json
import unittest2
from datafeeds.parsers.fms_api.fms_api_event_list_parser import FMSAPIEventListParser
from google.appengine.ext import ndb
from google.appengine.ext import testbed
from consts.event_type import EventType
from models.sitevar import Sitevar
class TestFMSAPIEventListParser(unittest2.TestCase):
def setUp(self):
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_datastore_v3_stub()
self.testbed.init_memcache_stub()
ndb.get_context().clear_cache() # Prevent data from leaking between tests
def tearDown(self):
self.testbed.deactivate()
def test_parse_event_list(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
self.assertTrue(isinstance(events, list))
self.assertTrue(isinstance(districts, list))
# File has 6 events, but we ignore CMP divisions (only subdivisions), so only 5 are expected back
self.assertEquals(len(events), 5)
self.assertEquals(len(districts), 1)
def test_parse_regional_event(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
event = events[0]
self.assertEquals(event.key_name, "2015nyny")
self.assertEquals(event.name, "New York City Regional")
self.assertEquals(event.short_name, "New York City")
self.assertEquals(event.event_short, "nyny")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2015, month=3, day=12, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2015, month=3, day=15, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Jacob K. Javits Convention Center")
self.assertEquals(event.city, "New York")
self.assertEquals(event.state_prov, "NY")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2015)
self.assertEquals(event.event_type_enum, EventType.REGIONAL)
self.assertEquals(event.district_key, None)
def test_parse_district_event(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
event = events[1]
district = districts[0]
self.assertEquals(event.key_name, "2015cthar")
self.assertEquals(event.name, "NE District - Hartford Event")
self.assertEquals(event.short_name, "Hartford")
self.assertEquals(event.event_short, "cthar")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2015, month=3, day=27, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2015, month=3, day=29, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Hartford Public High School")
self.assertEquals(event.city, "Hartford")
self.assertEquals(event.state_prov, "CT")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2015)
self.assertEquals(event.event_type_enum, EventType.DISTRICT)
self.assertEquals(event.district_key, district.key)
self.assertEquals(district.key_name, "2015ne")
self.assertEquals(district.abbreviation, "ne")
self.assertEquals(district.year, 2015)
def test_parse_district_cmp(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
event = events[2]
self.assertEquals(event.key_name, "2015necmp")
self.assertEquals(event.name, "NE FIRST District Championship presented by United Technologies")
self.assertEquals(event.short_name, "NE FIRST")
self.assertEquals(event.event_short, "necmp")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2015, month=4, day=8, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2015, month=4, day=11, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Sports and Recreation Center, WPI")
self.assertEquals(event.city, "Worcester")
self.assertEquals(event.state_prov, "MA")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2015)
self.assertEquals(event.event_type_enum, EventType.DISTRICT_CMP)
self.assertEquals(event.district_key, districts[0].key)
def test_parse_cmp_subdivision(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
event = events[3]
self.assertEquals(event.key_name, "2015tes")
self.assertEquals(event.name, "Tesla Division")
self.assertEquals(event.short_name, "Tesla")
self.assertEquals(event.event_short, "tes")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2015, month=4, day=22, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2015, month=4, day=25, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Edward Jones Dome")
self.assertEquals(event.city, "St. Louis")
self.assertEquals(event.state_prov, "MO")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2015)
self.assertEquals(event.event_type_enum, EventType.CMP_DIVISION)
self.assertEquals(event.district_key, None)
def test_parse_offseason(self):
with open('test_data/fms_api/2015_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2015).parse(json.loads(f.read()))
event = events[4]
self.assertEquals(event.key_name, "2015iri")
self.assertEquals(event.name, "Indiana Robotics Invitational")
self.assertEquals(event.short_name, "Indiana Robotics Invitational")
self.assertEquals(event.event_short, "iri")
self.assertEquals(event.official, False)
self.assertEquals(event.start_date, datetime.datetime(year=2015, month=7, day=17, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2015, month=7, day=18, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Lawrence North HS")
self.assertEquals(event.city, "Indianapolis")
self.assertEquals(event.state_prov, "IN")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2015)
self.assertEquals(event.event_type_enum, EventType.OFFSEASON)
self.assertEquals(event.district_key, None)
def test_parse_2017_event(self):
with open('test_data/fms_api/2017_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2017).parse(json.loads(f.read()))
self.assertEqual(len(events), 165)
self.assertEqual(len(districts), 10)
event = events[16]
self.assertEquals(event.key_name, "2017casj")
self.assertEquals(event.name, "Silicon Valley Regional")
self.assertEquals(event.short_name, "Silicon Valley")
self.assertEquals(event.event_short, "casj")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2017, month=3, day=29, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2017, month=4, day=1, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "San Jose State University - The Event Center")
self.assertEquals(event.city, "San Jose")
self.assertEquals(event.state_prov, "CA")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2017)
self.assertEquals(event.event_type_enum, EventType.REGIONAL)
self.assertEquals(event.district_key, None)
# New in 2017
self.assertEquals(event.website, "http://www.firstsv.org")
def test_parse_2017_events_with_cmp_hacks(self):
hack_sitevar = Sitevar(id='cmp_registration_hacks')
hack_sitevar.contents = {
"event_name_override": [
{"event": "2017cmpmo", "name": "FIRST Championship Event", "short_name": "Championship"},
{"event": "2017cmptx", "name": "FIRST Championship Event", "short_name": "Championship"}],
"set_start_to_last_day": ["2017cmptx", "2017cmpmo"],
"divisions_to_skip": ["2017arc", "2017cars", "2017cur", "2017dal", "2017dar"],
}
hack_sitevar.put()
with open('test_data/fms_api/2017_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2017).parse(json.loads(f.read()))
self.assertEqual(len(events), 160)
self.assertEqual(len(districts), 10)
non_einstein_types = EventType.CMP_EVENT_TYPES
non_einstein_types.remove(EventType.CMP_FINALS)
for key in hack_sitevar.contents['divisions_to_skip']:
self.assertFalse(filter(lambda e: e.key_name == key, events))
einstein_stl = next(e for e in events if e.key_name == '2017cmpmo')
self.assertIsNotNone(einstein_stl)
self.assertEqual(einstein_stl.name, "FIRST Championship Event (St. Louis)")
self.assertEqual(einstein_stl.short_name, "Championship (St. Louis)")
self.assertEquals(einstein_stl.start_date, datetime.datetime(year=2017, month=4, day=29, hour=0, minute=0, second=0))
self.assertEquals(einstein_stl.end_date, datetime.datetime(year=2017, month=4, day=29, hour=23, minute=59, second=59))
einstein_hou = next(e for e in events if e.key_name == '2017cmptx')
self.assertIsNotNone(einstein_hou)
self.assertEqual(einstein_hou.name, "FIRST Championship Event (Houston)")
self.assertEqual(einstein_hou.short_name, "Championship (Houston)")
self.assertEquals(einstein_hou.start_date, datetime.datetime(year=2017, month=4, day=22, hour=0, minute=0, second=0))
self.assertEquals(einstein_hou.end_date, datetime.datetime(year=2017, month=4, day=22, hour=23, minute=59, second=59))
def test_parse_2017_official_offseason(self):
with open('test_data/fms_api/2017_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2017).parse(json.loads(f.read()))
self.assertEqual(len(events), 165)
self.assertEqual(len(districts), 10)
event = next(e for e in events if e.key_name == "2017iri")
self.assertEquals(event.key_name, "2017iri")
self.assertEquals(event.name, "Indiana Robotics Invitational")
self.assertEquals(event.short_name, "Indiana Robotics Invitational")
self.assertEquals(event.event_short, "iri")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2017, month=7, day=14, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2017, month=7, day=15, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "Lawrence North High School")
self.assertEquals(event.city, "Indianapolis")
self.assertEquals(event.state_prov, "IN")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2017)
self.assertEquals(event.event_type_enum, EventType.OFFSEASON)
self.assertEquals(event.district_key, None)
self.assertEquals(event.website, "http://indianaroboticsinvitational.org/")
self.assertIsNone(event.webcast)
def test_parse_2018_event(self):
with open('test_data/fms_api/2018_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2018).parse(json.loads(f.read()))
self.assertEqual(len(events), 178)
self.assertEqual(len(districts), 10)
event = events[18]
self.assertEquals(event.key_name, "2018casj")
self.assertEquals(event.name, "Silicon Valley Regional")
self.assertEquals(event.short_name, "Silicon Valley")
self.assertEquals(event.event_short, "casj")
self.assertEquals(event.official, True)
self.assertEquals(event.start_date, datetime.datetime(year=2018, month=3, day=28, hour=0, minute=0, second=0))
self.assertEquals(event.end_date, datetime.datetime(year=2018, month=3, day=31, hour=23, minute=59, second=59))
self.assertEquals(event.venue, "San Jose State University - The Event Center")
self.assertEquals(event.city, "San Jose")
self.assertEquals(event.state_prov, "CA")
self.assertEquals(event.country, "USA")
self.assertEquals(event.year, 2018)
self.assertEquals(event.event_type_enum, EventType.REGIONAL)
self.assertEquals(event.district_key, None)
self.assertEquals(event.website, "http://www.firstsv.org")
# New in 2018
self.assertEqual(event.webcast, [{"type": "twitch", "channel": "firstinspires9"}, {"type": "twitch", "channel": "firstinspires10"}])
def test_parse_division_parent(self):
with open('test_data/fms_api/2017_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2017).parse(json.loads(f.read()))
self.assertEqual(len(events), 165)
self.assertEqual(len(districts), 10)
# Test division <-> parent associations
for event in events:
event_key = event.key.id()
if event_key == '2017micmp':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2017micmp1'),
ndb.Key('Event', '2017micmp2'),
ndb.Key('Event', '2017micmp3'),
ndb.Key('Event', '2017micmp4'),
]
)
elif event_key in {'2017micmp1', '2017micmp2', '2017micmp3', '2017micmp4'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2017micmp'))
self.assertEqual(event.divisions, [])
elif event_key == '2017cmptx':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2017carv'),
ndb.Key('Event', '2017gal'),
ndb.Key('Event', '2017hop'),
ndb.Key('Event', '2017new'),
ndb.Key('Event', '2017roe'),
ndb.Key('Event', '2017tur'),
]
)
elif event_key in {'2017carv', '2017gal', '2017hop', '2017new', '2017roe', '2017tur'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2017cmptx'))
self.assertEqual(event.divisions, [])
elif event_key == '2017cmpmo':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2017arc'),
ndb.Key('Event', '2017cars'),
ndb.Key('Event', '2017cur'),
ndb.Key('Event', '2017dal'),
ndb.Key('Event', '2017dar'),
ndb.Key('Event', '2017tes'),
]
)
elif event_key in {'2017arc', '2017cars', '2017cur', '2017dal', '2017dar', '2017tes'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2017cmpmo'))
self.assertEqual(event.divisions, [])
else:
self.assertEqual(event.parent_event, None)
self.assertEqual(event.divisions, [])
with open('test_data/fms_api/2018_event_list.json', 'r') as f:
events, districts = FMSAPIEventListParser(2018).parse(json.loads(f.read()))
self.assertEqual(len(events), 178)
self.assertEqual(len(districts), 10)
# Test division <-> parent associations
for event in events:
event_key = event.key.id()
if event_key == '2018oncmp':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2018oncmp1'),
ndb.Key('Event', '2018oncmp2'),
]
)
elif event_key in {'2018oncmp1', '2018oncmp2'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2018oncmp'))
self.assertEqual(event.divisions, [])
elif event_key == '2018micmp':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2018micmp1'),
ndb.Key('Event', '2018micmp2'),
ndb.Key('Event', '2018micmp3'),
ndb.Key('Event', '2018micmp4'),
]
)
elif event_key in {'2018micmp1', '2018micmp2', '2018micmp3', '2018micmp4'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2018micmp'))
self.assertEqual(event.divisions, [])
elif event_key == '2018cmptx':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2018carv'),
ndb.Key('Event', '2018gal'),
ndb.Key('Event', '2018hop'),
ndb.Key('Event', '2018new'),
ndb.Key('Event', '2018roe'),
ndb.Key('Event', '2018tur'),
]
)
elif event_key in {'2018carv', '2018gal', '2018hop', '2018new', '2018roe', '2018tur'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2018cmptx'))
self.assertEqual(event.divisions, [])
elif event_key == '2018cmpmi':
self.assertEqual(event.parent_event, None)
self.assertEqual(
event.divisions,
[
ndb.Key('Event', '2018arc'),
ndb.Key('Event', '2018cars'),
ndb.Key('Event', '2018cur'),
ndb.Key('Event', '2018dal'),
ndb.Key('Event', '2018dar'),
ndb.Key('Event', '2018tes'),
]
)
elif event_key in {'2018arc', '2018cars', '2018cur', '2018dal', '2018dar', '2018tes'}:
self.assertEqual(event.parent_event, ndb.Key('Event', '2018cmpmi'))
self.assertEqual(event.divisions, [])
else:
self.assertEqual(event.parent_event, None)
self.assertEqual(event.divisions, [])
| mit |
sjperkins/tensorflow | tensorflow/contrib/distributions/python/kernel_tests/bijectors/affine_test.py | 20 | 31328 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Affine Tests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
import numpy as np
from tensorflow.contrib.distributions.python.ops.bijectors.affine import Affine
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops.distributions.bijector_test_util import assert_scalar_congruency
from tensorflow.python.platform import test
class AffineBijectorTest(test.TestCase):
"""Tests correctness of the Y = scale @ x + shift transformation."""
def testProperties(self):
with self.test_session():
mu = -1.
# scale corresponds to 1.
bijector = Affine(shift=mu, event_ndims=0)
self.assertEqual("affine", bijector.name)
def testNoBatchScalarViaIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = 2
bijector = Affine(
shift=mu, scale_identity_multiplier=2., event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1., 2, 3] # Three scalar samples (no batches).
self.assertAllClose([1., 3, 5], run(bijector.forward, x))
self.assertAllClose([1., 1.5, 2.], run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
def testNoBatchScalarViaDiag(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = 2
bijector = Affine(shift=mu, scale_diag=[2.], event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1., 2, 3] # Three scalar samples (no batches).
self.assertAllClose([1., 3, 5], run(bijector.forward, x))
self.assertAllClose([1., 1.5, 2.], run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
def testWeirdSampleNoBatchScalarViaIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = 2.
bijector = Affine(
shift=mu, scale_identity_multiplier=2., event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [[1., 2, 3], [4, 5, 6]] # Weird sample shape.
self.assertAllClose([[1., 3, 5],
[7, 9, 11]],
run(bijector.forward, x))
self.assertAllClose([[1., 1.5, 2.],
[2.5, 3, 3.5]],
run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
def testOneBatchScalarViaIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1.]
# One batch, scalar.
# Corresponds to scale = 1.
bijector = Affine(shift=mu, event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1.] # One sample from one batches.
self.assertAllClose([2.], run(bijector.forward, x))
self.assertAllClose([0.], run(bijector.inverse, x))
self.assertAllClose(0., run(bijector.inverse_log_det_jacobian, x))
def testOneBatchScalarViaDiag(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1.]
# One batch, scalar.
# Corresponds to scale = 1.
bijector = Affine(shift=mu, scale_diag=[1.], event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1.] # One sample from one batches.
self.assertAllClose([2.], run(bijector.forward, x))
self.assertAllClose([0.], run(bijector.inverse, x))
self.assertAllClose(0., run(bijector.inverse_log_det_jacobian, x))
def testTwoBatchScalarIdentityViaIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1., -1]
# Univariate, two batches.
# Corresponds to scale = 1.
bijector = Affine(shift=mu, event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1., 1] # One sample from each of two batches.
self.assertAllClose([2., 0], run(bijector.forward, x))
self.assertAllClose([0., 2], run(bijector.inverse, x))
self.assertAllClose(0., run(bijector.inverse_log_det_jacobian, x))
def testTwoBatchScalarIdentityViaDiag(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1., -1]
# Univariate, two batches.
# Corresponds to scale = 1.
bijector = Affine(shift=mu, scale_diag=[1.], event_ndims=0)
self.assertEqual(0, bijector.event_ndims.eval()) # "is scalar"
x = [1., 1] # One sample from each of two batches.
self.assertAllClose([2., 0], run(bijector.forward, x))
self.assertAllClose([0., 2], run(bijector.inverse, x))
self.assertAllClose(0., run(bijector.inverse_log_det_jacobian, x))
def testNoBatchMultivariateIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1., -1]
# Multivariate
# Corresponds to scale = [[1., 0], [0, 1.]]
bijector = Affine(shift=mu)
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 1]
# matmul(sigma, x) + shift
# = [1, 1] + [1, -1]
self.assertAllClose([2., 0], run(bijector.forward, x))
self.assertAllClose([0., 2], run(bijector.inverse, x))
# x is a 2-batch of 2-vectors.
# The first vector is [1, 1], the second is [-1, -1].
# Each undergoes matmul(sigma, x) + shift.
x = [[1., 1], [-1., -1]]
self.assertAllClose([[2., 0], [0., -2]], run(bijector.forward, x))
self.assertAllClose([[0., 2], [-2., 0]], run(bijector.inverse, x))
self.assertAllClose(0., run(bijector.inverse_log_det_jacobian, x))
def testNoBatchMultivariateDiag(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [1., -1]
# Multivariate
# Corresponds to scale = [[2., 0], [0, 1.]]
bijector = Affine(shift=mu, scale_diag=[2., 1])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 1]
# matmul(sigma, x) + shift
# = [2, 1] + [1, -1]
self.assertAllClose([3., 0], run(bijector.forward, x))
self.assertAllClose([0., 2], run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
# x is a 2-batch of 2-vectors.
# The first vector is [1, 1], the second is [-1, -1].
# Each undergoes matmul(sigma, x) + shift.
x = [[1., 1],
[-1., -1]]
self.assertAllClose([[3., 0],
[-1., -2]],
run(bijector.forward, x))
self.assertAllClose([[0., 2],
[-1., 0]],
run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
def testNoBatchMultivariateFullDynamic(self):
with self.test_session() as sess:
x = array_ops.placeholder(dtypes.float32, name="x")
mu = array_ops.placeholder(dtypes.float32, name="mu")
scale_diag = array_ops.placeholder(dtypes.float32, name="scale_diag")
event_ndims = array_ops.placeholder(dtypes.int32, name="event_ndims")
x_value = np.array([[1., 1]], dtype=np.float32)
mu_value = np.array([1., -1], dtype=np.float32)
scale_diag_value = np.array([2., 2], dtype=np.float32)
event_ndims_value = np.array(1, dtype=np.int32)
feed_dict = {
x: x_value,
mu: mu_value,
scale_diag: scale_diag_value,
event_ndims: event_ndims_value
}
bijector = Affine(
shift=mu, scale_diag=scale_diag, event_ndims=event_ndims)
self.assertEqual(1, sess.run(bijector.event_ndims, feed_dict))
self.assertAllClose([[3., 1]], sess.run(bijector.forward(x), feed_dict))
self.assertAllClose([[0., 1]], sess.run(bijector.inverse(x), feed_dict))
self.assertAllClose(
-np.log(4),
sess.run(bijector.inverse_log_det_jacobian(x), feed_dict))
def testBatchMultivariateIdentity(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value, dtype=np.float32)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [[1., -1]]
# Corresponds to 1 2x2 matrix, with twos on the diagonal.
scale = 2.
bijector = Affine(shift=mu, scale_identity_multiplier=scale)
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [[[1., 1]]]
self.assertAllClose([[[3., 1]]], run(bijector.forward, x))
self.assertAllClose([[[0., 1]]], run(bijector.inverse, x))
self.assertAllClose(-np.log(4),
run(bijector.inverse_log_det_jacobian, x))
def testBatchMultivariateDiag(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value, dtype=np.float32)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = [[1., -1]]
# Corresponds to 1 2x2 matrix, with twos on the diagonal.
scale_diag = [[2., 2]]
bijector = Affine(shift=mu, scale_diag=scale_diag)
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [[[1., 1]]]
self.assertAllClose([[[3., 1]]], run(bijector.forward, x))
self.assertAllClose([[[0., 1]]], run(bijector.inverse, x))
self.assertAllClose([-np.log(4)],
run(bijector.inverse_log_det_jacobian, x))
def testBatchMultivariateFullDynamic(self):
with self.test_session() as sess:
x = array_ops.placeholder(dtypes.float32, name="x")
mu = array_ops.placeholder(dtypes.float32, name="mu")
scale_diag = array_ops.placeholder(dtypes.float32, name="scale_diag")
event_ndims = array_ops.placeholder(dtypes.int32, name="event_ndims")
x_value = np.array([[[1., 1]]], dtype=np.float32)
mu_value = np.array([[1., -1]], dtype=np.float32)
scale_diag_value = np.array([[2., 2]], dtype=np.float32)
event_ndims_value = 1
feed_dict = {
x: x_value,
mu: mu_value,
scale_diag: scale_diag_value,
event_ndims: event_ndims_value
}
bijector = Affine(
shift=mu, scale_diag=scale_diag, event_ndims=event_ndims)
self.assertEqual(1, sess.run(bijector.event_ndims, feed_dict))
self.assertAllClose([[[3., 1]]], sess.run(bijector.forward(x), feed_dict))
self.assertAllClose([[[0., 1]]], sess.run(bijector.inverse(x), feed_dict))
self.assertAllClose([-np.log(4)],
sess.run(
bijector.inverse_log_det_jacobian(x), feed_dict))
def testIdentityWithDiagUpdate(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = 2
bijector = Affine(
shift=mu,
scale_identity_multiplier=1.,
scale_diag=[1.],
event_ndims=0)
        self.assertEqual(0, bijector.event_ndims.eval())  # "is scalar"
x = [1., 2, 3] # Three scalar samples (no batches).
self.assertAllClose([1., 3, 5], run(bijector.forward, x))
self.assertAllClose([1., 1.5, 2.], run(bijector.inverse, x))
self.assertAllClose(-np.log(2.),
run(bijector.inverse_log_det_jacobian, x))
def testIdentityWithTriL(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# scale = [[2., 0], [2, 2]]
bijector = Affine(
shift=mu,
scale_identity_multiplier=1.,
scale_tril=[[1., 0], [2., 1]])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [[1., 2]] # One multivariate sample.
self.assertAllClose([[1., 5]], run(bijector.forward, x))
self.assertAllClose([[1., 0.5]], run(bijector.inverse, x))
self.assertAllClose(-np.log(4.),
run(bijector.inverse_log_det_jacobian, x))
def testDiagWithTriL(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# scale = [[2., 0], [2, 3]]
bijector = Affine(
shift=mu, scale_diag=[1., 2.], scale_tril=[[1., 0], [2., 1]])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [[1., 2]] # One multivariate sample.
self.assertAllClose([[1., 7]], run(bijector.forward, x))
self.assertAllClose([[1., 1 / 3.]], run(bijector.inverse, x))
self.assertAllClose(-np.log(6.),
run(bijector.inverse_log_det_jacobian, x))
def testIdentityAndDiagWithTriL(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# scale = [[3., 0], [2, 4]]
bijector = Affine(
shift=mu,
scale_identity_multiplier=1.0,
scale_diag=[1., 2.],
scale_tril=[[1., 0], [2., 1]])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [[1., 2]] # One multivariate sample.
self.assertAllClose([[2., 9]], run(bijector.forward, x))
self.assertAllClose([[2 / 3., 5 / 12.]], run(bijector.inverse, x))
self.assertAllClose(-np.log(12.),
run(bijector.inverse_log_det_jacobian, x))
def testIdentityWithVDVTUpdate(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = [[10, 0, 0], [0, 2, 0], [0, 0, 3]]
bijector = Affine(
shift=mu,
scale_identity_multiplier=2.,
scale_perturb_diag=[2., 1],
scale_perturb_factor=[[2., 0], [0., 0], [0, 1]])
bijector_ref = Affine(shift=mu, scale_diag=[10., 2, 3])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 2, 3] # Vector.
self.assertAllClose([9., 3, 8], run(bijector.forward, x))
self.assertAllClose(
run(bijector_ref.forward, x), run(bijector.forward, x))
self.assertAllClose([0.2, 1.5, 4 / 3.], run(bijector.inverse, x))
self.assertAllClose(
run(bijector_ref.inverse, x), run(bijector.inverse, x))
self.assertAllClose(-np.log(60.),
run(bijector.inverse_log_det_jacobian, x))
self.assertAllClose(
run(bijector.inverse_log_det_jacobian, x),
run(bijector_ref.inverse_log_det_jacobian, x))
def testDiagWithVDVTUpdate(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = [[10, 0, 0], [0, 3, 0], [0, 0, 5]]
bijector = Affine(
shift=mu,
scale_diag=[2., 3, 4],
scale_perturb_diag=[2., 1],
scale_perturb_factor=[[2., 0], [0., 0], [0, 1]])
bijector_ref = Affine(shift=mu, scale_diag=[10., 3, 5])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 2, 3] # Vector.
self.assertAllClose([9., 5, 14], run(bijector.forward, x))
self.assertAllClose(
run(bijector_ref.forward, x), run(bijector.forward, x))
self.assertAllClose([0.2, 1., 0.8], run(bijector.inverse, x))
self.assertAllClose(
run(bijector_ref.inverse, x), run(bijector.inverse, x))
self.assertAllClose(-np.log(150.),
run(bijector.inverse_log_det_jacobian, x))
self.assertAllClose(
run(bijector.inverse_log_det_jacobian, x),
run(bijector_ref.inverse_log_det_jacobian, x))
def testTriLWithVDVTUpdate(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = [[10, 0, 0], [1, 3, 0], [2, 3, 5]]
bijector = Affine(
shift=mu,
scale_tril=[[2., 0, 0], [1, 3, 0], [2, 3, 4]],
scale_perturb_diag=[2., 1],
scale_perturb_factor=[[2., 0], [0., 0], [0, 1]])
bijector_ref = Affine(
shift=mu, scale_tril=[[10., 0, 0], [1, 3, 0], [2, 3, 5]])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 2, 3] # Vector.
self.assertAllClose([9., 6, 22], run(bijector.forward, x))
self.assertAllClose(
run(bijector_ref.forward, x), run(bijector.forward, x))
self.assertAllClose([0.2, 14 / 15., 4 / 25.], run(bijector.inverse, x))
self.assertAllClose(
run(bijector_ref.inverse, x), run(bijector.inverse, x))
self.assertAllClose(-np.log(150.),
run(bijector.inverse_log_det_jacobian, x))
self.assertAllClose(
run(bijector.inverse_log_det_jacobian, x),
run(bijector_ref.inverse_log_det_jacobian, x))
def testTriLWithVDVTUpdateNoDiagonal(self):
with self.test_session() as sess:
def static_run(fun, x):
return fun(x).eval()
def dynamic_run(fun, x_value):
x_value = np.array(x_value)
x = array_ops.placeholder(dtypes.float32, name="x")
return sess.run(fun(x), feed_dict={x: x_value})
for run in (static_run, dynamic_run):
mu = -1.
# Corresponds to scale = [[6, 0, 0], [1, 3, 0], [2, 3, 5]]
bijector = Affine(
shift=mu,
scale_tril=[[2., 0, 0], [1, 3, 0], [2, 3, 4]],
scale_perturb_diag=None,
scale_perturb_factor=[[2., 0], [0., 0], [0, 1]])
bijector_ref = Affine(
shift=mu, scale_tril=[[6., 0, 0], [1, 3, 0], [2, 3, 5]])
self.assertEqual(1, bijector.event_ndims.eval()) # "is vector"
x = [1., 2, 3] # Vector.
self.assertAllClose([5., 6, 22], run(bijector.forward, x))
self.assertAllClose(
run(bijector_ref.forward, x), run(bijector.forward, x))
self.assertAllClose([1 / 3., 8 / 9., 4 / 30.], run(bijector.inverse, x))
self.assertAllClose(
run(bijector_ref.inverse, x), run(bijector.inverse, x))
self.assertAllClose(-np.log(90.),
run(bijector.inverse_log_det_jacobian, x))
self.assertAllClose(
run(bijector.inverse_log_det_jacobian, x),
run(bijector_ref.inverse_log_det_jacobian, x))
def testNoBatchMultivariateRaisesWhenSingular(self):
with self.test_session():
mu = [1., -1]
bijector = Affine(
shift=mu,
# Has zero on the diagonal.
scale_diag=[0., 1],
validate_args=True)
with self.assertRaisesOpError("Condition x > 0"):
bijector.forward([1., 1.]).eval()
def testEventNdimsLargerThanOneRaises(self):
with self.test_session():
      mu = [1., -1]
      # Scale corresponds to 2x2 identity matrix.
      with self.assertRaisesOpError("event_ndims"):
        bijector = Affine(shift=mu, event_ndims=2, validate_args=True)
        bijector.forward([1., 1.]).eval()
def testScaleZeroScalarRaises(self):
with self.test_session():
mu = -1.
# Check Identity matrix with zero scaling.
bijector = Affine(
shift=mu,
scale_identity_multiplier=0.0,
event_ndims=0,
validate_args=True)
with self.assertRaisesOpError("Condition x > 0"):
bijector.forward(1.).eval()
# Check Diag matrix with zero scaling.
bijector = Affine(
shift=mu, scale_diag=[0.0], event_ndims=0, validate_args=True)
with self.assertRaisesOpError("Condition x > 0"):
bijector.forward(1.).eval()
def testScalarCongruency(self):
with self.test_session():
bijector = Affine(
shift=3.6, scale_identity_multiplier=0.42, event_ndims=0)
assert_scalar_congruency(
bijector, lower_x=-2., upper_x=2.)
def _makeScale(self,
x,
scale_identity_multiplier=None,
scale_diag=None,
scale_tril=None,
scale_perturb_factor=None,
scale_perturb_diag=None):
"""Create a scale matrix. Return None if it can not be created."""
c = scale_identity_multiplier
d1 = scale_diag
tril = scale_tril
v = scale_perturb_factor
d2 = scale_perturb_diag
# Ambiguous low rank update.
if v is None and d2 is not None:
return None
if c is None and d1 is None and tril is None:
# Special case when no scale args are passed in. This means use an
# identity matrix.
if v is None and d2 is None:
c = 1.
# No scale.
else:
return None
matrix = np.float32(0.)
if c is not None:
# Infer the dimension from x.
matrix += c * self._matrix_diag(np.ones_like(x))
if d1 is not None:
matrix += self._matrix_diag(np.array(d1, dtype=np.float32))
if tril is not None:
matrix += np.array(tril, dtype=np.float32)
if v is not None:
v = np.array(v, dtype=np.float32)
if v.ndim < 2:
vt = v.T
else:
vt = np.swapaxes(v, axis1=v.ndim - 2, axis2=v.ndim - 1)
if d2 is not None:
d2 = self._matrix_diag(np.array(d2, dtype=np.float32))
right = np.matmul(d2, vt)
else:
right = vt
matrix += np.matmul(v, right)
return matrix
def _matrix_diag(self, d):
"""Batch version of np.diag."""
orig_shape = d.shape
d = np.reshape(d, (int(np.prod(d.shape[:-1])), d.shape[-1]))
diag_list = []
for i in range(d.shape[0]):
diag_list.append(np.diag(d[i, ...]))
return np.reshape(diag_list, orig_shape + (d.shape[-1],))
def _testLegalInputs(self, shift=None, scale_params=None, x=None):
def _powerset(x):
s = list(x)
return itertools.chain.from_iterable(
itertools.combinations(s, r) for r in range(len(s) + 1))
for args in _powerset(scale_params.items()):
with self.test_session():
args = dict(args)
scale_args = dict({"x": x}, **args)
scale = self._makeScale(**scale_args)
bijector_args = dict({"event_ndims": 1}, **args)
# We haven't specified enough information for the scale.
if scale is None:
with self.assertRaisesRegexp(ValueError, ("must be specified.")):
bijector = Affine(shift=shift, **bijector_args)
else:
bijector = Affine(shift=shift, **bijector_args)
np_x = x
# For the case a vector is passed in, we need to make the shape
# match the matrix for matmul to work.
if x.ndim == scale.ndim - 1:
np_x = np.expand_dims(x, axis=-1)
forward = np.matmul(scale, np_x) + shift
if x.ndim == scale.ndim - 1:
forward = np.squeeze(forward, axis=-1)
self.assertAllClose(forward, bijector.forward(x).eval())
backward = np.linalg.solve(scale, np_x - shift)
if x.ndim == scale.ndim - 1:
backward = np.squeeze(backward, axis=-1)
self.assertAllClose(backward, bijector.inverse(x).eval())
ildj = -np.log(np.abs(np.linalg.det(scale)))
# TODO(jvdillon): We need to make it so the scale_identity_multiplier
# case does not deviate in expected shape. Fixing this will get rid of
# these special cases.
if (ildj.ndim > 0 and (len(scale_args) == 1 or (
len(scale_args) == 2 and
scale_args.get("scale_identity_multiplier", None) is not None))):
ildj = np.squeeze(ildj[0])
elif ildj.ndim < scale.ndim - 2:
ildj = np.reshape(ildj, scale.shape[0:-2])
self.assertAllClose(ildj, bijector.inverse_log_det_jacobian(x).eval())
def testLegalInputs(self):
self._testLegalInputs(
shift=np.float32(-1),
scale_params={
"scale_identity_multiplier": 2.,
"scale_diag": [2., 3.],
"scale_tril": [[1., 0.],
[-3., 3.]],
"scale_perturb_factor": [[1., 0],
[1.5, 3.]],
"scale_perturb_diag": [3., 1.]
},
x=np.array(
[1., 2], dtype=np.float32))
def testLegalInputsWithBatch(self):
# Shape of scale is [2, 1, 2, 2]
self._testLegalInputs(
shift=np.float32(-1),
scale_params={
"scale_identity_multiplier": 2.,
"scale_diag": [[[2., 3.]], [[1., 2]]],
"scale_tril": [[[[1., 0.], [-3., 3.]]], [[[0.5, 0.], [1., 1.]]]],
"scale_perturb_factor": [[[[1., 0], [1.5, 3.]]],
[[[1., 0], [1., 1.]]]],
"scale_perturb_diag": [[[3., 1.]], [[0.5, 1.]]]
},
x=np.array(
[[[1., 2]], [[3., 4]]], dtype=np.float32))
def testNegativeDetTrilPlusVDVT(self):
# scale = [[3.7, 2.7],
# [-0.3, -1.3]]
# inv(scale) = [[0.325, 0.675],
# [-0.075, -0.925]]
# eig(scale) = [3.5324, -1.1324]
self._testLegalInputs(
shift=np.float32(-1),
scale_params={
"scale_tril": [[1., 0], [-3, -4]],
"scale_perturb_factor": [[0.1, 0], [0.5, 0.3]],
"scale_perturb_diag": [3., 1]
},
x=np.array(
[1., 2], dtype=np.float32))
def testScalePropertyAssertsCorrectly(self):
with self.test_session():
with self.assertRaises(NotImplementedError):
scale = Affine( # pylint:disable=unused-variable
scale_tril=[[1., 0], [2, 1]],
scale_perturb_factor=[2., 1.]).scale
if __name__ == "__main__":
test.main()
| apache-2.0 |
kobejean/tensorflow | tensorflow/python/kernel_tests/stack_op_test.py | 6 | 13529 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functional tests for Stack and ParallelStack Ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradient_checker
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
def np_split_squeeze(array, axis):
axis_len = array.shape[axis]
return [
np.squeeze(
arr, axis=(axis,)) for arr in np.split(
array, axis_len, axis=axis)
]
class StackOpTest(test.TestCase):
def testSimple(self):
np.random.seed(7)
with self.test_session(use_gpu=True):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
for dtype in [np.bool, np.float32, np.int32, np.int64]:
data = np.random.randn(*shape).astype(dtype)
# Convert [data[0], data[1], ...] separately to tensorflow
# TODO(irving): Remove list() once we handle maps correctly
xs = list(map(constant_op.constant, data))
# Stack back into a single tensorflow tensor
c = array_ops.stack(xs)
self.assertAllEqual(c.eval(), data)
def testSimpleParallelCPU(self):
np.random.seed(7)
with self.test_session(use_gpu=False):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape).astype(np.float32)
xs = list(map(constant_op.constant, data))
c = array_ops.parallel_stack(xs)
self.assertAllEqual(c.eval(), data)
def testSimpleParallelGPU(self):
np.random.seed(7)
with self.test_session(use_gpu=True):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape).astype(np.float32)
xs = list(map(constant_op.constant, data))
c = array_ops.parallel_stack(xs)
self.assertAllEqual(c.eval(), data)
def testConst(self):
np.random.seed(7)
with self.test_session(use_gpu=True):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
for dtype in [np.bool, np.float32, np.int32, np.int64]:
data = np.random.randn(*shape).astype(dtype)
# Stack back into a single tensorflow tensor directly using np array
c = array_ops.stack(data)
# This is implemented via a Const:
self.assertEqual(c.op.type, "Const")
self.assertAllEqual(c.eval(), data)
# Python lists also work for 1-D case:
if len(shape) == 1:
data_list = list(data)
cl = array_ops.stack(data_list)
self.assertEqual(cl.op.type, "Const")
self.assertAllEqual(cl.eval(), data)
# Verify that shape induction works with shapes produced via const stack
a = constant_op.constant([1, 2, 3, 4, 5, 6])
b = array_ops.reshape(a, array_ops.stack([2, 3]))
self.assertAllEqual(b.get_shape(), [2, 3])
def testConstParallelCPU(self):
np.random.seed(7)
with self.test_session(use_gpu=False):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape).astype(np.float32)
if len(shape) == 1:
data_list = list(data)
cl = array_ops.parallel_stack(data_list)
self.assertAllEqual(cl.eval(), data)
data = np.random.randn(*shape).astype(np.float32)
c = array_ops.parallel_stack(data)
self.assertAllEqual(c.eval(), data)
def testConstParallelGPU(self):
np.random.seed(7)
with self.test_session(use_gpu=True):
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape).astype(np.float32)
if len(shape) == 1:
data_list = list(data)
cl = array_ops.parallel_stack(data_list)
self.assertAllEqual(cl.eval(), data)
data = np.random.randn(*shape).astype(np.float32)
c = array_ops.parallel_stack(data)
self.assertAllEqual(c.eval(), data)
def testGradientsAxis0(self):
np.random.seed(7)
for shape in (2,), (3,), (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape)
shapes = [shape[1:]] * shape[0]
with self.test_session(use_gpu=True):
# TODO(irving): Remove list() once we handle maps correctly
xs = list(map(constant_op.constant, data))
c = array_ops.stack(xs)
err = gradient_checker.compute_gradient_error(xs, shapes, c, shape)
self.assertLess(err, 1e-6)
def testGradientsAxis1(self):
np.random.seed(7)
for shape in (2, 3), (3, 2), (4, 3, 2):
data = np.random.randn(*shape)
shapes = [shape[1:]] * shape[0]
out_shape = list(shape[1:])
out_shape.insert(1, shape[0])
with self.test_session(use_gpu=True):
# TODO(irving): Remove list() once we handle maps correctly
xs = list(map(constant_op.constant, data))
c = array_ops.stack(xs, axis=1)
err = gradient_checker.compute_gradient_error(xs, shapes, c, out_shape)
self.assertLess(err, 1e-6)
def testZeroSizeCPU(self):
# Verify that stack doesn't crash for zero size inputs
with self.test_session(use_gpu=False):
for shape in (0,), (3, 0), (0, 3):
x = np.zeros((2,) + shape).astype(np.int32)
p = array_ops.stack(list(x)).eval()
self.assertAllEqual(p, x)
p = array_ops.parallel_stack(list(x)).eval()
self.assertAllEqual(p, x)
def testZeroSizeGPU(self):
# Verify that stack doesn't crash for zero size inputs
with self.test_session(use_gpu=True):
for shape in (0,), (3, 0), (0, 3):
x = np.zeros((2,) + shape).astype(np.int32)
p = array_ops.stack(list(x)).eval()
self.assertAllEqual(p, x)
p = array_ops.parallel_stack(list(x)).eval()
self.assertAllEqual(p, x)
def testAxis0DefaultCPU(self):
with self.test_session(use_gpu=False):
t = [constant_op.constant([1, 2, 3]), constant_op.constant([4, 5, 6])]
stacked = array_ops.stack(t).eval()
parallel_stacked = array_ops.parallel_stack(t).eval()
expected = np.array([[1, 2, 3], [4, 5, 6]])
self.assertAllEqual(stacked, expected)
self.assertAllEqual(parallel_stacked, expected)
def testAxis0DefaultGPU(self):
with self.test_session(use_gpu=True):
t = [constant_op.constant([1, 2, 3]), constant_op.constant([4, 5, 6])]
stacked = array_ops.stack(t).eval()
parallel_stacked = array_ops.parallel_stack(t).eval()
expected = np.array([[1, 2, 3], [4, 5, 6]])
self.assertAllEqual(stacked, expected)
self.assertAllEqual(parallel_stacked, expected)
def testAgainstNumpy(self):
# For 1 to 5 dimensions.
for i in range(1, 6):
expected = np.random.random(np.random.permutation(i) + 1)
# For all the possible axis to split it, including negative indices.
for j in range(-i, i):
test_arrays = np_split_squeeze(expected, j)
with self.test_session(use_gpu=True):
actual_pack = array_ops.stack(test_arrays, axis=j)
self.assertEqual(expected.shape, actual_pack.get_shape())
actual_pack = actual_pack.eval()
actual_stack = array_ops.stack(test_arrays, axis=j)
self.assertEqual(expected.shape, actual_stack.get_shape())
actual_stack = actual_stack.eval()
self.assertNDArrayNear(expected, actual_stack, 1e-6)
def testDimOutOfRange(self):
t = [constant_op.constant([1, 2, 3]), constant_op.constant([4, 5, 6])]
with self.assertRaisesRegexp(ValueError, r"axis = 2 not in \[-2, 2\)"):
array_ops.stack(t, axis=2)
def testDimOutOfNegativeRange(self):
t = [constant_op.constant([1, 2, 3]), constant_op.constant([4, 5, 6])]
with self.assertRaisesRegexp(ValueError, r"axis = -3 not in \[-2, 2\)"):
array_ops.stack(t, axis=-3)
class AutomaticStackingTest(test.TestCase):
def testSimple(self):
with self.test_session(use_gpu=True):
self.assertAllEqual(
[1, 0, 2],
ops.convert_to_tensor([1, constant_op.constant(0), 2]).eval())
self.assertAllEqual([[0, 0, 0], [0, 1, 0], [0, 0, 0]],
ops.convert_to_tensor(
[[0, 0, 0], [0, constant_op.constant(1), 0],
[0, 0, 0]]).eval())
self.assertAllEqual([[0, 0, 0], [0, 1, 0], [0, 0, 0]],
ops.convert_to_tensor(
[[0, 0, 0], constant_op.constant([0, 1, 0]),
[0, 0, 0]]).eval())
self.assertAllEqual([[0, 0, 0], [0, 1, 0], [0, 0, 0]],
ops.convert_to_tensor([
constant_op.constant([0, 0, 0]),
constant_op.constant([0, 1, 0]),
constant_op.constant([0, 0, 0])
]).eval())
def testWithNDArray(self):
with self.test_session(use_gpu=True):
result = ops.convert_to_tensor([[[0., 0.],
constant_op.constant([1., 1.])],
np.array(
[[2., 2.], [3., 3.]],
dtype=np.float32)])
self.assertAllEqual([[[0., 0.], [1., 1.]], [[2., 2.], [3., 3.]]],
result.eval())
def testVariable(self):
with self.test_session(use_gpu=True):
v = variables.Variable(17)
result = ops.convert_to_tensor([[0, 0, 0], [0, v, 0], [0, 0, 0]])
v.initializer.run()
self.assertAllEqual([[0, 0, 0], [0, 17, 0], [0, 0, 0]], result.eval())
v.assign(38).op.run()
self.assertAllEqual([[0, 0, 0], [0, 38, 0], [0, 0, 0]], result.eval())
def testDtype(self):
t_0 = ops.convert_to_tensor([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]])
self.assertEqual(dtypes.float32, t_0.dtype)
t_1 = ops.convert_to_tensor([[0., 0., 0.], constant_op.constant(
[0., 0., 0.], dtype=dtypes.float64), [0., 0., 0.]])
self.assertEqual(dtypes.float64, t_1.dtype)
t_2 = ops.convert_to_tensor(
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], dtype=dtypes.float64)
self.assertEqual(dtypes.float64, t_2.dtype)
t_3 = ops.convert_to_tensor(
[[0., 0., 0.],
constant_op.constant([0., 0., 0.], dtype=dtypes.float64), [0., 0., 0.]
],
dtype=dtypes.float32)
self.assertEqual(dtypes.float32, t_3.dtype)
t_4 = ops.convert_to_tensor(
[constant_op.constant([0., 0., 0.], dtype=dtypes.float64)],
dtype=dtypes.float32)
self.assertEqual(dtypes.float32, t_4.dtype)
with self.assertRaises(TypeError):
ops.convert_to_tensor([
constant_op.constant(
[0., 0., 0.], dtype=dtypes.float32), constant_op.constant(
[0., 0., 0.], dtype=dtypes.float64), [0., 0., 0.]
])
def testDtypeConversionWhenTensorDtypeMismatch(self):
t_0 = ops.convert_to_tensor([0., 0., 0.])
self.assertEqual(dtypes.float32, t_0.dtype)
t_1 = ops.convert_to_tensor([0, 0, 0])
self.assertEqual(dtypes.int32, t_1.dtype)
t_2 = ops.convert_to_tensor([t_0, t_0, t_1], dtype=dtypes.float64)
self.assertEqual(dtypes.float64, t_2.dtype)
def testPlaceholder(self):
with self.test_session(use_gpu=True):
# Test using placeholder with a defined shape.
ph_0 = array_ops.placeholder(dtypes.int32, shape=[])
result_0 = ops.convert_to_tensor([[0, 0, 0], [0, ph_0, 0], [0, 0, 0]])
self.assertAllEqual(
[[0, 0, 0], [0, 1, 0], [0, 0, 0]], result_0.eval(feed_dict={ph_0: 1}))
self.assertAllEqual(
[[0, 0, 0], [0, 2, 0], [0, 0, 0]], result_0.eval(feed_dict={ph_0: 2}))
# Test using placeholder with an undefined shape.
ph_1 = array_ops.placeholder(dtypes.int32)
result_1 = ops.convert_to_tensor([[0, 0, 0], [0, ph_1, 0], [0, 0, 0]])
self.assertAllEqual(
[[0, 0, 0], [0, 1, 0], [0, 0, 0]], result_1.eval(feed_dict={ph_1: 1}))
self.assertAllEqual(
[[0, 0, 0], [0, 2, 0], [0, 0, 0]], result_1.eval(feed_dict={ph_1: 2}))
def testShapeErrors(self):
# Static shape error.
ph_0 = array_ops.placeholder(dtypes.int32, shape=[1])
with self.assertRaises(ValueError):
ops.convert_to_tensor([[0, 0, 0], [0, ph_0, 0], [0, 0, 0]])
# Dynamic shape error.
ph_1 = array_ops.placeholder(dtypes.int32)
result_1 = ops.convert_to_tensor([[0, 0, 0], [0, ph_1, 0], [0, 0, 0]])
with self.test_session(use_gpu=True):
with self.assertRaises(errors_impl.InvalidArgumentError):
result_1.eval(feed_dict={ph_1: [1]})
if __name__ == "__main__":
test.main()
| apache-2.0 |
jeffmahoney/supybot | plugins/Topic/config.py | 9 | 3866 | ###
# Copyright (c) 2005, Jeremiah Fincher
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
import supybot.conf as conf
import supybot.registry as registry
def configure(advanced):
# This will be called by supybot to configure this module. advanced is
# a bool that specifies whether the user identified himself as an advanced
# user or not. You should effect your configuration by manipulating the
# registry as appropriate.
from supybot.questions import expect, anything, something, yn
conf.registerPlugin('Topic', True)
class TopicFormat(registry.TemplatedString):
"Value must include $topic, otherwise the actual topic would be left out."
requiredTemplates = ['topic']
Topic = conf.registerPlugin('Topic')
conf.registerChannelValue(Topic, 'separator',
registry.StringSurroundedBySpaces(' || ', """Determines what separator is
used between individually added topics in the channel topic."""))
conf.registerChannelValue(Topic, 'format',
TopicFormat('$topic ($nick)', """Determines what format is used to add
topics in the topic. All the standard substitutes apply, in addition to
"$topic" for the topic itself."""))
conf.registerChannelValue(Topic, 'recognizeTopiclen',
registry.Boolean(True, """Determines whether the bot will recognize the
TOPICLEN value sent to it by the server and thus refuse to send TOPICs
longer than the TOPICLEN. These topics are likely to be truncated by the
server anyway, so this defaults to True."""))
conf.registerChannelValue(Topic, 'default',
registry.String('', """Determines what the default topic for the channel
is. This is used by the default command to set this topic."""))
conf.registerGroup(Topic, 'undo')
conf.registerChannelValue(Topic.undo, 'max',
registry.NonNegativeInteger(10, """Determines the number of previous
topics to keep around in case the undo command is called."""))
conf.registerChannelValue(Topic, 'requireManageCapability',
registry.String('channel,op; channel,halfop',
"""Determines the
capabilities required (if any) to make any topic changes,
(everything except for read-only operations). Use 'channel,capab' for
channel-level capabilities.
Note that absence of an explicit anticapability means user has
capability."""))
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
| bsd-3-clause |
j717273419/ibus | ibus/keysyms.py | 15 | 30522 | # -*- Mode: Python; py-indent-offset: 4 -*-
# pygtk - Python bindings for the GTK toolkit.
# Copyright (C) 1998-2003 James Henstridge
#
# gtk/keysyms.py: list of keysyms.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
VoidSymbol = 0xFFFFFF
BackSpace = 0xFF08
Tab = 0xFF09
Linefeed = 0xFF0A
Clear = 0xFF0B
Return = 0xFF0D
Pause = 0xFF13
Scroll_Lock = 0xFF14
Sys_Req = 0xFF15
Escape = 0xFF1B
Delete = 0xFFFF
Multi_key = 0xFF20
Codeinput = 0xFF37
SingleCandidate = 0xFF3C
MultipleCandidate = 0xFF3D
PreviousCandidate = 0xFF3E
Kanji = 0xFF21
Muhenkan = 0xFF22
Henkan_Mode = 0xFF23
Henkan = 0xFF23
Romaji = 0xFF24
Hiragana = 0xFF25
Katakana = 0xFF26
Hiragana_Katakana = 0xFF27
Zenkaku = 0xFF28
Hankaku = 0xFF29
Zenkaku_Hankaku = 0xFF2A
Touroku = 0xFF2B
Massyo = 0xFF2C
Kana_Lock = 0xFF2D
Kana_Shift = 0xFF2E
Eisu_Shift = 0xFF2F
Eisu_toggle = 0xFF30
Kanji_Bangou = 0xFF37
Zen_Koho = 0xFF3D
Mae_Koho = 0xFF3E
Home = 0xFF50
Left = 0xFF51
Up = 0xFF52
Right = 0xFF53
Down = 0xFF54
Prior = 0xFF55
Page_Up = 0xFF55
Next = 0xFF56
Page_Down = 0xFF56
End = 0xFF57
Begin = 0xFF58
Select = 0xFF60
Print = 0xFF61
Execute = 0xFF62
Insert = 0xFF63
Undo = 0xFF65
Redo = 0xFF66
Menu = 0xFF67
Find = 0xFF68
Cancel = 0xFF69
Help = 0xFF6A
Break = 0xFF6B
Mode_switch = 0xFF7E
script_switch = 0xFF7E
Num_Lock = 0xFF7F
KP_Space = 0xFF80
KP_Tab = 0xFF89
KP_Enter = 0xFF8D
KP_F1 = 0xFF91
KP_F2 = 0xFF92
KP_F3 = 0xFF93
KP_F4 = 0xFF94
KP_Home = 0xFF95
KP_Left = 0xFF96
KP_Up = 0xFF97
KP_Right = 0xFF98
KP_Down = 0xFF99
KP_Prior = 0xFF9A
KP_Page_Up = 0xFF9A
KP_Next = 0xFF9B
KP_Page_Down = 0xFF9B
KP_End = 0xFF9C
KP_Begin = 0xFF9D
KP_Insert = 0xFF9E
KP_Delete = 0xFF9F
KP_Equal = 0xFFBD
KP_Multiply = 0xFFAA
KP_Add = 0xFFAB
KP_Separator = 0xFFAC
KP_Subtract = 0xFFAD
KP_Decimal = 0xFFAE
KP_Divide = 0xFFAF
KP_0 = 0xFFB0
KP_1 = 0xFFB1
KP_2 = 0xFFB2
KP_3 = 0xFFB3
KP_4 = 0xFFB4
KP_5 = 0xFFB5
KP_6 = 0xFFB6
KP_7 = 0xFFB7
KP_8 = 0xFFB8
KP_9 = 0xFFB9
F1 = 0xFFBE
F2 = 0xFFBF
F3 = 0xFFC0
F4 = 0xFFC1
F5 = 0xFFC2
F6 = 0xFFC3
F7 = 0xFFC4
F8 = 0xFFC5
F9 = 0xFFC6
F10 = 0xFFC7
F11 = 0xFFC8
L1 = 0xFFC8
F12 = 0xFFC9
L2 = 0xFFC9
F13 = 0xFFCA
L3 = 0xFFCA
F14 = 0xFFCB
L4 = 0xFFCB
F15 = 0xFFCC
L5 = 0xFFCC
F16 = 0xFFCD
L6 = 0xFFCD
F17 = 0xFFCE
L7 = 0xFFCE
F18 = 0xFFCF
L8 = 0xFFCF
F19 = 0xFFD0
L9 = 0xFFD0
F20 = 0xFFD1
L10 = 0xFFD1
F21 = 0xFFD2
R1 = 0xFFD2
F22 = 0xFFD3
R2 = 0xFFD3
F23 = 0xFFD4
R3 = 0xFFD4
F24 = 0xFFD5
R4 = 0xFFD5
F25 = 0xFFD6
R5 = 0xFFD6
F26 = 0xFFD7
R6 = 0xFFD7
F27 = 0xFFD8
R7 = 0xFFD8
F28 = 0xFFD9
R8 = 0xFFD9
F29 = 0xFFDA
R9 = 0xFFDA
F30 = 0xFFDB
R10 = 0xFFDB
F31 = 0xFFDC
R11 = 0xFFDC
F32 = 0xFFDD
R12 = 0xFFDD
F33 = 0xFFDE
R13 = 0xFFDE
F34 = 0xFFDF
R14 = 0xFFDF
F35 = 0xFFE0
R15 = 0xFFE0
Shift_L = 0xFFE1
Shift_R = 0xFFE2
Control_L = 0xFFE3
Control_R = 0xFFE4
Caps_Lock = 0xFFE5
Shift_Lock = 0xFFE6
Meta_L = 0xFFE7
Meta_R = 0xFFE8
Alt_L = 0xFFE9
Alt_R = 0xFFEA
Super_L = 0xFFEB
Super_R = 0xFFEC
Hyper_L = 0xFFED
Hyper_R = 0xFFEE
ISO_Lock = 0xFE01
ISO_Level2_Latch = 0xFE02
ISO_Level3_Shift = 0xFE03
ISO_Level3_Latch = 0xFE04
ISO_Level3_Lock = 0xFE05
ISO_Group_Shift = 0xFF7E
ISO_Group_Latch = 0xFE06
ISO_Group_Lock = 0xFE07
ISO_Next_Group = 0xFE08
ISO_Next_Group_Lock = 0xFE09
ISO_Prev_Group = 0xFE0A
ISO_Prev_Group_Lock = 0xFE0B
ISO_First_Group = 0xFE0C
ISO_First_Group_Lock = 0xFE0D
ISO_Last_Group = 0xFE0E
ISO_Last_Group_Lock = 0xFE0F
ISO_Left_Tab = 0xFE20
ISO_Move_Line_Up = 0xFE21
ISO_Move_Line_Down = 0xFE22
ISO_Partial_Line_Up = 0xFE23
ISO_Partial_Line_Down = 0xFE24
ISO_Partial_Space_Left = 0xFE25
ISO_Partial_Space_Right = 0xFE26
ISO_Set_Margin_Left = 0xFE27
ISO_Set_Margin_Right = 0xFE28
ISO_Release_Margin_Left = 0xFE29
ISO_Release_Margin_Right = 0xFE2A
ISO_Release_Both_Margins = 0xFE2B
ISO_Fast_Cursor_Left = 0xFE2C
ISO_Fast_Cursor_Right = 0xFE2D
ISO_Fast_Cursor_Up = 0xFE2E
ISO_Fast_Cursor_Down = 0xFE2F
ISO_Continuous_Underline = 0xFE30
ISO_Discontinuous_Underline = 0xFE31
ISO_Emphasize = 0xFE32
ISO_Center_Object = 0xFE33
ISO_Enter = 0xFE34
dead_grave = 0xFE50
dead_acute = 0xFE51
dead_circumflex = 0xFE52
dead_tilde = 0xFE53
dead_macron = 0xFE54
dead_breve = 0xFE55
dead_abovedot = 0xFE56
dead_diaeresis = 0xFE57
dead_abovering = 0xFE58
dead_doubleacute = 0xFE59
dead_caron = 0xFE5A
dead_cedilla = 0xFE5B
dead_ogonek = 0xFE5C
dead_iota = 0xFE5D
dead_voiced_sound = 0xFE5E
dead_semivoiced_sound = 0xFE5F
dead_belowdot = 0xFE60
First_Virtual_Screen = 0xFED0
Prev_Virtual_Screen = 0xFED1
Next_Virtual_Screen = 0xFED2
Last_Virtual_Screen = 0xFED4
Terminate_Server = 0xFED5
AccessX_Enable = 0xFE70
AccessX_Feedback_Enable = 0xFE71
RepeatKeys_Enable = 0xFE72
SlowKeys_Enable = 0xFE73
BounceKeys_Enable = 0xFE74
StickyKeys_Enable = 0xFE75
MouseKeys_Enable = 0xFE76
MouseKeys_Accel_Enable = 0xFE77
Overlay1_Enable = 0xFE78
Overlay2_Enable = 0xFE79
AudibleBell_Enable = 0xFE7A
Pointer_Left = 0xFEE0
Pointer_Right = 0xFEE1
Pointer_Up = 0xFEE2
Pointer_Down = 0xFEE3
Pointer_UpLeft = 0xFEE4
Pointer_UpRight = 0xFEE5
Pointer_DownLeft = 0xFEE6
Pointer_DownRight = 0xFEE7
Pointer_Button_Dflt = 0xFEE8
Pointer_Button1 = 0xFEE9
Pointer_Button2 = 0xFEEA
Pointer_Button3 = 0xFEEB
Pointer_Button4 = 0xFEEC
Pointer_Button5 = 0xFEED
Pointer_DblClick_Dflt = 0xFEEE
Pointer_DblClick1 = 0xFEEF
Pointer_DblClick2 = 0xFEF0
Pointer_DblClick3 = 0xFEF1
Pointer_DblClick4 = 0xFEF2
Pointer_DblClick5 = 0xFEF3
Pointer_Drag_Dflt = 0xFEF4
Pointer_Drag1 = 0xFEF5
Pointer_Drag2 = 0xFEF6
Pointer_Drag3 = 0xFEF7
Pointer_Drag4 = 0xFEF8
Pointer_Drag5 = 0xFEFD
Pointer_EnableKeys = 0xFEF9
Pointer_Accelerate = 0xFEFA
Pointer_DfltBtnNext = 0xFEFB
Pointer_DfltBtnPrev = 0xFEFC
_3270_Duplicate = 0xFD01
_3270_FieldMark = 0xFD02
_3270_Right2 = 0xFD03
_3270_Left2 = 0xFD04
_3270_BackTab = 0xFD05
_3270_EraseEOF = 0xFD06
_3270_EraseInput = 0xFD07
_3270_Reset = 0xFD08
_3270_Quit = 0xFD09
_3270_PA1 = 0xFD0A
_3270_PA2 = 0xFD0B
_3270_PA3 = 0xFD0C
_3270_Test = 0xFD0D
_3270_Attn = 0xFD0E
_3270_CursorBlink = 0xFD0F
_3270_AltCursor = 0xFD10
_3270_KeyClick = 0xFD11
_3270_Jump = 0xFD12
_3270_Ident = 0xFD13
_3270_Rule = 0xFD14
_3270_Copy = 0xFD15
_3270_Play = 0xFD16
_3270_Setup = 0xFD17
_3270_Record = 0xFD18
_3270_ChangeScreen = 0xFD19
_3270_DeleteWord = 0xFD1A
_3270_ExSelect = 0xFD1B
_3270_CursorSelect = 0xFD1C
_3270_PrintScreen = 0xFD1D
_3270_Enter = 0xFD1E
space = 0x020
exclam = 0x021
quotedbl = 0x022
numbersign = 0x023
dollar = 0x024
percent = 0x025
ampersand = 0x026
apostrophe = 0x027
quoteright = 0x027
parenleft = 0x028
parenright = 0x029
asterisk = 0x02a
plus = 0x02b
comma = 0x02c
minus = 0x02d
period = 0x02e
slash = 0x02f
_0 = 0x030
_1 = 0x031
_2 = 0x032
_3 = 0x033
_4 = 0x034
_5 = 0x035
_6 = 0x036
_7 = 0x037
_8 = 0x038
_9 = 0x039
colon = 0x03a
semicolon = 0x03b
less = 0x03c
equal = 0x03d
greater = 0x03e
question = 0x03f
at = 0x040
A = 0x041
B = 0x042
C = 0x043
D = 0x044
E = 0x045
F = 0x046
G = 0x047
H = 0x048
I = 0x049
J = 0x04a
K = 0x04b
L = 0x04c
M = 0x04d
N = 0x04e
O = 0x04f
P = 0x050
Q = 0x051
R = 0x052
S = 0x053
T = 0x054
U = 0x055
V = 0x056
W = 0x057
X = 0x058
Y = 0x059
Z = 0x05a
bracketleft = 0x05b
backslash = 0x05c
bracketright = 0x05d
asciicircum = 0x05e
underscore = 0x05f
grave = 0x060
quoteleft = 0x060
a = 0x061
b = 0x062
c = 0x063
d = 0x064
e = 0x065
f = 0x066
g = 0x067
h = 0x068
i = 0x069
j = 0x06a
k = 0x06b
l = 0x06c
m = 0x06d
n = 0x06e
o = 0x06f
p = 0x070
q = 0x071
r = 0x072
s = 0x073
t = 0x074
u = 0x075
v = 0x076
w = 0x077
x = 0x078
y = 0x079
z = 0x07a
braceleft = 0x07b
bar = 0x07c
braceright = 0x07d
asciitilde = 0x07e
nobreakspace = 0x0a0
exclamdown = 0x0a1
cent = 0x0a2
sterling = 0x0a3
currency = 0x0a4
yen = 0x0a5
brokenbar = 0x0a6
section = 0x0a7
diaeresis = 0x0a8
copyright = 0x0a9
ordfeminine = 0x0aa
guillemotleft = 0x0ab
notsign = 0x0ac
hyphen = 0x0ad
registered = 0x0ae
macron = 0x0af
degree = 0x0b0
plusminus = 0x0b1
twosuperior = 0x0b2
threesuperior = 0x0b3
acute = 0x0b4
mu = 0x0b5
paragraph = 0x0b6
periodcentered = 0x0b7
cedilla = 0x0b8
onesuperior = 0x0b9
masculine = 0x0ba
guillemotright = 0x0bb
onequarter = 0x0bc
onehalf = 0x0bd
threequarters = 0x0be
questiondown = 0x0bf
Agrave = 0x0c0
Aacute = 0x0c1
Acircumflex = 0x0c2
Atilde = 0x0c3
Adiaeresis = 0x0c4
Aring = 0x0c5
AE = 0x0c6
Ccedilla = 0x0c7
Egrave = 0x0c8
Eacute = 0x0c9
Ecircumflex = 0x0ca
Ediaeresis = 0x0cb
Igrave = 0x0cc
Iacute = 0x0cd
Icircumflex = 0x0ce
Idiaeresis = 0x0cf
ETH = 0x0d0
Eth = 0x0d0
Ntilde = 0x0d1
Ograve = 0x0d2
Oacute = 0x0d3
Ocircumflex = 0x0d4
Otilde = 0x0d5
Odiaeresis = 0x0d6
multiply = 0x0d7
Ooblique = 0x0d8
Ugrave = 0x0d9
Uacute = 0x0da
Ucircumflex = 0x0db
Udiaeresis = 0x0dc
Yacute = 0x0dd
THORN = 0x0de
Thorn = 0x0de
ssharp = 0x0df
agrave = 0x0e0
aacute = 0x0e1
acircumflex = 0x0e2
atilde = 0x0e3
adiaeresis = 0x0e4
aring = 0x0e5
ae = 0x0e6
ccedilla = 0x0e7
egrave = 0x0e8
eacute = 0x0e9
ecircumflex = 0x0ea
ediaeresis = 0x0eb
igrave = 0x0ec
iacute = 0x0ed
icircumflex = 0x0ee
idiaeresis = 0x0ef
eth = 0x0f0
ntilde = 0x0f1
ograve = 0x0f2
oacute = 0x0f3
ocircumflex = 0x0f4
otilde = 0x0f5
odiaeresis = 0x0f6
division = 0x0f7
oslash = 0x0f8
ugrave = 0x0f9
uacute = 0x0fa
ucircumflex = 0x0fb
udiaeresis = 0x0fc
yacute = 0x0fd
thorn = 0x0fe
ydiaeresis = 0x0ff
Aogonek = 0x1a1
breve = 0x1a2
Lstroke = 0x1a3
Lcaron = 0x1a5
Sacute = 0x1a6
Scaron = 0x1a9
Scedilla = 0x1aa
Tcaron = 0x1ab
Zacute = 0x1ac
Zcaron = 0x1ae
Zabovedot = 0x1af
aogonek = 0x1b1
ogonek = 0x1b2
lstroke = 0x1b3
lcaron = 0x1b5
sacute = 0x1b6
caron = 0x1b7
scaron = 0x1b9
scedilla = 0x1ba
tcaron = 0x1bb
zacute = 0x1bc
doubleacute = 0x1bd
zcaron = 0x1be
zabovedot = 0x1bf
Racute = 0x1c0
Abreve = 0x1c3
Lacute = 0x1c5
Cacute = 0x1c6
Ccaron = 0x1c8
Eogonek = 0x1ca
Ecaron = 0x1cc
Dcaron = 0x1cf
Dstroke = 0x1d0
Nacute = 0x1d1
Ncaron = 0x1d2
Odoubleacute = 0x1d5
Rcaron = 0x1d8
Uring = 0x1d9
Udoubleacute = 0x1db
Tcedilla = 0x1de
racute = 0x1e0
abreve = 0x1e3
lacute = 0x1e5
cacute = 0x1e6
ccaron = 0x1e8
eogonek = 0x1ea
ecaron = 0x1ec
dcaron = 0x1ef
dstroke = 0x1f0
nacute = 0x1f1
ncaron = 0x1f2
odoubleacute = 0x1f5
udoubleacute = 0x1fb
rcaron = 0x1f8
uring = 0x1f9
tcedilla = 0x1fe
abovedot = 0x1ff
Hstroke = 0x2a1
Hcircumflex = 0x2a6
Iabovedot = 0x2a9
Gbreve = 0x2ab
Jcircumflex = 0x2ac
hstroke = 0x2b1
hcircumflex = 0x2b6
idotless = 0x2b9
gbreve = 0x2bb
jcircumflex = 0x2bc
Cabovedot = 0x2c5
Ccircumflex = 0x2c6
Gabovedot = 0x2d5
Gcircumflex = 0x2d8
Ubreve = 0x2dd
Scircumflex = 0x2de
cabovedot = 0x2e5
ccircumflex = 0x2e6
gabovedot = 0x2f5
gcircumflex = 0x2f8
ubreve = 0x2fd
scircumflex = 0x2fe
kra = 0x3a2
kappa = 0x3a2
Rcedilla = 0x3a3
Itilde = 0x3a5
Lcedilla = 0x3a6
Emacron = 0x3aa
Gcedilla = 0x3ab
Tslash = 0x3ac
rcedilla = 0x3b3
itilde = 0x3b5
lcedilla = 0x3b6
emacron = 0x3ba
gcedilla = 0x3bb
tslash = 0x3bc
ENG = 0x3bd
eng = 0x3bf
Amacron = 0x3c0
Iogonek = 0x3c7
Eabovedot = 0x3cc
Imacron = 0x3cf
Ncedilla = 0x3d1
Omacron = 0x3d2
Kcedilla = 0x3d3
Uogonek = 0x3d9
Utilde = 0x3dd
Umacron = 0x3de
amacron = 0x3e0
iogonek = 0x3e7
eabovedot = 0x3ec
imacron = 0x3ef
ncedilla = 0x3f1
omacron = 0x3f2
kcedilla = 0x3f3
uogonek = 0x3f9
utilde = 0x3fd
umacron = 0x3fe
OE = 0x13bc
oe = 0x13bd
Ydiaeresis = 0x13be
overline = 0x47e
kana_fullstop = 0x4a1
kana_openingbracket = 0x4a2
kana_closingbracket = 0x4a3
kana_comma = 0x4a4
kana_conjunctive = 0x4a5
kana_middledot = 0x4a5
kana_WO = 0x4a6
kana_a = 0x4a7
kana_i = 0x4a8
kana_u = 0x4a9
kana_e = 0x4aa
kana_o = 0x4ab
kana_ya = 0x4ac
kana_yu = 0x4ad
kana_yo = 0x4ae
kana_tsu = 0x4af
kana_tu = 0x4af
prolongedsound = 0x4b0
kana_A = 0x4b1
kana_I = 0x4b2
kana_U = 0x4b3
kana_E = 0x4b4
kana_O = 0x4b5
kana_KA = 0x4b6
kana_KI = 0x4b7
kana_KU = 0x4b8
kana_KE = 0x4b9
kana_KO = 0x4ba
kana_SA = 0x4bb
kana_SHI = 0x4bc
kana_SU = 0x4bd
kana_SE = 0x4be
kana_SO = 0x4bf
kana_TA = 0x4c0
kana_CHI = 0x4c1
kana_TI = 0x4c1
kana_TSU = 0x4c2
kana_TU = 0x4c2
kana_TE = 0x4c3
kana_TO = 0x4c4
kana_NA = 0x4c5
kana_NI = 0x4c6
kana_NU = 0x4c7
kana_NE = 0x4c8
kana_NO = 0x4c9
kana_HA = 0x4ca
kana_HI = 0x4cb
kana_FU = 0x4cc
kana_HU = 0x4cc
kana_HE = 0x4cd
kana_HO = 0x4ce
kana_MA = 0x4cf
kana_MI = 0x4d0
kana_MU = 0x4d1
kana_ME = 0x4d2
kana_MO = 0x4d3
kana_YA = 0x4d4
kana_YU = 0x4d5
kana_YO = 0x4d6
kana_RA = 0x4d7
kana_RI = 0x4d8
kana_RU = 0x4d9
kana_RE = 0x4da
kana_RO = 0x4db
kana_WA = 0x4dc
kana_N = 0x4dd
voicedsound = 0x4de
semivoicedsound = 0x4df
kana_switch = 0xFF7E
Arabic_comma = 0x5ac
Arabic_semicolon = 0x5bb
Arabic_question_mark = 0x5bf
Arabic_hamza = 0x5c1
Arabic_maddaonalef = 0x5c2
Arabic_hamzaonalef = 0x5c3
Arabic_hamzaonwaw = 0x5c4
Arabic_hamzaunderalef = 0x5c5
Arabic_hamzaonyeh = 0x5c6
Arabic_alef = 0x5c7
Arabic_beh = 0x5c8
Arabic_tehmarbuta = 0x5c9
Arabic_teh = 0x5ca
Arabic_theh = 0x5cb
Arabic_jeem = 0x5cc
Arabic_hah = 0x5cd
Arabic_khah = 0x5ce
Arabic_dal = 0x5cf
Arabic_thal = 0x5d0
Arabic_ra = 0x5d1
Arabic_zain = 0x5d2
Arabic_seen = 0x5d3
Arabic_sheen = 0x5d4
Arabic_sad = 0x5d5
Arabic_dad = 0x5d6
Arabic_tah = 0x5d7
Arabic_zah = 0x5d8
Arabic_ain = 0x5d9
Arabic_ghain = 0x5da
Arabic_tatweel = 0x5e0
Arabic_feh = 0x5e1
Arabic_qaf = 0x5e2
Arabic_kaf = 0x5e3
Arabic_lam = 0x5e4
Arabic_meem = 0x5e5
Arabic_noon = 0x5e6
Arabic_ha = 0x5e7
Arabic_heh = 0x5e7
Arabic_waw = 0x5e8
Arabic_alefmaksura = 0x5e9
Arabic_yeh = 0x5ea
Arabic_fathatan = 0x5eb
Arabic_dammatan = 0x5ec
Arabic_kasratan = 0x5ed
Arabic_fatha = 0x5ee
Arabic_damma = 0x5ef
Arabic_kasra = 0x5f0
Arabic_shadda = 0x5f1
Arabic_sukun = 0x5f2
Arabic_switch = 0xFF7E
Serbian_dje = 0x6a1
Macedonia_gje = 0x6a2
Cyrillic_io = 0x6a3
Ukrainian_ie = 0x6a4
Ukranian_je = 0x6a4
Macedonia_dse = 0x6a5
Ukrainian_i = 0x6a6
Ukranian_i = 0x6a6
Ukrainian_yi = 0x6a7
Ukranian_yi = 0x6a7
Cyrillic_je = 0x6a8
Serbian_je = 0x6a8
Cyrillic_lje = 0x6a9
Serbian_lje = 0x6a9
Cyrillic_nje = 0x6aa
Serbian_nje = 0x6aa
Serbian_tshe = 0x6ab
Macedonia_kje = 0x6ac
Ukrainian_ghe_with_upturn = 0x6ad
Byelorussian_shortu = 0x6ae
Cyrillic_dzhe = 0x6af
Serbian_dze = 0x6af
numerosign = 0x6b0
Serbian_DJE = 0x6b1
Macedonia_GJE = 0x6b2
Cyrillic_IO = 0x6b3
Ukrainian_IE = 0x6b4
Ukranian_JE = 0x6b4
Macedonia_DSE = 0x6b5
Ukrainian_I = 0x6b6
Ukranian_I = 0x6b6
Ukrainian_YI = 0x6b7
Ukranian_YI = 0x6b7
Cyrillic_JE = 0x6b8
Serbian_JE = 0x6b8
Cyrillic_LJE = 0x6b9
Serbian_LJE = 0x6b9
Cyrillic_NJE = 0x6ba
Serbian_NJE = 0x6ba
Serbian_TSHE = 0x6bb
Macedonia_KJE = 0x6bc
Ukrainian_GHE_WITH_UPTURN = 0x6bd
Byelorussian_SHORTU = 0x6be
Cyrillic_DZHE = 0x6bf
Serbian_DZE = 0x6bf
Cyrillic_yu = 0x6c0
Cyrillic_a = 0x6c1
Cyrillic_be = 0x6c2
Cyrillic_tse = 0x6c3
Cyrillic_de = 0x6c4
Cyrillic_ie = 0x6c5
Cyrillic_ef = 0x6c6
Cyrillic_ghe = 0x6c7
Cyrillic_ha = 0x6c8
Cyrillic_i = 0x6c9
Cyrillic_shorti = 0x6ca
Cyrillic_ka = 0x6cb
Cyrillic_el = 0x6cc
Cyrillic_em = 0x6cd
Cyrillic_en = 0x6ce
Cyrillic_o = 0x6cf
Cyrillic_pe = 0x6d0
Cyrillic_ya = 0x6d1
Cyrillic_er = 0x6d2
Cyrillic_es = 0x6d3
Cyrillic_te = 0x6d4
Cyrillic_u = 0x6d5
Cyrillic_zhe = 0x6d6
Cyrillic_ve = 0x6d7
Cyrillic_softsign = 0x6d8
Cyrillic_yeru = 0x6d9
Cyrillic_ze = 0x6da
Cyrillic_sha = 0x6db
Cyrillic_e = 0x6dc
Cyrillic_shcha = 0x6dd
Cyrillic_che = 0x6de
Cyrillic_hardsign = 0x6df
Cyrillic_YU = 0x6e0
Cyrillic_A = 0x6e1
Cyrillic_BE = 0x6e2
Cyrillic_TSE = 0x6e3
Cyrillic_DE = 0x6e4
Cyrillic_IE = 0x6e5
Cyrillic_EF = 0x6e6
Cyrillic_GHE = 0x6e7
Cyrillic_HA = 0x6e8
Cyrillic_I = 0x6e9
Cyrillic_SHORTI = 0x6ea
Cyrillic_KA = 0x6eb
Cyrillic_EL = 0x6ec
Cyrillic_EM = 0x6ed
Cyrillic_EN = 0x6ee
Cyrillic_O = 0x6ef
Cyrillic_PE = 0x6f0
Cyrillic_YA = 0x6f1
Cyrillic_ER = 0x6f2
Cyrillic_ES = 0x6f3
Cyrillic_TE = 0x6f4
Cyrillic_U = 0x6f5
Cyrillic_ZHE = 0x6f6
Cyrillic_VE = 0x6f7
Cyrillic_SOFTSIGN = 0x6f8
Cyrillic_YERU = 0x6f9
Cyrillic_ZE = 0x6fa
Cyrillic_SHA = 0x6fb
Cyrillic_E = 0x6fc
Cyrillic_SHCHA = 0x6fd
Cyrillic_CHE = 0x6fe
Cyrillic_HARDSIGN = 0x6ff
Greek_ALPHAaccent = 0x7a1
Greek_EPSILONaccent = 0x7a2
Greek_ETAaccent = 0x7a3
Greek_IOTAaccent = 0x7a4
Greek_IOTAdiaeresis = 0x7a5
Greek_OMICRONaccent = 0x7a7
Greek_UPSILONaccent = 0x7a8
Greek_UPSILONdieresis = 0x7a9
Greek_OMEGAaccent = 0x7ab
Greek_accentdieresis = 0x7ae
Greek_horizbar = 0x7af
Greek_alphaaccent = 0x7b1
Greek_epsilonaccent = 0x7b2
Greek_etaaccent = 0x7b3
Greek_iotaaccent = 0x7b4
Greek_iotadieresis = 0x7b5
Greek_iotaaccentdieresis = 0x7b6
Greek_omicronaccent = 0x7b7
Greek_upsilonaccent = 0x7b8
Greek_upsilondieresis = 0x7b9
Greek_upsilonaccentdieresis = 0x7ba
Greek_omegaaccent = 0x7bb
Greek_ALPHA = 0x7c1
Greek_BETA = 0x7c2
Greek_GAMMA = 0x7c3
Greek_DELTA = 0x7c4
Greek_EPSILON = 0x7c5
Greek_ZETA = 0x7c6
Greek_ETA = 0x7c7
Greek_THETA = 0x7c8
Greek_IOTA = 0x7c9
Greek_KAPPA = 0x7ca
Greek_LAMDA = 0x7cb
Greek_LAMBDA = 0x7cb
Greek_MU = 0x7cc
Greek_NU = 0x7cd
Greek_XI = 0x7ce
Greek_OMICRON = 0x7cf
Greek_PI = 0x7d0
Greek_RHO = 0x7d1
Greek_SIGMA = 0x7d2
Greek_TAU = 0x7d4
Greek_UPSILON = 0x7d5
Greek_PHI = 0x7d6
Greek_CHI = 0x7d7
Greek_PSI = 0x7d8
Greek_OMEGA = 0x7d9
Greek_alpha = 0x7e1
Greek_beta = 0x7e2
Greek_gamma = 0x7e3
Greek_delta = 0x7e4
Greek_epsilon = 0x7e5
Greek_zeta = 0x7e6
Greek_eta = 0x7e7
Greek_theta = 0x7e8
Greek_iota = 0x7e9
Greek_kappa = 0x7ea
Greek_lamda = 0x7eb
Greek_lambda = 0x7eb
Greek_mu = 0x7ec
Greek_nu = 0x7ed
Greek_xi = 0x7ee
Greek_omicron = 0x7ef
Greek_pi = 0x7f0
Greek_rho = 0x7f1
Greek_sigma = 0x7f2
Greek_finalsmallsigma = 0x7f3
Greek_tau = 0x7f4
Greek_upsilon = 0x7f5
Greek_phi = 0x7f6
Greek_chi = 0x7f7
Greek_psi = 0x7f8
Greek_omega = 0x7f9
Greek_switch = 0xFF7E
leftradical = 0x8a1
topleftradical = 0x8a2
horizconnector = 0x8a3
topintegral = 0x8a4
botintegral = 0x8a5
vertconnector = 0x8a6
topleftsqbracket = 0x8a7
botleftsqbracket = 0x8a8
toprightsqbracket = 0x8a9
botrightsqbracket = 0x8aa
topleftparens = 0x8ab
botleftparens = 0x8ac
toprightparens = 0x8ad
botrightparens = 0x8ae
leftmiddlecurlybrace = 0x8af
rightmiddlecurlybrace = 0x8b0
topleftsummation = 0x8b1
botleftsummation = 0x8b2
topvertsummationconnector = 0x8b3
botvertsummationconnector = 0x8b4
toprightsummation = 0x8b5
botrightsummation = 0x8b6
rightmiddlesummation = 0x8b7
lessthanequal = 0x8bc
notequal = 0x8bd
greaterthanequal = 0x8be
integral = 0x8bf
therefore = 0x8c0
variation = 0x8c1
infinity = 0x8c2
nabla = 0x8c5
approximate = 0x8c8
similarequal = 0x8c9
ifonlyif = 0x8cd
implies = 0x8ce
identical = 0x8cf
radical = 0x8d6
includedin = 0x8da
includes = 0x8db
intersection = 0x8dc
union = 0x8dd
logicaland = 0x8de
logicalor = 0x8df
partialderivative = 0x8ef
function = 0x8f6
leftarrow = 0x8fb
uparrow = 0x8fc
rightarrow = 0x8fd
downarrow = 0x8fe
blank = 0x9df
soliddiamond = 0x9e0
checkerboard = 0x9e1
ht = 0x9e2
ff = 0x9e3
cr = 0x9e4
lf = 0x9e5
nl = 0x9e8
vt = 0x9e9
lowrightcorner = 0x9ea
uprightcorner = 0x9eb
upleftcorner = 0x9ec
lowleftcorner = 0x9ed
crossinglines = 0x9ee
horizlinescan1 = 0x9ef
horizlinescan3 = 0x9f0
horizlinescan5 = 0x9f1
horizlinescan7 = 0x9f2
horizlinescan9 = 0x9f3
leftt = 0x9f4
rightt = 0x9f5
bott = 0x9f6
topt = 0x9f7
vertbar = 0x9f8
emspace = 0xaa1
enspace = 0xaa2
em3space = 0xaa3
em4space = 0xaa4
digitspace = 0xaa5
punctspace = 0xaa6
thinspace = 0xaa7
hairspace = 0xaa8
emdash = 0xaa9
endash = 0xaaa
signifblank = 0xaac
ellipsis = 0xaae
doubbaselinedot = 0xaaf
onethird = 0xab0
twothirds = 0xab1
onefifth = 0xab2
twofifths = 0xab3
threefifths = 0xab4
fourfifths = 0xab5
onesixth = 0xab6
fivesixths = 0xab7
careof = 0xab8
figdash = 0xabb
leftanglebracket = 0xabc
decimalpoint = 0xabd
rightanglebracket = 0xabe
marker = 0xabf
oneeighth = 0xac3
threeeighths = 0xac4
fiveeighths = 0xac5
seveneighths = 0xac6
trademark = 0xac9
signaturemark = 0xaca
trademarkincircle = 0xacb
leftopentriangle = 0xacc
rightopentriangle = 0xacd
emopencircle = 0xace
emopenrectangle = 0xacf
leftsinglequotemark = 0xad0
rightsinglequotemark = 0xad1
leftdoublequotemark = 0xad2
rightdoublequotemark = 0xad3
prescription = 0xad4
minutes = 0xad6
seconds = 0xad7
latincross = 0xad9
hexagram = 0xada
filledrectbullet = 0xadb
filledlefttribullet = 0xadc
filledrighttribullet = 0xadd
emfilledcircle = 0xade
emfilledrect = 0xadf
enopencircbullet = 0xae0
enopensquarebullet = 0xae1
openrectbullet = 0xae2
opentribulletup = 0xae3
opentribulletdown = 0xae4
openstar = 0xae5
enfilledcircbullet = 0xae6
enfilledsqbullet = 0xae7
filledtribulletup = 0xae8
filledtribulletdown = 0xae9
leftpointer = 0xaea
rightpointer = 0xaeb
club = 0xaec
diamond = 0xaed
heart = 0xaee
maltesecross = 0xaf0
dagger = 0xaf1
doubledagger = 0xaf2
checkmark = 0xaf3
ballotcross = 0xaf4
musicalsharp = 0xaf5
musicalflat = 0xaf6
malesymbol = 0xaf7
femalesymbol = 0xaf8
telephone = 0xaf9
telephonerecorder = 0xafa
phonographcopyright = 0xafb
caret = 0xafc
singlelowquotemark = 0xafd
doublelowquotemark = 0xafe
cursor = 0xaff
leftcaret = 0xba3
rightcaret = 0xba6
downcaret = 0xba8
upcaret = 0xba9
overbar = 0xbc0
downtack = 0xbc2
upshoe = 0xbc3
downstile = 0xbc4
underbar = 0xbc6
jot = 0xbca
quad = 0xbcc
uptack = 0xbce
circle = 0xbcf
upstile = 0xbd3
downshoe = 0xbd6
rightshoe = 0xbd8
leftshoe = 0xbda
lefttack = 0xbdc
righttack = 0xbfc
hebrew_doublelowline = 0xcdf
hebrew_aleph = 0xce0
hebrew_bet = 0xce1
hebrew_beth = 0xce1
hebrew_gimel = 0xce2
hebrew_gimmel = 0xce2
hebrew_dalet = 0xce3
hebrew_daleth = 0xce3
hebrew_he = 0xce4
hebrew_waw = 0xce5
hebrew_zain = 0xce6
hebrew_zayin = 0xce6
hebrew_chet = 0xce7
hebrew_het = 0xce7
hebrew_tet = 0xce8
hebrew_teth = 0xce8
hebrew_yod = 0xce9
hebrew_finalkaph = 0xcea
hebrew_kaph = 0xceb
hebrew_lamed = 0xcec
hebrew_finalmem = 0xced
hebrew_mem = 0xcee
hebrew_finalnun = 0xcef
hebrew_nun = 0xcf0
hebrew_samech = 0xcf1
hebrew_samekh = 0xcf1
hebrew_ayin = 0xcf2
hebrew_finalpe = 0xcf3
hebrew_pe = 0xcf4
hebrew_finalzade = 0xcf5
hebrew_finalzadi = 0xcf5
hebrew_zade = 0xcf6
hebrew_zadi = 0xcf6
hebrew_qoph = 0xcf7
hebrew_kuf = 0xcf7
hebrew_resh = 0xcf8
hebrew_shin = 0xcf9
hebrew_taw = 0xcfa
hebrew_taf = 0xcfa
Hebrew_switch = 0xFF7E
Thai_kokai = 0xda1
Thai_khokhai = 0xda2
Thai_khokhuat = 0xda3
Thai_khokhwai = 0xda4
Thai_khokhon = 0xda5
Thai_khorakhang = 0xda6
Thai_ngongu = 0xda7
Thai_chochan = 0xda8
Thai_choching = 0xda9
Thai_chochang = 0xdaa
Thai_soso = 0xdab
Thai_chochoe = 0xdac
Thai_yoying = 0xdad
Thai_dochada = 0xdae
Thai_topatak = 0xdaf
Thai_thothan = 0xdb0
Thai_thonangmontho = 0xdb1
Thai_thophuthao = 0xdb2
Thai_nonen = 0xdb3
Thai_dodek = 0xdb4
Thai_totao = 0xdb5
Thai_thothung = 0xdb6
Thai_thothahan = 0xdb7
Thai_thothong = 0xdb8
Thai_nonu = 0xdb9
Thai_bobaimai = 0xdba
Thai_popla = 0xdbb
Thai_phophung = 0xdbc
Thai_fofa = 0xdbd
Thai_phophan = 0xdbe
Thai_fofan = 0xdbf
Thai_phosamphao = 0xdc0
Thai_moma = 0xdc1
Thai_yoyak = 0xdc2
Thai_rorua = 0xdc3
Thai_ru = 0xdc4
Thai_loling = 0xdc5
Thai_lu = 0xdc6
Thai_wowaen = 0xdc7
Thai_sosala = 0xdc8
Thai_sorusi = 0xdc9
Thai_sosua = 0xdca
Thai_hohip = 0xdcb
Thai_lochula = 0xdcc
Thai_oang = 0xdcd
Thai_honokhuk = 0xdce
Thai_paiyannoi = 0xdcf
Thai_saraa = 0xdd0
Thai_maihanakat = 0xdd1
Thai_saraaa = 0xdd2
Thai_saraam = 0xdd3
Thai_sarai = 0xdd4
Thai_saraii = 0xdd5
Thai_saraue = 0xdd6
Thai_sarauee = 0xdd7
Thai_sarau = 0xdd8
Thai_sarauu = 0xdd9
Thai_phinthu = 0xdda
Thai_maihanakat_maitho = 0xdde
Thai_baht = 0xddf
Thai_sarae = 0xde0
Thai_saraae = 0xde1
Thai_sarao = 0xde2
Thai_saraaimaimuan = 0xde3
Thai_saraaimaimalai = 0xde4
Thai_lakkhangyao = 0xde5
Thai_maiyamok = 0xde6
Thai_maitaikhu = 0xde7
Thai_maiek = 0xde8
Thai_maitho = 0xde9
Thai_maitri = 0xdea
Thai_maichattawa = 0xdeb
Thai_thanthakhat = 0xdec
Thai_nikhahit = 0xded
Thai_leksun = 0xdf0
Thai_leknung = 0xdf1
Thai_leksong = 0xdf2
Thai_leksam = 0xdf3
Thai_leksi = 0xdf4
Thai_lekha = 0xdf5
Thai_lekhok = 0xdf6
Thai_lekchet = 0xdf7
Thai_lekpaet = 0xdf8
Thai_lekkao = 0xdf9
Hangul = 0xff31
Hangul_Start = 0xff32
Hangul_End = 0xff33
Hangul_Hanja = 0xff34
Hangul_Jamo = 0xff35
Hangul_Romaja = 0xff36
Hangul_Codeinput = 0xff37
Hangul_Jeonja = 0xff38
Hangul_Banja = 0xff39
Hangul_PreHanja = 0xff3a
Hangul_PostHanja = 0xff3b
Hangul_SingleCandidate = 0xff3c
Hangul_MultipleCandidate = 0xff3d
Hangul_PreviousCandidate = 0xff3e
Hangul_Special = 0xff3f
Hangul_switch = 0xFF7E
Hangul_Kiyeog = 0xea1
Hangul_SsangKiyeog = 0xea2
Hangul_KiyeogSios = 0xea3
Hangul_Nieun = 0xea4
Hangul_NieunJieuj = 0xea5
Hangul_NieunHieuh = 0xea6
Hangul_Dikeud = 0xea7
Hangul_SsangDikeud = 0xea8
Hangul_Rieul = 0xea9
Hangul_RieulKiyeog = 0xeaa
Hangul_RieulMieum = 0xeab
Hangul_RieulPieub = 0xeac
Hangul_RieulSios = 0xead
Hangul_RieulTieut = 0xeae
Hangul_RieulPhieuf = 0xeaf
Hangul_RieulHieuh = 0xeb0
Hangul_Mieum = 0xeb1
Hangul_Pieub = 0xeb2
Hangul_SsangPieub = 0xeb3
Hangul_PieubSios = 0xeb4
Hangul_Sios = 0xeb5
Hangul_SsangSios = 0xeb6
Hangul_Ieung = 0xeb7
Hangul_Jieuj = 0xeb8
Hangul_SsangJieuj = 0xeb9
Hangul_Cieuc = 0xeba
Hangul_Khieuq = 0xebb
Hangul_Tieut = 0xebc
Hangul_Phieuf = 0xebd
Hangul_Hieuh = 0xebe
Hangul_A = 0xebf
Hangul_AE = 0xec0
Hangul_YA = 0xec1
Hangul_YAE = 0xec2
Hangul_EO = 0xec3
Hangul_E = 0xec4
Hangul_YEO = 0xec5
Hangul_YE = 0xec6
Hangul_O = 0xec7
Hangul_WA = 0xec8
Hangul_WAE = 0xec9
Hangul_OE = 0xeca
Hangul_YO = 0xecb
Hangul_U = 0xecc
Hangul_WEO = 0xecd
Hangul_WE = 0xece
Hangul_WI = 0xecf
Hangul_YU = 0xed0
Hangul_EU = 0xed1
Hangul_YI = 0xed2
Hangul_I = 0xed3
Hangul_J_Kiyeog = 0xed4
Hangul_J_SsangKiyeog = 0xed5
Hangul_J_KiyeogSios = 0xed6
Hangul_J_Nieun = 0xed7
Hangul_J_NieunJieuj = 0xed8
Hangul_J_NieunHieuh = 0xed9
Hangul_J_Dikeud = 0xeda
Hangul_J_Rieul = 0xedb
Hangul_J_RieulKiyeog = 0xedc
Hangul_J_RieulMieum = 0xedd
Hangul_J_RieulPieub = 0xede
Hangul_J_RieulSios = 0xedf
Hangul_J_RieulTieut = 0xee0
Hangul_J_RieulPhieuf = 0xee1
Hangul_J_RieulHieuh = 0xee2
Hangul_J_Mieum = 0xee3
Hangul_J_Pieub = 0xee4
Hangul_J_PieubSios = 0xee5
Hangul_J_Sios = 0xee6
Hangul_J_SsangSios = 0xee7
Hangul_J_Ieung = 0xee8
Hangul_J_Jieuj = 0xee9
Hangul_J_Cieuc = 0xeea
Hangul_J_Khieuq = 0xeeb
Hangul_J_Tieut = 0xeec
Hangul_J_Phieuf = 0xeed
Hangul_J_Hieuh = 0xeee
Hangul_RieulYeorinHieuh = 0xeef
Hangul_SunkyeongeumMieum = 0xef0
Hangul_SunkyeongeumPieub = 0xef1
Hangul_PanSios = 0xef2
Hangul_KkogjiDalrinIeung = 0xef3
Hangul_SunkyeongeumPhieuf = 0xef4
Hangul_YeorinHieuh = 0xef5
Hangul_AraeA = 0xef6
Hangul_AraeAE = 0xef7
Hangul_J_PanSios = 0xef8
Hangul_J_KkogjiDalrinIeung = 0xef9
Hangul_J_YeorinHieuh = 0xefa
Korean_Won = 0xeff
Armenian_eternity = 0x14a1
Armenian_section_sign = 0x14a2
Armenian_full_stop = 0x14a3
Armenian_verjaket = 0x14a3
Armenian_parenright = 0x14a4
Armenian_parenleft = 0x14a5
Armenian_guillemotright = 0x14a6
Armenian_guillemotleft = 0x14a7
Armenian_em_dash = 0x14a8
Armenian_dot = 0x14a9
Armenian_mijaket = 0x14a9
Armenian_separation_mark = 0x14aa
Armenian_but = 0x14aa
Armenian_comma = 0x14ab
Armenian_en_dash = 0x14ac
Armenian_hyphen = 0x14ad
Armenian_yentamna = 0x14ad
Armenian_ellipsis = 0x14ae
Armenian_exclam = 0x14af
Armenian_amanak = 0x14af
Armenian_accent = 0x14b0
Armenian_shesht = 0x14b0
Armenian_question = 0x14b1
Armenian_paruyk = 0x14b1
Armenian_AYB = 0x14b2
Armenian_ayb = 0x14b3
Armenian_BEN = 0x14b4
Armenian_ben = 0x14b5
Armenian_GIM = 0x14b6
Armenian_gim = 0x14b7
Armenian_DA = 0x14b8
Armenian_da = 0x14b9
Armenian_YECH = 0x14ba
Armenian_yech = 0x14bb
Armenian_ZA = 0x14bc
Armenian_za = 0x14bd
Armenian_E = 0x14be
Armenian_e = 0x14bf
Armenian_AT = 0x14c0
Armenian_at = 0x14c1
Armenian_TO = 0x14c2
Armenian_to = 0x14c3
Armenian_ZHE = 0x14c4
Armenian_zhe = 0x14c5
Armenian_INI = 0x14c6
Armenian_ini = 0x14c7
Armenian_LYUN = 0x14c8
Armenian_lyun = 0x14c9
Armenian_KHE = 0x14ca
Armenian_khe = 0x14cb
Armenian_TSA = 0x14cc
Armenian_tsa = 0x14cd
Armenian_KEN = 0x14ce
Armenian_ken = 0x14cf
Armenian_HO = 0x14d0
Armenian_ho = 0x14d1
Armenian_DZA = 0x14d2
Armenian_dza = 0x14d3
Armenian_GHAT = 0x14d4
Armenian_ghat = 0x14d5
Armenian_TCHE = 0x14d6
Armenian_tche = 0x14d7
Armenian_MEN = 0x14d8
Armenian_men = 0x14d9
Armenian_HI = 0x14da
Armenian_hi = 0x14db
Armenian_NU = 0x14dc
Armenian_nu = 0x14dd
Armenian_SHA = 0x14de
Armenian_sha = 0x14df
Armenian_VO = 0x14e0
Armenian_vo = 0x14e1
Armenian_CHA = 0x14e2
Armenian_cha = 0x14e3
Armenian_PE = 0x14e4
Armenian_pe = 0x14e5
Armenian_JE = 0x14e6
Armenian_je = 0x14e7
Armenian_RA = 0x14e8
Armenian_ra = 0x14e9
Armenian_SE = 0x14ea
Armenian_se = 0x14eb
Armenian_VEV = 0x14ec
Armenian_vev = 0x14ed
Armenian_TYUN = 0x14ee
Armenian_tyun = 0x14ef
Armenian_RE = 0x14f0
Armenian_re = 0x14f1
Armenian_TSO = 0x14f2
Armenian_tso = 0x14f3
Armenian_VYUN = 0x14f4
Armenian_vyun = 0x14f5
Armenian_PYUR = 0x14f6
Armenian_pyur = 0x14f7
Armenian_KE = 0x14f8
Armenian_ke = 0x14f9
Armenian_O = 0x14fa
Armenian_o = 0x14fb
Armenian_FE = 0x14fc
Armenian_fe = 0x14fd
Armenian_apostrophe = 0x14fe
Armenian_ligature_ew = 0x14ff
Georgian_an = 0x15d0
Georgian_ban = 0x15d1
Georgian_gan = 0x15d2
Georgian_don = 0x15d3
Georgian_en = 0x15d4
Georgian_vin = 0x15d5
Georgian_zen = 0x15d6
Georgian_tan = 0x15d7
Georgian_in = 0x15d8
Georgian_kan = 0x15d9
Georgian_las = 0x15da
Georgian_man = 0x15db
Georgian_nar = 0x15dc
Georgian_on = 0x15dd
Georgian_par = 0x15de
Georgian_zhar = 0x15df
Georgian_rae = 0x15e0
Georgian_san = 0x15e1
Georgian_tar = 0x15e2
Georgian_un = 0x15e3
Georgian_phar = 0x15e4
Georgian_khar = 0x15e5
Georgian_ghan = 0x15e6
Georgian_qar = 0x15e7
Georgian_shin = 0x15e8
Georgian_chin = 0x15e9
Georgian_can = 0x15ea
Georgian_jil = 0x15eb
Georgian_cil = 0x15ec
Georgian_char = 0x15ed
Georgian_xan = 0x15ee
Georgian_jhan = 0x15ef
Georgian_hae = 0x15f0
Georgian_he = 0x15f1
Georgian_hie = 0x15f2
Georgian_we = 0x15f3
Georgian_har = 0x15f4
Georgian_hoe = 0x15f5
Georgian_fi = 0x15f6
EcuSign = 0x20a0
ColonSign = 0x20a1
CruzeiroSign = 0x20a2
FFrancSign = 0x20a3
LiraSign = 0x20a4
MillSign = 0x20a5
NairaSign = 0x20a6
PesetaSign = 0x20a7
RupeeSign = 0x20a8
WonSign = 0x20a9
NewSheqelSign = 0x20aa
DongSign = 0x20ab
EuroSign = 0x20ac
__name_to_keycode = {}
__keycode_to_name = {}
for key, value in vars().items():
if key.startswith("__") or \
key in ("name_to_keycode", "keycode_to_name", "VoidSymbol"):
continue
if key.startswith("_"):
key = key[1:]
__name_to_keycode[key] = value
__keycode_to_name[value] = key
def name_to_keycode(name):
return __name_to_keycode.get(name, VoidSymbol)
def keycode_to_name(code):
    # dict.has_key() is deprecated in Python 2 and removed in Python 3;
    # the 'in' operator is the portable equivalent.
    if code in __keycode_to_name:
        return __keycode_to_name[code]
if code < 0xffff:
return "0x%04x" % code
else:
return "0x%06x" % code
| lgpl-2.1 |
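The tail of `keysyms.py` above builds forward and reverse lookup maps from the module's constants, strips the leading underscore that Python identifiers need for names like `_0` or `_3270_Duplicate`, and falls back to a hex string for unknown codes. A minimal self-contained sketch of that pattern (the `_KEYSYMS` dict here is a tiny illustrative subset, not the ibus API):

```python
# Sketch of the keysym lookup pattern: constants collected into forward and
# reverse maps, leading-underscore names stripped, unknown codes rendered
# as hex strings.  Values match the X11 keysyms listed above.
VoidSymbol = 0xFFFFFF

# Illustrative subset of the full constant table.
_KEYSYMS = {"BackSpace": 0xFF08, "Tab": 0xFF09, "_0": 0x030}

name_to_code = {}
code_to_name = {}
for name, code in _KEYSYMS.items():
    if name.startswith("_"):      # _0 .. _9 stand in for the digit keys
        name = name[1:]
    name_to_code[name] = code
    code_to_name[code] = name

def name_to_keycode(name):
    """Return the keycode for a keysym name, or VoidSymbol if unknown."""
    return name_to_code.get(name, VoidSymbol)

def keycode_to_name(code):
    """Return the keysym name for a code, or a hex string if unknown."""
    if code in code_to_name:      # 'in' works on both Python 2 and 3
        return code_to_name[code]
    if code <= 0xFFFF:
        return "0x%04x" % code
    return "0x%06x" % code

# Lookups in both directions:
tab_code = name_to_keycode("Tab")        # 0xFF09
zero_code = name_to_keycode("0")         # 0x030 (underscore stripped)
bs_name = keycode_to_name(0xFF08)        # "BackSpace"
unknown = keycode_to_name(0xABCD)        # "0xabcd" fallback
```

The reverse map silently keeps the last-registered name when several constants share one code (e.g. `Henkan_Mode`/`Henkan` both map to 0xFF23 in the full table), which is why aliases resolve to a single canonical name on the way back.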
bistaray/odoo-8.0-public | l10n_us_administrative/__openerp__.py | 2 | 1462 | # -*- coding: utf-8 -*-
######################################################################
#
# Note: Program metadata is available in /__init__.py
#
######################################################################
{
"name" : "United States - States & Counties",
"version" : "1.7",
"author" : "Ursa Information Systems",
"summary": "Add missing states, add US county equivalents with filter (Ursa).",
'description':
"""
Based on US Census information (updated 2014) - adds a dropdown list on the partner form view, filtered by state. In some states, sales tax rates are based on counties; where that is the case, this module can be used in conjunction with tax rules to determine sales tax.
Contributors
------------
* Scott Saunders <scosist@asphaltzipper.com>
""",
'maintainer': 'Ursa Information Systems',
'website': 'http://www.ursainfosystems.com',
'images': ['static/description/l10n_us_administrative.png'],
"category" : "Localization",
"depends" : ["base", ],
"init_xml" : [],
"demo_xml" : [],
"data" : [
'view/res_partner.xml',
'view/res_country.xml',
'data/res.country.state.csv',
'data/res.country.state.county.csv',
'data/ir.model.access.csv',
],
"test" : [
],
"auto_install": False,
"application": False,
"installable": True,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
belmiromoreira/nova | nova/tests/unit/api/openstack/compute/test_microversions.py | 42 | 15815 | # Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_serialization import jsonutils
from nova.api.openstack import api_version_request as api_version
from nova import test
from nova.tests.unit.api.openstack import fakes
CONF = cfg.CONF
class MicroversionsTest(test.NoDBTestCase):
header_name = 'X-OpenStack-Nova-API-Version'
def _test_microversions(self, app, req, ret_code, ret_header=None):
req.environ['CONTENT_TYPE'] = "application/json"
res = req.get_response(app)
self.assertEqual(ret_code, res.status_int)
if ret_header:
self.assertEqual(ret_header,
res.headers[self.header_name])
return res
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_no_header(self, mock_namespace):
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions')
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('val', resp_json['param'])
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_return_header(self, mock_namespace):
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions')
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('val', resp_json['param'])
self.assertEqual("2.1", res.headers[self.header_name])
self.assertEqual(self.header_name, res.headers['Vary'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_return_header_non_default(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("2.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions')
req.headers = {self.header_name: '2.3'}
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('val2', resp_json['param'])
self.assertEqual("2.3", res.headers[self.header_name])
self.assertEqual(self.header_name, res.headers['Vary'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_return_header_fault(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.0")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions')
req.headers = {self.header_name: '3.0'}
res = req.get_response(app)
self.assertEqual(400, res.status_int)
self.assertEqual("3.0", res.headers[self.header_name])
self.assertEqual(self.header_name, res.headers['Vary'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def _check_microversion_response(self, url, req_version, resp_param,
mock_namespace, mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest('2.3')
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank(url)
req.headers = {self.header_name: req_version}
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual(resp_param, resp_json['param'])
def test_microversions_with_header(self):
self._check_microversion_response('/v2/fake/microversions',
'2.3', 'val2')
def test_microversions_with_header_exact_match(self):
self._check_microversion_response('/v2/fake/microversions',
'2.2', 'val2')
def test_microversions2_no_2_1_version(self):
self._check_microversion_response('/v2/fake/microversions2',
'2.3', 'controller2_val1')
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions2_later_version(self, mock_namespace, mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.1")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions2')
req.headers = {self.header_name: '3.0'}
res = req.get_response(app)
self.assertEqual(202, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('controller2_val2', resp_json['param'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions2_version_too_high(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.5")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions2')
req.headers = {self.header_name: '3.2'}
res = req.get_response(app)
self.assertEqual(404, res.status_int)
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions2_version_too_low(self, mock_namespace):
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions2')
req.headers = {self.header_name: '2.1'}
res = req.get_response(app)
self.assertEqual(404, res.status_int)
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_global_version_too_high(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.5")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions2')
req.headers = {self.header_name: '3.7'}
res = req.get_response(app)
self.assertEqual(406, res.status_int)
res_json = jsonutils.loads(res.body)
self.assertEqual("Version 3.7 is not supported by the API. "
"Minimum is 2.1 and maximum is 3.5.",
res_json['computeFault']['message'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_schema(self, mock_namespace, mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions3')
req.method = 'POST'
req.headers = {self.header_name: '2.2'}
req.environ['CONTENT_TYPE'] = "application/json"
req.body = jsonutils.dumps({'dummy': {'val': 'foo'}})
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('create_val1', resp_json['param'])
self.assertEqual("2.2", res.headers[self.header_name])
self.assertEqual(self.header_name, res.headers['Vary'])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_schema_fail(self, mock_namespace, mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions3')
req.method = 'POST'
req.headers = {self.header_name: '2.2'}
req.environ['CONTENT_TYPE'] = "application/json"
req.body = jsonutils.dumps({'dummy': {'invalid_param': 'foo'}})
res = req.get_response(app)
self.assertEqual(400, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertTrue(resp_json['badRequest']['message'].startswith(
"Invalid input for field/attribute dummy."))
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_schema_out_of_version_check(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions3/1')
req.method = 'PUT'
req.headers = {self.header_name: '2.2'}
req.body = jsonutils.dumps({'dummy': {'inv_val': 'foo'}})
req.environ['CONTENT_TYPE'] = "application/json"
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('update_val1', resp_json['param'])
self.assertEqual("2.2", res.headers[self.header_name])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_microversions_schema_second_version(self, mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("3.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions3/1')
req.headers = {self.header_name: '2.10'}
req.environ['CONTENT_TYPE'] = "application/json"
req.method = 'PUT'
req.body = jsonutils.dumps({'dummy': {'val2': 'foo'}})
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual('update_val1', resp_json['param'])
self.assertEqual("2.10", res.headers[self.header_name])
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def _test_microversions_inner_function(self, version, expected_resp,
mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("2.2")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions4')
req.headers = {self.header_name: version}
req.environ['CONTENT_TYPE'] = "application/json"
req.method = 'POST'
res = req.get_response(app)
self.assertEqual(200, res.status_int)
resp_json = jsonutils.loads(res.body)
self.assertEqual(expected_resp, resp_json['param'])
self.assertEqual(version, res.headers[self.header_name])
def test_microversions_inner_function_v22(self):
self._test_microversions_inner_function('2.2', 'controller4_val2')
def test_microversions_inner_function_v21(self):
self._test_microversions_inner_function('2.1', 'controller4_val1')
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def test_with_extends_decorator(self, mock_namespace, mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest('2.4')
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions5/item')
req.headers = {'X-OpenStack-Nova-API-Version': '2.4'}
res = req.get_response(app)
self.assertEqual(200, res.status_int)
expected_res = {
"extend_ctrlr2": "val_2",
"extend_ctrlr1": "val_1",
"base_param": "base_val"}
resp_json = jsonutils.loads(res.body)
for param in resp_json:
self.assertIn(param, expected_res)
self.assertEqual(expected_res[param], resp_json[param])
self.assertEqual(3, len(resp_json))
@mock.patch("nova.api.openstack.api_version_request.max_api_version")
@mock.patch("nova.api.openstack.APIRouterV21.api_extension_namespace",
return_value='nova.api.v3.test_extensions')
def _test_microversions_actions(self, ret_code, ret_header, req_header,
mock_namespace,
mock_maxver):
mock_maxver.return_value = api_version.APIVersionRequest("2.3")
app = fakes.wsgi_app_v21(init_only='test-microversions')
req = fakes.HTTPRequest.blank('/v2/fake/microversions3/1/action')
if req_header:
req.headers = {self.header_name: req_header}
req.method = 'POST'
req.body = jsonutils.dumps({'foo': None})
res = self._test_microversions(app, req, ret_code,
ret_header=ret_header)
if ret_code == 202:
resp_json = jsonutils.loads(res.body)
self.assertEqual({'foo': 'bar'}, resp_json)
def test_microversions_actions(self):
self._test_microversions_actions(202, "2.1", "2.1")
def test_microversions_actions_too_high(self):
self._test_microversions_actions(404, "2.3", "2.3")
def test_microversions_actions_no_header(self):
self._test_microversions_actions(202, "2.1", None)
| apache-2.0 |
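The microversion tests above depend on numeric, not lexicographic, ordering of version strings: the second-version schema test sends `2.10`, which must compare as newer than `2.2`. A minimal sketch of that comparison and of the min/max range check behind the 406 response; this is an illustration, not nova's actual `APIVersionRequest` class.

```python
# Sketch of microversion comparison as exercised by the tests above:
# versions compare numerically per component, so "2.10" > "2.2".

def parse_version(ver):
    """Parse 'X.Y' into an integer tuple so ordering is numeric."""
    major, minor = ver.split('.')
    return (int(major), int(minor))

def supports(requested, minimum, maximum):
    """Check whether a requested microversion falls in [minimum, maximum]."""
    req = parse_version(requested)
    return parse_version(minimum) <= req <= parse_version(maximum)

print(parse_version("2.10") > parse_version("2.2"))  # True
print(supports("3.7", "2.1", "3.5"))  # False, like the 406 test case
```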
synopat/pyload | module/plugins/hoster/WrzucTo.py | 8 | 2068 | # -*- coding: utf-8 -*-
import re
import pycurl
from ..internal.SimpleHoster import SimpleHoster
class WrzucTo(SimpleHoster):
__name__ = "WrzucTo"
__type__ = "hoster"
__version__ = "0.09"
__status__ = "testing"
__pattern__ = r'http://(?:www\.)?wrzuc\.to/(\w+(\.wt|\.html)|(\w+/?linki/\w+))'
__config__ = [("activated", "bool", "Activated", True),
("use_premium", "bool", "Use premium account if available", True),
("fallback", "bool",
"Fallback to free download if premium fails", True),
("chk_filesize", "bool", "Check file size", True),
("max_wait", "int", "Reconnect if waiting time is greater than minutes", 10)]
__description__ = """Wrzuc.to hoster plugin"""
__license__ = "GPLv3"
__authors__ = [("zoidberg", "zoidberg@mujmail.cz")]
NAME_PATTERN = r'id="file_info">\s*<strong>(?P<N>.*?)</strong>'
SIZE_PATTERN = r'class="info">\s*<tr>\s*<td>(?P<S>.*?)</td>'
COOKIES = [("wrzuc.to", "language", "en")]
def setup(self):
self.multiDL = True
def handle_free(self, pyfile):
data = dict(re.findall(r'(md5|file): "(.*?)"', self.data))
if len(data) != 2:
self.error(_("No file ID"))
self.req.http.c.setopt(
pycurl.HTTPHEADER,
["X-Requested-With: XMLHttpRequest"])
self.req.http.lastURL = pyfile.url
self.load(
"http://www.wrzuc.to/ajax/server/prepair",
post={
'md5': data['md5']})
self.req.http.lastURL = pyfile.url
self.data = self.load(
"http://www.wrzuc.to/ajax/server/download_link",
post={
'file': data['file']})
data.update(
re.findall(
r'"(download_link|server_id)":"(.*?)"',
self.data))
if len(data) != 4:
self.error(_("No download URL"))
self.link = "http://%s.wrzuc.to/pobierz/%s" % (
data['server_id'], data['download_link'])
| gpl-3.0 |
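`WrzucTo.handle_free` above builds a dict from paired regex captures and then checks its length to detect missing fields. A standalone sketch of that extraction pattern; the sample page body is invented for illustration.

```python
import re

# Sketch of the key/value extraction used in WrzucTo.handle_free:
# re.findall with two capture groups yields (key, value) tuples,
# which dict() turns into a lookup table; len() then verifies that
# both expected keys were present on the page.
sample = 'var x = { md5: "abc123", file: "f42" };'

data = dict(re.findall(r'(md5|file): "(.*?)"', sample))
if len(data) != 2:
    raise ValueError("No file ID")

print(data["md5"], data["file"])  # abc123 f42
```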
googleapis/googleapis-gen | google/cloud/videointelligence/v1p3beta1/videointelligence-v1p3beta1-py/google/cloud/videointelligence_v1p3beta1/types/video_intelligence.py | 1 | 66604 | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
from google.protobuf import duration_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
from google.rpc import status_pb2 # type: ignore
__protobuf__ = proto.module(
package='google.cloud.videointelligence.v1p3beta1',
manifest={
'LabelDetectionMode',
'Likelihood',
'StreamingFeature',
'Feature',
'AnnotateVideoRequest',
'VideoContext',
'LabelDetectionConfig',
'ShotChangeDetectionConfig',
'ObjectTrackingConfig',
'ExplicitContentDetectionConfig',
'FaceDetectionConfig',
'PersonDetectionConfig',
'TextDetectionConfig',
'VideoSegment',
'LabelSegment',
'LabelFrame',
'Entity',
'LabelAnnotation',
'ExplicitContentFrame',
'ExplicitContentAnnotation',
'NormalizedBoundingBox',
'TimestampedObject',
'Track',
'DetectedAttribute',
'Celebrity',
'CelebrityTrack',
'CelebrityRecognitionAnnotation',
'DetectedLandmark',
'FaceDetectionAnnotation',
'PersonDetectionAnnotation',
'VideoAnnotationResults',
'AnnotateVideoResponse',
'VideoAnnotationProgress',
'AnnotateVideoProgress',
'SpeechTranscriptionConfig',
'SpeechContext',
'SpeechTranscription',
'SpeechRecognitionAlternative',
'WordInfo',
'NormalizedVertex',
'NormalizedBoundingPoly',
'TextSegment',
'TextFrame',
'TextAnnotation',
'ObjectTrackingFrame',
'ObjectTrackingAnnotation',
'LogoRecognitionAnnotation',
'StreamingAnnotateVideoRequest',
'StreamingVideoConfig',
'StreamingAnnotateVideoResponse',
'StreamingVideoAnnotationResults',
'StreamingShotChangeDetectionConfig',
'StreamingLabelDetectionConfig',
'StreamingExplicitContentDetectionConfig',
'StreamingObjectTrackingConfig',
'StreamingAutomlActionRecognitionConfig',
'StreamingAutomlClassificationConfig',
'StreamingAutomlObjectTrackingConfig',
'StreamingStorageConfig',
},
)
class LabelDetectionMode(proto.Enum):
r"""Label detection mode."""
LABEL_DETECTION_MODE_UNSPECIFIED = 0
SHOT_MODE = 1
FRAME_MODE = 2
SHOT_AND_FRAME_MODE = 3
class Likelihood(proto.Enum):
r"""Bucketized representation of likelihood."""
LIKELIHOOD_UNSPECIFIED = 0
VERY_UNLIKELY = 1
UNLIKELY = 2
POSSIBLE = 3
LIKELY = 4
VERY_LIKELY = 5
class StreamingFeature(proto.Enum):
r"""Streaming video annotation feature."""
STREAMING_FEATURE_UNSPECIFIED = 0
STREAMING_LABEL_DETECTION = 1
STREAMING_SHOT_CHANGE_DETECTION = 2
STREAMING_EXPLICIT_CONTENT_DETECTION = 3
STREAMING_OBJECT_TRACKING = 4
STREAMING_AUTOML_ACTION_RECOGNITION = 23
STREAMING_AUTOML_CLASSIFICATION = 21
STREAMING_AUTOML_OBJECT_TRACKING = 22
class Feature(proto.Enum):
r"""Video annotation feature."""
FEATURE_UNSPECIFIED = 0
LABEL_DETECTION = 1
SHOT_CHANGE_DETECTION = 2
EXPLICIT_CONTENT_DETECTION = 3
FACE_DETECTION = 4
SPEECH_TRANSCRIPTION = 6
TEXT_DETECTION = 7
OBJECT_TRACKING = 9
LOGO_RECOGNITION = 12
CELEBRITY_RECOGNITION = 13
PERSON_DETECTION = 14
class AnnotateVideoRequest(proto.Message):
r"""Video annotation request.
Attributes:
input_uri (str):
Input video location. Currently, only `Cloud
Storage <https://cloud.google.com/storage/>`__ URIs are
supported. URIs must be specified in the following format:
``gs://bucket-id/object-id`` (other URI formats return
[google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]).
For more information, see `Request
URIs <https://cloud.google.com/storage/docs/request-endpoints>`__.
To identify multiple videos, a video URI may include
wildcards in the ``object-id``. Supported wildcards: '*' to
match 0 or more characters; '?' to match 1 character. If
unset, the input video should be embedded in the request as
``input_content``. If set, ``input_content`` must be unset.
input_content (bytes):
The video data bytes. If unset, the input video(s) should be
specified via the ``input_uri``. If set, ``input_uri`` must
be unset.
features (Sequence[google.cloud.videointelligence_v1p3beta1.types.Feature]):
Required. Requested video annotation
features.
video_context (google.cloud.videointelligence_v1p3beta1.types.VideoContext):
            Additional video context and/or feature-
            specific parameters.
output_uri (str):
Optional. Location where the output (in JSON format) should
be stored. Currently, only `Cloud
Storage <https://cloud.google.com/storage/>`__ URIs are
supported. These must be specified in the following format:
``gs://bucket-id/object-id`` (other URI formats return
[google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]).
For more information, see `Request
URIs <https://cloud.google.com/storage/docs/request-endpoints>`__.
location_id (str):
Optional. Cloud region where annotation should take place.
Supported cloud regions are: ``us-east1``, ``us-west1``,
``europe-west1``, ``asia-east1``. If no region is specified,
the region will be determined based on video file location.
"""
input_uri = proto.Field(
proto.STRING,
number=1,
)
input_content = proto.Field(
proto.BYTES,
number=6,
)
features = proto.RepeatedField(
proto.ENUM,
number=2,
enum='Feature',
)
video_context = proto.Field(
proto.MESSAGE,
number=3,
message='VideoContext',
)
output_uri = proto.Field(
proto.STRING,
number=4,
)
location_id = proto.Field(
proto.STRING,
number=5,
)
class VideoContext(proto.Message):
r"""Video context and/or feature-specific parameters.
Attributes:
segments (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoSegment]):
Video segments to annotate. The segments may
overlap and are not required to be contiguous or
span the whole video. If unspecified, each video
is treated as a single segment.
label_detection_config (google.cloud.videointelligence_v1p3beta1.types.LabelDetectionConfig):
Config for LABEL_DETECTION.
shot_change_detection_config (google.cloud.videointelligence_v1p3beta1.types.ShotChangeDetectionConfig):
Config for SHOT_CHANGE_DETECTION.
explicit_content_detection_config (google.cloud.videointelligence_v1p3beta1.types.ExplicitContentDetectionConfig):
Config for EXPLICIT_CONTENT_DETECTION.
face_detection_config (google.cloud.videointelligence_v1p3beta1.types.FaceDetectionConfig):
Config for FACE_DETECTION.
speech_transcription_config (google.cloud.videointelligence_v1p3beta1.types.SpeechTranscriptionConfig):
Config for SPEECH_TRANSCRIPTION.
text_detection_config (google.cloud.videointelligence_v1p3beta1.types.TextDetectionConfig):
Config for TEXT_DETECTION.
person_detection_config (google.cloud.videointelligence_v1p3beta1.types.PersonDetectionConfig):
Config for PERSON_DETECTION.
object_tracking_config (google.cloud.videointelligence_v1p3beta1.types.ObjectTrackingConfig):
Config for OBJECT_TRACKING.
"""
segments = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='VideoSegment',
)
label_detection_config = proto.Field(
proto.MESSAGE,
number=2,
message='LabelDetectionConfig',
)
shot_change_detection_config = proto.Field(
proto.MESSAGE,
number=3,
message='ShotChangeDetectionConfig',
)
explicit_content_detection_config = proto.Field(
proto.MESSAGE,
number=4,
message='ExplicitContentDetectionConfig',
)
face_detection_config = proto.Field(
proto.MESSAGE,
number=5,
message='FaceDetectionConfig',
)
speech_transcription_config = proto.Field(
proto.MESSAGE,
number=6,
message='SpeechTranscriptionConfig',
)
text_detection_config = proto.Field(
proto.MESSAGE,
number=8,
message='TextDetectionConfig',
)
person_detection_config = proto.Field(
proto.MESSAGE,
number=11,
message='PersonDetectionConfig',
)
object_tracking_config = proto.Field(
proto.MESSAGE,
number=13,
message='ObjectTrackingConfig',
)
class LabelDetectionConfig(proto.Message):
r"""Config for LABEL_DETECTION.
Attributes:
label_detection_mode (google.cloud.videointelligence_v1p3beta1.types.LabelDetectionMode):
What labels should be detected with LABEL_DETECTION, in
addition to video-level labels or segment-level labels. If
unspecified, defaults to ``SHOT_MODE``.
stationary_camera (bool):
Whether the video has been shot from a stationary (i.e.,
non-moving) camera. When set to true, might improve
detection accuracy for moving objects. Should be used with
``SHOT_AND_FRAME_MODE`` enabled.
model (str):
Model to use for label detection.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
frame_confidence_threshold (float):
The confidence threshold we perform filtering on the labels
from frame-level detection. If not set, it is set to 0.4 by
default. The valid range for this threshold is [0.1, 0.9].
Any value set outside of this range will be clipped. Note:
For best results, follow the default threshold. We will
            update the default threshold every time we release a new
model.
video_confidence_threshold (float):
The confidence threshold we perform filtering on the labels
from video-level and shot-level detections. If not set, it's
set to 0.3 by default. The valid range for this threshold is
[0.1, 0.9]. Any value set outside of this range will be
clipped. Note: For best results, follow the default
            threshold. We will update the default threshold every time
            we release a new model.
"""
label_detection_mode = proto.Field(
proto.ENUM,
number=1,
enum='LabelDetectionMode',
)
stationary_camera = proto.Field(
proto.BOOL,
number=2,
)
model = proto.Field(
proto.STRING,
number=3,
)
frame_confidence_threshold = proto.Field(
proto.FLOAT,
number=4,
)
video_confidence_threshold = proto.Field(
proto.FLOAT,
number=5,
)
class ShotChangeDetectionConfig(proto.Message):
r"""Config for SHOT_CHANGE_DETECTION.
Attributes:
model (str):
Model to use for shot change detection.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
"""
model = proto.Field(
proto.STRING,
number=1,
)
class ObjectTrackingConfig(proto.Message):
r"""Config for OBJECT_TRACKING.
Attributes:
model (str):
Model to use for object tracking.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
"""
model = proto.Field(
proto.STRING,
number=1,
)
class ExplicitContentDetectionConfig(proto.Message):
r"""Config for EXPLICIT_CONTENT_DETECTION.
Attributes:
model (str):
Model to use for explicit content detection.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
"""
model = proto.Field(
proto.STRING,
number=1,
)
class FaceDetectionConfig(proto.Message):
r"""Config for FACE_DETECTION.
Attributes:
model (str):
Model to use for face detection.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
include_bounding_boxes (bool):
Whether bounding boxes are included in the
face annotation output.
include_attributes (bool):
Whether to enable face attributes detection, such as
            glasses, dark_glasses, mouth_open, etc. Ignored if
'include_bounding_boxes' is set to false.
"""
model = proto.Field(
proto.STRING,
number=1,
)
include_bounding_boxes = proto.Field(
proto.BOOL,
number=2,
)
include_attributes = proto.Field(
proto.BOOL,
number=5,
)
class PersonDetectionConfig(proto.Message):
r"""Config for PERSON_DETECTION.
Attributes:
include_bounding_boxes (bool):
Whether bounding boxes are included in the
person detection annotation output.
include_pose_landmarks (bool):
Whether to enable pose landmarks detection. Ignored if
'include_bounding_boxes' is set to false.
include_attributes (bool):
Whether to enable person attributes detection, such as cloth
color (black, blue, etc), type (coat, dress, etc), pattern
(plain, floral, etc), hair, etc. Ignored if
'include_bounding_boxes' is set to false.
"""
include_bounding_boxes = proto.Field(
proto.BOOL,
number=1,
)
include_pose_landmarks = proto.Field(
proto.BOOL,
number=2,
)
include_attributes = proto.Field(
proto.BOOL,
number=3,
)
class TextDetectionConfig(proto.Message):
r"""Config for TEXT_DETECTION.
Attributes:
language_hints (Sequence[str]):
Language hint can be specified if the
language to be detected is known a priori. It
can increase the accuracy of the detection.
Language hint must be language code in BCP-47
format.
Automatic language detection is performed if no
hint is provided.
model (str):
Model to use for text detection.
Supported values: "builtin/stable" (the default
if unset) and "builtin/latest".
"""
language_hints = proto.RepeatedField(
proto.STRING,
number=1,
)
model = proto.Field(
proto.STRING,
number=2,
)
class VideoSegment(proto.Message):
r"""Video segment.
Attributes:
start_time_offset (google.protobuf.duration_pb2.Duration):
Time-offset, relative to the beginning of the
video, corresponding to the start of the segment
(inclusive).
end_time_offset (google.protobuf.duration_pb2.Duration):
Time-offset, relative to the beginning of the
video, corresponding to the end of the segment
(inclusive).
"""
start_time_offset = proto.Field(
proto.MESSAGE,
number=1,
message=duration_pb2.Duration,
)
end_time_offset = proto.Field(
proto.MESSAGE,
number=2,
message=duration_pb2.Duration,
)
class LabelSegment(proto.Message):
r"""Video segment level annotation results for label detection.
Attributes:
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Video segment where a label was detected.
confidence (float):
Confidence that the label is accurate. Range: [0, 1].
"""
segment = proto.Field(
proto.MESSAGE,
number=1,
message='VideoSegment',
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
class LabelFrame(proto.Message):
r"""Video frame level annotation results for label detection.
Attributes:
time_offset (google.protobuf.duration_pb2.Duration):
Time-offset, relative to the beginning of the
video, corresponding to the video frame for this
location.
confidence (float):
Confidence that the label is accurate. Range: [0, 1].
"""
time_offset = proto.Field(
proto.MESSAGE,
number=1,
message=duration_pb2.Duration,
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
class Entity(proto.Message):
r"""Detected entity from video analysis.
Attributes:
entity_id (str):
Opaque entity ID. Some IDs may be available in `Google
Knowledge Graph Search
API <https://developers.google.com/knowledge-graph/>`__.
description (str):
Textual description, e.g., ``Fixed-gear bicycle``.
language_code (str):
Language code for ``description`` in BCP-47 format.
"""
entity_id = proto.Field(
proto.STRING,
number=1,
)
description = proto.Field(
proto.STRING,
number=2,
)
language_code = proto.Field(
proto.STRING,
number=3,
)
class LabelAnnotation(proto.Message):
r"""Label annotation.
Attributes:
entity (google.cloud.videointelligence_v1p3beta1.types.Entity):
Detected entity.
category_entities (Sequence[google.cloud.videointelligence_v1p3beta1.types.Entity]):
Common categories for the detected entity. For example, when
the label is ``Terrier``, the category is likely ``dog``.
And in some cases there might be more than one categories
e.g., ``Terrier`` could also be a ``pet``.
segments (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelSegment]):
All video segments where a label was
detected.
frames (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelFrame]):
All video frames where a label was detected.
"""
entity = proto.Field(
proto.MESSAGE,
number=1,
message='Entity',
)
category_entities = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='Entity',
)
segments = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='LabelSegment',
)
frames = proto.RepeatedField(
proto.MESSAGE,
number=4,
message='LabelFrame',
)
class ExplicitContentFrame(proto.Message):
r"""Video frame level annotation results for explicit content.
Attributes:
time_offset (google.protobuf.duration_pb2.Duration):
Time-offset, relative to the beginning of the
video, corresponding to the video frame for this
location.
pornography_likelihood (google.cloud.videointelligence_v1p3beta1.types.Likelihood):
            Likelihood of the pornography content.
"""
time_offset = proto.Field(
proto.MESSAGE,
number=1,
message=duration_pb2.Duration,
)
pornography_likelihood = proto.Field(
proto.ENUM,
number=2,
enum='Likelihood',
)
class ExplicitContentAnnotation(proto.Message):
r"""Explicit content annotation (based on per-frame visual
signals only). If no explicit content has been detected in a
frame, no annotations are present for that frame.
Attributes:
frames (Sequence[google.cloud.videointelligence_v1p3beta1.types.ExplicitContentFrame]):
All video frames where explicit content was
detected.
"""
frames = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='ExplicitContentFrame',
)
class NormalizedBoundingBox(proto.Message):
r"""Normalized bounding box. The normalized vertex coordinates are
relative to the original image. Range: [0, 1].
Attributes:
left (float):
Left X coordinate.
top (float):
Top Y coordinate.
right (float):
Right X coordinate.
bottom (float):
Bottom Y coordinate.
"""
left = proto.Field(
proto.FLOAT,
number=1,
)
top = proto.Field(
proto.FLOAT,
number=2,
)
right = proto.Field(
proto.FLOAT,
number=3,
)
bottom = proto.Field(
proto.FLOAT,
number=4,
)
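# The NormalizedBoundingBox docstring above notes that coordinates are in
# [0, 1] relative to the original image. The following is an illustration
# only (not part of the generated client): converting a normalized box
# into absolute pixel coordinates for a frame of known size.

```python
# Illustration: scale normalized box coordinates (range [0, 1]) to
# integer pixel coordinates for a frame of the given width and height.

def to_pixels(left, top, right, bottom, width, height):
    """Scale normalized NormalizedBoundingBox-style coords to pixels."""
    return (round(left * width), round(top * height),
            round(right * width), round(bottom * height))

print(to_pixels(0.25, 0.1, 0.75, 0.9, 1920, 1080))  # (480, 108, 1440, 972)
```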
class TimestampedObject(proto.Message):
    r"""For tracking-related features. An object at time_offset with
attributes, and located with normalized_bounding_box.
Attributes:
normalized_bounding_box (google.cloud.videointelligence_v1p3beta1.types.NormalizedBoundingBox):
Normalized Bounding box in a frame, where the
object is located.
time_offset (google.protobuf.duration_pb2.Duration):
Time-offset, relative to the beginning of the
video, corresponding to the video frame for this
object.
attributes (Sequence[google.cloud.videointelligence_v1p3beta1.types.DetectedAttribute]):
Optional. The attributes of the object in the
bounding box.
landmarks (Sequence[google.cloud.videointelligence_v1p3beta1.types.DetectedLandmark]):
Optional. The detected landmarks.
"""
normalized_bounding_box = proto.Field(
proto.MESSAGE,
number=1,
message='NormalizedBoundingBox',
)
time_offset = proto.Field(
proto.MESSAGE,
number=2,
message=duration_pb2.Duration,
)
attributes = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='DetectedAttribute',
)
landmarks = proto.RepeatedField(
proto.MESSAGE,
number=4,
message='DetectedLandmark',
)
class Track(proto.Message):
r"""A track of an object instance.
Attributes:
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Video segment of a track.
timestamped_objects (Sequence[google.cloud.videointelligence_v1p3beta1.types.TimestampedObject]):
The object with timestamp and attributes per
frame in the track.
attributes (Sequence[google.cloud.videointelligence_v1p3beta1.types.DetectedAttribute]):
Optional. Attributes in the track level.
confidence (float):
Optional. The confidence score of the tracked
object.
"""
segment = proto.Field(
proto.MESSAGE,
number=1,
message='VideoSegment',
)
timestamped_objects = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='TimestampedObject',
)
attributes = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='DetectedAttribute',
)
confidence = proto.Field(
proto.FLOAT,
number=4,
)
class DetectedAttribute(proto.Message):
r"""A generic detected attribute represented by name in string
format.
Attributes:
name (str):
The name of the attribute, for example, glasses,
dark_glasses, mouth_open. A full list of supported type
names will be provided in the document.
confidence (float):
Detected attribute confidence. Range [0, 1].
value (str):
Text value of the detection result. For
example, the value for "HairColor" can be
"black", "blonde", etc.
"""
name = proto.Field(
proto.STRING,
number=1,
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
value = proto.Field(
proto.STRING,
number=3,
)
class Celebrity(proto.Message):
r"""Celebrity definition.
Attributes:
name (str):
            The resource name of the celebrity. The format
            ``video-intelligence/kg-mid`` indicates a celebrity from
            the preloaded gallery. kg-mid is the id in the Google
            Knowledge Graph, which is unique for the celebrity.
display_name (str):
The celebrity name.
description (str):
Textual description of additional information
about the celebrity, if applicable.
"""
name = proto.Field(
proto.STRING,
number=1,
)
display_name = proto.Field(
proto.STRING,
number=2,
)
description = proto.Field(
proto.STRING,
number=3,
)
class CelebrityTrack(proto.Message):
r"""The annotation result of a celebrity face track.
RecognizedCelebrity field could be empty if the face track does
not have any matched celebrities.
Attributes:
celebrities (Sequence[google.cloud.videointelligence_v1p3beta1.types.CelebrityTrack.RecognizedCelebrity]):
Top N match of the celebrities for the face
in this track.
face_track (google.cloud.videointelligence_v1p3beta1.types.Track):
A track of a person's face.
"""
class RecognizedCelebrity(proto.Message):
r"""The recognized celebrity with confidence score.
Attributes:
celebrity (google.cloud.videointelligence_v1p3beta1.types.Celebrity):
The recognized celebrity.
confidence (float):
Recognition confidence. Range [0, 1].
"""
celebrity = proto.Field(
proto.MESSAGE,
number=1,
message='Celebrity',
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
celebrities = proto.RepeatedField(
proto.MESSAGE,
number=1,
message=RecognizedCelebrity,
)
face_track = proto.Field(
proto.MESSAGE,
number=3,
message='Track',
)
class CelebrityRecognitionAnnotation(proto.Message):
r"""Celebrity recognition annotation per video.
Attributes:
celebrity_tracks (Sequence[google.cloud.videointelligence_v1p3beta1.types.CelebrityTrack]):
The tracks detected from the input video,
including recognized celebrities and other
detected faces in the video.
"""
celebrity_tracks = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='CelebrityTrack',
)
class DetectedLandmark(proto.Message):
r"""A generic detected landmark represented by name in string
format and a 2D location.
Attributes:
name (str):
The name of this landmark, for example, left_hand,
right_shoulder.
point (google.cloud.videointelligence_v1p3beta1.types.NormalizedVertex):
The 2D point of the detected landmark using
            the normalized image coordinate system. The
normalized coordinates have the range from 0 to
1.
confidence (float):
The confidence score of the detected landmark. Range [0, 1].
"""
name = proto.Field(
proto.STRING,
number=1,
)
point = proto.Field(
proto.MESSAGE,
number=2,
message='NormalizedVertex',
)
confidence = proto.Field(
proto.FLOAT,
number=3,
)
class FaceDetectionAnnotation(proto.Message):
r"""Face detection annotation.
Attributes:
tracks (Sequence[google.cloud.videointelligence_v1p3beta1.types.Track]):
The face tracks with attributes.
thumbnail (bytes):
The thumbnail of a person's face.
"""
tracks = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='Track',
)
thumbnail = proto.Field(
proto.BYTES,
number=4,
)
class PersonDetectionAnnotation(proto.Message):
r"""Person detection annotation per video.
Attributes:
tracks (Sequence[google.cloud.videointelligence_v1p3beta1.types.Track]):
The detected tracks of a person.
"""
tracks = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='Track',
)
class VideoAnnotationResults(proto.Message):
r"""Annotation results for a single video.
Attributes:
input_uri (str):
Video file location in `Cloud
Storage <https://cloud.google.com/storage/>`__.
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Video segment on which the annotation is run.
segment_label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Topical label annotations on video level or
user-specified segment level. There is exactly
one element for each unique label.
segment_presence_label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Presence label annotations on video level or user-specified
segment level. There is exactly one element for each unique
label. Compared to the existing topical
``segment_label_annotations``, this field presents more
fine-grained, segment-level labels detected in video content
and is made available only when the client sets
``LabelDetectionConfig.model`` to "builtin/latest" in the
request.
shot_label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Topical label annotations on shot level.
There is exactly one element for each unique
label.
shot_presence_label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Presence label annotations on shot level. There is exactly
one element for each unique label. Compared to the existing
topical ``shot_label_annotations``, this field presents more
fine-grained, shot-level labels detected in video content
and is made available only when the client sets
``LabelDetectionConfig.model`` to "builtin/latest" in the
request.
frame_label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Label annotations on frame level.
There is exactly one element for each unique
label.
face_detection_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.FaceDetectionAnnotation]):
Face detection annotations.
shot_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoSegment]):
Shot annotations. Each shot is represented as
a video segment.
explicit_annotation (google.cloud.videointelligence_v1p3beta1.types.ExplicitContentAnnotation):
Explicit content annotation.
speech_transcriptions (Sequence[google.cloud.videointelligence_v1p3beta1.types.SpeechTranscription]):
Speech transcription.
text_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.TextAnnotation]):
OCR text detection and tracking.
Annotations for list of detected text snippets.
Each will have list of frame information
associated with it.
object_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.ObjectTrackingAnnotation]):
Annotations for list of objects detected and
tracked in video.
logo_recognition_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LogoRecognitionAnnotation]):
Annotations for list of logos detected,
tracked and recognized in video.
person_detection_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.PersonDetectionAnnotation]):
Person detection annotations.
celebrity_recognition_annotations (google.cloud.videointelligence_v1p3beta1.types.CelebrityRecognitionAnnotation):
Celebrity recognition annotations.
error (google.rpc.status_pb2.Status):
If set, indicates an error. Note that for a single
``AnnotateVideoRequest`` some videos may succeed and some
may fail.
"""
input_uri = proto.Field(
proto.STRING,
number=1,
)
segment = proto.Field(
proto.MESSAGE,
number=10,
message='VideoSegment',
)
segment_label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='LabelAnnotation',
)
segment_presence_label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=23,
message='LabelAnnotation',
)
shot_label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='LabelAnnotation',
)
shot_presence_label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=24,
message='LabelAnnotation',
)
frame_label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=4,
message='LabelAnnotation',
)
face_detection_annotations = proto.RepeatedField(
proto.MESSAGE,
number=13,
message='FaceDetectionAnnotation',
)
shot_annotations = proto.RepeatedField(
proto.MESSAGE,
number=6,
message='VideoSegment',
)
explicit_annotation = proto.Field(
proto.MESSAGE,
number=7,
message='ExplicitContentAnnotation',
)
speech_transcriptions = proto.RepeatedField(
proto.MESSAGE,
number=11,
message='SpeechTranscription',
)
text_annotations = proto.RepeatedField(
proto.MESSAGE,
number=12,
message='TextAnnotation',
)
object_annotations = proto.RepeatedField(
proto.MESSAGE,
number=14,
message='ObjectTrackingAnnotation',
)
logo_recognition_annotations = proto.RepeatedField(
proto.MESSAGE,
number=19,
message='LogoRecognitionAnnotation',
)
person_detection_annotations = proto.RepeatedField(
proto.MESSAGE,
number=20,
message='PersonDetectionAnnotation',
)
celebrity_recognition_annotations = proto.Field(
proto.MESSAGE,
number=21,
message='CelebrityRecognitionAnnotation',
)
error = proto.Field(
proto.MESSAGE,
number=9,
message=status_pb2.Status,
)
class AnnotateVideoResponse(proto.Message):
r"""Video annotation response. Included in the ``response`` field of the
``Operation`` returned by the ``GetOperation`` call of the
``google::longrunning::Operations`` service.
Attributes:
annotation_results (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoAnnotationResults]):
Annotation results for all videos specified in
``AnnotateVideoRequest``.
"""
annotation_results = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='VideoAnnotationResults',
)
class VideoAnnotationProgress(proto.Message):
r"""Annotation progress for a single video.
Attributes:
input_uri (str):
Video file location in `Cloud
Storage <https://cloud.google.com/storage/>`__.
progress_percent (int):
Approximate percentage processed thus far.
Guaranteed to be 100 when fully processed.
start_time (google.protobuf.timestamp_pb2.Timestamp):
Time when the request was received.
update_time (google.protobuf.timestamp_pb2.Timestamp):
Time of the most recent update.
feature (google.cloud.videointelligence_v1p3beta1.types.Feature):
Specifies which feature is being tracked if
the request contains more than one feature.
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Specifies which segment is being tracked if
the request contains more than one segment.
"""
input_uri = proto.Field(
proto.STRING,
number=1,
)
progress_percent = proto.Field(
proto.INT32,
number=2,
)
start_time = proto.Field(
proto.MESSAGE,
number=3,
message=timestamp_pb2.Timestamp,
)
update_time = proto.Field(
proto.MESSAGE,
number=4,
message=timestamp_pb2.Timestamp,
)
feature = proto.Field(
proto.ENUM,
number=5,
enum='Feature',
)
segment = proto.Field(
proto.MESSAGE,
number=6,
message='VideoSegment',
)
class AnnotateVideoProgress(proto.Message):
r"""Video annotation progress. Included in the ``metadata`` field of the
``Operation`` returned by the ``GetOperation`` call of the
``google::longrunning::Operations`` service.
Attributes:
annotation_progress (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoAnnotationProgress]):
Progress metadata for all videos specified in
``AnnotateVideoRequest``.
"""
annotation_progress = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='VideoAnnotationProgress',
)
class SpeechTranscriptionConfig(proto.Message):
r"""Config for SPEECH_TRANSCRIPTION.
Attributes:
language_code (str):
            Required. The language of the supplied audio as a
`BCP-47 <https://www.rfc-editor.org/rfc/bcp/bcp47.txt>`__
language tag. Example: "en-US". See `Language
Support <https://cloud.google.com/speech/docs/languages>`__
for a list of the currently supported language codes.
max_alternatives (int):
Optional. Maximum number of recognition hypotheses to be
returned. Specifically, the maximum number of
``SpeechRecognitionAlternative`` messages within each
``SpeechTranscription``. The server may return fewer than
``max_alternatives``. Valid values are ``0``-``30``. A value
of ``0`` or ``1`` will return a maximum of one. If omitted,
will return a maximum of one.
filter_profanity (bool):
Optional. If set to ``true``, the server will attempt to
filter out profanities, replacing all but the initial
character in each filtered word with asterisks, e.g. "f***".
If set to ``false`` or omitted, profanities won't be
filtered out.
speech_contexts (Sequence[google.cloud.videointelligence_v1p3beta1.types.SpeechContext]):
Optional. A means to provide context to
assist the speech recognition.
enable_automatic_punctuation (bool):
Optional. If 'true', adds punctuation to
recognition result hypotheses. This feature is
only available in select languages. Setting this
for requests in other languages has no effect at
all. The default 'false' value does not add
punctuation to result hypotheses. NOTE: "This is
currently offered as an experimental service,
complimentary to all users. In the future this
may be exclusively available as a premium
            feature."
audio_tracks (Sequence[int]):
Optional. For file formats, such as MXF or
MKV, supporting multiple audio tracks, specify
up to two tracks. Default: track 0.
enable_speaker_diarization (bool):
Optional. If 'true', enables speaker detection for each
recognized word in the top alternative of the recognition
result using a speaker_tag provided in the WordInfo. Note:
When this is true, we send all the words from the beginning
of the audio for the top alternative in every consecutive
response. This is done in order to improve our speaker tags
as our models learn to identify the speakers in the
conversation over time.
diarization_speaker_count (int):
Optional. If set, specifies the estimated number of speakers
in the conversation. If not set, defaults to '2'. Ignored
unless enable_speaker_diarization is set to true.
enable_word_confidence (bool):
Optional. If ``true``, the top result includes a list of
words and the confidence for those words. If ``false``, no
word-level confidence information is returned. The default
is ``false``.
"""
language_code = proto.Field(
proto.STRING,
number=1,
)
max_alternatives = proto.Field(
proto.INT32,
number=2,
)
filter_profanity = proto.Field(
proto.BOOL,
number=3,
)
speech_contexts = proto.RepeatedField(
proto.MESSAGE,
number=4,
message='SpeechContext',
)
enable_automatic_punctuation = proto.Field(
proto.BOOL,
number=5,
)
audio_tracks = proto.RepeatedField(
proto.INT32,
number=6,
)
enable_speaker_diarization = proto.Field(
proto.BOOL,
number=7,
)
diarization_speaker_count = proto.Field(
proto.INT32,
number=8,
)
enable_word_confidence = proto.Field(
proto.BOOL,
number=9,
)
class SpeechContext(proto.Message):
r"""Provides "hints" to the speech recognizer to favor specific
words and phrases in the results.
Attributes:
phrases (Sequence[str]):
Optional. A list of strings containing words and phrases
"hints" so that the speech recognition is more likely to
recognize them. This can be used to improve the accuracy for
specific words and phrases, for example, if specific
commands are typically spoken by the user. This can also be
used to add additional words to the vocabulary of the
recognizer. See `usage
limits <https://cloud.google.com/speech/limits#content>`__.
"""
phrases = proto.RepeatedField(
proto.STRING,
number=1,
)
class SpeechTranscription(proto.Message):
r"""A speech recognition result corresponding to a portion of the
audio.
Attributes:
alternatives (Sequence[google.cloud.videointelligence_v1p3beta1.types.SpeechRecognitionAlternative]):
May contain one or more recognition hypotheses (up to the
maximum specified in ``max_alternatives``). These
alternatives are ordered in terms of accuracy, with the top
(first) alternative being the most probable, as ranked by
the recognizer.
language_code (str):
Output only. The
`BCP-47 <https://www.rfc-editor.org/rfc/bcp/bcp47.txt>`__
language tag of the language in this result. This language
code was detected to have the most likelihood of being
spoken in the audio.
"""
alternatives = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='SpeechRecognitionAlternative',
)
language_code = proto.Field(
proto.STRING,
number=2,
)
class SpeechRecognitionAlternative(proto.Message):
r"""Alternative hypotheses (a.k.a. n-best list).
Attributes:
transcript (str):
Transcript text representing the words that
the user spoke.
confidence (float):
Output only. The confidence estimate between 0.0 and 1.0. A
higher number indicates an estimated greater likelihood that
the recognized words are correct. This field is set only for
the top alternative. This field is not guaranteed to be
accurate and users should not rely on it to be always
provided. The default of 0.0 is a sentinel value indicating
``confidence`` was not set.
words (Sequence[google.cloud.videointelligence_v1p3beta1.types.WordInfo]):
Output only. A list of word-specific information for each
recognized word. Note: When ``enable_speaker_diarization``
is set to true, you will see all the words from the
beginning of the audio.
"""
transcript = proto.Field(
proto.STRING,
number=1,
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
words = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='WordInfo',
)
class WordInfo(proto.Message):
r"""Word-specific information for recognized words. Word information is
only included in the response when certain request parameters are
set, such as ``enable_word_time_offsets``.
Attributes:
start_time (google.protobuf.duration_pb2.Duration):
Time offset relative to the beginning of the audio, and
corresponding to the start of the spoken word. This field is
only set if ``enable_word_time_offsets=true`` and only in
the top hypothesis. This is an experimental feature and the
accuracy of the time offset can vary.
end_time (google.protobuf.duration_pb2.Duration):
Time offset relative to the beginning of the audio, and
corresponding to the end of the spoken word. This field is
only set if ``enable_word_time_offsets=true`` and only in
the top hypothesis. This is an experimental feature and the
accuracy of the time offset can vary.
word (str):
The word corresponding to this set of
information.
confidence (float):
Output only. The confidence estimate between 0.0 and 1.0. A
higher number indicates an estimated greater likelihood that
the recognized words are correct. This field is set only for
the top alternative. This field is not guaranteed to be
accurate and users should not rely on it to be always
provided. The default of 0.0 is a sentinel value indicating
``confidence`` was not set.
speaker_tag (int):
Output only. A distinct integer value is assigned for every
speaker within the audio. This field specifies which one of
those speakers was detected to have spoken this word. Value
ranges from 1 up to diarization_speaker_count, and is only
set if speaker diarization is enabled.
"""
start_time = proto.Field(
proto.MESSAGE,
number=1,
message=duration_pb2.Duration,
)
end_time = proto.Field(
proto.MESSAGE,
number=2,
message=duration_pb2.Duration,
)
word = proto.Field(
proto.STRING,
number=3,
)
confidence = proto.Field(
proto.FLOAT,
number=4,
)
speaker_tag = proto.Field(
proto.INT32,
number=5,
)
class NormalizedVertex(proto.Message):
r"""A vertex represents a 2D point in the image.
NOTE: the normalized vertex coordinates are relative to the
original image and range from 0 to 1.
Attributes:
x (float):
X coordinate.
y (float):
Y coordinate.
"""
x = proto.Field(
proto.FLOAT,
number=1,
)
y = proto.Field(
proto.FLOAT,
number=2,
)
class NormalizedBoundingPoly(proto.Message):
r"""Normalized bounding polygon for text (that might not be aligned with
axis). Contains list of the corner points in clockwise order
    starting from the top-left corner. For example, for a rectangular
    bounding box, when the text is horizontal it might look like::

        0----1
        |    |
        3----2

    When it's rotated 180 degrees clockwise around the top-left corner
    it becomes::

        2----3
        |    |
        1----0

    and the vertex order will still be (0, 1, 2, 3). Note that values
    can be less than 0, or greater than 1 due to trigonometric
    calculations for location of the box.
Attributes:
vertices (Sequence[google.cloud.videointelligence_v1p3beta1.types.NormalizedVertex]):
Normalized vertices of the bounding polygon.
"""
vertices = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='NormalizedVertex',
)
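The rotation behavior described in the docstring above can be checked with plain arithmetic, independent of the library: rotating a vertex 180 degrees around a center (cx, cy) maps (x, y) to (2*cx - x, 2*cy - y), the stored vertex order is unchanged, and coordinates can leave the [0, 1] range. A small sketch:

```python
def rotate_180(vertices, cx, cy):
    """Rotate normalized (x, y) vertices 180 degrees around (cx, cy)."""
    return [(2 * cx - x, 2 * cy - y) for (x, y) in vertices]

# Axis-aligned box, vertices listed clockwise from the top-left corner.
box = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.5), (0.25, 0.5)]
rotated = rotate_180(box, 0.25, 0.25)
# Vertex order is preserved; some coordinates now fall outside [0, 1].
print(rotated)  # [(0.25, 0.25), (-0.25, 0.25), (-0.25, 0.0), (0.25, 0.0)]
```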
class TextSegment(proto.Message):
r"""Video segment level annotation results for text detection.
Attributes:
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Video segment where a text snippet was
detected.
confidence (float):
Confidence for the track of detected text. It
is calculated as the highest over all frames
where OCR detected text appears.
frames (Sequence[google.cloud.videointelligence_v1p3beta1.types.TextFrame]):
Information related to the frames where OCR
detected text appears.
"""
segment = proto.Field(
proto.MESSAGE,
number=1,
message='VideoSegment',
)
confidence = proto.Field(
proto.FLOAT,
number=2,
)
frames = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='TextFrame',
)
class TextFrame(proto.Message):
r"""Video frame level annotation results for text annotation
(OCR). Contains information regarding timestamp and bounding box
locations for the frames containing detected OCR text snippets.
Attributes:
rotated_bounding_box (google.cloud.videointelligence_v1p3beta1.types.NormalizedBoundingPoly):
Bounding polygon of the detected text for
this frame.
time_offset (google.protobuf.duration_pb2.Duration):
Timestamp of this frame.
"""
rotated_bounding_box = proto.Field(
proto.MESSAGE,
number=1,
message='NormalizedBoundingPoly',
)
time_offset = proto.Field(
proto.MESSAGE,
number=2,
message=duration_pb2.Duration,
)
class TextAnnotation(proto.Message):
r"""Annotations related to one detected OCR text snippet. This
will contain the corresponding text, confidence value, and frame
level information for each detection.
Attributes:
text (str):
The detected text.
segments (Sequence[google.cloud.videointelligence_v1p3beta1.types.TextSegment]):
All video segments where OCR detected text
appears.
"""
text = proto.Field(
proto.STRING,
number=1,
)
segments = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='TextSegment',
)
class ObjectTrackingFrame(proto.Message):
r"""Video frame level annotations for object detection and
tracking. This field stores per frame location, time offset, and
confidence.
Attributes:
normalized_bounding_box (google.cloud.videointelligence_v1p3beta1.types.NormalizedBoundingBox):
The normalized bounding box location of this
object track for the frame.
time_offset (google.protobuf.duration_pb2.Duration):
The timestamp of the frame in microseconds.
"""
normalized_bounding_box = proto.Field(
proto.MESSAGE,
number=1,
message='NormalizedBoundingBox',
)
time_offset = proto.Field(
proto.MESSAGE,
number=2,
message=duration_pb2.Duration,
)
class ObjectTrackingAnnotation(proto.Message):
r"""Annotations corresponding to one tracked object.
Attributes:
segment (google.cloud.videointelligence_v1p3beta1.types.VideoSegment):
Non-streaming batch mode ONLY.
Each object track corresponds to one video
segment where it appears.
track_id (int):
Streaming mode ONLY. In streaming mode, we do not know the
end time of a tracked object before it is completed. Hence,
there is no VideoSegment info returned. Instead, we provide
a unique identifiable integer track_id so that the customers
can correlate the results of the ongoing
ObjectTrackAnnotation of the same track_id over time.
entity (google.cloud.videointelligence_v1p3beta1.types.Entity):
Entity to specify the object category that
this track is labeled as.
confidence (float):
Object category's labeling confidence of this
track.
frames (Sequence[google.cloud.videointelligence_v1p3beta1.types.ObjectTrackingFrame]):
Information corresponding to all frames where
this object track appears. Non-streaming batch
mode: it may be one or multiple
ObjectTrackingFrame messages in frames.
Streaming mode: it can only be one
ObjectTrackingFrame message in frames.
"""
segment = proto.Field(
proto.MESSAGE,
number=3,
oneof='track_info',
message='VideoSegment',
)
track_id = proto.Field(
proto.INT64,
number=5,
oneof='track_info',
)
entity = proto.Field(
proto.MESSAGE,
number=1,
message='Entity',
)
confidence = proto.Field(
proto.FLOAT,
number=4,
)
frames = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='ObjectTrackingFrame',
)
class LogoRecognitionAnnotation(proto.Message):
r"""Annotation corresponding to one detected, tracked and
recognized logo class.
Attributes:
entity (google.cloud.videointelligence_v1p3beta1.types.Entity):
Entity category information to specify the
logo class that all the logo tracks within this
LogoRecognitionAnnotation are recognized as.
tracks (Sequence[google.cloud.videointelligence_v1p3beta1.types.Track]):
All logo tracks where the recognized logo
appears. Each track corresponds to one logo
instance appearing in consecutive frames.
segments (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoSegment]):
All video segments where the recognized logo
appears. There might be multiple instances of
the same logo class appearing in one
VideoSegment.
"""
entity = proto.Field(
proto.MESSAGE,
number=1,
message='Entity',
)
tracks = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='Track',
)
segments = proto.RepeatedField(
proto.MESSAGE,
number=3,
message='VideoSegment',
)
class StreamingAnnotateVideoRequest(proto.Message):
r"""The top-level message sent by the client for the
``StreamingAnnotateVideo`` method. Multiple
``StreamingAnnotateVideoRequest`` messages are sent. The first
message must only contain a ``StreamingVideoConfig`` message. All
subsequent messages must only contain ``input_content`` data.
Attributes:
video_config (google.cloud.videointelligence_v1p3beta1.types.StreamingVideoConfig):
            Provides information to the annotator, specifying how to
process the request. The first
``AnnotateStreamingVideoRequest`` message must only contain
a ``video_config`` message.
input_content (bytes):
The video data to be annotated. Chunks of video data are
sequentially sent in ``StreamingAnnotateVideoRequest``
messages. Except the initial
``StreamingAnnotateVideoRequest`` message containing only
``video_config``, all subsequent
``AnnotateStreamingVideoRequest`` messages must only contain
``input_content`` field. Note: as with all bytes fields,
protobuffers use a pure binary representation (not base64).
"""
video_config = proto.Field(
proto.MESSAGE,
number=1,
oneof='streaming_request',
message='StreamingVideoConfig',
)
input_content = proto.Field(
proto.BYTES,
number=2,
oneof='streaming_request',
)
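The config-first framing described in the docstring above can be sketched with a plain generator: the first yielded item carries only the config, and every later item carries only a chunk of bytes. This illustrates the message ordering only; the dict shape and names here are hypothetical stand-ins for the real proto messages, and the chunk size is an arbitrary example.

```python
def streaming_requests(video_config, content, chunk_size=3):
    """Yield request dicts in the order the streaming API expects:
    one config-only request first, then content-only requests."""
    yield {"video_config": video_config, "input_content": None}
    for i in range(0, len(content), chunk_size):
        yield {"video_config": None, "input_content": content[i:i + chunk_size]}

requests = list(streaming_requests({"feature": "SHOT_CHANGE"}, b"abcdefgh"))
# First message: config only; remaining messages: raw byte chunks.
print(len(requests))  # 4
```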
class StreamingVideoConfig(proto.Message):
r"""Provides information to the annotator that specifies how to
process the request.
Attributes:
shot_change_detection_config (google.cloud.videointelligence_v1p3beta1.types.StreamingShotChangeDetectionConfig):
Config for STREAMING_SHOT_CHANGE_DETECTION.
label_detection_config (google.cloud.videointelligence_v1p3beta1.types.StreamingLabelDetectionConfig):
Config for STREAMING_LABEL_DETECTION.
explicit_content_detection_config (google.cloud.videointelligence_v1p3beta1.types.StreamingExplicitContentDetectionConfig):
Config for STREAMING_EXPLICIT_CONTENT_DETECTION.
object_tracking_config (google.cloud.videointelligence_v1p3beta1.types.StreamingObjectTrackingConfig):
Config for STREAMING_OBJECT_TRACKING.
automl_action_recognition_config (google.cloud.videointelligence_v1p3beta1.types.StreamingAutomlActionRecognitionConfig):
Config for STREAMING_AUTOML_ACTION_RECOGNITION.
automl_classification_config (google.cloud.videointelligence_v1p3beta1.types.StreamingAutomlClassificationConfig):
Config for STREAMING_AUTOML_CLASSIFICATION.
automl_object_tracking_config (google.cloud.videointelligence_v1p3beta1.types.StreamingAutomlObjectTrackingConfig):
Config for STREAMING_AUTOML_OBJECT_TRACKING.
feature (google.cloud.videointelligence_v1p3beta1.types.StreamingFeature):
Requested annotation feature.
storage_config (google.cloud.videointelligence_v1p3beta1.types.StreamingStorageConfig):
Streaming storage option. By default: storage
is disabled.
"""
shot_change_detection_config = proto.Field(
proto.MESSAGE,
number=2,
oneof='streaming_config',
message='StreamingShotChangeDetectionConfig',
)
label_detection_config = proto.Field(
proto.MESSAGE,
number=3,
oneof='streaming_config',
message='StreamingLabelDetectionConfig',
)
explicit_content_detection_config = proto.Field(
proto.MESSAGE,
number=4,
oneof='streaming_config',
message='StreamingExplicitContentDetectionConfig',
)
object_tracking_config = proto.Field(
proto.MESSAGE,
number=5,
oneof='streaming_config',
message='StreamingObjectTrackingConfig',
)
automl_action_recognition_config = proto.Field(
proto.MESSAGE,
number=23,
oneof='streaming_config',
message='StreamingAutomlActionRecognitionConfig',
)
automl_classification_config = proto.Field(
proto.MESSAGE,
number=21,
oneof='streaming_config',
message='StreamingAutomlClassificationConfig',
)
automl_object_tracking_config = proto.Field(
proto.MESSAGE,
number=22,
oneof='streaming_config',
message='StreamingAutomlObjectTrackingConfig',
)
feature = proto.Field(
proto.ENUM,
number=1,
enum='StreamingFeature',
)
storage_config = proto.Field(
proto.MESSAGE,
number=30,
message='StreamingStorageConfig',
)
class StreamingAnnotateVideoResponse(proto.Message):
r"""``StreamingAnnotateVideoResponse`` is the only message returned to
the client by ``StreamingAnnotateVideo``. A series of zero or more
``StreamingAnnotateVideoResponse`` messages are streamed back to the
client.
Attributes:
error (google.rpc.status_pb2.Status):
If set, returns a [google.rpc.Status][google.rpc.Status]
message that specifies the error for the operation.
annotation_results (google.cloud.videointelligence_v1p3beta1.types.StreamingVideoAnnotationResults):
Streaming annotation results.
annotation_results_uri (str):
            Google Cloud Storage (GCS) URI that stores annotation results
of one streaming session in JSON format. It is the
annotation_result_storage_directory from the request
followed by '/cloud_project_number-session_id'.
"""
error = proto.Field(
proto.MESSAGE,
number=1,
message=status_pb2.Status,
)
annotation_results = proto.Field(
proto.MESSAGE,
number=2,
message='StreamingVideoAnnotationResults',
)
annotation_results_uri = proto.Field(
proto.STRING,
number=3,
)
class StreamingVideoAnnotationResults(proto.Message):
r"""Streaming annotation results corresponding to a portion of
the video that is currently being processed.
Attributes:
shot_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.VideoSegment]):
Shot annotation results. Each shot is
represented as a video segment.
label_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.LabelAnnotation]):
Label annotation results.
explicit_annotation (google.cloud.videointelligence_v1p3beta1.types.ExplicitContentAnnotation):
Explicit content annotation results.
object_annotations (Sequence[google.cloud.videointelligence_v1p3beta1.types.ObjectTrackingAnnotation]):
Object tracking results.
"""
shot_annotations = proto.RepeatedField(
proto.MESSAGE,
number=1,
message='VideoSegment',
)
label_annotations = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='LabelAnnotation',
)
explicit_annotation = proto.Field(
proto.MESSAGE,
number=3,
message='ExplicitContentAnnotation',
)
object_annotations = proto.RepeatedField(
proto.MESSAGE,
number=4,
message='ObjectTrackingAnnotation',
)
class StreamingShotChangeDetectionConfig(proto.Message):
r"""Config for STREAMING_SHOT_CHANGE_DETECTION. """
class StreamingLabelDetectionConfig(proto.Message):
r"""Config for STREAMING_LABEL_DETECTION.
Attributes:
stationary_camera (bool):
Whether the video has been captured from a
stationary (i.e. non-moving) camera. When set to
true, might improve detection accuracy for
moving objects. Default: false.
"""
stationary_camera = proto.Field(
proto.BOOL,
number=1,
)
class StreamingExplicitContentDetectionConfig(proto.Message):
r"""Config for STREAMING_EXPLICIT_CONTENT_DETECTION. """
class StreamingObjectTrackingConfig(proto.Message):
r"""Config for STREAMING_OBJECT_TRACKING. """
class StreamingAutomlActionRecognitionConfig(proto.Message):
r"""Config for STREAMING_AUTOML_ACTION_RECOGNITION.
Attributes:
model_name (str):
Resource name of AutoML model. Format:
``projects/{project_id}/locations/{location_id}/models/{model_id}``
"""
model_name = proto.Field(
proto.STRING,
number=1,
)
class StreamingAutomlClassificationConfig(proto.Message):
r"""Config for STREAMING_AUTOML_CLASSIFICATION.
Attributes:
model_name (str):
Resource name of AutoML model. Format:
``projects/{project_number}/locations/{location_id}/models/{model_id}``
"""
model_name = proto.Field(
proto.STRING,
number=1,
)
class StreamingAutomlObjectTrackingConfig(proto.Message):
r"""Config for STREAMING_AUTOML_OBJECT_TRACKING.
Attributes:
model_name (str):
Resource name of AutoML model. Format:
``projects/{project_id}/locations/{location_id}/models/{model_id}``
"""
model_name = proto.Field(
proto.STRING,
number=1,
)
class StreamingStorageConfig(proto.Message):
r"""Config for streaming storage option.
Attributes:
enable_storage_annotation_result (bool):
Enable streaming storage. Default: false.
annotation_result_storage_directory (str):
Cloud Storage URI to store all annotation results for one
client. Client should specify this field as the top-level
storage directory. Annotation results of different sessions
            will be put into different sub-directories denoted by
            project_name and session_id. All sub-directories will be
            auto-generated by the program and made accessible to the
            client in the response proto. URIs must be specified in the
            following format: ``gs://bucket-id/object-id``. ``bucket-id``
            should be a valid Cloud Storage bucket created by the client,
            and bucket permissions must also be configured properly.
            ``object-id`` can be an arbitrary string that makes sense to
            the client. Other URI formats will return an error and cause
            the Cloud Storage write to fail.
"""
enable_storage_annotation_result = proto.Field(
proto.BOOL,
number=1,
)
annotation_result_storage_directory = proto.Field(
proto.STRING,
number=3,
)
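The docstring above fixes the layout of per-session result URIs. A minimal sketch of that composition, using hypothetical bucket, project number, and session id values:

```python
# Hypothetical values; the real ones come from the request and the service.
storage_directory = "gs://example-bucket/annotation-results"
project_number = "123456789"
session_id = "abcdef"

# Per the docstring: annotation_result_storage_directory followed by
# '/cloud_project_number-session_id'.
annotation_results_uri = "%s/%s-%s" % (storage_directory, project_number, session_id)
print(annotation_results_uri)
```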
__all__ = tuple(sorted(__protobuf__.manifest))
| apache-2.0 |
zhuzhezhe/weibobash | env/lib/python3.4/site-packages/setuptools/package_index.py | 301 | 38760 | """PyPI and direct package downloading"""
import sys
import os
import re
import shutil
import socket
import base64
import hashlib
from functools import wraps
from pkg_resources import (
CHECKOUT_DIST, Distribution, BINARY_DIST, normalize_path, SOURCE_DIST,
require, Environment, find_distributions, safe_name, safe_version,
to_filename, Requirement, DEVELOP_DIST,
)
from setuptools import ssl_support
from distutils import log
from distutils.errors import DistutilsError
from setuptools.compat import (urllib2, httplib, StringIO, HTTPError,
urlparse, urlunparse, unquote, splituser,
url2pathname, name2codepoint,
unichr, urljoin, urlsplit, urlunsplit,
ConfigParser)
from setuptools.compat import filterfalse
from fnmatch import translate
from setuptools.py26compat import strip_fragment
from setuptools.py27compat import get_all_headers
EGG_FRAGMENT = re.compile(r'^egg=([-A-Za-z0-9_.]+)$')
HREF = re.compile("""href\\s*=\\s*['"]?([^'"> ]+)""", re.I)
# this is here to fix emacs' cruddy broken syntax highlighting
PYPI_MD5 = re.compile(
'<a href="([^"#]+)">([^<]+)</a>\n\s+\\(<a (?:title="MD5 hash"\n\s+)'
'href="[^?]+\?:action=show_md5&digest=([0-9a-f]{32})">md5</a>\\)'
)
URL_SCHEME = re.compile('([-+.a-z0-9]{2,}):',re.I).match
EXTENSIONS = ".tar.gz .tar.bz2 .tar .zip .tgz".split()
__all__ = [
'PackageIndex', 'distros_for_url', 'parse_bdist_wininst',
'interpret_distro_name',
]
_SOCKET_TIMEOUT = 15
def parse_bdist_wininst(name):
"""Return (base,pyversion) or (None,None) for possible .exe name"""
lower = name.lower()
base, py_ver, plat = None, None, None
if lower.endswith('.exe'):
if lower.endswith('.win32.exe'):
base = name[:-10]
plat = 'win32'
elif lower.startswith('.win32-py',-16):
py_ver = name[-7:-4]
base = name[:-16]
plat = 'win32'
elif lower.endswith('.win-amd64.exe'):
base = name[:-14]
plat = 'win-amd64'
elif lower.startswith('.win-amd64-py',-20):
py_ver = name[-7:-4]
base = name[:-20]
plat = 'win-amd64'
return base,py_ver,plat
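The negative-index suffix arithmetic above is easy to misread. A trimmed-down sketch of just the two win32 branches (the full function also handles win-amd64), applied to hypothetical filenames:

```python
def split_wininst_name(name):
    # Mirrors only the '.win32' branches of parse_bdist_wininst above.
    base, py_ver, plat = None, None, None
    lower = name.lower()
    if lower.endswith('.win32.exe'):
        base, plat = name[:-10], 'win32'          # strip '.win32.exe'
    elif lower.startswith('.win32-py', -16):
        # '.win32-pyX.Y.exe' is 16 chars; the version sits at [-7:-4]
        base, py_ver, plat = name[:-16], name[-7:-4], 'win32'
    return base, py_ver, plat

print(split_wininst_name('example-2.0.win32.exe'))        # ('example-2.0', None, 'win32')
print(split_wininst_name('example-2.0.win32-py2.7.exe'))  # ('example-2.0', '2.7', 'win32')
```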
def egg_info_for_url(url):
scheme, server, path, parameters, query, fragment = urlparse(url)
base = unquote(path.split('/')[-1])
if server=='sourceforge.net' and base=='download': # XXX Yuck
base = unquote(path.split('/')[-2])
if '#' in base: base, fragment = base.split('#',1)
return base,fragment
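For illustration, the same base/fragment split can be reproduced with the Python 3 stdlib (the URL below is hypothetical; `egg_info_for_url` itself goes through the compat `urlparse`):

```python
from urllib.parse import urlparse, unquote

url = "https://example.com/dist/Sample-1.0.tar.gz#egg=Sample-1.0"
parsed = urlparse(url)
base = unquote(parsed.path.split('/')[-1])  # last path component
fragment = parsed.fragment                  # text after '#'
print(base, fragment)  # Sample-1.0.tar.gz egg=Sample-1.0
```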
def distros_for_url(url, metadata=None):
"""Yield egg or source distribution objects that might be found at a URL"""
base, fragment = egg_info_for_url(url)
for dist in distros_for_location(url, base, metadata): yield dist
if fragment:
match = EGG_FRAGMENT.match(fragment)
if match:
for dist in interpret_distro_name(
url, match.group(1), metadata, precedence = CHECKOUT_DIST
):
yield dist
def distros_for_location(location, basename, metadata=None):
"""Yield egg or source distribution objects based on basename"""
if basename.endswith('.egg.zip'):
basename = basename[:-4] # strip the .zip
if basename.endswith('.egg') and '-' in basename:
# only one, unambiguous interpretation
return [Distribution.from_location(location, basename, metadata)]
if basename.endswith('.exe'):
win_base, py_ver, platform = parse_bdist_wininst(basename)
if win_base is not None:
return interpret_distro_name(
location, win_base, metadata, py_ver, BINARY_DIST, platform
)
# Try source distro extensions (.zip, .tgz, etc.)
#
for ext in EXTENSIONS:
if basename.endswith(ext):
basename = basename[:-len(ext)]
return interpret_distro_name(location, basename, metadata)
return [] # no extension matched
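The extension scan above simply peels the first matching archive suffix off the basename; note that `.tar.gz` is listed before `.tar`, so the longer suffix wins:

```python
EXTENSIONS = ".tar.gz .tar.bz2 .tar .zip .tgz".split()

basename = "Sample-1.0.tar.gz"
for ext in EXTENSIONS:
    if basename.endswith(ext):
        basename = basename[:-len(ext)]  # drop the matched suffix
        break
print(basename)  # Sample-1.0
```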
def distros_for_filename(filename, metadata=None):
"""Yield possible egg or source distribution objects based on a filename"""
return distros_for_location(
normalize_path(filename), os.path.basename(filename), metadata
)
def interpret_distro_name(
location, basename, metadata, py_version=None, precedence=SOURCE_DIST,
platform=None
):
"""Generate alternative interpretations of a source distro name
Note: if `location` is a filesystem filename, you should call
``pkg_resources.normalize_path()`` on it before passing it to this
routine!
"""
# Generate alternative interpretations of a source distro name
# Because some packages are ambiguous as to name/versions split
# e.g. "adns-python-1.1.0", "egenix-mx-commercial", etc.
    # So, we generate each possible interpretation (e.g. "adns, python-1.1.0"
# "adns-python, 1.1.0", and "adns-python-1.1.0, no version"). In practice,
# the spurious interpretations should be ignored, because in the event
# there's also an "adns" package, the spurious "python-1.1.0" version will
# compare lower than any numeric version number, and is therefore unlikely
# to match a request for it. It's still a potential problem, though, and
# in the long run PyPI and the distutils should go for "safe" names and
# versions in distribution archive names (sdist and bdist).
parts = basename.split('-')
if not py_version and any(re.match('py\d\.\d$', p) for p in parts[2:]):
# it is a bdist_dumb, not an sdist -- bail out
return
for p in range(1,len(parts)+1):
yield Distribution(
location, metadata, '-'.join(parts[:p]), '-'.join(parts[p:]),
py_version=py_version, precedence = precedence,
platform = platform
)
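The ambiguity described in the comments above is easiest to see with a concrete basename: every dash position yields one candidate name/version split.

```python
basename = "adns-python-1.1.0"
parts = basename.split('-')
candidates = [('-'.join(parts[:p]), '-'.join(parts[p:]))
              for p in range(1, len(parts) + 1)]
print(candidates)
# [('adns', 'python-1.1.0'), ('adns-python', '1.1.0'), ('adns-python-1.1.0', '')]
```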
# From Python 2.7 docs
def unique_everseen(iterable, key=None):
"List unique elements, preserving order. Remember all elements ever seen."
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
seen = set()
seen_add = seen.add
if key is None:
for element in filterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
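A quick self-contained check of the docstring's examples, inlining the same classic itertools recipe (using `itertools.filterfalse` directly instead of the compat import):

```python
from itertools import filterfalse

def unique_everseen(iterable, key=None):
    # Same recipe as above: yield each element the first time it is seen.
    seen = set()
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen.add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen.add(k)
                yield element

print(''.join(unique_everseen('AAAABBBCCDAABBB')))     # ABCD
print(''.join(unique_everseen('ABBCcAD', str.lower)))  # ABCD
```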
def unique_values(func):
"""
Wrap a function returning an iterable such that the resulting iterable
only ever yields unique items.
"""
@wraps(func)
def wrapper(*args, **kwargs):
return unique_everseen(func(*args, **kwargs))
return wrapper
REL = re.compile("""<([^>]*\srel\s*=\s*['"]?([^'">]+)[^>]*)>""", re.I)
# this line is here to fix emacs' cruddy broken syntax highlighting
@unique_values
def find_external_links(url, page):
"""Find rel="homepage" and rel="download" links in `page`, yielding URLs"""
for match in REL.finditer(page):
tag, rel = match.groups()
rels = set(map(str.strip, rel.lower().split(',')))
if 'homepage' in rels or 'download' in rels:
for match in HREF.finditer(tag):
yield urljoin(url, htmldecode(match.group(1)))
for tag in ("<th>Home Page", "<th>Download URL"):
pos = page.find(tag)
if pos!=-1:
match = HREF.search(page,pos)
if match:
yield urljoin(url, htmldecode(match.group(1)))
user_agent = "Python-urllib/%s setuptools/%s" % (
sys.version[:3], require('setuptools')[0].version
)
class ContentChecker(object):
"""
A null content checker that defines the interface for checking content
"""
def feed(self, block):
"""
Feed a block of data to the hash.
"""
return
def is_valid(self):
"""
Check the hash. Return False if validation fails.
"""
return True
def report(self, reporter, template):
"""
Call reporter with information about the checker (hash name)
substituted into the template.
"""
return
class HashChecker(ContentChecker):
pattern = re.compile(
r'(?P<hash_name>sha1|sha224|sha384|sha256|sha512|md5)='
r'(?P<expected>[a-f0-9]+)'
)
def __init__(self, hash_name, expected):
self.hash_name = hash_name
self.hash = hashlib.new(hash_name)
self.expected = expected
@classmethod
def from_url(cls, url):
"Construct a (possibly null) ContentChecker from a URL"
fragment = urlparse(url)[-1]
if not fragment:
return ContentChecker()
match = cls.pattern.search(fragment)
if not match:
return ContentChecker()
return cls(**match.groupdict())
def feed(self, block):
self.hash.update(block)
def is_valid(self):
return self.hash.hexdigest() == self.expected
def report(self, reporter, template):
msg = template % self.hash_name
return reporter(msg)
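A round-trip sketch of the checker's mechanics: build a `#md5=...` fragment for a known payload, then verify the same payload against it. The URL and payload are hypothetical; this inlines the same pattern and `hashlib` calls used above.

```python
import hashlib
import re
from urllib.parse import urlparse

pattern = re.compile(
    r'(?P<hash_name>sha1|sha224|sha384|sha256|sha512|md5)='
    r'(?P<expected>[a-f0-9]+)'
)

payload = b"example payload"
url = "https://example.com/pkg.tar.gz#md5=" + hashlib.md5(payload).hexdigest()

match = pattern.search(urlparse(url).fragment)
digest = hashlib.new(match.group('hash_name'))
digest.update(payload)  # equivalent of feed()
print(digest.hexdigest() == match.group('expected'))  # True
```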
class PackageIndex(Environment):
"""A distribution index that scans web pages for download URLs"""
def __init__(
self, index_url="https://pypi.python.org/simple", hosts=('*',),
ca_bundle=None, verify_ssl=True, *args, **kw
):
Environment.__init__(self,*args,**kw)
self.index_url = index_url + "/"[:not index_url.endswith('/')]
self.scanned_urls = {}
self.fetched_urls = {}
self.package_pages = {}
self.allows = re.compile('|'.join(map(translate,hosts))).match
self.to_scan = []
if verify_ssl and ssl_support.is_available and (ca_bundle or ssl_support.find_ca_bundle()):
self.opener = ssl_support.opener_for(ca_bundle)
else: self.opener = urllib2.urlopen
def process_url(self, url, retrieve=False):
"""Evaluate a URL as a possible download, and maybe retrieve it"""
if url in self.scanned_urls and not retrieve:
return
self.scanned_urls[url] = True
if not URL_SCHEME(url):
self.process_filename(url)
return
else:
dists = list(distros_for_url(url))
if dists:
if not self.url_ok(url):
return
self.debug("Found link: %s", url)
if dists or not retrieve or url in self.fetched_urls:
list(map(self.add, dists))
return # don't need the actual page
if not self.url_ok(url):
self.fetched_urls[url] = True
return
self.info("Reading %s", url)
self.fetched_urls[url] = True # prevent multiple fetch attempts
f = self.open_url(url, "Download error on %s: %%s -- Some packages may not be found!" % url)
if f is None: return
self.fetched_urls[f.url] = True
if 'html' not in f.headers.get('content-type', '').lower():
f.close() # not html, we can't process it
return
base = f.url # handle redirects
page = f.read()
if not isinstance(page, str): # We are in Python 3 and got bytes. We want str.
if isinstance(f, HTTPError):
# Errors have no charset, assume latin1:
charset = 'latin-1'
else:
charset = f.headers.get_param('charset') or 'latin-1'
page = page.decode(charset, "ignore")
f.close()
for match in HREF.finditer(page):
link = urljoin(base, htmldecode(match.group(1)))
self.process_url(link)
if url.startswith(self.index_url) and getattr(f,'code',None)!=404:
page = self.process_index(url, page)
def process_filename(self, fn, nested=False):
# process filenames or directories
if not os.path.exists(fn):
self.warn("Not found: %s", fn)
return
if os.path.isdir(fn) and not nested:
path = os.path.realpath(fn)
for item in os.listdir(path):
self.process_filename(os.path.join(path,item), True)
dists = distros_for_filename(fn)
if dists:
self.debug("Found: %s", fn)
list(map(self.add, dists))
def url_ok(self, url, fatal=False):
s = URL_SCHEME(url)
if (s and s.group(1).lower()=='file') or self.allows(urlparse(url)[1]):
return True
msg = ("\nNote: Bypassing %s (disallowed host; see "
"http://bit.ly/1dg9ijs for details).\n")
if fatal:
raise DistutilsError(msg % url)
else:
self.warn(msg, url)
def scan_egg_links(self, search_path):
for item in search_path:
if os.path.isdir(item):
for entry in os.listdir(item):
if entry.endswith('.egg-link'):
self.scan_egg_link(item, entry)
def scan_egg_link(self, path, entry):
lines = [_f for _f in map(str.strip,
open(os.path.join(path, entry))) if _f]
if len(lines)==2:
for dist in find_distributions(os.path.join(path, lines[0])):
dist.location = os.path.join(path, *lines)
dist.precedence = SOURCE_DIST
self.add(dist)
def process_index(self,url,page):
"""Process the contents of a PyPI page"""
def scan(link):
# Process a URL to see if it's for a package page
if link.startswith(self.index_url):
parts = list(map(
unquote, link[len(self.index_url):].split('/')
))
if len(parts)==2 and '#' not in parts[1]:
# it's a package page, sanitize and index it
pkg = safe_name(parts[0])
ver = safe_version(parts[1])
self.package_pages.setdefault(pkg.lower(),{})[link] = True
return to_filename(pkg), to_filename(ver)
return None, None
# process an index page into the package-page index
for match in HREF.finditer(page):
try:
scan(urljoin(url, htmldecode(match.group(1))))
except ValueError:
pass
pkg, ver = scan(url) # ensure this page is in the page index
if pkg:
# process individual package page
for new_url in find_external_links(url, page):
# Process the found URL
base, frag = egg_info_for_url(new_url)
if base.endswith('.py') and not frag:
if ver:
new_url+='#egg=%s-%s' % (pkg,ver)
else:
self.need_version_info(url)
self.scan_url(new_url)
return PYPI_MD5.sub(
lambda m: '<a href="%s#md5=%s">%s</a>' % m.group(1,3,2), page
)
else:
return "" # no sense double-scanning non-package pages
def need_version_info(self, url):
self.scan_all(
"Page at %s links to .py file(s) without version info; an index "
"scan is required.", url
)
def scan_all(self, msg=None, *args):
if self.index_url not in self.fetched_urls:
if msg: self.warn(msg,*args)
self.info(
"Scanning index of all packages (this may take a while)"
)
self.scan_url(self.index_url)
def find_packages(self, requirement):
self.scan_url(self.index_url + requirement.unsafe_name+'/')
if not self.package_pages.get(requirement.key):
# Fall back to safe version of the name
self.scan_url(self.index_url + requirement.project_name+'/')
if not self.package_pages.get(requirement.key):
# We couldn't find the target package, so search the index page too
self.not_found_in_index(requirement)
for url in list(self.package_pages.get(requirement.key,())):
# scan each page that might be related to the desired package
self.scan_url(url)
def obtain(self, requirement, installer=None):
self.prescan()
self.find_packages(requirement)
for dist in self[requirement.key]:
if dist in requirement:
return dist
self.debug("%s does not match %s", requirement, dist)
return super(PackageIndex, self).obtain(requirement,installer)
def check_hash(self, checker, filename, tfp):
"""
checker is a ContentChecker
"""
checker.report(self.debug,
"Validating %%s checksum for %s" % filename)
if not checker.is_valid():
tfp.close()
os.unlink(filename)
raise DistutilsError(
"%s validation failed for %s; "
"possible download problem?" % (
checker.hash.name, os.path.basename(filename))
)
def add_find_links(self, urls):
"""Add `urls` to the list that will be prescanned for searches"""
for url in urls:
if (
self.to_scan is None # if we have already "gone online"
or not URL_SCHEME(url) # or it's a local file/directory
or url.startswith('file:')
or list(distros_for_url(url)) # or a direct package link
):
# then go ahead and process it now
self.scan_url(url)
else:
# otherwise, defer retrieval till later
self.to_scan.append(url)
def prescan(self):
"""Scan urls scheduled for prescanning (e.g. --find-links)"""
if self.to_scan:
list(map(self.scan_url, self.to_scan))
self.to_scan = None # from now on, go ahead and process immediately
def not_found_in_index(self, requirement):
if self[requirement.key]: # we've seen at least one distro
meth, msg = self.info, "Couldn't retrieve index page for %r"
else: # no distros seen for this name, might be misspelled
meth, msg = (self.warn,
"Couldn't find index page for %r (maybe misspelled?)")
meth(msg, requirement.unsafe_name)
self.scan_all()
def download(self, spec, tmpdir):
"""Locate and/or download `spec` to `tmpdir`, returning a local path
`spec` may be a ``Requirement`` object, or a string containing a URL,
an existing local filename, or a project/version requirement spec
(i.e. the string form of a ``Requirement`` object). If it is the URL
of a .py file with an unambiguous ``#egg=name-version`` tag (i.e., one
that escapes ``-`` as ``_`` throughout), a trivial ``setup.py`` is
automatically created alongside the downloaded file.
If `spec` is a ``Requirement`` object or a string containing a
project/version requirement spec, this method returns the location of
a matching distribution (possibly after downloading it to `tmpdir`).
If `spec` is a locally existing file or directory name, it is simply
returned unchanged. If `spec` is a URL, it is downloaded to a subpath
of `tmpdir`, and the local filename is returned. Various errors may be
raised if a problem occurs during downloading.
"""
if not isinstance(spec,Requirement):
scheme = URL_SCHEME(spec)
if scheme:
# It's a url, download it to tmpdir
found = self._download_url(scheme.group(1), spec, tmpdir)
base, fragment = egg_info_for_url(spec)
if base.endswith('.py'):
found = self.gen_setup(found,fragment,tmpdir)
return found
elif os.path.exists(spec):
# Existing file or directory, just return it
return spec
else:
try:
spec = Requirement.parse(spec)
except ValueError:
raise DistutilsError(
"Not a URL, existing file, or requirement spec: %r" %
(spec,)
)
return getattr(self.fetch_distribution(spec, tmpdir),'location',None)
def fetch_distribution(
self, requirement, tmpdir, force_scan=False, source=False,
develop_ok=False, local_index=None
):
"""Obtain a distribution suitable for fulfilling `requirement`
`requirement` must be a ``pkg_resources.Requirement`` instance.
If necessary, or if the `force_scan` flag is set, the requirement is
searched for in the (online) package index as well as the locally
installed packages. If a distribution matching `requirement` is found,
the returned distribution's ``location`` is the value you would have
gotten from calling the ``download()`` method with the matching
distribution's URL or filename. If no matching distribution is found,
``None`` is returned.
If the `source` flag is set, only source distributions and source
checkout links will be considered. Unless the `develop_ok` flag is
set, development and system eggs (i.e., those using the ``.egg-info``
format) will be ignored.
"""
# process a Requirement
self.info("Searching for %s", requirement)
skipped = {}
dist = None
def find(req, env=None):
if env is None:
env = self
# Find a matching distribution; may be called more than once
for dist in env[req.key]:
if dist.precedence==DEVELOP_DIST and not develop_ok:
if dist not in skipped:
self.warn("Skipping development or system egg: %s",dist)
skipped[dist] = 1
continue
if dist in req and (dist.precedence<=SOURCE_DIST or not source):
return dist
if force_scan:
self.prescan()
self.find_packages(requirement)
dist = find(requirement)
if local_index is not None:
dist = dist or find(requirement, local_index)
if dist is None:
if self.to_scan is not None:
self.prescan()
dist = find(requirement)
if dist is None and not force_scan:
self.find_packages(requirement)
dist = find(requirement)
if dist is None:
self.warn(
"No local packages or download links found for %s%s",
(source and "a source distribution of " or ""),
requirement,
)
else:
self.info("Best match: %s", dist)
return dist.clone(location=self.download(dist.location, tmpdir))
def fetch(self, requirement, tmpdir, force_scan=False, source=False):
"""Obtain a file suitable for fulfilling `requirement`
DEPRECATED; use the ``fetch_distribution()`` method now instead. For
backward compatibility, this routine is identical but returns the
``location`` of the downloaded distribution instead of a distribution
object.
"""
dist = self.fetch_distribution(requirement,tmpdir,force_scan,source)
if dist is not None:
return dist.location
return None
def gen_setup(self, filename, fragment, tmpdir):
match = EGG_FRAGMENT.match(fragment)
dists = match and [
d for d in
interpret_distro_name(filename, match.group(1), None) if d.version
] or []
if len(dists)==1: # unambiguous ``#egg`` fragment
basename = os.path.basename(filename)
# Make sure the file has been downloaded to the temp dir.
if os.path.dirname(filename) != tmpdir:
dst = os.path.join(tmpdir, basename)
from setuptools.command.easy_install import samefile
if not samefile(filename, dst):
shutil.copy2(filename, dst)
filename=dst
with open(os.path.join(tmpdir, 'setup.py'), 'w') as file:
file.write(
"from setuptools import setup\n"
"setup(name=%r, version=%r, py_modules=[%r])\n"
% (
dists[0].project_name, dists[0].version,
os.path.splitext(basename)[0]
)
)
return filename
elif match:
raise DistutilsError(
"Can't unambiguously interpret project/version identifier %r; "
"any dashes in the name or version should be escaped using "
"underscores. %r" % (fragment,dists)
)
else:
raise DistutilsError(
"Can't process plain .py files without an '#egg=name-version'"
" suffix to enable automatic setup script generation."
)
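The trivial `setup.py` that `gen_setup` writes for a bare `.py` download looks like this (project and module names below are hypothetical):

```python
project_name, version, module = "sample", "1.0", "sample"  # hypothetical
setup_py = (
    "from setuptools import setup\n"
    "setup(name=%r, version=%r, py_modules=[%r])\n"
    % (project_name, version, module)
)
print(setup_py)
# from setuptools import setup
# setup(name='sample', version='1.0', py_modules=['sample'])
```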
dl_blocksize = 8192
def _download_to(self, url, filename):
self.info("Downloading %s", url)
# Download the file
fp, info = None, None
try:
checker = HashChecker.from_url(url)
fp = self.open_url(strip_fragment(url))
if isinstance(fp, HTTPError):
raise DistutilsError(
"Can't download %s: %s %s" % (url, fp.code,fp.msg)
)
headers = fp.info()
blocknum = 0
bs = self.dl_blocksize
size = -1
if "content-length" in headers:
# Some servers return multiple Content-Length headers :(
sizes = get_all_headers(headers, 'Content-Length')
size = max(map(int, sizes))
self.reporthook(url, filename, blocknum, bs, size)
with open(filename,'wb') as tfp:
while True:
block = fp.read(bs)
if block:
checker.feed(block)
tfp.write(block)
blocknum += 1
self.reporthook(url, filename, blocknum, bs, size)
else:
break
self.check_hash(checker, filename, tfp)
return headers
finally:
if fp: fp.close()
def reporthook(self, url, filename, blocknum, blksize, size):
pass # no-op
def open_url(self, url, warning=None):
if url.startswith('file:'):
return local_open(url)
try:
return open_with_auth(url, self.opener)
except (ValueError, httplib.InvalidURL) as v:
msg = ' '.join([str(arg) for arg in v.args])
if warning:
self.warn(warning, msg)
else:
raise DistutilsError('%s %s' % (url, msg))
except urllib2.HTTPError as v:
return v
except urllib2.URLError as v:
if warning:
self.warn(warning, v.reason)
else:
raise DistutilsError("Download error for %s: %s"
% (url, v.reason))
except httplib.BadStatusLine as v:
if warning:
self.warn(warning, v.line)
else:
raise DistutilsError(
'%s returned a bad status line. The server might be '
'down, %s' %
(url, v.line)
)
except httplib.HTTPException as v:
if warning:
self.warn(warning, v)
else:
raise DistutilsError("Download error for %s: %s"
% (url, v))
def _download_url(self, scheme, url, tmpdir):
# Determine download filename
#
name, fragment = egg_info_for_url(url)
if name:
while '..' in name:
name = name.replace('..','.').replace('\\','_')
else:
name = "__downloaded__" # default if URL has no path contents
if name.endswith('.egg.zip'):
name = name[:-4] # strip the extra .zip before download
filename = os.path.join(tmpdir,name)
# Download the file
#
if scheme=='svn' or scheme.startswith('svn+'):
return self._download_svn(url, filename)
elif scheme=='git' or scheme.startswith('git+'):
return self._download_git(url, filename)
elif scheme.startswith('hg+'):
return self._download_hg(url, filename)
elif scheme=='file':
return url2pathname(urlparse(url)[2])
else:
self.url_ok(url, True) # raises error if not allowed
return self._attempt_download(url, filename)
def scan_url(self, url):
self.process_url(url, True)
def _attempt_download(self, url, filename):
headers = self._download_to(url, filename)
if 'html' in headers.get('content-type','').lower():
return self._download_html(url, headers, filename)
else:
return filename
def _download_html(self, url, headers, filename):
file = open(filename)
for line in file:
if line.strip():
# Check for a subversion index page
if re.search(r'<title>([^- ]+ - )?Revision \d+:', line):
# it's a subversion index page:
file.close()
os.unlink(filename)
return self._download_svn(url, filename)
break # not an index page
file.close()
os.unlink(filename)
raise DistutilsError("Unexpected HTML page found at "+url)
def _download_svn(self, url, filename):
url = url.split('#',1)[0] # remove any fragment for svn's sake
creds = ''
if url.lower().startswith('svn:') and '@' in url:
scheme, netloc, path, p, q, f = urlparse(url)
if not netloc and path.startswith('//') and '/' in path[2:]:
netloc, path = path[2:].split('/',1)
auth, host = splituser(netloc)
if auth:
if ':' in auth:
user, pw = auth.split(':',1)
creds = " --username=%s --password=%s" % (user, pw)
else:
creds = " --username="+auth
netloc = host
                    url = urlunparse((scheme, netloc, path, p, q, f))
self.info("Doing subversion checkout from %s to %s", url, filename)
os.system("svn checkout%s -q %s %s" % (creds, url, filename))
return filename
@staticmethod
def _vcs_split_rev_from_url(url, pop_prefix=False):
scheme, netloc, path, query, frag = urlsplit(url)
scheme = scheme.split('+', 1)[-1]
# Some fragment identification fails
path = path.split('#',1)[0]
rev = None
if '@' in path:
path, rev = path.rsplit('@', 1)
# Also, discard fragment
url = urlunsplit((scheme, netloc, path, query, ''))
return url, rev
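For example, a pip-style VCS URL splits into a plain clone URL plus an optional revision. This standalone sketch mirrors `_vcs_split_rev_from_url` using the Python 3 stdlib:

```python
from urllib.parse import urlsplit, urlunsplit

def split_rev(url):
    # Mirrors _vcs_split_rev_from_url above.
    scheme, netloc, path, query, frag = urlsplit(url)
    scheme = scheme.split('+', 1)[-1]    # 'git+https' -> 'https'
    path = path.split('#', 1)[0]         # discard any fragment in the path
    rev = None
    if '@' in path:
        path, rev = path.rsplit('@', 1)  # trailing '@rev' selects a revision
    return urlunsplit((scheme, netloc, path, query, '')), rev

print(split_rev("git+https://example.com/repo.git@v1.2"))
# ('https://example.com/repo.git', 'v1.2')
```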
def _download_git(self, url, filename):
filename = filename.split('#',1)[0]
url, rev = self._vcs_split_rev_from_url(url, pop_prefix=True)
self.info("Doing git clone from %s to %s", url, filename)
os.system("git clone --quiet %s %s" % (url, filename))
if rev is not None:
self.info("Checking out %s", rev)
os.system("(cd %s && git checkout --quiet %s)" % (
filename,
rev,
))
return filename
def _download_hg(self, url, filename):
filename = filename.split('#',1)[0]
url, rev = self._vcs_split_rev_from_url(url, pop_prefix=True)
self.info("Doing hg clone from %s to %s", url, filename)
os.system("hg clone --quiet %s %s" % (url, filename))
if rev is not None:
self.info("Updating to %s", rev)
os.system("(cd %s && hg up -C -r %s >&-)" % (
filename,
rev,
))
return filename
def debug(self, msg, *args):
log.debug(msg, *args)
def info(self, msg, *args):
log.info(msg, *args)
def warn(self, msg, *args):
log.warn(msg, *args)
# This pattern matches a character entity reference (a decimal numeric
# reference, a hexadecimal numeric reference, or a named reference).
entity_sub = re.compile(r'&(#(\d+|x[\da-fA-F]+)|[\w.:-]+);?').sub
def uchr(c):
if not isinstance(c, int):
return c
if c>255: return unichr(c)
return chr(c)
def decode_entity(match):
what = match.group(1)
if what.startswith('#x'):
what = int(what[2:], 16)
elif what.startswith('#'):
what = int(what[1:])
else:
what = name2codepoint.get(what, match.group(0))
return uchr(what)
def htmldecode(text):
"""Decode HTML entities in the given text."""
return entity_sub(decode_entity, text)
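A self-contained sketch of the same entity decoding for hex, decimal, and named references, using `html.entities.name2codepoint` directly (Python 3 `chr` covers the `uchr` compat shim):

```python
import re
from html.entities import name2codepoint

entity_sub = re.compile(r'&(#(\d+|x[\da-fA-F]+)|[\w.:-]+);?').sub

def decode_entity(match):
    what = match.group(1)
    if what.startswith('#x'):
        return chr(int(what[2:], 16))   # hexadecimal reference
    if what.startswith('#'):
        return chr(int(what[1:]))       # decimal reference
    code = name2codepoint.get(what)     # named reference, e.g. 'lt'
    return chr(code) if code is not None else match.group(0)

print(entity_sub(decode_entity, '&lt;b&gt;&#65;&#x42;&lt;/b&gt;'))  # <b>AB</b>
```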
def socket_timeout(timeout=15):
def _socket_timeout(func):
def _socket_timeout(*args, **kwargs):
old_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(timeout)
try:
return func(*args, **kwargs)
finally:
socket.setdefaulttimeout(old_timeout)
return _socket_timeout
return _socket_timeout
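Usage sketch of the decorator factory: the wrapped call sees the requested default timeout, and the previous value is restored afterwards (inner functions renamed for readability):

```python
import socket

def socket_timeout(timeout=15):
    # Same decorator shape as above.
    def decorator(func):
        def wrapper(*args, **kwargs):
            old_timeout = socket.getdefaulttimeout()
            socket.setdefaulttimeout(timeout)
            try:
                return func(*args, **kwargs)
            finally:
                socket.setdefaulttimeout(old_timeout)
        return wrapper
    return decorator

@socket_timeout(5)
def current_timeout():
    return socket.getdefaulttimeout()

inside = current_timeout()
print(inside)
```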
def _encode_auth(auth):
"""
A function compatible with Python 2.3-3.3 that will encode
auth from a URL suitable for an HTTP header.
>>> str(_encode_auth('username%3Apassword'))
'dXNlcm5hbWU6cGFzc3dvcmQ='
Long auth strings should not cause a newline to be inserted.
>>> long_auth = 'username:' + 'password'*10
>>> chr(10) in str(_encode_auth(long_auth))
False
"""
auth_s = unquote(auth)
# convert to bytes
auth_bytes = auth_s.encode()
# use the legacy interface for Python 2.3 support
encoded_bytes = base64.encodestring(auth_bytes)
# convert back to a string
encoded = encoded_bytes.decode()
# strip the trailing carriage return
return encoded.replace('\n','')
class Credential(object):
"""
A username/password pair. Use like a namedtuple.
"""
def __init__(self, username, password):
self.username = username
self.password = password
def __iter__(self):
yield self.username
yield self.password
def __str__(self):
return '%(username)s:%(password)s' % vars(self)
class PyPIConfig(ConfigParser.ConfigParser):
def __init__(self):
"""
Load from ~/.pypirc
"""
defaults = dict.fromkeys(['username', 'password', 'repository'], '')
ConfigParser.ConfigParser.__init__(self, defaults)
rc = os.path.join(os.path.expanduser('~'), '.pypirc')
if os.path.exists(rc):
self.read(rc)
@property
def creds_by_repository(self):
sections_with_repositories = [
section for section in self.sections()
if self.get(section, 'repository').strip()
]
return dict(map(self._get_repo_cred, sections_with_repositories))
def _get_repo_cred(self, section):
repo = self.get(section, 'repository').strip()
return repo, Credential(
self.get(section, 'username').strip(),
self.get(section, 'password').strip(),
)
def find_credential(self, url):
"""
If the URL indicated appears to be a repository defined in this
config, return the credential for that repository.
"""
for repository, cred in self.creds_by_repository.items():
if url.startswith(repository):
return cred
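Credential lookup is a first-prefix-match over the configured repositories; a sketch with hypothetical config values standing in for `~/.pypirc` contents:

```python
class Credential:
    """Minimal stand-in for the Credential class above."""
    def __init__(self, username, password):
        self.username, self.password = username, password
    def __str__(self):
        return '%s:%s' % (self.username, self.password)

creds_by_repository = {  # would normally be parsed from ~/.pypirc
    "https://upload.example.org/legacy/": Credential("alice", "s3cret"),
}

def find_credential(url):
    # Return the credential whose repository URL prefixes `url`, if any.
    for repository, cred in creds_by_repository.items():
        if url.startswith(repository):
            return cred

cred = find_credential("https://upload.example.org/legacy/sample-1.0.tar.gz")
print(cred)  # alice:s3cret
```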
def open_with_auth(url, opener=urllib2.urlopen):
"""Open a urllib2 request, handling HTTP authentication"""
scheme, netloc, path, params, query, frag = urlparse(url)
# Double scheme does not raise on Mac OS X as revealed by a
# failing test. We would expect "nonnumeric port". Refs #20.
if netloc.endswith(':'):
raise httplib.InvalidURL("nonnumeric port: ''")
if scheme in ('http', 'https'):
auth, host = splituser(netloc)
else:
auth = None
if not auth:
cred = PyPIConfig().find_credential(url)
if cred:
auth = str(cred)
info = cred.username, url
log.info('Authenticating as %s for %s (from .pypirc)' % info)
if auth:
auth = "Basic " + _encode_auth(auth)
new_url = urlunparse((scheme,host,path,params,query,frag))
request = urllib2.Request(new_url)
request.add_header("Authorization", auth)
else:
request = urllib2.Request(url)
request.add_header('User-Agent', user_agent)
fp = opener(request)
if auth:
# Put authentication info back into request URL if same host,
# so that links found on the page will work
s2, h2, path2, param2, query2, frag2 = urlparse(fp.url)
if s2==scheme and h2==host:
fp.url = urlunparse((s2,netloc,path2,param2,query2,frag2))
return fp
# adding a timeout to avoid freezing package_index
open_with_auth = socket_timeout(_SOCKET_TIMEOUT)(open_with_auth)
def fix_sf_url(url):
return url # backward compatibility
def local_open(url):
"""Read a local path, with special support for directories"""
scheme, server, path, param, query, frag = urlparse(url)
filename = url2pathname(path)
if os.path.isfile(filename):
return urllib2.urlopen(url)
elif path.endswith('/') and os.path.isdir(filename):
files = []
for f in os.listdir(filename):
if f=='index.html':
with open(os.path.join(filename,f),'r') as fp:
body = fp.read()
break
elif os.path.isdir(os.path.join(filename,f)):
f+='/'
files.append("<a href=%r>%s</a>" % (f,f))
else:
body = ("<html><head><title>%s</title>" % url) + \
"</head><body>%s</body></html>" % '\n'.join(files)
status, message = 200, "OK"
else:
status, message, body = 404, "Path not found", "Not found"
headers = {'content-type': 'text/html'}
return HTTPError(url, status, message, headers, StringIO(body))
| mit |
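The truncated snippet above is the tail of the file's `_encode_auth` helper plus its credential classes. A minimal Python 3 sketch of the same Basic-auth encoding idea (the function name here is assumed; the original file targets Python 2 with `urllib2`/`httplib`):

```python
import base64
from urllib.parse import unquote


def encode_auth(auth):
    """Encode a 'user:password' string for an HTTP Basic Authorization header.

    Mirrors the helper whose tail appears above: unquote any URL-encoded
    characters, base64-encode the result, and strip the newlines that
    base64.encodebytes inserts.
    """
    auth = unquote(auth)
    encoded = base64.encodebytes(auth.encode()).decode()
    return encoded.replace('\n', '')


print(encode_auth('user:pass'))  # dXNlcjpwYXNz
```

The resulting value would then be sent as `Authorization: Basic dXNlcjpwYXNz`, which is what `open_with_auth` does with the `"Basic " + _encode_auth(auth)` line below.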
Orav/kbengine | kbe/src/lib/python/Lib/py_compile.py | 3 | 7279 | """Routine to "compile" a .py file to a .pyc (or .pyo) file.
This module has intimate knowledge of the format of .pyc files.
"""
import importlib._bootstrap
import importlib.machinery
import importlib.util
import os
import os.path
import sys
import traceback
__all__ = ["compile", "main", "PyCompileError"]
class PyCompileError(Exception):
"""Exception raised when an error occurs while attempting to
compile the file.
To raise this exception, use
raise PyCompileError(exc_type,exc_value,file[,msg])
where
exc_type: exception type to be used in error message
                  type name can be accessed as class variable
                  'exc_type_name'
        exc_value: exception value to be used in error message
                  can be accessed as class variable 'exc_value'
        file: name of file being compiled to be used in error message
                  can be accessed as class variable 'file'
        msg: string message to be written as error message
                  If no value is given, a default exception message will be
                  given, consistent with 'standard' py_compile output.
                  message (or default) can be accessed as class variable
                  'msg'
"""
def __init__(self, exc_type, exc_value, file, msg=''):
exc_type_name = exc_type.__name__
if exc_type is SyntaxError:
tbtext = ''.join(traceback.format_exception_only(
exc_type, exc_value))
errmsg = tbtext.replace('File "<string>"', 'File "%s"' % file)
else:
errmsg = "Sorry: %s: %s" % (exc_type_name,exc_value)
Exception.__init__(self,msg or errmsg,exc_type_name,exc_value,file)
self.exc_type_name = exc_type_name
self.exc_value = exc_value
self.file = file
self.msg = msg or errmsg
def __str__(self):
return self.msg
def compile(file, cfile=None, dfile=None, doraise=False, optimize=-1):
"""Byte-compile one Python source file to Python bytecode.
:param file: The source file name.
:param cfile: The target byte compiled file name. When not given, this
defaults to the PEP 3147 location.
:param dfile: Purported file name, i.e. the file name that shows up in
error messages. Defaults to the source file name.
:param doraise: Flag indicating whether or not an exception should be
raised when a compile error is found. If an exception occurs and this
flag is set to False, a string indicating the nature of the exception
will be printed, and the function will return to the caller. If an
exception occurs and this flag is set to True, a PyCompileError
exception will be raised.
:param optimize: The optimization level for the compiler. Valid values
are -1, 0, 1 and 2. A value of -1 means to use the optimization
level of the current interpreter, as given by -O command line options.
:return: Path to the resulting byte compiled file.
Note that it isn't necessary to byte-compile Python modules for
execution efficiency -- Python itself byte-compiles a module when
it is loaded, and if it can, writes out the bytecode to the
corresponding .pyc (or .pyo) file.
However, if a Python installation is shared between users, it is a
good idea to byte-compile all modules upon installation, since
other users may not be able to write in the source directories,
and thus they won't be able to write the .pyc/.pyo file, and then
they would be byte-compiling every module each time it is loaded.
This can slow down program start-up considerably.
See compileall.py for a script/module that uses this module to
byte-compile all installed files (or all files in selected
directories).
Do note that FileExistsError is raised if cfile ends up pointing at a
non-regular file or symlink. Because the compilation uses a file renaming,
the resulting file would be regular and thus not the same type of file as
it was previously.
"""
if cfile is None:
if optimize >= 0:
cfile = importlib.util.cache_from_source(file,
debug_override=not optimize)
else:
cfile = importlib.util.cache_from_source(file)
if os.path.islink(cfile):
msg = ('{} is a symlink and will be changed into a regular file if '
'import writes a byte-compiled file to it')
raise FileExistsError(msg.format(cfile))
elif os.path.exists(cfile) and not os.path.isfile(cfile):
msg = ('{} is a non-regular file and will be changed into a regular '
'one if import writes a byte-compiled file to it')
raise FileExistsError(msg.format(cfile))
loader = importlib.machinery.SourceFileLoader('<py_compile>', file)
source_bytes = loader.get_data(file)
try:
code = loader.source_to_code(source_bytes, dfile or file,
_optimize=optimize)
except Exception as err:
py_exc = PyCompileError(err.__class__, err, dfile or file)
if doraise:
raise py_exc
else:
sys.stderr.write(py_exc.msg + '\n')
return
try:
dirname = os.path.dirname(cfile)
if dirname:
os.makedirs(dirname)
except FileExistsError:
pass
source_stats = loader.path_stats(file)
bytecode = importlib._bootstrap._code_to_bytecode(
code, source_stats['mtime'], source_stats['size'])
mode = importlib._bootstrap._calc_mode(file)
importlib._bootstrap._write_atomic(cfile, bytecode, mode)
return cfile
def main(args=None):
"""Compile several source files.
The files named in 'args' (or on the command line, if 'args' is
not specified) are compiled and the resulting bytecode is cached
in the normal manner. This function does not search a directory
structure to locate source files; it only compiles files named
explicitly. If '-' is the only parameter in args, the list of
files is taken from standard input.
"""
if args is None:
args = sys.argv[1:]
rv = 0
if args == ['-']:
while True:
filename = sys.stdin.readline()
if not filename:
break
filename = filename.rstrip('\n')
try:
compile(filename, doraise=True)
except PyCompileError as error:
rv = 1
sys.stderr.write("%s\n" % error.msg)
except OSError as error:
rv = 1
sys.stderr.write("%s\n" % error)
else:
for filename in args:
try:
compile(filename, doraise=True)
except PyCompileError as error:
# return value to indicate at least one failure
rv = 1
sys.stderr.write(error.msg)
return rv
if __name__ == "__main__":
sys.exit(main())
| lgpl-3.0 |
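A short usage sketch for the `compile()` API defined above: byte-compile a throwaway module and confirm the cached file lands in the PEP 3147 `__pycache__` location (the default when no `cfile` is passed).

```python
import os
import py_compile
import tempfile

# Byte-compile a small module; with no cfile argument the output goes
# to the PEP 3147 __pycache__ path computed by importlib.util.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "hello.py")
    with open(src, "w") as f:
        f.write("GREETING = 'hello'\n")
    cfile = py_compile.compile(src, doraise=True)
    # Check inside the with-block, before the temp directory is removed.
    compiled_ok = os.path.exists(cfile) and "__pycache__" in cfile

print(compiled_ok)  # True
```

Passing `doraise=True` makes a syntax error surface as `PyCompileError` instead of a message on stderr, which is usually what scripted callers want.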
TsinghuaX/edx-platform | lms/djangoapps/open_ended_grading/tests.py | 1 | 17560 | """
Tests for open ended grading interfaces
./manage.py lms --settings test test lms/djangoapps/open_ended_grading
"""
import json
import logging
from django.conf import settings
from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.test.utils import override_settings
from mock import MagicMock, patch, Mock
from xblock.field_data import DictFieldData
from xblock.fields import ScopeIds
from xmodule import peer_grading_module
from xmodule.error_module import ErrorDescriptor
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.open_ended_grading_classes import peer_grading_service, controller_query_service
from xmodule.tests import test_util_open_ended
from courseware.tests import factories
from courseware.tests.helpers import LoginEnrollmentTestCase, check_for_get_code, check_for_post_code
from courseware.tests.modulestore_config import TEST_DATA_MIXED_MODULESTORE
from lms.lib.xblock.runtime import LmsModuleSystem
from courseware.roles import CourseStaffRole
from mitxmako.shortcuts import render_to_string
from student.models import unique_id_for_user
from open_ended_grading import staff_grading_service, views, utils
log = logging.getLogger(__name__)
class EmptyStaffGradingService(object):
"""
A staff grading service that does not return a problem list from get_problem_list.
Used for testing to see if error message for empty problem list is correctly displayed.
"""
def get_problem_list(self, course_id, user_id):
"""
Return a staff grading response that is missing a problem list key.
"""
return json.dumps({'success': True, 'error': 'No problems found.'})
def make_instructor(course, user_email):
"""
Makes a given user an instructor in a course.
"""
CourseStaffRole(course.location).add_users(User.objects.get(email=user_email))
class StudentProblemListMockQuery(object):
"""
Mock controller query service for testing student problem list functionality.
"""
def get_grading_status_list(self, *args, **kwargs):
"""
Get a mock grading status list with locations from the open_ended test course.
@returns: json formatted grading status message.
"""
grading_status_list = json.dumps(
{
"version": 1,
"problem_list": [
{
"problem_name": "Test1",
"grader_type": "IN",
"eta_available": True,
"state": "Finished",
"eta": 259200,
"location": "i4x://edX/open_ended/combinedopenended/SampleQuestion1Attempt"
},
{
"problem_name": "Test2",
"grader_type": "NA",
"eta_available": True,
"state": "Waiting to be Graded",
"eta": 259200,
"location": "i4x://edX/open_ended/combinedopenended/SampleQuestion"
},
{
"problem_name": "Test3",
"grader_type": "PE",
"eta_available": True,
"state": "Waiting to be Graded",
"eta": 259200,
"location": "i4x://edX/open_ended/combinedopenended/SampleQuestion454"
},
],
"success": True
}
)
return grading_status_list
@override_settings(MODULESTORE=TEST_DATA_MIXED_MODULESTORE)
class TestStaffGradingService(ModuleStoreTestCase, LoginEnrollmentTestCase):
'''
Check that staff grading service proxy works. Basically just checking the
access control and error handling logic -- all the actual work is on the
backend.
'''
def setUp(self):
self.student = 'view@test.com'
self.instructor = 'view2@test.com'
self.password = 'foo'
self.location = 'TestLocation'
self.create_account('u1', self.student, self.password)
self.create_account('u2', self.instructor, self.password)
self.activate_user(self.student)
self.activate_user(self.instructor)
self.course_id = "edX/toy/2012_Fall"
self.toy = modulestore().get_course(self.course_id)
make_instructor(self.toy, self.instructor)
self.mock_service = staff_grading_service.staff_grading_service()
self.logout()
def test_access(self):
"""
Make sure only staff have access.
"""
self.login(self.student, self.password)
# both get and post should return 404
for view_name in ('staff_grading_get_next', 'staff_grading_save_grade'):
url = reverse(view_name, kwargs={'course_id': self.course_id})
check_for_get_code(self, 404, url)
check_for_post_code(self, 404, url)
def test_get_next(self):
self.login(self.instructor, self.password)
url = reverse('staff_grading_get_next', kwargs={'course_id': self.course_id})
data = {'location': self.location}
response = check_for_post_code(self, 200, url, data)
content = json.loads(response.content)
self.assertTrue(content['success'])
self.assertEquals(content['submission_id'], self.mock_service.cnt)
self.assertIsNotNone(content['submission'])
self.assertIsNotNone(content['num_graded'])
self.assertIsNotNone(content['min_for_ml'])
self.assertIsNotNone(content['num_pending'])
self.assertIsNotNone(content['prompt'])
self.assertIsNotNone(content['ml_error_info'])
self.assertIsNotNone(content['max_score'])
self.assertIsNotNone(content['rubric'])
def save_grade_base(self, skip=False):
self.login(self.instructor, self.password)
url = reverse('staff_grading_save_grade', kwargs={'course_id': self.course_id})
data = {'score': '12',
'feedback': 'great!',
'submission_id': '123',
'location': self.location,
'submission_flagged': "true",
'rubric_scores[]': ['1', '2']}
if skip:
data.update({'skipped': True})
response = check_for_post_code(self, 200, url, data)
content = json.loads(response.content)
self.assertTrue(content['success'], str(content))
self.assertEquals(content['submission_id'], self.mock_service.cnt)
def test_save_grade(self):
self.save_grade_base(skip=False)
def test_save_grade_skip(self):
self.save_grade_base(skip=True)
def test_get_problem_list(self):
self.login(self.instructor, self.password)
url = reverse('staff_grading_get_problem_list', kwargs={'course_id': self.course_id})
data = {}
response = check_for_post_code(self, 200, url, data)
content = json.loads(response.content)
self.assertTrue(content['success'])
self.assertEqual(content['problem_list'], [])
@patch('open_ended_grading.staff_grading_service._service', EmptyStaffGradingService())
def test_get_problem_list_missing(self):
"""
Test to see if a staff grading response missing a problem list is given the appropriate error.
Mock the staff grading service to enable the key to be missing.
"""
# Get a valid user object.
instructor = User.objects.get(email=self.instructor)
# Mock a request object.
request = Mock(
user=instructor,
)
# Get the response and load its content.
response = json.loads(staff_grading_service.get_problem_list(request, self.course_id).content)
# A valid response will have an "error" key.
self.assertTrue('error' in response)
# Check that the error text is correct.
self.assertIn("Cannot find", response['error'])
@override_settings(MODULESTORE=TEST_DATA_MIXED_MODULESTORE)
class TestPeerGradingService(ModuleStoreTestCase, LoginEnrollmentTestCase):
'''
    Check that the peer grading service proxy works. Basically just checking the
access control and error handling logic -- all the actual work is on the
backend.
'''
def setUp(self):
self.student = 'view@test.com'
self.instructor = 'view2@test.com'
self.password = 'foo'
self.location = 'TestLocation'
self.create_account('u1', self.student, self.password)
self.create_account('u2', self.instructor, self.password)
self.activate_user(self.student)
self.activate_user(self.instructor)
self.course_id = "edX/toy/2012_Fall"
self.toy = modulestore().get_course(self.course_id)
location = "i4x://edX/toy/peergrading/init"
field_data = DictFieldData({'data': "<peergrading/>", 'location': location, 'category':'peergrading'})
self.mock_service = peer_grading_service.MockPeerGradingService()
self.system = LmsModuleSystem(
static_url=settings.STATIC_URL,
track_function=None,
get_module=None,
render_template=render_to_string,
replace_urls=None,
s3_interface=test_util_open_ended.S3_INTERFACE,
open_ended_grading_interface=test_util_open_ended.OPEN_ENDED_GRADING_INTERFACE,
mixins=settings.XBLOCK_MIXINS,
error_descriptor_class=ErrorDescriptor,
)
self.descriptor = peer_grading_module.PeerGradingDescriptor(self.system, field_data, ScopeIds(None, None, None, None))
self.descriptor.xmodule_runtime = self.system
self.peer_module = self.descriptor
self.peer_module.peer_gs = self.mock_service
self.logout()
def test_get_next_submission_success(self):
data = {'location': self.location}
response = self.peer_module.get_next_submission(data)
content = response
self.assertTrue(content['success'])
self.assertIsNotNone(content['submission_id'])
self.assertIsNotNone(content['prompt'])
self.assertIsNotNone(content['submission_key'])
self.assertIsNotNone(content['max_score'])
def test_get_next_submission_missing_location(self):
data = {}
d = self.peer_module.get_next_submission(data)
self.assertFalse(d['success'])
self.assertEqual(d['error'], "Missing required keys: location")
def test_save_grade_success(self):
data = {
'rubric_scores[]': [0, 0],
'location': self.location,
'submission_id': 1,
'submission_key': 'fake key',
'score': 2,
'feedback': 'feedback',
'submission_flagged': 'false',
'answer_unknown': 'false',
'rubric_scores_complete' : 'true'
}
qdict = MagicMock()
def fake_get_item(key):
return data[key]
qdict.__getitem__.side_effect = fake_get_item
qdict.getlist = fake_get_item
qdict.keys = data.keys
response = self.peer_module.save_grade(qdict)
self.assertTrue(response['success'])
def test_save_grade_missing_keys(self):
data = {}
d = self.peer_module.save_grade(data)
self.assertFalse(d['success'])
self.assertTrue(d['error'].find('Missing required keys:') > -1)
def test_is_calibrated_success(self):
data = {'location': self.location}
response = self.peer_module.is_student_calibrated(data)
self.assertTrue(response['success'])
self.assertTrue('calibrated' in response)
def test_is_calibrated_failure(self):
data = {}
response = self.peer_module.is_student_calibrated(data)
self.assertFalse(response['success'])
self.assertFalse('calibrated' in response)
def test_show_calibration_essay_success(self):
data = {'location': self.location}
response = self.peer_module.show_calibration_essay(data)
self.assertTrue(response['success'])
self.assertIsNotNone(response['submission_id'])
self.assertIsNotNone(response['prompt'])
self.assertIsNotNone(response['submission_key'])
self.assertIsNotNone(response['max_score'])
def test_show_calibration_essay_missing_key(self):
data = {}
response = self.peer_module.show_calibration_essay(data)
self.assertFalse(response['success'])
self.assertEqual(response['error'], "Missing required keys: location")
def test_save_calibration_essay_success(self):
data = {
'rubric_scores[]': [0, 0],
'location': self.location,
'submission_id': 1,
'submission_key': 'fake key',
'score': 2,
'feedback': 'feedback',
'submission_flagged': 'false'
}
qdict = MagicMock()
def fake_get_item(key):
return data[key]
qdict.__getitem__.side_effect = fake_get_item
qdict.getlist = fake_get_item
qdict.keys = data.keys
response = self.peer_module.save_calibration_essay(qdict)
self.assertTrue(response['success'])
self.assertTrue('actual_score' in response)
def test_save_calibration_essay_missing_keys(self):
data = {}
response = self.peer_module.save_calibration_essay(data)
self.assertFalse(response['success'])
self.assertTrue(response['error'].find('Missing required keys:') > -1)
self.assertFalse('actual_score' in response)
@override_settings(MODULESTORE=TEST_DATA_MIXED_MODULESTORE)
class TestPanel(ModuleStoreTestCase):
"""
Run tests on the open ended panel
"""
def setUp(self):
# Toy courses should be loaded
self.course_name = 'edX/open_ended/2012_Fall'
self.course = modulestore().get_course(self.course_name)
self.user = factories.UserFactory()
def test_open_ended_panel(self):
"""
Test to see if the peer grading module in the demo course is found
@return:
"""
found_module, peer_grading_module = views.find_peer_grading_module(self.course)
self.assertTrue(found_module)
@patch(
'open_ended_grading.utils.create_controller_query_service',
Mock(
return_value=controller_query_service.MockControllerQueryService(
settings.OPEN_ENDED_GRADING_INTERFACE,
utils.SYSTEM
)
)
)
def test_problem_list(self):
"""
Ensure that the problem list from the grading controller server can be rendered properly locally
@return:
"""
request = Mock(user=self.user)
response = views.student_problem_list(request, self.course.id)
self.assertRegexpMatches(response.content, "Here is a list of open ended problems for this course.")
@override_settings(MODULESTORE=TEST_DATA_MIXED_MODULESTORE)
class TestPeerGradingFound(ModuleStoreTestCase):
"""
Test to see if peer grading modules can be found properly.
"""
def setUp(self):
self.course_name = 'edX/open_ended_nopath/2012_Fall'
self.course = modulestore().get_course(self.course_name)
def test_peer_grading_nopath(self):
"""
The open_ended_nopath course contains a peer grading module with no path to it.
Ensure that the exception is caught.
"""
found, url = views.find_peer_grading_module(self.course)
self.assertEqual(found, False)
@override_settings(MODULESTORE=TEST_DATA_MIXED_MODULESTORE)
class TestStudentProblemList(ModuleStoreTestCase):
"""
Test if the student problem list correctly fetches and parses problems.
"""
def setUp(self):
# Load an open ended course with several problems.
self.course_name = 'edX/open_ended/2012_Fall'
self.course = modulestore().get_course(self.course_name)
self.user = factories.UserFactory()
# Enroll our user in our course and make them an instructor.
make_instructor(self.course, self.user.email)
@patch(
'open_ended_grading.utils.create_controller_query_service',
Mock(return_value=StudentProblemListMockQuery())
)
def test_get_problem_list(self):
"""
Test to see if the StudentProblemList class can get and parse a problem list from ORA.
Mock the get_grading_status_list function using StudentProblemListMockQuery.
"""
# Initialize a StudentProblemList object.
student_problem_list = utils.StudentProblemList(self.course.id, unique_id_for_user(self.user))
# Get the initial problem list from ORA.
success = student_problem_list.fetch_from_grading_service()
# Should be successful, and we should have three problems. See mock class for details.
self.assertTrue(success)
self.assertEqual(len(student_problem_list.problem_list), 3)
# See if the problem locations are valid.
valid_problems = student_problem_list.add_problem_data(reverse('courses'))
# One location is invalid, so we should now have two.
self.assertEqual(len(valid_problems), 2)
# Ensure that human names are being set properly.
self.assertEqual(valid_problems[0]['grader_type_display_name'], "Instructor Assessment")
| agpl-3.0 |
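Several tests above (`test_save_grade_success`, `test_save_calibration_essay_success`) stub a Django QueryDict with a `MagicMock` whose `__getitem__.side_effect` delegates to a plain dict. The pattern in isolation, using the stdlib `unittest.mock` (the file itself imports the standalone `mock` package, but the API is the same):

```python
from unittest.mock import MagicMock

data = {'location': 'TestLocation', 'score': 2}

# MagicMock preconfigures magic methods, so __getitem__ can be given a
# side_effect; plain attributes like getlist/keys are just reassigned.
qdict = MagicMock()
qdict.__getitem__.side_effect = data.__getitem__
qdict.getlist = data.__getitem__
qdict.keys = data.keys

print(qdict['location'])        # TestLocation
print(qdict.getlist('score'))   # 2
print(sorted(qdict.keys()))     # ['location', 'score']
```

This lets code written against the QueryDict interface (`[]`, `getlist`, `keys`) run against ordinary test data without constructing a real request.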
mrquim/mrquimrepo | plugin.video.salts/js2py/host/jsfunctions.py | 26 | 2310 | from ..base import *
RADIX_CHARS = {'1': 1, '0': 0, '3': 3, '2': 2, '5': 5, '4': 4, '7': 7, '6': 6, '9': 9, '8': 8, 'a': 10, 'c': 12,
'b': 11, 'e': 14, 'd': 13, 'g': 16, 'f': 15, 'i': 18, 'h': 17, 'k': 20, 'j': 19, 'm': 22, 'l': 21,
'o': 24, 'n': 23, 'q': 26, 'p': 25, 's': 28, 'r': 27, 'u': 30, 't': 29, 'w': 32, 'v': 31, 'y': 34,
'x': 33, 'z': 35, 'A': 10, 'C': 12, 'B': 11, 'E': 14, 'D': 13, 'G': 16, 'F': 15, 'I': 18, 'H': 17,
'K': 20, 'J': 19, 'M': 22, 'L': 21, 'O': 24, 'N': 23, 'Q': 26, 'P': 25, 'S': 28, 'R': 27, 'U': 30,
'T': 29, 'W': 32, 'V': 31, 'Y': 34, 'X': 33, 'Z': 35}
@Js
def parseInt (string , radix):
string = string.to_string().value.lstrip()
sign = 1
if string and string[0] in ('+', '-'):
if string[0]=='-':
sign = -1
string = string[1:]
r = radix.to_int32()
strip_prefix = True
if r:
if r<2 or r>36:
return NaN
if r!=16:
strip_prefix = False
else:
r = 10
if strip_prefix:
if len(string)>=2 and string[:2] in ('0x', '0X'):
string = string[2:]
r = 16
n = 0
num = 0
while n<len(string):
cand = RADIX_CHARS.get(string[n])
if cand is None or not cand < r:
break
num = cand + num*r
n += 1
if not n:
return NaN
return sign*num
@Js
def parseFloat(string):
string = string.to_string().value.strip()
sign = 1
if string and string[0] in ('+', '-'):
if string[0]=='-':
sign = -1
string = string[1:]
num = None
length = 1
max_len = None
failed = 0
while length<=len(string):
try:
num = float(string[:length])
max_len = length
failed = 0
except:
failed += 1
            if failed>4: # can't be a number anymore
break
length += 1
if num is None:
return NaN
return sign*float(string[:max_len])
@Js
def isNaN(number):
if number.to_number().is_nan():
return true
return false
@Js
def isFinite(number):
num = number.to_number()
if num.is_nan() or num.is_infinity():
return false
return true
#todo URI handling!
| gpl-2.0 |
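The core of `parseInt` above is the digit-accumulation loop over `RADIX_CHARS`: consume characters while each one is a valid digit for the radix, and stop at the first that is not. A plain-Python sketch of just that loop (function name assumed, stripped of the js2py `Js` wrapper and sign/prefix handling):

```python
def digits_value(string, radix):
    """Accumulate digit values the way parseInt does above: walk the
    string, stopping at the first character that is not a valid digit
    in the given radix; return None if no digits were consumed."""
    # Same mapping as RADIX_CHARS, built instead of written out longhand.
    radix_chars = {c: v for v, c in
                   enumerate('0123456789abcdefghijklmnopqrstuvwxyz')}
    num = 0
    consumed = 0
    for ch in string:
        cand = radix_chars.get(ch.lower())
        if cand is None or cand >= radix:
            break
        num = num * radix + cand
        consumed += 1
    return num if consumed else None


print(digits_value('ff', 16))     # 255
print(digits_value('123abc', 10)) # 123
```

Note the `not cand < r` check in the original is the same guard as `cand >= radix` here: a digit like `'a'` (value 10) terminates parsing in base 10 but is valid in base 16.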
pmghalvorsen/gramps_branch | gramps/plugins/textreport/textplugins.gpr.py | 1 | 13748 | # encoding:utf-8
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2009 Benny Malengier
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
MODULE_VERSION="4.2"
#------------------------------------------------------------------------
#
# Ancestor Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'ancestor_report'
plg.name = _("Ahnentafel Report")
plg.description = _("Produces a textual ancestral report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'ancestorreport.py'
plg.ptype = REPORT
plg.authors = ["Donald N. Allingham"]
plg.authors_email = ["don@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'AncestorReport'
plg.optionclass = 'AncestorOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Birthday Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'birthday_report'
plg.name = _("Birthday and Anniversary Report")
plg.description = _("Produces a report of birthdays and anniversaries")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'birthdayreport.py'
plg.ptype = REPORT
plg.authors = ["Douglas S. Blank"]
plg.authors_email = ["dblank@cs.brynmawr.edu"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'BirthdayReport'
plg.optionclass = 'BirthdayOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Custom text BookItem
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'custom_text'
plg.name = _("Custom Text")
plg.description = _("Add custom text to the book report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'custombooktext.py'
plg.ptype = REPORT
plg.authors = ["The Gramps Project"]
plg.authors_email = [""]
plg.category = CATEGORY_TEXT
plg.reportclass = 'CustomText'
plg.optionclass = 'CustomTextOptions'
plg.report_modes = [REPORT_MODE_BKI]
#------------------------------------------------------------------------
#
# Descendant Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'descend_report'
plg.name = _("Descendant Report")
plg.description = _("Produces a list of descendants of the active person")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'descendreport.py'
plg.ptype = REPORT
plg.authors = ["Donald N. Allingham"]
plg.authors_email = ["don@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'DescendantReport'
plg.optionclass = 'DescendantOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Detailed Ancestral Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'det_ancestor_report'
plg.name = _("Detailed Ancestral Report")
plg.description = _("Produces a detailed ancestral report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'detancestralreport.py'
plg.ptype = REPORT
plg.authors = ["Bruce DeGrasse"]
plg.authors_email = ["bdegrasse1@attbi.com"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'DetAncestorReport'
plg.optionclass = 'DetAncestorOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Detailed Descendant Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'det_descendant_report'
plg.name = _("Detailed Descendant Report")
plg.description = _("Produces a detailed descendant report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'detdescendantreport.py'
plg.ptype = REPORT
plg.authors = ["Bruce DeGrasse"]
plg.authors_email = ["bdegrasse1@attbi.com"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'DetDescendantReport'
plg.optionclass = 'DetDescendantOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# End of Line Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'endofline_report'
plg.name = _("End of Line Report")
plg.description = _("Produces a textual end of line report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'endoflinereport.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'EndOfLineReport'
plg.optionclass = 'EndOfLineOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Family Group Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'family_group'
plg.name = _("Family Group Report")
plg.description = _("Produces a family group report showing information "
"on a set of parents and their children.")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'familygroup.py'
plg.ptype = REPORT
plg.authors = ["Donald N. Allingham"]
plg.authors_email = ["don@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'FamilyGroup'
plg.optionclass = 'FamilyGroupOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Complete Individual Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'indiv_complete'
plg.name = _("Complete Individual Report")
plg.description = _("Produces a complete report on the selected people")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'indivcomplete.py'
plg.ptype = REPORT
plg.authors = ["Donald N. Allingham"]
plg.authors_email = ["don@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'IndivCompleteReport'
plg.optionclass = 'IndivCompleteOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Kinship Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'kinship_report'
plg.name = _("Kinship Report")
plg.description = _("Produces a textual report of kinship for a given person")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'kinshipreport.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'KinshipReport'
plg.optionclass = 'KinshipOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Tag Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'tag_report'
plg.name = _("Tag Report")
plg.description = _("Produces a list of people with a specified tag")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'tagreport.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'TagReport'
plg.optionclass = 'TagOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
plg.require_active = False
#------------------------------------------------------------------------
#
# Number of Ancestors Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'number_of_ancestors'
plg.name = _("Number of Ancestors Report")
plg.description = _("Counts number of ancestors of selected person")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'numberofancestorsreport.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'NumberOfAncestorsReport'
plg.optionclass = 'NumberOfAncestorsOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
#------------------------------------------------------------------------
#
# Place Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'place_report'
plg.name = _("Place Report")
plg.description = _("Produces a textual place report")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'placereport.py'
plg.ptype = REPORT
plg.authors = ["Gary Burton"]
plg.authors_email = ["gary.burton@zen.co.uk"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'PlaceReport'
plg.optionclass = 'PlaceOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
plg.require_active = False
#------------------------------------------------------------------------
#
# Book Title Page
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'simple_book_title'
plg.name = _("Title Page")
plg.description = _("Produces a title page for book reports.")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'simplebooktitle.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'SimpleBookTitle'
plg.optionclass = 'SimpleBookTitleOptions'
plg.report_modes = [REPORT_MODE_BKI]
#------------------------------------------------------------------------
#
# Database Summary Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'summary'
plg.name = _("Database Summary Report")
plg.description = _("Provides a summary of the current database")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'summary.py'
plg.ptype = REPORT
plg.authors = ["Brian G. Matherly"]
plg.authors_email = ["brian@gramps-project.org"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'SummaryReport'
plg.optionclass = 'SummaryOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_BKI, REPORT_MODE_CLI]
plg.require_active = False
#------------------------------------------------------------------------
#
# Table Of Contents
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'table_of_contents'
plg.name = _("Table Of Contents")
plg.description = _("Produces a table of contents for book reports.")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'tableofcontents.py'
plg.ptype = REPORT
plg.authors = ["Nick Hall"]
plg.authors_email = ["nick__hall@hotmail.com"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'TableOfContents'
plg.optionclass = 'TableOfContentsOptions'
plg.report_modes = [REPORT_MODE_BKI]
#------------------------------------------------------------------------
#
# Alphabetical Index
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'alphabetical_index'
plg.name = _("Alphabetical Index")
plg.description = _("Produces an alphabetical index for book reports.")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'alphabeticalindex.py'
plg.ptype = REPORT
plg.authors = ["Nick Hall"]
plg.authors_email = ["nick__hall@hotmail.com"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'AlphabeticalIndex'
plg.optionclass = 'AlphabeticalIndexOptions'
plg.report_modes = [REPORT_MODE_BKI]
#------------------------------------------------------------------------
#
# Records Report
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'records'
plg.name = _("Records Report")
plg.description = _("Shows some interesting records about people and families")
plg.version = '1.1'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'recordsreport.py'
plg.ptype = REPORT
plg.authors = ["Reinhard Müller"]
plg.authors_email = ["reinhard.mueller@bytewise.at"]
plg.category = CATEGORY_TEXT
plg.reportclass = 'RecordsReport'
plg.optionclass = 'RecordsReportOptions'
plg.report_modes = [REPORT_MODE_GUI, REPORT_MODE_CLI, REPORT_MODE_BKI]
| gpl-2.0 |
tangfeixiong/nova | nova/tests/functional/db/test_cell_mapping.py | 23 | 3083 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from nova import context
from nova import exception
from nova.objects import cell_mapping
from nova import test
from nova.tests import fixtures
class CellMappingTestCase(test.NoDBTestCase):
def setUp(self):
super(CellMappingTestCase, self).setUp()
self.useFixture(fixtures.Database(database='api'))
self.context = context.RequestContext('fake-user', 'fake-project')
self.mapping_obj = cell_mapping.CellMapping()
self.uuid = uuidutils.generate_uuid()
sample_mapping = {'uuid': '',
'name': 'fake-cell',
'transport_url': 'rabbit:///',
'database_connection': 'mysql:///'}
def _create_mapping(self, **kwargs):
args = self.sample_mapping.copy()
if 'uuid' not in kwargs:
args['uuid'] = self.uuid
args.update(kwargs)
return self.mapping_obj._create_in_db(self.context, args)
def test_get_by_uuid(self):
mapping = self._create_mapping()
db_mapping = self.mapping_obj._get_by_uuid_from_db(self.context,
mapping['uuid'])
for key in self.mapping_obj.fields.keys():
self.assertEqual(db_mapping[key], mapping[key])
def test_get_by_uuid_not_found(self):
self.assertRaises(exception.CellMappingNotFound,
self.mapping_obj._get_by_uuid_from_db, self.context, self.uuid)
def test_save_in_db(self):
mapping = self._create_mapping()
self.mapping_obj._save_in_db(self.context, mapping['uuid'],
{'name': 'meow'})
db_mapping = self.mapping_obj._get_by_uuid_from_db(self.context,
mapping['uuid'])
self.assertNotEqual(db_mapping['name'], mapping['name'])
for key in [key for key in self.mapping_obj.fields.keys()
if key not in ['name', 'updated_at']]:
self.assertEqual(db_mapping[key], mapping[key])
def test_destroy_in_db(self):
mapping = self._create_mapping()
self.mapping_obj._get_by_uuid_from_db(self.context, mapping['uuid'])
self.mapping_obj._destroy_in_db(self.context, mapping['uuid'])
self.assertRaises(exception.CellMappingNotFound,
self.mapping_obj._get_by_uuid_from_db, self.context,
mapping['uuid'])
def test_destroy_in_db_not_found(self):
self.assertRaises(exception.CellMappingNotFound,
self.mapping_obj._destroy_in_db, self.context, self.uuid)
| apache-2.0 |
richardcs/ansible | lib/ansible/modules/network/eos/eos_facts.py | 42 | 10981 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: eos_facts
version_added: "2.2"
author: "Peter Sprygada (@privateip)"
short_description: Collect facts from remote devices running Arista EOS
description:
- Collects a base set of device facts from a remote device that
is running eos. This module prepends all of the
base network fact keys with C(ansible_net_<fact>). The facts
module will always collect a base set of facts from the device
and can enable or disable collection of additional facts.
extends_documentation_fragment: eos
notes:
- Tested against EOS 4.15
options:
gather_subset:
description:
- When supplied, this argument will restrict the facts collected
to a given subset. Possible values for this argument include
all, hardware, config, and interfaces. Can specify a list of
values to include a larger subset. Values can also be used
        with an initial C(!) to specify that a specific subset should
not be collected.
required: false
default: '!config'
"""
EXAMPLES = """
# Collect all facts from the device
- eos_facts:
gather_subset: all
# Collect only the config and default facts
- eos_facts:
gather_subset:
- config
# Do not collect hardware facts
- eos_facts:
gather_subset:
- "!hardware"
"""
RETURN = """
ansible_net_gather_subset:
description: The list of fact subsets collected from the device
returned: always
type: list
# default
ansible_net_model:
description: The model name returned from the device
returned: always
type: str
ansible_net_serialnum:
description: The serial number of the remote device
returned: always
type: str
ansible_net_version:
description: The operating system version running on the remote device
returned: always
type: str
ansible_net_hostname:
description: The configured hostname of the device
returned: always
type: str
ansible_net_image:
description: The image file the device is running
returned: always
type: str
ansible_net_fqdn:
description: The fully qualified domain name of the device
returned: always
type: str
# hardware
ansible_net_filesystems:
description: All file system names available on the device
returned: when hardware is configured
type: list
ansible_net_memfree_mb:
description: The available free memory on the remote device in Mb
returned: when hardware is configured
type: int
ansible_net_memtotal_mb:
description: The total memory on the remote device in Mb
returned: when hardware is configured
type: int
# config
ansible_net_config:
description: The current active config from the device
returned: when config is configured
type: str
# interfaces
ansible_net_all_ipv4_addresses:
description: All IPv4 addresses configured on the device
returned: when interfaces is configured
type: list
ansible_net_all_ipv6_addresses:
description: All IPv6 addresses configured on the device
returned: when interfaces is configured
type: list
ansible_net_interfaces:
description: A hash of all interfaces running on the system
returned: when interfaces is configured
type: dict
ansible_net_neighbors:
description: The list of LLDP neighbors from the remote device
returned: when interfaces is configured
type: dict
"""
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.eos.eos import run_commands
from ansible.module_utils.network.eos.eos import eos_argument_spec, check_args
class FactsBase(object):
COMMANDS = frozenset()
def __init__(self, module):
self.module = module
self.facts = dict()
self.responses = None
def populate(self):
self.responses = run_commands(self.module, list(self.COMMANDS), check_rc=False)
class Default(FactsBase):
SYSTEM_MAP = {
'version': 'version',
'serialNumber': 'serialnum',
'modelName': 'model'
}
COMMANDS = [
'show version | json',
'show hostname | json',
'bash timeout 5 cat /mnt/flash/boot-config'
]
def populate(self):
super(Default, self).populate()
data = self.responses[0]
for key, value in iteritems(self.SYSTEM_MAP):
if key in data:
self.facts[value] = data[key]
self.facts.update(self.responses[1])
self.facts.update(self.parse_image())
def parse_image(self):
data = self.responses[2]
if isinstance(data, dict):
data = data['messages'][0]
match = re.search(r'SWI=(.+)$', data, re.M)
if match:
value = match.group(1)
else:
value = None
return dict(image=value)
class Hardware(FactsBase):
COMMANDS = [
'dir all-filesystems',
'show version | json'
]
def populate(self):
super(Hardware, self).populate()
self.facts.update(self.populate_filesystems())
self.facts.update(self.populate_memory())
def populate_filesystems(self):
data = self.responses[0]
if isinstance(data, dict):
data = data['messages'][0]
fs = re.findall(r'^Directory of (.+)/', data, re.M)
return dict(filesystems=fs)
def populate_memory(self):
values = self.responses[1]
return dict(
memfree_mb=int(values['memFree']) / 1024,
memtotal_mb=int(values['memTotal']) / 1024
)
class Config(FactsBase):
COMMANDS = ['show running-config']
def populate(self):
super(Config, self).populate()
self.facts['config'] = self.responses[0]
class Interfaces(FactsBase):
INTERFACE_MAP = {
'description': 'description',
'physicalAddress': 'macaddress',
'mtu': 'mtu',
'bandwidth': 'bandwidth',
'duplex': 'duplex',
'lineProtocolStatus': 'lineprotocol',
'interfaceStatus': 'operstatus',
'forwardingModel': 'type'
}
COMMANDS = [
'show interfaces | json',
'show lldp neighbors | json'
]
def populate(self):
super(Interfaces, self).populate()
self.facts['all_ipv4_addresses'] = list()
self.facts['all_ipv6_addresses'] = list()
data = self.responses[0]
self.facts['interfaces'] = self.populate_interfaces(data)
data = self.responses[1]
if data:
self.facts['neighbors'] = self.populate_neighbors(data['lldpNeighbors'])
def populate_interfaces(self, data):
facts = dict()
for key, value in iteritems(data['interfaces']):
intf = dict()
for remote, local in iteritems(self.INTERFACE_MAP):
if remote in value:
intf[local] = value[remote]
if 'interfaceAddress' in value:
intf['ipv4'] = dict()
for entry in value['interfaceAddress']:
intf['ipv4']['address'] = entry['primaryIp']['address']
intf['ipv4']['masklen'] = entry['primaryIp']['maskLen']
self.add_ip_address(entry['primaryIp']['address'], 'ipv4')
if 'interfaceAddressIp6' in value:
intf['ipv6'] = dict()
for entry in value['interfaceAddressIp6']['globalUnicastIp6s']:
intf['ipv6']['address'] = entry['address']
intf['ipv6']['subnet'] = entry['subnet']
self.add_ip_address(entry['address'], 'ipv6')
facts[key] = intf
return facts
def add_ip_address(self, address, family):
if family == 'ipv4':
self.facts['all_ipv4_addresses'].append(address)
else:
self.facts['all_ipv6_addresses'].append(address)
def populate_neighbors(self, neighbors):
facts = dict()
for value in neighbors:
port = value['port']
if port not in facts:
facts[port] = list()
lldp = dict()
lldp['host'] = value['neighborDevice']
lldp['port'] = value['neighborPort']
facts[port].append(lldp)
return facts
FACT_SUBSETS = dict(
default=Default,
hardware=Hardware,
interfaces=Interfaces,
config=Config
)
VALID_SUBSETS = frozenset(FACT_SUBSETS.keys())
def main():
"""main entry point for module execution
"""
argument_spec = dict(
gather_subset=dict(default=['!config'], type='list')
)
argument_spec.update(eos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
gather_subset = module.params['gather_subset']
runable_subsets = set()
exclude_subsets = set()
for subset in gather_subset:
if subset == 'all':
runable_subsets.update(VALID_SUBSETS)
continue
if subset.startswith('!'):
subset = subset[1:]
if subset == 'all':
exclude_subsets.update(VALID_SUBSETS)
continue
exclude = True
else:
exclude = False
if subset not in VALID_SUBSETS:
module.fail_json(msg='Subset must be one of [%s], got %s' %
(', '.join(VALID_SUBSETS), subset))
if exclude:
exclude_subsets.add(subset)
else:
runable_subsets.add(subset)
if not runable_subsets:
runable_subsets.update(VALID_SUBSETS)
runable_subsets.difference_update(exclude_subsets)
runable_subsets.add('default')
facts = dict()
facts['gather_subset'] = list(runable_subsets)
instances = list()
for key in runable_subsets:
instances.append(FACT_SUBSETS[key](module))
for inst in instances:
inst.populate()
facts.update(inst.facts)
ansible_facts = dict()
for key, value in iteritems(facts):
key = 'ansible_net_%s' % key
ansible_facts[key] = value
module.exit_json(ansible_facts=ansible_facts, warnings=warnings)
if __name__ == '__main__':
main()
| gpl-3.0 |
AlexRobson/scikit-learn | sklearn/cluster/tests/test_k_means.py | 132 | 25860 | """Testing for K-means"""
import sys
import numpy as np
from scipy import sparse as sp
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import if_not_mac_os
from sklearn.utils.validation import DataConversionWarning
from sklearn.utils.extmath import row_norms
from sklearn.metrics.cluster import v_measure_score
from sklearn.cluster import KMeans, k_means
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster.k_means_ import _labels_inertia
from sklearn.cluster.k_means_ import _mini_batch_step
from sklearn.datasets.samples_generator import make_blobs
from sklearn.externals.six.moves import cStringIO as StringIO
# non centered, sparse centers to check the k-means behaviour on sparse,
# non-centered data
centers = np.array([
[0.0, 5.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 4.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 5.0, 1.0],
])
n_samples = 100
n_clusters, n_features = centers.shape
X, true_labels = make_blobs(n_samples=n_samples, centers=centers,
cluster_std=1., random_state=42)
X_csr = sp.csr_matrix(X)
def test_kmeans_dtype():
rnd = np.random.RandomState(0)
X = rnd.normal(size=(40, 2))
X = (X * 10).astype(np.uint8)
km = KMeans(n_init=1).fit(X)
pred_x = assert_warns(DataConversionWarning, km.predict, X)
assert_array_equal(km.labels_, pred_x)
def test_labels_assignment_and_inertia():
# pure numpy implementation as easily auditable reference gold
# implementation
rng = np.random.RandomState(42)
noisy_centers = centers + rng.normal(size=centers.shape)
labels_gold = - np.ones(n_samples, dtype=np.int)
mindist = np.empty(n_samples)
mindist.fill(np.infty)
for center_id in range(n_clusters):
dist = np.sum((X - noisy_centers[center_id]) ** 2, axis=1)
labels_gold[dist < mindist] = center_id
mindist = np.minimum(dist, mindist)
inertia_gold = mindist.sum()
assert_true((mindist >= 0.0).all())
assert_true((labels_gold != -1).all())
# perform label assignment using the dense array input
x_squared_norms = (X ** 2).sum(axis=1)
labels_array, inertia_array = _labels_inertia(
X, x_squared_norms, noisy_centers)
assert_array_almost_equal(inertia_array, inertia_gold)
assert_array_equal(labels_array, labels_gold)
# perform label assignment using the sparse CSR input
x_squared_norms_from_csr = row_norms(X_csr, squared=True)
labels_csr, inertia_csr = _labels_inertia(
X_csr, x_squared_norms_from_csr, noisy_centers)
assert_array_almost_equal(inertia_csr, inertia_gold)
assert_array_equal(labels_csr, labels_gold)
def test_minibatch_update_consistency():
# Check that dense and sparse minibatch update give the same results
rng = np.random.RandomState(42)
old_centers = centers + rng.normal(size=centers.shape)
new_centers = old_centers.copy()
new_centers_csr = old_centers.copy()
counts = np.zeros(new_centers.shape[0], dtype=np.int32)
counts_csr = np.zeros(new_centers.shape[0], dtype=np.int32)
x_squared_norms = (X ** 2).sum(axis=1)
x_squared_norms_csr = row_norms(X_csr, squared=True)
buffer = np.zeros(centers.shape[1], dtype=np.double)
buffer_csr = np.zeros(centers.shape[1], dtype=np.double)
# extract a small minibatch
X_mb = X[:10]
X_mb_csr = X_csr[:10]
x_mb_squared_norms = x_squared_norms[:10]
x_mb_squared_norms_csr = x_squared_norms_csr[:10]
# step 1: compute the dense minibatch update
old_inertia, incremental_diff = _mini_batch_step(
X_mb, x_mb_squared_norms, new_centers, counts,
buffer, 1, None, random_reassign=False)
assert_greater(old_inertia, 0.0)
# compute the new inertia on the same batch to check that it decreased
labels, new_inertia = _labels_inertia(
X_mb, x_mb_squared_norms, new_centers)
assert_greater(new_inertia, 0.0)
assert_less(new_inertia, old_inertia)
# check that the incremental difference computation is matching the
# final observed value
effective_diff = np.sum((new_centers - old_centers) ** 2)
assert_almost_equal(incremental_diff, effective_diff)
# step 2: compute the sparse minibatch update
old_inertia_csr, incremental_diff_csr = _mini_batch_step(
X_mb_csr, x_mb_squared_norms_csr, new_centers_csr, counts_csr,
buffer_csr, 1, None, random_reassign=False)
assert_greater(old_inertia_csr, 0.0)
# compute the new inertia on the same batch to check that it decreased
labels_csr, new_inertia_csr = _labels_inertia(
X_mb_csr, x_mb_squared_norms_csr, new_centers_csr)
assert_greater(new_inertia_csr, 0.0)
assert_less(new_inertia_csr, old_inertia_csr)
# check that the incremental difference computation is matching the
# final observed value
effective_diff = np.sum((new_centers_csr - old_centers) ** 2)
assert_almost_equal(incremental_diff_csr, effective_diff)
# step 3: check that sparse and dense updates lead to the same results
assert_array_equal(labels, labels_csr)
assert_array_almost_equal(new_centers, new_centers_csr)
assert_almost_equal(incremental_diff, incremental_diff_csr)
assert_almost_equal(old_inertia, old_inertia_csr)
assert_almost_equal(new_inertia, new_inertia_csr)
def _check_fitted_model(km):
# check that the number of clusters centers and distinct labels match
# the expectation
centers = km.cluster_centers_
assert_equal(centers.shape, (n_clusters, n_features))
labels = km.labels_
assert_equal(np.unique(labels).shape[0], n_clusters)
# check that the labels assignment are perfect (up to a permutation)
assert_equal(v_measure_score(true_labels, labels), 1.0)
assert_greater(km.inertia_, 0.0)
# check error on dataset being too small
assert_raises(ValueError, km.fit, [[0., 1.]])
def test_k_means_plus_plus_init():
km = KMeans(init="k-means++", n_clusters=n_clusters,
random_state=42).fit(X)
_check_fitted_model(km)
def test_k_means_new_centers():
# Explore the part of the code where a new center is reassigned
X = np.array([[0, 0, 1, 1],
[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 1, 0, 0]])
labels = [0, 1, 2, 1, 1, 2]
bad_centers = np.array([[+0, 1, 0, 0],
[.2, 0, .2, .2],
[+0, 0, 0, 0]])
km = KMeans(n_clusters=3, init=bad_centers, n_init=1, max_iter=10,
random_state=1)
for this_X in (X, sp.coo_matrix(X)):
km.fit(this_X)
this_labels = km.labels_
# Reorder the labels so that the first instance is in cluster 0,
# the second in cluster 1, ...
this_labels = np.unique(this_labels, return_index=True)[1][this_labels]
np.testing.assert_array_equal(this_labels, labels)
def _has_blas_lib(libname):
from numpy.distutils.system_info import get_info
return libname in get_info('blas_opt').get('libraries', [])
@if_not_mac_os()
def test_k_means_plus_plus_init_2_jobs():
if _has_blas_lib('openblas'):
raise SkipTest('Multi-process bug with OpenBLAS (see issue #636)')
km = KMeans(init="k-means++", n_clusters=n_clusters, n_jobs=2,
random_state=42).fit(X)
_check_fitted_model(km)
def test_k_means_precompute_distances_flag():
# check that a warning is raised if the precompute_distances flag is not
# supported
km = KMeans(precompute_distances="wrong")
assert_raises(ValueError, km.fit, X)
def test_k_means_plus_plus_init_sparse():
km = KMeans(init="k-means++", n_clusters=n_clusters, random_state=42)
km.fit(X_csr)
_check_fitted_model(km)
def test_k_means_random_init():
km = KMeans(init="random", n_clusters=n_clusters, random_state=42)
km.fit(X)
_check_fitted_model(km)
def test_k_means_random_init_sparse():
km = KMeans(init="random", n_clusters=n_clusters, random_state=42)
km.fit(X_csr)
_check_fitted_model(km)
def test_k_means_plus_plus_init_not_precomputed():
km = KMeans(init="k-means++", n_clusters=n_clusters, random_state=42,
precompute_distances=False).fit(X)
_check_fitted_model(km)
def test_k_means_random_init_not_precomputed():
km = KMeans(init="random", n_clusters=n_clusters, random_state=42,
precompute_distances=False).fit(X)
_check_fitted_model(km)
def test_k_means_perfect_init():
km = KMeans(init=centers.copy(), n_clusters=n_clusters, random_state=42,
n_init=1)
km.fit(X)
_check_fitted_model(km)
def test_k_means_n_init():
rnd = np.random.RandomState(0)
X = rnd.normal(size=(40, 2))
# two regression tests on bad n_init argument
# previous bug: n_init <= 0 threw non-informative TypeError (#3858)
assert_raises_regexp(ValueError, "n_init", KMeans(n_init=0).fit, X)
assert_raises_regexp(ValueError, "n_init", KMeans(n_init=-1).fit, X)
def test_k_means_fortran_aligned_data():
    # Check that KMeans works well even if X is Fortran-aligned data.
X = np.asfortranarray([[0, 0], [0, 1], [0, 1]])
centers = np.array([[0, 0], [0, 1]])
labels = np.array([0, 1, 1])
km = KMeans(n_init=1, init=centers, precompute_distances=False,
random_state=42)
km.fit(X)
assert_array_equal(km.cluster_centers_, centers)
assert_array_equal(km.labels_, labels)
def test_mb_k_means_plus_plus_init_dense_array():
mb_k_means = MiniBatchKMeans(init="k-means++", n_clusters=n_clusters,
random_state=42)
mb_k_means.fit(X)
_check_fitted_model(mb_k_means)
def test_mb_kmeans_verbose():
mb_k_means = MiniBatchKMeans(init="k-means++", n_clusters=n_clusters,
random_state=42, verbose=1)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
mb_k_means.fit(X)
finally:
sys.stdout = old_stdout
def test_mb_k_means_plus_plus_init_sparse_matrix():
mb_k_means = MiniBatchKMeans(init="k-means++", n_clusters=n_clusters,
random_state=42)
mb_k_means.fit(X_csr)
_check_fitted_model(mb_k_means)
def test_minibatch_init_with_large_k():
mb_k_means = MiniBatchKMeans(init='k-means++', init_size=10, n_clusters=20)
    # Check that a warning is raised, as the number of clusters is larger
    # than the init_size
assert_warns(RuntimeWarning, mb_k_means.fit, X)
def test_minibatch_k_means_random_init_dense_array():
# increase n_init to make random init stable enough
mb_k_means = MiniBatchKMeans(init="random", n_clusters=n_clusters,
random_state=42, n_init=10).fit(X)
_check_fitted_model(mb_k_means)
def test_minibatch_k_means_random_init_sparse_csr():
# increase n_init to make random init stable enough
mb_k_means = MiniBatchKMeans(init="random", n_clusters=n_clusters,
random_state=42, n_init=10).fit(X_csr)
_check_fitted_model(mb_k_means)
def test_minibatch_k_means_perfect_init_dense_array():
mb_k_means = MiniBatchKMeans(init=centers.copy(), n_clusters=n_clusters,
random_state=42, n_init=1).fit(X)
_check_fitted_model(mb_k_means)
def test_minibatch_k_means_init_multiple_runs_with_explicit_centers():
mb_k_means = MiniBatchKMeans(init=centers.copy(), n_clusters=n_clusters,
random_state=42, n_init=10)
assert_warns(RuntimeWarning, mb_k_means.fit, X)
def test_minibatch_k_means_perfect_init_sparse_csr():
mb_k_means = MiniBatchKMeans(init=centers.copy(), n_clusters=n_clusters,
random_state=42, n_init=1).fit(X_csr)
_check_fitted_model(mb_k_means)
def test_minibatch_sensible_reassign_fit():
# check if identical initial clusters are reassigned
# also a regression test for when there are more desired reassignments than
# samples.
zeroed_X, true_labels = make_blobs(n_samples=100, centers=5,
cluster_std=1., random_state=42)
zeroed_X[::2, :] = 0
mb_k_means = MiniBatchKMeans(n_clusters=20, batch_size=10, random_state=42,
init="random")
mb_k_means.fit(zeroed_X)
# there should not be too many exact zero cluster centers
assert_greater(mb_k_means.cluster_centers_.any(axis=1).sum(), 10)
# do the same with batch-size > X.shape[0] (regression test)
mb_k_means = MiniBatchKMeans(n_clusters=20, batch_size=201,
random_state=42, init="random")
mb_k_means.fit(zeroed_X)
# there should not be too many exact zero cluster centers
assert_greater(mb_k_means.cluster_centers_.any(axis=1).sum(), 10)
def test_minibatch_sensible_reassign_partial_fit():
zeroed_X, true_labels = make_blobs(n_samples=n_samples, centers=5,
cluster_std=1., random_state=42)
zeroed_X[::2, :] = 0
mb_k_means = MiniBatchKMeans(n_clusters=20, random_state=42, init="random")
for i in range(100):
mb_k_means.partial_fit(zeroed_X)
# there should not be too many exact zero cluster centers
assert_greater(mb_k_means.cluster_centers_.any(axis=1).sum(), 10)
def test_minibatch_reassign():
    # Give a perfect initialization, but a large reassignment_ratio;
    # as a result all the centers should be reassigned and the model
    # should no longer be good
for this_X in (X, X_csr):
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, batch_size=100,
random_state=42)
mb_k_means.fit(this_X)
score_before = mb_k_means.score(this_X)
try:
old_stdout = sys.stdout
sys.stdout = StringIO()
# Turn on verbosity to smoke test the display code
_mini_batch_step(this_X, (X ** 2).sum(axis=1),
mb_k_means.cluster_centers_,
mb_k_means.counts_,
np.zeros(X.shape[1], np.double),
False, distances=np.zeros(X.shape[0]),
random_reassign=True, random_state=42,
reassignment_ratio=1, verbose=True)
finally:
sys.stdout = old_stdout
assert_greater(score_before, mb_k_means.score(this_X))
# Give a perfect initialization, with a small reassignment_ratio,
# no center should be reassigned
for this_X in (X, X_csr):
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, batch_size=100,
init=centers.copy(),
random_state=42, n_init=1)
mb_k_means.fit(this_X)
clusters_before = mb_k_means.cluster_centers_
# Turn on verbosity to smoke test the display code
_mini_batch_step(this_X, (X ** 2).sum(axis=1),
mb_k_means.cluster_centers_,
mb_k_means.counts_,
np.zeros(X.shape[1], np.double),
False, distances=np.zeros(X.shape[0]),
random_reassign=True, random_state=42,
reassignment_ratio=1e-15)
assert_array_almost_equal(clusters_before, mb_k_means.cluster_centers_)
def test_minibatch_with_many_reassignments():
# Test for the case that the number of clusters to reassign is bigger
# than the batch_size
n_samples = 550
rnd = np.random.RandomState(42)
X = rnd.uniform(size=(n_samples, 10))
# Check that the fit works if n_clusters is bigger than the batch_size.
    # Run the test with 550 clusters and 550 samples, because it turned out
    # that these values ensure that the number of clusters to reassign
    # is always bigger than the batch_size
n_clusters = 550
MiniBatchKMeans(n_clusters=n_clusters,
batch_size=100,
init_size=n_samples,
random_state=42).fit(X)
def test_sparse_mb_k_means_callable_init():
def test_init(X, k, random_state):
return centers
# Small test to check that giving the wrong number of centers
# raises a meaningful error
assert_raises(ValueError,
MiniBatchKMeans(init=test_init, random_state=42).fit, X_csr)
# Now check that the fit actually works
mb_k_means = MiniBatchKMeans(n_clusters=3, init=test_init,
random_state=42).fit(X_csr)
_check_fitted_model(mb_k_means)
def test_mini_batch_k_means_random_init_partial_fit():
km = MiniBatchKMeans(n_clusters=n_clusters, init="random", random_state=42)
# use the partial_fit API for online learning
for X_minibatch in np.array_split(X, 10):
km.partial_fit(X_minibatch)
# compute the labeling on the complete dataset
labels = km.predict(X)
assert_equal(v_measure_score(true_labels, labels), 1.0)
def test_minibatch_default_init_size():
mb_k_means = MiniBatchKMeans(init=centers.copy(), n_clusters=n_clusters,
batch_size=10, random_state=42,
n_init=1).fit(X)
assert_equal(mb_k_means.init_size_, 3 * mb_k_means.batch_size)
_check_fitted_model(mb_k_means)
def test_minibatch_tol():
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, batch_size=10,
random_state=42, tol=.01).fit(X)
_check_fitted_model(mb_k_means)
def test_minibatch_set_init_size():
mb_k_means = MiniBatchKMeans(init=centers.copy(), n_clusters=n_clusters,
init_size=666, random_state=42,
n_init=1).fit(X)
assert_equal(mb_k_means.init_size, 666)
assert_equal(mb_k_means.init_size_, n_samples)
_check_fitted_model(mb_k_means)
def test_k_means_invalid_init():
km = KMeans(init="invalid", n_init=1, n_clusters=n_clusters)
assert_raises(ValueError, km.fit, X)
def test_mini_batch_k_means_invalid_init():
km = MiniBatchKMeans(init="invalid", n_init=1, n_clusters=n_clusters)
assert_raises(ValueError, km.fit, X)
def test_k_means_copyx():
# Check if copy_x=False returns nearly equal X after de-centering.
my_X = X.copy()
km = KMeans(copy_x=False, n_clusters=n_clusters, random_state=42)
km.fit(my_X)
_check_fitted_model(km)
# check if my_X is centered
assert_array_almost_equal(my_X, X)
def test_k_means_non_collapsed():
# Check k_means with a bad initialization does not yield a singleton
# Starting with bad centers that are quickly ignored should not
# result in the centers being repositioned to the center of mass,
# which would collapse them and make the clustering dependent on
# numerical instabilities.
my_X = np.array([[1.1, 1.1], [0.9, 1.1], [1.1, 0.9], [0.9, 1.1]])
array_init = np.array([[1.0, 1.0], [5.0, 5.0], [-5.0, -5.0]])
km = KMeans(init=array_init, n_clusters=3, random_state=42, n_init=1)
km.fit(my_X)
# centers must not have been collapsed
assert_equal(len(np.unique(km.labels_)), 3)
centers = km.cluster_centers_
assert_true(np.linalg.norm(centers[0] - centers[1]) >= 0.1)
assert_true(np.linalg.norm(centers[0] - centers[2]) >= 0.1)
assert_true(np.linalg.norm(centers[1] - centers[2]) >= 0.1)
def test_predict():
km = KMeans(n_clusters=n_clusters, random_state=42)
km.fit(X)
# sanity check: predict centroid labels
pred = km.predict(km.cluster_centers_)
assert_array_equal(pred, np.arange(n_clusters))
# sanity check: re-predict labeling for training set samples
pred = km.predict(X)
assert_array_equal(pred, km.labels_)
# re-predict labels for training set using fit_predict
pred = km.fit_predict(X)
assert_array_equal(pred, km.labels_)
def test_score():
km1 = KMeans(n_clusters=n_clusters, max_iter=1, random_state=42)
s1 = km1.fit(X).score(X)
km2 = KMeans(n_clusters=n_clusters, max_iter=10, random_state=42)
s2 = km2.fit(X).score(X)
assert_greater(s2, s1)
def test_predict_minibatch_dense_input():
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, random_state=40).fit(X)
# sanity check: predict centroid labels
pred = mb_k_means.predict(mb_k_means.cluster_centers_)
assert_array_equal(pred, np.arange(n_clusters))
# sanity check: re-predict labeling for training set samples
pred = mb_k_means.predict(X)
assert_array_equal(mb_k_means.predict(X), mb_k_means.labels_)
def test_predict_minibatch_kmeanspp_init_sparse_input():
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, init='k-means++',
n_init=10).fit(X_csr)
# sanity check: re-predict labeling for training set samples
assert_array_equal(mb_k_means.predict(X_csr), mb_k_means.labels_)
# sanity check: predict centroid labels
pred = mb_k_means.predict(mb_k_means.cluster_centers_)
assert_array_equal(pred, np.arange(n_clusters))
# check that models trained on sparse input also work for dense input
# at predict time
assert_array_equal(mb_k_means.predict(X), mb_k_means.labels_)
def test_predict_minibatch_random_init_sparse_input():
mb_k_means = MiniBatchKMeans(n_clusters=n_clusters, init='random',
n_init=10).fit(X_csr)
# sanity check: re-predict labeling for training set samples
assert_array_equal(mb_k_means.predict(X_csr), mb_k_means.labels_)
# sanity check: predict centroid labels
pred = mb_k_means.predict(mb_k_means.cluster_centers_)
assert_array_equal(pred, np.arange(n_clusters))
# check that models trained on sparse input also work for dense input
# at predict time
assert_array_equal(mb_k_means.predict(X), mb_k_means.labels_)
def test_input_dtypes():
X_list = [[0, 0], [10, 10], [12, 9], [-1, 1], [2, 0], [8, 10]]
X_int = np.array(X_list, dtype=np.int32)
X_int_csr = sp.csr_matrix(X_int)
init_int = X_int[:2]
fitted_models = [
KMeans(n_clusters=2).fit(X_list),
KMeans(n_clusters=2).fit(X_int),
KMeans(n_clusters=2, init=init_int, n_init=1).fit(X_list),
KMeans(n_clusters=2, init=init_int, n_init=1).fit(X_int),
# mini-batch k-means is very unstable on such a small dataset, hence
# we use many inits
MiniBatchKMeans(n_clusters=2, n_init=10, batch_size=2).fit(X_list),
MiniBatchKMeans(n_clusters=2, n_init=10, batch_size=2).fit(X_int),
MiniBatchKMeans(n_clusters=2, n_init=10, batch_size=2).fit(X_int_csr),
MiniBatchKMeans(n_clusters=2, batch_size=2,
init=init_int, n_init=1).fit(X_list),
MiniBatchKMeans(n_clusters=2, batch_size=2,
init=init_int, n_init=1).fit(X_int),
MiniBatchKMeans(n_clusters=2, batch_size=2,
init=init_int, n_init=1).fit(X_int_csr),
]
expected_labels = [0, 1, 1, 0, 0, 1]
scores = np.array([v_measure_score(expected_labels, km.labels_)
for km in fitted_models])
assert_array_equal(scores, np.ones(scores.shape[0]))
def test_transform():
km = KMeans(n_clusters=n_clusters)
km.fit(X)
X_new = km.transform(km.cluster_centers_)
for c in range(n_clusters):
assert_equal(X_new[c, c], 0)
for c2 in range(n_clusters):
if c != c2:
assert_greater(X_new[c, c2], 0)
def test_fit_transform():
X1 = KMeans(n_clusters=3, random_state=51).fit(X).transform(X)
X2 = KMeans(n_clusters=3, random_state=51).fit_transform(X)
assert_array_equal(X1, X2)
def test_n_init():
# Check that increasing the number of inits increases the quality
n_runs = 5
n_init_range = [1, 5, 10]
inertia = np.zeros((len(n_init_range), n_runs))
for i, n_init in enumerate(n_init_range):
for j in range(n_runs):
km = KMeans(n_clusters=n_clusters, init="random", n_init=n_init,
random_state=j).fit(X)
inertia[i, j] = km.inertia_
inertia = inertia.mean(axis=1)
failure_msg = ("Inertia %r should be decreasing"
" when n_init is increasing.") % list(inertia)
for i in range(len(n_init_range) - 1):
assert_true(inertia[i] >= inertia[i + 1], failure_msg)
def test_k_means_function():
# test calling the k_means function directly
# catch output
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
cluster_centers, labels, inertia = k_means(X, n_clusters=n_clusters,
verbose=True)
finally:
sys.stdout = old_stdout
centers = cluster_centers
assert_equal(centers.shape, (n_clusters, n_features))
assert_equal(np.unique(labels).shape[0], n_clusters)
# check that the labels assignment are perfect (up to a permutation)
assert_equal(v_measure_score(true_labels, labels), 1.0)
assert_greater(inertia, 0.0)
# check warning when centers are passed
assert_warns(RuntimeWarning, k_means, X, n_clusters=n_clusters,
init=centers)
# too many clusters desired
assert_raises(ValueError, k_means, X, n_clusters=X.shape[0] + 1)
| bsd-3-clause |
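The tests above exercise `_mini_batch_step` and online updates via `partial_fit`. A minimal pure-Python sketch of a single mini-batch k-means update (an illustration of the idea only, not scikit-learn's `_mini_batch_step`; the data, seed, and starting centers are made up for the example):

```python
import random

def mini_batch_step(batch, centers, counts):
    """One mini-batch k-means update: assign each point to its nearest
    center, then move that center toward the point with a per-center
    learning rate of 1 / count (a running mean)."""
    for x in batch:
        # index of the nearest center by squared Euclidean distance
        j = min(range(len(centers)),
                key=lambda c: sum((xi - ci) ** 2
                                  for xi, ci in zip(x, centers[c])))
        counts[j] += 1
        eta = 1.0 / counts[j]
        centers[j] = [ci + eta * (xi - ci) for xi, ci in zip(x, centers[j])]
    return centers, counts

random.seed(42)
# two well-separated blobs around (0, 0) and (5, 5)
data = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)] +
        [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(20)])
centers = [[1.0, 1.0], [4.0, 4.0]]
counts = [1, 1]
centers, counts = mini_batch_step(data, centers, counts)
```

With the decaying learning rate, each center ends up at the running mean of its initial position and the points assigned to it, which is why a well-initialized center barely moves.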
andotrue/k_proj_dev | vendor/doctrine/orm/docs/en/_exts/configurationblock.py | 2577 | 3506 | #Copyright (c) 2010 Fabien Potencier
#
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is furnished
#to do so, subject to the following conditions:
#
#The above copyright notice and this permission notice shall be included in all
#copies or substantial portions of the Software.
#
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
#THE SOFTWARE.
from docutils.parsers.rst import Directive, directives
from docutils import nodes
class configurationblock(nodes.General, nodes.Element):
pass
class ConfigurationBlock(Directive):
has_content = True
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = True
option_spec = {}
formats = {
'html': 'HTML',
'xml': 'XML',
'php': 'PHP',
'yaml': 'YAML',
'jinja': 'Twig',
'html+jinja': 'Twig',
'jinja+html': 'Twig',
'php+html': 'PHP',
'html+php': 'PHP',
'ini': 'INI',
'php-annotations': 'Annotations',
}
def run(self):
env = self.state.document.settings.env
node = nodes.Element()
node.document = self.state.document
self.state.nested_parse(self.content, self.content_offset, node)
entries = []
for i, child in enumerate(node):
if isinstance(child, nodes.literal_block):
# add a title (the language name) before each block
#targetid = "configuration-block-%d" % env.new_serialno('configuration-block')
#targetnode = nodes.target('', '', ids=[targetid])
#targetnode.append(child)
innernode = nodes.emphasis(self.formats[child['language']], self.formats[child['language']])
para = nodes.paragraph()
para += [innernode, child]
entry = nodes.list_item('')
entry.append(para)
entries.append(entry)
resultnode = configurationblock()
resultnode.append(nodes.bullet_list('', *entries))
return [resultnode]
def visit_configurationblock_html(self, node):
self.body.append(self.starttag(node, 'div', CLASS='configuration-block'))
def depart_configurationblock_html(self, node):
self.body.append('</div>\n')
def visit_configurationblock_latex(self, node):
pass
def depart_configurationblock_latex(self, node):
pass
def setup(app):
app.add_node(configurationblock,
html=(visit_configurationblock_html, depart_configurationblock_html),
latex=(visit_configurationblock_latex, depart_configurationblock_latex))
app.add_directive('configuration-block', ConfigurationBlock)
| mit |
ojengwa/odoo | addons/sale_mrp/__init__.py | 445 | 1062 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import sale_mrp
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
o5k/openerp-oemedical-v0.1 | openerp/addons/base/tests/test_ir_attachment.py | 68 | 3224 | import hashlib
import os
import unittest2
import openerp
import openerp.tests.common
class test_ir_attachment(openerp.tests.common.TransactionCase):
def test_00_attachment_flow(self):
registry, cr, uid = self.registry, self.cr, self.uid
root_path = openerp.tools.config['root_path']
ira = registry('ir.attachment')
# Blob1
blob1 = 'blob1'
blob1_b64 = blob1.encode('base64')
blob1_hash = hashlib.sha1(blob1).hexdigest()
blob1_fname = blob1_hash[:3] + '/' + blob1_hash
# Blob2
blob2 = 'blob2'
blob2_b64 = blob2.encode('base64')
blob2_hash = hashlib.sha1(blob2).hexdigest()
blob2_fname = blob2_hash[:3] + '/' + blob2_hash
# 'ir_attachment.location' is undefined: attachments go to database storage
a1 = ira.create(cr, uid, {'name': 'a1', 'datas': blob1_b64})
a1_read = ira.read(cr, uid, [a1], ['datas'])
self.assertEqual(a1_read[0]['datas'], blob1_b64)
cr.execute("select id,db_datas from ir_attachment where id = %s", (a1,) )
a1_db_datas = str(cr.fetchall()[0][1])
self.assertEqual(a1_db_datas, blob1_b64)
# define a location for filestore
registry('ir.config_parameter').set_param(cr, uid, 'ir_attachment.location', 'file:///filestore')
# Test file storage
a2 = ira.create(cr, uid, {'name': 'a2', 'datas': blob1_b64})
a2_read = ira.read(cr, uid, [a2], ['datas'])
self.assertEqual(a2_read[0]['datas'], blob1_b64)
cr.execute("select id,store_fname from ir_attachment where id = %s", (a2,) )
a2_store_fname = cr.fetchall()[0][1]
self.assertEqual(a2_store_fname, blob1_fname)
a2_fn = os.path.join(root_path, 'filestore', cr.dbname, blob1_hash[:3], blob1_hash)
fc = file(a2_fn).read()
self.assertEqual(fc, blob1)
# create a3 with same blob
a3 = ira.create(cr, uid, {'name': 'a3', 'datas': blob1_b64})
a3_read = ira.read(cr, uid, [a3], ['datas'])
self.assertEqual(a3_read[0]['datas'], blob1_b64)
cr.execute("select id,store_fname from ir_attachment where id = %s", (a3,) )
a3_store_fname = cr.fetchall()[0][1]
self.assertEqual(a3_store_fname, a2_store_fname)
# create a4 blob2
a4 = ira.create(cr, uid, {'name': 'a4', 'datas': blob2_b64})
a4_read = ira.read(cr, uid, [a4], ['datas'])
self.assertEqual(a4_read[0]['datas'], blob2_b64)
a4_fn = os.path.join(root_path, 'filestore', cr.dbname, blob2_hash[:3], blob2_hash)
self.assertTrue(os.path.isfile(a4_fn))
# delete a3 but file stays
ira.unlink(cr, uid, [a3])
self.assertTrue(os.path.isfile(a2_fn))
# delete a2, it is unlinked
ira.unlink(cr, uid, [a2])
self.assertFalse(os.path.isfile(a2_fn))
# update a4 blob2 by blob1
ira.write(cr, uid, [a4], {'datas': blob1_b64})
a4_read = ira.read(cr, uid, [a4], ['datas'])
self.assertEqual(a4_read[0]['datas'], blob1_b64)
# file of a4 disappears and a2 reappears
self.assertFalse(os.path.isfile(a4_fn))
self.assertTrue(os.path.isfile(a2_fn))
# everybody applause
| agpl-3.0 |
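The filestore layout checked by the test above is content-addressed: a blob's storage path is derived from its SHA-1 digest, so identical blobs share one file on disk and deleting one attachment need not delete the file. A minimal sketch of that scheme (an illustration of the naming convention exercised by the test, not Odoo's actual implementation):

```python
import hashlib

def store_fname(blob):
    """Content-addressed path: first 3 hex chars of the SHA-1 digest as a
    directory, full digest as the file name."""
    digest = hashlib.sha1(blob).hexdigest()
    return digest[:3] + '/' + digest

fname1 = store_fname(b'blob1')
fname2 = store_fname(b'blob1')  # same blob -> same path (deduplication)
fname3 = store_fname(b'blob2')  # different blob -> different path
```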
SSSD/sssd | src/tests/intg/ent_test.py | 4 | 16139 | #
# ent.py module tests
#
# Copyright (c) 2015 Red Hat, Inc.
# Author: Nikolai Kondrashov <Nikolai.Kondrashov@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import re
import os
import io
import pytest
import ent
from util import *
@pytest.fixture(scope="module")
def passwd_path(request):
name = "NSS_WRAPPER_PASSWD"
request.addfinalizer(lambda: restore_envvar_file(name))
return backup_envvar_file(name)
@pytest.fixture(scope="module")
def group_path(request):
name = "NSS_WRAPPER_GROUP"
request.addfinalizer(lambda: restore_envvar_file(name))
return backup_envvar_file(name)
USER1 = dict(name="user1", passwd="x", uid=1001, gid=2001,
gecos="User 1", dir="/home/user1", shell="/bin/bash")
USER2 = dict(name="user2", passwd="x", uid=1002, gid=2002,
gecos="User 2", dir="/home/user2", shell="/bin/bash")
USER_LIST = [USER1, USER2]
USER_NAME_DICT = dict((u["name"], u) for u in USER_LIST)
USER_UID_DICT = dict((u["uid"], u) for u in USER_LIST)
EMPTY_GROUP = dict(name="empty_group", passwd="x", gid=2000,
mem=ent.contains_only())
GROUP1 = dict(name="group1", passwd="x", gid=2001,
mem=ent.contains_only())
GROUP2 = dict(name="group2", passwd="x", gid=2002,
mem=ent.contains_only())
ONE_USER_GROUP1 = dict(name="one_user_group1", passwd="x", gid=2011,
mem=ent.contains_only("user1"))
ONE_USER_GROUP2 = dict(name="one_user_group2", passwd="x", gid=2012,
mem=ent.contains_only("user2"))
TWO_USER_GROUP = dict(name="two_user_group", passwd="x", gid=2020,
mem=ent.contains_only("user1", "user2"))
GROUP_LIST = [EMPTY_GROUP,
GROUP1,
GROUP2,
ONE_USER_GROUP1,
ONE_USER_GROUP2,
TWO_USER_GROUP]
GROUP_NAME_DICT = dict((g["name"], g) for g in GROUP_LIST)
GROUP_GID_DICT = dict((g["gid"], g) for g in GROUP_LIST)
@pytest.fixture(scope="module")
def users_and_groups(request, passwd_path, group_path):
passwd_contents = "".join([
"{name}:{passwd}:{uid}:{gid}:{gecos}:{dir}:{shell}\n".format(**u)
for u in USER_LIST
])
group_contents = "".join([
"%s:%s:%s:%s\n" % (g["name"], g["passwd"], g["gid"],
",".join(g["mem"]))
for g in GROUP_LIST
])
with open(passwd_path, "a") as f:
f.write(passwd_contents)
with open(group_path, "a") as f:
f.write(group_contents)
def test_assert_passwd_by_name(users_and_groups):
ent.assert_passwd_by_name("user1", {})
ent.assert_passwd_by_name("user1", dict(name="user1", uid=1001))
ent.assert_passwd_by_name("user1", USER1)
try:
ent.assert_passwd_by_name("user3", {})
assert False
except AssertionError as e:
assert str(e) in ("'getpwnam(): name not found: user3'",
"\"getpwnam(): name not found: 'user3'\"")
try:
ent.assert_passwd_by_name("user2", dict(name="user1"))
assert False
except AssertionError as e:
assert str(e) == "'name' mismatch: 'user1' != 'user2'"
def test_assert_passwd_by_uid(users_and_groups):
ent.assert_passwd_by_uid(1001, {})
ent.assert_passwd_by_uid(1001, dict(name="user1", uid=1001))
ent.assert_passwd_by_uid(1001, USER1)
try:
ent.assert_passwd_by_uid(1003, {})
assert False
except AssertionError as e:
assert str(e) == "'getpwuid(): uid not found: 1003'"
try:
ent.assert_passwd_by_uid(1002, dict(name="user1"))
assert False
except AssertionError as e:
assert str(e) == "'name' mismatch: 'user1' != 'user2'"
def test_assert_passwd_list(users_and_groups):
ent.assert_passwd_list(ent.contains())
ent.assert_passwd_list(ent.contains(USER1))
ent.assert_passwd_list(ent.contains_only(*USER_LIST))
try:
ent.assert_passwd_list(ent.contains_only())
assert False
except AssertionError as e:
assert not re.search("expected users not found:", str(e))
assert re.search("unexpected users found:", str(e))
try:
ent.assert_passwd_list(ent.contains(dict(name="non_existent")))
assert False
except AssertionError as e:
assert re.search("expected users not found:", str(e))
assert not re.search("unexpected users found:", str(e))
def test_assert_each_passwd_by_name(users_and_groups):
ent.assert_each_passwd_by_name({})
ent.assert_each_passwd_by_name(dict(user1=USER1))
ent.assert_each_passwd_by_name(USER_NAME_DICT)
try:
ent.assert_each_passwd_by_name(dict(user3={}))
assert False
except AssertionError as e:
assert str(e) in ("'getpwnam(): name not found: user3'",
"\"getpwnam(): name not found: 'user3'\"")
try:
ent.assert_each_passwd_by_name(dict(user1=dict(name="user2")))
assert False
except AssertionError as e:
assert str(e) == \
"user 'user1' mismatch: 'name' mismatch: 'user2' != 'user1'"
def test_assert_each_passwd_by_uid(users_and_groups):
ent.assert_each_passwd_by_uid({})
ent.assert_each_passwd_by_uid({1001: USER1})
ent.assert_each_passwd_by_uid(USER_UID_DICT)
try:
ent.assert_each_passwd_by_uid({1003: {}})
assert False
except AssertionError as e:
assert str(e) == "'getpwuid(): uid not found: 1003'"
try:
ent.assert_each_passwd_by_uid({1001: dict(uid=1002)})
assert False
except AssertionError as e:
assert str(e) == \
"user 1001 mismatch: 'uid' mismatch: 1002 != 1001"
def test_assert_each_passwd_with_name(users_and_groups):
ent.assert_each_passwd_with_name([])
ent.assert_each_passwd_with_name([USER1])
ent.assert_each_passwd_with_name(USER_LIST)
try:
ent.assert_each_passwd_with_name([dict(name="user3")])
assert False
except AssertionError as e:
assert str(e) in ("'getpwnam(): name not found: user3'",
"\"getpwnam(): name not found: 'user3'\"")
try:
ent.assert_each_passwd_with_name([dict(name="user1", uid=1002)])
assert False
except AssertionError as e:
assert str(e) == \
"user 'user1' mismatch: 'uid' mismatch: 1002 != 1001"
def test_assert_each_passwd_with_uid(users_and_groups):
ent.assert_each_passwd_with_uid([])
ent.assert_each_passwd_with_uid([USER1])
ent.assert_each_passwd_with_uid(USER_LIST)
try:
ent.assert_each_passwd_with_uid([dict(uid=1003)])
assert False
except AssertionError as e:
assert str(e) == "'getpwuid(): uid not found: 1003'"
try:
ent.assert_each_passwd_with_uid([dict(name="user2", uid=1001)])
assert False
except AssertionError as e:
assert str(e) == \
"user 1001 mismatch: 'name' mismatch: 'user2' != 'user1'"
def test_assert_passwd(users_and_groups):
ent.assert_passwd(ent.contains())
ent.assert_passwd(ent.contains(USER1))
ent.assert_passwd(ent.contains_only(*USER_LIST))
try:
ent.assert_passwd(ent.contains(dict(name="user3", uid=1003)))
assert False
except AssertionError as e:
assert re.search("list mismatch:", str(e))
assert re.search("expected users not found:", str(e))
assert not re.search("unexpected users found:", str(e))
try:
ent.assert_passwd(ent.contains_only(USER1))
assert False
except AssertionError as e:
assert re.search("list mismatch:", str(e))
assert not re.search("expected users not found:", str(e))
assert re.search("unexpected users found:", str(e))
def test_group_member_matching(users_and_groups):
ent.assert_group_by_name("empty_group", dict(mem=ent.contains()))
ent.assert_group_by_name("empty_group", dict(mem=ent.contains_only()))
try:
ent.assert_group_by_name("empty_group",
dict(mem=ent.contains("user1")))
except AssertionError as e:
assert re.search("member list mismatch:", str(e))
assert re.search("expected members not found:", str(e))
ent.assert_group_by_name("one_user_group1", dict(mem=ent.contains()))
ent.assert_group_by_name("one_user_group1",
dict(mem=ent.contains("user1")))
ent.assert_group_by_name("one_user_group1",
dict(mem=ent.contains_only("user1")))
try:
ent.assert_group_by_name("one_user_group1",
dict(mem=ent.contains_only()))
except AssertionError as e:
assert re.search("member list mismatch:", str(e))
assert re.search("unexpected members found:", str(e))
assert not re.search("expected members not found:", str(e))
try:
ent.assert_group_by_name("one_user_group1",
dict(mem=ent.contains_only("user3")))
except AssertionError as e:
assert re.search("member list mismatch:", str(e))
assert re.search("unexpected members found:", str(e))
assert re.search("expected members not found:", str(e))
try:
ent.assert_group_by_name("one_user_group1",
dict(mem=ent.contains("user3")))
except AssertionError as e:
assert re.search("member list mismatch:", str(e))
assert not re.search("unexpected members found:", str(e))
assert re.search("expected members not found:", str(e))
ent.assert_group_by_name("two_user_group", dict(mem=ent.contains()))
ent.assert_group_by_name("two_user_group",
dict(mem=ent.contains("user1")))
ent.assert_group_by_name("two_user_group",
dict(mem=ent.contains("user1", "user2")))
ent.assert_group_by_name("two_user_group",
dict(mem=ent.contains_only("user1", "user2")))
try:
ent.assert_group_by_name("two_user_group",
dict(mem=ent.contains_only("user1")))
except AssertionError as e:
assert re.search("member list mismatch:", str(e))
assert re.search("unexpected members found:", str(e))
assert not re.search("expected members not found:", str(e))
def test_assert_group_by_name(users_and_groups):
ent.assert_group_by_name("group1", {})
ent.assert_group_by_name("group1", dict(name="group1", gid=2001))
ent.assert_group_by_name("group1", GROUP1)
try:
ent.assert_group_by_name("group3", {})
assert False
except AssertionError as e:
assert str(e) in ("'getgrnam(): name not found: group3'",
"\"getgrnam(): name not found: 'group3'\"")
try:
ent.assert_group_by_name("group2", dict(name="group1"))
assert False
except AssertionError as e:
assert str(e) == "'name' mismatch: 'group1' != 'group2'"
def test_assert_group_by_gid(users_and_groups):
ent.assert_group_by_gid(2001, {})
ent.assert_group_by_gid(2001, dict(name="group1", gid=2001))
ent.assert_group_by_gid(2001, GROUP1)
try:
ent.assert_group_by_gid(2003, {})
assert False
except AssertionError as e:
assert str(e) == "'getgrgid(): gid not found: 2003'"
try:
ent.assert_group_by_gid(2002, dict(name="group1"))
assert False
except AssertionError as e:
assert str(e) == "'name' mismatch: 'group1' != 'group2'"
def test_assert_group_list(users_and_groups):
ent.assert_group_list(ent.contains())
ent.assert_group_list(ent.contains(GROUP1))
ent.assert_group_list(ent.contains_only(*GROUP_LIST))
try:
ent.assert_group_list(ent.contains_only())
assert False
except AssertionError as e:
assert not re.search("expected groups not found:", str(e))
assert re.search("unexpected groups found:", str(e))
try:
ent.assert_group_list(ent.contains(dict(name="non_existent")))
assert False
except AssertionError as e:
assert re.search("expected groups not found:", str(e))
assert not re.search("unexpected groups found:", str(e))
def test_assert_each_group_by_name(users_and_groups):
ent.assert_each_group_by_name({})
ent.assert_each_group_by_name(dict(group1=GROUP1))
ent.assert_each_group_by_name(GROUP_NAME_DICT)
try:
ent.assert_each_group_by_name(dict(group3={}))
assert False
except AssertionError as e:
assert str(e) in ("'getgrnam(): name not found: group3'",
"\"getgrnam(): name not found: 'group3'\"")
try:
ent.assert_each_group_by_name(dict(group1=dict(name="group2")))
assert False
except AssertionError as e:
assert str(e) == "group 'group1' mismatch: " + \
"'name' mismatch: 'group2' != 'group1'"
def test_assert_each_group_by_gid(users_and_groups):
ent.assert_each_group_by_gid({})
ent.assert_each_group_by_gid({2001: GROUP1})
ent.assert_each_group_by_gid(GROUP_GID_DICT)
try:
ent.assert_each_group_by_gid({2003: {}})
assert False
except AssertionError as e:
assert str(e) == "'getgrgid(): gid not found: 2003'"
try:
ent.assert_each_group_by_gid({2001: dict(gid=2002)})
assert False
except AssertionError as e:
assert str(e) == \
"group 2001 mismatch: 'gid' mismatch: 2002 != 2001"
def test_assert_each_group_with_name(users_and_groups):
ent.assert_each_group_with_name([])
ent.assert_each_group_with_name([GROUP1])
ent.assert_each_group_with_name(GROUP_LIST)
try:
ent.assert_each_group_with_name([dict(name="group3")])
assert False
except AssertionError as e:
assert str(e) in ("'getgrnam(): name not found: group3'",
"\"getgrnam(): name not found: 'group3'\"")
try:
ent.assert_each_group_with_name([dict(name="group1", gid=2002)])
assert False
except AssertionError as e:
assert str(e) == \
"group 'group1' mismatch: 'gid' mismatch: 2002 != 2001"
def test_assert_each_group_with_gid(users_and_groups):
ent.assert_each_group_with_gid([])
ent.assert_each_group_with_gid([GROUP1])
ent.assert_each_group_with_gid(GROUP_LIST)
try:
ent.assert_each_group_with_gid([dict(gid=2003)])
assert False
except AssertionError as e:
assert str(e) == "'getgrgid(): gid not found: 2003'"
try:
ent.assert_each_group_with_gid([dict(name="group2", gid=2001)])
assert False
except AssertionError as e:
assert str(e) == \
"group 2001 mismatch: 'name' mismatch: 'group2' != 'group1'"
def test_assert_group(users_and_groups):
ent.assert_group(ent.contains())
ent.assert_group(ent.contains(GROUP1))
ent.assert_group(ent.contains_only(*GROUP_LIST))
try:
ent.assert_group(ent.contains(dict(name="group3", gid=2003)))
assert False
except AssertionError as e:
assert re.search("list mismatch:", str(e))
assert re.search("expected groups not found:", str(e))
assert not re.search("unexpected groups found:", str(e))
try:
ent.assert_group(ent.contains_only(GROUP1))
assert False
except AssertionError as e:
assert re.search("list mismatch:", str(e))
assert not re.search("expected groups not found:", str(e))
assert re.search("unexpected groups found:", str(e))
| gpl-3.0 |
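The `ent.contains` / `ent.contains_only` matchers exercised above can be approximated with set comparisons. A minimal pure-Python sketch of their assumed semantics (not SSSD's actual `ent` module, which also reports the expected/unexpected items on mismatch):

```python
def contains(*expected):
    """Matcher: every expected item must appear; extra items are allowed."""
    def match(actual):
        return set(expected) <= set(actual)
    return match

def contains_only(*expected):
    """Matcher: actual must hold exactly the expected items, no more."""
    def match(actual):
        return set(expected) == set(actual)
    return match

members = ["user1", "user2"]
```

Under these semantics `contains()` (no arguments) matches any member list, while `contains_only()` matches only an empty one, which is exactly the distinction `test_group_member_matching` relies on.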
kwailamchan/programming-languages | javascript/backbone/backbone-templates/backbone-fileupload/venvs/lib/python2.7/site-packages/django/contrib/databrowse/tests.py | 95 | 1906 | from django.contrib import databrowse
from django.db import models
from django.test import TestCase
class SomeModel(models.Model):
some_field = models.CharField(max_length=50)
def __unicode__(self):
return self.some_field
class SomeOtherModel(models.Model):
some_other_field = models.CharField(max_length=50)
def __unicode__(self):
return self.some_other_field
class YetAnotherModel(models.Model):
yet_another_field = models.CharField(max_length=50)
def __unicode__(self):
return self.yet_another_field
class DatabrowseTests(TestCase):
def test_databrowse_register_unregister(self):
databrowse.site.register(SomeModel)
self.assertTrue(SomeModel in databrowse.site.registry)
databrowse.site.register(SomeOtherModel, YetAnotherModel)
self.assertTrue(SomeOtherModel in databrowse.site.registry)
self.assertTrue(YetAnotherModel in databrowse.site.registry)
self.assertRaisesMessage(
databrowse.sites.AlreadyRegistered,
'The model SomeModel is already registered',
databrowse.site.register, SomeModel, SomeOtherModel
)
databrowse.site.unregister(SomeOtherModel)
self.assertFalse(SomeOtherModel in databrowse.site.registry)
databrowse.site.unregister(SomeModel, YetAnotherModel)
self.assertFalse(SomeModel in databrowse.site.registry)
self.assertFalse(YetAnotherModel in databrowse.site.registry)
self.assertRaisesMessage(
databrowse.sites.NotRegistered,
'The model SomeModel is not registered',
databrowse.site.unregister, SomeModel, SomeOtherModel
)
self.assertRaisesMessage(
databrowse.sites.AlreadyRegistered,
'The model SomeModel is already registered',
databrowse.site.register, SomeModel, SomeModel
)
| mit |
FinalsClub/karmaworld | karmaworld/apps/users/migrations/0004_auto__add_field_generickarmaevent_event_type.py | 1 | 14051 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'GenericKarmaEvent.event_type'
db.add_column(u'users_generickarmaevent', 'event_type',
self.gf('django.db.models.fields.CharField')(default='none', max_length=15),
keep_default=False)
def backwards(self, orm):
# Deleting field 'GenericKarmaEvent.event_type'
db.delete_column(u'users_generickarmaevent', 'event_type')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'courses.course': {
'Meta': {'ordering': "['-file_count', 'school', 'name']", 'unique_together': "(('name', 'department'),)", 'object_name': 'Course'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'department': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.Department']", 'null': 'True', 'blank': 'True'}),
'desc': ('django.db.models.fields.TextField', [], {'max_length': '511', 'null': 'True', 'blank': 'True'}),
'file_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'flags': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'instructor_email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'null': 'True', 'blank': 'True'}),
'instructor_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'school': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.School']", 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '150', 'null': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '511', 'null': 'True', 'blank': 'True'})
},
u'courses.department': {
'Meta': {'unique_together': "(('name', 'school'),)", 'object_name': 'Department'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'school': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.School']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '150', 'null': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '511', 'null': 'True', 'blank': 'True'})
},
u'courses.school': {
'Meta': {'ordering': "['-file_count', '-priority', 'name']", 'object_name': 'School'},
'alias': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'facebook_id': ('django.db.models.fields.BigIntegerField', [], {'null': 'True', 'blank': 'True'}),
'file_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'hashtag': ('django.db.models.fields.CharField', [], {'max_length': '16', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'priority': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '150', 'null': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '511', 'blank': 'True'}),
'usde_id': ('django.db.models.fields.BigIntegerField', [], {'unique': 'True', 'null': 'True', 'blank': 'True'})
},
u'licenses.license': {
'Meta': {'object_name': 'License'},
'html': ('django.db.models.fields.TextField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'})
},
u'notes.note': {
'Meta': {'ordering': "['-uploaded_at']", 'unique_together': "(('fp_file', 'upstream_link'),)", 'object_name': 'Note'},
'course': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.Course']"}),
'file_type': ('django.db.models.fields.CharField', [], {'default': "'???'", 'max_length': '15', 'null': 'True', 'blank': 'True'}),
'flags': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'fp_file': ('django_filepicker.models.FPFileField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'gdrive_url': ('django.db.models.fields.URLField', [], {'max_length': '1024', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'html': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip': ('django.db.models.fields.GenericIPAddressField', [], {'max_length': '39', 'null': 'True', 'blank': 'True'}),
'is_hidden': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'license': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['licenses.License']", 'null': 'True', 'blank': 'True'}),
'mimetype': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'pdf_file': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '255'}),
'static_html': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'text': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'thanks': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'tweeted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'uploaded_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow', 'null': 'True'}),
'upstream_link': ('django.db.models.fields.URLField', [], {'max_length': '1024', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'year': ('django.db.models.fields.IntegerField', [], {'default': '2014', 'null': 'True', 'blank': 'True'})
},
u'taggit.tag': {
'Meta': {'ordering': "['namespace', 'name']", 'object_name': 'Tag'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '100'}),
'namespace': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '100'})
},
u'taggit.taggeditem': {
'Meta': {'object_name': 'TaggedItem'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_tagged_items'", 'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_items'", 'to': u"orm['taggit.Tag']"})
},
u'users.coursekarmaevent': {
'Meta': {'object_name': 'CourseKarmaEvent'},
'course': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.Course']"}),
'event_type': ('django.db.models.fields.CharField', [], {'max_length': '15'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'points': ('django.db.models.fields.IntegerField', [], {}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
},
u'users.generickarmaevent': {
'Meta': {'object_name': 'GenericKarmaEvent'},
'event_type': ('django.db.models.fields.CharField', [], {'default': "'none'", 'max_length': '15'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'points': ('django.db.models.fields.IntegerField', [], {}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
},
u'users.notekarmaevent': {
'Meta': {'object_name': 'NoteKarmaEvent'},
'event_type': ('django.db.models.fields.CharField', [], {'max_length': '15'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'note': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['notes.Note']"}),
'points': ('django.db.models.fields.IntegerField', [], {}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
},
u'users.userprofile': {
'Meta': {'object_name': 'UserProfile'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'school': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['courses.School']", 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['auth.User']", 'unique': 'True'})
}
}
complete_apps = ['users'] | agpl-3.0 |
Drizzt12/Jane | Jane-Old/Calorie_Tracker.py | 1 | 1885 | import shelve
c = shelve.open("Calories")
from datetime import datetime
now = datetime.now()
month = now.month
year = now.year
day = now.day
date = str(month) + "/" + str(day) + "/" + str(year)
yest = datetime.fromordinal(now.toordinal() - 1)
yesterday = str(yest.month) + "/" + str(yest.day) + "/" + str(yest.year)
print "In the month/day/year format, today is: ", date
print "Welcome to your calorie tracker!"
todayc = c.get(date, 0)
todayneeded = 2600-todayc
print "you had", c[yesterday],"calories yesterday"
print "You have had", todayc,"calories today."
print "You need", todayneeded,"calories for the rest of today."
x=0
while x!="quit":
print "what would you like to do?"
print "valid answers are 'add' 'subtract' 'data', 'amount', and 'quit'"
x = raw_input(">>> ")
	todayc = c.get(date, 0)
todayneeded = 2600-todayc
if x=="quit":
print "Bye!"
elif x=="data":
print "Yogurt: 140C, Peach Yogurt: 180C"
print "Slice of bread: 90C, Sandwich: 200C"
print "Banana: 105C, Apple: 95C"
print "Bowl of cereal: 125C, Bowl of oatmeal: 150C"
print "Large Egg: 78C, Small pancake: 64C"
print "1 Cup of spaghetti: 221C, Can of lentil soup: 317C"
print "1 oz of corn chips: 147C, 1 oz of potato chips: 152C"
print "One medium carrot: 25C, One tablespoon of hummus: 25C"
print "One multigrain cracker: 10C, Three meat biggie at twisters: 650C"
print "Twisters Breakfast burrito: 400C, Taco: 156C"
print "Hot Dog: 170C"
elif x=="add":
		try:
			add = int(raw_input("How many calories would you like to add? "))
			c[date] = todayc + add
			print "ok."
		except ValueError:
			print "Sorry, I do not understand."
elif x=="subtract":
		try:
			subtract = int(raw_input("How many calories would you like to remove? "))
			c[date] = todayc - subtract
			print "ok."
		except ValueError:
			print "Sorry, I do not understand."
elif x=="amount":
print "you have had", todayc,"calories today."
print "you need", todayneeded,"calories for the rest of today."
| mit |
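The tracker above can be sketched more robustly in modern Python 3 using date arithmetic and defaulted lookups. This is an illustrative rewrite, not part of the original script: the function names, module structure, and the fixed 2600-calorie target are assumptions for the sketch.

```python
# Minimal Python 3 sketch of a shelve-backed calorie log keyed by ISO date.
# datetime.date arithmetic avoids the string-based "day - 1" bug, and .get()
# defaults avoid a KeyError on days that have no entry yet.
import shelve
from datetime import date, timedelta

DAILY_TARGET = 2600  # assumed daily budget, matching the script above

def log_calories(db_path, amount, day=None):
    """Add (or subtract, via a negative amount) calories; return the new total."""
    key = (day or date.today()).isoformat()
    with shelve.open(db_path) as db:
        db[key] = db.get(key, 0) + amount
        return db[key]

def remaining(db_path, day=None):
    """Calories left against the daily target for the given day."""
    key = (day or date.today()).isoformat()
    with shelve.open(db_path) as db:
        return DAILY_TARGET - db.get(key, 0)

# "Yesterday" is plain date arithmetic, correct across month and year boundaries:
yesterday = date.today() - timedelta(days=1)
```

The context manager guarantees the shelf is closed (and flushed) after each operation, which the original script never does explicitly.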
santoshkumarsingh/Data-Wrangling-with-MongoDB | Lesson_6_Case_Study/03-Iterative_parsing/iterative_parsing.py | 2 | 1123 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Your task is to use the iterative parsing to process the map file and
find out not only what tags are there, but also how many, to get the
feeling on how much of which data you can expect to have in the map.
The output should be a dictionary with the tag name as the key
and number of times this tag can be encountered in the map as value.
Note that your code will be tested with a different data file than the 'example.osm'
"""
import xml.etree.ElementTree as ET
import pprint
def count_tags(filename):
tags = {}
for event, elem in ET.iterparse(filename):
if elem.tag in tags:
tags[elem.tag] += 1
else:
tags[elem.tag] = 1
return tags
def test():
tags = count_tags('example.osm')
pprint.pprint(tags)
assert tags == {'bounds': 1,
'member': 3,
'nd': 4,
'node': 20,
'osm': 1,
'relation': 1,
'tag': 7,
'way': 1}
if __name__ == "__main__":
test() | agpl-3.0 |
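As written, `count_tags` lets ElementTree accumulate every parsed element in memory, which matters for full-size OSM extracts. A standard memory-saving variant clears each element once counted; the sketch below is illustrative (`count_tags_streaming` is not a name from the exercise).

```python
# Near-constant-memory tag counting for large XML files: clearing each
# element after its end event prevents ElementTree from retaining the
# whole tree during iterparse.
import xml.etree.ElementTree as ET
from collections import Counter

def count_tags_streaming(filename):
    tags = Counter()
    for _, elem in ET.iterparse(filename):
        tags[elem.tag] += 1
        elem.clear()  # children were already counted; free them
    return dict(tags)
```

Because iterparse fires end events for children before their parent, clearing the parent never discards an uncounted element.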
infobloxopen/infoblox-netmri | infoblox_netmri/api/broker/v2_9_0/device_object_broker.py | 14 | 52439 | from ..broker import Broker
class DeviceObjectBroker(Broker):
controller = "device_objects"
def show(self, **kwargs):
"""Shows the details for the specified device object.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device object methods. The listed methods will be called on each device object returned and included in the output. Available methods are: device_cfg_context, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: device_cfg_context, data_source, device.
:type include: Array of String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return device_object: The device object identified by the specified DeviceObjectID.
:rtype device_object: DeviceObject
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
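The dispatch in `show` (and in the other broker methods below) composes the broker's `controller` name with the method name to form the API endpoint, passing the keyword arguments through as request parameters. The stub below is a hypothetical illustration of that pattern only; it is not the library's real transport or client class.

```python
# Hypothetical stand-in for the broker dispatch pattern: controller name +
# method name -> endpoint, kwargs -> request parameters. The transport is a
# plain callable stub, not infoblox_netmri's real HTTP client.
class StubBroker:
    controller = "device_objects"

    def __init__(self, transport):
        self._transport = transport  # callable(endpoint, params) -> response

    def _get_method_fullname(self, method):
        return "%s/%s" % (self.controller, method)

    def show(self, **kwargs):
        return self._transport(self._get_method_fullname("show"), kwargs)

calls = []
broker = StubBroker(lambda endpoint, params: calls.append((endpoint, params)) or {"device_object": {}})
broker.show(DeviceObjectID=42)
```

After the call, `calls[0]` holds `("device_objects/show", {"DeviceObjectID": 42})`, mirroring how `api_request` receives the fully qualified method name plus the caller's filters.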
def index(self, **kwargs):
"""Lists the available device objects. Any of the inputs listed may be be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device to which this network object belongs.
:type DeviceID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjName: Name of this network object.
:type ObjName: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the device objects as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device object methods. The listed methods will be called on each device object returned and included in the output. Available methods are: device_cfg_context, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: device_cfg_context, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceObjectID
:param sort: The data field(s) to use for sorting the output. Default is DeviceObjectID. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each DeviceObject. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return device_objects: An array of the DeviceObject objects that match the specified input criteria.
:rtype device_objects: Array of DeviceObject
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
def search(self, **kwargs):
"""Lists the available device objects matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.
**Inputs**
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record.
:type DataSourceID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceCfgContextID: The internal NetMRI identifier of the Configuration context of declaration of this network object.
:type DeviceCfgContextID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device to which this network object belongs.
:type DeviceID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Array of Integer
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjArtificialInd: Flag indicating this object network does not exist in the device configuration.
:type ObjArtificialInd: Array of Boolean
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjChangedCols: The fields that changed between this revision of the record and the previous revision.
:type ObjChangedCols: Array of String
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjConfigText: Original text of the definition of his network object in the device configuration.
:type ObjConfigText: Array of String
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjEndTime: The ending effective time of this record, or empty if still in effect.
:type ObjEndTime: Array of DateTime
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjFirstSeenTime: The timestamp of when NetMRI first discovered this network object.
:type ObjFirstSeenTime: Array of DateTime
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjName: Name of this network object.
:type ObjName: Array of String
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjProvisionData: Internal data - do not modify, may change without warning.
:type ObjProvisionData: Array of String
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjStartTime: The starting effective time of this record.
:type ObjStartTime: Array of DateTime
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjTimestamp: The date and time this record was collected or calculated.
:type ObjTimestamp: Array of DateTime
| ``api version min:`` 2.6
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ObjUseCount: Total count of usage of this network by other elements of the configuration (rules, other network objects).
:type ObjUseCount: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the device objects as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device object methods. The listed methods will be called on each device object returned and included in the output. Available methods are: device_cfg_context, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: device_cfg_context, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceObjectID
:param sort: The data field(s) to use for sorting the output. Default is DeviceObjectID. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each DeviceObject. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param query: This value will be matched against device objects, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: DataSourceID, DeviceCfgContextID, DeviceID, DeviceObjectID, ObjArtificialInd, ObjChangedCols, ObjConfigText, ObjEndTime, ObjFirstSeenTime, ObjName, ObjProvisionData, ObjStartTime, ObjTimestamp, ObjUseCount.
:type query: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
             :param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return device_objects: An array of the DeviceObject objects that match the specified input criteria.
:rtype device_objects: Array of DeviceObject
"""
return self.api_list_request(self._get_method_fullname("search"), kwargs)
def find(self, **kwargs):
"""Lists the available device objects matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: DataSourceID, DeviceCfgContextID, DeviceID, DeviceObjectID, ObjArtificialInd, ObjChangedCols, ObjConfigText, ObjEndTime, ObjFirstSeenTime, ObjName, ObjProvisionData, ObjStartTime, ObjTimestamp, ObjUseCount.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DataSourceID: The operator to apply to the field DataSourceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DataSourceID: If op_DataSourceID is specified, the field named in this input will be compared to the value in DataSourceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DataSourceID must be specified if op_DataSourceID is specified.
:type val_f_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DataSourceID: If op_DataSourceID is specified, this value will be compared to the value in DataSourceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DataSourceID must be specified if op_DataSourceID is specified.
:type val_c_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceCfgContextID: The operator to apply to the field DeviceCfgContextID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceCfgContextID: The internal NetMRI identifier of the Configuration context of declaration of this network object. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceCfgContextID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceCfgContextID: If op_DeviceCfgContextID is specified, the field named in this input will be compared to the value in DeviceCfgContextID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceCfgContextID must be specified if op_DeviceCfgContextID is specified.
:type val_f_DeviceCfgContextID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceCfgContextID: If op_DeviceCfgContextID is specified, this value will be compared to the value in DeviceCfgContextID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceCfgContextID must be specified if op_DeviceCfgContextID is specified.
:type val_c_DeviceCfgContextID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceID: The operator to apply to the field DeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceID: The internal NetMRI identifier for the device to which this network object belongs. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceID: If op_DeviceID is specified, the field named in this input will be compared to the value in DeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceID must be specified if op_DeviceID is specified.
:type val_f_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceID: If op_DeviceID is specified, this value will be compared to the value in DeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceID must be specified if op_DeviceID is specified.
:type val_c_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceObjectID: The operator to apply to the field DeviceObjectID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceObjectID: The internal NetMRI identifier for this network object. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceObjectID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceObjectID: If op_DeviceObjectID is specified, the field named in this input will be compared to the value in DeviceObjectID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceObjectID must be specified if op_DeviceObjectID is specified.
:type val_f_DeviceObjectID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceObjectID: If op_DeviceObjectID is specified, this value will be compared to the value in DeviceObjectID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceObjectID must be specified if op_DeviceObjectID is specified.
:type val_c_DeviceObjectID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjArtificialInd: The operator to apply to the field ObjArtificialInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjArtificialInd: Flag indicating this network object does not exist in the device configuration. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjArtificialInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjArtificialInd: If op_ObjArtificialInd is specified, the field named in this input will be compared to the value in ObjArtificialInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjArtificialInd must be specified if op_ObjArtificialInd is specified.
:type val_f_ObjArtificialInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjArtificialInd: If op_ObjArtificialInd is specified, this value will be compared to the value in ObjArtificialInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjArtificialInd must be specified if op_ObjArtificialInd is specified.
:type val_c_ObjArtificialInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjChangedCols: The operator to apply to the field ObjChangedCols. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjChangedCols: The fields that changed between this revision of the record and the previous revision. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjChangedCols: If op_ObjChangedCols is specified, the field named in this input will be compared to the value in ObjChangedCols using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjChangedCols must be specified if op_ObjChangedCols is specified.
:type val_f_ObjChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjChangedCols: If op_ObjChangedCols is specified, this value will be compared to the value in ObjChangedCols using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjChangedCols must be specified if op_ObjChangedCols is specified.
:type val_c_ObjChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjConfigText: The operator to apply to the field ObjConfigText. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjConfigText: Original text of the definition of this network object in the device configuration. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjConfigText: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjConfigText: If op_ObjConfigText is specified, the field named in this input will be compared to the value in ObjConfigText using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjConfigText must be specified if op_ObjConfigText is specified.
:type val_f_ObjConfigText: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjConfigText: If op_ObjConfigText is specified, this value will be compared to the value in ObjConfigText using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjConfigText must be specified if op_ObjConfigText is specified.
:type val_c_ObjConfigText: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjEndTime: The operator to apply to the field ObjEndTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjEndTime: The ending effective time of this record, or empty if still in effect. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjEndTime: If op_ObjEndTime is specified, the field named in this input will be compared to the value in ObjEndTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjEndTime must be specified if op_ObjEndTime is specified.
:type val_f_ObjEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjEndTime: If op_ObjEndTime is specified, this value will be compared to the value in ObjEndTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjEndTime must be specified if op_ObjEndTime is specified.
:type val_c_ObjEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjFirstSeenTime: The operator to apply to the field ObjFirstSeenTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjFirstSeenTime: The timestamp of when NetMRI first discovered this network object. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjFirstSeenTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjFirstSeenTime: If op_ObjFirstSeenTime is specified, the field named in this input will be compared to the value in ObjFirstSeenTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjFirstSeenTime must be specified if op_ObjFirstSeenTime is specified.
:type val_f_ObjFirstSeenTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjFirstSeenTime: If op_ObjFirstSeenTime is specified, this value will be compared to the value in ObjFirstSeenTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjFirstSeenTime must be specified if op_ObjFirstSeenTime is specified.
:type val_c_ObjFirstSeenTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjName: The operator to apply to the field ObjName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjName: Name of this network object. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjName: If op_ObjName is specified, the field named in this input will be compared to the value in ObjName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjName must be specified if op_ObjName is specified.
:type val_f_ObjName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjName: If op_ObjName is specified, this value will be compared to the value in ObjName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjName must be specified if op_ObjName is specified.
:type val_c_ObjName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjProvisionData: The operator to apply to the field ObjProvisionData. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjProvisionData: Internal data - do not modify, may change without warning. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjProvisionData: If op_ObjProvisionData is specified, the field named in this input will be compared to the value in ObjProvisionData using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjProvisionData must be specified if op_ObjProvisionData is specified.
:type val_f_ObjProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjProvisionData: If op_ObjProvisionData is specified, this value will be compared to the value in ObjProvisionData using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjProvisionData must be specified if op_ObjProvisionData is specified.
:type val_c_ObjProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjStartTime: The operator to apply to the field ObjStartTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjStartTime: The starting effective time of this record. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjStartTime: If op_ObjStartTime is specified, the field named in this input will be compared to the value in ObjStartTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjStartTime must be specified if op_ObjStartTime is specified.
:type val_f_ObjStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjStartTime: If op_ObjStartTime is specified, this value will be compared to the value in ObjStartTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjStartTime must be specified if op_ObjStartTime is specified.
:type val_c_ObjStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjTimestamp: The operator to apply to the field ObjTimestamp. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjTimestamp: The date and time this record was collected or calculated. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjTimestamp: If op_ObjTimestamp is specified, the field named in this input will be compared to the value in ObjTimestamp using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjTimestamp must be specified if op_ObjTimestamp is specified.
:type val_f_ObjTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjTimestamp: If op_ObjTimestamp is specified, this value will be compared to the value in ObjTimestamp using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjTimestamp must be specified if op_ObjTimestamp is specified.
:type val_c_ObjTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ObjUseCount: The operator to apply to the field ObjUseCount. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ObjUseCount: Total count of usage of this network object by other elements of the configuration (rules, other network objects). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ObjUseCount: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ObjUseCount: If op_ObjUseCount is specified, the field named in this input will be compared to the value in ObjUseCount using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ObjUseCount must be specified if op_ObjUseCount is specified.
:type val_f_ObjUseCount: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ObjUseCount: If op_ObjUseCount is specified, this value will be compared to the value in ObjUseCount using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ObjUseCount must be specified if op_ObjUseCount is specified.
:type val_c_ObjUseCount: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the device objects as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device object methods. The listed methods will be called on each device object returned and included in the output. Available methods are: device_cfg_context, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: device_cfg_context, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceObjectID
:param sort: The data field(s) to use for sorting the output. Default is DeviceObjectID. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each DeviceObject. Valid values are DeviceObjectID, DeviceID, DeviceCfgContextID, DataSourceID, ObjFirstSeenTime, ObjStartTime, ObjEndTime, ObjTimestamp, ObjChangedCols, ObjName, ObjUseCount, ObjArtificialInd, ObjConfigText, ObjProvisionData. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return device_objects: An array of the DeviceObject objects that match the specified input criteria.
:rtype device_objects: Array of DeviceObject
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
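The operator/value inputs documented above follow one uniform pattern: each searchable field ``F`` takes an ``op_F`` operator together with either an explicit constant (``val_c_F``) or the name of another field to compare against (``val_f_F``). A minimal sketch of a helper that assembles such keyword arguments — the helper itself is hypothetical and not part of the NetMRI client:

```python
# Hypothetical helper illustrating the op_/val_c_/val_f_ kwarg pattern
# documented for find(); it is not part of the infoblox-netmri client.
def build_find_kwargs(field, operator, constant=None, other_field=None):
    """Assemble find() keyword arguments for one searchable field."""
    kwargs = {"op_" + field: operator}
    if constant is not None:
        # Compare the field against an explicit constant value.
        kwargs["val_c_" + field] = constant
    elif other_field is not None:
        # Compare the field against another field's value instead.
        kwargs["val_f_" + field] = other_field
    return kwargs

# build_find_kwargs("ObjName", "like", constant="web%")
# -> {"op_ObjName": "like", "val_c_ObjName": "web%"}
```

The resulting dict would be merged with any paging or sorting inputs before being passed to ``find(**kwargs)``.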
def data_source(self, **kwargs):
"""The collector NetMRI that collected this data record.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The collector NetMRI that collected this data record.
:rtype : DataSource
"""
return self.api_request(self._get_method_fullname("data_source"), kwargs)
def device_cfg_context(self, **kwargs):
"""The configuration context to which this network object belongs.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The configuration context to which this network object belongs.
:rtype : DeviceCfgContext
"""
return self.api_request(self._get_method_fullname("device_cfg_context"), kwargs)
def device(self, **kwargs):
"""The device from which this data was collected.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceObjectID: The internal NetMRI identifier for this network object.
:type DeviceObjectID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The device from which this data was collected.
:rtype : Device
"""
return self.api_request(self._get_method_fullname("device"), kwargs)
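The ``:start``/``:limit`` paging documented for ``find()`` above reduces to simple slicing arithmetic. The sketch below mimics it locally against an in-memory list standing in for the server-side result set; it is illustrative only, not part of the client:

```python
# Sketch of the :start/:limit slicing semantics documented for find();
# `records` stands in for the server-side result set.
def page(records, start=0, limit=1000):
    """Return the page of records beginning at index `start`."""
    return records[start:start + limit]
```

With 100 records, ``start=10`` and ``limit=10`` yields records 10-19, matching the worked example in the ``find()`` inputs.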
def to_detail(self, **kwargs):
"""Returns the detail for an object.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param object_id: None
:type object_id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param view: 0=tostring, 1=tooltip, 2=popup
:type view: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return detail: None
:rtype detail: String
"""
return self.api_request(self._get_method_fullname("to_detail"), kwargs)
| apache-2.0 |
Dhivyap/ansible | test/units/modules/network/onyx/test_onyx_mlag_vip.py | 52 | 3272 | #
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch
from ansible.modules.network.onyx import onyx_mlag_vip
from units.modules.utils import set_module_args
from .onyx_module import TestOnyxModule, load_fixture
class TestOnyxMlagVipModule(TestOnyxModule):
module = onyx_mlag_vip
def setUp(self):
super(TestOnyxMlagVipModule, self).setUp()
self._mlag_enabled = True
self.mock_show_mlag = patch.object(
onyx_mlag_vip.OnyxMLagVipModule,
"_show_mlag")
self.show_mlag = self.mock_show_mlag.start()
self.mock_show_mlag_vip = patch.object(
onyx_mlag_vip.OnyxMLagVipModule,
"_show_mlag_vip")
self.show_mlag_vip = self.mock_show_mlag_vip.start()
self.mock_load_config = patch(
'ansible.module_utils.network.onyx.onyx.load_config')
self.load_config = self.mock_load_config.start()
def tearDown(self):
super(TestOnyxMlagVipModule, self).tearDown()
self.mock_show_mlag.stop()
self.mock_show_mlag_vip.stop()
self.mock_load_config.stop()
def load_fixtures(self, commands=None, transport='cli'):
if self._mlag_enabled:
config_file = 'onyx_mlag_vip_show.cfg'
self.show_mlag_vip.return_value = load_fixture(config_file)
config_file = 'onyx_mlag_show.cfg'
self.show_mlag.return_value = load_fixture(config_file)
else:
self.show_mlag_vip.return_value = None
self.show_mlag.return_value = None
self.load_config.return_value = None
def test_mlag_no_change(self):
set_module_args(dict(ipaddress='10.209.25.107/24',
group_name='neo-mlag-vip-500',
mac_address='00:00:5E:00:01:4E'))
self.execute_module(changed=False)
def test_mlag_change(self):
self._mlag_enabled = False
set_module_args(dict(ipaddress='10.209.25.107/24',
group_name='neo-mlag-vip-500',
mac_address='00:00:5E:00:01:4E',
delay=0))
commands = ['mlag-vip neo-mlag-vip-500 ip 10.209.25.107 /24 force',
'mlag system-mac 00:00:5e:00:01:4e', 'no mlag shutdown']
self.execute_module(changed=True, commands=commands)
def test_mlag_send_group_name_only_change(self):
self._mlag_enabled = False
set_module_args(dict(group_name='neo-mlag-vip-500',
delay=0))
commands = ['mlag-vip neo-mlag-vip-500',
'no mlag shutdown']
self.execute_module(changed=True, commands=commands)
def test_mlag_absent_no_change(self):
self._mlag_enabled = False
set_module_args(dict(state='absent'))
self.execute_module(changed=False)
def test_mlag_absent_change(self):
set_module_args(dict(state='absent', delay=0))
commands = ['no mlag-vip']
self.execute_module(changed=True, commands=commands)
| gpl-3.0 |
AladdinSonni/phantomjs | src/breakpad/src/tools/gyp/test/rules/gyptest-all.py | 137 | 1073 | #!/usr/bin/env python
# Copyright (c) 2009 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Verifies simple rules when using an explicit build target of 'all'.
"""
import TestGyp
test = TestGyp.TestGyp()
test.run_gyp('actions.gyp', chdir='src')
test.relocate('src', 'relocate/src')
test.build('actions.gyp', test.ALL, chdir='relocate/src')
expect = """\
Hello from program.c
Hello from function1.in
Hello from function2.in
"""
if test.format == 'xcode':
chdir = 'relocate/src/subdir1'
else:
chdir = 'relocate/src'
test.run_built_executable('program', chdir=chdir, stdout=expect)
expect = """\
Hello from program.c
Hello from function3.in
"""
if test.format == 'xcode':
chdir = 'relocate/src/subdir3'
else:
chdir = 'relocate/src'
test.run_built_executable('program2', chdir=chdir, stdout=expect)
test.must_match('relocate/src/subdir2/file1.out', "Hello from file1.in\n")
test.must_match('relocate/src/subdir2/file2.out', "Hello from file2.in\n")
test.pass_test()
| bsd-3-clause |
guh/linux-imx6-3.14-tune | tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Core.py | 11088 | 3246 | # Core.py - Python extension for perf script, core functions
#
# Copyright (C) 2010 by Tom Zanussi <tzanussi@gmail.com>
#
# This software may be distributed under the terms of the GNU General
# Public License ("GPL") version 2 as published by the Free Software
# Foundation.
from collections import defaultdict
def autodict():
return defaultdict(autodict)
flag_fields = autodict()
symbolic_fields = autodict()
def define_flag_field(event_name, field_name, delim):
flag_fields[event_name][field_name]['delim'] = delim
def define_flag_value(event_name, field_name, value, field_str):
flag_fields[event_name][field_name]['values'][value] = field_str
def define_symbolic_field(event_name, field_name):
# nothing to do, really
pass
def define_symbolic_value(event_name, field_name, value, field_str):
symbolic_fields[event_name][field_name]['values'][value] = field_str
def flag_str(event_name, field_name, value):
string = ""
if flag_fields[event_name][field_name]:
print_delim = 0
keys = flag_fields[event_name][field_name]['values'].keys()
keys.sort()
for idx in keys:
if not value and not idx:
string += flag_fields[event_name][field_name]['values'][idx]
break
if idx and (value & idx) == idx:
if print_delim and flag_fields[event_name][field_name]['delim']:
string += " " + flag_fields[event_name][field_name]['delim'] + " "
string += flag_fields[event_name][field_name]['values'][idx]
print_delim = 1
value &= ~idx
return string
def symbol_str(event_name, field_name, value):
string = ""
if symbolic_fields[event_name][field_name]:
keys = symbolic_fields[event_name][field_name]['values'].keys()
keys.sort()
for idx in keys:
if not value and not idx:
string = symbolic_fields[event_name][field_name]['values'][idx]
break
if (value == idx):
string = symbolic_fields[event_name][field_name]['values'][idx]
break
return string
trace_flags = { 0x00: "NONE", \
0x01: "IRQS_OFF", \
0x02: "IRQS_NOSUPPORT", \
0x04: "NEED_RESCHED", \
0x08: "HARDIRQ", \
0x10: "SOFTIRQ" }
def trace_flag_str(value):
string = ""
print_delim = 0
keys = trace_flags.keys()
for idx in keys:
if not value and not idx:
string += "NONE"
break
if idx and (value & idx) == idx:
if print_delim:
string += " | "
string += trace_flags[idx]
print_delim = 1
value &= ~idx
return string
def taskState(state):
states = {
0 : "R",
1 : "S",
2 : "D",
64: "DEAD"
}
if state not in states:
return "Unknown"
return states[state]
class EventHeaders:
def __init__(self, common_cpu, common_secs, common_nsecs,
common_pid, common_comm):
self.cpu = common_cpu
self.secs = common_secs
self.nsecs = common_nsecs
self.pid = common_pid
self.comm = common_comm
def ts(self):
return (self.secs * (10 ** 9)) + self.nsecs
def ts_format(self):
return "%d.%d" % (self.secs, int(self.nsecs / 1000))
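The bit-flag decoding used by ``trace_flag_str`` above can be sketched in self-contained Python 3 form. Where Core.py walks a module-level ``trace_flags`` table, this standalone variant (an illustrative stand-in, not part of perf) takes the table as an argument:

```python
# Standalone sketch of the flag-decoding idea behind trace_flag_str:
# join the labels of all bits set in `value`, clearing each as it is
# matched; `names` maps single-bit values (plus 0) to their labels.
def decode_flags(value, names):
    if value == 0 and 0 in names:
        return names[0]
    parts = []
    for bit in sorted(names):
        if bit and value & bit == bit:
            parts.append(names[bit])
            value &= ~bit
    return " | ".join(parts)
```

For example, with the NONE/IRQS_OFF/.../SOFTIRQ table above, a value of 5 decodes to ``IRQS_OFF | NEED_RESCHED``.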
| gpl-2.0 |
mollstam/UnrealPy | UnrealPyEmbed/Source/Python/Lib/python27/user.py | 313 | 1627 | """Hook to allow user-specified customization code to run.
As a policy, Python doesn't run user-specified code on startup of
Python programs (interactive sessions execute the script specified in
the PYTHONSTARTUP environment variable if it exists).
However, some programs or sites may find it convenient to allow users
to have a standard customization file, which gets run when a program
requests it. This module implements such a mechanism. A program
that wishes to use the mechanism must execute the statement
import user
The user module looks for a file .pythonrc.py in the user's home
directory and if it can be opened, execfile()s it in its own global
namespace. Errors during this phase are not caught; that's up to the
program that imports the user module, if it wishes.
The user's .pythonrc.py could conceivably test for sys.version if it
wishes to do different things depending on the Python version.
"""
from warnings import warnpy3k
warnpy3k("the user module has been removed in Python 3.0", stacklevel=2)
del warnpy3k
import os
home = os.curdir # Default
if 'HOME' in os.environ:
home = os.environ['HOME']
elif os.name == 'posix':
home = os.path.expanduser("~/")
elif os.name == 'nt': # Contributed by Jeff Bauer
if 'HOMEPATH' in os.environ:
if 'HOMEDRIVE' in os.environ:
home = os.environ['HOMEDRIVE'] + os.environ['HOMEPATH']
else:
home = os.environ['HOMEPATH']
pythonrc = os.path.join(home, ".pythonrc.py")
try:
f = open(pythonrc)
except IOError:
pass
else:
f.close()
execfile(pythonrc)
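Since `execfile()` no longer exists in Python 3 (this module was removed there, as the warning says), the same hook can be sketched with `exec()`. This is an assumption-laden sketch, not a drop-in replacement for the `user` module; the function name `run_pythonrc` is invented:

```python
# Python 3 sketch of the execfile()-based hook above: read the user's
# rc file and exec() it in a private namespace.  A missing rc file is
# silently ignored, mirroring the IOError handling in user.py.
import os

def run_pythonrc(path=None):
    path = path or os.path.join(os.path.expanduser("~"), ".pythonrc.py")
    namespace = {"__name__": "__pythonrc__"}
    try:
        with open(path) as f:
            source = f.read()
    except OSError:
        return None            # no rc file: nothing to run
    exec(compile(source, path, "exec"), namespace)
    return namespace
```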
| mit |
meejah/cuvner | cuv/watch.py | 1 | 3461 | # great kurt-idea: make "lessopen" shit work with this, so "if less a
# file, and a .coverage 'up there somewhere' then highlight it"
# prints out annotated coverage to the terminal, with a
# banner-per-file showing coverage, and a total at the end.
from __future__ import print_function, absolute_import
import sys
import math
from os.path import realpath, join, split
from time import sleep
import coverage
import colors
import click
import six
from pygments import highlight
from pygments.formatters import Terminal256Formatter, TerminalFormatter
from pygments.lexers import get_lexer_by_name
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from . import util
from .analysis import CoverageAnalysis, create_analysis
from .diff import diff_coverage_data
def show_missing(data, file_coverage, common):
max_fname = max([len(nm) - common for nm in file_coverage])
format_str = u'{:>%d}: {}' % (max_fname,)
width = click.get_terminal_size()[0]
for fname in file_coverage:
analysis = create_analysis(data, fname)
if len(analysis.missing):
print(format_str.format(fname[common:], analysis._missing_formatted))
def _new_covered_lines(data_a, data_b, cfg):
"""
"""
files_a = set(data_a.get_data().measured_files())
files_b = set(data_b.get_data().measured_files())
common_files = files_a.intersection(files_b)
new_coverage = {}
for fname in common_files:
a = create_analysis(data_a, fname)
b = create_analysis(data_b, fname)
new_covered_lines = []
if a.statements == b.statements:
for x in a.statements:
if x in a.missing:
if x not in b.missing:
new_covered_lines.append(x)
if new_covered_lines:
new_coverage[fname] = new_covered_lines
return new_coverage
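The per-file logic of `_new_covered_lines` above reduces to set arithmetic: a line is newly covered when it was missing in the old run and is no longer missing in the new one. A sketch with plain sets (the function name is illustrative):

```python
# Set-based core of _new_covered_lines: lines that were missing before
# (old_missing) but are not missing now (new_missing), restricted to
# lines that are actual statements.
def newly_covered(old_missing, new_missing, statements):
    return sorted(set(statements) & set(old_missing) - set(new_missing))
```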
def watch_coverage(keywords, cfg):
file_coverage = list(cfg.measured_filenames(keywords))
file_coverage.sort()
common = util.common_root_path(file_coverage)
existing_data = cfg.data
coverage_fname = cfg.get_data().data_files.filename
# ugh
class Handler(FileSystemEventHandler):
def __init__(self):
# a real closure!
self._existing_data = existing_data
def on_modified(self, event):
if event.src_path == coverage_fname:
click.echo("New coverage data:")
new_data = coverage.Coverage(data_file=coverage_fname)
new_data.load()
diff_coverage_data(self._existing_data, new_data, cfg)
newly_covered = _new_covered_lines(self._existing_data, new_data, cfg)
self._existing_data = new_data
click.echo('----')
click.echo("newly covered:")
for k, v in newly_covered.items():
click.echo(" {}: {}".format(k, v))
click.echo('----')
show_missing(self._existing_data, file_coverage, len(common))
handler = Handler()
observer = Observer()
print("Watching: {}".format(coverage_fname))
observer.schedule(handler, split(coverage_fname)[0])
observer.start()
show_missing(existing_data, file_coverage, len(common))
try:
while True:
sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
| mit |
kiyoto/statsmodels | statsmodels/tools/linalg.py | 25 | 8038 | '''local, adjusted version from scipy.linalg.basic.py
changes:
The only changes are that additional results are returned
'''
from __future__ import print_function
from statsmodels.compat.python import lmap, range
import numpy as np
from scipy.linalg import svd as decomp_svd
from scipy.linalg.lapack import get_lapack_funcs
from numpy import asarray, zeros, sum, conjugate, dot, transpose
import numpy
from numpy import asarray_chkfinite, single
from numpy.linalg import LinAlgError
### Linear Least Squares
def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0):
"""Compute least-squares solution to equation :m:`a x = b`
Compute a vector x such that the 2-norm :m:`|b - a x|` is minimised.
Parameters
----------
a : array, shape (M, N)
b : array, shape (M,) or (M, K)
cond : float
Cutoff for 'small' singular values; used to determine effective
rank of a. Singular values smaller than rcond*largest_singular_value
are considered zero.
overwrite_a : boolean
Discard data in a (may enhance performance)
overwrite_b : boolean
Discard data in b (may enhance performance)
Returns
-------
x : array, shape (N,) or (N, K) depending on shape of b
Least-squares solution
residues : array, shape () or (1,) or (K,)
Sums of residues, squared 2-norm for each column in :m:`b - a x`
If rank of matrix a is < N or > M this is an empty array.
If b was 1-d, this is an (1,) shape array, otherwise the shape is (K,)
rank : integer
Effective rank of matrix a
s : array, shape (min(M,N),)
Singular values of a. The condition number of a is abs(s[0]/s[-1]).
Raises LinAlgError if computation does not converge
"""
a1, b1 = lmap(asarray_chkfinite, (a, b))
if a1.ndim != 2:
raise ValueError('expected matrix')
m, n = a1.shape
if b1.ndim == 2:
nrhs = b1.shape[1]
else:
nrhs = 1
if m != b1.shape[0]:
raise ValueError('incompatible dimensions')
gelss, = get_lapack_funcs(('gelss',), (a1, b1))
if n > m:
# need to extend b matrix as it will be filled with
# a larger solution matrix
b2 = zeros((n, nrhs), dtype=gelss.dtype)
if b1.ndim == 2:
b2[:m, :] = b1
else:
b2[:m, 0] = b1
b1 = b2
overwrite_a = overwrite_a or (a1 is not a and not hasattr(a, '__array__'))
overwrite_b = overwrite_b or (b1 is not b and not hasattr(b, '__array__'))
if gelss.module_name[:7] == 'flapack':
# get optimal work array
work = gelss(a1, b1, lwork=-1)[4]
lwork = work[0].real.astype(np.int)
v, x, s, rank, work, info = gelss(
a1, b1, cond=cond, lwork=lwork, overwrite_a=overwrite_a,
overwrite_b=overwrite_b)
else:
raise NotImplementedError('calling gelss from %s' %
gelss.module_name)
if info > 0:
raise LinAlgError("SVD did not converge in Linear Least Squares")
if info < 0:
raise ValueError('illegal value in %d-th argument of '
'internal gelss' % -info)
resids = asarray([], dtype=x.dtype)
if n < m:
x1 = x[:n]
if rank == n:
resids = sum(x[n:]**2, axis=0)
x = x1
return x, resids, rank, s
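What `lstsq` computes can be illustrated without LAPACK: for an overdetermined system `a x = b`, the 2-norm minimiser satisfies the normal equations `(a^T a) x = a^T b`. A tiny pure-Python version for a single-column `a` (a sketch of the math, not the gelss code path above):

```python
# One-column least squares via the normal equations: x = (a.b)/(a.a),
# with the summed squared residuals returned alongside the solution,
# mirroring the (x, resids, ...) shape of lstsq above.
def lstsq_1col(a, b):
    ata = sum(ai * ai for ai in a)
    atb = sum(ai * bi for ai, bi in zip(a, b))
    x = atb / float(ata)
    resid = sum((bi - ai * x) ** 2 for ai, bi in zip(a, b))
    return x, resid
```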
def pinv(a, cond=None, rcond=None):
"""Compute the (Moore-Penrose) pseudo-inverse of a matrix.
Calculate a generalized inverse of a matrix using a least-squares
solver.
Parameters
----------
a : array, shape (M, N)
Matrix to be pseudo-inverted
cond, rcond : float
Cutoff for 'small' singular values in the least-squares solver.
Singular values smaller than rcond*largest_singular_value are
considered zero.
Returns
-------
B : array, shape (N, M)
Raises LinAlgError if computation does not converge
Examples
--------
>>> from numpy import *
>>> a = random.randn(9, 6)
>>> B = linalg.pinv(a)
>>> allclose(a, dot(a, dot(B, a)))
True
>>> allclose(B, dot(B, dot(a, B)))
True
"""
a = asarray_chkfinite(a)
b = numpy.identity(a.shape[0], dtype=a.dtype)
if rcond is not None:
cond = rcond
return lstsq(a, b, cond=cond)[0]
eps = numpy.finfo(float).eps
feps = numpy.finfo(single).eps
_array_precision = {'f': 0, 'd': 1, 'F': 0, 'D': 1}
def pinv2(a, cond=None, rcond=None):
"""Compute the (Moore-Penrose) pseudo-inverse of a matrix.
Calculate a generalized inverse of a matrix using its
singular-value decomposition and including all 'large' singular
values.
Parameters
----------
a : array, shape (M, N)
Matrix to be pseudo-inverted
cond, rcond : float or None
Cutoff for 'small' singular values.
Singular values smaller than rcond*largest_singular_value are
considered zero.
If None or -1, suitable machine precision is used.
Returns
-------
B : array, shape (N, M)
Raises LinAlgError if SVD computation does not converge
Examples
--------
>>> from numpy import *
>>> a = random.randn(9, 6)
>>> B = linalg.pinv2(a)
>>> allclose(a, dot(a, dot(B, a)))
True
>>> allclose(B, dot(B, dot(a, B)))
True
"""
a = asarray_chkfinite(a)
u, s, vh = decomp_svd(a)
t = u.dtype.char
if rcond is not None:
cond = rcond
if cond in [None, -1]:
cond = {0: feps*1e3, 1: eps*1e6}[_array_precision[t]]
m, n = a.shape
cutoff = cond*numpy.maximum.reduce(s)
psigma = zeros((m, n), t)
for i in range(len(s)):
if s[i] > cutoff:
psigma[i, i] = 1.0/conjugate(s[i])
# XXX: use lapack/blas routines for dot
return transpose(conjugate(dot(dot(u, psigma), vh)))
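The cutoff rule inside `pinv2` is easiest to see for a diagonal matrix, whose singular values are just the absolute diagonal entries: values below `cond * max(s)` are treated as zero, the rest are reciprocated. An illustrative sketch, not the SVD code path above:

```python
# The pinv2 cutoff rule for a diagonal matrix: entries with magnitude
# at or below cond * max(|d|) map to 0, everything else to 1/d.
def diag_pinv(diag, cond=1e-15):
    cutoff = cond * max(abs(d) for d in diag)
    return [1.0 / d if abs(d) > cutoff else 0.0 for d in diag]
```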
def logdet_symm(m, check_symm=False):
"""
Return log(det(m)) asserting positive definiteness of m.
Parameters
----------
m : array-like
2d array that is positive-definite (and symmetric)
Returns
-------
logdet : float
The log-determinant of m.
"""
from scipy import linalg
if check_symm:
if not np.all(m == m.T): # would be nice to short-circuit check
raise ValueError("m is not symmetric.")
c, _ = linalg.cho_factor(m, lower=True)
return 2*np.sum(np.log(c.diagonal()))
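The identity behind `logdet_symm`: if `m = L L^T` is the Cholesky factorization, then `det(m) = det(L)^2 = (prod diag(L))^2`, so `log det(m) = 2 * sum(log(diag(L)))`. For a diagonal positive-definite matrix the factor is just the elementwise square root, which makes the identity checkable by hand:

```python
import math

# Cholesky log-determinant identity for a diagonal positive-definite
# matrix: diag(L) = sqrt(diag(m)), so logdet = 2 * sum(log(sqrt(d))).
def logdet_diag(diag):
    chol_diag = [math.sqrt(d) for d in diag]
    return 2.0 * sum(math.log(c) for c in chol_diag)
```

For `diag = [4, 9]` this gives `log(36)`, matching `det([[4,0],[0,9]]) = 36`.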
def stationary_solve(r, b):
"""
Solve a linear system for a Toeplitz correlation matrix.
A Toeplitz correlation matrix represents the covariance of a
stationary series with unit variance.
Parameters
----------
r : array-like
A vector describing the coefficient matrix. r[0] is the first
band next to the diagonal, r[1] is the second band, etc.
b : array-like
The right-hand side for which we are solving, i.e. we solve
Tx = b and return b, where T is the Toeplitz coefficient matrix.
Returns
-------
The solution to the linear system.
"""
db = r[0:1]
dim = b.ndim
if b.ndim == 1:
b = b[:, None]
x = b[0:1,:]
for j in range(1, len(b)):
rf = r[0:j][::-1]
a = (b[j,:] - np.dot(rf, x)) / (1 - np.dot(rf, db[::-1]))
z = x - np.outer(db[::-1], a)
x = np.concatenate((z, a[None, :]), axis=0)
if j == len(b) - 1:
break
rn = r[j]
a = (rn - np.dot(rf, db)) / (1 - np.dot(rf, db[::-1]))
z = db - a*db[::-1]
db = np.concatenate((z, np.r_[a]))
if dim == 1:
x = x[:, 0]
return x
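The smallest non-trivial case of the system `stationary_solve` targets is a 2x2 Toeplitz correlation matrix `T = [[1, r], [r, 1]]` (unit variance, one band `r = r[0]`). Solving `T x = b` directly gives a closed form that can sanity-check the recursion; this sketch deliberately avoids numpy:

```python
# Direct 2x2 solve of T x = b for T = [[1, r], [r, 1]]:
# det(T) = 1 - r^2, and Cramer's rule gives each component.
def toeplitz2_solve(r, b):
    det = 1.0 - r * r
    x0 = (b[0] - r * b[1]) / det
    x1 = (b[1] - r * b[0]) / det
    return [x0, x1]
```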
if __name__ == '__main__':
#for checking only,
#Note on Windows32:
# linalg doesn't always produce the same results in each call
import scipy.linalg
a0 = np.random.randn(100,10)
b0 = a0.sum(1)[:, None] + np.random.randn(100,3)
lstsq(a0,b0)
pinv(a0)
pinv2(a0)
x = pinv(a0)
x2=scipy.linalg.pinv(a0)
print(np.max(np.abs(x-x2)))
x = pinv2(a0)
x2 = scipy.linalg.pinv2(a0)
print(np.max(np.abs(x-x2)))
| bsd-3-clause |
cbmoore/statsmodels | statsmodels/sandbox/stats/tests/__init__.py | 219 | 6354 | '''
Econometrics for a Datarich Environment
=======================================
Introduction
------------
In many cases we are performing statistical analysis when many observed variables are
available, when we are in a data rich environment. Machine learning has a wide variety
of tools for dimension reduction and penalization when there are many variables compared
to the number of observation. Chemometrics has a long tradition of using Partial Least
Squares, NIPALS and similar in these cases. In econometrics the same problem shows up
when there are either many possible regressors, many (weak) instruments or when there are
a large number of moment conditions in GMM.
This section is intended to collect some models and tools in this area that are relevant
for the statistical analysis and econometrics.
Covariance Matrices
===================
Several methods are available to reduce the small sample noise in estimated covariance
matrices with many variables.
Some applications:
weighting matrix with many moments,
covariance matrix for portfolio choice
Dimension Reduction
===================
Principal Component and Partial Least Squares try to extract the important low dimensional
factors from the data with many variables.
Regression with many regressors
===============================
Factor models, selection of regressors and shrinkage and penalization are used to improve
the statistical properties, when the presence of too many regressors leads to over-fitting
and too noisy small sample estimators and statistics.
Regression with many moments or many instruments
================================================
The same tools apply and can be used in these two cases.
e.g. Tikhonov regularization of the weighting matrix in GMM; similar to Ridge regression, the
weighting matrix can be shrunk towards the identity matrix.
Simplest case will be part of GMM. I don't know how much will be standalone
functions.
Intended Content
================
PLS
---
what should be available in class?
Factormodel and supporting helper functions
-------------------------------------------
PCA based
~~~~~~~~~
First version based on PCA, following Stock/Watson and Bai/Ng, and recent papers on the
selection of the number of factors. Not sure about Forni et al. in approach.
Basic support of this needs additional results for PCA, error covariance matrix
of data on reduced factors, required for criteria in Bai/Ng.
Selection criteria based on eigenvalue cutoffs.
Paper on PCA and structural breaks. Could add additional results during
find_nfact to test for parameter stability. I haven't read the paper yet.
Idea: for forecasting, use up to h-step ahead endogenous variables to directly
get the forecasts.
Asymptotic results and distribution: not too much idea yet.
Standard OLS results are conditional on factors, paper by Haerdle (abstract
seems to suggest that this is ok, Park 2009).
Simulation: add function to simulate DGP of Bai/Ng and recent extension.
Sensitivity of selection criteria to heteroscedasticity and autocorrelation.
Bai, J. & Ng, S., 2002. Determining the Number of Factors in
Approximate Factor Models. Econometrica, 70(1), pp.191-221.
Kapetanios, G., 2010. A Testing Procedure for Determining the Number
of Factors in Approximate Factor Models With Large Datasets. Journal
of Business and Economic Statistics, 28(3), pp.397-409.
Onatski, A., 2010. Determining the Number of Factors from Empirical
Distribution of Eigenvalues. Review of Economics and Statistics,
92(4), pp.1004-1016.
Alessi, L., Barigozzi, M. & Capasso, M., 2010. Improved penalization
for determining the number of factors in approximate factor models.
Statistics & Probability Letters, 80(23-24), pp.1806-1813.
Breitung, J. & Eickmeier, S., Testing for structural breaks in dynamic
factor models. Journal of Econometrics, In Press, Accepted Manuscript.
Available at:
http://www.sciencedirect.com/science/article/B6VC0-51G3W92-1/2/f45ce2332443374fd770e42e5a68ddb4
[Accessed November 15, 2010].
Croux, C., Renault, E. & Werker, B., 2004. Dynamic factor models.
Journal of Econometrics, 119(2), pp.223-230.
Forni, M. et al., 2009. Opening the Black Box: Structural Factor
Models with Large Cross Sections. Econometric Theory, 25(05),
pp.1319-1347.
Forni, M. et al., 2000. The Generalized Dynamic-Factor Model:
Identification and Estimation. Review of Economics and Statistics,
82(4), pp.540-554.
Forni, M. & Lippi, M., The general dynamic factor model: One-sided
representation results. Journal of Econometrics, In Press, Accepted
Manuscript. Available at:
http://www.sciencedirect.com/science/article/B6VC0-51FNPJN-1/2/4fcdd0cfb66e3050ff5d19bf2752ed19
[Accessed November 15, 2010].
Park, B.U. et al., 2009. Time Series Modelling With Semiparametric
Factor Dynamics. Journal of the American Statistical Association,
104(485), pp.284-298.
other factor algorithm
~~~~~~~~~~~~~~~~~~~~~~
PLS should fit in reasonably well.
Bai/Ng have a recent paper, where they compare LASSO, PCA, and similar, individual
and in combination.
Check how much we can use scikits.learn for this.
miscellaneous
~~~~~~~~~~~~~
Time series modeling of factors for prediction, ARMA, VARMA.
SUR and correlation structure
What about sandwich estimation, robust covariance matrices?
Similarity to Factor-Garch and Go-Garch
Updating: incremental PCA, ...?
TODO next
=========
MVOLS : OLS with multivariate endogenous and identical exogenous variables.
rewrite and expand current varma_process.VAR
PCA : write a class after all, and/or adjust the current donated class
and keep adding required statistics, e.g.
residual variance, projection of X on k-factors, ... updating ?
FactorModelUnivariate : started, does basic principal component regression,
based on standard information criteria, not Bai/Ng adjusted
FactorModelMultivariate : follow pattern for univariate version and use
MVOLS
'''
| bsd-3-clause |
ram8647/gcb-mobilecsp | common/caching.py | 3 | 19255 | # Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper classes to implement caching."""
__author__ = 'Pavel Simakov (psimakov@google.com)'
import collections
import datetime
import logging
import sys
import threading
import unittest
import appengine_config
from models.counters import PerfCounter
def iter_all(query, batch_size=100):
"""Yields query results iterator. Proven method for large datasets."""
prev_cursor = None
any_records = True
while any_records:
any_records = False
query = query.with_cursor(prev_cursor)
for entity in query.run(batch_size=batch_size):
any_records = True
yield entity
prev_cursor = query.cursor()
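The cursor-pagination loop of `iter_all` above can be exercised against an in-memory stand-in for the datastore query. `FakeQuery` and its behaviour are invented for illustration; the real App Engine query API differs:

```python
# Cursor pagination against an in-memory list: with_cursor() positions
# the query, run() yields one batch, cursor() reports where to resume.
class FakeQuery(object):
    def __init__(self, rows):
        self.rows = rows
        self._start = 0
        self._cursor = 0

    def with_cursor(self, cursor):
        self._start = cursor or 0
        return self

    def run(self, batch_size=100):
        stop = min(self._start + batch_size, len(self.rows))
        for row in self.rows[self._start:stop]:
            yield row
        self._cursor = stop        # runs after the batch is consumed

    def cursor(self):
        return self._cursor

def iter_all_fake(query, batch_size=2):
    """Same loop shape as iter_all above: stop on the first empty batch."""
    prev_cursor = None
    any_records = True
    while any_records:
        any_records = False
        query = query.with_cursor(prev_cursor)
        for entity in query.run(batch_size=batch_size):
            any_records = True
            yield entity
        prev_cursor = query.cursor()
```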
class AbstractScopedSingleton(object):
"""A singleton object bound to and managed by a container.
This singleton stores its instance inside the container. When container is
wiped, the singleton instance is garbage collected and destroyed. You can
use a dict as a container and then wipe it yourself. You can use
threading.local as a container and it will be wiped automatically when
thread exits.
"""
CONTAINER = None
@classmethod
def _instances(cls):
assert cls.CONTAINER is not None
if 'instances' not in cls.CONTAINER:
cls.CONTAINER['instances'] = {}
return cls.CONTAINER['instances']
@classmethod
def instance(cls, *args, **kwargs):
"""Creates new or returns existing instance of the object."""
# pylint: disable=protected-access
_instance = cls._instances().get(cls)
if not _instance:
try:
_instance = cls(*args, **kwargs)
except:
logging.exception(
'Failed to instantiate %s: %s, %s', cls, args, kwargs)
raise
appengine_config.log_appstats_event('%s.create' % cls.__name__, {})
_instance._init_args = (args, kwargs)
cls._instances()[cls] = _instance
else:
_before = _instance._init_args
_now = (args, kwargs)
if _now != _before:
raise AssertionError(
'Singleton initiated with %s already exists. '
'Failed to re-initialized it with %s.' % (_before, _now))
return _instance
@classmethod
def clear_all(cls):
"""Clear all active instances."""
if cls._instances():
for _instance in list(cls._instances().values()):
_instance.clear()
del cls.CONTAINER['instances']
@classmethod
def clear_instance(cls):
"""Destroys the instance of this cls."""
appengine_config.log_appstats_event(
'%s.destroy' % cls.__name__, {})
_instance = cls._instances().get(cls)
if _instance:
del cls._instances()[cls]
def clear(self):
"""Destroys this object and its content."""
appengine_config.log_appstats_event(
'%s.destroy' % self.__class__.__name__, {})
_instance = self._instances().get(self.__class__)
if _instance:
del self._instances()[self.__class__]
_process_scoped_singleton = {}
_request_scoped_singleton = threading.local()
class ProcessScopedSingleton(AbstractScopedSingleton):
"""A singleton object bound to the process."""
CONTAINER = _process_scoped_singleton
class RequestScopedSingleton(AbstractScopedSingleton):
"""A singleton object bound to the request scope."""
CONTAINER = _request_scoped_singleton.__dict__
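The container trick above is worth isolating: instances live in a plain dict, and making that dict the `__dict__` of a `threading.local()` turns the store per-thread, so it is garbage-collected with the thread. This sketch ignores the argument-mismatch assertion the real class performs; names are invented:

```python
import threading

# Per-thread singleton store: each thread sees its own 'instances'
# dict inside the threading.local object, so instances never leak
# across threads and vanish when the thread exits.
_local = threading.local()

def scoped_instances():
    if not hasattr(_local, "instances"):
        _local.instances = {}
    return _local.instances

def instance_of(cls, *args):
    store = scoped_instances()
    if cls not in store:
        store[cls] = cls(*args)   # later calls ignore differing args
    return store[cls]
```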
class LRUCache(object):
"""A dict that supports capped size and LRU eviction of items."""
def __init__(
self, max_item_count=None,
max_size_bytes=None, max_item_size_bytes=None):
assert max_item_count or max_size_bytes
if max_item_count:
assert max_item_count > 0
if max_size_bytes:
assert max_size_bytes > 0
self.total_size = 0
self.max_item_count = max_item_count
self.max_size_bytes = max_size_bytes
self.max_item_size_bytes = max_item_size_bytes
self.items = collections.OrderedDict([])
def get_entry_size(self, key, value):
"""Computes item size. Override and compute properly for your items."""
return sys.getsizeof(key) + sys.getsizeof(value)
def _compute_current_size(self):
total = 0
for key, value in self.items.iteritems():
total += self.get_entry_size(key, value)
return total
def _allocate_space(self, key, value):
"""Remove items in FIFO order until size constraints are met."""
entry_size = self.get_entry_size(key, value)
if self.max_item_size_bytes and entry_size > self.max_item_size_bytes:
return False
while True:
over_count = False
over_size = False
if self.max_item_count:
over_count = len(self.items) >= self.max_item_count
if self.max_size_bytes:
over_size = self.total_size + entry_size >= self.max_size_bytes
if not (over_count or over_size):
if self.max_size_bytes:
self.total_size += entry_size
assert self.total_size < self.max_size_bytes
return True
if self.items:
_key, _value = self.items.popitem(last=False)
if self.max_size_bytes:
self.total_size -= self.get_entry_size(_key, _value)
assert self.total_size >= 0
else:
break
return False
def _record_access(self, key):
"""Pop and re-add the item."""
item = self.items.pop(key)
self.items[key] = item
def contains(self, key):
"""Checks if item is contained without accessing it."""
assert key
return key in self.items
def put(self, key, value):
assert key
if self._allocate_space(key, value):
self.items[key] = value
return True
return False
def get(self, key):
"""Accessing item makes it less likely to be evicted."""
assert key
if key in self.items:
self._record_access(key)
return True, self.items[key]
return False, None
def delete(self, key):
assert key
if key in self.items:
del self.items[key]
return True
return False
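The eviction mechanics of `LRUCache` rest on two `OrderedDict` properties: `popitem(last=False)` removes the oldest insertion, and popping plus re-adding a key on access makes it the newest. A minimal count-capped sketch isolating just that (unlike `LRUCache.get`, this returns the value directly rather than a `(found, value)` pair):

```python
import collections

# Minimal count-capped LRU: get() re-inserts the key so it becomes
# most-recently used; put() evicts from the front until there is room.
class MiniLRU(object):
    def __init__(self, max_items):
        self.max_items = max_items
        self.items = collections.OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items[key] = self.items.pop(key)   # mark as recently used
        return self.items[key]

    def put(self, key, value):
        while len(self.items) >= self.max_items:
            self.items.popitem(last=False)      # evict least recent
        self.items[key] = value
```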
class NoopCacheConnection(object):
"""Connection to no-op cache that provides no caching."""
def put(self, *unused_args, **unused_kwargs):
return None
def get(self, *unused_args, **unused_kwargs):
return False, None
def delete(self, *unused_args, **unused_kwargs):
return None
class AbstractCacheEntry(object):
"""Object representation while in cache."""
# we don't track deletions; deleted item will hang around this long
CACHE_ENTRY_TTL_SEC = 5 * 60
@classmethod
def internalize(cls, unused_key, *args, **kwargs):
"""Converts incoming objects into cache entry object."""
return (args, kwargs)
@classmethod
def externalize(cls, unused_key, *args, **kwargs):
"""Converts cache entry into external object."""
return (args, kwargs)
def has_expired(self):
age = (datetime.datetime.utcnow() - self.created_on).total_seconds()
return age > self.CACHE_ENTRY_TTL_SEC
def is_up_to_date(self, unused_key, unused_update):
"""Compare entry and the update object to decide if entry is fresh."""
raise NotImplementedError()
def updated_on(self):
"""Return last update time for entity."""
raise NotImplementedError()
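The TTL test from `AbstractCacheEntry.has_expired`, standalone: an entry's age is measured from its `created_on` timestamp against a fixed time-to-live. Names mirror the class above; this is a sketch with an injectable clock for testability:

```python
import datetime

# Entries older than the TTL count as expired; 'now' defaults to the
# current UTC time but can be passed in, unlike the method above.
CACHE_ENTRY_TTL_SEC = 5 * 60

def has_expired(created_on, now=None):
    now = now or datetime.datetime.utcnow()
    return (now - created_on).total_seconds() > CACHE_ENTRY_TTL_SEC
```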
class AbstractCacheConnection(object):
PERSISTENT_ENTITY = None
CACHE_ENTRY = None
@classmethod
def init_counters(cls):
name = cls.__name__
cls.CACHE_RESYNC = PerfCounter(
'gcb-models-%s-cache-resync' % name,
'A number of times an vfs cache was updated.')
cls.CACHE_PUT = PerfCounter(
'gcb-models-%s-cache-put' % name,
'A number of times an object was put into cache.')
cls.CACHE_GET = PerfCounter(
'gcb-models-%s-cache-get' % name,
'A number of times an object was pulled from cache.')
cls.CACHE_DELETE = PerfCounter(
'gcb-models-%s-cache-delete' % name,
'A number of times an object was deleted from cache.')
cls.CACHE_HIT = PerfCounter(
'gcb-models-%s-cache-hit' % name,
'A number of times an object was found cache.')
cls.CACHE_HIT_NONE = PerfCounter(
'gcb-models-%s-cache-hit-none' % name,
'A number of times an object was found cache, but it was None.')
cls.CACHE_MISS = PerfCounter(
'gcb-models-%s-cache-miss' % name,
'A number of times an object was not found in the cache.')
cls.CACHE_NOT_FOUND = PerfCounter(
'gcb-models-%s-cache-not-found' % name,
'A number of times an object was requested, but was not found in '
'the cache or underlying provider.')
cls.CACHE_UPDATE_COUNT = PerfCounter(
'gcb-models-%s-cache-update-count' % name,
'A number of update objects received.')
cls.CACHE_EVICT = PerfCounter(
'gcb-models-%s-cache-evict' % name,
'A number of times an object was evicted from cache because it was '
'changed.')
cls.CACHE_EXPIRE = PerfCounter(
'gcb-models-%s-cache-expire' % name,
'A number of times an object has expired from cache because it was '
'too old.')
@classmethod
def make_key_prefix(cls, ns):
return '%s:%s' % (cls.__name__, ns)
@classmethod
def make_key(cls, ns, entry_key):
return '%s:%s' % (cls.make_key_prefix(ns), entry_key)
@classmethod
def is_enabled(cls):
raise NotImplementedError()
@classmethod
def new_connection(cls, *args, **kwargs):
if not cls.is_enabled():
return NoopCacheConnection()
conn = cls(*args, **kwargs)
# pylint: disable=protected-access
conn.apply_updates(conn._get_incremental_updates())
return conn
def __init__(self, namespace):
"""Override this method and properly instantiate self.cache."""
self.namespace = namespace
self.cache = None
appengine_config.log_appstats_event(
'%s.connect' % self.__class__.__name__, {'namespace': namespace})
def apply_updates(self, updates):
"""Applies a list of global changes to the local cache."""
self.CACHE_RESYNC.inc()
for key, update in updates.iteritems():
_key = self.make_key(self.namespace, key)
found, entry = self.cache.get(_key)
if not found:
continue
if entry is None:
self.CACHE_EVICT.inc()
self.cache.delete(_key)
continue
if not entry.is_up_to_date(key, update):
self.CACHE_EVICT.inc()
self.cache.delete(_key)
continue
if entry.has_expired():
self.CACHE_EXPIRE.inc()
self.cache.delete(_key)
continue
def _get_most_recent_updated_on(self):
"""Get the most recent item cached. Datastore deletions are missed..."""
has_items = False
max_updated_on = datetime.datetime.fromtimestamp(0)
prefix = self.make_key_prefix(self.namespace)
for key, entry in self.cache.items.iteritems():
if not key.startswith(prefix):
continue
has_items = True
if not entry:
continue
updated_on = entry.updated_on()
if not updated_on: # old entities may be missing this field
updated_on = datetime.datetime.fromtimestamp(0)
if updated_on > max_updated_on:
max_updated_on = updated_on
return has_items, max_updated_on
def get_updates_when_empty(self):
"""Override this method to pre-load cache when it's completely empty."""
return {}
def _get_incremental_updates(self):
"""Gets a list of global changes older than the most recent item cached.
WARNING!!! We fetch the updates since the timestamp of the oldest item
we have cached so far. This will bring all objects that have changed or
were created since that time.
This will NOT bring notifications about object deletions. Thus the cache
will continue to serve deleted objects until they expire.
Returns:
a dict of {key: update} objects that represent recent updates
"""
has_items, updated_on = self._get_most_recent_updated_on()
if not has_items:
return self.get_updates_when_empty()
q = self.PERSISTENT_ENTITY.all()
if updated_on:
q.filter('updated_on > ', updated_on)
result = {
entity.key().name(): entity for entity in iter_all(q)}
self.CACHE_UPDATE_COUNT.inc(len(result.keys()))
return result
def put(self, key, *args):
self.CACHE_PUT.inc()
self.cache.put(
self.make_key(self.namespace, key),
self.CACHE_ENTRY.internalize(key, *args))
def get(self, key):
self.CACHE_GET.inc()
_key = self.make_key(self.namespace, key)
found, entry = self.cache.get(_key)
if not found:
self.CACHE_MISS.inc()
return False, None
if not entry:
self.CACHE_HIT_NONE.inc()
return True, None
if entry.has_expired():
self.CACHE_EXPIRE.inc()
self.cache.delete(_key)
return False, None
self.CACHE_HIT.inc()
return True, self.CACHE_ENTRY.externalize(key, entry)
def delete(self, key):
self.CACHE_DELETE.inc()
self.cache.delete(self.make_key(self.namespace, key))
class LRUCacheTests(unittest.TestCase):
def test_ordereddict_works(self):
_dict = collections.OrderedDict([])
_dict['a'] = '1'
_dict['b'] = '2'
_dict['c'] = '3'
self.assertEqual(('a', '1'), _dict.popitem(last=False))
self.assertEqual(('c', '3'), _dict.popitem(last=True))
def test_initialization(self):
with self.assertRaises(AssertionError):
LRUCache()
with self.assertRaises(AssertionError):
LRUCache(max_item_count=-1)
with self.assertRaises(AssertionError):
LRUCache(max_size_bytes=-1)
LRUCache(max_item_count=1)
LRUCache(max_size_bytes=1)
def test_evict_by_count(self):
cache = LRUCache(max_item_count=3)
self.assertTrue(cache.put('a', '1'))
self.assertTrue(cache.put('b', '2'))
self.assertTrue(cache.put('c', '3'))
self.assertTrue(cache.contains('a'))
self.assertTrue(cache.put('d', '4'))
self.assertFalse(cache.contains('a'))
self.assertEquals(cache.get('a'), (False, None))
def test_evict_by_count_lru(self):
cache = LRUCache(max_item_count=3)
self.assertTrue(cache.put('a', '1'))
self.assertTrue(cache.put('b', '2'))
self.assertTrue(cache.put('c', '3'))
self.assertEquals(cache.get('a'), (True, '1'))
self.assertTrue(cache.put('d', '4'))
self.assertTrue(cache.contains('a'))
self.assertFalse(cache.contains('b'))
def test_evict_by_size(self):
min_size = sys.getsizeof(LRUCache(max_item_count=1).items)
item_size = sys.getsizeof('a1')
cache = LRUCache(max_size_bytes=min_size + 3 * item_size)
self.assertTrue(cache.put('a', '1'))
self.assertTrue(cache.put('b', '2'))
self.assertTrue(cache.put('c', '3'))
self.assertFalse(cache.put('d', bytearray(1000)))
def test_evict_by_size_lru(self):
cache = LRUCache(max_size_bytes=5000)
self.assertTrue(cache.put('a', bytearray(4500)))
self.assertTrue(cache.put('b', '2'))
self.assertTrue(cache.put('c', '3'))
self.assertTrue(cache.contains('a'))
self.assertTrue(cache.put('d', bytearray(1000)))
self.assertFalse(cache.contains('a'))
self.assertTrue(cache.contains('b'))
def test_max_item_size(self):
cache = LRUCache(max_size_bytes=5000, max_item_size_bytes=1000)
self.assertFalse(cache.put('a', bytearray(4500)))
self.assertEquals(cache.get('a'), (False, None))
self.assertTrue(cache.put('a', bytearray(500)))
found, _ = cache.get('a')
self.assertTrue(found)
class SingletonTests(unittest.TestCase):
def test_singleton(self):
class A(RequestScopedSingleton):
def __init__(self, data):
self.data = data
class B(RequestScopedSingleton):
def __init__(self, data):
self.data = data
# TODO(psimakov): prevent direct instantiation
A('aaa')
B('bbb')
# using instance() creates and returns the same instance
RequestScopedSingleton.clear_all()
a = A.instance('bar')
b = A.instance('bar')
assert a.data == 'bar'
assert b.data == 'bar'
assert a is b
# re-initialization fails if arguments differ
RequestScopedSingleton.clear_all()
a = A.instance('dog')
try:
b = A.instance('cat')
raise Exception('Expected to fail.')
except AssertionError:
pass
# clearing one keep others
RequestScopedSingleton.clear_all()
a = A.instance('bar')
b = B.instance('cat')
a.clear()
c = B.instance('cat')
assert c is b
# clearing all clears all
RequestScopedSingleton.clear_all()
a = A.instance('bar')
b = B.instance('cat')
RequestScopedSingleton.clear_all()
c = A.instance('bar')
d = B.instance('cat')
assert a is not c
assert b is not d
def run_all_unit_tests():
"""Runs all unit tests in this module."""
suites_list = []
for test_class in [LRUCacheTests, SingletonTests]:
suite = unittest.TestLoader().loadTestsFromTestCase(test_class)
suites_list.append(suite)
unittest.TextTestRunner().run(unittest.TestSuite(suites_list))
if __name__ == '__main__':
run_all_unit_tests()
| apache-2.0 |
GustavoHennig/ansible | lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule_member.py | 5 | 10188 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, Darren Worrall <darren@iweb.co.uk>
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: cs_loadbalancer_rule_member
short_description: Manages load balancer rule members on Apache CloudStack based clouds.
description:
- Add and remove load balancer rule members.
version_added: '2.0'
author:
- "Darren Worrall (@dazworrall)"
- "René Moser (@resmo)"
options:
name:
description:
- The name of the load balancer rule.
required: true
ip_address:
description:
- Public IP address from where the network traffic will be load balanced from.
- Only needed to find the rule if C(name) is not unique.
required: false
default: null
aliases: [ 'public_ip' ]
vms:
description:
- List of VMs to assign to or remove from the rule.
required: true
aliases: [ 'vm' ]
state:
description:
- Should the VMs be present or absent from the rule.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
project:
description:
- Name of the project the firewall rule is related to.
required: false
default: null
domain:
description:
- Domain the rule is related to.
required: false
default: null
account:
description:
- Account the rule is related to.
required: false
default: null
zone:
description:
- Name of the zone in which the rule should be located.
- If not set, default zone is used.
required: false
default: null
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Add VMs to an existing load balancer
- local_action:
module: cs_loadbalancer_rule_member
name: balance_http
vms:
- web01
- web02
# Remove a VM from an existing load balancer
- local_action:
module: cs_loadbalancer_rule_member
name: balance_http
vms:
- web01
- web02
state: absent
# Rolling upgrade of hosts
- hosts: webservers
serial: 1
pre_tasks:
- name: Remove from load balancer
local_action:
module: cs_loadbalancer_rule_member
name: balance_http
vm: "{{ ansible_hostname }}"
state: absent
tasks:
# Perform update
post_tasks:
- name: Add to load balancer
local_action:
module: cs_loadbalancer_rule_member
name: balance_http
vm: "{{ ansible_hostname }}"
state: present
'''
RETURN = '''
---
id:
description: UUID of the rule.
returned: success
type: string
sample: a6f7a5fc-43f8-11e5-a151-feff819cdc9f
zone:
description: Name of zone the rule is related to.
returned: success
type: string
sample: ch-gva-2
project:
description: Name of project the rule is related to.
returned: success
type: string
sample: Production
account:
description: Account the rule is related to.
returned: success
type: string
sample: example account
domain:
description: Domain the rule is related to.
returned: success
type: string
sample: example domain
algorithm:
description: Load balancer algorithm used.
returned: success
type: string
sample: "source"
cidr:
description: CIDR to forward traffic from.
returned: success
type: string
sample: ""
name:
description: Name of the rule.
returned: success
type: string
sample: "http-lb"
description:
description: Description of the rule.
returned: success
type: string
sample: "http load balancer rule"
protocol:
description: Protocol of the rule.
returned: success
type: string
sample: "tcp"
public_port:
description: Public port.
returned: success
type: string
sample: 80
private_port:
  description: Private port.
returned: success
type: string
sample: 80
public_ip:
description: Public IP address.
returned: success
type: string
sample: "1.2.3.4"
vms:
description: Rule members.
returned: success
type: list
sample: '[ "web01", "web02" ]'
tags:
description: List of resource tags associated with the rule.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
state:
description: State of the rule.
returned: success
type: string
sample: "Add"
'''
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackLBRuleMember(AnsibleCloudStack):
def __init__(self, module):
super(AnsibleCloudStackLBRuleMember, self).__init__(module)
self.returns = {
'publicip': 'public_ip',
'algorithm': 'algorithm',
'cidrlist': 'cidr',
'protocol': 'protocol',
}
        # these values will be cast to int
self.returns_to_int = {
'publicport': 'public_port',
'privateport': 'private_port',
}
def get_rule(self):
args = self._get_common_args()
args['name'] = self.module.params.get('name')
args['zoneid'] = self.get_zone(key='id')
if self.module.params.get('ip_address'):
args['publicipid'] = self.get_ip_address(key='id')
rules = self.cs.listLoadBalancerRules(**args)
if rules:
if len(rules['loadbalancerrule']) > 1:
self.module.fail_json(msg="More than one rule having name %s. Please pass 'ip_address' as well." % args['name'])
return rules['loadbalancerrule'][0]
return None
def _get_common_args(self):
return {
'account': self.get_account(key='name'),
'domainid': self.get_domain(key='id'),
'projectid': self.get_project(key='id'),
}
def _get_members_of_rule(self, rule):
res = self.cs.listLoadBalancerRuleInstances(id=rule['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
return res.get('loadbalancerruleinstance', [])
def _ensure_members(self, operation):
if operation not in ['add', 'remove']:
self.module.fail_json(msg="Bad operation: %s" % operation)
rule = self.get_rule()
if not rule:
self.module.fail_json(msg="Unknown rule: %s" % self.module.params.get('name'))
existing = {}
for vm in self._get_members_of_rule(rule=rule):
existing[vm['name']] = vm['id']
wanted_names = self.module.params.get('vms')
        if operation == 'add':
cs_func = self.cs.assignToLoadBalancerRule
to_change = set(wanted_names) - set(existing.keys())
else:
cs_func = self.cs.removeFromLoadBalancerRule
to_change = set(wanted_names) & set(existing.keys())
if not to_change:
return rule
args = self._get_common_args()
vms = self.cs.listVirtualMachines(**args)
to_change_ids = []
for name in to_change:
for vm in vms.get('virtualmachine', []):
if vm['name'] == name:
to_change_ids.append(vm['id'])
break
else:
self.module.fail_json(msg="Unknown VM: %s" % name)
if to_change_ids:
self.result['changed'] = True
if to_change_ids and not self.module.check_mode:
res = cs_func(
id = rule['id'],
virtualmachineids = to_change_ids,
)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
self.poll_job(res)
rule = self.get_rule()
return rule
def add_members(self):
return self._ensure_members('add')
def remove_members(self):
return self._ensure_members('remove')
def get_result(self, rule):
super(AnsibleCloudStackLBRuleMember, self).get_result(rule)
if rule:
self.result['vms'] = []
for vm in self._get_members_of_rule(rule=rule):
self.result['vms'].append(vm['name'])
return self.result
def main():
argument_spec = cs_argument_spec()
argument_spec.update(dict(
name = dict(required=True),
ip_address = dict(default=None, aliases=['public_ip']),
vms = dict(required=True, aliases=['vm'], type='list'),
state = dict(choices=['present', 'absent'], default='present'),
zone = dict(default=None),
domain = dict(default=None),
project = dict(default=None),
account = dict(default=None),
poll_async = dict(type='bool', default=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
required_together=cs_required_together(),
supports_check_mode=True
)
try:
acs_lb_rule_member = AnsibleCloudStackLBRuleMember(module)
state = module.params.get('state')
if state in ['absent']:
rule = acs_lb_rule_member.remove_members()
else:
rule = acs_lb_rule_member.add_members()
result = acs_lb_rule_member.get_result(rule)
except CloudStackException as e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
| gpl-3.0 |
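# A side note on the membership logic in _ensure_members above: deciding which
# VMs to assign to or remove from the rule reduces to plain set arithmetic
# over VM names. A standalone illustration (the VM names and IDs below are
# hypothetical):

```python
# current rule members as reported by listLoadBalancerRuleInstances,
# keyed by name exactly as _ensure_members builds its `existing` dict
existing = {'web01': 'id-1', 'web02': 'id-2'}
# names the playbook asked for via the `vms` parameter
wanted = ['web02', 'web03']

# state=present: assign only VMs that are wanted but not yet members
to_add = set(wanted) - set(existing.keys())

# state=absent: remove only VMs that are wanted gone and are members
to_remove = set(wanted) & set(existing.keys())

assert to_add == {'web03'}
assert to_remove == {'web02'}
```

Either result being empty means the rule is already in the desired
state, which is why `_ensure_members` can return early without marking
the task changed.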
Thoshh/wapad | lib/python2.7/site-packages/django/db/migrations/migration.py | 326 | 8023 | from __future__ import unicode_literals
from django.db.transaction import atomic
from django.utils.encoding import python_2_unicode_compatible
from .exceptions import IrreversibleError
@python_2_unicode_compatible
class Migration(object):
"""
The base class for all migrations.
Migration files will import this from django.db.migrations.Migration
and subclass it as a class called Migration. It will have one or more
of the following attributes:
- operations: A list of Operation instances, probably from django.db.migrations.operations
- dependencies: A list of tuples of (app_path, migration_name)
- run_before: A list of tuples of (app_path, migration_name)
- replaces: A list of migration_names
Note that all migrations come out of migrations and into the Loader or
Graph as instances, having been initialized with their app label and name.
"""
# Operations to apply during this migration, in order.
operations = []
# Other migrations that should be run before this migration.
# Should be a list of (app, migration_name).
dependencies = []
# Other migrations that should be run after this one (i.e. have
# this migration added to their dependencies). Useful to make third-party
# apps' migrations run after your AUTH_USER replacement, for example.
run_before = []
# Migration names in this app that this migration replaces. If this is
# non-empty, this migration will only be applied if all these migrations
# are not applied.
replaces = []
# Is this an initial migration? Initial migrations are skipped on
# --fake-initial if the table or fields already exist. If None, check if
# the migration has any dependencies to determine if there are dependencies
# to tell if db introspection needs to be done. If True, always perform
# introspection. If False, never perform introspection.
initial = None
def __init__(self, name, app_label):
self.name = name
self.app_label = app_label
# Copy dependencies & other attrs as we might mutate them at runtime
self.operations = list(self.__class__.operations)
self.dependencies = list(self.__class__.dependencies)
self.run_before = list(self.__class__.run_before)
self.replaces = list(self.__class__.replaces)
def __eq__(self, other):
if not isinstance(other, Migration):
return False
return (self.name == other.name) and (self.app_label == other.app_label)
def __ne__(self, other):
return not (self == other)
def __repr__(self):
return "<Migration %s.%s>" % (self.app_label, self.name)
def __str__(self):
return "%s.%s" % (self.app_label, self.name)
def __hash__(self):
return hash("%s.%s" % (self.app_label, self.name))
def mutate_state(self, project_state, preserve=True):
"""
Takes a ProjectState and returns a new one with the migration's
operations applied to it. Preserves the original object state by
default and will return a mutated state from a copy.
"""
new_state = project_state
if preserve:
new_state = project_state.clone()
for operation in self.operations:
operation.state_forwards(self.app_label, new_state)
return new_state
def apply(self, project_state, schema_editor, collect_sql=False):
"""
Takes a project_state representing all migrations prior to this one
and a schema_editor for a live database and applies the migration
in a forwards order.
Returns the resulting project state for efficient re-use by following
Migrations.
"""
for operation in self.operations:
# If this operation cannot be represented as SQL, place a comment
# there instead
if collect_sql:
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
schema_editor.collected_sql.append(
"-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:"
)
schema_editor.collected_sql.append("-- %s" % operation.describe())
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
continue
# Save the state before the operation has run
old_state = project_state.clone()
operation.state_forwards(self.app_label, project_state)
# Run the operation
if not schema_editor.connection.features.can_rollback_ddl and operation.atomic:
# We're forcing a transaction on a non-transactional-DDL backend
with atomic(schema_editor.connection.alias):
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
else:
# Normal behaviour
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
return project_state
def unapply(self, project_state, schema_editor, collect_sql=False):
"""
Takes a project_state representing all migrations prior to this one
and a schema_editor for a live database and applies the migration
in a reverse order.
The backwards migration process consists of two phases:
1. The intermediate states from right before the first until right
after the last operation inside this migration are preserved.
2. The operations are applied in reverse order using the states
recorded in step 1.
"""
# Construct all the intermediate states we need for a reverse migration
to_run = []
new_state = project_state
# Phase 1
for operation in self.operations:
# If it's irreversible, error out
if not operation.reversible:
raise IrreversibleError("Operation %s in %s is not reversible" % (operation, self))
            # Clone the state from the previous iteration so the same
            # state object is not mutated across operations
new_state = new_state.clone()
old_state = new_state.clone()
operation.state_forwards(self.app_label, new_state)
to_run.insert(0, (operation, old_state, new_state))
# Phase 2
for operation, to_state, from_state in to_run:
if collect_sql:
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
schema_editor.collected_sql.append(
"-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:"
)
schema_editor.collected_sql.append("-- %s" % operation.describe())
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
continue
if not schema_editor.connection.features.can_rollback_ddl and operation.atomic:
# We're forcing a transaction on a non-transactional-DDL backend
with atomic(schema_editor.connection.alias):
operation.database_backwards(self.app_label, schema_editor, from_state, to_state)
else:
# Normal behaviour
operation.database_backwards(self.app_label, schema_editor, from_state, to_state)
return project_state
class SwappableTuple(tuple):
"""
Subclass of tuple so Django can tell this was originally a swappable
dependency when it reads the migration file.
"""
def __new__(cls, value, setting):
self = tuple.__new__(cls, value)
self.setting = setting
return self
def swappable_dependency(value):
"""
Turns a setting value into a dependency.
"""
return SwappableTuple((value.split(".", 1)[0], "__first__"), value)
| mit |
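# To make the swappable-dependency helper above concrete: given a setting
# value such as AUTH_USER_MODEL's "auth.User", it yields a dependency on the
# first migration of that app while remembering which setting produced it, so
# the migration writer can emit the dependency back in swappable form. A
# self-contained restatement for illustration (mirroring the class above):

```python
class SwappableTuple(tuple):
    """Tuple that remembers the setting it was derived from."""

    def __new__(cls, value, setting):
        self = tuple.__new__(cls, value)
        self.setting = setting
        return self


def swappable_dependency(value):
    """Turns a setting value like "auth.User" into a graph dependency."""
    return SwappableTuple((value.split(".", 1)[0], "__first__"), value)


dep = swappable_dependency("auth.User")
assert tuple(dep) == ("auth", "__first__")  # app label + first migration
assert dep.setting == "auth.User"           # original setting preserved
```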
woltage/ansible | lib/ansible/utils/module_docs_fragments/openstack.py | 97 | 4021 | # Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
class ModuleDocFragment(object):
# Standard openstack documentation fragment
DOCUMENTATION = '''
options:
cloud:
description:
- Named cloud to operate against. Provides default values for I(auth) and
I(auth_type). This parameter is not needed if I(auth) is provided or if
OpenStack OS_* environment variables are present.
required: false
auth:
description:
- Dictionary containing auth information as needed by the cloud's auth
plugin strategy. For the default I(password) plugin, this would contain
I(auth_url), I(username), I(password), I(project_name) and any
information about domains if the cloud supports them. For other plugins,
this param will need to contain whatever parameters that auth plugin
requires. This parameter is not needed if a named cloud is provided or
OpenStack OS_* environment variables are present.
required: false
auth_type:
description:
- Name of the auth plugin to use. If the cloud uses something other than
password authentication, the name of the plugin should be indicated here
and the contents of the I(auth) parameter should be updated accordingly.
required: false
default: password
region_name:
description:
- Name of the region.
required: false
availability_zone:
description:
- Name of the availability zone.
required: false
wait:
description:
- Should ansible wait until the requested resource is complete.
required: false
default: "yes"
choices: ["yes", "no"]
timeout:
description:
- How long should ansible wait for the requested resource.
required: false
default: 180
api_timeout:
description:
- How long should the socket layer wait before timing out for API calls.
If this is omitted, nothing will be passed to the requests library.
required: false
default: None
validate_certs:
description:
- Whether or not SSL API requests should be verified.
required: false
default: True
aliases: ['verify']
cacert:
description:
- A path to a CA Cert bundle that can be used as part of verifying
SSL API requests.
required: false
default: None
cert:
description:
- A path to a client certificate to use as part of the SSL transaction
required: false
default: None
key:
description:
- A path to a client key to use as part of the SSL transaction
required: false
default: None
endpoint_type:
description:
- Endpoint URL type to fetch from the service catalog.
choices: [public, internal, admin]
required: false
default: public
requirements:
- python >= 2.7
- shade
notes:
- The standard OpenStack environment variables, such as C(OS_USERNAME)
    may be used instead of providing explicit values.
- Auth information is driven by os-client-config, which means that values
can come from a yaml config file in /etc/ansible/openstack.yaml,
/etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from
standard environment variables, then finally by explicit parameters in
plays. More information can be found at
U(http://docs.openstack.org/developer/os-client-config)
'''
| gpl-3.0 |