| repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
guyrt/honest_ab | setup.py | 1 | 1137 | try:
from setuptools import setup
except ImportError:
from distutils.core import setup
packages = [
'honest_ab'
]
requires = [
'Murmur==0.1.3',
'Django>=1.4.0',
]
tests_require = ['mock==1.0.1']
setup(
name='honest_ab',
description='A/B testing framework for django',
long_description=open('README.rst').read(),
version='0.1',
author=open('AUTHORS.rst').read(),
author_email='richardtguy84@gmail.com',
url='https://github.com/guyrt/honest_ab',
packages=packages,
package_data={'': ['LICENSE.rst', 'AUTHORS.rst', 'README.rst']},
include_package_data=True,
zip_safe=True,
install_requires=requires,
tests_require=tests_require,
#test_suite='vero.tests.client_test',
license=open('LICENSE.rst').read(),
classifiers=(
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: BSD License',
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
),
)
| bsd-3-clause |
sssemil/cjdns | node_build/dependencies/libuv/build/gyp/test/generator-output/gyptest-symlink.py | 216 | 1292 | #!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Verifies building a target when the --generator-output= option is used to put
the build configuration files in a separate directory tree referenced by a
symlink.
"""
import TestGyp
import os
# Android doesn't support --generator-output.
test = TestGyp.TestGyp(formats=['!android'])
if not hasattr(os, 'symlink'):
test.skip_test('Missing os.symlink -- skipping test.\n')
test.writable(test.workpath('src'), False)
test.writable(test.workpath('src/subdir2/deeper/build'), True)
test.subdir(test.workpath('build'))
test.subdir(test.workpath('build/deeper'))
test.symlink('build/deeper', test.workpath('symlink'))
test.writable(test.workpath('build/deeper'), True)
test.run_gyp('deeper.gyp',
'-Dset_symroot=2',
'--generator-output=' + test.workpath('symlink'),
chdir='src/subdir2/deeper')
chdir = 'symlink'
test.build('deeper.gyp', test.ALL, chdir=chdir)
if test.format == 'xcode':
chdir = 'src/subdir2/deeper'
test.run_built_executable('deeper',
chdir=chdir,
stdout="Hello from deeper.c\n")
test.pass_test()
| gpl-3.0 |
sanghinitin/golismero | thirdparty_libs/django/template/__init__.py | 561 | 3247 | """
This is the Django template system.
How it works:
The Lexer.tokenize() function converts a template string (i.e., a string containing
markup with custom template tags) to tokens, which can be either plain text
(TOKEN_TEXT), variables (TOKEN_VAR) or block statements (TOKEN_BLOCK).
The Parser() class takes a list of tokens in its constructor, and its parse()
method returns a compiled template -- which is, under the hood, a list of
Node objects.
Each Node is responsible for creating some sort of output -- e.g. simple text
(TextNode), variable values in a given context (VariableNode), results of basic
logic (IfNode), results of looping (ForNode), or anything else. The core Node
types are TextNode, VariableNode, IfNode and ForNode, but plugin modules can
define their own custom node types.
Each Node has a render() method, which takes a Context and returns a string of
the rendered node. For example, the render() method of a VariableNode returns
the variable's value as a string. The render() method of a ForNode returns the
rendered output of whatever was inside the loop, recursively.
The Template class is a convenient wrapper that takes care of template
compilation and rendering.
Usage:
The only thing you should ever use directly in this file is the Template class.
Create a compiled template object with a template_string, then call render()
with a context. In the compilation stage, the TemplateSyntaxError exception
will be raised if the template doesn't have proper syntax.
Sample code:
>>> from django import template
>>> s = u'<html>{% if test %}<h1>{{ varvalue }}</h1>{% endif %}</html>'
>>> t = template.Template(s)
(t is now a compiled template, and its render() method can be called multiple
times with multiple contexts)
>>> c = template.Context({'test':True, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html><h1>Hello</h1></html>'
>>> c = template.Context({'test':False, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html></html>'
"""
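The tokenize-then-parse pipeline described above can be sketched in miniature with a standalone lexer. This is pure Python and independent of Django; `tag_re` here is a simplified stand-in for the real regex in `django.template.base`:

```python
import re

# Token types as described above: plain text, {{ var }}, {% block %}.
TOKEN_TEXT, TOKEN_VAR, TOKEN_BLOCK = 0, 1, 2

# Simplified stand-in for the real tag_re in django.template.base
# (the real one also recognizes {# comments #}).
tag_re = re.compile(r'({%.*?%}|{{.*?}})')

def tokenize(template_string):
    """Split a template string into (token_type, contents) pairs."""
    tokens = []
    for bit in tag_re.split(template_string):
        if not bit:
            continue
        if bit.startswith('{{'):
            tokens.append((TOKEN_VAR, bit[2:-2].strip()))
        elif bit.startswith('{%'):
            tokens.append((TOKEN_BLOCK, bit[2:-2].strip()))
        else:
            tokens.append((TOKEN_TEXT, bit))
    return tokens
```

A Parser would then walk this token list and build TextNode/VariableNode/IfNode/ForNode objects, as the docstring outlines.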
# Template lexing symbols
from django.template.base import (ALLOWED_VARIABLE_CHARS, BLOCK_TAG_END,
BLOCK_TAG_START, COMMENT_TAG_END, COMMENT_TAG_START,
FILTER_ARGUMENT_SEPARATOR, FILTER_SEPARATOR, SINGLE_BRACE_END,
SINGLE_BRACE_START, TOKEN_BLOCK, TOKEN_COMMENT, TOKEN_TEXT, TOKEN_VAR,
TRANSLATOR_COMMENT_MARK, UNKNOWN_SOURCE, VARIABLE_ATTRIBUTE_SEPARATOR,
VARIABLE_TAG_END, VARIABLE_TAG_START, filter_re, tag_re)
# Exceptions
from django.template.base import (ContextPopException, InvalidTemplateLibrary,
TemplateDoesNotExist, TemplateEncodingError, TemplateSyntaxError,
VariableDoesNotExist)
# Template parts
from django.template.base import (Context, FilterExpression, Lexer, Node,
NodeList, Parser, RequestContext, Origin, StringOrigin, Template,
TextNode, Token, TokenParser, Variable, VariableNode, constant_string,
filter_raw_string)
# Compiling templates
from django.template.base import (compile_string, resolve_variable,
unescape_string_literal, generic_tag_compiler)
# Library management
from django.template.base import (Library, add_to_builtins, builtins,
get_library, get_templatetags_modules, get_text_list, import_library,
libraries)
__all__ = ('Template', 'Context', 'RequestContext', 'compile_string')
| gpl-2.0 |
tempbottle/aquila | .ycm_extra_conf.py | 8 | 4122 | import glob
import os
import ycm_core
# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
flags = [
'-Wall',
'-Wextra',
'-Werror',
'-Wno-variadic-macros',
'-Wshadow',
'-fexceptions',
'-std=c++11',
'-x',
'c++',
'-I',
'.',
'-I',
'./lib/unittestpp',
'-I',
'/usr/include/qt5',
'-I',
'/usr/include/qt5/QtWidgets',
'-I',
'/usr/include/qt5/QtGui',
'-I',
'/usr/include/qt5/QtCore',
'-I',
'/usr/include/SFML',
'-I',
'/usr/include/qwt',
]
for path in glob.glob('/usr/local/qwt*/include'):
flags.append('-I')
flags.append(path)
# Set this to the absolute path to the folder (NOT the file!) containing the
# compile_commands.json file to use that instead of 'flags'. See here for
# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
#
# You can get CMake to generate this file for you by adding:
# set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )
# to your CMakeLists.txt file.
#
# Most projects will NOT need to set this to anything; you can just change the
# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
compilation_database_folder = ''
if os.path.exists( compilation_database_folder ):
database = ycm_core.CompilationDatabase( compilation_database_folder )
else:
database = None
SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]
def DirectoryOfThisScript():
return os.path.dirname( os.path.abspath( __file__ ) )
def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
if not working_directory:
return list( flags )
new_flags = []
make_next_absolute = False
path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
for flag in flags:
new_flag = flag
if make_next_absolute:
make_next_absolute = False
if not flag.startswith( '/' ):
new_flag = os.path.join( working_directory, flag )
for path_flag in path_flags:
if flag == path_flag:
make_next_absolute = True
break
if flag.startswith( path_flag ):
path = flag[ len( path_flag ): ]
new_flag = path_flag + os.path.join( working_directory, path )
break
if new_flag:
new_flags.append( new_flag )
return new_flags
def IsHeaderFile( filename ):
extension = os.path.splitext( filename )[ 1 ]
return extension in [ '.h', '.hxx', '.hpp', '.hh' ]
def GetCompilationInfoForFile( filename ):
# The compile_commands.json file generated by CMake does not have entries
# for header files. So we do our best by asking the db for flags for a
# corresponding source file, if any. If one exists, the flags for that file
# should be good enough.
if IsHeaderFile( filename ):
basename = os.path.splitext( filename )[ 0 ]
for extension in SOURCE_EXTENSIONS:
replacement_file = basename + extension
if os.path.exists( replacement_file ):
compilation_info = database.GetCompilationInfoForFile(
replacement_file )
if compilation_info.compiler_flags_:
return compilation_info
return None
return database.GetCompilationInfoForFile( filename )
def FlagsForFile( filename, **kwargs ):
if database:
# Bear in mind that compilation_info.compiler_flags_ does NOT return a
# python list, but a "list-like" StringVec object
compilation_info = GetCompilationInfoForFile( filename )
if not compilation_info:
return None
final_flags = MakeRelativePathsInFlagsAbsolute(
compilation_info.compiler_flags_,
compilation_info.compiler_working_dir_ )
# NOTE: This is just for YouCompleteMe; it's highly likely that your project
# does NOT need to remove the stdlib flag. DO NOT USE THIS IN YOUR
# ycm_extra_conf IF YOU'RE NOT 100% SURE YOU NEED IT.
try:
final_flags.remove( '-stdlib=libc++' )
except ValueError:
pass
else:
relative_to = DirectoryOfThisScript()
final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )
return {
'flags': final_flags,
'do_cache': True
}
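The path-absolutizing loop in `MakeRelativePathsInFlagsAbsolute` above handles two spellings of each path flag: the split form (`-I`, `path`) and the joined form (`-Ipath`). A condensed standalone version of the same rule, for illustration only (not part of YCM):

```python
import os

PATH_FLAGS = ['-isystem', '-I', '-iquote', '--sysroot=']

def absolutize(flags, cwd):
    out, pending = [], False
    for flag in flags:
        if pending:  # previous element was a bare path flag like '-I'
            pending = False
            out.append(flag if flag.startswith('/') else os.path.join(cwd, flag))
            continue
        hit = next((p for p in PATH_FLAGS if flag.startswith(p)), None)
        if hit is not None and flag == hit:   # split form: '-I', 'path'
            pending = True
            out.append(flag)
        elif hit is not None:                 # joined form: '-Ipath'
            out.append(hit + os.path.join(cwd, flag[len(hit):]))
        else:
            out.append(flag)
    return out
```

Absolute paths are passed through unchanged in both forms, matching the original loop.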
| mit |
rocky/python3-trepan | trepan/processor/command/restart.py | 1 | 2540 | # -*- coding: utf-8 -*-
# Copyright (C) 2009, 2013, 2015, 2020 Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import atexit, os
# Our local modules
from trepan.processor.command.base_cmd import DebuggerCommand
from trepan.misc import wrapped_lines
class RestartCommand(DebuggerCommand):
"""**restart**
Restart debugger and program via an *exec()* call. All state is lost,
and a new copy of the debugger is used.
See also:
---------
`run` for another way to restart the debugged program.
See `quit`, `exit` or `kill` for termination commands."""
short_help = "(Hard) restart of program via execv()"
DebuggerCommand.setup(locals(), category="support", max_args=0)
def run(self, args):
sys_argv = self.debugger.restart_argv()
if sys_argv and len(sys_argv) > 0:
confirmed = self.confirm("Restart (execv)", False)
if confirmed:
self.msg(
wrapped_lines(
"Re exec'ing:", repr(sys_argv), self.settings["width"]
)
)
# Run atexit finalize routines. This seems to be Kosher:
# http://mail.python.org/pipermail/python-dev/2009-February/085791.html # NOQA
try:
atexit._run_exitfuncs()
except BaseException:  # atexit may re-raise SystemExit; swallow it and exec anyway
pass
os.execvp(sys_argv[0], sys_argv)
pass
pass
else:
self.errmsg("No executable file and command options recorded.")
pass
return
pass
if __name__ == "__main__":
from trepan.processor.command.mock import dbg_setup
d, cp = dbg_setup()
command = RestartCommand(cp)
command.run([])
import sys
if len(sys.argv) > 1:
# Strip of arguments so we don't loop in exec.
d.orig_sys_argv = ["python", sys.argv[0]]
command.run([])
pass
pass
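The hard-restart technique used above (drain the `atexit` handlers, then replace the process image with `os.execvp`) can be isolated into a small sketch. Note that `atexit._run_exitfuncs()` is a private CPython API, as the mailing-list link in the original comment discusses:

```python
import atexit
import os

def hard_restart(argv):
    """Re-exec the current program; all in-process state is lost."""
    try:
        atexit._run_exitfuncs()  # flush exit handlers before exec (private API)
    except Exception:
        pass
    os.execvp(argv[0], argv)  # replaces the process image; never returns on success
```

On success `execvp` does not return, so any code after it only runs when the exec itself fails (for example, when the target binary does not exist).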
| gpl-3.0 |
blaiseli/p4-phylogenetics | share/Examples/K_thermus/B_homog/B_mcmc/sMcmc.py | 3 | 1049 | theRunNum = 0
read('../../noTRuberNoGapsNoAmbiguities.nex')
d = Data()
t = func.randomTree(taxNames=d.taxNames)
t.data = d
t.newComp(free=1, spec='empirical')
t.newRMatrix(free=1, spec='ones')
t.setNGammaCat(nGammaCat=4)
t.newGdasrv(free=1, val=0.5)
t.setPInvar(free=0, val=0.0)
m = Mcmc(t, nChains=4, runNum=theRunNum, sampleInterval=10, checkPointInterval=5000, simulate=3)
if 0:
m.tunings.chainTemp = 0.15
m.tunings.parts[0].rMatrix = 0.3
m.tunings.parts[0].comp = 0.2
m.tunings.parts[0].gdasrv = 0.811
if 1:
m.prob.comp = 1
m.prob.rMatrix = 1
m.prob.pInvar = 0
m.prob.gdasrv = 1
m.prob.local = 1
m.prob.root3 = 0
m.prob.compLocation = 0
m.prob.rMatrixLocation = 0
m.prob.relRate = 0
m.autoTune()
m.run(15000)
if 1 and func.which("gnuplot"):
h = Numbers('mcmc_likes_%i' % theRunNum, col=1)
h.plot()
if 1:
cpm = McmcCheckPointReader()
cpm.writeProposalAcceptances()
cpm.writeSwapMatrices()
cpm.compareSplits(1, 2, verbose=True)
func.summarizeMcmcPrams(skip=1000)
| gpl-2.0 |
0-1-0/Python-Arduino-Command-API | tests/test_arduino.py | 1 | 5589 | import logging
import unittest
logging.basicConfig(level=logging.DEBUG)
class MockSerial(object):
def __init__(self, baud, port, timeout=None):
self.port = port
self.baud = baud
self.timeout = timeout
self.output = []
self.input = []
def flush(self):
pass
def write(self, line):
self.output.append(line)
def readline(self):
"""
@TODO: This does not take timeout into account at all.
"""
return self.input.pop(0)
def reset_mock(self):
self.output = []
self.input = []
def push_line(self, line, term='\r\n'):
self.input.append(str(line) + term)
INPUT = "INPUT"
OUTPUT = "OUTPUT"
LOW = "LOW"
HIGH = "HIGH"
READ_LOW = 0
READ_HIGH = 1
MSBFIRST = "MSBFIRST"
LSBFIRST = "LSBFIRST"
class TestArduino(unittest.TestCase):
def parse_cmd_sr(self, cmd_str):
assert cmd_str[0] == '@'
first_index = cmd_str.find('%')
assert first_index != -1
assert cmd_str[-2:] == '$!'
# Skip over the @ and read up to but not including the %.
cmd = cmd_str[1:first_index]
# Skip over the first % and ignore the trailing $!.
args_str = cmd_str[first_index+1:-2]
args = args_str.split('%')
return cmd, args
def setUp(self):
from Arduino.arduino import Arduino
self.mock_serial = MockSerial(9600, '/dev/ttyACM0')
self.board = Arduino(sr=self.mock_serial)
def test_version(self):
from Arduino.arduino import build_cmd_str
expected_version = "version"
self.mock_serial.push_line(expected_version)
self.assertEquals(self.board.version(), expected_version)
self.assertEquals(self.mock_serial.output[0], build_cmd_str('version'))
def test_pinMode_input(self):
from Arduino.arduino import build_cmd_str
pin = 9
self.board.pinMode(pin, INPUT)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('pm', (-pin,)))
def test_pinMode_output(self):
from Arduino.arduino import build_cmd_str
pin = 9
self.board.pinMode(pin, OUTPUT)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('pm', (pin,)))
def test_pulseIn_low(self):
from Arduino.arduino import build_cmd_str
expected_duration = 230
self.mock_serial.push_line(expected_duration)
pin = 9
self.assertEquals(self.board.pulseIn(pin, LOW), expected_duration)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('pi', (-pin,)))
def test_pulseIn_high(self):
from Arduino.arduino import build_cmd_str
expected_duration = 230
pin = 9
self.mock_serial.push_line(expected_duration)
self.assertEquals(self.board.pulseIn(pin, HIGH), expected_duration)
self.assertEquals(self.mock_serial.output[0], build_cmd_str('pi', (pin,)))
def test_digitalRead(self):
from Arduino.arduino import build_cmd_str
pin = 9
self.mock_serial.push_line(READ_LOW)
self.assertEquals(self.board.digitalRead(pin), READ_LOW)
self.assertEquals(self.mock_serial.output[0], build_cmd_str('dr', (pin,)))
def test_digitalWrite_low(self):
from Arduino.arduino import build_cmd_str
pin = 9
self.board.digitalWrite(pin, LOW)
self.assertEquals(self.mock_serial.output[0], build_cmd_str('dw', (-pin,)))
def test_digitalWrite_high(self):
from Arduino.arduino import build_cmd_str
pin = 9
self.board.digitalWrite(pin, HIGH)
self.assertEquals(self.mock_serial.output[0], build_cmd_str('dw', (pin,)))
def test_melody(self):
from Arduino.arduino import build_cmd_str
pin = 9
notes = ["C4"]
duration = 4
C4_NOTE = 262
self.board.Melody(pin, notes, [duration])
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('to', (len(notes), pin, C4_NOTE, duration)))
self.assertEquals(self.mock_serial.output[1],
build_cmd_str('nto', (pin,)))
def test_shiftIn(self):
from Arduino.arduino import build_cmd_str
dataPin = 2
clockPin = 3
pinOrder = MSBFIRST
expected = 0xff
self.mock_serial.push_line(expected)
self.assertEquals(self.board.shiftIn(dataPin, clockPin, pinOrder),
expected)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('si', (dataPin, clockPin, pinOrder,)))
def test_shiftOut(self):
from Arduino.arduino import build_cmd_str
dataPin = 2
clockPin = 3
pinOrder = MSBFIRST
value = 0xff
self.board.shiftOut(dataPin, clockPin, pinOrder, value)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('so', (dataPin, clockPin, pinOrder, value)))
def test_analogRead(self):
from Arduino.arduino import build_cmd_str
pin = 9
expected = 1023
self.mock_serial.push_line(expected)
self.assertEquals(self.board.analogRead(pin), expected)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('ar', (pin,)))
def test_analogWrite(self):
from Arduino.arduino import build_cmd_str
pin = 9
value = 255
self.board.analogWrite(pin, value)
self.assertEquals(self.mock_serial.output[0],
build_cmd_str('aw', (pin, value)))
if __name__ == '__main__':
unittest.main()
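The serial wire format exercised by `parse_cmd_sr` above is `@cmd%arg1%...%argN$!`. A plausible reconstruction of the `build_cmd_str` helper the tests import (the real implementation lives in `Arduino.arduino`; this version is inferred from the parser's asserts and is an assumption):

```python
def build_cmd_str(cmd, args=None):
    """Build '@cmd%arg1%arg2$!', the format parse_cmd_sr expects."""
    joined = '%'.join(str(a) for a in args) if args else ''
    return '@{cmd}%{args}$!'.format(cmd=cmd, args=joined)
```

Round-tripping through `parse_cmd_sr` recovers the command name and the stringified arguments.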
| mit |
ryfeus/lambda-packs | Tensorflow_OpenCV_Nightly/source/tensorflow/python/training/sync_replicas_optimizer.py | 19 | 20193 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Synchronize replicas for training."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.core.framework import types_pb2
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.ops import variables
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.training import optimizer
from tensorflow.python.training import queue_runner
from tensorflow.python.training import session_manager
from tensorflow.python.training import session_run_hook
# Please note that the gradients from replicas are averaged instead of summed
# (as in the old sync_replicas_optimizer) so you need to increase the learning
# rate according to the number of replicas. This change is introduced to be
# consistent with how gradients are aggregated (averaged) within a batch in a
# replica.
class SyncReplicasOptimizer(optimizer.Optimizer):
"""Class to synchronize, aggregate gradients and pass them to the optimizer.
In a typical asynchronous training environment, it's common to have some
stale gradients. For example, with N-replica asynchronous training,
gradients will be applied to the variables N times independently. Depending
on each replica's training speed, some gradients might be calculated from
copies of the variable from several steps back (N-1 steps on average). This
optimizer avoids stale gradients by collecting gradients from all replicas,
averaging them, then applying them to the variables in one shot, after
which replicas can fetch the new variables and continue.
The following accumulators/queue are created:
* N `gradient accumulators`, one per variable to train. Gradients are pushed
to them and the chief worker will wait until enough gradients are collected
and then average them before applying to variables. The accumulator will
drop all stale gradients (more details in the accumulator op).
* 1 `token` queue where the optimizer pushes the new global_step value after
all variables are updated.
The following local variable is created:
* `sync_rep_local_step`, one per replica. Compared against the global_step in
each accumulator to check for staleness of the gradients.
The optimizer adds nodes to the graph to collect gradients and pause the
trainers until variables are updated.
For the Parameter Server job:
1. An accumulator is created for each variable, and each replica pushes the
gradients into the accumulators instead of directly applying them to the
variables.
2. Each accumulator averages once enough gradients (replicas_to_aggregate)
have been accumulated.
3. Apply the averaged gradients to the variables.
4. Only after all variables have been updated, increment the global step.
5. Only after step 4, pushes `global_step` in the `token_queue`, once for
each worker replica. The workers can now fetch the global step, use it to
update its local_step variable and start the next batch.
For the replicas:
1. Start a step: fetch variables and compute gradients.
2. Once the gradients have been computed, push them into gradient
accumulators. Each accumulator will check the staleness and drop the stale.
3. After pushing all the gradients, dequeue an updated value of global_step
from the token queue and record that step to its local_step variable. Note
that this is effectively a barrier.
4. Start the next batch.
### Usage
```python
# Create any optimizer to update the variables, say a simple SGD:
opt = GradientDescentOptimizer(learning_rate=0.1)
# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each
# step the optimizer collects 50 gradients before applying to variables.
# Note that if you want to have 2 backup replicas, you can change
# total_num_replicas=52 and make sure this number matches how many physical
# replicas you started in your job.
opt = tf.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,
total_num_replicas=50)
# Some models have startup_delays to help stabilize the model but when using
# sync_replicas training, set it to 0.
# Now you can call `minimize()` or `compute_gradients()` and
# `apply_gradients()` normally
training_op = opt.minimize(total_loss, global_step=self.global_step)
# You can create the hook which handles initialization and queues.
sync_replicas_hook = opt.make_session_run_hook(is_chief)
```
In the training program, every worker will run the train_op as if not
synchronized.
```python
with training.MonitoredTrainingSession(
master=workers[worker_id].target, is_chief=is_chief,
hooks=[sync_replicas_hook]) as mon_sess:
while not mon_sess.should_stop():
mon_sess.run(training_op)
```
To use SyncReplicasOptimizer with an `Estimator`, you need to send
sync_replicas_hook while calling the fit.
```
my_estimator = DNNClassifier(..., optimizer=opt)
my_estimator.fit(..., hooks=[sync_replicas_hook])
```
"""
def __init__(self,
opt,
replicas_to_aggregate,
total_num_replicas=None,
variable_averages=None,
variables_to_average=None,
use_locking=False,
name="sync_replicas"):
"""Construct a sync_replicas optimizer.
Args:
opt: The actual optimizer that will be used to compute and apply the
gradients. Must be one of the Optimizer classes.
replicas_to_aggregate: number of replicas to aggregate for each variable
update.
total_num_replicas: Total number of tasks/workers/replicas, could be
different from replicas_to_aggregate.
If total_num_replicas > replicas_to_aggregate: it is backup_replicas +
replicas_to_aggregate.
If total_num_replicas < replicas_to_aggregate: Replicas compute
multiple batches per update to variables.
variable_averages: Optional `ExponentialMovingAverage` object, used to
maintain moving averages for the variables passed in
`variables_to_average`.
variables_to_average: a list of variables that need to be averaged. Only
needed if variable_averages is passed in.
use_locking: If True use locks for update operation.
name: string. Optional name of the returned operation.
"""
if total_num_replicas is None:
total_num_replicas = replicas_to_aggregate
super(SyncReplicasOptimizer, self).__init__(use_locking, name)
logging.info(
"SyncReplicasV2: replicas_to_aggregate=%s; total_num_replicas=%s",
replicas_to_aggregate, total_num_replicas)
self._opt = opt
self._replicas_to_aggregate = replicas_to_aggregate
self._gradients_applied = False
self._variable_averages = variable_averages
self._variables_to_average = variables_to_average
self._total_num_replicas = total_num_replicas
self._tokens_per_step = max(total_num_replicas, replicas_to_aggregate)
self._global_step = None
self._sync_token_queue = None
# The synchronization op will be executed in a queue runner which should
# only be executed by one of the replicas (usually the chief).
self._chief_queue_runner = None
# Remember which accumulator is on which device to set the initial step in
# the accumulator to be global step. This list contains list of the
# following format: (accumulator, device).
self._accumulator_list = []
def compute_gradients(self, *args, **kwargs):
"""Compute gradients of "loss" for the variables in "var_list".
This simply wraps the compute_gradients() from the real optimizer. The
gradients will be aggregated in the apply_gradients() so that user can
modify the gradients like clipping with per replica global norm if needed.
The global norm with aggregated gradients can be bad as one replica's huge
gradients can hurt the gradients from other replicas.
Args:
*args: Arguments for compute_gradients().
**kwargs: Keyword arguments for compute_gradients().
Returns:
A list of (gradient, variable) pairs.
"""
return self._opt.compute_gradients(*args, **kwargs)
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
"""Apply gradients to variables.
This contains most of the synchronization implementation and also wraps the
apply_gradients() from the real optimizer.
Args:
grads_and_vars: List of (gradient, variable) pairs as returned by
compute_gradients().
global_step: Optional Variable to increment by one after the
variables have been updated.
name: Optional name for the returned operation. Default to the
name passed to the Optimizer constructor.
Returns:
train_op: The op to dequeue a token so the replicas can exit this batch
and start the next one. This is executed by each replica.
Raises:
ValueError: If the grads_and_vars is empty.
ValueError: If global step is not provided, the staleness cannot be
checked.
"""
if not grads_and_vars:
raise ValueError("Must supply at least one variable")
if global_step is None:
raise ValueError("Global step is required to check staleness")
self._global_step = global_step
train_ops = []
aggregated_grad = []
var_list = []
self._local_step = variable_scope.variable(
initial_value=0,
trainable=False,
collections=[ops.GraphKeys.LOCAL_VARIABLES],
dtype=global_step.dtype.base_dtype,
name="sync_rep_local_step")
self.local_step_init_op = state_ops.assign(self._local_step, global_step)
chief_init_ops = [self.local_step_init_op]
self.ready_for_local_init_op = variables.report_uninitialized_variables(
variables.global_variables())
with ops.name_scope(None, self._name):
for grad, var in grads_and_vars:
var_list.append(var)
with ops.device(var.device):
# Dense gradients.
if grad is None:
aggregated_grad.append(None) # pass-through.
continue
elif isinstance(grad, ops.Tensor):
grad_accum = data_flow_ops.ConditionalAccumulator(
grad.dtype,
shape=var.get_shape(),
shared_name=var.name + "/grad_accum")
train_ops.append(grad_accum.apply_grad(
grad, local_step=self._local_step))
aggregated_grad.append(grad_accum.take_grad(
self._replicas_to_aggregate))
else:
if not isinstance(grad, ops.IndexedSlices):
raise ValueError("Unknown grad type!")
grad_accum = data_flow_ops.SparseConditionalAccumulator(
grad.dtype, shape=(), shared_name=var.name + "/grad_accum")
train_ops.append(grad_accum.apply_indexed_slices_grad(
grad, local_step=self._local_step))
aggregated_grad.append(grad_accum.take_indexed_slices_grad(
self._replicas_to_aggregate))
self._accumulator_list.append((grad_accum, var.device))
aggregated_grads_and_vars = zip(aggregated_grad, var_list)
# sync_op will be assigned to the same device as the global step.
with ops.device(global_step.device), ops.name_scope(""):
update_op = self._opt.apply_gradients(aggregated_grads_and_vars,
global_step)
# Create token queue.
with ops.device(global_step.device), ops.name_scope(""):
sync_token_queue = (
data_flow_ops.FIFOQueue(-1,
global_step.dtype.base_dtype,
shapes=(),
name="sync_token_q",
shared_name="sync_token_q"))
self._sync_token_queue = sync_token_queue
# dummy_queue is passed to the queue runner. Don't use the real queues
# because the queue runner doesn't automatically reopen it once it
# closed queues in PS devices.
dummy_queue = (
data_flow_ops.FIFOQueue(1,
types_pb2.DT_INT32,
shapes=(),
name="dummy_queue",
shared_name="dummy_queue"))
with ops.device(global_step.device), ops.name_scope(""):
# Replicas have to wait until they can get a token from the token queue.
with ops.control_dependencies(train_ops):
token = sync_token_queue.dequeue()
train_op = state_ops.assign(self._local_step, token)
with ops.control_dependencies([update_op]):
# Sync_op needs to insert tokens to the token queue at the end of the
# step so the replicas can fetch them to start the next step.
tokens = array_ops.fill([self._tokens_per_step], global_step)
sync_op = sync_token_queue.enqueue_many((tokens,))
if self._variable_averages is not None:
with ops.control_dependencies([sync_op]), ops.name_scope(""):
sync_op = self._variable_averages.apply(
self._variables_to_average)
self._chief_queue_runner = queue_runner.QueueRunner(dummy_queue,
[sync_op])
for accum, dev in self._accumulator_list:
with ops.device(dev):
chief_init_ops.append(
accum.set_global_step(
global_step, name="SetGlobalStep"))
self.chief_init_op = control_flow_ops.group(*(chief_init_ops))
self._gradients_applied = True
return train_op
def get_chief_queue_runner(self):
"""Returns the QueueRunner for the chief to execute.
This includes the operations to synchronize replicas: aggregate gradients,
apply to variables, increment global step, insert tokens to token queue.
Note that this can only be called after calling apply_gradients() which
actually generates this queuerunner.
Returns:
A `QueueRunner` for chief to execute.
Raises:
ValueError: If this is called before apply_gradients().
"""
if self._gradients_applied is False:
raise ValueError("Should be called after apply_gradients().")
return self._chief_queue_runner
def get_slot(self, *args, **kwargs):
"""Return a slot named "name" created for "var" by the Optimizer.
This simply wraps the get_slot() from the actual optimizer.
Args:
*args: Arguments for get_slot().
**kwargs: Keyword arguments for get_slot().
Returns:
The `Variable` for the slot if it was created, `None` otherwise.
"""
return self._opt.get_slot(*args, **kwargs)
def get_slot_names(self, *args, **kwargs):
"""Return a list of the names of slots created by the `Optimizer`.
This simply wraps the get_slot_names() from the actual optimizer.
Args:
*args: Arguments for get_slot().
**kwargs: Keyword arguments for get_slot().
Returns:
A list of strings.
"""
return self._opt.get_slot_names(*args, **kwargs)
def get_init_tokens_op(self, num_tokens=-1):
"""Returns the op to fill the sync_token_queue with the tokens.
This is supposed to be executed in the beginning of the chief/sync thread
so that even if the total_num_replicas is less than replicas_to_aggregate,
the model can still proceed as the replicas can compute multiple steps per
variable update. Make sure:
`num_tokens >= replicas_to_aggregate - total_num_replicas`.
Args:
num_tokens: Number of tokens to add to the queue.
Returns:
An op for the chief/sync replica to fill the token queue.
Raises:
ValueError: If this is called before apply_gradients().
ValueError: If num_tokens are smaller than replicas_to_aggregate -
total_num_replicas.
"""
if self._gradients_applied is False:
raise ValueError(
"get_init_tokens_op() should be called after apply_gradients().")
tokens_needed = self._replicas_to_aggregate - self._total_num_replicas
if num_tokens == -1:
num_tokens = self._replicas_to_aggregate
elif num_tokens < tokens_needed:
raise ValueError(
"Too few tokens to finish the first step: %d (given) vs %d (needed)" %
(num_tokens, tokens_needed))
if num_tokens > 0:
with ops.device(self._global_step.device), ops.name_scope(""):
tokens = array_ops.fill([num_tokens], self._global_step)
init_tokens = self._sync_token_queue.enqueue_many((tokens,))
else:
init_tokens = control_flow_ops.no_op(name="no_init_tokens")
return init_tokens
def make_session_run_hook(self, is_chief, num_tokens=-1):
"""Creates a hook to handle SyncReplicasHook ops such as initialization."""
return _SyncReplicasOptimizerHook(self, is_chief, num_tokens)
class _SyncReplicasOptimizerHook(session_run_hook.SessionRunHook):
"""A SessionRunHook handles ops related to SyncReplicasOptimizer."""
def __init__(self, sync_optimizer, is_chief, num_tokens):
"""Creates hook to handle SyncReplicaOptimizer initialization ops.
Args:
sync_optimizer: `SyncReplicasOptimizer` which this hook will initialize.
is_chief: `Bool`, whether is this a chief replica or not.
num_tokens: Number of tokens to add to the queue.
"""
self._sync_optimizer = sync_optimizer
self._is_chief = is_chief
self._num_tokens = num_tokens
def begin(self):
if self._sync_optimizer._gradients_applied is False: # pylint: disable=protected-access
raise ValueError(
"SyncReplicasOptimizer.apply_gradient should be called before using "
"the hook.")
if self._is_chief:
self._local_init_op = self._sync_optimizer.chief_init_op
self._ready_for_local_init_op = (
self._sync_optimizer.ready_for_local_init_op)
self._q_runner = self._sync_optimizer.get_chief_queue_runner()
self._init_tokens_op = self._sync_optimizer.get_init_tokens_op(
self._num_tokens)
else:
self._local_init_op = self._sync_optimizer.local_step_init_op
self._ready_for_local_init_op = (
self._sync_optimizer.ready_for_local_init_op)
self._q_runner = None
self._init_tokens_op = None
def after_create_session(self, session, coord):
"""Runs SyncReplicasOptimizer initialization ops."""
local_init_success, msg = session_manager._ready( # pylint: disable=protected-access
self._ready_for_local_init_op, session,
"Model is not ready for SyncReplicasOptimizer local init.")
if not local_init_success:
raise RuntimeError(
"Init operations did not make model ready for SyncReplicasOptimizer "
"local_init. Init op: %s, error: %s" %
(self._local_init_op.name, msg))
session.run(self._local_init_op)
if self._init_tokens_op is not None:
session.run(self._init_tokens_op)
if self._q_runner is not None:
self._q_runner.create_threads(
session, coord=coord, daemon=True, start=True)
| mit |
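The token-queue sizing rule that `get_init_tokens_op` enforces above can be checked with a small pure-Python sketch. This is a simplified model of the validation logic only, not TensorFlow code, and the function name is illustrative:

```python
def init_token_count(replicas_to_aggregate, total_num_replicas, num_tokens=-1):
    """Mirror the check in get_init_tokens_op: when there are fewer workers
    than gradients to aggregate, extra tokens let each worker compute
    several steps per variable update."""
    tokens_needed = replicas_to_aggregate - total_num_replicas
    if num_tokens == -1:
        # Default: one token per aggregated gradient.
        num_tokens = replicas_to_aggregate
    elif num_tokens < tokens_needed:
        raise ValueError(
            "Too few tokens to finish the first step: %d (given) vs %d (needed)"
            % (num_tokens, tokens_needed))
    return num_tokens

print(init_token_count(4, 2))     # default fills one token per replica -> 4
print(init_token_count(4, 2, 2))  # explicit value >= tokens_needed is kept -> 2
```

With 4 replicas to aggregate and only 2 workers, at least 2 tokens are required; fewer would deadlock the first step.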
Daniex/horizon | horizon/test/patches.py | 67 | 2706 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from six.moves import html_parser as _HTMLParser
def parse_starttag_patched(self, i):
"""This method is a patched version of the parse_starttag method from
django.utils.html_parser.HTMLParser class, used to patch bug 1273943.
The code is taken from file django/utils/html_parser.py, commit 6bc1b22299.
"""
self.__starttag_text = None
endpos = self.check_for_whole_start_tag(i)
if endpos < 0:
return endpos
rawdata = self.rawdata
self.__starttag_text = rawdata[i:endpos]
# Now parse the data between i+1 and j into a tag and attrs
attrs = []
    tagfind = re.compile(r'([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*')
match = tagfind.match(rawdata, i + 1)
assert match, 'unexpected call to parse_starttag()'
k = match.end()
self.lasttag = tag = match.group(1).lower()
while k < endpos:
m = _HTMLParser.attrfind.match(rawdata, k)
if not m:
break
attrname, rest, attrvalue = m.group(1, 2, 3)
if not rest:
attrvalue = None
elif (attrvalue[:1] == '\'' == attrvalue[-1:] or
attrvalue[:1] == '"' == attrvalue[-1:]):
attrvalue = attrvalue[1:-1]
if attrvalue:
attrvalue = self.unescape(attrvalue)
attrs.append((attrname.lower(), attrvalue))
k = m.end()
end = rawdata[k:endpos].strip()
if end not in (">", "/>"):
lineno, offset = self.getpos()
if "\n" in self.__starttag_text:
lineno = lineno + self.__starttag_text.count("\n")
offset = (len(self.__starttag_text)
- self.__starttag_text.rfind("\n"))
else:
offset = offset + len(self.__starttag_text)
self.error("junk characters in start tag: %r"
% (rawdata[k:endpos][:20],))
if end.endswith('/>'):
# XHTML-style empty tag: <span attr="value" />
self.handle_startendtag(tag, attrs)
else:
self.handle_starttag(tag, attrs)
if tag in self.CDATA_CONTENT_ELEMENTS:
self.set_cdata_mode(tag) # <--------------------------- Changed
return endpos
| apache-2.0 |
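The behavioral distinction the patched `parse_starttag` above preserves — XHTML-style empty tags dispatch to `handle_startendtag` while ordinary tags dispatch to `handle_starttag` (with CDATA mode entered only afterwards) — can be observed with the standard-library parser directly. A minimal sketch; the subclass name is illustrative:

```python
from html.parser import HTMLParser  # six.moves.html_parser resolves here on Python 3


class TagCollector(HTMLParser):
    """Record which handler fires for normal vs. XHTML-style empty tags."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag, dict(attrs)))

    def handle_startendtag(self, tag, attrs):
        self.events.append(("startend", tag, dict(attrs)))


p = TagCollector()
p.feed('<span class="x"></span><br/>')
print(p.events)
# [('start', 'span', {'class': 'x'}), ('startend', 'br', {})]
```

`<span class="x">` takes the `handle_starttag` path with its attributes parsed, while the self-closing `<br/>` takes the `handle_startendtag` path, matching the two branches at the end of the patched method.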
brokenjacobs/ansible | lib/ansible/modules/storage/netapp/sf_volume_access_group_manager.py | 69 | 9488 | #!/usr/bin/python
# (c) 2017, NetApp, Inc
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: sf_volume_access_group_manager
short_description: Manage SolidFire Volume Access Groups
extends_documentation_fragment:
- netapp.solidfire
version_added: '2.3'
author: Sumit Kumar (sumit4@netapp.com)
description:
- Create, destroy, or update volume access groups on SolidFire
options:
state:
description:
- Whether the specified volume access group should exist or not.
required: true
choices: ['present', 'absent']
name:
description:
- Name of the volume access group. It is not required to be unique, but recommended.
required: true
initiators:
description:
- List of initiators to include in the volume access group. If unspecified, the access group will start out without configured initiators.
required: false
default: None
volumes:
description:
- List of volumes to initially include in the volume access group. If unspecified, the access group will start without any volumes.
required: false
default: None
virtual_network_id:
description:
- The ID of the SolidFire Virtual Network ID to associate the volume access group with.
required: false
default: None
virtual_network_tags:
description:
- The ID of the VLAN Virtual Network Tag to associate the volume access group with.
required: false
default: None
attributes:
description: List of Name/Value pairs in JSON object format.
required: false
default: None
volume_access_group_id:
description:
- The ID of the volume access group to modify or delete.
required: false
default: None
'''
EXAMPLES = """
- name: Create Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: present
name: AnsibleVolumeAccessGroup
volumes: [7,8]
- name: Modify Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: present
volume_access_group_id: 1
name: AnsibleVolumeAccessGroup-Renamed
attributes: {"volumes": [1,2,3], "virtual_network_id": 12345}
- name: Delete Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: absent
volume_access_group_id: 1
"""
RETURN = """
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pycompat24 import get_exception
import ansible.module_utils.netapp as netapp_utils
HAS_SF_SDK = netapp_utils.has_sf_sdk()
class SolidFireVolumeAccessGroup(object):
def __init__(self):
self.argument_spec = netapp_utils.ontap_sf_host_argument_spec()
self.argument_spec.update(dict(
state=dict(required=True, choices=['present', 'absent']),
name=dict(required=True, type='str'),
volume_access_group_id=dict(required=False, type='int', default=None),
initiators=dict(required=False, type='list', default=None),
volumes=dict(required=False, type='list', default=None),
virtual_network_id=dict(required=False, type='list', default=None),
virtual_network_tags=dict(required=False, type='list', default=None),
attributes=dict(required=False, type='dict', default=None),
))
self.module = AnsibleModule(
argument_spec=self.argument_spec,
supports_check_mode=True
)
p = self.module.params
# set up state variables
self.state = p['state']
self.name = p['name']
self.volume_access_group_id = p['volume_access_group_id']
self.initiators = p['initiators']
self.volumes = p['volumes']
self.virtual_network_id = p['virtual_network_id']
self.virtual_network_tags = p['virtual_network_tags']
self.attributes = p['attributes']
if HAS_SF_SDK is False:
self.module.fail_json(msg="Unable to import the SolidFire Python SDK")
else:
self.sfe = netapp_utils.create_sf_connection(module=self.module)
def get_volume_access_group(self):
access_groups_list = self.sfe.list_volume_access_groups()
for group in access_groups_list.volume_access_groups:
if group.name == self.name:
# Update self.volume_access_group_id:
if self.volume_access_group_id is not None:
if group.volume_access_group_id == self.volume_access_group_id:
return group
else:
self.volume_access_group_id = group.volume_access_group_id
return group
return None
def create_volume_access_group(self):
try:
self.sfe.create_volume_access_group(name=self.name,
initiators=self.initiators,
volumes=self.volumes,
virtual_network_id=self.virtual_network_id,
virtual_network_tags=self.virtual_network_tags,
attributes=self.attributes)
except:
err = get_exception()
self.module.fail_json(msg="Error creating volume access group %s" % self.name,
exception=str(err))
def delete_volume_access_group(self):
try:
self.sfe.delete_volume_access_group(volume_access_group_id=self.volume_access_group_id)
except:
err = get_exception()
self.module.fail_json(msg="Error deleting volume access group %s" % self.volume_access_group_id,
exception=str(err))
def update_volume_access_group(self):
try:
self.sfe.modify_volume_access_group(volume_access_group_id=self.volume_access_group_id,
virtual_network_id=self.virtual_network_id,
virtual_network_tags=self.virtual_network_tags,
name=self.name,
initiators=self.initiators,
volumes=self.volumes,
attributes=self.attributes)
except:
err = get_exception()
self.module.fail_json(msg="Error updating volume access group %s" % self.volume_access_group_id,
exception=str(err))
def apply(self):
changed = False
group_exists = False
update_group = False
group_detail = self.get_volume_access_group()
if group_detail:
group_exists = True
if self.state == 'absent':
changed = True
elif self.state == 'present':
# Check if we need to update the group
if self.volumes is not None and group_detail.volumes != self.volumes:
update_group = True
changed = True
elif self.initiators is not None and group_detail.initiators != self.initiators:
update_group = True
changed = True
elif self.virtual_network_id is not None or self.virtual_network_tags is not None or \
self.attributes is not None:
update_group = True
changed = True
else:
if self.state == 'present':
changed = True
if changed:
if self.module.check_mode:
pass
else:
if self.state == 'present':
if not group_exists:
self.create_volume_access_group()
elif update_group:
self.update_volume_access_group()
elif self.state == 'absent':
self.delete_volume_access_group()
self.module.exit_json(changed=changed)
def main():
v = SolidFireVolumeAccessGroup()
v.apply()
if __name__ == '__main__':
main()
| gpl-3.0 |
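The `apply()` method above reduces to a small decision table over the desired state and the current state of the access group. The sketch below condenses that logic into a pure function for readability; it is an illustrative model (names are made up), not part of the Ansible module:

```python
def plan(state, group_exists, needs_update):
    """Condensed decision table from SolidFireVolumeAccessGroup.apply():
    return the action the module would take."""
    if state == "present":
        if not group_exists:
            return "create"
        return "update" if needs_update else "noop"
    # state == "absent": only delete when there is something to delete.
    return "delete" if group_exists else "noop"

print(plan("present", False, False))  # create
print(plan("present", True, True))    # update
print(plan("absent", True, False))    # delete
```

Keeping the branch logic side-effect free like this is what makes `supports_check_mode=True` cheap: check mode can compute `changed` without issuing any API call.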
mayblue9/bokeh | bokeh/transforms/image_downsample.py | 43 | 2158 | from __future__ import absolute_import
import numpy as np
from ..models import ServerDataSource
try:
import scipy
import scipy.misc
except ImportError as e:
print(e)
def source(**kwargs):
kwargs['transform'] = {'resample':'heatmap',
'global_x_range' : [0, 10],
'global_y_range' : [0, 10],
'global_offset_x' : [0],
'global_offset_y' : [0],
'type' : 'ndarray',
}
kwargs['data'] = {'x': [0],
'y': [0],
'dw' : [10],
'dh' : [10],
}
return ServerDataSource(**kwargs)
def downsample(image, image_x_axis, image_y_axis,
x_bounds, y_bounds, x_resolution, y_resolution):
x_resolution, y_resolution = int(round(x_resolution)), int(round(y_resolution))
x_bounds = [x_bounds.start, x_bounds.end]
y_bounds = [y_bounds.start, y_bounds.end]
x_bounds = np.searchsorted(image_x_axis, x_bounds)
y_bounds = np.searchsorted(image_y_axis, y_bounds)
#y_bounds = image.shape[0] + 1 - y_bounds[::-1]
if x_resolution == 0 or y_resolution == 0:
subset = np.zeros((1,1), dtype=image.dtype)
else:
subset = image[y_bounds[0]:y_bounds[1],
x_bounds[0]:x_bounds[1]]
x_downsample_factor = max(round(subset.shape[1] / x_resolution / 3.), 1)
y_downsample_factor = max(round(subset.shape[0] / y_resolution / 3.), 1)
subset = subset[::x_downsample_factor, ::y_downsample_factor]
subset = scipy.misc.imresize(subset, (x_resolution, y_resolution),
interp='nearest')
bounds = image_x_axis[x_bounds[0]:x_bounds[1]]
dw = np.max(bounds) - np.min(bounds)
bounds = image_y_axis[y_bounds[0]:y_bounds[1]]
dh = np.max(bounds) - np.min(bounds)
return {'data' : {'image': [subset],
'x': [image_x_axis[x_bounds[0]]],
'y': [image_y_axis[y_bounds[0]]],
'dw': [dw],
'dh': [dh],
}
}
| bsd-3-clause |
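The stride computation inside `downsample()` above — keep roughly 3x oversampling before the final `imresize` — can be isolated and sanity-checked on its own. A sketch with an illustrative function name:

```python
def downsample_factors(shape, x_resolution, y_resolution):
    """Stride factors used in downsample(): decimate the subset so that
    about three source pixels remain per output pixel, never below 1."""
    x_factor = max(round(shape[1] / x_resolution / 3.0), 1)
    y_factor = max(round(shape[0] / y_resolution / 3.0), 1)
    return int(x_factor), int(y_factor)

print(downsample_factors((3000, 3000), 100, 100))  # (10, 10)
print(downsample_factors((50, 50), 100, 100))      # (1, 1)
```

When the requested resolution already exceeds the source region (second call), the factor clamps to 1 and the slicing step `subset[::1, ::1]` becomes a no-op, leaving the resize to do the work.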
jabesq/home-assistant | tests/components/namecheapdns/test_init.py | 25 | 2281 | """Test the NamecheapDNS component."""
import asyncio
from datetime import timedelta
import pytest
from homeassistant.setup import async_setup_component
from homeassistant.components import namecheapdns
from homeassistant.util.dt import utcnow
from tests.common import async_fire_time_changed
HOST = 'test'
DOMAIN = 'bla'
PASSWORD = 'abcdefgh'
@pytest.fixture
def setup_namecheapdns(hass, aioclient_mock):
"""Fixture that sets up NamecheapDNS."""
aioclient_mock.get(namecheapdns.UPDATE_URL, params={
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD,
}, text='<interface-response><ErrCount>0</ErrCount></interface-response>')
hass.loop.run_until_complete(async_setup_component(
hass, namecheapdns.DOMAIN, {
'namecheapdns': {
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD,
}
}))
@asyncio.coroutine
def test_setup(hass, aioclient_mock):
"""Test setup works if update passes."""
aioclient_mock.get(namecheapdns.UPDATE_URL, params={
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD
}, text='<interface-response><ErrCount>0</ErrCount></interface-response>')
result = yield from async_setup_component(hass, namecheapdns.DOMAIN, {
'namecheapdns': {
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD,
}
})
assert result
assert aioclient_mock.call_count == 1
async_fire_time_changed(hass, utcnow() + timedelta(minutes=5))
yield from hass.async_block_till_done()
assert aioclient_mock.call_count == 2
@asyncio.coroutine
def test_setup_fails_if_update_fails(hass, aioclient_mock):
"""Test setup fails if first update fails."""
aioclient_mock.get(namecheapdns.UPDATE_URL, params={
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD,
}, text='<interface-response><ErrCount>1</ErrCount></interface-response>')
result = yield from async_setup_component(hass, namecheapdns.DOMAIN, {
'namecheapdns': {
'host': HOST,
'domain': DOMAIN,
'password': PASSWORD,
}
})
assert not result
assert aioclient_mock.call_count == 1
| apache-2.0 |
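The tests above encode the success criterion as an `<ErrCount>` of 0 in the mocked XML response. A minimal sketch of that check using the standard library — this is an assumption-labeled illustration of the criterion, not the component's actual parser:

```python
import xml.etree.ElementTree as ET


def update_succeeded(body):
    """Treat a namecheap dynamic-DNS response as successful when the
    <ErrCount> child of <interface-response> is 0."""
    root = ET.fromstring(body)
    return int(root.findtext("ErrCount")) == 0


ok = '<interface-response><ErrCount>0</ErrCount></interface-response>'
bad = '<interface-response><ErrCount>1</ErrCount></interface-response>'
print(update_succeeded(ok), update_succeeded(bad))  # True False
```

This matches the two test cases: setup proceeds for the `ErrCount>0<` body and fails for the `ErrCount>1<` body.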
ryfeus/lambda-packs | Tensorflow_OpenCV_Nightly/source/tensorflow/contrib/learn/python/learn/learn_io/generator_io.py | 52 | 5196 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Methods to allow generator of dict with numpy arrays."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import Container
from types import FunctionType
from types import GeneratorType
from tensorflow.contrib.learn.python.learn.dataframe.queues import feeding_functions
def generator_input_fn(x,
target_key=None,
batch_size=128,
num_epochs=1,
shuffle=True,
queue_capacity=1000,
num_threads=1):
"""Returns input function that would dicts of numpy arrays
yielded from a generator.
It is assumed that every dict yielded from the dictionary represents
a single sample. The generator should consume a single epoch of the data.
This returns a function outputting `features` and `target` based on the dict
of numpy arrays. The dict `features` has the same keys as an element yielded
from x.
Example:
```python
def generator():
for index in range(10):
yield {'height': np.random.randint(32,36),
'age': np.random.randint(18, 80),
'label': np.ones(1)}
with tf.Session() as session:
input_fn = generator_io.generator_input_fn(
generator, target_key="label", batch_size=2, shuffle=False,
num_epochs=1)
```
Args:
x: Generator Function, returns a `Generator` that will yield the data
in `dict` of numpy arrays
target_key: String or Container of Strings, the key or Container of keys of
the numpy arrays in x dictionaries to use as target.
batch_size: Integer, size of batches to return.
num_epochs: Integer, number of epochs to iterate over data. If `None` will
run forever.
shuffle: Boolean, if True shuffles the queue. Avoid shuffle at prediction
time.
queue_capacity: Integer, size of queue to accumulate.
num_threads: Integer, number of threads used for reading and enqueueing.
Returns:
Function, that returns a feature `dict` with `Tensors` and an optional
label `dict` with `Tensors`, or if target_key is `str` label is a `Tensor`
Raises:
TypeError: `x` is not `FunctionType`.
TypeError: `x()` is not `GeneratorType`.
TypeError: `next(x())` is not `dict`.
TypeError: `target_key` is not `str` or `target_key` is not `Container`
of `str`.
KeyError: `target_key` not a key or `target_key[index]` not in next(`x()`).
KeyError: `key` mismatch between dicts emitted from `x()`
"""
if not isinstance(x, FunctionType):
raise TypeError(
'x must be generator function; got {}'.format(type(x).__name__))
generator = x()
if not isinstance(generator, GeneratorType):
raise TypeError(
'x() must be generator; got {}'.format(type(generator).__name__))
data = next(generator)
if not isinstance(data, dict):
raise TypeError('x() must yield dict; got {}'.format(type(data).__name__))
input_keys = sorted(next(x()).keys())
if target_key is not None:
if isinstance(target_key, str):
target_key = [target_key]
elif isinstance(target_key, Container):
for item in target_key:
if not isinstance(item, str):
raise TypeError('target_key must be str or Container of str; got {}'.
format(type(item).__name__))
if item not in input_keys:
raise KeyError(
'target_key not in yielded dict. Expected {} keys; got {}'.format(
input_keys, item))
else:
raise TypeError('target_key must be str or Container of str; got {}'.
format(type(target_key).__name__))
def _generator_input_fn():
"""generator input function."""
queue = feeding_functions.enqueue_data(
x,
queue_capacity,
shuffle=shuffle,
num_threads=num_threads,
enqueue_size=batch_size,
num_epochs=num_epochs)
features = (queue.dequeue_many(batch_size)
if num_epochs is None else queue.dequeue_up_to(batch_size))
if not isinstance(features, list):
features = [features]
features = dict(zip(input_keys, features))
if target_key is not None:
if len(target_key) > 1:
target = {key: features.pop(key) for key in target_key}
else:
target = features.pop(target_key[0])
return features, target
return features
return _generator_input_fn
| mit |
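Stripped of the TensorFlow queue machinery, `_generator_input_fn` pulls per-sample dicts from a fresh generator and regroups them into column-wise batches keyed by the sorted input keys. A pure-Python analogue of that regrouping (illustrative names, no queueing or shuffling):

```python
from itertools import islice


def batches(generator_fn, batch_size):
    """Pull dicts from a fresh generator and regroup them into
    column-wise batches, like the dequeue in _generator_input_fn."""
    gen = generator_fn()
    while True:
        chunk = list(islice(gen, batch_size))
        if not chunk:
            return
        keys = sorted(chunk[0])
        yield {k: [d[k] for d in chunk] for k in keys}


def gen():
    for i in range(5):
        yield {"age": 18 + i, "label": 1}


out = list(batches(gen, 2))
print(len(out), out[0]["age"])  # 3 [18, 19]
```

Note the final batch is short (one sample), which is why the real implementation switches between `dequeue_many` and `dequeue_up_to` depending on `num_epochs`.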
caplio/dlxj-kernel | tools/perf/scripts/python/netdev-times.py | 11271 | 15048 | # Display a process of packets and processed time.
# It helps us to investigate networking or network device.
#
# options
# tx: show only tx chart
# rx: show only rx chart
# dev=: show only thing related to specified device
# debug: work with debug mode. It shows buffer status.
import os
import sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
all_event_list = []; # insert all tracepoint event related with this script
irq_dic = {}; # key is cpu and value is a list which stacks irqs
# which raise NET_RX softirq
net_rx_dic = {}; # key is cpu and value include time of NET_RX softirq-entry
# and a list which stacks receive
receive_hunk_list = []; # a list which include a sequence of receive events
rx_skb_list = []; # received packet list for matching
# skb_copy_datagram_iovec
buffer_budget = 65536; # the budget of rx_skb_list, tx_queue_list and
# tx_xmit_list
of_count_rx_skb_list = 0; # overflow count
tx_queue_list = []; # list of packets which pass through dev_queue_xmit
of_count_tx_queue_list = 0; # overflow count
tx_xmit_list = []; # list of packets which pass through dev_hard_start_xmit
of_count_tx_xmit_list = 0; # overflow count
tx_free_list = []; # list of packets which is freed
# options
show_tx = 0;
show_rx = 0;
dev = 0; # store a name of device specified by option "dev="
debug = 0;
# indices of event_info tuple
EINFO_IDX_NAME= 0
EINFO_IDX_CONTEXT=1
EINFO_IDX_CPU= 2
EINFO_IDX_TIME= 3
EINFO_IDX_PID= 4
EINFO_IDX_COMM= 5
# Calculate a time interval(msec) from src(nsec) to dst(nsec)
def diff_msec(src, dst):
return (dst - src) / 1000000.0
# Display a process of transmitting a packet
def print_transmit(hunk):
if dev != 0 and hunk['dev'].find(dev) < 0:
return
print "%7s %5d %6d.%06dsec %12.3fmsec %12.3fmsec" % \
(hunk['dev'], hunk['len'],
nsecs_secs(hunk['queue_t']),
nsecs_nsecs(hunk['queue_t'])/1000,
diff_msec(hunk['queue_t'], hunk['xmit_t']),
diff_msec(hunk['xmit_t'], hunk['free_t']))
# Format for displaying rx packet processing
PF_IRQ_ENTRY= " irq_entry(+%.3fmsec irq=%d:%s)"
PF_SOFT_ENTRY=" softirq_entry(+%.3fmsec)"
PF_NAPI_POLL= " napi_poll_exit(+%.3fmsec %s)"
PF_JOINT= " |"
PF_WJOINT= " | |"
PF_NET_RECV= " |---netif_receive_skb(+%.3fmsec skb=%x len=%d)"
PF_NET_RX= " |---netif_rx(+%.3fmsec skb=%x)"
PF_CPY_DGRAM= " | skb_copy_datagram_iovec(+%.3fmsec %d:%s)"
PF_KFREE_SKB= " | kfree_skb(+%.3fmsec location=%x)"
PF_CONS_SKB= " | consume_skb(+%.3fmsec)"
# Display a process of received packets and interrupts associated with
# a NET_RX softirq
def print_receive(hunk):
show_hunk = 0
irq_list = hunk['irq_list']
cpu = irq_list[0]['cpu']
base_t = irq_list[0]['irq_ent_t']
# check if this hunk should be showed
if dev != 0:
for i in range(len(irq_list)):
if irq_list[i]['name'].find(dev) >= 0:
show_hunk = 1
break
else:
show_hunk = 1
if show_hunk == 0:
return
print "%d.%06dsec cpu=%d" % \
(nsecs_secs(base_t), nsecs_nsecs(base_t)/1000, cpu)
for i in range(len(irq_list)):
print PF_IRQ_ENTRY % \
(diff_msec(base_t, irq_list[i]['irq_ent_t']),
irq_list[i]['irq'], irq_list[i]['name'])
print PF_JOINT
irq_event_list = irq_list[i]['event_list']
for j in range(len(irq_event_list)):
irq_event = irq_event_list[j]
if irq_event['event'] == 'netif_rx':
print PF_NET_RX % \
(diff_msec(base_t, irq_event['time']),
irq_event['skbaddr'])
print PF_JOINT
print PF_SOFT_ENTRY % \
diff_msec(base_t, hunk['sirq_ent_t'])
print PF_JOINT
event_list = hunk['event_list']
for i in range(len(event_list)):
event = event_list[i]
if event['event_name'] == 'napi_poll':
print PF_NAPI_POLL % \
(diff_msec(base_t, event['event_t']), event['dev'])
if i == len(event_list) - 1:
print ""
else:
print PF_JOINT
else:
print PF_NET_RECV % \
(diff_msec(base_t, event['event_t']), event['skbaddr'],
event['len'])
if 'comm' in event.keys():
print PF_WJOINT
print PF_CPY_DGRAM % \
(diff_msec(base_t, event['comm_t']),
event['pid'], event['comm'])
elif 'handle' in event.keys():
print PF_WJOINT
if event['handle'] == "kfree_skb":
print PF_KFREE_SKB % \
(diff_msec(base_t,
event['comm_t']),
event['location'])
elif event['handle'] == "consume_skb":
print PF_CONS_SKB % \
diff_msec(base_t,
event['comm_t'])
print PF_JOINT
def trace_begin():
global show_tx
global show_rx
global dev
global debug
for i in range(len(sys.argv)):
if i == 0:
continue
arg = sys.argv[i]
if arg == 'tx':
show_tx = 1
elif arg =='rx':
show_rx = 1
elif arg.find('dev=',0, 4) >= 0:
dev = arg[4:]
elif arg == 'debug':
debug = 1
if show_tx == 0 and show_rx == 0:
show_tx = 1
show_rx = 1
def trace_end():
# order all events in time
all_event_list.sort(lambda a,b :cmp(a[EINFO_IDX_TIME],
b[EINFO_IDX_TIME]))
# process all events
for i in range(len(all_event_list)):
event_info = all_event_list[i]
name = event_info[EINFO_IDX_NAME]
if name == 'irq__softirq_exit':
handle_irq_softirq_exit(event_info)
elif name == 'irq__softirq_entry':
handle_irq_softirq_entry(event_info)
elif name == 'irq__softirq_raise':
handle_irq_softirq_raise(event_info)
elif name == 'irq__irq_handler_entry':
handle_irq_handler_entry(event_info)
elif name == 'irq__irq_handler_exit':
handle_irq_handler_exit(event_info)
elif name == 'napi__napi_poll':
handle_napi_poll(event_info)
elif name == 'net__netif_receive_skb':
handle_netif_receive_skb(event_info)
elif name == 'net__netif_rx':
handle_netif_rx(event_info)
elif name == 'skb__skb_copy_datagram_iovec':
handle_skb_copy_datagram_iovec(event_info)
elif name == 'net__net_dev_queue':
handle_net_dev_queue(event_info)
elif name == 'net__net_dev_xmit':
handle_net_dev_xmit(event_info)
elif name == 'skb__kfree_skb':
handle_kfree_skb(event_info)
elif name == 'skb__consume_skb':
handle_consume_skb(event_info)
# display receive hunks
if show_rx:
for i in range(len(receive_hunk_list)):
print_receive(receive_hunk_list[i])
# display transmit hunks
if show_tx:
print " dev len Qdisc " \
" netdevice free"
for i in range(len(tx_free_list)):
print_transmit(tx_free_list[i])
if debug:
print "debug buffer status"
print "----------------------------"
print "xmit Qdisc:remain:%d overflow:%d" % \
(len(tx_queue_list), of_count_tx_queue_list)
print "xmit netdevice:remain:%d overflow:%d" % \
(len(tx_xmit_list), of_count_tx_xmit_list)
print "receive:remain:%d overflow:%d" % \
(len(rx_skb_list), of_count_rx_skb_list)
# called from perf, when it finds a corresponding event
def irq__softirq_entry(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__softirq_exit(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__softirq_raise(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__irq_handler_entry(name, context, cpu, sec, nsec, pid, comm,
irq, irq_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
irq, irq_name)
all_event_list.append(event_info)
def irq__irq_handler_exit(name, context, cpu, sec, nsec, pid, comm, irq, ret):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, irq, ret)
all_event_list.append(event_info)
def napi__napi_poll(name, context, cpu, sec, nsec, pid, comm, napi, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
napi, dev_name)
all_event_list.append(event_info)
def net__netif_receive_skb(name, context, cpu, sec, nsec, pid, comm, skbaddr,
skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__netif_rx(name, context, cpu, sec, nsec, pid, comm, skbaddr,
skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__net_dev_queue(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__net_dev_xmit(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen, rc, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, rc ,dev_name)
all_event_list.append(event_info)
def skb__kfree_skb(name, context, cpu, sec, nsec, pid, comm,
skbaddr, protocol, location):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, protocol, location)
all_event_list.append(event_info)
def skb__consume_skb(name, context, cpu, sec, nsec, pid, comm, skbaddr):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr)
all_event_list.append(event_info)
def skb__skb_copy_datagram_iovec(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen)
all_event_list.append(event_info)
def handle_irq_handler_entry(event_info):
(name, context, cpu, time, pid, comm, irq, irq_name) = event_info
if cpu not in irq_dic.keys():
irq_dic[cpu] = []
irq_record = {'irq':irq, 'name':irq_name, 'cpu':cpu, 'irq_ent_t':time}
irq_dic[cpu].append(irq_record)
def handle_irq_handler_exit(event_info):
(name, context, cpu, time, pid, comm, irq, ret) = event_info
if cpu not in irq_dic.keys():
return
irq_record = irq_dic[cpu].pop()
if irq != irq_record['irq']:
return
irq_record.update({'irq_ext_t':time})
# if an irq doesn't include NET_RX softirq, drop.
if 'event_list' in irq_record.keys():
irq_dic[cpu].append(irq_record)
def handle_irq_softirq_raise(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
if cpu not in irq_dic.keys() \
or len(irq_dic[cpu]) == 0:
return
irq_record = irq_dic[cpu].pop()
if 'event_list' in irq_record.keys():
irq_event_list = irq_record['event_list']
else:
irq_event_list = []
irq_event_list.append({'time':time, 'event':'sirq_raise'})
irq_record.update({'event_list':irq_event_list})
irq_dic[cpu].append(irq_record)
def handle_irq_softirq_entry(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
net_rx_dic[cpu] = {'sirq_ent_t':time, 'event_list':[]}
def handle_irq_softirq_exit(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
irq_list = []
event_list = 0
if cpu in irq_dic.keys():
irq_list = irq_dic[cpu]
del irq_dic[cpu]
if cpu in net_rx_dic.keys():
sirq_ent_t = net_rx_dic[cpu]['sirq_ent_t']
event_list = net_rx_dic[cpu]['event_list']
del net_rx_dic[cpu]
if irq_list == [] or event_list == 0:
return
rec_data = {'sirq_ent_t':sirq_ent_t, 'sirq_ext_t':time,
'irq_list':irq_list, 'event_list':event_list}
# merge information related to a NET_RX softirq
receive_hunk_list.append(rec_data)
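As a standalone illustration (not part of the perf script itself, and with purely made-up values), one merged record appended to `receive_hunk_list` has this shape, combining the hard-irq records with the NET_RX softirq event list:

```python
# Hypothetical example of one merged NET_RX record; field names come from
# handle_irq_softirq_exit above, timestamp and skb values are illustrative.
rec_data = {
    'sirq_ent_t': 1000,   # softirq entry timestamp (ns)
    'sirq_ext_t': 1500,   # softirq exit timestamp (ns)
    'irq_list': [{'irq': 42, 'name': 'eth0', 'cpu': 0,
                  'irq_ent_t': 900, 'irq_ext_t': 950,
                  'event_list': [{'time': 940, 'event': 'sirq_raise'}]}],
    'event_list': [{'event_name': 'netif_receive_skb',
                    'event_t': 1200, 'skbaddr': 0xdead, 'len': 1514}],
}

# Softirq residency is simply the difference of the two timestamps.
print(rec_data['sirq_ext_t'] - rec_data['sirq_ent_t'])  # 500
```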
def handle_napi_poll(event_info):
(name, context, cpu, time, pid, comm, napi, dev_name) = event_info
if cpu in net_rx_dic.keys():
event_list = net_rx_dic[cpu]['event_list']
rec_data = {'event_name':'napi_poll',
'dev':dev_name, 'event_t':time}
event_list.append(rec_data)
def handle_netif_rx(event_info):
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
if cpu not in irq_dic.keys() \
or len(irq_dic[cpu]) == 0:
return
irq_record = irq_dic[cpu].pop()
if 'event_list' in irq_record.keys():
irq_event_list = irq_record['event_list']
else:
irq_event_list = []
irq_event_list.append({'time':time, 'event':'netif_rx',
'skbaddr':skbaddr, 'skblen':skblen, 'dev_name':dev_name})
irq_record.update({'event_list':irq_event_list})
irq_dic[cpu].append(irq_record)
def handle_netif_receive_skb(event_info):
global of_count_rx_skb_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
if cpu in net_rx_dic.keys():
rec_data = {'event_name':'netif_receive_skb',
'event_t':time, 'skbaddr':skbaddr, 'len':skblen}
event_list = net_rx_dic[cpu]['event_list']
event_list.append(rec_data)
rx_skb_list.insert(0, rec_data)
if len(rx_skb_list) > buffer_budget:
rx_skb_list.pop()
of_count_rx_skb_list += 1
def handle_net_dev_queue(event_info):
global of_count_tx_queue_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
skb = {'dev':dev_name, 'skbaddr':skbaddr, 'len':skblen, 'queue_t':time}
tx_queue_list.insert(0, skb)
if len(tx_queue_list) > buffer_budget:
tx_queue_list.pop()
of_count_tx_queue_list += 1
def handle_net_dev_xmit(event_info):
global of_count_tx_xmit_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, rc, dev_name) = event_info
if rc == 0: # NETDEV_TX_OK
for i in range(len(tx_queue_list)):
skb = tx_queue_list[i]
if skb['skbaddr'] == skbaddr:
skb['xmit_t'] = time
tx_xmit_list.insert(0, skb)
del tx_queue_list[i]
if len(tx_xmit_list) > buffer_budget:
tx_xmit_list.pop()
of_count_tx_xmit_list += 1
return
def handle_kfree_skb(event_info):
(name, context, cpu, time, pid, comm,
skbaddr, protocol, location) = event_info
for i in range(len(tx_queue_list)):
skb = tx_queue_list[i]
if skb['skbaddr'] == skbaddr:
del tx_queue_list[i]
return
for i in range(len(tx_xmit_list)):
skb = tx_xmit_list[i]
if skb['skbaddr'] == skbaddr:
skb['free_t'] = time
tx_free_list.append(skb)
del tx_xmit_list[i]
return
for i in range(len(rx_skb_list)):
rec_data = rx_skb_list[i]
if rec_data['skbaddr'] == skbaddr:
rec_data.update({'handle':"kfree_skb",
'comm':comm, 'pid':pid, 'comm_t':time})
del rx_skb_list[i]
return
def handle_consume_skb(event_info):
(name, context, cpu, time, pid, comm, skbaddr) = event_info
for i in range(len(tx_xmit_list)):
skb = tx_xmit_list[i]
if skb['skbaddr'] == skbaddr:
skb['free_t'] = time
tx_free_list.append(skb)
del tx_xmit_list[i]
return
def handle_skb_copy_datagram_iovec(event_info):
(name, context, cpu, time, pid, comm, skbaddr, skblen) = event_info
for i in range(len(rx_skb_list)):
rec_data = rx_skb_list[i]
if skbaddr == rec_data['skbaddr']:
rec_data.update({'handle':"skb_copy_datagram_iovec",
'comm':comm, 'pid':pid, 'comm_t':time})
del rx_skb_list[i]
return
| gpl-2.0 |
szaydel/python-daemon-1 | daemon/version/__init__.py | 10 | 1295 | # -*- coding: utf-8 -*-
# daemon/version/__init__.py
# Part of python-daemon, an implementation of PEP 3143.
#
# Copyright © 2008–2010 Ben Finney <ben+python@benfinney.id.au>
# This is free software: you may copy, modify, and/or distribute this work
# under the terms of the Python Software Foundation License, version 2 or
# later as published by the Python Software Foundation.
# No warranty expressed or implied. See the file LICENSE.PSF-2 for details.
""" Version information for the python-daemon distribution. """
from __future__ import absolute_import
from .version_info import version_info
version_info['version_string'] = u"1.6"
version_short = u"%(version_string)s" % version_info
version_full = u"%(version_string)s.r%(revno)s" % version_info
version = version_short
author_name = u"Ben Finney"
author_email = u"ben+python@benfinney.id.au"
author = u"%(author_name)s <%(author_email)s>" % vars()
copyright_year_begin = u"2001"
date = version_info['date'].split(' ', 1)[0]
copyright_year = date.split('-')[0]
copyright_year_range = copyright_year_begin
if copyright_year > copyright_year_begin:
copyright_year_range += u"–%(copyright_year)s" % vars()
copyright = (
u"Copyright © %(copyright_year_range)s %(author)s and others"
) % vars()
license = u"PSF-2+"
| gpl-2.0 |
arkottke/pysra | pysra/__init__.py | 1 | 1512 | # The MIT License (MIT)
#
# Copyright (c) 2016-2018 Albert Kottke
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from pkg_resources import get_distribution
from . import motion
from . import propagation
from . import output
from . import site
from . import variation
__all__ = ["motion", "propagation", "output", "site", "variation"]
__author__ = "Albert Kottke"
__copyright__ = "Copyright 2016 Albert Kottke"
__license__ = "MIT"
__title__ = "pySRA"
__version__ = get_distribution("pySRA").version
| mit |
kimmanuel/d8-training-project | themes/d8training/node_modules/node-gyp/gyp/pylib/gyp/generator/msvs.py | 327 | 132617 | # Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import copy
import ntpath
import os
import posixpath
import re
import subprocess
import sys
import gyp.common
import gyp.easy_xml as easy_xml
import gyp.generator.ninja as ninja_generator
import gyp.MSVSNew as MSVSNew
import gyp.MSVSProject as MSVSProject
import gyp.MSVSSettings as MSVSSettings
import gyp.MSVSToolFile as MSVSToolFile
import gyp.MSVSUserFile as MSVSUserFile
import gyp.MSVSUtil as MSVSUtil
import gyp.MSVSVersion as MSVSVersion
from gyp.common import GypError
from gyp.common import OrderedSet
# TODO: Remove once bots are on 2.7, http://crbug.com/241769
def _import_OrderedDict():
import collections
try:
return collections.OrderedDict
except AttributeError:
import gyp.ordered_dict
return gyp.ordered_dict.OrderedDict
OrderedDict = _import_OrderedDict()
# Regular expression for validating Visual Studio GUIDs. If the GUID
# contains lowercase hex letters, MSVS will be fine. However,
# IncrediBuild BuildConsole will parse the solution file, but then
# silently skip building the target causing hard to track down errors.
# Note that this only happens with the BuildConsole, and does not occur
# if IncrediBuild is executed from inside Visual Studio. This regex
# validates that the string looks like a GUID with all uppercase hex
# letters.
VALID_MSVS_GUID_CHARS = re.compile(r'^[A-F0-9\-]+$')
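A quick standalone check of the pattern (the GUID value below is just an example, not from any real project): it accepts uppercase hex plus dashes and rejects lowercase hex, which is exactly the BuildConsole failure mode described above.

```python
import re

# Same pattern as VALID_MSVS_GUID_CHARS: uppercase hex digits and dashes only.
GUID_RE = re.compile(r'^[A-F0-9\-]+$')

# Uppercase GUIDs pass; lowercase hex letters do not.
print(bool(GUID_RE.match('C9731756-1241-4E5F-A34C-D99D2A14D123')))  # True
print(bool(GUID_RE.match('c9731756-1241-4e5f-a34c-d99d2a14d123')))  # False
```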
generator_default_variables = {
'EXECUTABLE_PREFIX': '',
'EXECUTABLE_SUFFIX': '.exe',
'STATIC_LIB_PREFIX': '',
'SHARED_LIB_PREFIX': '',
'STATIC_LIB_SUFFIX': '.lib',
'SHARED_LIB_SUFFIX': '.dll',
'INTERMEDIATE_DIR': '$(IntDir)',
'SHARED_INTERMEDIATE_DIR': '$(OutDir)obj/global_intermediate',
'OS': 'win',
'PRODUCT_DIR': '$(OutDir)',
'LIB_DIR': '$(OutDir)lib',
'RULE_INPUT_ROOT': '$(InputName)',
'RULE_INPUT_DIRNAME': '$(InputDir)',
'RULE_INPUT_EXT': '$(InputExt)',
'RULE_INPUT_NAME': '$(InputFileName)',
'RULE_INPUT_PATH': '$(InputPath)',
'CONFIGURATION_NAME': '$(ConfigurationName)',
}
# The msvs specific sections that hold paths
generator_additional_path_sections = [
'msvs_cygwin_dirs',
'msvs_props',
]
generator_additional_non_configuration_keys = [
'msvs_cygwin_dirs',
'msvs_cygwin_shell',
'msvs_large_pdb',
'msvs_shard',
'msvs_external_builder',
'msvs_external_builder_out_dir',
'msvs_external_builder_build_cmd',
'msvs_external_builder_clean_cmd',
'msvs_external_builder_clcompile_cmd',
'msvs_enable_winrt',
'msvs_requires_importlibrary',
'msvs_enable_winphone',
'msvs_application_type_revision',
'msvs_target_platform_version',
'msvs_target_platform_minversion',
]
# List of precompiled header related keys.
precomp_keys = [
'msvs_precompiled_header',
'msvs_precompiled_source',
]
cached_username = None
cached_domain = None
# TODO(gspencer): Switch the os.environ calls to be
# win32api.GetDomainName() and win32api.GetUserName() once the
# python version in depot_tools has been updated to work on Vista
# 64-bit.
def _GetDomainAndUserName():
if sys.platform not in ('win32', 'cygwin'):
return ('DOMAIN', 'USERNAME')
global cached_username
global cached_domain
if not cached_domain or not cached_username:
domain = os.environ.get('USERDOMAIN')
username = os.environ.get('USERNAME')
if not domain or not username:
call = subprocess.Popen(['net', 'config', 'Workstation'],
stdout=subprocess.PIPE)
config = call.communicate()[0]
username_re = re.compile(r'^User name\s+(\S+)', re.MULTILINE)
username_match = username_re.search(config)
if username_match:
username = username_match.group(1)
domain_re = re.compile(r'^Logon domain\s+(\S+)', re.MULTILINE)
domain_match = domain_re.search(config)
if domain_match:
domain = domain_match.group(1)
cached_domain = domain
cached_username = username
return (cached_domain, cached_username)
fixpath_prefix = None
def _NormalizedSource(source):
"""Normalize the path.
But not if that gets rid of a variable, as this may expand to something
larger than one directory.
Arguments:
source: The path to be normalized.
Returns:
The normalized path.
"""
normalized = os.path.normpath(source)
if source.count('$') == normalized.count('$'):
source = normalized
return source
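A self-contained sketch of the same guard (mirroring `_NormalizedSource`, not a drop-in replacement): normalization is skipped whenever `normpath` would swallow a `$(...)` variable, e.g. by collapsing `$(Var)/..`.

```python
import os

def normalized_source(source):
    # Normalize the path, unless normpath would drop a '$' variable
    # reference (which may expand to more than one directory level).
    normalized = os.path.normpath(source)
    return normalized if source.count('$') == normalized.count('$') else source

# 'a/$(Var)/..' would collapse to 'a', losing the variable, so it is kept.
print(normalized_source('a/$(Var)/..'))  # a/$(Var)/..
```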
def _FixPath(path):
"""Convert paths to a form that will make sense in a vcproj file.
Arguments:
path: The path to convert, may contain / etc.
Returns:
The path with all slashes made into backslashes.
"""
if fixpath_prefix and path and not os.path.isabs(path) and not path[0] == '$':
path = os.path.join(fixpath_prefix, path)
path = path.replace('/', '\\')
path = _NormalizedSource(path)
if path and path[-1] == '\\':
path = path[:-1]
return path
def _FixPaths(paths):
"""Fix each of the paths of the list."""
return [_FixPath(i) for i in paths]
def _ConvertSourcesToFilterHierarchy(sources, prefix=None, excluded=None,
list_excluded=True, msvs_version=None):
"""Converts a list split source file paths into a vcproj folder hierarchy.
Arguments:
sources: A list of source file paths, each split into path components.
prefix: A list of source file path layers meant to apply to each of sources.
excluded: A set of excluded files.
msvs_version: A MSVSVersion object.
Returns:
A hierarchy of filenames and MSVSProject.Filter objects that matches the
layout of the source tree.
For example:
_ConvertSourcesToFilterHierarchy([['a', 'bob1.c'], ['b', 'bob2.c']],
prefix=['joe'])
-->
[MSVSProject.Filter('a', contents=['joe\\a\\bob1.c']),
MSVSProject.Filter('b', contents=['joe\\b\\bob2.c'])]
"""
if not prefix: prefix = []
result = []
excluded_result = []
folders = OrderedDict()
# Gather files into the final result, excluded, or folders.
for s in sources:
if len(s) == 1:
filename = _NormalizedSource('\\'.join(prefix + s))
if filename in excluded:
excluded_result.append(filename)
else:
result.append(filename)
elif msvs_version and not msvs_version.UsesVcxproj():
# For MSVS 2008 and earlier, we need to process all files before walking
# the sub folders.
if not folders.get(s[0]):
folders[s[0]] = []
folders[s[0]].append(s[1:])
else:
contents = _ConvertSourcesToFilterHierarchy([s[1:]], prefix + [s[0]],
excluded=excluded,
list_excluded=list_excluded,
msvs_version=msvs_version)
contents = MSVSProject.Filter(s[0], contents=contents)
result.append(contents)
# Add a folder for excluded files.
if excluded_result and list_excluded:
excluded_folder = MSVSProject.Filter('_excluded_files',
contents=excluded_result)
result.append(excluded_folder)
if msvs_version and msvs_version.UsesVcxproj():
return result
# Populate all the folders.
for f in folders:
contents = _ConvertSourcesToFilterHierarchy(folders[f], prefix=prefix + [f],
excluded=excluded,
list_excluded=list_excluded,
msvs_version=msvs_version)
contents = MSVSProject.Filter(f, contents=contents)
result.append(contents)
return result
def _ToolAppend(tools, tool_name, setting, value, only_if_unset=False):
if not value: return
_ToolSetOrAppend(tools, tool_name, setting, value, only_if_unset)
def _ToolSetOrAppend(tools, tool_name, setting, value, only_if_unset=False):
# TODO(bradnelson): ugly hack, fix this more generally!!!
if 'Directories' in setting or 'Dependencies' in setting:
if type(value) == str:
value = value.replace('/', '\\')
else:
value = [i.replace('/', '\\') for i in value]
if not tools.get(tool_name):
tools[tool_name] = dict()
tool = tools[tool_name]
if tool.get(setting):
if only_if_unset: return
if type(tool[setting]) == list and type(value) == list:
tool[setting] += value
else:
raise TypeError(
'Appending "%s" to a non-list setting "%s" for tool "%s" is '
'not allowed, previous value: %s' % (
value, setting, tool_name, str(tool[setting])))
else:
tool[setting] = value
def _ConfigPlatform(config_data):
return config_data.get('msvs_configuration_platform', 'Win32')
def _ConfigBaseName(config_name, platform_name):
if config_name.endswith('_' + platform_name):
return config_name[0:-len(platform_name) - 1]
else:
return config_name
def _ConfigFullName(config_name, config_data):
platform_name = _ConfigPlatform(config_data)
return '%s|%s' % (_ConfigBaseName(config_name, platform_name), platform_name)
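A simplified standalone sketch of the naming scheme above (the real code reads the platform from `msvs_configuration_platform` in the config dict): a trailing `_<platform>` suffix is stripped from the configuration name, then the two are joined as `Name|Platform`.

```python
def config_full_name(config_name, platform_name):
    # Mirrors _ConfigBaseName/_ConfigFullName: strip a trailing
    # '_<platform>' suffix, then join as 'Name|Platform'.
    if config_name.endswith('_' + platform_name):
        config_name = config_name[:-len(platform_name) - 1]
    return '%s|%s' % (config_name, platform_name)

print(config_full_name('Debug_x64', 'x64'))    # Debug|x64
print(config_full_name('Release', 'Win32'))    # Release|Win32
```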
def _ConfigWindowsTargetPlatformVersion(config_data):
ver = config_data.get('msvs_windows_target_platform_version')
if not ver or re.match(r'^\d+', ver):
return ver
for key in [r'HKLM\Software\Microsoft\Microsoft SDKs\Windows\%s',
r'HKLM\Software\Wow6432Node\Microsoft\Microsoft SDKs\Windows\%s']:
sdkdir = MSVSVersion._RegistryGetValue(key % ver, 'InstallationFolder')
if not sdkdir:
continue
version = MSVSVersion._RegistryGetValue(key % ver, 'ProductVersion') or ''
# find a matching entry in sdkdir\include
names = sorted([x for x in os.listdir(r'%s\include' % sdkdir) \
if x.startswith(version)], reverse = True)
return names[0]
def _BuildCommandLineForRuleRaw(spec, cmd, cygwin_shell, has_input_path,
quote_cmd, do_setup_env):
if [x for x in cmd if '$(InputDir)' in x]:
input_dir_preamble = (
'set INPUTDIR=$(InputDir)\n'
'if NOT DEFINED INPUTDIR set INPUTDIR=.\\\n'
'set INPUTDIR=%INPUTDIR:~0,-1%\n'
)
else:
input_dir_preamble = ''
if cygwin_shell:
# Find path to cygwin.
cygwin_dir = _FixPath(spec.get('msvs_cygwin_dirs', ['.'])[0])
# Prepare command.
direct_cmd = cmd
direct_cmd = [i.replace('$(IntDir)',
'`cygpath -m "${INTDIR}"`') for i in direct_cmd]
direct_cmd = [i.replace('$(OutDir)',
'`cygpath -m "${OUTDIR}"`') for i in direct_cmd]
direct_cmd = [i.replace('$(InputDir)',
'`cygpath -m "${INPUTDIR}"`') for i in direct_cmd]
if has_input_path:
direct_cmd = [i.replace('$(InputPath)',
'`cygpath -m "${INPUTPATH}"`')
for i in direct_cmd]
direct_cmd = ['\\"%s\\"' % i.replace('"', '\\\\\\"') for i in direct_cmd]
# direct_cmd = gyp.common.EncodePOSIXShellList(direct_cmd)
direct_cmd = ' '.join(direct_cmd)
# TODO(quote): regularize quoting path names throughout the module
cmd = ''
if do_setup_env:
cmd += 'call "$(ProjectDir)%(cygwin_dir)s\\setup_env.bat" && '
cmd += 'set CYGWIN=nontsec&& '
if direct_cmd.find('NUMBER_OF_PROCESSORS') >= 0:
cmd += 'set /a NUMBER_OF_PROCESSORS_PLUS_1=%%NUMBER_OF_PROCESSORS%%+1&& '
if direct_cmd.find('INTDIR') >= 0:
cmd += 'set INTDIR=$(IntDir)&& '
if direct_cmd.find('OUTDIR') >= 0:
cmd += 'set OUTDIR=$(OutDir)&& '
if has_input_path and direct_cmd.find('INPUTPATH') >= 0:
cmd += 'set INPUTPATH=$(InputPath) && '
cmd += 'bash -c "%(cmd)s"'
cmd = cmd % {'cygwin_dir': cygwin_dir,
'cmd': direct_cmd}
return input_dir_preamble + cmd
else:
# Convert cat --> type to mimic unix.
if cmd[0] == 'cat':
command = ['type']
else:
command = [cmd[0].replace('/', '\\')]
# Add call before command to ensure that commands can be tied together one
# after the other without aborting in Incredibuild, since IB makes a bat
# file out of the raw command string, and some commands (like python) are
# actually batch files themselves.
command.insert(0, 'call')
# Fix the paths
# TODO(quote): This is a really ugly heuristic, and will miss path fixing
# for arguments like "--arg=path" or "/opt:path".
# If the argument starts with a slash or dash, it's probably a command line
# switch
arguments = [i if (i[:1] in "/-") else _FixPath(i) for i in cmd[1:]]
arguments = [i.replace('$(InputDir)', '%INPUTDIR%') for i in arguments]
arguments = [MSVSSettings.FixVCMacroSlashes(i) for i in arguments]
if quote_cmd:
# Support a mode for using cmd directly.
# Convert any paths to native form (first element is used directly).
# TODO(quote): regularize quoting path names throughout the module
arguments = ['"%s"' % i for i in arguments]
# Collapse into a single command.
return input_dir_preamble + ' '.join(command + arguments)
def _BuildCommandLineForRule(spec, rule, has_input_path, do_setup_env):
# Currently this weird argument munging is used to duplicate the way a
# python script would need to be run as part of the chrome tree.
# Eventually we should add some sort of rule_default option to set this
# per project. For now the behavior chrome needs is the default.
mcs = rule.get('msvs_cygwin_shell')
if mcs is None:
mcs = int(spec.get('msvs_cygwin_shell', 1))
elif isinstance(mcs, str):
mcs = int(mcs)
quote_cmd = int(rule.get('msvs_quote_cmd', 1))
return _BuildCommandLineForRuleRaw(spec, rule['action'], mcs, has_input_path,
quote_cmd, do_setup_env=do_setup_env)
def _AddActionStep(actions_dict, inputs, outputs, description, command):
"""Merge action into an existing list of actions.
Care must be taken so that actions which have overlapping inputs either don't
get assigned to the same input, or get collapsed into one.
Arguments:
actions_dict: dictionary keyed on input name, which maps to a list of
dicts describing the actions attached to that input file.
inputs: list of inputs
outputs: list of outputs
description: description of the action
command: command line to execute
"""
# Require there to be at least one input (call sites will ensure this).
assert inputs
action = {
'inputs': inputs,
'outputs': outputs,
'description': description,
'command': command,
}
# Pick where to stick this action.
# While less than optimal in terms of build time, attach them to the first
# input for now.
chosen_input = inputs[0]
# Add it there.
if chosen_input not in actions_dict:
actions_dict[chosen_input] = []
actions_dict[chosen_input].append(action)
def _AddCustomBuildToolForMSVS(p, spec, primary_input,
inputs, outputs, description, cmd):
"""Add a custom build tool to execute something.
Arguments:
p: the target project
spec: the target project dict
primary_input: input file to attach the build tool to
inputs: list of inputs
outputs: list of outputs
description: description of the action
cmd: command line to execute
"""
inputs = _FixPaths(inputs)
outputs = _FixPaths(outputs)
tool = MSVSProject.Tool(
'VCCustomBuildTool',
{'Description': description,
'AdditionalDependencies': ';'.join(inputs),
'Outputs': ';'.join(outputs),
'CommandLine': cmd,
})
# Add to the properties of primary input for each config.
for config_name, c_data in spec['configurations'].iteritems():
p.AddFileConfig(_FixPath(primary_input),
_ConfigFullName(config_name, c_data), tools=[tool])
def _AddAccumulatedActionsToMSVS(p, spec, actions_dict):
"""Add actions accumulated into an actions_dict, merging as needed.
Arguments:
p: the target project
spec: the target project dict
actions_dict: dictionary keyed on input name, which maps to a list of
dicts describing the actions attached to that input file.
"""
for primary_input in actions_dict:
inputs = OrderedSet()
outputs = OrderedSet()
descriptions = []
commands = []
for action in actions_dict[primary_input]:
inputs.update(OrderedSet(action['inputs']))
outputs.update(OrderedSet(action['outputs']))
descriptions.append(action['description'])
commands.append(action['command'])
# Add the custom build step for one input file.
description = ', and also '.join(descriptions)
command = '\r\n'.join(commands)
_AddCustomBuildToolForMSVS(p, spec,
primary_input=primary_input,
inputs=inputs,
outputs=outputs,
description=description,
cmd=command)
def _RuleExpandPath(path, input_file):
"""Given the input file to which a rule applied, string substitute a path.
Arguments:
path: a path to string expand
input_file: the file to which the rule applied.
Returns:
The string substituted path.
"""
path = path.replace('$(InputName)',
os.path.splitext(os.path.split(input_file)[1])[0])
path = path.replace('$(InputDir)', os.path.dirname(input_file))
path = path.replace('$(InputExt)',
os.path.splitext(os.path.split(input_file)[1])[1])
path = path.replace('$(InputFileName)', os.path.split(input_file)[1])
path = path.replace('$(InputPath)', input_file)
return path
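The substitutions above can be exercised with a standalone mirror of `_RuleExpandPath` (the `.idl` input below is just an illustrative file name):

```python
import os

def rule_expand_path(path, input_file):
    # Same five substitutions as _RuleExpandPath above.
    base = os.path.split(input_file)[1]
    name, ext = os.path.splitext(base)
    return (path.replace('$(InputName)', name)
                .replace('$(InputDir)', os.path.dirname(input_file))
                .replace('$(InputExt)', ext)
                .replace('$(InputFileName)', base)
                .replace('$(InputPath)', input_file))

print(rule_expand_path('$(InputName).h', 'src/ax_enums.idl'))  # ax_enums.h
```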
def _FindRuleTriggerFiles(rule, sources):
"""Find the list of files which a particular rule applies to.
Arguments:
rule: the rule in question
sources: the set of all known source files for this project
Returns:
The list of sources that trigger a particular rule.
"""
return rule.get('rule_sources', [])
def _RuleInputsAndOutputs(rule, trigger_file):
"""Find the inputs and outputs generated by a rule.
Arguments:
rule: the rule in question.
trigger_file: the main trigger for this rule.
Returns:
The pair of (inputs, outputs) involved in this rule.
"""
raw_inputs = _FixPaths(rule.get('inputs', []))
raw_outputs = _FixPaths(rule.get('outputs', []))
inputs = OrderedSet()
outputs = OrderedSet()
inputs.add(trigger_file)
for i in raw_inputs:
inputs.add(_RuleExpandPath(i, trigger_file))
for o in raw_outputs:
outputs.add(_RuleExpandPath(o, trigger_file))
return (inputs, outputs)
def _GenerateNativeRulesForMSVS(p, rules, output_dir, spec, options):
"""Generate a native rules file.
Arguments:
p: the target project
rules: the set of rules to include
output_dir: the directory in which the project/gyp resides
spec: the project dict
options: global generator options
"""
rules_filename = '%s%s.rules' % (spec['target_name'],
options.suffix)
rules_file = MSVSToolFile.Writer(os.path.join(output_dir, rules_filename),
spec['target_name'])
# Add each rule.
for r in rules:
rule_name = r['rule_name']
rule_ext = r['extension']
inputs = _FixPaths(r.get('inputs', []))
outputs = _FixPaths(r.get('outputs', []))
# Skip a rule with no action and no inputs.
if 'action' not in r and not r.get('rule_sources', []):
continue
cmd = _BuildCommandLineForRule(spec, r, has_input_path=True,
do_setup_env=True)
rules_file.AddCustomBuildRule(name=rule_name,
description=r.get('message', rule_name),
extensions=[rule_ext],
additional_dependencies=inputs,
outputs=outputs,
cmd=cmd)
# Write out rules file.
rules_file.WriteIfChanged()
# Add rules file to project.
p.AddToolFile(rules_filename)
def _Cygwinify(path):
path = path.replace('$(OutDir)', '$(OutDirCygwin)')
path = path.replace('$(IntDir)', '$(IntDirCygwin)')
return path
def _GenerateExternalRules(rules, output_dir, spec,
sources, options, actions_to_add):
"""Generate an external makefile to do a set of rules.
Arguments:
rules: the list of rules to include
output_dir: path containing project and gyp files
spec: project specification data
sources: set of sources known
options: global generator options
actions_to_add: The list of actions we will add to.
"""
filename = '%s_rules%s.mk' % (spec['target_name'], options.suffix)
mk_file = gyp.common.WriteOnDiff(os.path.join(output_dir, filename))
# Find cygwin style versions of some paths.
mk_file.write('OutDirCygwin:=$(shell cygpath -u "$(OutDir)")\n')
mk_file.write('IntDirCygwin:=$(shell cygpath -u "$(IntDir)")\n')
# Gather stuff needed to emit all: target.
all_inputs = OrderedSet()
all_outputs = OrderedSet()
all_output_dirs = OrderedSet()
first_outputs = []
for rule in rules:
trigger_files = _FindRuleTriggerFiles(rule, sources)
for tf in trigger_files:
inputs, outputs = _RuleInputsAndOutputs(rule, tf)
all_inputs.update(OrderedSet(inputs))
all_outputs.update(OrderedSet(outputs))
# Only use one target from each rule as the dependency for
# 'all' so we don't try to build each rule multiple times.
first_outputs.append(list(outputs)[0])
# Get the unique output directories for this rule.
output_dirs = [os.path.split(i)[0] for i in outputs]
for od in output_dirs:
all_output_dirs.add(od)
first_outputs_cyg = [_Cygwinify(i) for i in first_outputs]
# Write out all: target, including mkdir for each output directory.
mk_file.write('all: %s\n' % ' '.join(first_outputs_cyg))
for od in all_output_dirs:
if od:
mk_file.write('\tmkdir -p `cygpath -u "%s"`\n' % od)
mk_file.write('\n')
# Define how each output is generated.
for rule in rules:
trigger_files = _FindRuleTriggerFiles(rule, sources)
for tf in trigger_files:
# Get all the inputs and outputs for this rule for this trigger file.
inputs, outputs = _RuleInputsAndOutputs(rule, tf)
inputs = [_Cygwinify(i) for i in inputs]
outputs = [_Cygwinify(i) for i in outputs]
# Prepare the command line for this rule.
cmd = [_RuleExpandPath(c, tf) for c in rule['action']]
cmd = ['"%s"' % i for i in cmd]
cmd = ' '.join(cmd)
# Add it to the makefile.
mk_file.write('%s: %s\n' % (' '.join(outputs), ' '.join(inputs)))
mk_file.write('\t%s\n\n' % cmd)
# Close up the file.
mk_file.close()
# Add makefile to list of sources.
sources.add(filename)
# Add a build action to call makefile.
cmd = ['make',
'OutDir=$(OutDir)',
'IntDir=$(IntDir)',
'-j', '${NUMBER_OF_PROCESSORS_PLUS_1}',
'-f', filename]
cmd = _BuildCommandLineForRuleRaw(spec, cmd, True, False, True, True)
# Insert makefile as 0'th input, so it gets the action attached there,
# as this is easier to understand from in the IDE.
all_inputs = list(all_inputs)
all_inputs.insert(0, filename)
_AddActionStep(actions_to_add,
inputs=_FixPaths(all_inputs),
outputs=_FixPaths(all_outputs),
description='Running external rules for %s' %
spec['target_name'],
command=cmd)
def _EscapeEnvironmentVariableExpansion(s):
"""Escapes % characters.
Escapes any % characters so that Windows-style environment variable
expansions will leave them alone.
See http://connect.microsoft.com/VisualStudio/feedback/details/106127/cl-d-name-text-containing-percentage-characters-doesnt-compile
to understand why we have to do this.
Args:
s: The string to be escaped.
Returns:
The escaped string.
"""
s = s.replace('%', '%%')
return s
quote_replacer_regex = re.compile(r'(\\*)"')
def _EscapeCommandLineArgumentForMSVS(s):
"""Escapes a Windows command-line argument.
So that the Win32 CommandLineToArgv function will turn the escaped result back
into the original string.
See http://msdn.microsoft.com/en-us/library/17w5ykft.aspx
("Parsing C++ Command-Line Arguments") to understand why we have to do
this.
Args:
s: the string to be escaped.
Returns:
the escaped string.
"""
def _Replace(match):
# For a literal quote, CommandLineToArgv requires an odd number of
# backslashes preceding it, and it produces half as many literal backslashes
# (rounded down). So we need to produce 2n+1 backslashes.
return 2 * match.group(1) + '\\"'
# Escape all quotes so that they are interpreted literally.
s = quote_replacer_regex.sub(_Replace, s)
# Now add unescaped quotes so that any whitespace is interpreted literally.
s = '"' + s + '"'
return s
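A runnable sketch of the same escaping (mirroring `_EscapeCommandLineArgumentForMSVS`, not the gyp function itself): each quote gains an odd number of preceding backslashes, and the whole argument is wrapped in unescaped quotes.

```python
import re

quote_replacer = re.compile(r'(\\*)"')

def escape_arg(s):
    # Double any backslashes that precede a quote, escape the quote
    # itself (2n+1 backslashes total), then wrap in quotes so that
    # whitespace is taken literally by CommandLineToArgv.
    def _replace(match):
        return 2 * match.group(1) + '\\"'
    return '"' + quote_replacer.sub(_replace, s) + '"'

print(escape_arg('say "hi"'))  # "say \"hi\""
```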
delimiters_replacer_regex = re.compile(r'(\\*)([,;]+)')
def _EscapeVCProjCommandLineArgListItem(s):
"""Escapes command line arguments for MSVS.
The VCProj format stores string lists in a single string using commas and
semi-colons as separators, which must be quoted if they are to be
interpreted literally. However, command-line arguments may already have
quotes, and the VCProj parser is ignorant of the backslash escaping
convention used by CommandLineToArgv, so the command-line quotes and the
VCProj quotes may not be the same quotes. So to store a general
command-line argument in a VCProj list, we need to parse the existing
quoting according to VCProj's convention and quote any delimiters that are
not already quoted by that convention. The quotes that we add will also be
seen by CommandLineToArgv, so if backslashes precede them then we also have
to escape those backslashes according to the CommandLineToArgv
convention.
Args:
s: the string to be escaped.
Returns:
the escaped string.
"""
def _Replace(match):
# For a non-literal quote, CommandLineToArgv requires an even number of
# backslashes preceding it, and it produces half as many literal
# backslashes. So we need to produce 2n backslashes.
return 2 * match.group(1) + '"' + match.group(2) + '"'
segments = s.split('"')
# The unquoted segments are at the even-numbered indices.
for i in range(0, len(segments), 2):
segments[i] = delimiters_replacer_regex.sub(_Replace, segments[i])
# Concatenate back into a single string
s = '"'.join(segments)
if len(segments) % 2 == 0:
# String ends while still quoted according to VCProj's convention. This
# means the delimiter and the next list item that follow this one in the
# .vcproj file will be misinterpreted as part of this item. There is nothing
# we can do about this. Adding an extra quote would correct the problem in
# the VCProj but cause the same problem on the final command-line. Moving
the item to the end of the list does work, but that's only possible if
# there's only one such item. Let's just warn the user.
print >> sys.stderr, ('Warning: MSVS may misinterpret the odd number of ' +
'quotes in ' + s)
return s
def _EscapeCppDefineForMSVS(s):
"""Escapes a CPP define so that it will reach the compiler unaltered."""
s = _EscapeEnvironmentVariableExpansion(s)
s = _EscapeCommandLineArgumentForMSVS(s)
s = _EscapeVCProjCommandLineArgListItem(s)
# cl.exe replaces literal # characters with = in preprocessor definitions for
# some reason. Octal-encode to work around that.
s = s.replace('#', '\\%03o' % ord('#'))
return s
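The octal workaround on the line above is easy to verify in isolation (`FOO#BAR` is just an illustrative define name):

```python
# Sketch of the '#' workaround in _EscapeCppDefineForMSVS: cl.exe mangles
# a literal '#' in /D definitions, so it is octal-encoded as \043.
encoded = 'FOO#BAR'.replace('#', '\\%03o' % ord('#'))
print(encoded)  # FOO\043BAR
```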
quote_replacer_regex2 = re.compile(r'(\\+)"')
def _EscapeCommandLineArgumentForMSBuild(s):
"""Escapes a Windows command-line argument for use by MSBuild."""
def _Replace(match):
return (len(match.group(1)) / 2 * 4) * '\\' + '\\"'
# Escape all quotes so that they are interpreted literally.
s = quote_replacer_regex2.sub(_Replace, s)
return s
def _EscapeMSBuildSpecialCharacters(s):
escape_dictionary = {
'%': '%25',
'$': '%24',
'@': '%40',
"'": '%27',
';': '%3B',
'?': '%3F',
'*': '%2A'
}
result = ''.join([escape_dictionary.get(c, c) for c in s])
return result
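The same percent-encoding table can be exercised standalone (mirroring `_EscapeMSBuildSpecialCharacters`; the input string is illustrative):

```python
def escape_msbuild(s):
    # Percent-encode MSBuild metacharacters, matching the mapping above.
    table = {'%': '%25', '$': '%24', '@': '%40', "'": '%27',
             ';': '%3B', '?': '%3F', '*': '%2A'}
    return ''.join(table.get(c, c) for c in s)

print(escape_msbuild("100% $(Var)"))  # 100%25 %24(Var)
```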
def _EscapeCppDefineForMSBuild(s):
"""Escapes a CPP define so that it will reach the compiler unaltered."""
s = _EscapeEnvironmentVariableExpansion(s)
s = _EscapeCommandLineArgumentForMSBuild(s)
s = _EscapeMSBuildSpecialCharacters(s)
# cl.exe replaces literal # characters with = in preprocessor definitions for
# some reason. Octal-encode to work around that.
s = s.replace('#', '\\%03o' % ord('#'))
return s
def _GenerateRulesForMSVS(p, output_dir, options, spec,
sources, excluded_sources,
actions_to_add):
"""Generate all the rules for a particular project.
Arguments:
p: the project
output_dir: directory to emit rules to
options: global options passed to the generator
spec: the specification for this project
sources: the set of all known source files in this project
excluded_sources: the set of sources excluded from normal processing
actions_to_add: deferred list of actions to add in
"""
rules = spec.get('rules', [])
rules_native = [r for r in rules if not int(r.get('msvs_external_rule', 0))]
rules_external = [r for r in rules if int(r.get('msvs_external_rule', 0))]
# Handle rules that use a native rules file.
if rules_native:
_GenerateNativeRulesForMSVS(p, rules_native, output_dir, spec, options)
# Handle external rules (non-native rules).
if rules_external:
_GenerateExternalRules(rules_external, output_dir, spec,
sources, options, actions_to_add)
_AdjustSourcesForRules(rules, sources, excluded_sources, False)
def _AdjustSourcesForRules(rules, sources, excluded_sources, is_msbuild):
# Add outputs generated by each rule (if applicable).
for rule in rules:
# Add in the outputs from this rule.
trigger_files = _FindRuleTriggerFiles(rule, sources)
for trigger_file in trigger_files:
# Remove trigger_file from excluded_sources to let the rule be triggered
# (e.g. rule trigger ax_enums.idl is added to excluded_sources
# because it's also in an action's inputs in the same project)
excluded_sources.discard(_FixPath(trigger_file))
# Done if not processing outputs as sources.
if int(rule.get('process_outputs_as_sources', False)):
inputs, outputs = _RuleInputsAndOutputs(rule, trigger_file)
inputs = OrderedSet(_FixPaths(inputs))
outputs = OrderedSet(_FixPaths(outputs))
inputs.remove(_FixPath(trigger_file))
sources.update(inputs)
if not is_msbuild:
excluded_sources.update(inputs)
sources.update(outputs)
def _FilterActionsFromExcluded(excluded_sources, actions_to_add):
"""Take inputs with actions attached out of the list of exclusions.
Arguments:
excluded_sources: list of source files not to be built.
actions_to_add: dict of actions keyed on source file they're attached to.
Returns:
excluded_sources with files that have actions attached removed.
"""
must_keep = OrderedSet(_FixPaths(actions_to_add.keys()))
return [s for s in excluded_sources if s not in must_keep]
def _GetDefaultConfiguration(spec):
return spec['configurations'][spec['default_configuration']]
def _GetGuidOfProject(proj_path, spec):
"""Get the guid for the project.
Arguments:
proj_path: Path of the vcproj or vcxproj file to generate.
spec: The target dictionary containing the properties of the target.
Returns:
the guid.
Raises:
ValueError: if the specified GUID is invalid.
"""
# Pluck out the default configuration.
default_config = _GetDefaultConfiguration(spec)
# Decide the guid of the project.
guid = default_config.get('msvs_guid')
if guid:
if VALID_MSVS_GUID_CHARS.match(guid) is None:
raise ValueError('Invalid MSVS guid: "%s". Must match regex: "%s".' %
(guid, VALID_MSVS_GUID_CHARS.pattern))
guid = '{%s}' % guid
guid = guid or MSVSNew.MakeGuid(proj_path)
return guid
def _GetMsbuildToolsetOfProject(proj_path, spec, version):
"""Get the platform toolset for the project.
Arguments:
proj_path: Path of the vcproj or vcxproj file to generate.
spec: The target dictionary containing the properties of the target.
version: The MSVSVersion object.
Returns:
the platform toolset string or None.
"""
# Pluck out the default configuration.
default_config = _GetDefaultConfiguration(spec)
toolset = default_config.get('msbuild_toolset')
if not toolset and version.DefaultToolset():
toolset = version.DefaultToolset()
return toolset
def _GenerateProject(project, options, version, generator_flags):
"""Generates a vcproj file.
Arguments:
project: the MSVSProject object.
options: global generator options.
version: the MSVSVersion object.
generator_flags: dict of generator-specific flags.
Returns:
A list of source files that cannot be found on disk.
"""
default_config = _GetDefaultConfiguration(project.spec)
  # Skip emitting anything if told to with the msvs_existing_vcproj option.
if default_config.get('msvs_existing_vcproj'):
return []
if version.UsesVcxproj():
return _GenerateMSBuildProject(project, options, version, generator_flags)
else:
return _GenerateMSVSProject(project, options, version, generator_flags)
# TODO: Avoid code duplication with _ValidateSourcesForOSX in make.py.
def _ValidateSourcesForMSVSProject(spec, version):
"""Makes sure if duplicate basenames are not specified in the source list.
Arguments:
spec: The target dictionary containing the properties of the target.
version: The VisualStudioVersion object.
"""
# This validation should not be applied to MSVC2010 and later.
assert not version.UsesVcxproj()
# TODO: Check if MSVC allows this for loadable_module targets.
if spec.get('type', None) not in ('static_library', 'shared_library'):
return
sources = spec.get('sources', [])
basenames = {}
for source in sources:
name, ext = os.path.splitext(source)
is_compiled_file = ext in [
'.c', '.cc', '.cpp', '.cxx', '.m', '.mm', '.s', '.S']
if not is_compiled_file:
continue
basename = os.path.basename(name) # Don't include extension.
basenames.setdefault(basename, []).append(source)
error = ''
for basename, files in basenames.iteritems():
if len(files) > 1:
error += ' %s: %s\n' % (basename, ' '.join(files))
if error:
print('static library %s has several files with the same basename:\n' %
spec['target_name'] + error + 'MSVC08 cannot handle that.')
raise GypError('Duplicate basenames in sources section, see list above')
def _GenerateMSVSProject(project, options, version, generator_flags):
"""Generates a .vcproj file. It may create .rules and .user files too.
Arguments:
project: The project object we will generate the file for.
options: Global options passed to the generator.
version: The VisualStudioVersion object.
generator_flags: dict of generator-specific flags.
"""
spec = project.spec
gyp.common.EnsureDirExists(project.path)
platforms = _GetUniquePlatforms(spec)
p = MSVSProject.Writer(project.path, version, spec['target_name'],
project.guid, platforms)
# Get directory project file is in.
project_dir = os.path.split(project.path)[0]
gyp_path = _NormalizedSource(project.build_file)
relative_path_of_gyp_file = gyp.common.RelativePath(gyp_path, project_dir)
config_type = _GetMSVSConfigurationType(spec, project.build_file)
for config_name, config in spec['configurations'].iteritems():
_AddConfigurationToMSVSProject(p, spec, config_type, config_name, config)
  # MSVC08 and earlier versions cannot handle duplicate basenames in the same
  # target.
# TODO: Take excluded sources into consideration if possible.
_ValidateSourcesForMSVSProject(spec, version)
# Prepare list of sources and excluded sources.
gyp_file = os.path.split(project.build_file)[1]
sources, excluded_sources = _PrepareListOfSources(spec, generator_flags,
gyp_file)
# Add rules.
actions_to_add = {}
_GenerateRulesForMSVS(p, project_dir, options, spec,
sources, excluded_sources,
actions_to_add)
list_excluded = generator_flags.get('msvs_list_excluded_files', True)
sources, excluded_sources, excluded_idl = (
_AdjustSourcesAndConvertToFilterHierarchy(spec, options, project_dir,
sources, excluded_sources,
list_excluded, version))
# Add in files.
missing_sources = _VerifySourcesExist(sources, project_dir)
p.AddFiles(sources)
_AddToolFilesToMSVS(p, spec)
_HandlePreCompiledHeaders(p, sources, spec)
_AddActions(actions_to_add, spec, relative_path_of_gyp_file)
_AddCopies(actions_to_add, spec)
_WriteMSVSUserFile(project.path, version, spec)
# NOTE: this stanza must appear after all actions have been decided.
  # Don't exclude sources with actions attached, or they won't run.
excluded_sources = _FilterActionsFromExcluded(
excluded_sources, actions_to_add)
_ExcludeFilesFromBeingBuilt(p, spec, excluded_sources, excluded_idl,
list_excluded)
_AddAccumulatedActionsToMSVS(p, spec, actions_to_add)
# Write it out.
p.WriteIfChanged()
return missing_sources
def _GetUniquePlatforms(spec):
"""Returns the list of unique platforms for this spec, e.g ['win32', ...].
Arguments:
spec: The target dictionary containing the properties of the target.
Returns:
    The list of unique platforms.
"""
# Gather list of unique platforms.
platforms = OrderedSet()
for configuration in spec['configurations']:
platforms.add(_ConfigPlatform(spec['configurations'][configuration]))
platforms = list(platforms)
return platforms
def _CreateMSVSUserFile(proj_path, version, spec):
"""Generates a .user file for the user running this Gyp program.
Arguments:
proj_path: The path of the project file being created. The .user file
shares the same path (with an appropriate suffix).
version: The VisualStudioVersion object.
spec: The target dictionary containing the properties of the target.
Returns:
The MSVSUserFile object created.
"""
(domain, username) = _GetDomainAndUserName()
vcuser_filename = '.'.join([proj_path, domain, username, 'user'])
user_file = MSVSUserFile.Writer(vcuser_filename, version,
spec['target_name'])
return user_file
def _GetMSVSConfigurationType(spec, build_file):
"""Returns the configuration type for this project.
It's a number defined by Microsoft. May raise an exception.
Args:
spec: The target dictionary containing the properties of the target.
build_file: The path of the gyp file.
Returns:
An integer, the configuration type.
"""
try:
config_type = {
'executable': '1', # .exe
'shared_library': '2', # .dll
'loadable_module': '2', # .dll
'static_library': '4', # .lib
'none': '10', # Utility type
}[spec['type']]
except KeyError:
if spec.get('type'):
raise GypError('Target type %s is not a valid target type for '
'target %s in %s.' %
(spec['type'], spec['target_name'], build_file))
else:
raise GypError('Missing type field for target %s in %s.' %
(spec['target_name'], build_file))
return config_type
def _AddConfigurationToMSVSProject(p, spec, config_type, config_name, config):
"""Adds a configuration to the MSVS project.
  Many settings in a vcproj file are specific to a configuration. This
  function sets up the configuration-specific part of the vcproj file.
Arguments:
p: The target project being generated.
spec: The target dictionary containing the properties of the target.
config_type: The configuration type, a number as defined by Microsoft.
config_name: The name of the configuration.
config: The dictionary that defines the special processing to be done
for this configuration.
"""
# Get the information for this configuration
include_dirs, midl_include_dirs, resource_include_dirs = \
_GetIncludeDirs(config)
libraries = _GetLibraries(spec)
library_dirs = _GetLibraryDirs(config)
out_file, vc_tool, _ = _GetOutputFilePathAndTool(spec, msbuild=False)
defines = _GetDefines(config)
defines = [_EscapeCppDefineForMSVS(d) for d in defines]
disabled_warnings = _GetDisabledWarnings(config)
prebuild = config.get('msvs_prebuild')
postbuild = config.get('msvs_postbuild')
def_file = _GetModuleDefinition(spec)
precompiled_header = config.get('msvs_precompiled_header')
# Prepare the list of tools as a dictionary.
tools = dict()
# Add in user specified msvs_settings.
msvs_settings = config.get('msvs_settings', {})
MSVSSettings.ValidateMSVSSettings(msvs_settings)
# Prevent default library inheritance from the environment.
_ToolAppend(tools, 'VCLinkerTool', 'AdditionalDependencies', ['$(NOINHERIT)'])
for tool in msvs_settings:
settings = config['msvs_settings'][tool]
for setting in settings:
_ToolAppend(tools, tool, setting, settings[setting])
# Add the information to the appropriate tool
_ToolAppend(tools, 'VCCLCompilerTool',
'AdditionalIncludeDirectories', include_dirs)
_ToolAppend(tools, 'VCMIDLTool',
'AdditionalIncludeDirectories', midl_include_dirs)
_ToolAppend(tools, 'VCResourceCompilerTool',
'AdditionalIncludeDirectories', resource_include_dirs)
# Add in libraries.
_ToolAppend(tools, 'VCLinkerTool', 'AdditionalDependencies', libraries)
_ToolAppend(tools, 'VCLinkerTool', 'AdditionalLibraryDirectories',
library_dirs)
if out_file:
_ToolAppend(tools, vc_tool, 'OutputFile', out_file, only_if_unset=True)
# Add defines.
_ToolAppend(tools, 'VCCLCompilerTool', 'PreprocessorDefinitions', defines)
_ToolAppend(tools, 'VCResourceCompilerTool', 'PreprocessorDefinitions',
defines)
# Change program database directory to prevent collisions.
_ToolAppend(tools, 'VCCLCompilerTool', 'ProgramDataBaseFileName',
'$(IntDir)$(ProjectName)\\vc80.pdb', only_if_unset=True)
# Add disabled warnings.
_ToolAppend(tools, 'VCCLCompilerTool',
'DisableSpecificWarnings', disabled_warnings)
# Add Pre-build.
_ToolAppend(tools, 'VCPreBuildEventTool', 'CommandLine', prebuild)
# Add Post-build.
_ToolAppend(tools, 'VCPostBuildEventTool', 'CommandLine', postbuild)
# Turn on precompiled headers if appropriate.
if precompiled_header:
precompiled_header = os.path.split(precompiled_header)[1]
_ToolAppend(tools, 'VCCLCompilerTool', 'UsePrecompiledHeader', '2')
_ToolAppend(tools, 'VCCLCompilerTool',
'PrecompiledHeaderThrough', precompiled_header)
_ToolAppend(tools, 'VCCLCompilerTool',
'ForcedIncludeFiles', precompiled_header)
# Loadable modules don't generate import libraries;
# tell dependent projects to not expect one.
if spec['type'] == 'loadable_module':
_ToolAppend(tools, 'VCLinkerTool', 'IgnoreImportLibrary', 'true')
# Set the module definition file if any.
if def_file:
_ToolAppend(tools, 'VCLinkerTool', 'ModuleDefinitionFile', def_file)
_AddConfigurationToMSVS(p, spec, tools, config, config_type, config_name)
def _GetIncludeDirs(config):
"""Returns the list of directories to be used for #include directives.
Arguments:
config: The dictionary that defines the special processing to be done
for this configuration.
Returns:
The list of directory paths.
"""
# TODO(bradnelson): include_dirs should really be flexible enough not to
# require this sort of thing.
include_dirs = (
config.get('include_dirs', []) +
config.get('msvs_system_include_dirs', []))
midl_include_dirs = (
config.get('midl_include_dirs', []) +
config.get('msvs_system_include_dirs', []))
resource_include_dirs = config.get('resource_include_dirs', include_dirs)
include_dirs = _FixPaths(include_dirs)
midl_include_dirs = _FixPaths(midl_include_dirs)
resource_include_dirs = _FixPaths(resource_include_dirs)
return include_dirs, midl_include_dirs, resource_include_dirs
def _GetLibraryDirs(config):
"""Returns the list of directories to be used for library search paths.
Arguments:
config: The dictionary that defines the special processing to be done
for this configuration.
Returns:
The list of directory paths.
"""
library_dirs = config.get('library_dirs', [])
library_dirs = _FixPaths(library_dirs)
return library_dirs
def _GetLibraries(spec):
"""Returns the list of libraries for this configuration.
Arguments:
spec: The target dictionary containing the properties of the target.
Returns:
    The list of libraries.
"""
libraries = spec.get('libraries', [])
# Strip out -l, as it is not used on windows (but is needed so we can pass
# in libraries that are assumed to be in the default library path).
# Also remove duplicate entries, leaving only the last duplicate, while
# preserving order.
found = OrderedSet()
unique_libraries_list = []
for entry in reversed(libraries):
library = re.sub(r'^\-l', '', entry)
if not os.path.splitext(library)[1]:
library += '.lib'
if library not in found:
found.add(library)
unique_libraries_list.append(library)
unique_libraries_list.reverse()
return unique_libraries_list
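
# For illustration: '-l' prefixes are stripped, a '.lib' extension is added
# when missing, and only the last duplicate survives (order preserved), e.g.
#   _GetLibraries({'libraries': ['-lfoo', 'bar.lib', 'foo']})
#   == ['bar.lib', 'foo.lib']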
def _GetOutputFilePathAndTool(spec, msbuild):
"""Returns the path and tool to use for this target.
Figures out the path of the file this spec will create and the name of
the VC tool that will create it.
  Arguments:
    spec: The target dictionary containing the properties of the target.
    msbuild: True when generating an MSBuild (vcxproj) project.
Returns:
A triple of (file path, name of the vc tool, name of the msbuild tool)
"""
# Select a name for the output file.
out_file = ''
vc_tool = ''
msbuild_tool = ''
output_file_map = {
'executable': ('VCLinkerTool', 'Link', '$(OutDir)', '.exe'),
'shared_library': ('VCLinkerTool', 'Link', '$(OutDir)', '.dll'),
'loadable_module': ('VCLinkerTool', 'Link', '$(OutDir)', '.dll'),
'static_library': ('VCLibrarianTool', 'Lib', '$(OutDir)lib\\', '.lib'),
}
output_file_props = output_file_map.get(spec['type'])
if output_file_props and int(spec.get('msvs_auto_output_file', 1)):
vc_tool, msbuild_tool, out_dir, suffix = output_file_props
if spec.get('standalone_static_library', 0):
out_dir = '$(OutDir)'
out_dir = spec.get('product_dir', out_dir)
product_extension = spec.get('product_extension')
if product_extension:
suffix = '.' + product_extension
elif msbuild:
suffix = '$(TargetExt)'
prefix = spec.get('product_prefix', '')
product_name = spec.get('product_name', '$(ProjectName)')
out_file = ntpath.join(out_dir, prefix + product_name + suffix)
return out_file, vc_tool, msbuild_tool
def _GetOutputTargetExt(spec):
"""Returns the extension for this target, including the dot
If product_extension is specified, set target_extension to this to avoid
MSB8012, returns None otherwise. Ignores any target_extension settings in
the input files.
Arguments:
spec: The target dictionary containing the properties of the target.
Returns:
A string with the extension, or None
"""
target_extension = spec.get('product_extension')
if target_extension:
return '.' + target_extension
return None
def _GetDefines(config):
"""Returns the list of preprocessor definitions for this configuation.
Arguments:
config: The dictionary that defines the special processing to be done
for this configuration.
Returns:
The list of preprocessor definitions.
"""
defines = []
for d in config.get('defines', []):
if type(d) == list:
fd = '='.join([str(dpart) for dpart in d])
else:
fd = str(d)
defines.append(fd)
return defines
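
# For illustration: list-valued defines become NAME=VALUE pairs, e.g.
#   _GetDefines({'defines': ['NDEBUG', ['VERSION', 3]]})
#   == ['NDEBUG', 'VERSION=3']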
def _GetDisabledWarnings(config):
return [str(i) for i in config.get('msvs_disabled_warnings', [])]
def _GetModuleDefinition(spec):
def_file = ''
if spec['type'] in ['shared_library', 'loadable_module', 'executable']:
def_files = [s for s in spec.get('sources', []) if s.endswith('.def')]
if len(def_files) == 1:
def_file = _FixPath(def_files[0])
elif def_files:
raise ValueError(
'Multiple module definition files in one target, target %s lists '
'multiple .def files: %s' % (
spec['target_name'], ' '.join(def_files)))
return def_file
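
# For illustration (hypothetical spec): a single .def file among the sources
# becomes the module definition file, e.g.
#   _GetModuleDefinition({'type': 'shared_library',
#                         'sources': ['a.cc', 'exports.def']})
#   == _FixPath('exports.def')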
def _ConvertToolsToExpectedForm(tools):
"""Convert tools to a form expected by Visual Studio.
Arguments:
tools: A dictionary of settings; the tool name is the key.
Returns:
A list of Tool objects.
"""
tool_list = []
for tool, settings in tools.iteritems():
# Collapse settings with lists.
settings_fixed = {}
for setting, value in settings.iteritems():
if type(value) == list:
if ((tool == 'VCLinkerTool' and
setting == 'AdditionalDependencies') or
setting == 'AdditionalOptions'):
settings_fixed[setting] = ' '.join(value)
else:
settings_fixed[setting] = ';'.join(value)
else:
settings_fixed[setting] = value
# Add in this tool.
tool_list.append(MSVSProject.Tool(tool, settings_fixed))
return tool_list
def _AddConfigurationToMSVS(p, spec, tools, config, config_type, config_name):
"""Add to the project file the configuration specified by config.
Arguments:
p: The target project being generated.
spec: the target project dict.
tools: A dictionary of settings; the tool name is the key.
config: The dictionary that defines the special processing to be done
for this configuration.
config_type: The configuration type, a number as defined by Microsoft.
config_name: The name of the configuration.
"""
attributes = _GetMSVSAttributes(spec, config, config_type)
# Add in this configuration.
tool_list = _ConvertToolsToExpectedForm(tools)
p.AddConfig(_ConfigFullName(config_name, config),
attrs=attributes, tools=tool_list)
def _GetMSVSAttributes(spec, config, config_type):
# Prepare configuration attributes.
prepared_attrs = {}
source_attrs = config.get('msvs_configuration_attributes', {})
for a in source_attrs:
prepared_attrs[a] = source_attrs[a]
# Add props files.
vsprops_dirs = config.get('msvs_props', [])
vsprops_dirs = _FixPaths(vsprops_dirs)
if vsprops_dirs:
prepared_attrs['InheritedPropertySheets'] = ';'.join(vsprops_dirs)
# Set configuration type.
prepared_attrs['ConfigurationType'] = config_type
output_dir = prepared_attrs.get('OutputDirectory',
'$(SolutionDir)$(ConfigurationName)')
prepared_attrs['OutputDirectory'] = _FixPath(output_dir) + '\\'
if 'IntermediateDirectory' not in prepared_attrs:
intermediate = '$(ConfigurationName)\\obj\\$(ProjectName)'
prepared_attrs['IntermediateDirectory'] = _FixPath(intermediate) + '\\'
else:
intermediate = _FixPath(prepared_attrs['IntermediateDirectory']) + '\\'
intermediate = MSVSSettings.FixVCMacroSlashes(intermediate)
prepared_attrs['IntermediateDirectory'] = intermediate
return prepared_attrs
def _AddNormalizedSources(sources_set, sources_array):
sources_set.update(_NormalizedSource(s) for s in sources_array)
def _PrepareListOfSources(spec, generator_flags, gyp_file):
"""Prepare list of sources and excluded sources.
Besides the sources specified directly in the spec, adds the gyp file so
that a change to it will cause a re-compile. Also adds appropriate sources
for actions and copies. Assumes later stage will un-exclude files which
have custom build steps attached.
  Arguments:
    spec: The target dictionary containing the properties of the target.
    generator_flags: dict of generator-specific flags.
    gyp_file: The name of the gyp file.
Returns:
A pair of (list of sources, list of excluded sources).
The sources will be relative to the gyp file.
"""
sources = OrderedSet()
_AddNormalizedSources(sources, spec.get('sources', []))
excluded_sources = OrderedSet()
# Add in the gyp file.
if not generator_flags.get('standalone'):
sources.add(gyp_file)
# Add in 'action' inputs and outputs.
for a in spec.get('actions', []):
inputs = a['inputs']
inputs = [_NormalizedSource(i) for i in inputs]
# Add all inputs to sources and excluded sources.
inputs = OrderedSet(inputs)
sources.update(inputs)
if not spec.get('msvs_external_builder'):
excluded_sources.update(inputs)
if int(a.get('process_outputs_as_sources', False)):
_AddNormalizedSources(sources, a.get('outputs', []))
# Add in 'copies' inputs and outputs.
for cpy in spec.get('copies', []):
_AddNormalizedSources(sources, cpy.get('files', []))
return (sources, excluded_sources)
def _AdjustSourcesAndConvertToFilterHierarchy(
spec, options, gyp_dir, sources, excluded_sources, list_excluded, version):
"""Adjusts the list of sources and excluded sources.
Also converts the sets to lists.
Arguments:
spec: The target dictionary containing the properties of the target.
options: Global generator options.
gyp_dir: The path to the gyp file being processed.
sources: A set of sources to be included for this project.
    excluded_sources: A set of sources to be excluded for this project.
    list_excluded: Whether excluded files should be listed in the project.
    version: A MSVSVersion object.
Returns:
A trio of (list of sources, list of excluded sources,
path of excluded IDL file)
"""
# Exclude excluded sources coming into the generator.
excluded_sources.update(OrderedSet(spec.get('sources_excluded', [])))
# Add excluded sources into sources for good measure.
sources.update(excluded_sources)
# Convert to proper windows form.
# NOTE: sources goes from being a set to a list here.
# NOTE: excluded_sources goes from being a set to a list here.
sources = _FixPaths(sources)
# Convert to proper windows form.
excluded_sources = _FixPaths(excluded_sources)
excluded_idl = _IdlFilesHandledNonNatively(spec, sources)
precompiled_related = _GetPrecompileRelatedFiles(spec)
# Find the excluded ones, minus the precompiled header related ones.
fully_excluded = [i for i in excluded_sources if i not in precompiled_related]
# Convert to folders and the right slashes.
sources = [i.split('\\') for i in sources]
sources = _ConvertSourcesToFilterHierarchy(sources, excluded=fully_excluded,
list_excluded=list_excluded,
msvs_version=version)
# Prune filters with a single child to flatten ugly directory structures
# such as ../../src/modules/module1 etc.
if version.UsesVcxproj():
while all([isinstance(s, MSVSProject.Filter) for s in sources]) \
and len(set([s.name for s in sources])) == 1:
assert all([len(s.contents) == 1 for s in sources])
sources = [s.contents[0] for s in sources]
else:
while len(sources) == 1 and isinstance(sources[0], MSVSProject.Filter):
sources = sources[0].contents
return sources, excluded_sources, excluded_idl
def _IdlFilesHandledNonNatively(spec, sources):
  # If any non-native rules use 'idl' as an extension, exclude idl files.
# Gather a list here to use later.
using_idl = False
for rule in spec.get('rules', []):
if rule['extension'] == 'idl' and int(rule.get('msvs_external_rule', 0)):
using_idl = True
break
if using_idl:
excluded_idl = [i for i in sources if i.endswith('.idl')]
else:
excluded_idl = []
return excluded_idl
def _GetPrecompileRelatedFiles(spec):
# Gather a list of precompiled header related sources.
precompiled_related = []
for _, config in spec['configurations'].iteritems():
for k in precomp_keys:
f = config.get(k)
if f:
precompiled_related.append(_FixPath(f))
return precompiled_related
def _ExcludeFilesFromBeingBuilt(p, spec, excluded_sources, excluded_idl,
list_excluded):
exclusions = _GetExcludedFilesFromBuild(spec, excluded_sources, excluded_idl)
for file_name, excluded_configs in exclusions.iteritems():
if (not list_excluded and
len(excluded_configs) == len(spec['configurations'])):
# If we're not listing excluded files, then they won't appear in the
# project, so don't try to configure them to be excluded.
pass
else:
for config_name, config in excluded_configs:
p.AddFileConfig(file_name, _ConfigFullName(config_name, config),
{'ExcludedFromBuild': 'true'})
def _GetExcludedFilesFromBuild(spec, excluded_sources, excluded_idl):
exclusions = {}
# Exclude excluded sources from being built.
for f in excluded_sources:
excluded_configs = []
for config_name, config in spec['configurations'].iteritems():
precomped = [_FixPath(config.get(i, '')) for i in precomp_keys]
# Don't do this for ones that are precompiled header related.
if f not in precomped:
excluded_configs.append((config_name, config))
exclusions[f] = excluded_configs
  # If any non-native rules use 'idl' as an extension, exclude idl files.
# Exclude them now.
for f in excluded_idl:
excluded_configs = []
for config_name, config in spec['configurations'].iteritems():
excluded_configs.append((config_name, config))
exclusions[f] = excluded_configs
return exclusions
def _AddToolFilesToMSVS(p, spec):
# Add in tool files (rules).
tool_files = OrderedSet()
for _, config in spec['configurations'].iteritems():
for f in config.get('msvs_tool_files', []):
tool_files.add(f)
for f in tool_files:
p.AddToolFile(f)
def _HandlePreCompiledHeaders(p, sources, spec):
# Pre-compiled header source stubs need a different compiler flag
# (generate precompiled header) and any source file not of the same
# kind (i.e. C vs. C++) as the precompiled header source stub needs
# to have use of precompiled headers disabled.
extensions_excluded_from_precompile = []
for config_name, config in spec['configurations'].iteritems():
source = config.get('msvs_precompiled_source')
if source:
source = _FixPath(source)
      # UsePrecompiledHeader=1 if using precompiled headers.
tool = MSVSProject.Tool('VCCLCompilerTool',
{'UsePrecompiledHeader': '1'})
p.AddFileConfig(source, _ConfigFullName(config_name, config),
{}, tools=[tool])
basename, extension = os.path.splitext(source)
if extension == '.c':
extensions_excluded_from_precompile = ['.cc', '.cpp', '.cxx']
else:
extensions_excluded_from_precompile = ['.c']
def DisableForSourceTree(source_tree):
for source in source_tree:
if isinstance(source, MSVSProject.Filter):
DisableForSourceTree(source.contents)
else:
basename, extension = os.path.splitext(source)
if extension in extensions_excluded_from_precompile:
for config_name, config in spec['configurations'].iteritems():
tool = MSVSProject.Tool('VCCLCompilerTool',
{'UsePrecompiledHeader': '0',
'ForcedIncludeFiles': '$(NOINHERIT)'})
p.AddFileConfig(_FixPath(source),
_ConfigFullName(config_name, config),
{}, tools=[tool])
# Do nothing if there was no precompiled source.
if extensions_excluded_from_precompile:
DisableForSourceTree(sources)
def _AddActions(actions_to_add, spec, relative_path_of_gyp_file):
# Add actions.
actions = spec.get('actions', [])
# Don't setup_env every time. When all the actions are run together in one
# batch file in VS, the PATH will grow too long.
# Membership in this set means that the cygwin environment has been set up,
# and does not need to be set up again.
have_setup_env = set()
for a in actions:
# Attach actions to the gyp file if nothing else is there.
inputs = a.get('inputs') or [relative_path_of_gyp_file]
attached_to = inputs[0]
need_setup_env = attached_to not in have_setup_env
cmd = _BuildCommandLineForRule(spec, a, has_input_path=False,
do_setup_env=need_setup_env)
have_setup_env.add(attached_to)
# Add the action.
_AddActionStep(actions_to_add,
inputs=inputs,
outputs=a.get('outputs', []),
description=a.get('message', a['action_name']),
command=cmd)
def _WriteMSVSUserFile(project_path, version, spec):
# Add run_as and test targets.
if 'run_as' in spec:
run_as = spec['run_as']
action = run_as.get('action', [])
environment = run_as.get('environment', [])
working_directory = run_as.get('working_directory', '.')
elif int(spec.get('test', 0)):
action = ['$(TargetPath)', '--gtest_print_time']
environment = []
working_directory = '.'
else:
return # Nothing to add
# Write out the user file.
user_file = _CreateMSVSUserFile(project_path, version, spec)
for config_name, c_data in spec['configurations'].iteritems():
user_file.AddDebugSettings(_ConfigFullName(config_name, c_data),
action, environment, working_directory)
user_file.WriteIfChanged()
def _AddCopies(actions_to_add, spec):
copies = _GetCopies(spec)
for inputs, outputs, cmd, description in copies:
_AddActionStep(actions_to_add, inputs=inputs, outputs=outputs,
description=description, command=cmd)
def _GetCopies(spec):
copies = []
# Add copies.
for cpy in spec.get('copies', []):
for src in cpy.get('files', []):
dst = os.path.join(cpy['destination'], os.path.basename(src))
# _AddCustomBuildToolForMSVS() will call _FixPath() on the inputs and
# outputs, so do the same for our generated command line.
if src.endswith('/'):
src_bare = src[:-1]
base_dir = posixpath.split(src_bare)[0]
outer_dir = posixpath.split(src_bare)[1]
cmd = 'cd "%s" && xcopy /e /f /y "%s" "%s\\%s\\"' % (
_FixPath(base_dir), outer_dir, _FixPath(dst), outer_dir)
copies.append(([src], ['dummy_copies', dst], cmd,
'Copying %s to %s' % (src, dst)))
else:
cmd = 'mkdir "%s" 2>nul & set ERRORLEVEL=0 & copy /Y "%s" "%s"' % (
_FixPath(cpy['destination']), _FixPath(src), _FixPath(dst))
copies.append(([src], [dst], cmd, 'Copying %s to %s' % (src, dst)))
return copies
def _GetPathDict(root, path):
# |path| will eventually be empty (in the recursive calls) if it was initially
# relative; otherwise it will eventually end up as '\', 'D:\', etc.
if not path or path.endswith(os.sep):
return root
parent, folder = os.path.split(path)
parent_dict = _GetPathDict(root, parent)
if folder not in parent_dict:
parent_dict[folder] = dict()
return parent_dict[folder]
def _DictsToFolders(base_path, bucket, flat):
# Convert to folders recursively.
children = []
for folder, contents in bucket.iteritems():
if type(contents) == dict:
folder_children = _DictsToFolders(os.path.join(base_path, folder),
contents, flat)
if flat:
children += folder_children
else:
folder_children = MSVSNew.MSVSFolder(os.path.join(base_path, folder),
name='(' + folder + ')',
entries=folder_children)
children.append(folder_children)
else:
children.append(contents)
return children
def _CollapseSingles(parent, node):
  # Recursively explore the tree of dicts looking for projects which are
# the sole item in a folder which has the same name as the project. Bring
# such projects up one level.
if (type(node) == dict and
len(node) == 1 and
node.keys()[0] == parent + '.vcproj'):
return node[node.keys()[0]]
if type(node) != dict:
return node
for child in node:
node[child] = _CollapseSingles(child, node[child])
return node
def _GatherSolutionFolders(sln_projects, project_objects, flat):
root = {}
# Convert into a tree of dicts on path.
for p in sln_projects:
gyp_file, target = gyp.common.ParseQualifiedTarget(p)[0:2]
gyp_dir = os.path.dirname(gyp_file)
path_dict = _GetPathDict(root, gyp_dir)
path_dict[target + '.vcproj'] = project_objects[p]
# Walk down from the top until we hit a folder that has more than one entry.
# In practice, this strips the top-level "src/" dir from the hierarchy in
# the solution.
while len(root) == 1 and type(root[root.keys()[0]]) == dict:
root = root[root.keys()[0]]
# Collapse singles.
root = _CollapseSingles('', root)
# Merge buckets until everything is a root entry.
return _DictsToFolders('', root, flat)
def _GetPathOfProject(qualified_target, spec, options, msvs_version):
default_config = _GetDefaultConfiguration(spec)
proj_filename = default_config.get('msvs_existing_vcproj')
if not proj_filename:
proj_filename = (spec['target_name'] + options.suffix +
msvs_version.ProjectExtension())
build_file = gyp.common.BuildFile(qualified_target)
proj_path = os.path.join(os.path.dirname(build_file), proj_filename)
fix_prefix = None
if options.generator_output:
project_dir_path = os.path.dirname(os.path.abspath(proj_path))
proj_path = os.path.join(options.generator_output, proj_path)
fix_prefix = gyp.common.RelativePath(project_dir_path,
os.path.dirname(proj_path))
return proj_path, fix_prefix
def _GetPlatformOverridesOfProject(spec):
# Prepare a dict indicating which project configurations are used for which
# solution configurations for this target.
config_platform_overrides = {}
for config_name, c in spec['configurations'].iteritems():
config_fullname = _ConfigFullName(config_name, c)
platform = c.get('msvs_target_platform', _ConfigPlatform(c))
fixed_config_fullname = '%s|%s' % (
_ConfigBaseName(config_name, _ConfigPlatform(c)), platform)
config_platform_overrides[config_fullname] = fixed_config_fullname
return config_platform_overrides
def _CreateProjectObjects(target_list, target_dicts, options, msvs_version):
"""Create a MSVSProject object for the targets found in target list.
Arguments:
target_list: the list of targets to generate project objects for.
target_dicts: the dictionary of specifications.
options: global generator options.
msvs_version: the MSVSVersion object.
  Returns:
    A dictionary of created projects, keyed by qualified target.
"""
global fixpath_prefix
# Generate each project.
projects = {}
for qualified_target in target_list:
spec = target_dicts[qualified_target]
if spec['toolset'] != 'target':
raise GypError(
'Multiple toolsets not supported in msvs build (target %s)' %
qualified_target)
proj_path, fixpath_prefix = _GetPathOfProject(qualified_target, spec,
options, msvs_version)
guid = _GetGuidOfProject(proj_path, spec)
overrides = _GetPlatformOverridesOfProject(spec)
build_file = gyp.common.BuildFile(qualified_target)
# Create object for this project.
obj = MSVSNew.MSVSProject(
proj_path,
name=spec['target_name'],
guid=guid,
spec=spec,
build_file=build_file,
config_platform_overrides=overrides,
fixpath_prefix=fixpath_prefix)
    # Set the project toolset, if any (MSBuild only).
if msvs_version.UsesVcxproj():
obj.set_msbuild_toolset(
_GetMsbuildToolsetOfProject(proj_path, spec, msvs_version))
projects[qualified_target] = obj
  # Set all the dependencies, but not if we are using an external builder like
  # ninja.
for project in projects.values():
if not project.spec.get('msvs_external_builder'):
deps = project.spec.get('dependencies', [])
deps = [projects[d] for d in deps]
project.set_dependencies(deps)
return projects
def _InitNinjaFlavor(params, target_list, target_dicts):
"""Initialize targets for the ninja flavor.
This sets up the necessary variables in the targets to generate msvs projects
that use ninja as an external builder. The variables in the spec are only set
if they have not been set. This allows individual specs to override the
default values initialized here.
Arguments:
params: Params provided to the generator.
target_list: List of target pairs: 'base/base.gyp:base'.
target_dicts: Dict of target properties keyed on target pair.
"""
for qualified_target in target_list:
spec = target_dicts[qualified_target]
if spec.get('msvs_external_builder'):
# The spec explicitly defined an external builder, so don't change it.
continue
path_to_ninja = spec.get('msvs_path_to_ninja', 'ninja.exe')
spec['msvs_external_builder'] = 'ninja'
if not spec.get('msvs_external_builder_out_dir'):
gyp_file, _, _ = gyp.common.ParseQualifiedTarget(qualified_target)
gyp_dir = os.path.dirname(gyp_file)
configuration = '$(Configuration)'
if params.get('target_arch') == 'x64':
configuration += '_x64'
spec['msvs_external_builder_out_dir'] = os.path.join(
gyp.common.RelativePath(params['options'].toplevel_dir, gyp_dir),
ninja_generator.ComputeOutputDir(params),
configuration)
if not spec.get('msvs_external_builder_build_cmd'):
spec['msvs_external_builder_build_cmd'] = [
path_to_ninja,
'-C',
'$(OutDir)',
'$(ProjectName)',
]
if not spec.get('msvs_external_builder_clean_cmd'):
spec['msvs_external_builder_clean_cmd'] = [
path_to_ninja,
'-C',
'$(OutDir)',
'-tclean',
'$(ProjectName)',
]
def CalculateVariables(default_variables, params):
"""Generated variables that require params to be known."""
generator_flags = params.get('generator_flags', {})
# Select project file format version (if unset, default to auto detecting).
msvs_version = MSVSVersion.SelectVisualStudioVersion(
generator_flags.get('msvs_version', 'auto'))
# Stash msvs_version for later (so we don't have to probe the system twice).
params['msvs_version'] = msvs_version
# Set a variable so conditions can be based on msvs_version.
default_variables['MSVS_VERSION'] = msvs_version.ShortName()
  # To determine the processor word size on Windows, in addition to checking
  # PROCESSOR_ARCHITECTURE (which reflects the word size of the current
  # process), it is also necessary to check PROCESSOR_ARCHITEW6432 (which
  # contains the actual word size of the system when running through WOW64).
if (os.environ.get('PROCESSOR_ARCHITECTURE', '').find('64') >= 0 or
os.environ.get('PROCESSOR_ARCHITEW6432', '').find('64') >= 0):
default_variables['MSVS_OS_BITS'] = 64
else:
default_variables['MSVS_OS_BITS'] = 32
if gyp.common.GetFlavor(params) == 'ninja':
default_variables['SHARED_INTERMEDIATE_DIR'] = '$(OutDir)gen'
def PerformBuild(data, configurations, params):
options = params['options']
msvs_version = params['msvs_version']
devenv = os.path.join(msvs_version.path, 'Common7', 'IDE', 'devenv.com')
for build_file, build_file_dict in data.iteritems():
(build_file_root, build_file_ext) = os.path.splitext(build_file)
if build_file_ext != '.gyp':
continue
sln_path = build_file_root + options.suffix + '.sln'
if options.generator_output:
sln_path = os.path.join(options.generator_output, sln_path)
for config in configurations:
arguments = [devenv, sln_path, '/Build', config]
print 'Building [%s]: %s' % (config, arguments)
      subprocess.check_call(arguments)
def GenerateOutput(target_list, target_dicts, data, params):
"""Generate .sln and .vcproj files.
This is the entry point for this generator.
Arguments:
target_list: List of target pairs: 'base/base.gyp:base'.
target_dicts: Dict of target properties keyed on target pair.
data: Dictionary containing per .gyp data.
    params: Dictionary of global generator parameters.
"""
global fixpath_prefix
options = params['options']
  # Get the project file format version back out of where we stashed it in
  # CalculateVariables.
msvs_version = params['msvs_version']
generator_flags = params.get('generator_flags', {})
# Optionally shard targets marked with 'msvs_shard': SHARD_COUNT.
(target_list, target_dicts) = MSVSUtil.ShardTargets(target_list, target_dicts)
# Optionally use the large PDB workaround for targets marked with
# 'msvs_large_pdb': 1.
(target_list, target_dicts) = MSVSUtil.InsertLargePdbShims(
target_list, target_dicts, generator_default_variables)
# Optionally configure each spec to use ninja as the external builder.
if params.get('flavor') == 'ninja':
_InitNinjaFlavor(params, target_list, target_dicts)
# Prepare the set of configurations.
configs = set()
for qualified_target in target_list:
spec = target_dicts[qualified_target]
for config_name, config in spec['configurations'].iteritems():
configs.add(_ConfigFullName(config_name, config))
configs = list(configs)
  # Figure out all the projects that will be generated and their GUIDs.
project_objects = _CreateProjectObjects(target_list, target_dicts, options,
msvs_version)
# Generate each project.
missing_sources = []
for project in project_objects.values():
fixpath_prefix = project.fixpath_prefix
missing_sources.extend(_GenerateProject(project, options, msvs_version,
generator_flags))
fixpath_prefix = None
for build_file in data:
# Validate build_file extension
if not build_file.endswith('.gyp'):
continue
sln_path = os.path.splitext(build_file)[0] + options.suffix + '.sln'
if options.generator_output:
sln_path = os.path.join(options.generator_output, sln_path)
# Get projects in the solution, and their dependents.
sln_projects = gyp.common.BuildFileTargets(target_list, build_file)
sln_projects += gyp.common.DeepDependencyTargets(target_dicts, sln_projects)
# Create folder hierarchy.
root_entries = _GatherSolutionFolders(
sln_projects, project_objects, flat=msvs_version.FlatSolution())
# Create solution.
sln = MSVSNew.MSVSSolution(sln_path,
entries=root_entries,
variants=configs,
websiteProperties=False,
version=msvs_version)
sln.Write()
if missing_sources:
error_message = "Missing input files:\n" + \
'\n'.join(set(missing_sources))
if generator_flags.get('msvs_error_on_missing_sources', False):
raise GypError(error_message)
else:
print >> sys.stdout, "Warning: " + error_message
def _GenerateMSBuildFiltersFile(filters_path, source_files,
rule_dependencies, extension_to_rule_name):
"""Generate the filters file.
This file is used by Visual Studio to organize the presentation of source
files into folders.
  Arguments:
    filters_path: The path of the file to be created.
    source_files: The hierarchical structure of all the sources.
    rule_dependencies: The set of files that rules depend on.
    extension_to_rule_name: A dictionary mapping file extensions to rules.
"""
filter_group = []
source_group = []
_AppendFiltersForMSBuild('', source_files, rule_dependencies,
extension_to_rule_name, filter_group, source_group)
if filter_group:
content = ['Project',
{'ToolsVersion': '4.0',
'xmlns': 'http://schemas.microsoft.com/developer/msbuild/2003'
},
['ItemGroup'] + filter_group,
['ItemGroup'] + source_group
]
easy_xml.WriteXmlIfChanged(content, filters_path, pretty=True, win32=True)
elif os.path.exists(filters_path):
# We don't need this filter anymore. Delete the old filter file.
os.unlink(filters_path)
def _AppendFiltersForMSBuild(parent_filter_name, sources, rule_dependencies,
extension_to_rule_name,
filter_group, source_group):
"""Creates the list of filters and sources to be added in the filter file.
  Args:
    parent_filter_name: The name of the filter under which the sources are
        found.
    sources: The hierarchy of filters and sources to process.
    rule_dependencies: The set of files that rules depend on.
    extension_to_rule_name: A dictionary mapping file extensions to rules.
    filter_group: The list to which filter entries will be appended.
    source_group: The list to which source entries will be appended.
"""
for source in sources:
if isinstance(source, MSVSProject.Filter):
# We have a sub-filter. Create the name of that sub-filter.
if not parent_filter_name:
filter_name = source.name
else:
filter_name = '%s\\%s' % (parent_filter_name, source.name)
# Add the filter to the group.
filter_group.append(
['Filter', {'Include': filter_name},
['UniqueIdentifier', MSVSNew.MakeGuid(source.name)]])
# Recurse and add its dependents.
_AppendFiltersForMSBuild(filter_name, source.contents,
rule_dependencies, extension_to_rule_name,
filter_group, source_group)
else:
# It's a source. Create a source entry.
_, element = _MapFileToMsBuildSourceType(source, rule_dependencies,
extension_to_rule_name)
source_entry = [element, {'Include': source}]
# Specify the filter it is part of, if any.
if parent_filter_name:
source_entry.append(['Filter', parent_filter_name])
source_group.append(source_entry)
def _MapFileToMsBuildSourceType(source, rule_dependencies,
extension_to_rule_name):
"""Returns the group and element type of the source file.
  Arguments:
    source: The source file name.
    rule_dependencies: The set of files that rules depend on.
    extension_to_rule_name: A dictionary mapping file extensions to rules.
  Returns:
    A pair of (the group this file should be part of, the label of the element).
"""
_, ext = os.path.splitext(source)
if ext in extension_to_rule_name:
group = 'rule'
element = extension_to_rule_name[ext]
elif ext in ['.cc', '.cpp', '.c', '.cxx']:
group = 'compile'
element = 'ClCompile'
elif ext in ['.h', '.hxx']:
group = 'include'
element = 'ClInclude'
elif ext == '.rc':
group = 'resource'
element = 'ResourceCompile'
elif ext == '.asm':
group = 'masm'
element = 'MASM'
elif ext == '.idl':
group = 'midl'
element = 'Midl'
elif source in rule_dependencies:
group = 'rule_dependency'
element = 'CustomBuild'
else:
group = 'none'
element = 'None'
return (group, element)
def _GenerateRulesForMSBuild(output_dir, options, spec,
sources, excluded_sources,
props_files_of_rules, targets_files_of_rules,
actions_to_add, rule_dependencies,
extension_to_rule_name):
# MSBuild rules are implemented using three files: an XML file, a .targets
# file and a .props file.
# See http://blogs.msdn.com/b/vcblog/archive/2010/04/21/quick-help-on-vs2010-custom-build-rule.aspx
# for more details.
rules = spec.get('rules', [])
rules_native = [r for r in rules if not int(r.get('msvs_external_rule', 0))]
rules_external = [r for r in rules if int(r.get('msvs_external_rule', 0))]
msbuild_rules = []
for rule in rules_native:
# Skip a rule with no action and no inputs.
if 'action' not in rule and not rule.get('rule_sources', []):
continue
msbuild_rule = MSBuildRule(rule, spec)
msbuild_rules.append(msbuild_rule)
rule_dependencies.update(msbuild_rule.additional_dependencies.split(';'))
extension_to_rule_name[msbuild_rule.extension] = msbuild_rule.rule_name
if msbuild_rules:
base = spec['target_name'] + options.suffix
props_name = base + '.props'
targets_name = base + '.targets'
xml_name = base + '.xml'
props_files_of_rules.add(props_name)
targets_files_of_rules.add(targets_name)
props_path = os.path.join(output_dir, props_name)
targets_path = os.path.join(output_dir, targets_name)
xml_path = os.path.join(output_dir, xml_name)
_GenerateMSBuildRulePropsFile(props_path, msbuild_rules)
_GenerateMSBuildRuleTargetsFile(targets_path, msbuild_rules)
_GenerateMSBuildRuleXmlFile(xml_path, msbuild_rules)
if rules_external:
_GenerateExternalRules(rules_external, output_dir, spec,
sources, options, actions_to_add)
_AdjustSourcesForRules(rules, sources, excluded_sources, True)
class MSBuildRule(object):
"""Used to store information used to generate an MSBuild rule.
Attributes:
rule_name: The rule name, sanitized to use in XML.
target_name: The name of the target.
after_targets: The name of the AfterTargets element.
before_targets: The name of the BeforeTargets element.
depends_on: The name of the DependsOn element.
compute_output: The name of the ComputeOutput element.
dirs_to_make: The name of the DirsToMake element.
inputs: The name of the _inputs element.
tlog: The name of the _tlog element.
extension: The extension this rule applies to.
description: The message displayed when this rule is invoked.
additional_dependencies: A string listing additional dependencies.
outputs: The outputs of this rule.
command: The command used to run the rule.
"""
def __init__(self, rule, spec):
self.display_name = rule['rule_name']
    # Ensure that the rule name contains only alphanumerics and underscores.
self.rule_name = re.sub(r'\W', '_', self.display_name)
# Create the various element names, following the example set by the
# Visual Studio 2008 to 2010 conversion. I don't know if VS2010
# is sensitive to the exact names.
self.target_name = '_' + self.rule_name
self.after_targets = self.rule_name + 'AfterTargets'
self.before_targets = self.rule_name + 'BeforeTargets'
self.depends_on = self.rule_name + 'DependsOn'
self.compute_output = 'Compute%sOutput' % self.rule_name
self.dirs_to_make = self.rule_name + 'DirsToMake'
self.inputs = self.rule_name + '_inputs'
self.tlog = self.rule_name + '_tlog'
self.extension = rule['extension']
if not self.extension.startswith('.'):
self.extension = '.' + self.extension
self.description = MSVSSettings.ConvertVCMacrosToMSBuild(
rule.get('message', self.rule_name))
old_additional_dependencies = _FixPaths(rule.get('inputs', []))
self.additional_dependencies = (
';'.join([MSVSSettings.ConvertVCMacrosToMSBuild(i)
for i in old_additional_dependencies]))
old_outputs = _FixPaths(rule.get('outputs', []))
self.outputs = ';'.join([MSVSSettings.ConvertVCMacrosToMSBuild(i)
for i in old_outputs])
old_command = _BuildCommandLineForRule(spec, rule, has_input_path=True,
do_setup_env=True)
self.command = MSVSSettings.ConvertVCMacrosToMSBuild(old_command)
def _GenerateMSBuildRulePropsFile(props_path, msbuild_rules):
"""Generate the .props file."""
content = ['Project',
{'xmlns': 'http://schemas.microsoft.com/developer/msbuild/2003'}]
for rule in msbuild_rules:
content.extend([
['PropertyGroup',
{'Condition': "'$(%s)' == '' and '$(%s)' == '' and "
"'$(ConfigurationType)' != 'Makefile'" % (rule.before_targets,
rule.after_targets)
},
[rule.before_targets, 'Midl'],
[rule.after_targets, 'CustomBuild'],
],
['PropertyGroup',
[rule.depends_on,
{'Condition': "'$(ConfigurationType)' != 'Makefile'"},
'_SelectedFiles;$(%s)' % rule.depends_on
],
],
['ItemDefinitionGroup',
[rule.rule_name,
['CommandLineTemplate', rule.command],
['Outputs', rule.outputs],
['ExecutionDescription', rule.description],
['AdditionalDependencies', rule.additional_dependencies],
],
]
])
easy_xml.WriteXmlIfChanged(content, props_path, pretty=True, win32=True)
def _GenerateMSBuildRuleTargetsFile(targets_path, msbuild_rules):
"""Generate the .targets file."""
content = ['Project',
{'xmlns': 'http://schemas.microsoft.com/developer/msbuild/2003'
}
]
item_group = [
'ItemGroup',
['PropertyPageSchema',
{'Include': '$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml'}
]
]
for rule in msbuild_rules:
item_group.append(
['AvailableItemName',
{'Include': rule.rule_name},
['Targets', rule.target_name],
])
content.append(item_group)
for rule in msbuild_rules:
content.append(
['UsingTask',
{'TaskName': rule.rule_name,
'TaskFactory': 'XamlTaskFactory',
'AssemblyName': 'Microsoft.Build.Tasks.v4.0'
},
['Task', '$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml'],
])
for rule in msbuild_rules:
rule_name = rule.rule_name
target_outputs = '%%(%s.Outputs)' % rule_name
target_inputs = ('%%(%s.Identity);%%(%s.AdditionalDependencies);'
'$(MSBuildProjectFile)') % (rule_name, rule_name)
rule_inputs = '%%(%s.Identity)' % rule_name
extension_condition = ("'%(Extension)'=='.obj' or "
"'%(Extension)'=='.res' or "
"'%(Extension)'=='.rsc' or "
"'%(Extension)'=='.lib'")
remove_section = [
'ItemGroup',
{'Condition': "'@(SelectedFiles)' != ''"},
[rule_name,
{'Remove': '@(%s)' % rule_name,
'Condition': "'%(Identity)' != '@(SelectedFiles)'"
}
]
]
inputs_section = [
'ItemGroup',
[rule.inputs, {'Include': '%%(%s.AdditionalDependencies)' % rule_name}]
]
logging_section = [
'ItemGroup',
[rule.tlog,
{'Include': '%%(%s.Outputs)' % rule_name,
'Condition': ("'%%(%s.Outputs)' != '' and "
"'%%(%s.ExcludedFromBuild)' != 'true'" %
(rule_name, rule_name))
},
['Source', "@(%s, '|')" % rule_name],
['Inputs', "@(%s -> '%%(Fullpath)', ';')" % rule.inputs],
],
]
message_section = [
'Message',
{'Importance': 'High',
'Text': '%%(%s.ExecutionDescription)' % rule_name
}
]
write_tlog_section = [
'WriteLinesToFile',
{'Condition': "'@(%s)' != '' and '%%(%s.ExcludedFromBuild)' != "
"'true'" % (rule.tlog, rule.tlog),
'File': '$(IntDir)$(ProjectName).write.1.tlog',
'Lines': "^%%(%s.Source);@(%s->'%%(Fullpath)')" % (rule.tlog,
rule.tlog)
}
]
read_tlog_section = [
'WriteLinesToFile',
{'Condition': "'@(%s)' != '' and '%%(%s.ExcludedFromBuild)' != "
"'true'" % (rule.tlog, rule.tlog),
'File': '$(IntDir)$(ProjectName).read.1.tlog',
'Lines': "^%%(%s.Source);%%(%s.Inputs)" % (rule.tlog, rule.tlog)
}
]
command_and_input_section = [
rule_name,
{'Condition': "'@(%s)' != '' and '%%(%s.ExcludedFromBuild)' != "
"'true'" % (rule_name, rule_name),
'EchoOff': 'true',
'StandardOutputImportance': 'High',
'StandardErrorImportance': 'High',
'CommandLineTemplate': '%%(%s.CommandLineTemplate)' % rule_name,
'AdditionalOptions': '%%(%s.AdditionalOptions)' % rule_name,
'Inputs': rule_inputs
}
]
content.extend([
['Target',
{'Name': rule.target_name,
'BeforeTargets': '$(%s)' % rule.before_targets,
'AfterTargets': '$(%s)' % rule.after_targets,
'Condition': "'@(%s)' != ''" % rule_name,
'DependsOnTargets': '$(%s);%s' % (rule.depends_on,
rule.compute_output),
'Outputs': target_outputs,
'Inputs': target_inputs
},
remove_section,
inputs_section,
logging_section,
message_section,
write_tlog_section,
read_tlog_section,
command_and_input_section,
],
['PropertyGroup',
['ComputeLinkInputsTargets',
'$(ComputeLinkInputsTargets);',
'%s;' % rule.compute_output
],
['ComputeLibInputsTargets',
'$(ComputeLibInputsTargets);',
'%s;' % rule.compute_output
],
],
['Target',
{'Name': rule.compute_output,
'Condition': "'@(%s)' != ''" % rule_name
},
['ItemGroup',
[rule.dirs_to_make,
{'Condition': "'@(%s)' != '' and "
"'%%(%s.ExcludedFromBuild)' != 'true'" % (rule_name, rule_name),
'Include': '%%(%s.Outputs)' % rule_name
}
],
['Link',
{'Include': '%%(%s.Identity)' % rule.dirs_to_make,
'Condition': extension_condition
}
],
['Lib',
{'Include': '%%(%s.Identity)' % rule.dirs_to_make,
'Condition': extension_condition
}
],
['ImpLib',
{'Include': '%%(%s.Identity)' % rule.dirs_to_make,
'Condition': extension_condition
}
],
],
['MakeDir',
{'Directories': ("@(%s->'%%(RootDir)%%(Directory)')" %
rule.dirs_to_make)
}
]
],
])
easy_xml.WriteXmlIfChanged(content, targets_path, pretty=True, win32=True)
def _GenerateMSBuildRuleXmlFile(xml_path, msbuild_rules):
  """Generate the .xml file."""
content = [
'ProjectSchemaDefinitions',
{'xmlns': ('clr-namespace:Microsoft.Build.Framework.XamlTypes;'
'assembly=Microsoft.Build.Framework'),
'xmlns:x': 'http://schemas.microsoft.com/winfx/2006/xaml',
'xmlns:sys': 'clr-namespace:System;assembly=mscorlib',
'xmlns:transformCallback':
'Microsoft.Cpp.Dev10.ConvertPropertyCallback'
}
]
for rule in msbuild_rules:
content.extend([
['Rule',
{'Name': rule.rule_name,
'PageTemplate': 'tool',
'DisplayName': rule.display_name,
'Order': '200'
},
['Rule.DataSource',
['DataSource',
{'Persistence': 'ProjectFile',
'ItemType': rule.rule_name
}
]
],
['Rule.Categories',
['Category',
{'Name': 'General'},
['Category.DisplayName',
['sys:String', 'General'],
],
],
['Category',
{'Name': 'Command Line',
'Subtype': 'CommandLine'
},
['Category.DisplayName',
['sys:String', 'Command Line'],
],
],
],
['StringListProperty',
{'Name': 'Inputs',
'Category': 'Command Line',
'IsRequired': 'true',
'Switch': ' '
},
['StringListProperty.DataSource',
['DataSource',
{'Persistence': 'ProjectFile',
'ItemType': rule.rule_name,
'SourceType': 'Item'
}
]
],
],
['StringProperty',
{'Name': 'CommandLineTemplate',
'DisplayName': 'Command Line',
'Visible': 'False',
'IncludeInCommandLine': 'False'
}
],
['DynamicEnumProperty',
{'Name': rule.before_targets,
'Category': 'General',
'EnumProvider': 'Targets',
'IncludeInCommandLine': 'False'
},
['DynamicEnumProperty.DisplayName',
['sys:String', 'Execute Before'],
],
['DynamicEnumProperty.Description',
['sys:String', 'Specifies the targets for the build customization'
' to run before.'
],
],
['DynamicEnumProperty.ProviderSettings',
['NameValuePair',
{'Name': 'Exclude',
'Value': '^%s|^Compute' % rule.before_targets
}
]
],
['DynamicEnumProperty.DataSource',
['DataSource',
{'Persistence': 'ProjectFile',
'HasConfigurationCondition': 'true'
}
]
],
],
['DynamicEnumProperty',
{'Name': rule.after_targets,
'Category': 'General',
'EnumProvider': 'Targets',
'IncludeInCommandLine': 'False'
},
['DynamicEnumProperty.DisplayName',
['sys:String', 'Execute After'],
],
['DynamicEnumProperty.Description',
['sys:String', ('Specifies the targets for the build customization'
' to run after.')
],
],
['DynamicEnumProperty.ProviderSettings',
['NameValuePair',
{'Name': 'Exclude',
'Value': '^%s|^Compute' % rule.after_targets
}
]
],
['DynamicEnumProperty.DataSource',
['DataSource',
{'Persistence': 'ProjectFile',
'ItemType': '',
'HasConfigurationCondition': 'true'
}
]
],
],
['StringListProperty',
{'Name': 'Outputs',
'DisplayName': 'Outputs',
'Visible': 'False',
'IncludeInCommandLine': 'False'
}
],
['StringProperty',
{'Name': 'ExecutionDescription',
'DisplayName': 'Execution Description',
'Visible': 'False',
'IncludeInCommandLine': 'False'
}
],
['StringListProperty',
{'Name': 'AdditionalDependencies',
'DisplayName': 'Additional Dependencies',
'IncludeInCommandLine': 'False',
'Visible': 'false'
}
],
['StringProperty',
{'Subtype': 'AdditionalOptions',
'Name': 'AdditionalOptions',
'Category': 'Command Line'
},
['StringProperty.DisplayName',
['sys:String', 'Additional Options'],
],
['StringProperty.Description',
['sys:String', 'Additional Options'],
],
],
],
['ItemType',
{'Name': rule.rule_name,
'DisplayName': rule.display_name
}
],
['FileExtension',
{'Name': '*' + rule.extension,
'ContentType': rule.rule_name
}
],
['ContentType',
{'Name': rule.rule_name,
'DisplayName': '',
'ItemType': rule.rule_name
}
]
])
easy_xml.WriteXmlIfChanged(content, xml_path, pretty=True, win32=True)
def _GetConfigurationAndPlatform(name, settings):
configuration = name.rsplit('_', 1)[0]
platform = settings.get('msvs_configuration_platform', 'Win32')
return (configuration, platform)
def _GetConfigurationCondition(name, settings):
return (r"'$(Configuration)|$(Platform)'=='%s|%s'" %
_GetConfigurationAndPlatform(name, settings))
def _GetMSBuildProjectConfigurations(configurations):
group = ['ItemGroup', {'Label': 'ProjectConfigurations'}]
for (name, settings) in sorted(configurations.iteritems()):
configuration, platform = _GetConfigurationAndPlatform(name, settings)
designation = '%s|%s' % (configuration, platform)
group.append(
['ProjectConfiguration', {'Include': designation},
['Configuration', configuration],
['Platform', platform]])
return [group]
def _GetMSBuildGlobalProperties(spec, guid, gyp_file_name):
namespace = os.path.splitext(gyp_file_name)[0]
properties = [
['PropertyGroup', {'Label': 'Globals'},
['ProjectGuid', guid],
['Keyword', 'Win32Proj'],
['RootNamespace', namespace],
['IgnoreWarnCompileDuplicatedFilename', 'true'],
]
]
if os.environ.get('PROCESSOR_ARCHITECTURE') == 'AMD64' or \
os.environ.get('PROCESSOR_ARCHITEW6432') == 'AMD64':
properties[0].append(['PreferredToolArchitecture', 'x64'])
if spec.get('msvs_enable_winrt'):
properties[0].append(['DefaultLanguage', 'en-US'])
properties[0].append(['AppContainerApplication', 'true'])
if spec.get('msvs_application_type_revision'):
app_type_revision = spec.get('msvs_application_type_revision')
properties[0].append(['ApplicationTypeRevision', app_type_revision])
else:
properties[0].append(['ApplicationTypeRevision', '8.1'])
if spec.get('msvs_target_platform_version'):
target_platform_version = spec.get('msvs_target_platform_version')
properties[0].append(['WindowsTargetPlatformVersion',
target_platform_version])
if spec.get('msvs_target_platform_minversion'):
target_platform_minversion = spec.get('msvs_target_platform_minversion')
properties[0].append(['WindowsTargetPlatformMinVersion',
target_platform_minversion])
else:
properties[0].append(['WindowsTargetPlatformMinVersion',
target_platform_version])
if spec.get('msvs_enable_winphone'):
properties[0].append(['ApplicationType', 'Windows Phone'])
else:
properties[0].append(['ApplicationType', 'Windows Store'])
platform_name = None
msvs_windows_target_platform_version = None
for configuration in spec['configurations'].itervalues():
platform_name = platform_name or _ConfigPlatform(configuration)
msvs_windows_target_platform_version = \
msvs_windows_target_platform_version or \
_ConfigWindowsTargetPlatformVersion(configuration)
if platform_name and msvs_windows_target_platform_version:
break
if platform_name == 'ARM':
properties[0].append(['WindowsSDKDesktopARMSupport', 'true'])
if msvs_windows_target_platform_version:
properties[0].append(['WindowsTargetPlatformVersion', \
str(msvs_windows_target_platform_version)])
return properties
def _GetMSBuildConfigurationDetails(spec, build_file):
properties = {}
for name, settings in spec['configurations'].iteritems():
msbuild_attributes = _GetMSBuildAttributes(spec, settings, build_file)
condition = _GetConfigurationCondition(name, settings)
character_set = msbuild_attributes.get('CharacterSet')
_AddConditionalProperty(properties, condition, 'ConfigurationType',
msbuild_attributes['ConfigurationType'])
if character_set:
      if 'msvs_enable_winrt' not in spec:
_AddConditionalProperty(properties, condition, 'CharacterSet',
character_set)
return _GetMSBuildPropertyGroup(spec, 'Configuration', properties)
def _GetMSBuildLocalProperties(msbuild_toolset):
# Currently the only local property we support is PlatformToolset
properties = {}
if msbuild_toolset:
properties = [
['PropertyGroup', {'Label': 'Locals'},
['PlatformToolset', msbuild_toolset],
]
]
return properties
def _GetMSBuildPropertySheets(configurations):
user_props = r'$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props'
additional_props = {}
props_specified = False
for name, settings in sorted(configurations.iteritems()):
configuration = _GetConfigurationCondition(name, settings)
    if 'msbuild_props' in settings:
additional_props[configuration] = _FixPaths(settings['msbuild_props'])
props_specified = True
else:
additional_props[configuration] = ''
if not props_specified:
return [
['ImportGroup',
{'Label': 'PropertySheets'},
['Import',
{'Project': user_props,
'Condition': "exists('%s')" % user_props,
'Label': 'LocalAppDataPlatform'
}
]
]
]
else:
sheets = []
for condition, props in additional_props.iteritems():
import_group = [
'ImportGroup',
{'Label': 'PropertySheets',
'Condition': condition
},
['Import',
{'Project': user_props,
'Condition': "exists('%s')" % user_props,
'Label': 'LocalAppDataPlatform'
}
]
]
for props_file in props:
      import_group.append(['Import', {'Project': props_file}])
sheets.append(import_group)
return sheets
def _ConvertMSVSBuildAttributes(spec, config, build_file):
config_type = _GetMSVSConfigurationType(spec, build_file)
msvs_attributes = _GetMSVSAttributes(spec, config, config_type)
msbuild_attributes = {}
for a in msvs_attributes:
if a in ['IntermediateDirectory', 'OutputDirectory']:
directory = MSVSSettings.ConvertVCMacrosToMSBuild(msvs_attributes[a])
if not directory.endswith('\\'):
directory += '\\'
msbuild_attributes[a] = directory
elif a == 'CharacterSet':
msbuild_attributes[a] = _ConvertMSVSCharacterSet(msvs_attributes[a])
elif a == 'ConfigurationType':
msbuild_attributes[a] = _ConvertMSVSConfigurationType(msvs_attributes[a])
else:
print 'Warning: Do not know how to convert MSVS attribute ' + a
return msbuild_attributes
def _ConvertMSVSCharacterSet(char_set):
if char_set.isdigit():
char_set = {
'0': 'MultiByte',
'1': 'Unicode',
'2': 'MultiByte',
}[char_set]
return char_set
def _ConvertMSVSConfigurationType(config_type):
if config_type.isdigit():
config_type = {
'1': 'Application',
'2': 'DynamicLibrary',
'4': 'StaticLibrary',
'10': 'Utility'
}[config_type]
return config_type
def _GetMSBuildAttributes(spec, config, build_file):
if 'msbuild_configuration_attributes' not in config:
msbuild_attributes = _ConvertMSVSBuildAttributes(spec, config, build_file)
else:
config_type = _GetMSVSConfigurationType(spec, build_file)
config_type = _ConvertMSVSConfigurationType(config_type)
msbuild_attributes = config.get('msbuild_configuration_attributes', {})
msbuild_attributes.setdefault('ConfigurationType', config_type)
output_dir = msbuild_attributes.get('OutputDirectory',
'$(SolutionDir)$(Configuration)')
msbuild_attributes['OutputDirectory'] = _FixPath(output_dir) + '\\'
if 'IntermediateDirectory' not in msbuild_attributes:
intermediate = _FixPath('$(Configuration)') + '\\'
msbuild_attributes['IntermediateDirectory'] = intermediate
if 'CharacterSet' in msbuild_attributes:
msbuild_attributes['CharacterSet'] = _ConvertMSVSCharacterSet(
msbuild_attributes['CharacterSet'])
if 'TargetName' not in msbuild_attributes:
prefix = spec.get('product_prefix', '')
product_name = spec.get('product_name', '$(ProjectName)')
target_name = prefix + product_name
msbuild_attributes['TargetName'] = target_name
if 'TargetExt' not in msbuild_attributes and 'product_extension' in spec:
ext = spec.get('product_extension')
msbuild_attributes['TargetExt'] = '.' + ext
if spec.get('msvs_external_builder'):
external_out_dir = spec.get('msvs_external_builder_out_dir', '.')
msbuild_attributes['OutputDirectory'] = _FixPath(external_out_dir) + '\\'
# Make sure that 'TargetPath' matches 'Lib.OutputFile' or 'Link.OutputFile'
# (depending on the tool used) to avoid MSB8012 warning.
msbuild_tool_map = {
'executable': 'Link',
'shared_library': 'Link',
'loadable_module': 'Link',
'static_library': 'Lib',
}
msbuild_tool = msbuild_tool_map.get(spec['type'])
if msbuild_tool:
msbuild_settings = config['finalized_msbuild_settings']
out_file = msbuild_settings[msbuild_tool].get('OutputFile')
if out_file:
msbuild_attributes['TargetPath'] = _FixPath(out_file)
target_ext = msbuild_settings[msbuild_tool].get('TargetExt')
if target_ext:
msbuild_attributes['TargetExt'] = target_ext
return msbuild_attributes
def _GetMSBuildConfigurationGlobalProperties(spec, configurations, build_file):
# TODO(jeanluc) We could optimize out the following and do it only if
# there are actions.
# TODO(jeanluc) Handle the equivalent of setting 'CYGWIN=nontsec'.
new_paths = []
cygwin_dirs = spec.get('msvs_cygwin_dirs', ['.'])[0]
if cygwin_dirs:
cyg_path = '$(MSBuildProjectDirectory)\\%s\\bin\\' % _FixPath(cygwin_dirs)
new_paths.append(cyg_path)
# TODO(jeanluc) Change the convention to have both a cygwin_dir and a
# python_dir.
python_path = cyg_path.replace('cygwin\\bin', 'python_26')
new_paths.append(python_path)
if new_paths:
new_paths = '$(ExecutablePath);' + ';'.join(new_paths)
properties = {}
for (name, configuration) in sorted(configurations.iteritems()):
condition = _GetConfigurationCondition(name, configuration)
attributes = _GetMSBuildAttributes(spec, configuration, build_file)
msbuild_settings = configuration['finalized_msbuild_settings']
_AddConditionalProperty(properties, condition, 'IntDir',
attributes['IntermediateDirectory'])
_AddConditionalProperty(properties, condition, 'OutDir',
attributes['OutputDirectory'])
_AddConditionalProperty(properties, condition, 'TargetName',
attributes['TargetName'])
if 'TargetExt' in attributes:
_AddConditionalProperty(properties, condition, 'TargetExt',
attributes['TargetExt'])
if attributes.get('TargetPath'):
_AddConditionalProperty(properties, condition, 'TargetPath',
attributes['TargetPath'])
if attributes.get('TargetExt'):
_AddConditionalProperty(properties, condition, 'TargetExt',
attributes['TargetExt'])
if new_paths:
_AddConditionalProperty(properties, condition, 'ExecutablePath',
new_paths)
tool_settings = msbuild_settings.get('', {})
for name, value in sorted(tool_settings.iteritems()):
formatted_value = _GetValueFormattedForMSBuild('', name, value)
_AddConditionalProperty(properties, condition, name, formatted_value)
return _GetMSBuildPropertyGroup(spec, None, properties)
def _AddConditionalProperty(properties, condition, name, value):
"""Adds a property / conditional value pair to a dictionary.
Arguments:
properties: The dictionary to be modified. The key is the name of the
property. The value is itself a dictionary; its key is the value and
the value a list of condition for which this value is true.
condition: The condition under which the named property has the value.
name: The name of the property.
value: The value of the property.
"""
if name not in properties:
properties[name] = {}
values = properties[name]
if value not in values:
values[value] = []
conditions = values[value]
conditions.append(condition)
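As an illustration (not part of the original file), the accumulation pattern used by `_AddConditionalProperty` — `properties[name][value]` mapping to the list of conditions under which that value holds — can be sketched standalone with hypothetical data:

```python
def add_conditional_property(properties, condition, name, value):
    # properties: name -> {value -> [conditions under which it holds]}
    properties.setdefault(name, {}).setdefault(value, []).append(condition)

props = {}
add_conditional_property(props, "Debug", 'OutDir', 'build')
add_conditional_property(props, "Release", 'OutDir', 'build')
add_conditional_property(props, "Release", 'IntDir', 'obj')
# Identical values across configurations share one entry, which is what
# later lets _GetMSBuildPropertyGroup emit a single unconditional element.
print(props['OutDir']['build'])  # ['Debug', 'Release']
```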
# Regex for msvs variable references ( i.e. $(FOO) ).
MSVS_VARIABLE_REFERENCE = re.compile(r'\$\(([a-zA-Z_][a-zA-Z0-9_]*)\)')
def _GetMSBuildPropertyGroup(spec, label, properties):
"""Returns a PropertyGroup definition for the specified properties.
Arguments:
spec: The target project dict.
label: An optional label for the PropertyGroup.
properties: The dictionary to be converted. The key is the name of the
property. The value is itself a dictionary; its key is the value and
the value a list of condition for which this value is true.
"""
group = ['PropertyGroup']
if label:
group.append({'Label': label})
num_configurations = len(spec['configurations'])
def GetEdges(node):
    # Use a definition of edges such that user_of_variable -> used_variable.
# This happens to be easier in this case, since a variable's
# definition contains all variables it references in a single string.
edges = set()
for value in sorted(properties[node].keys()):
# Add to edges all $(...) references to variables.
#
      # Variable references that refer to names not in properties are
      # excluded. These can exist, for instance, to refer to built-in
      # definitions like $(SolutionDir).
#
      # Self references are ignored; a self reference is used in a few places
      # to append to the default value, e.g. PATH=$(PATH);other_path.
edges.update(set([v for v in MSVS_VARIABLE_REFERENCE.findall(value)
if v in properties and v != node]))
return edges
properties_ordered = gyp.common.TopologicallySorted(
properties.keys(), GetEdges)
# Walk properties in the reverse of a topological sort on
# user_of_variable -> used_variable as this ensures variables are
# defined before they are used.
# NOTE: reverse(topsort(DAG)) = topsort(reverse_edges(DAG))
for name in reversed(properties_ordered):
values = properties[name]
for value, conditions in sorted(values.iteritems()):
if len(conditions) == num_configurations:
        # If the value is the same for all configurations,
# just add one unconditional entry.
group.append([name, value])
else:
for condition in conditions:
group.append([name, {'Condition': condition}, value])
return [group]
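The ordering step above can be sketched in isolation. This hypothetical version simplifies each property to a single string value and uses the stdlib `graphlib` instead of `gyp.common.TopologicallySorted`, to show why the reversed topological sort defines variables before they are used:

```python
import re
from graphlib import TopologicalSorter

VAR_REF = re.compile(r'\$\(([a-zA-Z_][a-zA-Z0-9_]*)\)')

def define_before_use(properties):
    # Edge direction mirrors GetEdges above: each name depends on the
    # names it references via $(...); names not defined here (such as
    # $(SolutionDir)) and self references are ignored.
    graph = {name: {v for v in VAR_REF.findall(value)
                    if v in properties and v != name}
             for name, value in properties.items()}
    # static_order() yields dependencies first, i.e. used before user.
    return list(TopologicalSorter(graph).static_order())

props = {'B': '$(A);extra', 'A': 'base', 'C': '$(B);$(SolutionDir)'}
print(define_before_use(props))  # ['A', 'B', 'C']
```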
def _GetMSBuildToolSettingsSections(spec, configurations):
groups = []
for (name, configuration) in sorted(configurations.iteritems()):
msbuild_settings = configuration['finalized_msbuild_settings']
group = ['ItemDefinitionGroup',
{'Condition': _GetConfigurationCondition(name, configuration)}
]
for tool_name, tool_settings in sorted(msbuild_settings.iteritems()):
# Skip the tool named '' which is a holder of global settings handled
# by _GetMSBuildConfigurationGlobalProperties.
if tool_name:
if tool_settings:
tool = [tool_name]
for name, value in sorted(tool_settings.iteritems()):
formatted_value = _GetValueFormattedForMSBuild(tool_name, name,
value)
tool.append([name, formatted_value])
group.append(tool)
groups.append(group)
return groups
def _FinalizeMSBuildSettings(spec, configuration):
if 'msbuild_settings' in configuration:
converted = False
msbuild_settings = configuration['msbuild_settings']
MSVSSettings.ValidateMSBuildSettings(msbuild_settings)
else:
converted = True
msvs_settings = configuration.get('msvs_settings', {})
msbuild_settings = MSVSSettings.ConvertToMSBuildSettings(msvs_settings)
include_dirs, midl_include_dirs, resource_include_dirs = \
_GetIncludeDirs(configuration)
libraries = _GetLibraries(spec)
library_dirs = _GetLibraryDirs(configuration)
out_file, _, msbuild_tool = _GetOutputFilePathAndTool(spec, msbuild=True)
target_ext = _GetOutputTargetExt(spec)
defines = _GetDefines(configuration)
if converted:
# Visual Studio 2010 has TR1
defines = [d for d in defines if d != '_HAS_TR1=0']
# Warn of ignored settings
ignored_settings = ['msvs_tool_files']
for ignored_setting in ignored_settings:
value = configuration.get(ignored_setting)
if value:
print ('Warning: The automatic conversion to MSBuild does not handle '
'%s. Ignoring setting of %s' % (ignored_setting, str(value)))
defines = [_EscapeCppDefineForMSBuild(d) for d in defines]
disabled_warnings = _GetDisabledWarnings(configuration)
prebuild = configuration.get('msvs_prebuild')
postbuild = configuration.get('msvs_postbuild')
def_file = _GetModuleDefinition(spec)
precompiled_header = configuration.get('msvs_precompiled_header')
# Add the information to the appropriate tool
# TODO(jeanluc) We could optimize and generate these settings only if
# the corresponding files are found, e.g. don't generate ResourceCompile
# if you don't have any resources.
_ToolAppend(msbuild_settings, 'ClCompile',
'AdditionalIncludeDirectories', include_dirs)
_ToolAppend(msbuild_settings, 'Midl',
'AdditionalIncludeDirectories', midl_include_dirs)
_ToolAppend(msbuild_settings, 'ResourceCompile',
'AdditionalIncludeDirectories', resource_include_dirs)
# Add in libraries, note that even for empty libraries, we want this
  # set, to prevent inheriting default libraries from the environment.
_ToolSetOrAppend(msbuild_settings, 'Link', 'AdditionalDependencies',
libraries)
_ToolAppend(msbuild_settings, 'Link', 'AdditionalLibraryDirectories',
library_dirs)
if out_file:
_ToolAppend(msbuild_settings, msbuild_tool, 'OutputFile', out_file,
only_if_unset=True)
if target_ext:
_ToolAppend(msbuild_settings, msbuild_tool, 'TargetExt', target_ext,
only_if_unset=True)
# Add defines.
_ToolAppend(msbuild_settings, 'ClCompile',
'PreprocessorDefinitions', defines)
_ToolAppend(msbuild_settings, 'ResourceCompile',
'PreprocessorDefinitions', defines)
# Add disabled warnings.
_ToolAppend(msbuild_settings, 'ClCompile',
'DisableSpecificWarnings', disabled_warnings)
# Turn on precompiled headers if appropriate.
if precompiled_header:
precompiled_header = os.path.split(precompiled_header)[1]
_ToolAppend(msbuild_settings, 'ClCompile', 'PrecompiledHeader', 'Use')
_ToolAppend(msbuild_settings, 'ClCompile',
'PrecompiledHeaderFile', precompiled_header)
_ToolAppend(msbuild_settings, 'ClCompile',
'ForcedIncludeFiles', [precompiled_header])
else:
_ToolAppend(msbuild_settings, 'ClCompile', 'PrecompiledHeader', 'NotUsing')
# Turn off WinRT compilation
_ToolAppend(msbuild_settings, 'ClCompile', 'CompileAsWinRT', 'false')
# Turn on import libraries if appropriate
if spec.get('msvs_requires_importlibrary'):
_ToolAppend(msbuild_settings, '', 'IgnoreImportLibrary', 'false')
# Loadable modules don't generate import libraries;
# tell dependent projects to not expect one.
if spec['type'] == 'loadable_module':
_ToolAppend(msbuild_settings, '', 'IgnoreImportLibrary', 'true')
# Set the module definition file if any.
if def_file:
_ToolAppend(msbuild_settings, 'Link', 'ModuleDefinitionFile', def_file)
configuration['finalized_msbuild_settings'] = msbuild_settings
if prebuild:
_ToolAppend(msbuild_settings, 'PreBuildEvent', 'Command', prebuild)
if postbuild:
_ToolAppend(msbuild_settings, 'PostBuildEvent', 'Command', postbuild)
def _GetValueFormattedForMSBuild(tool_name, name, value):
if type(value) == list:
    # For some settings, VS2010 does not automatically extend the settings.
# TODO(jeanluc) Is this what we want?
if name in ['AdditionalIncludeDirectories',
'AdditionalLibraryDirectories',
'AdditionalOptions',
'DelayLoadDLLs',
'DisableSpecificWarnings',
'PreprocessorDefinitions']:
value.append('%%(%s)' % name)
# For most tools, entries in a list should be separated with ';' but some
# settings use a space. Check for those first.
exceptions = {
'ClCompile': ['AdditionalOptions'],
'Link': ['AdditionalOptions'],
'Lib': ['AdditionalOptions']}
if tool_name in exceptions and name in exceptions[tool_name]:
char = ' '
else:
char = ';'
formatted_value = char.join(
[MSVSSettings.ConvertVCMacrosToMSBuild(i) for i in value])
else:
formatted_value = MSVSSettings.ConvertVCMacrosToMSBuild(value)
return formatted_value
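A minimal sketch of the separator rule above (hypothetical helper; the real function also expands VC macros via `MSVSSettings.ConvertVCMacrosToMSBuild`):

```python
SPACE_SEPARATED = {('ClCompile', 'AdditionalOptions'),
                   ('Link', 'AdditionalOptions'),
                   ('Lib', 'AdditionalOptions')}

def join_setting(tool_name, name, values):
    # AdditionalOptions is a raw command-line fragment, so it stays space
    # separated; most other list settings use the MSBuild ';' separator.
    sep = ' ' if (tool_name, name) in SPACE_SEPARATED else ';'
    return sep.join(values)

print(join_setting('Link', 'AdditionalOptions', ['/MAP', '/DEBUG']))
# /MAP /DEBUG
print(join_setting('Link', 'AdditionalDependencies', ['a.lib', 'b.lib']))
# a.lib;b.lib
```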
def _VerifySourcesExist(sources, root_dir):
"""Verifies that all source files exist on disk.
Checks that all regular source files, i.e. not created at run time,
  exist on disk. Missing files cause needless recompilation but produce no
  otherwise visible errors.
Arguments:
sources: A recursive list of Filter/file names.
root_dir: The root directory for the relative path names.
Returns:
A list of source files that cannot be found on disk.
"""
missing_sources = []
for source in sources:
if isinstance(source, MSVSProject.Filter):
missing_sources.extend(_VerifySourcesExist(source.contents, root_dir))
else:
if '$' not in source:
full_path = os.path.join(root_dir, source)
if not os.path.exists(full_path):
missing_sources.append(full_path)
return missing_sources
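A self-contained sketch of the recursive existence check (with a hypothetical namedtuple stand-in for `MSVSProject.Filter`):

```python
import os
from collections import namedtuple

Filter = namedtuple('Filter', ['name', 'contents'])  # stand-in for MSVSProject.Filter

def find_missing(sources, root_dir):
    out = []
    for source in sources:
        if isinstance(source, Filter):
            out.extend(find_missing(source.contents, root_dir))
        elif '$' not in source:  # skip paths containing build variables
            path = os.path.join(root_dir, source)
            if not os.path.exists(path):
                out.append(path)
    return out

# The generated file is skipped; the plain file is reported if absent.
result = find_missing([Filter('src', ['no_such_file_xyz.c']), '$(IntDir)gen.c'], '.')
print(result)
```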
def _GetMSBuildSources(spec, sources, exclusions, rule_dependencies,
extension_to_rule_name, actions_spec,
sources_handled_by_action, list_excluded):
groups = ['none', 'masm', 'midl', 'include', 'compile', 'resource', 'rule',
'rule_dependency']
grouped_sources = {}
for g in groups:
grouped_sources[g] = []
_AddSources2(spec, sources, exclusions, grouped_sources,
rule_dependencies, extension_to_rule_name,
sources_handled_by_action, list_excluded)
sources = []
for g in groups:
if grouped_sources[g]:
sources.append(['ItemGroup'] + grouped_sources[g])
if actions_spec:
sources.append(['ItemGroup'] + actions_spec)
return sources
def _AddSources2(spec, sources, exclusions, grouped_sources,
rule_dependencies, extension_to_rule_name,
sources_handled_by_action,
list_excluded):
extensions_excluded_from_precompile = []
for source in sources:
if isinstance(source, MSVSProject.Filter):
_AddSources2(spec, source.contents, exclusions, grouped_sources,
rule_dependencies, extension_to_rule_name,
sources_handled_by_action,
list_excluded)
else:
if not source in sources_handled_by_action:
detail = []
excluded_configurations = exclusions.get(source, [])
if len(excluded_configurations) == len(spec['configurations']):
detail.append(['ExcludedFromBuild', 'true'])
else:
for config_name, configuration in sorted(excluded_configurations):
condition = _GetConfigurationCondition(config_name, configuration)
detail.append(['ExcludedFromBuild',
{'Condition': condition},
'true'])
# Add precompile if needed
for config_name, configuration in spec['configurations'].iteritems():
precompiled_source = configuration.get('msvs_precompiled_source', '')
if precompiled_source != '':
precompiled_source = _FixPath(precompiled_source)
if not extensions_excluded_from_precompile:
# If the precompiled header is generated by a C source, we must
# not try to use it for C++ sources, and vice versa.
basename, extension = os.path.splitext(precompiled_source)
if extension == '.c':
extensions_excluded_from_precompile = ['.cc', '.cpp', '.cxx']
else:
extensions_excluded_from_precompile = ['.c']
if precompiled_source == source:
condition = _GetConfigurationCondition(config_name, configuration)
detail.append(['PrecompiledHeader',
{'Condition': condition},
'Create'
])
else:
# Turn off precompiled header usage for source files of a
# different type than the file that generated the
# precompiled header.
for extension in extensions_excluded_from_precompile:
if source.endswith(extension):
detail.append(['PrecompiledHeader', ''])
detail.append(['ForcedIncludeFiles', ''])
group, element = _MapFileToMsBuildSourceType(source, rule_dependencies,
extension_to_rule_name)
grouped_sources[group].append([element, {'Include': source}] + detail)
def _GetMSBuildProjectReferences(project):
references = []
if project.dependencies:
group = ['ItemGroup']
for dependency in project.dependencies:
guid = dependency.guid
project_dir = os.path.split(project.path)[0]
relative_path = gyp.common.RelativePath(dependency.path, project_dir)
project_ref = ['ProjectReference',
{'Include': relative_path},
['Project', guid],
['ReferenceOutputAssembly', 'false']
]
for config in dependency.spec.get('configurations', {}).itervalues():
if config.get('msvs_use_library_dependency_inputs', 0):
project_ref.append(['UseLibraryDependencyInputs', 'true'])
break
# If it's disabled in any config, turn it off in the reference.
if config.get('msvs_2010_disable_uldi_when_referenced', 0):
project_ref.append(['UseLibraryDependencyInputs', 'false'])
break
group.append(project_ref)
references.append(group)
return references
def _GenerateMSBuildProject(project, options, version, generator_flags):
spec = project.spec
configurations = spec['configurations']
project_dir, project_file_name = os.path.split(project.path)
gyp.common.EnsureDirExists(project.path)
# Prepare list of sources and excluded sources.
gyp_path = _NormalizedSource(project.build_file)
relative_path_of_gyp_file = gyp.common.RelativePath(gyp_path, project_dir)
gyp_file = os.path.split(project.build_file)[1]
sources, excluded_sources = _PrepareListOfSources(spec, generator_flags,
gyp_file)
# Add rules.
actions_to_add = {}
props_files_of_rules = set()
targets_files_of_rules = set()
rule_dependencies = set()
extension_to_rule_name = {}
list_excluded = generator_flags.get('msvs_list_excluded_files', True)
# Don't generate rules if we are using an external builder like ninja.
if not spec.get('msvs_external_builder'):
_GenerateRulesForMSBuild(project_dir, options, spec,
sources, excluded_sources,
props_files_of_rules, targets_files_of_rules,
actions_to_add, rule_dependencies,
extension_to_rule_name)
else:
rules = spec.get('rules', [])
_AdjustSourcesForRules(rules, sources, excluded_sources, True)
sources, excluded_sources, excluded_idl = (
_AdjustSourcesAndConvertToFilterHierarchy(spec, options,
project_dir, sources,
excluded_sources,
list_excluded, version))
# Don't add actions if we are using an external builder like ninja.
if not spec.get('msvs_external_builder'):
_AddActions(actions_to_add, spec, project.build_file)
_AddCopies(actions_to_add, spec)
# NOTE: this stanza must appear after all actions have been decided.
  # Don't exclude sources with actions attached, or they won't run.
excluded_sources = _FilterActionsFromExcluded(
excluded_sources, actions_to_add)
exclusions = _GetExcludedFilesFromBuild(spec, excluded_sources, excluded_idl)
actions_spec, sources_handled_by_action = _GenerateActionsForMSBuild(
spec, actions_to_add)
_GenerateMSBuildFiltersFile(project.path + '.filters', sources,
rule_dependencies,
extension_to_rule_name)
missing_sources = _VerifySourcesExist(sources, project_dir)
for configuration in configurations.itervalues():
_FinalizeMSBuildSettings(spec, configuration)
# Add attributes to root element
import_default_section = [
['Import', {'Project': r'$(VCTargetsPath)\Microsoft.Cpp.Default.props'}]]
import_cpp_props_section = [
['Import', {'Project': r'$(VCTargetsPath)\Microsoft.Cpp.props'}]]
import_cpp_targets_section = [
['Import', {'Project': r'$(VCTargetsPath)\Microsoft.Cpp.targets'}]]
import_masm_props_section = [
['Import',
{'Project': r'$(VCTargetsPath)\BuildCustomizations\masm.props'}]]
import_masm_targets_section = [
['Import',
{'Project': r'$(VCTargetsPath)\BuildCustomizations\masm.targets'}]]
macro_section = [['PropertyGroup', {'Label': 'UserMacros'}]]
content = [
'Project',
{'xmlns': 'http://schemas.microsoft.com/developer/msbuild/2003',
'ToolsVersion': version.ProjectVersion(),
'DefaultTargets': 'Build'
}]
content += _GetMSBuildProjectConfigurations(configurations)
content += _GetMSBuildGlobalProperties(spec, project.guid, project_file_name)
content += import_default_section
content += _GetMSBuildConfigurationDetails(spec, project.build_file)
if spec.get('msvs_enable_winphone'):
content += _GetMSBuildLocalProperties('v120_wp81')
else:
content += _GetMSBuildLocalProperties(project.msbuild_toolset)
content += import_cpp_props_section
content += import_masm_props_section
content += _GetMSBuildExtensions(props_files_of_rules)
content += _GetMSBuildPropertySheets(configurations)
content += macro_section
content += _GetMSBuildConfigurationGlobalProperties(spec, configurations,
project.build_file)
content += _GetMSBuildToolSettingsSections(spec, configurations)
content += _GetMSBuildSources(
spec, sources, exclusions, rule_dependencies, extension_to_rule_name,
actions_spec, sources_handled_by_action, list_excluded)
content += _GetMSBuildProjectReferences(project)
content += import_cpp_targets_section
content += import_masm_targets_section
content += _GetMSBuildExtensionTargets(targets_files_of_rules)
if spec.get('msvs_external_builder'):
content += _GetMSBuildExternalBuilderTargets(spec)
# TODO(jeanluc) File a bug to get rid of runas. We had in MSVS:
# has_run_as = _WriteMSVSUserFile(project.path, version, spec)
easy_xml.WriteXmlIfChanged(content, project.path, pretty=True, win32=True)
return missing_sources
def _GetMSBuildExternalBuilderTargets(spec):
"""Return a list of MSBuild targets for external builders.
The "Build" and "Clean" targets are always generated. If the spec contains
'msvs_external_builder_clcompile_cmd', then the "ClCompile" target will also
be generated, to support building selected C/C++ files.
Arguments:
spec: The gyp target spec.
Returns:
List of MSBuild 'Target' specs.
"""
build_cmd = _BuildCommandLineForRuleRaw(
spec, spec['msvs_external_builder_build_cmd'],
False, False, False, False)
build_target = ['Target', {'Name': 'Build'}]
build_target.append(['Exec', {'Command': build_cmd}])
clean_cmd = _BuildCommandLineForRuleRaw(
spec, spec['msvs_external_builder_clean_cmd'],
False, False, False, False)
clean_target = ['Target', {'Name': 'Clean'}]
clean_target.append(['Exec', {'Command': clean_cmd}])
targets = [build_target, clean_target]
if spec.get('msvs_external_builder_clcompile_cmd'):
clcompile_cmd = _BuildCommandLineForRuleRaw(
spec, spec['msvs_external_builder_clcompile_cmd'],
False, False, False, False)
clcompile_target = ['Target', {'Name': 'ClCompile'}]
clcompile_target.append(['Exec', {'Command': clcompile_cmd}])
targets.append(clcompile_target)
return targets
def _GetMSBuildExtensions(props_files_of_rules):
extensions = ['ImportGroup', {'Label': 'ExtensionSettings'}]
for props_file in props_files_of_rules:
extensions.append(['Import', {'Project': props_file}])
return [extensions]
def _GetMSBuildExtensionTargets(targets_files_of_rules):
targets_node = ['ImportGroup', {'Label': 'ExtensionTargets'}]
for targets_file in sorted(targets_files_of_rules):
targets_node.append(['Import', {'Project': targets_file}])
return [targets_node]
def _GenerateActionsForMSBuild(spec, actions_to_add):
"""Add actions accumulated into an actions_to_add, merging as needed.
Arguments:
spec: the target project dict
actions_to_add: dictionary keyed on input name, which maps to a list of
dicts describing the actions attached to that input file.
Returns:
A pair of (action specification, the sources handled by this action).
"""
sources_handled_by_action = OrderedSet()
actions_spec = []
for primary_input, actions in actions_to_add.iteritems():
inputs = OrderedSet()
outputs = OrderedSet()
descriptions = []
commands = []
for action in actions:
inputs.update(OrderedSet(action['inputs']))
outputs.update(OrderedSet(action['outputs']))
descriptions.append(action['description'])
cmd = action['command']
# For most actions, add 'call' so that actions that invoke batch files
# return and continue executing. msbuild_use_call provides a way to
# disable this but I have not seen any adverse effect from doing that
# for everything.
if action.get('msbuild_use_call', True):
cmd = 'call ' + cmd
commands.append(cmd)
# Add the custom build action for one input file.
description = ', and also '.join(descriptions)
# We can't join the commands simply with && because the command line will
# get too long. See also _AddActions: cygwin's setup_env mustn't be called
# for every invocation or the command that sets the PATH will grow too
# long.
command = '\r\n'.join([c + '\r\nif %errorlevel% neq 0 exit /b %errorlevel%'
for c in commands])
_AddMSBuildAction(spec,
primary_input,
inputs,
outputs,
command,
description,
sources_handled_by_action,
actions_spec)
return actions_spec, sources_handled_by_action
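The command composition described in the comments above — CRLF plus an errorlevel check instead of `&&` chains, to keep the composed line within cmd.exe limits — can be sketched as:

```python
def join_batch_commands(commands, use_call=True):
    # 'call' lets invoked batch files return control; the errorlevel
    # check aborts the composed script at the first failing command.
    cmds = [('call ' + c) if use_call else c for c in commands]
    return '\r\n'.join(
        c + '\r\nif %errorlevel% neq 0 exit /b %errorlevel%' for c in cmds)

script = join_batch_commands(['gen.bat a', 'gen.bat b'])
print(script.count('exit /b'))  # 2
```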
def _AddMSBuildAction(spec, primary_input, inputs, outputs, cmd, description,
sources_handled_by_action, actions_spec):
command = MSVSSettings.ConvertVCMacrosToMSBuild(cmd)
primary_input = _FixPath(primary_input)
inputs_array = _FixPaths(inputs)
outputs_array = _FixPaths(outputs)
additional_inputs = ';'.join([i for i in inputs_array
if i != primary_input])
outputs = ';'.join(outputs_array)
sources_handled_by_action.add(primary_input)
action_spec = ['CustomBuild', {'Include': primary_input}]
action_spec.extend(
# TODO(jeanluc) 'Document' for all or just if as_sources?
[['FileType', 'Document'],
['Command', command],
['Message', description],
['Outputs', outputs]
])
if additional_inputs:
action_spec.append(['AdditionalInputs', additional_inputs])
actions_spec.append(action_spec)
| gpl-2.0 |
RanadeepPolavarapu/kuma | vendor/packages/translate/tools/test_podebug.py | 25 | 6654 | # -*- coding: utf-8 -*-
from translate.storage import base, po, xliff
from translate.tools import podebug
PO_DOC = """
msgid "This is a %s test, hooray."
msgstr ""
"""
XLIFF_DOC = """<?xml version='1.0' encoding='utf-8'?>
<xliff xmlns="urn:oasis:names:tc:xliff:document:1.1" version="1.1">
<file original="NoName" source-language="en" datatype="plaintext">
<body>
<trans-unit id="office:document-content[0]/office:body[0]/office:text[0]/text:p[0]">
<source>This <g id="0">is a</g> test <x id="1" xid="office:document-content[0]/office:body[0]/office:text[0]/text:p[0]/text:note[0]"/>, hooray.</source>
</trans-unit>
</body>
</file>
</xliff>
"""
class TestPODebug:
debug = podebug.podebug()
def setup_method(self, method):
self.postore = po.pofile(PO_DOC)
self.xliffstore = xliff.xlifffile(XLIFF_DOC)
def test_ignore_gtk(self):
"""Test operation of GTK message ignoring"""
unit = base.TranslationUnit("default:LTR")
assert self.debug.ignore_gtk(unit)
def test_keep_target(self):
"""Test that we use the target for rewriting if it exists."""
unit = base.TranslationUnit(u"blie")
unit.target = u"bla"
debugger = podebug.podebug(rewritestyle="xxx")
unit = debugger.convertunit(unit, "")
assert unit.target == u"xxxblaxxx"
unit.target = u"d%d"
debugger = podebug.podebug(rewritestyle="flipped")
unit = debugger.convertunit(unit, "")
assert unit.target == u"\u202ep%d"
def test_rewrite_blank(self):
"""Test the blank rewrite function"""
assert str(self.debug.rewrite_blank(u"Test")) == u""
def test_rewrite_en(self):
"""Test the en rewrite function"""
assert str(self.debug.rewrite_en(u"Test")) == u"Test"
def test_rewrite_xxx(self):
"""Test the xxx rewrite function"""
assert str(self.debug.rewrite_xxx(u"Test")) == u"xxxTestxxx"
assert str(self.debug.rewrite_xxx(u"Newline\n")) == u"xxxNewlinexxx\n"
def test_rewrite_bracket(self):
"""Test the bracket rewrite function"""
assert str(self.debug.rewrite_bracket(u"Test")) == u"[Test]"
assert str(self.debug.rewrite_bracket(u"Newline\n")) == u"[Newline]\n"
def test_rewrite_unicode(self):
"""Test the unicode rewrite function"""
assert unicode(self.debug.rewrite_unicode(u"Test")) == u"Ŧḗşŧ"
def test_rewrite_flipped(self):
"""Test the unicode rewrite function"""
assert unicode(self.debug.rewrite_flipped(u"Test")) == u"\u202e⊥ǝsʇ"
# alternative with reversed string and no RTL override:
#assert unicode(self.debug.rewrite_flipped("Test")) == u"ʇsǝ⊥"
# Chars < ! and > z are returned as is
assert unicode(self.debug.rewrite_flipped(u" ")) == u"\u202e "
assert unicode(self.debug.rewrite_flipped(u"©")) == u"\u202e©"
def test_rewrite_chef(self):
"""Test the chef rewrite function
        This is not really critical to test, but a simple test ensures
        that it stays working.
"""
assert str(self.debug.rewrite_chef(u"Mock Swedish test you muppet")) == u"Mock Swedish test yooo mooppet"
def test_po_variables(self):
debug = podebug.podebug(rewritestyle='unicode')
po_out = debug.convertstore(self.postore)
in_unit = self.postore.units[0]
out_unit = po_out.units[0]
assert in_unit.source == out_unit.source
print(out_unit.target)
print(str(po_out))
rewrite_func = self.debug.rewrite_unicode
assert out_unit.target == u"%s%%s%s" % (rewrite_func(u'This is a '), rewrite_func(u' test, hooray.'))
def test_xliff_rewrite(self):
debug = podebug.podebug(rewritestyle='xxx')
xliff_out = debug.convertstore(self.xliffstore)
in_unit = self.xliffstore.units[0]
out_unit = xliff_out.units[0]
assert in_unit.source == out_unit.source
print(out_unit.target)
print(str(xliff_out))
assert out_unit.target == u'xxx%sxxx' % (in_unit.source)
def test_hash(self):
po_docs = ("""
msgid "Test msgid 1"
msgstr "Test msgstr 1"
""",
"""
msgctxt "test context"
msgid "Test msgid 2"
msgstr "Test msgstr 2"
""",
"""
# Test comment 3
msgctxt "test context 3"
msgid "Test msgid 3"
msgstr "Test msgstr 3"
""")
debugs = (podebug.podebug(format="%h "),
podebug.podebug(format="%6h."),
podebug.podebug(format="zzz%7h.zzz"),
podebug.podebug(format="%f %F %b %B %d %s "),
podebug.podebug(format="%3f %4F %5b %6B %7d %8s "),
podebug.podebug(format="%cf %cF %cb %cB %cd %cs "),
podebug.podebug(format="%3cf %4cF %5cb %6cB %7cd %8cs "),)
results = ["85a9 Test msgstr 1", "a15d Test msgstr 2", "6398 Test msgstr 3",
"85a917.Test msgstr 1", "a15d71.Test msgstr 2", "639898.Test msgstr 3",
"zzz85a9170.zzzTest msgstr 1", "zzza15d718.zzzTest msgstr 2", "zzz639898c.zzzTest msgstr 3",
"fullpath/to/fakefile fullpath/to/fakefile.po fakefile fakefile.po fullpath/to full-t-fake Test msgstr 1",
"fullpath/to/fakefile fullpath/to/fakefile.po fakefile fakefile.po fullpath/to full-t-fake Test msgstr 2",
"fullpath/to/fakefile fullpath/to/fakefile.po fakefile fakefile.po fullpath/to full-t-fake Test msgstr 3",
"ful full fakef fakefi fullpat full-t-f Test msgstr 1",
"ful full fakef fakefi fullpat full-t-f Test msgstr 2",
"ful full fakef fakefi fullpat full-t-f Test msgstr 3",
"fllpth/t/fkfl fllpth/t/fkfl.p fkfl fkfl.p fllpth/t fll-t-fk Test msgstr 1",
"fllpth/t/fkfl fllpth/t/fkfl.p fkfl fkfl.p fllpth/t fll-t-fk Test msgstr 2",
"fllpth/t/fkfl fllpth/t/fkfl.p fkfl fkfl.p fllpth/t fll-t-fk Test msgstr 3",
"fll fllp fkfl fkfl.p fllpth/ fll-t-fk Test msgstr 1",
"fll fllp fkfl fkfl.p fllpth/ fll-t-fk Test msgstr 2",
"fll fllp fkfl fkfl.p fllpth/ fll-t-fk Test msgstr 3"]
for debug in debugs:
for po_doc in po_docs:
postore = po.pofile(po_doc)
postore.filename = "fullpath/to/fakefile.po"
po_out = debug.convertstore(postore)
in_unit = postore.units[0]
out_unit = po_out.units[0]
assert in_unit.source == out_unit.source
assert out_unit.target == results.pop(0)
| mpl-2.0 |
kamalx/edx-platform | common/test/acceptance/tests/test_annotatable.py | 21 | 5426 | # -*- coding: utf-8 -*-
"""
E2E tests for the LMS.
"""
import time
from unittest import skip
from .helpers import UniqueCourseTest
from ..pages.studio.auto_auth import AutoAuthPage
from ..pages.lms.courseware import CoursewarePage
from ..pages.lms.annotation_component import AnnotationComponentPage
from ..fixtures.course import CourseFixture, XBlockFixtureDesc
from ..fixtures.xqueue import XQueueResponseFixture
from textwrap import dedent
def _correctness(choice, target):
if choice == target:
return "correct"
elif abs(choice - target) == 1:
return "partially-correct"
else:
return "incorrect"
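For illustration, the grading scheme `_correctness` encodes — an exact match is correct, off by one is partially correct, anything else is incorrect — behaves like this (a standalone copy, since the original lives inside the test module):

```python
def correctness(choice, target):  # standalone copy of _correctness above
    if choice == target:
        return "correct"
    elif abs(choice - target) == 1:
        return "partially-correct"
    else:
        return "incorrect"

print([correctness(c, 1) for c in (0, 1, 2, 3)])
# ['partially-correct', 'correct', 'partially-correct', 'incorrect']
```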
class AnnotatableProblemTest(UniqueCourseTest):
"""
Tests for annotation components.
"""
USERNAME = "STAFF_TESTER"
EMAIL = "johndoe@example.com"
DATA_TEMPLATE = dedent("""\
<annotatable>
<instructions>Instruction text</instructions>
<p>{}</p>
</annotatable>
""")
ANNOTATION_TEMPLATE = dedent("""\
Before {0}.
<annotation title="region {0}" body="Comment {0}" highlight="yellow" problem="{0}">
Region Contents {0}
</annotation>
After {0}.
""")
PROBLEM_TEMPLATE = dedent("""\
<problem max_attempts="1" weight="">
<annotationresponse>
<annotationinput>
<title>Question {number}</title>
<text>Region Contents {number}</text>
<comment>What number is this region?</comment>
<comment_prompt>Type your response below:</comment_prompt>
<tag_prompt>What number is this region?</tag_prompt>
<options>
{options}
</options>
</annotationinput>
</annotationresponse>
<solution>
This problem is checking region {number}
</solution>
</problem>
""")
OPTION_TEMPLATE = """<option choice="{correctness}">{number}</option>"""
def setUp(self):
super(AnnotatableProblemTest, self).setUp()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
# Install a course with two annotations and two annotations problems.
course_fix = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
self.annotation_count = 2
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Annotation Vertical').add_children(
XBlockFixtureDesc('annotatable', 'Test Annotation Module',
data=self.DATA_TEMPLATE.format("\n".join(
self.ANNOTATION_TEMPLATE.format(i) for i in xrange(self.annotation_count)
))),
XBlockFixtureDesc('problem', 'Test Annotation Problem 0',
data=self.PROBLEM_TEMPLATE.format(number=0, options="\n".join(
self.OPTION_TEMPLATE.format(
number=k,
correctness=_correctness(k, 0))
for k in xrange(self.annotation_count)
))),
XBlockFixtureDesc('problem', 'Test Annotation Problem 1',
data=self.PROBLEM_TEMPLATE.format(number=1, options="\n".join(
self.OPTION_TEMPLATE.format(
number=k,
correctness=_correctness(k, 1))
for k in xrange(self.annotation_count)
)))
)
)
)
).install()
# Auto-auth register for the course.
AutoAuthPage(self.browser, username=self.USERNAME, email=self.EMAIL,
course_id=self.course_id, staff=False).visit()
def _goto_annotation_component_page(self):
"""
Open annotation component page with assertion.
"""
self.courseware_page.visit()
annotation_component_page = AnnotationComponentPage(self.browser)
self.assertEqual(
            annotation_component_page.component_name, 'TEST ANNOTATION MODULE'
)
return annotation_component_page
    @skip('TODO fix TNL-1590')
def test_annotation_component(self):
"""
Test annotation components links to annotation problems.
"""
annotation_component_page = self._goto_annotation_component_page()
for i in xrange(self.annotation_count):
annotation_component_page.click_reply_annotation(i)
self.assertTrue(annotation_component_page.check_scroll_to_problem())
annotation_component_page.answer_problem()
self.assertTrue(annotation_component_page.check_feedback())
annotation_component_page.click_return_to_annotation()
self.assertTrue(annotation_component_page.check_scroll_to_annotation())
| agpl-3.0 |
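The module-level `_correctness` helper above encodes the grading rule the fixtures rely on: an exact match is correct, an off-by-one choice earns partial credit, and everything else is incorrect. A standalone restatement of that rule (plain Python, no test harness):

```python
def correctness(choice, target):
    """Mirror of the test module's _correctness helper: exact match is
    "correct", off-by-one is "partially-correct", otherwise "incorrect"."""
    if choice == target:
        return "correct"
    elif abs(choice - target) == 1:
        return "partially-correct"
    return "incorrect"

print(correctness(1, 1))  # correct
print(correctness(0, 1))  # partially-correct
print(correctness(3, 1))  # incorrect
```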
eseraygun/python-entities | examples/basicusage.py | 1 | 1373 | from entities import *
class Account(Entity):
id = IntegerField(group=PRIMARY) # this field is in primary key group
iban = IntegerField(group=SECONDARY) # this is in secondary key group
balance = FloatField(default=0.0)
class Name(Entity):
first_name = StringField(group=SECONDARY)
last_name = StringField(group=SECONDARY)
class Customer(Entity):
id = IntegerField(group=PRIMARY)
name = EntityField(Name, group=SECONDARY)
accounts = ListField(ReferenceField(Account), default=list)
# Create Account objects.
a_1 = Account(1, 111, 10.0) # __init__() recognizes positional arguments
a_2 = Account(id=2, iban=222, balance=20.0) # as well as keyword arguments
# Generate hashable key using primary key.
print a_1.keyify() # prints '(1,)'
# Generate hashable key using secondary key.
print a_2.keyify(SECONDARY) # prints '(222,)'
# Create Customer object.
c = Customer(1, Name('eser', 'aygun'))
# Generate hashable key using primary key.
print c.keyify() # prints '(1,)'
# Generate hashable key using secondary key.
print c.keyify(SECONDARY) # prints '(('eser', 'aygun'),)'
# Try validating an invalid object.
c.accounts.append(123)
try:
c.validate() # fails
except ValidationError:
print 'accounts list is only for Account objects'
# Try validating a valid object.
c.accounts = [a_1, a_2]
c.validate() # succeeds
| bsd-3-clause |
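The example above shows `keyify()` collecting the values of fields registered under a key group into a hashable tuple. A minimal sketch of how such a lookup could be built from field groups — an illustration only, not the `entities` library's actual internals (the `fields` mapping and `SimpleEntity` base are assumptions):

```python
PRIMARY, SECONDARY = "primary", "secondary"

class SimpleEntity(object):
    # Hypothetical layout: field name -> key group.
    fields = {}

    def keyify(self, group=PRIMARY):
        # Collect values of all fields in the requested group into a tuple,
        # so the result is hashable and usable as a dict key.
        return tuple(getattr(self, name)
                     for name, g in sorted(self.fields.items())
                     if g == group)

class Account(SimpleEntity):
    fields = {"id": PRIMARY, "iban": SECONDARY}

    def __init__(self, id, iban):
        self.id = id
        self.iban = iban

a = Account(1, 111)
print(a.keyify())           # (1,)
print(a.keyify(SECONDARY))  # (111,)
```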
dracorpg/python-ivi | ivi/agilent/agilent8341A.py | 6 | 1480 | """
Python Interchangeable Virtual Instrument Library
Copyright (c) 2014 Alex Forencich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
from .agilentBase8340 import *
class agilent8341A(agilentBase8340):
"Agilent 8341A IVI RF sweep generator driver"
def __init__(self, *args, **kwargs):
self.__dict__.setdefault('_instrument_id', 'HP8341A')
super(agilent8341A, self).__init__(*args, **kwargs)
self._frequency_low = 10e6
self._frequency_high = 26.5e9
| mit |
icereval/osf.io | scripts/analytics/migrate_analytics.py | 9 | 20115 | # A script to migrate old keen analytics to a new collection, generate in-between points for choppy
# data, or a little of both
import os
import csv
import copy
import pytz
import logging
import argparse
import datetime
from dateutil.parser import parse
from keen.client import KeenClient
from website.settings import KEEN as keen_settings
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
VERY_LONG_TIMEFRAME = 'this_20_years'
def parse_args():
parser = argparse.ArgumentParser(
description='Enter a start date and end date to gather, smooth, and send back analytics for keen'
)
parser.add_argument('-s', '--start', dest='start_date')
parser.add_argument('-e', '--end', dest='end_date')
parser.add_argument('-t', '--transfer', dest='transfer_collection', action='store_true')
parser.add_argument('-sc', '--source', dest='source_collection')
parser.add_argument('-dc', '--destination', dest='destination_collection')
parser.add_argument('-sm', '--smooth', dest='smooth_events', action='store_true')
parser.add_argument('-o', '--old', dest='old_analytics', action='store_true')
parser.add_argument('-d', '--dry', dest='dry', action='store_true')
parser.add_argument('-r', '--reverse', dest='reverse', action='store_true')
parser.add_argument('-re', '--removeevent', dest="remove_event")
parsed = parser.parse_args()
validate_args(parsed)
return parsed
def validate_args(args):
""" Go through supplied command line args an determine if you have enough to continue
:param args: argparse args object, to sift through and figure out if you need more info
:return: None, just raise errors if it finds something wrong
"""
if args.dry:
logger.info('Running analytics on DRY RUN mode! No data will actually be sent to Keen.')
potential_operations = [args.smooth_events, args.transfer_collection, args.old_analytics]
if len([arg for arg in potential_operations if arg]) > 1:
raise ValueError('You may only choose one analytic type to run: transfer, smooth, or import old analytics.')
if args.smooth_events and not (args.start_date and args.end_date):
raise ValueError('To smooth data, please enter both a start date and end date.')
if args.start_date and args.end_date:
if parse(args.start_date) > parse(args.end_date):
raise ValueError('Please enter an end date that is after the start date.')
if args.smooth_events and not args.source_collection:
raise ValueError('Please specify a source collection to smooth data from.')
if args.transfer_collection and not (args.source_collection and args.destination_collection):
raise ValueError('To transfer between keen collections, enter both a source and a destination collection.')
if any([args.start_date, args.end_date]) and not all([args.start_date, args.end_date]):
raise ValueError('You must provide both a start and an end date if you provide either.')
if args.remove_event and not args.source_collection:
        raise ValueError('You must provide a source collection to remove an event from.')
def fill_in_event_gaps(collection_name, events):
""" A method to help fill in gaps between events that might be far apart,
so that one event happens per day.
:param collection_name: keen collection events are from
:param events: events to fill in gaps between
:return: list of "generated and estimated" events to send that will fill in gaps.
"""
given_days = [parse(event['keen']['timestamp']).date() for event in events if not event.get('generated')]
given_days.sort()
date_chunks = [given_days[x-1:x+1] for x in range(1, len(given_days))]
events_to_add = []
if given_days:
if collection_name == 'addon_snapshot':
all_providers = list(set([event['provider']['name'] for event in events]))
for provider in all_providers:
for date_pair in date_chunks:
if date_pair[1] - date_pair[0] > datetime.timedelta(1) and date_pair[0] != date_pair[1]:
first_event = [
event for event in events if date_from_event_ts(event) == date_pair[0] and event['provider']['name'] == provider and not event.get('generated')
]
if first_event:
events_to_add += generate_events_between_events(date_pair, first_event[0])
elif collection_name == 'institution_summary':
            all_institutions = list(set([event['institution']['name'] for event in events]))
            for institution in all_institutions:
for date_pair in date_chunks:
if date_pair[1] - date_pair[0] > datetime.timedelta(1) and date_pair[0] != date_pair[1]:
first_event = [
event for event in events if date_from_event_ts(event) == date_pair[0] and event['institution']['name'] == institution and not event.get('generated')
]
if first_event:
events_to_add += generate_events_between_events(date_pair, first_event[0])
else:
for date_pair in date_chunks:
if date_pair[1] - date_pair[0] > datetime.timedelta(1) and date_pair[0] != date_pair[1]:
first_event = [event for event in events if date_from_event_ts(event) == date_pair[0] and not event.get('generated')]
if first_event:
events_to_add += generate_events_between_events(date_pair, first_event[0])
logger.info('Generated {} events to add to the {} collection.'.format(len(events_to_add), collection_name))
else:
logger.info('Could not retrieve events for the date range you provided.')
return events_to_add
def date_from_event_ts(event):
return parse(event['keen']['timestamp']).date()
def generate_events_between_events(given_days, first_event):
first_day = given_days[0]
last_day = given_days[-1]
next_day = first_day + datetime.timedelta(1)
first_event['keen'].pop('created_at')
first_event['keen'].pop('id')
first_event['generated'] = True # Add value to tag generated data
generated_events = []
while next_day < last_day:
new_event = copy.deepcopy(first_event)
new_event['keen']['timestamp'] = datetime.datetime(next_day.year, next_day.month, next_day.day).replace(tzinfo=pytz.UTC).isoformat()
if next_day not in given_days:
generated_events.append(new_event)
next_day += datetime.timedelta(1)
if generated_events:
logger.info('Generated {} events for the interval {} to {}'.format(
len(generated_events),
given_days[0].isoformat(),
given_days[1].isoformat()
)
)
return generated_events
def get_keen_client():
keen_project = keen_settings['private'].get('project_id')
read_key = keen_settings['private'].get('read_key')
master_key = keen_settings['private'].get('master_key')
write_key = keen_settings['private'].get('write_key')
if keen_project and read_key and master_key:
client = KeenClient(
project_id=keen_project,
read_key=read_key,
master_key=master_key,
write_key=write_key
)
else:
raise ValueError('Cannot connect to Keen clients - all keys not provided.')
return client
def extract_events_from_keen(client, event_collection, start_date=None, end_date=None):
""" Get analytics from keen to use as a starting point for smoothing or transferring
:param client: keen client to use for connection
:param start_date: datetime object, datetime to start gathering from keen
:param end_date: datetime object, datetime to stop gathering from keen
:param event_collection: str, name of the event collection to gather from
:return: a list of keen events to use in other methods
"""
timeframe = VERY_LONG_TIMEFRAME
if start_date and end_date:
logger.info('Gathering events from the {} collection between {} and {}'.format(event_collection, start_date, end_date))
timeframe = {"start": start_date.isoformat(), "end": end_date.isoformat()}
else:
logger.info('Gathering events from the {} collection using timeframe {}'.format(event_collection, VERY_LONG_TIMEFRAME))
return client.extraction(event_collection, timeframe=timeframe)
def make_sure_keen_schemas_match(source_collection, destination_collection, keen_client):
""" Helper function to check if two given collections have matching schemas in keen, to make sure
    they can be transferred between one another
:param source_collection: str, collection that events are stored now
:param destination_collection: str, collection to transfer to
:param keen_client: KeenClient, instantiated for the connection
:return: bool, if the two schemas match in keen
"""
source_schema = keen_client.get_collection(source_collection)
destination_schema = keen_client.get_collection(destination_collection)
return source_schema == destination_schema
def transfer_events_to_another_collection(client, source_collection, destination_collection, dry, reverse=False):
""" Transfer analytics from source collection to the destination collection.
Will only work if the source and destination have the same schemas attached, will error if they don't
:param client: KeenClient, client to use to make connection to keen
:param source_collection: str, keen collection to transfer from
:param destination_collection: str, keen collection to transfer to
:param dry: bool, whether or not to make a dry run, aka actually send events to keen
:return: None
"""
schemas_match = make_sure_keen_schemas_match(source_collection, destination_collection, client)
if not schemas_match:
raise ValueError('The two provided schemas in keen do not match, you will need to do a bit more work.')
events_from_source = extract_events_from_keen(client, source_collection)
for event in events_from_source:
event['keen'].pop('created_at')
event['keen'].pop('id')
if reverse:
remove_events_from_keen(client, destination_collection, events_from_source, dry)
else:
add_events_to_keen(client, destination_collection, events_from_source, dry)
logger.info(
'Transferred {} events from the {} collection to the {} collection'.format(
len(events_from_source),
source_collection,
destination_collection
)
)
def add_events_to_keen(client, collection, events, dry):
logger.info('Adding {} events to the {} collection...'.format(len(events), collection))
if not dry:
client.add_events({collection: events})
def smooth_events_in_keen(client, source_collection, start_date, end_date, dry, reverse):
base_events = extract_events_from_keen(client, source_collection, start_date, end_date)
events_to_fill_in = fill_in_event_gaps(source_collection, base_events)
if reverse:
remove_events_from_keen(client, source_collection, events_to_fill_in, dry)
else:
add_events_to_keen(client, source_collection, events_to_fill_in, dry)
def remove_events_from_keen(client, source_collection, events, dry):
for event in events:
filters = [{'property_name': 'keen.timestamp', 'operator': 'eq', 'property_value': event['keen']['timestamp']}]
# test to see if you get back the correct events from keen
filtered_event = client.extraction(source_collection, filters=filters)
if filtered_event:
filtered_event = filtered_event[0]
filtered_event['keen'].pop('id')
filtered_event['keen'].pop('created_at')
filtered_event['keen']['timestamp'] = filtered_event['keen']['timestamp'][:10] # ends of timestamps differ
event['keen']['timestamp'] = event['keen']['timestamp'][:10]
if event != filtered_event:
logger.error('Filtered event not equal to the event you have gathered, not removing...')
else:
logger.info('About to delete a generated event from the {} collection from the date {}'.format(
source_collection, event['keen']['timestamp']
))
if not dry:
client.delete_events(source_collection, filters=filters)
else:
logger.info('No filtered event found.')
def import_old_events_from_spreadsheet():
home = os.path.expanduser("~")
spreadsheet_path = home + '/daily_user_counts.csv'
key_map = {
'active-users': 'active',
'logs-gte-11-total': 'depth',
'number_users': 'total_users', # really is active - number_users
'number_projects': 'projects.total',
'number_projects_public': 'projects.public',
'number_projects_registered': 'registrations.total',
'Date': 'timestamp',
'dropbox-users-enabled': 'enabled',
'dropbox-users-authorized': 'authorized',
'dropbox-users-linked': 'linked',
'profile-edits': 'profile_edited'
}
with open(spreadsheet_path) as csvfile:
reader = csv.reader(csvfile, delimiter=',')
col_names = reader.next()
dictReader = csv.DictReader(open(spreadsheet_path, 'rb'), fieldnames=col_names, delimiter=',')
events = []
for row in dictReader:
event = {}
for key in row:
equiv_key = key_map.get(key, None)
if equiv_key:
event[equiv_key] = row[key]
events.append(event)
user_summary_cols = ['active', 'depth', 'total_users', 'timestamp', 'profile_edited']
node_summary_cols = ['registrations.total', 'projects.total', 'projects.public', 'timestamp']
addon_summary_cols = ['enabled', 'authorized', 'linked', 'timestamp']
user_events = []
node_events = []
addon_events = []
for event in events[3:]: # The first few rows have blank and/or bad data because they're extra headers
node_event = {}
user_event = {}
addon_event = {}
for key, value in event.iteritems():
if key in node_summary_cols:
node_event[key] = value
if key in user_summary_cols:
user_event[key] = value
if key in addon_summary_cols:
addon_event[key] = value
formatted_user_event = format_event(user_event, analytics_type='user')
formatted_node_event = format_event(node_event, analytics_type='node')
formatted_addon_event = format_event(addon_event, analytics_type='addon')
if formatted_node_event:
node_events.append(formatted_node_event)
if formatted_user_event:
user_events.append(formatted_user_event)
if formatted_addon_event:
addon_events.append(formatted_addon_event)
logger.info(
'Gathered {} old user events, {} old node events and {} old dropbox addon events for keen'.format(
len(user_events),
len(node_events),
len(addon_events)
)
)
return {'user_summary': user_events, 'node_summary': node_events, 'addon_snapshot': addon_events}
def comma_int(value):
if value and value != 'MISSING':
return int(value.replace(',', ''))
def format_event(event, analytics_type):
user_event_template = {
"status": {},
"keen": {}
}
node_event_template = {
"projects": {},
"registered_projects": {},
"keen": {}
}
addon_event_template = {
"keen": {},
"users": {}
}
template_to_use = None
if analytics_type == 'user':
template_to_use = user_event_template
if event['active'] and event['active'] != 'MISSING':
template_to_use['status']['active'] = comma_int(event['active'])
if event['total_users'] and event['active']:
template_to_use['status']['unconfirmed'] = comma_int(event['total_users']) - comma_int(event['active'])
if event['profile_edited']:
template_to_use['status']['profile_edited'] = comma_int(event['profile_edited'])
elif analytics_type == 'node':
template_to_use = node_event_template
if event['projects.total']:
template_to_use['projects']['total'] = comma_int(event['projects.total'])
if event['projects.public']:
template_to_use['projects']['public'] = comma_int(event['projects.public'])
if event['registrations.total']:
template_to_use['registered_projects']['total'] = comma_int(event['registrations.total'])
if event['projects.total'] and event['projects.public']:
template_to_use['projects']['private'] = template_to_use['projects']['total'] - template_to_use['projects']['public']
elif analytics_type == 'addon':
template_to_use = addon_event_template
if event['enabled']:
template_to_use['users']['enabled'] = comma_int(event['enabled'])
if event['authorized']:
template_to_use['users']['authorized'] = comma_int(event['authorized'])
if event['linked']:
template_to_use['users']['linked'] = comma_int(event['linked'])
if event['authorized'] or event['enabled'] or event['linked']:
template_to_use["provider"] = {"name": "dropbox"}
template_to_use['keen']['timestamp'] = parse(event['timestamp']).replace(hour=12, tzinfo=pytz.UTC).isoformat()
template_to_use['imported'] = True
formatted_event = {key: value for key, value in template_to_use.items() if value}
    if len(formatted_event.items()) > 2:  # more than just the keen timestamp and 'imported' flag
return template_to_use
def remove_event_from_keen(client, source_collection, event_id):
filters = [{'property_name': 'keen.id', 'operator': 'eq', 'property_value': event_id}]
client.delete_events(source_collection, filters=filters)
def parse_and_send_old_events_to_keen(client, dry, reverse):
old_events = import_old_events_from_spreadsheet()
for key, value in old_events.iteritems():
if reverse:
remove_events_from_keen(client, key, value, dry)
else:
add_events_to_keen(client, key, value, dry)
def main():
""" Main function for moving around and adjusting analytics gotten from keen and sending them back to keen.
Usage:
* Transfer all events from the 'institution_analytics' to the 'institution_summary' collection:
`python -m scripts.analytics.migrate_analytics -d -t -sc institution_analytics -dc institution_summary`
* Fill in the gaps in analytics for the 'addon_snapshot' collection between 2016-11-01 and 2016-11-15:
`python -m scripts.analytics.migrate_analytics -d -sm -sc addon_snapshot -s 2016-11-01 -e 2016-11-15`
* Reverse the above action by adding -r:
`python -m scripts.analytics.migrate_analytics -d -sm -sc addon_snapshot -s 2016-11-01 -e 2016-11-15 -r`
* Parse old analytics from the old analytics CSV stored on your filesystem:
`python -m scripts.analytics.migrate_analytics -o -d`
"""
args = parse_args()
client = get_keen_client()
dry = args.dry
reverse = args.reverse
if args.remove_event:
remove_event_from_keen(client, args.source_collection, args.remove_event)
if args.smooth_events:
smooth_events_in_keen(client, args.source_collection, parse(args.start_date), parse(args.end_date), dry, reverse)
elif args.transfer_collection:
transfer_events_to_another_collection(client, args.source_collection, args.destination_collection, dry, reverse)
elif args.old_analytics:
parse_and_send_old_events_to_keen(client, dry, reverse)
if __name__ == '__main__':
main()
| apache-2.0 |
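The heart of `fill_in_event_gaps`/`generate_events_between_events` above is cloning the first observed event once for every missing day strictly between two observed dates, tagging each clone as generated. A stripped-down sketch of that loop (dates and plain dicts only, no Keen metadata):

```python
import copy
import datetime

def fill_gap(first_day, last_day, first_event):
    """Clone first_event for each day strictly between the two observed
    dates, marking every clone as generated (mirrors the script's
    generate_events_between_events, minus the Keen payload handling)."""
    generated = []
    day = first_day + datetime.timedelta(days=1)
    while day < last_day:
        event = copy.deepcopy(first_event)
        event["timestamp"] = day.isoformat()
        event["generated"] = True
        generated.append(event)
        day += datetime.timedelta(days=1)
    return generated

events = fill_gap(datetime.date(2016, 11, 1), datetime.date(2016, 11, 5),
                  {"provider": "dropbox"})
print([e["timestamp"] for e in events])
# ['2016-11-02', '2016-11-03', '2016-11-04']
```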
ganeshnalawade/ansible | test/integration/targets/plugin_config_for_inventory/cache_plugins/none.py | 33 | 1506 | # (c) 2014, Brian Coca, Josh Drake, et al
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.cache import BaseCacheModule
DOCUMENTATION = '''
cache: none
short_description: write-only cache (no cache)
description:
- No caching at all
version_added: historical
author: core team (@ansible-core)
options:
_timeout:
default: 86400
description: Expiration timeout for the cache plugin data
env:
- name: ANSIBLE_CACHE_PLUGIN_TIMEOUT
ini:
- key: fact_caching_timeout
section: defaults
type: integer
'''
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
super(CacheModule, self).__init__(*args, **kwargs)
self.empty = {}
self._timeout = self.get_option('_timeout')
def get(self, key):
return self.empty.get(key)
def set(self, key, value):
return value
def keys(self):
return self.empty.keys()
def contains(self, key):
return key in self.empty
def delete(self, key):
        del self.empty[key]
def flush(self):
self.empty = {}
def copy(self):
return self.empty.copy()
def __getstate__(self):
return self.copy()
def __setstate__(self, data):
self.empty = data
| gpl-3.0 |
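The plugin above is a deliberately write-only store: `set` accepts a value but never records it, so `get` always misses and `contains` is always false. A standalone illustration of that contract (plain Python, no Ansible machinery):

```python
class NullCache(object):
    """No-op cache sketch mirroring the semantics of the 'none' plugin."""

    def __init__(self):
        self._data = {}  # stays empty forever

    def set(self, key, value):
        return value  # accepted, silently dropped

    def get(self, key):
        return self._data.get(key)  # always None

    def contains(self, key):
        return key in self._data  # always False

cache = NullCache()
cache.set("facts", {"os": "linux"})
print(cache.get("facts"))       # None
print(cache.contains("facts"))  # False
```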
sangwook236/SWDT | sw_dev/python/rnd/test/probabilistic_graphical_model/pydensecrf/pydensecrf_basic.py | 2 | 5687 | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
# REF [site] >> https://github.com/lucasb-eyer/pydensecrf
import time
import numpy as np
import cv2
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels, create_pairwise_bilateral, create_pairwise_gaussian
# REF [site] >> https://github.com/lucasb-eyer/pydensecrf/blob/master/examples/inference.py
def simple_example():
image_filepath = './driver_license.png'
#image_filepath = './driver_license_20190329.jpg'
#image_filepath = './passport_chaewanlee_20130402.jpg'
#image_filepath = './passport_chaewanlee_20170804.jpg'
#image_filepath = './passport_hyejoongkim_20140508.jpg'
#image_filepath = './passport_jihyunglee_20130402.jpg'
#image_filepath = './passport_jihyunglee_20170804.jpg'
#image_filepath = './passport_malnamkang_1.jpg'
#image_filepath = './passport_malnamkang_2.jpg'
#image_filepath = './passport_sangwooklee_20031211.jpg'
#image_filepath = './passport_sangwooklee_20130402.jpg'
	#image_filepath = './rrn_malnamkang.jpg'
#image_filepath = './rrn_sangwooklee_20190329.jpg'
anno_filepath = image_filepath
img = cv2.imread(image_filepath)
# Convert the annotation's RGB color to a single 32-bit integer color 0xBBGGRR.
anno_rgb = cv2.imread(anno_filepath).astype(np.uint32)
anno_lbl = anno_rgb[:,:,0] + (anno_rgb[:,:,1] << 8) + (anno_rgb[:,:,2] << 16)
# Convert the 32bit integer color to 1, 2, ..., labels.
# Note that all-black, i.e. the value 0 for background will stay 0.
colors, labels = np.unique(anno_lbl, return_inverse=True)
# But remove the all-0 black, that won't exist in the MAP!
HAS_UNK = 0 in colors
if HAS_UNK:
print('Found a full-black pixel in annotation image, assuming it means "unknown" label, and will thus not be present in the output!')
print('If 0 is an actual label for you, consider writing your own code, or simply giving your labels only non-zero values.')
colors = colors[1:]
#else:
# print('No single full-black pixel found in annotation image. Assuming there\'s no "unknown" label!')
# And create a mapping back from the labels to 32bit integer colors.
colorize = np.empty((len(colors), 3), np.uint8)
colorize[:,0] = (colors & 0x0000FF)
colorize[:,1] = (colors & 0x00FF00) >> 8
colorize[:,2] = (colors & 0xFF0000) >> 16
# Compute the number of classes in the label image.
# We subtract one because the number shouldn't include the value 0 which stands for "unknown" or "unsure".
n_labels = len(set(labels.flat)) - int(HAS_UNK)
#print(n_labels, 'labels', ('plus "unknown" 0: ' if HAS_UNK else ''), set(labels.flat))
print(n_labels, 'labels', ('plus "unknown" 0: ' if HAS_UNK else ''))
#--------------------
# Setup the CRF model.
use_2d = False
#use_2d = True
print('Start building a CRF model...')
start_time = time.time()
if use_2d:
print('Using 2D specialized functions.')
# Example using the DenseCRF2D code.
d = dcrf.DenseCRF2D(img.shape[1], img.shape[0], n_labels)
# Get unary potentials (neg log probability).
U = unary_from_labels(labels, n_labels, gt_prob=0.7, zero_unsure=HAS_UNK)
d.setUnaryEnergy(U)
# This adds the color-independent term, features are the locations only.
d.addPairwiseGaussian(sxy=(3, 3), compat=3,
kernel=dcrf.DIAG_KERNEL,
normalization=dcrf.NORMALIZE_SYMMETRIC)
# This adds the color-dependent term, i.e. features are (x, y, r, g, b).
d.addPairwiseBilateral(sxy=(80, 80), srgb=(13, 13, 13), rgbim=img, compat=10,
kernel=dcrf.DIAG_KERNEL,
normalization=dcrf.NORMALIZE_SYMMETRIC)
else:
print('Using generic 2D functions.')
# Example using the DenseCRF class and the util functions.
d = dcrf.DenseCRF(img.shape[1] * img.shape[0], n_labels)
# Get unary potentials (neg log probability).
U = unary_from_labels(labels, n_labels, gt_prob=0.7, zero_unsure=HAS_UNK)
d.setUnaryEnergy(U)
# This creates the color-independent features and then add them to the CRF.
feats = create_pairwise_gaussian(sdims=(3, 3), shape=img.shape[:2])
d.addPairwiseEnergy(feats, compat=3,
kernel=dcrf.DIAG_KERNEL,
normalization=dcrf.NORMALIZE_SYMMETRIC)
# This creates the color-dependent features and then add them to the CRF.
feats = create_pairwise_bilateral(sdims=(80, 80), schan=(13, 13, 13), img=img, chdim=2)
d.addPairwiseEnergy(feats, compat=10,
kernel=dcrf.DIAG_KERNEL,
normalization=dcrf.NORMALIZE_SYMMETRIC)
print('End building a CRF model: {} secs.'.format(time.time() - start_time))
#--------------------
# Do inference and compute MAP.
print('Start inferring by the CRF model...')
start_time = time.time()
# Run five inference steps.
Q = d.inference(5)
# Find out the most probable class for each pixel.
MAP = np.argmax(Q, axis=0)
print('End inferring by the CRF model: {} secs.'.format(time.time() - start_time))
# Convert the MAP (labels) back to the corresponding colors and save the image.
# Note that there is no "unknown" here anymore, no matter what we had at first.
MAP = colorize[MAP,:]
MAP = MAP.reshape(img.shape)
#cv2.imwrite(output_filepath, MAP)
cv2.imshow('Inference result', MAP)
cv2.waitKey(0)
#--------------------
# Just randomly manually run inference iterations.
print('Start manually inferring by the CRF model...')
start_time = time.time()
Q, tmp1, tmp2 = d.startInference()
for i in range(5):
print('KL-divergence at {}: {}.'.format(i, d.klDivergence(Q)))
d.stepInference(Q, tmp1, tmp2)
print('End manually inferring by the CRF model: {} secs.'.format(time.time() - start_time))
def main():
simple_example()
#--------------------------------------------------------------------
if '__main__' == __name__:
main()
| gpl-3.0 |
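The label-preparation trick at the top of `simple_example` — packing the three 8-bit channels into one 32-bit integer so `np.unique` can enumerate distinct colors and assign per-pixel labels — works on any small array. A minimal demonstration assuming only NumPy (no OpenCV or pydensecrf), with a made-up 2x2 annotation:

```python
import numpy as np

# Two distinct "colors" plus black (0) in a tiny 2x2 annotation image.
anno = np.array([[[0, 0, 0],   [255, 0, 0]],
                 [[255, 0, 0], [0, 255, 0]]], dtype=np.uint32)

# Pack channel0 | channel1 << 8 | channel2 << 16, as in the example above.
packed = anno[:, :, 0] + (anno[:, :, 1] << 8) + (anno[:, :, 2] << 16)

# Unique packed colors (sorted) and per-pixel indices into that palette.
colors, labels = np.unique(packed, return_inverse=True)
print(colors.tolist())               # [0, 255, 65280]
print(labels.reshape(2, 2).tolist()) # [[0, 1], [1, 2]]
print(0 in colors)                   # True -> background/"unknown" present
```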
bussiere/pypyjs | website/demo/home/rfk/repos/pypy/lib-python/2.7/idlelib/PyShell.py | 18 | 52618 | #! /usr/bin/env python
import os
import os.path
import sys
import string
import getopt
import re
import socket
import time
import threading
import traceback
import types
import linecache
from code import InteractiveInterpreter
try:
from Tkinter import *
except ImportError:
print>>sys.__stderr__, "** IDLE can't import Tkinter. " \
"Your Python may not be configured for Tk. **"
sys.exit(1)
import tkMessageBox
from idlelib.EditorWindow import EditorWindow, fixwordbreaks
from idlelib.FileList import FileList
from idlelib.ColorDelegator import ColorDelegator
from idlelib.UndoDelegator import UndoDelegator
from idlelib.OutputWindow import OutputWindow
from idlelib.configHandler import idleConf
from idlelib import idlever
from idlelib import rpc
from idlelib import Debugger
from idlelib import RemoteDebugger
from idlelib import macosxSupport
IDENTCHARS = string.ascii_letters + string.digits + "_"
HOST = '127.0.0.1' # python execution server on localhost loopback
PORT = 0 # someday pass in host, port for remote debug capability
try:
from signal import SIGTERM
except ImportError:
SIGTERM = 15
# Override warnings module to write to warning_stream. Initialize to send IDLE
# internal warnings to the console. ScriptBinding.check_syntax() will
# temporarily redirect the stream to the shell window to display warnings when
# checking user's code.
global warning_stream
warning_stream = sys.__stderr__
try:
import warnings
except ImportError:
pass
else:
def idle_showwarning(message, category, filename, lineno,
file=None, line=None):
if file is None:
file = warning_stream
try:
file.write(warnings.formatwarning(message, category, filename,
lineno, line=line))
except IOError:
pass ## file (probably __stderr__) is invalid, warning dropped.
warnings.showwarning = idle_showwarning
def idle_formatwarning(message, category, filename, lineno, line=None):
"""Format warnings the IDLE way"""
s = "\nWarning (from warnings module):\n"
s += ' File \"%s\", line %s\n' % (filename, lineno)
if line is None:
line = linecache.getline(filename, lineno)
line = line.strip()
if line:
s += " %s\n" % line
s += "%s: %s\n>>> " % (category.__name__, message)
return s
warnings.formatwarning = idle_formatwarning
def extended_linecache_checkcache(filename=None,
orig_checkcache=linecache.checkcache):
"""Extend linecache.checkcache to preserve the <pyshell#...> entries
Rather than repeating the linecache code, patch it to save the
<pyshell#...> entries, call the original linecache.checkcache()
(skipping them), and then restore the saved entries.
orig_checkcache is bound at definition time to the original
method, allowing it to be patched.
"""
cache = linecache.cache
save = {}
for key in list(cache):
if key[:1] + key[-1:] == '<>':
save[key] = cache.pop(key)
orig_checkcache(filename)
cache.update(save)
# Patch linecache.checkcache():
linecache.checkcache = extended_linecache_checkcache
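# Illustrative sketch (not part of IDLE): with the patch installed, keys
# shaped like '<pyshell#0>' survive a checkcache() call, while stale
# real-file entries are still pruned as usual:
#   linecache.cache['<pyshell#0>'] = (4, 0, ['x=1'], '<pyshell#0>')
#   linecache.checkcache()
#   assert '<pyshell#0>' in linecache.cache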
class PyShellEditorWindow(EditorWindow):
"Regular text edit window in IDLE, supports breakpoints"
def __init__(self, *args):
self.breakpoints = []
EditorWindow.__init__(self, *args)
self.text.bind("<<set-breakpoint-here>>", self.set_breakpoint_here)
self.text.bind("<<clear-breakpoint-here>>", self.clear_breakpoint_here)
self.text.bind("<<open-python-shell>>", self.flist.open_shell)
self.breakpointPath = os.path.join(idleConf.GetUserCfgDir(),
'breakpoints.lst')
# whenever a file is changed, restore breakpoints
if self.io.filename: self.restore_file_breaks()
def filename_changed_hook(old_hook=self.io.filename_change_hook,
self=self):
self.restore_file_breaks()
old_hook()
self.io.set_filename_change_hook(filename_changed_hook)
rmenu_specs = [("Set Breakpoint", "<<set-breakpoint-here>>"),
("Clear Breakpoint", "<<clear-breakpoint-here>>")]
def set_breakpoint(self, lineno):
text = self.text
filename = self.io.filename
text.tag_add("BREAK", "%d.0" % lineno, "%d.0" % (lineno+1))
try:
i = self.breakpoints.index(lineno)
except ValueError: # only add if missing, i.e. do once
self.breakpoints.append(lineno)
try: # update the subprocess debugger
debug = self.flist.pyshell.interp.debugger
debug.set_breakpoint_here(filename, lineno)
except: # but debugger may not be active right now....
pass
def set_breakpoint_here(self, event=None):
text = self.text
filename = self.io.filename
if not filename:
text.bell()
return
lineno = int(float(text.index("insert")))
self.set_breakpoint(lineno)
def clear_breakpoint_here(self, event=None):
text = self.text
filename = self.io.filename
if not filename:
text.bell()
return
lineno = int(float(text.index("insert")))
try:
self.breakpoints.remove(lineno)
except:
pass
text.tag_remove("BREAK", "insert linestart",\
"insert lineend +1char")
try:
debug = self.flist.pyshell.interp.debugger
debug.clear_breakpoint_here(filename, lineno)
except:
pass
def clear_file_breaks(self):
if self.breakpoints:
text = self.text
filename = self.io.filename
if not filename:
text.bell()
return
self.breakpoints = []
text.tag_remove("BREAK", "1.0", END)
try:
debug = self.flist.pyshell.interp.debugger
debug.clear_file_breaks(filename)
except:
pass
def store_file_breaks(self):
"Save breakpoints when file is saved"
# XXX 13 Dec 2002 KBK Currently the file must be saved before it can
# be run. The breaks are saved at that time. If we introduce
# a temporary file save feature the save breaks functionality
# needs to be re-verified, since the breaks at the time the
# temp file is created may differ from the breaks at the last
# permanent save of the file. Currently, a break introduced
# after a save will be effective, but not persistent.
# This is necessary to keep the saved breaks synched with the
# saved file.
#
# Breakpoints are set as tagged ranges in the text. Certain
# kinds of edits cause these ranges to be deleted: Inserting
# or deleting a line just before a breakpoint, and certain
# deletions prior to a breakpoint. These issues need to be
# investigated and understood. It's not clear if they are
# Tk issues or IDLE issues, or whether they can actually
# be fixed. Since a modified file has to be saved before it is
# run, and since self.breakpoints (from which the subprocess
# debugger is loaded) is updated during the save, the visible
# breaks stay synched with the subprocess even if one of these
# unexpected breakpoint deletions occurs.
breaks = self.breakpoints
filename = self.io.filename
try:
with open(self.breakpointPath,"r") as old_file:
lines = old_file.readlines()
except IOError:
lines = []
try:
with open(self.breakpointPath,"w") as new_file:
for line in lines:
if not line.startswith(filename + '='):
new_file.write(line)
self.update_breakpoints()
breaks = self.breakpoints
if breaks:
new_file.write(filename + '=' + str(breaks) + '\n')
except IOError as err:
if not getattr(self.root, "breakpoint_error_displayed", False):
self.root.breakpoint_error_displayed = True
tkMessageBox.showerror(title='IDLE Error',
message='Unable to update breakpoint list:\n%s'
% str(err),
parent=self.text)
def restore_file_breaks(self):
        self.text.update()  # process pending events so "BREAK" tags become visible
filename = self.io.filename
if filename is None:
return
if os.path.isfile(self.breakpointPath):
            with open(self.breakpointPath, "r") as fp:
                lines = fp.readlines()
for line in lines:
if line.startswith(filename + '='):
breakpoint_linenumbers = eval(line[len(filename)+1:])
for breakpoint_linenumber in breakpoint_linenumbers:
self.set_breakpoint(breakpoint_linenumber)
def update_breakpoints(self):
"Retrieves all the breakpoints in the current window"
text = self.text
ranges = text.tag_ranges("BREAK")
linenumber_list = self.ranges_to_linenumbers(ranges)
self.breakpoints = linenumber_list
def ranges_to_linenumbers(self, ranges):
lines = []
for index in range(0, len(ranges), 2):
lineno = int(float(ranges[index]))
end = int(float(ranges[index+1]))
while lineno < end:
lines.append(lineno)
lineno += 1
return lines
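    # Illustrative example (assumed values, not part of IDLE): Tk returns tag
    # ranges as a flat sequence of start/end index pairs, so a "BREAK" tag
    # spanning lines 2-3 yields ranges like ("2.0", "4.0"), which the loop
    # above converts to the line-number list [2, 3].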
# XXX 13 Dec 2002 KBK Not used currently
# def saved_change_hook(self):
# "Extend base method - clear breaks if module is modified"
# if not self.get_saved():
# self.clear_file_breaks()
# EditorWindow.saved_change_hook(self)
def _close(self):
"Extend base method - clear breaks when module is closed"
self.clear_file_breaks()
EditorWindow._close(self)
class PyShellFileList(FileList):
"Extend base class: IDLE supports a shell and breakpoints"
# override FileList's class variable, instances return PyShellEditorWindow
# instead of EditorWindow when new edit windows are created.
EditorWindow = PyShellEditorWindow
pyshell = None
def open_shell(self, event=None):
if self.pyshell:
self.pyshell.top.wakeup()
else:
self.pyshell = PyShell(self)
if self.pyshell:
if not self.pyshell.begin():
return None
return self.pyshell
class ModifiedColorDelegator(ColorDelegator):
"Extend base class: colorizer for the shell window itself"
def __init__(self):
ColorDelegator.__init__(self)
self.LoadTagDefs()
def recolorize_main(self):
self.tag_remove("TODO", "1.0", "iomark")
self.tag_add("SYNC", "1.0", "iomark")
ColorDelegator.recolorize_main(self)
def LoadTagDefs(self):
ColorDelegator.LoadTagDefs(self)
theme = idleConf.GetOption('main','Theme','name')
self.tagdefs.update({
"stdin": {'background':None,'foreground':None},
"stdout": idleConf.GetHighlight(theme, "stdout"),
"stderr": idleConf.GetHighlight(theme, "stderr"),
"console": idleConf.GetHighlight(theme, "console"),
})
class ModifiedUndoDelegator(UndoDelegator):
"Extend base class: forbid insert/delete before the I/O mark"
def insert(self, index, chars, tags=None):
try:
if self.delegate.compare(index, "<", "iomark"):
self.delegate.bell()
return
except TclError:
pass
UndoDelegator.insert(self, index, chars, tags)
def delete(self, index1, index2=None):
try:
if self.delegate.compare(index1, "<", "iomark"):
self.delegate.bell()
return
except TclError:
pass
UndoDelegator.delete(self, index1, index2)
class MyRPCClient(rpc.RPCClient):
def handle_EOF(self):
"Override the base class - just re-raise EOFError"
raise EOFError
class ModifiedInterpreter(InteractiveInterpreter):
def __init__(self, tkconsole):
self.tkconsole = tkconsole
locals = sys.modules['__main__'].__dict__
InteractiveInterpreter.__init__(self, locals=locals)
self.save_warnings_filters = None
self.restarting = False
self.subprocess_arglist = None
self.port = PORT
self.original_compiler_flags = self.compile.compiler.flags
rpcclt = None
rpcpid = None
def spawn_subprocess(self):
if self.subprocess_arglist is None:
self.subprocess_arglist = self.build_subprocess_arglist()
args = self.subprocess_arglist
self.rpcpid = os.spawnv(os.P_NOWAIT, sys.executable, args)
def build_subprocess_arglist(self):
assert (self.port!=0), (
"Socket should have been assigned a port number.")
w = ['-W' + s for s in sys.warnoptions]
if 1/2 > 0: # account for new division
w.append('-Qnew')
# Maybe IDLE is installed and is being accessed via sys.path,
# or maybe it's not installed and the idle.py script is being
# run from the IDLE source directory.
del_exitf = idleConf.GetOption('main', 'General', 'delete-exitfunc',
default=False, type='bool')
if __name__ == 'idlelib.PyShell':
command = "__import__('idlelib.run').run.main(%r)" % (del_exitf,)
else:
command = "__import__('run').main(%r)" % (del_exitf,)
if sys.platform[:3] == 'win' and ' ' in sys.executable:
# handle embedded space in path by quoting the argument
decorated_exec = '"%s"' % sys.executable
else:
decorated_exec = sys.executable
return [decorated_exec] + w + ["-c", command, str(self.port)]
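    # Illustrative example (hypothetical interpreter path and port): with
    # port 8833 and no warning options, the returned argv resembles
    #   ['/usr/bin/python', '-c',
    #    "__import__('idlelib.run').run.main(False)", '8833']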
def start_subprocess(self):
addr = (HOST, self.port)
# GUI makes several attempts to acquire socket, listens for connection
for i in range(3):
time.sleep(i)
try:
self.rpcclt = MyRPCClient(addr)
break
            except socket.error:
pass
else:
self.display_port_binding_error()
return None
# if PORT was 0, system will assign an 'ephemeral' port. Find it out:
self.port = self.rpcclt.listening_sock.getsockname()[1]
# if PORT was not 0, probably working with a remote execution server
if PORT != 0:
# To allow reconnection within the 2MSL wait (cf. Stevens TCP
# V1, 18.6), set SO_REUSEADDR. Note that this can be problematic
# on Windows since the implementation allows two active sockets on
# the same address!
self.rpcclt.listening_sock.setsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR, 1)
self.spawn_subprocess()
#time.sleep(20) # test to simulate GUI not accepting connection
# Accept the connection from the Python execution server
self.rpcclt.listening_sock.settimeout(10)
try:
self.rpcclt.accept()
        except socket.timeout:
self.display_no_subprocess_error()
return None
self.rpcclt.register("stdin", self.tkconsole)
self.rpcclt.register("stdout", self.tkconsole.stdout)
self.rpcclt.register("stderr", self.tkconsole.stderr)
self.rpcclt.register("flist", self.tkconsole.flist)
self.rpcclt.register("linecache", linecache)
self.rpcclt.register("interp", self)
self.transfer_path(with_cwd=True)
self.poll_subprocess()
return self.rpcclt
def restart_subprocess(self, with_cwd=False):
if self.restarting:
return self.rpcclt
self.restarting = True
# close only the subprocess debugger
debug = self.getdebugger()
if debug:
try:
# Only close subprocess debugger, don't unregister gui_adap!
RemoteDebugger.close_subprocess_debugger(self.rpcclt)
except:
pass
# Kill subprocess, spawn a new one, accept connection.
self.rpcclt.close()
self.unix_terminate()
console = self.tkconsole
was_executing = console.executing
console.executing = False
self.spawn_subprocess()
try:
self.rpcclt.accept()
        except socket.timeout:
self.display_no_subprocess_error()
return None
self.transfer_path(with_cwd=with_cwd)
# annotate restart in shell window and mark it
console.text.delete("iomark", "end-1c")
if was_executing:
console.write('\n')
console.showprompt()
halfbar = ((int(console.width) - 16) // 2) * '='
console.write(halfbar + ' RESTART ' + halfbar)
console.text.mark_set("restart", "end-1c")
console.text.mark_gravity("restart", "left")
console.showprompt()
# restart subprocess debugger
if debug:
# Restarted debugger connects to current instance of debug GUI
            RemoteDebugger.restart_subprocess_debugger(self.rpcclt)
# reload remote debugger breakpoints for all PyShellEditWindows
debug.load_breakpoints()
self.compile.compiler.flags = self.original_compiler_flags
self.restarting = False
return self.rpcclt
def __request_interrupt(self):
self.rpcclt.remotecall("exec", "interrupt_the_server", (), {})
def interrupt_subprocess(self):
threading.Thread(target=self.__request_interrupt).start()
def kill_subprocess(self):
try:
self.rpcclt.close()
except AttributeError: # no socket
pass
self.unix_terminate()
self.tkconsole.executing = False
self.rpcclt = None
def unix_terminate(self):
"UNIX: make sure subprocess is terminated and collect status"
if hasattr(os, 'kill'):
try:
os.kill(self.rpcpid, SIGTERM)
except OSError:
# process already terminated:
return
else:
try:
os.waitpid(self.rpcpid, 0)
except OSError:
return
def transfer_path(self, with_cwd=False):
if with_cwd: # Issue 13506
path = [''] # include Current Working Directory
path.extend(sys.path)
else:
path = sys.path
self.runcommand("""if 1:
import sys as _sys
_sys.path = %r
del _sys
\n""" % (path,))
active_seq = None
def poll_subprocess(self):
clt = self.rpcclt
if clt is None:
return
try:
response = clt.pollresponse(self.active_seq, wait=0.05)
except (EOFError, IOError, KeyboardInterrupt):
# lost connection or subprocess terminated itself, restart
# [the KBI is from rpc.SocketIO.handle_EOF()]
if self.tkconsole.closing:
return
response = None
self.restart_subprocess()
if response:
self.tkconsole.resetoutput()
self.active_seq = None
how, what = response
console = self.tkconsole.console
if how == "OK":
if what is not None:
print >>console, repr(what)
elif how == "EXCEPTION":
if self.tkconsole.getvar("<<toggle-jit-stack-viewer>>"):
self.remote_stack_viewer()
elif how == "ERROR":
errmsg = "PyShell.ModifiedInterpreter: Subprocess ERROR:\n"
print >>sys.__stderr__, errmsg, what
print >>console, errmsg, what
# we received a response to the currently active seq number:
try:
self.tkconsole.endexecuting()
except AttributeError: # shell may have closed
pass
# Reschedule myself
if not self.tkconsole.closing:
self.tkconsole.text.after(self.tkconsole.pollinterval,
self.poll_subprocess)
debugger = None
def setdebugger(self, debugger):
self.debugger = debugger
def getdebugger(self):
return self.debugger
def open_remote_stack_viewer(self):
"""Initiate the remote stack viewer from a separate thread.
This method is called from the subprocess, and by returning from this
method we allow the subprocess to unblock. After a bit the shell
requests the subprocess to open the remote stack viewer which returns a
static object looking at the last exception. It is queried through
the RPC mechanism.
"""
self.tkconsole.text.after(300, self.remote_stack_viewer)
return
def remote_stack_viewer(self):
from idlelib import RemoteObjectBrowser
oid = self.rpcclt.remotequeue("exec", "stackviewer", ("flist",), {})
if oid is None:
self.tkconsole.root.bell()
return
item = RemoteObjectBrowser.StubObjectTreeItem(self.rpcclt, oid)
from idlelib.TreeWidget import ScrolledCanvas, TreeNode
top = Toplevel(self.tkconsole.root)
theme = idleConf.GetOption('main','Theme','name')
background = idleConf.GetHighlight(theme, 'normal')['background']
sc = ScrolledCanvas(top, bg=background, highlightthickness=0)
sc.frame.pack(expand=1, fill="both")
node = TreeNode(sc.canvas, None, item)
node.expand()
# XXX Should GC the remote tree when closing the window
gid = 0
def execsource(self, source):
"Like runsource() but assumes complete exec source"
filename = self.stuffsource(source)
self.execfile(filename, source)
def execfile(self, filename, source=None):
"Execute an existing file"
if source is None:
            with open(filename, "r") as fp:
                source = fp.read()
try:
code = compile(source, filename, "exec")
except (OverflowError, SyntaxError):
self.tkconsole.resetoutput()
tkerr = self.tkconsole.stderr
print>>tkerr, '*** Error in script or command!\n'
print>>tkerr, 'Traceback (most recent call last):'
InteractiveInterpreter.showsyntaxerror(self, filename)
self.tkconsole.showprompt()
else:
self.runcode(code)
def runsource(self, source):
"Extend base class method: Stuff the source in the line cache first"
filename = self.stuffsource(source)
self.more = 0
self.save_warnings_filters = warnings.filters[:]
warnings.filterwarnings(action="error", category=SyntaxWarning)
if isinstance(source, types.UnicodeType):
from idlelib import IOBinding
try:
source = source.encode(IOBinding.encoding)
except UnicodeError:
self.tkconsole.resetoutput()
self.write("Unsupported characters in input\n")
return
try:
# InteractiveInterpreter.runsource() calls its runcode() method,
# which is overridden (see below)
return InteractiveInterpreter.runsource(self, source, filename)
finally:
if self.save_warnings_filters is not None:
warnings.filters[:] = self.save_warnings_filters
self.save_warnings_filters = None
def stuffsource(self, source):
"Stuff source in the filename cache"
filename = "<pyshell#%d>" % self.gid
self.gid = self.gid + 1
lines = source.split("\n")
linecache.cache[filename] = len(source)+1, 0, lines, filename
return filename
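    # Illustrative note: the cache entry above mirrors linecache's
    # (size, mtime, lines, fullname) tuple shape; e.g. for source "x=1\n":
    #   linecache.cache['<pyshell#0>'] == (5, 0, ['x=1', ''], '<pyshell#0>')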
def prepend_syspath(self, filename):
"Prepend sys.path with file's directory if not already included"
self.runcommand("""if 1:
_filename = %r
import sys as _sys
from os.path import dirname as _dirname
_dir = _dirname(_filename)
if not _dir in _sys.path:
_sys.path.insert(0, _dir)
del _filename, _sys, _dirname, _dir
\n""" % (filename,))
def showsyntaxerror(self, filename=None):
"""Extend base class method: Add Colorizing
Color the offending position instead of printing it and pointing at it
with a caret.
"""
text = self.tkconsole.text
stuff = self.unpackerror()
if stuff:
msg, lineno, offset, line = stuff
if lineno == 1:
pos = "iomark + %d chars" % (offset-1)
else:
pos = "iomark linestart + %d lines + %d chars" % \
(lineno-1, offset-1)
text.tag_add("ERROR", pos)
text.see(pos)
char = text.get(pos)
if char and char in IDENTCHARS:
text.tag_add("ERROR", pos + " wordstart", pos)
self.tkconsole.resetoutput()
self.write("SyntaxError: %s\n" % str(msg))
else:
self.tkconsole.resetoutput()
InteractiveInterpreter.showsyntaxerror(self, filename)
self.tkconsole.showprompt()
def unpackerror(self):
type, value, tb = sys.exc_info()
ok = type is SyntaxError
if ok:
try:
msg, (dummy_filename, lineno, offset, line) = value
if not offset:
offset = 0
except:
ok = 0
if ok:
return msg, lineno, offset, line
else:
return None
def showtraceback(self):
"Extend base class method to reset output properly"
self.tkconsole.resetoutput()
self.checklinecache()
InteractiveInterpreter.showtraceback(self)
if self.tkconsole.getvar("<<toggle-jit-stack-viewer>>"):
self.tkconsole.open_stack_viewer()
def checklinecache(self):
c = linecache.cache
for key in c.keys():
if key[:1] + key[-1:] != "<>":
del c[key]
def runcommand(self, code):
"Run the code without invoking the debugger"
# The code better not raise an exception!
if self.tkconsole.executing:
self.display_executing_dialog()
return 0
if self.rpcclt:
self.rpcclt.remotequeue("exec", "runcode", (code,), {})
else:
exec code in self.locals
return 1
def runcode(self, code):
"Override base class method"
if self.tkconsole.executing:
            self.restart_subprocess()  # self is the interpreter; it has no .interp
self.checklinecache()
if self.save_warnings_filters is not None:
warnings.filters[:] = self.save_warnings_filters
self.save_warnings_filters = None
debugger = self.debugger
try:
self.tkconsole.beginexecuting()
if not debugger and self.rpcclt is not None:
self.active_seq = self.rpcclt.asyncqueue("exec", "runcode",
(code,), {})
elif debugger:
debugger.run(code, self.locals)
else:
exec code in self.locals
except SystemExit:
if not self.tkconsole.closing:
if tkMessageBox.askyesno(
"Exit?",
"Do you want to exit altogether?",
default="yes",
master=self.tkconsole.text):
raise
else:
self.showtraceback()
else:
raise
except:
if use_subprocess:
print >>self.tkconsole.stderr, \
"IDLE internal error in runcode()"
self.showtraceback()
self.tkconsole.endexecuting()
else:
if self.tkconsole.canceled:
self.tkconsole.canceled = False
print >>self.tkconsole.stderr, "KeyboardInterrupt"
else:
self.showtraceback()
finally:
if not use_subprocess:
try:
self.tkconsole.endexecuting()
except AttributeError: # shell may have closed
pass
def write(self, s):
"Override base class method"
self.tkconsole.stderr.write(s)
def display_port_binding_error(self):
tkMessageBox.showerror(
"Port Binding Error",
"IDLE can't bind to a TCP/IP port, which is necessary to "
"communicate with its Python execution server. This might be "
"because no networking is installed on this computer. "
"Run IDLE with the -n command line switch to start without a "
"subprocess and refer to Help/IDLE Help 'Running without a "
"subprocess' for further details.",
master=self.tkconsole.text)
def display_no_subprocess_error(self):
tkMessageBox.showerror(
"Subprocess Startup Error",
"IDLE's subprocess didn't make connection. Either IDLE can't "
"start a subprocess or personal firewall software is blocking "
"the connection.",
master=self.tkconsole.text)
def display_executing_dialog(self):
tkMessageBox.showerror(
"Already executing",
"The Python Shell window is already executing a command; "
"please wait until it is finished.",
master=self.tkconsole.text)
class PyShell(OutputWindow):
shell_title = "Python Shell"
# Override classes
ColorDelegator = ModifiedColorDelegator
UndoDelegator = ModifiedUndoDelegator
# Override menus
menu_specs = [
("file", "_File"),
("edit", "_Edit"),
("debug", "_Debug"),
("options", "_Options"),
("windows", "_Windows"),
("help", "_Help"),
]
if macosxSupport.runningAsOSXApp():
del menu_specs[-3]
menu_specs[-2] = ("windows", "_Window")
# New classes
from idlelib.IdleHistory import History
def __init__(self, flist=None):
if use_subprocess:
ms = self.menu_specs
if ms[2][0] != "shell":
ms.insert(2, ("shell", "She_ll"))
self.interp = ModifiedInterpreter(self)
if flist is None:
root = Tk()
fixwordbreaks(root)
root.withdraw()
flist = PyShellFileList(root)
#
OutputWindow.__init__(self, flist, None, None)
#
## self.config(usetabs=1, indentwidth=8, context_use_ps1=1)
self.usetabs = True
# indentwidth must be 8 when using tabs. See note in EditorWindow:
self.indentwidth = 8
self.context_use_ps1 = True
#
text = self.text
text.configure(wrap="char")
text.bind("<<newline-and-indent>>", self.enter_callback)
text.bind("<<plain-newline-and-indent>>", self.linefeed_callback)
text.bind("<<interrupt-execution>>", self.cancel_callback)
text.bind("<<end-of-file>>", self.eof_callback)
text.bind("<<open-stack-viewer>>", self.open_stack_viewer)
text.bind("<<toggle-debugger>>", self.toggle_debugger)
text.bind("<<toggle-jit-stack-viewer>>", self.toggle_jit_stack_viewer)
if use_subprocess:
text.bind("<<view-restart>>", self.view_restart_mark)
text.bind("<<restart-shell>>", self.restart_shell)
#
self.save_stdout = sys.stdout
self.save_stderr = sys.stderr
self.save_stdin = sys.stdin
from idlelib import IOBinding
self.stdout = PseudoFile(self, "stdout", IOBinding.encoding)
self.stderr = PseudoFile(self, "stderr", IOBinding.encoding)
self.console = PseudoFile(self, "console", IOBinding.encoding)
if not use_subprocess:
sys.stdout = self.stdout
sys.stderr = self.stderr
sys.stdin = self
#
self.history = self.History(self.text)
#
self.pollinterval = 50 # millisec
def get_standard_extension_names(self):
return idleConf.GetExtensions(shell_only=True)
reading = False
executing = False
canceled = False
endoffile = False
closing = False
def set_warning_stream(self, stream):
global warning_stream
warning_stream = stream
def get_warning_stream(self):
return warning_stream
def toggle_debugger(self, event=None):
if self.executing:
tkMessageBox.showerror("Don't debug now",
"You can only toggle the debugger when idle",
master=self.text)
self.set_debugger_indicator()
return "break"
else:
db = self.interp.getdebugger()
if db:
self.close_debugger()
else:
self.open_debugger()
def set_debugger_indicator(self):
db = self.interp.getdebugger()
self.setvar("<<toggle-debugger>>", not not db)
def toggle_jit_stack_viewer(self, event=None):
pass # All we need is the variable
def close_debugger(self):
db = self.interp.getdebugger()
if db:
self.interp.setdebugger(None)
db.close()
if self.interp.rpcclt:
RemoteDebugger.close_remote_debugger(self.interp.rpcclt)
self.resetoutput()
self.console.write("[DEBUG OFF]\n")
sys.ps1 = ">>> "
self.showprompt()
self.set_debugger_indicator()
def open_debugger(self):
if self.interp.rpcclt:
dbg_gui = RemoteDebugger.start_remote_debugger(self.interp.rpcclt,
self)
else:
dbg_gui = Debugger.Debugger(self)
self.interp.setdebugger(dbg_gui)
dbg_gui.load_breakpoints()
sys.ps1 = "[DEBUG ON]\n>>> "
self.showprompt()
self.set_debugger_indicator()
def beginexecuting(self):
"Helper for ModifiedInterpreter"
self.resetoutput()
self.executing = 1
def endexecuting(self):
"Helper for ModifiedInterpreter"
self.executing = 0
self.canceled = 0
self.showprompt()
def close(self):
"Extend EditorWindow.close()"
if self.executing:
response = tkMessageBox.askokcancel(
"Kill?",
"The program is still running!\n Do you want to kill it?",
default="ok",
parent=self.text)
if response is False:
return "cancel"
if self.reading:
self.top.quit()
self.canceled = True
self.closing = True
# Wait for poll_subprocess() rescheduling to stop
self.text.after(2 * self.pollinterval, self.close2)
def close2(self):
return EditorWindow.close(self)
def _close(self):
"Extend EditorWindow._close(), shut down debugger and execution server"
self.close_debugger()
if use_subprocess:
self.interp.kill_subprocess()
# Restore std streams
sys.stdout = self.save_stdout
sys.stderr = self.save_stderr
sys.stdin = self.save_stdin
# Break cycles
self.interp = None
self.console = None
self.flist.pyshell = None
self.history = None
EditorWindow._close(self)
def ispythonsource(self, filename):
"Override EditorWindow method: never remove the colorizer"
return True
def short_title(self):
return self.shell_title
COPYRIGHT = \
'Type "copyright", "credits" or "license()" for more information.'
def begin(self):
self.resetoutput()
if use_subprocess:
nosub = ''
client = self.interp.start_subprocess()
if not client:
self.close()
return False
else:
nosub = "==== No Subprocess ===="
self.write("Python %s on %s\n%s\n%s" %
(sys.version, sys.platform, self.COPYRIGHT, nosub))
self.showprompt()
import Tkinter
Tkinter._default_root = None # 03Jan04 KBK What's this?
return True
def readline(self):
save = self.reading
try:
self.reading = 1
self.top.mainloop() # nested mainloop()
finally:
self.reading = save
line = self.text.get("iomark", "end-1c")
if len(line) == 0: # may be EOF if we quit our mainloop with Ctrl-C
line = "\n"
if isinstance(line, unicode):
from idlelib import IOBinding
try:
line = line.encode(IOBinding.encoding)
except UnicodeError:
pass
self.resetoutput()
if self.canceled:
self.canceled = 0
if not use_subprocess:
raise KeyboardInterrupt
if self.endoffile:
self.endoffile = 0
line = ""
return line
def isatty(self):
return True
def cancel_callback(self, event=None):
try:
if self.text.compare("sel.first", "!=", "sel.last"):
return # Active selection -- always use default binding
except:
pass
if not (self.executing or self.reading):
self.resetoutput()
self.interp.write("KeyboardInterrupt\n")
self.showprompt()
return "break"
self.endoffile = 0
self.canceled = 1
if (self.executing and self.interp.rpcclt):
if self.interp.getdebugger():
self.interp.restart_subprocess()
else:
self.interp.interrupt_subprocess()
if self.reading:
self.top.quit() # exit the nested mainloop() in readline()
return "break"
def eof_callback(self, event):
if self.executing and not self.reading:
return # Let the default binding (delete next char) take over
if not (self.text.compare("iomark", "==", "insert") and
self.text.compare("insert", "==", "end-1c")):
return # Let the default binding (delete next char) take over
if not self.executing:
self.resetoutput()
self.close()
else:
self.canceled = 0
self.endoffile = 1
self.top.quit()
return "break"
def linefeed_callback(self, event):
# Insert a linefeed without entering anything (still autoindented)
if self.reading:
self.text.insert("insert", "\n")
self.text.see("insert")
else:
self.newline_and_indent_event(event)
return "break"
def enter_callback(self, event):
if self.executing and not self.reading:
return # Let the default binding (insert '\n') take over
# If some text is selected, recall the selection
        # (but only if this is before the I/O mark)
try:
sel = self.text.get("sel.first", "sel.last")
if sel:
if self.text.compare("sel.last", "<=", "iomark"):
self.recall(sel, event)
return "break"
except:
pass
# If we're strictly before the line containing iomark, recall
# the current line, less a leading prompt, less leading or
# trailing whitespace
if self.text.compare("insert", "<", "iomark linestart"):
# Check if there's a relevant stdin range -- if so, use it
prev = self.text.tag_prevrange("stdin", "insert")
if prev and self.text.compare("insert", "<", prev[1]):
self.recall(self.text.get(prev[0], prev[1]), event)
return "break"
next = self.text.tag_nextrange("stdin", "insert")
if next and self.text.compare("insert lineend", ">=", next[0]):
self.recall(self.text.get(next[0], next[1]), event)
return "break"
# No stdin mark -- just get the current line, less any prompt
indices = self.text.tag_nextrange("console", "insert linestart")
if indices and \
self.text.compare(indices[0], "<=", "insert linestart"):
self.recall(self.text.get(indices[1], "insert lineend"), event)
else:
self.recall(self.text.get("insert linestart", "insert lineend"), event)
return "break"
# If we're between the beginning of the line and the iomark, i.e.
# in the prompt area, move to the end of the prompt
if self.text.compare("insert", "<", "iomark"):
self.text.mark_set("insert", "iomark")
# If we're in the current input and there's only whitespace
# beyond the cursor, erase that whitespace first
s = self.text.get("insert", "end-1c")
if s and not s.strip():
self.text.delete("insert", "end-1c")
# If we're in the current input before its last line,
# insert a newline right at the insert point
if self.text.compare("insert", "<", "end-1c linestart"):
self.newline_and_indent_event(event)
return "break"
# We're in the last line; append a newline and submit it
self.text.mark_set("insert", "end-1c")
if self.reading:
self.text.insert("insert", "\n")
self.text.see("insert")
else:
self.newline_and_indent_event(event)
self.text.tag_add("stdin", "iomark", "end-1c")
self.text.update_idletasks()
if self.reading:
self.top.quit() # Break out of recursive mainloop() in raw_input()
else:
self.runit()
return "break"
def recall(self, s, event):
# remove leading and trailing empty or whitespace lines
        s = re.sub(r'^\s*\n', '', s)
s = re.sub(r'\n\s*$', '', s)
lines = s.split('\n')
self.text.undo_block_start()
try:
self.text.tag_remove("sel", "1.0", "end")
self.text.mark_set("insert", "end-1c")
prefix = self.text.get("insert linestart", "insert")
if prefix.rstrip().endswith(':'):
self.newline_and_indent_event(event)
prefix = self.text.get("insert linestart", "insert")
self.text.insert("insert", lines[0].strip())
if len(lines) > 1:
orig_base_indent = re.search(r'^([ \t]*)', lines[0]).group(0)
new_base_indent = re.search(r'^([ \t]*)', prefix).group(0)
for line in lines[1:]:
if line.startswith(orig_base_indent):
# replace orig base indentation with new indentation
line = new_base_indent + line[len(orig_base_indent):]
self.text.insert('insert', '\n'+line.rstrip())
finally:
self.text.see("insert")
self.text.undo_block_stop()
def runit(self):
line = self.text.get("iomark", "end-1c")
# Strip off last newline and surrounding whitespace.
# (To allow you to hit return twice to end a statement.)
i = len(line)
while i > 0 and line[i-1] in " \t":
i = i-1
if i > 0 and line[i-1] == "\n":
i = i-1
while i > 0 and line[i-1] in " \t":
i = i-1
line = line[:i]
        self.interp.runsource(line)
def open_stack_viewer(self, event=None):
if self.interp.rpcclt:
return self.interp.remote_stack_viewer()
try:
sys.last_traceback
except:
tkMessageBox.showerror("No stack trace",
"There is no stack trace yet.\n"
"(sys.last_traceback is not defined)",
master=self.text)
return
from idlelib.StackViewer import StackBrowser
sv = StackBrowser(self.root, self.flist)
def view_restart_mark(self, event=None):
self.text.see("iomark")
self.text.see("restart")
def restart_shell(self, event=None):
        "Callback for Run/Restart Shell Ctrl-F6"
self.interp.restart_subprocess(with_cwd=True)
def showprompt(self):
self.resetoutput()
try:
s = str(sys.ps1)
except:
s = ""
self.console.write(s)
self.text.mark_set("insert", "end-1c")
self.set_line_and_column()
self.io.reset_undo()
def resetoutput(self):
source = self.text.get("iomark", "end-1c")
if self.history:
self.history.history_store(source)
if self.text.get("end-2c") != "\n":
self.text.insert("end-1c", "\n")
self.text.mark_set("iomark", "end-1c")
self.set_line_and_column()
sys.stdout.softspace = 0
def write(self, s, tags=()):
try:
self.text.mark_gravity("iomark", "right")
OutputWindow.write(self, s, tags, "iomark")
self.text.mark_gravity("iomark", "left")
except:
pass
if self.canceled:
self.canceled = 0
if not use_subprocess:
raise KeyboardInterrupt
class PseudoFile(object):
def __init__(self, shell, tags, encoding=None):
self.shell = shell
self.tags = tags
self.softspace = 0
self.encoding = encoding
def write(self, s):
self.shell.write(s, self.tags)
def writelines(self, lines):
for line in lines:
self.write(line)
def flush(self):
pass
def isatty(self):
return True
usage_msg = """\
USAGE: idle [-deins] [-t title] [file]*
idle [-dns] [-t title] (-c cmd | -r file) [arg]*
idle [-dns] [-t title] - [arg]*
-h print this help message and exit
-n run IDLE without a subprocess (see Help/IDLE Help for details)
The following options will override the IDLE 'settings' configuration:
-e open an edit window
-i open a shell window
The following options imply -i and will open a shell:
-c cmd run the command in a shell, or
-r file run script from file
-d enable the debugger
-s run $IDLESTARTUP or $PYTHONSTARTUP before anything else
-t title set title of shell window
A default edit window will be bypassed when -c, -r, or - are used.
[arg]* are passed to the command (-c) or script (-r) in sys.argv[1:].
Examples:
idle
Open an edit window or shell depending on IDLE's configuration.
idle foo.py foobar.py
Edit the files, also open a shell if configured to start with shell.
idle -est "Baz" foo.py
Run $IDLESTARTUP or $PYTHONSTARTUP, edit foo.py, and open a shell
window with the title "Baz".
idle -c "import sys; print sys.argv" "foo"
Open a shell window and run the command, passing "-c" in sys.argv[0]
and "foo" in sys.argv[1].
idle -d -s -r foo.py "Hello World"
Open a shell window, run a startup script, enable the debugger, and
run foo.py, passing "foo.py" in sys.argv[0] and "Hello World" in
sys.argv[1].
echo "import sys; print sys.argv" | idle - "foobar"
Open a shell window, run the script piped in, passing '' in sys.argv[0]
and "foobar" in sys.argv[1].
"""
def main():
global flist, root, use_subprocess
use_subprocess = True
enable_shell = True
enable_edit = False
debug = False
cmd = None
script = None
startup = False
try:
opts, args = getopt.getopt(sys.argv[1:], "c:deihnr:st:")
except getopt.error, msg:
sys.stderr.write("Error: %s\n" % str(msg))
sys.stderr.write(usage_msg)
sys.exit(2)
for o, a in opts:
if o == '-c':
cmd = a
enable_shell = True
if o == '-d':
debug = True
enable_shell = True
if o == '-e':
enable_edit = True
enable_shell = False
if o == '-h':
sys.stdout.write(usage_msg)
sys.exit()
if o == '-i':
enable_shell = True
if o == '-n':
use_subprocess = False
if o == '-r':
script = a
if os.path.isfile(script):
pass
else:
print "No script file: ", script
sys.exit()
enable_shell = True
if o == '-s':
startup = True
enable_shell = True
if o == '-t':
PyShell.shell_title = a
enable_shell = True
if args and args[0] == '-':
cmd = sys.stdin.read()
enable_shell = True
# process sys.argv and sys.path:
for i in range(len(sys.path)):
sys.path[i] = os.path.abspath(sys.path[i])
if args and args[0] == '-':
sys.argv = [''] + args[1:]
elif cmd:
sys.argv = ['-c'] + args
elif script:
sys.argv = [script] + args
elif args:
enable_edit = True
pathx = []
for filename in args:
pathx.append(os.path.dirname(filename))
for dir in pathx:
dir = os.path.abspath(dir)
if dir not in sys.path:
sys.path.insert(0, dir)
else:
dir = os.getcwd()
if not dir in sys.path:
sys.path.insert(0, dir)
# check the IDLE settings configuration (but command line overrides)
edit_start = idleConf.GetOption('main', 'General',
'editor-on-startup', type='bool')
enable_edit = enable_edit or edit_start
# start editor and/or shell windows:
root = Tk(className="Idle")
fixwordbreaks(root)
root.withdraw()
flist = PyShellFileList(root)
macosxSupport.setupApp(root, flist)
if enable_edit:
if not (cmd or script):
for filename in args:
flist.open(filename)
if not args:
flist.new()
if enable_shell:
shell = flist.open_shell()
if not shell:
return # couldn't open shell
if macosxSupport.runningAsOSXApp() and flist.dict:
# On OSX: when the user has double-clicked on a file that causes
# IDLE to be launched the shell window will open just in front of
# the file she wants to see. Lower the interpreter window when
# there are open files.
shell.top.lower()
shell = flist.pyshell
# handle remaining options:
if debug:
shell.open_debugger()
if startup:
filename = os.environ.get("IDLESTARTUP") or \
os.environ.get("PYTHONSTARTUP")
if filename and os.path.isfile(filename):
shell.interp.execfile(filename)
if shell and cmd or script:
shell.interp.runcommand("""if 1:
import sys as _sys
_sys.argv = %r
del _sys
\n""" % (sys.argv,))
if cmd:
shell.interp.execsource(cmd)
elif script:
shell.interp.prepend_syspath(script)
shell.interp.execfile(script)
# Check for problematic OS X Tk versions and print a warning message
# in the IDLE shell window; this is less intrusive than always opening
# a separate window.
tkversionwarning = macosxSupport.tkVersionWarning(root)
if tkversionwarning:
shell.interp.runcommand(''.join(("print('", tkversionwarning, "')")))
root.mainloop()
root.destroy()
if __name__ == "__main__":
sys.modules['PyShell'] = sys.modules['__main__']
main()
| mit |
cselis86/edx-platform | common/djangoapps/student/migrations/0035_access_roles.py | 50 | 19747 | # -*- coding: utf-8 -*-
from south.v2 import DataMigration
import re
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from opaque_keys import InvalidKeyError
import bson.son
import logging
from django.db.models.query_utils import Q
from django.db.utils import IntegrityError
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.mixed import MixedModuleStore
import itertools
log = logging.getLogger(__name__)
class Migration(DataMigration):
"""
    Converts course_creator_group, instructor_, staff_, and beta_testers_ group entries to the new table
"""
GROUP_ENTRY_RE = re.compile(r'(?P<role_id>staff|instructor|beta_testers|course_creator_group)_?(?P<course_id_string>.*)')
def forwards(self, orm):
"""
Converts group table entries for write access and beta_test roles to course access roles table.
"""
store = modulestore()
if isinstance(store, MixedModuleStore):
self.mongostore = modulestore()._get_modulestore_by_type(ModuleStoreEnum.Type.mongo)
self.xmlstore = modulestore()._get_modulestore_by_type(ModuleStoreEnum.Type.xml)
elif store.get_modulestore_type() == ModuleStoreEnum.Type.mongo:
self.mongostore = store
self.xmlstore = None
elif store.get_modulestore_type() == ModuleStoreEnum.Type.xml:
self.mongostore = None
self.xmlstore = store
else:
return
# Note: Remember to use orm['appname.ModelName'] rather than "from appname.models..."
# b/c the Groups table had several entries for each course, we need to ensure we process each unique
# course only once. The below datastructures help ensure that.
hold = {} # key of course_id_strings with array of group objects. Should only be org scoped entries
# or deleted courses
orgs = {} # downcased org to last recorded normal case of the org
query = Q(name='course_creator_group')
for role in ['staff', 'instructor', 'beta_testers', ]:
query = query | Q(name__startswith=role)
for group in orm['auth.Group'].objects.filter(query).all():
def _migrate_users(correct_course_key, role, lower_org):
"""
Get all the users from the old group and migrate to this course key in the new table
"""
for user in orm['auth.user'].objects.filter(groups=group).all():
entry = orm['student.courseaccessrole'](
role=role, user=user,
org=correct_course_key.org, course_id=correct_course_key
)
try:
entry.save()
except IntegrityError:
# already stored
pass
orgs[lower_org] = correct_course_key.org
parsed_entry = self.GROUP_ENTRY_RE.match(group.name)
role = parsed_entry.group('role_id')
if role == 'course_creator_group':
for user in orm['auth.user'].objects.filter(groups=group).all():
entry = orm['student.courseaccessrole'](role=role, user=user)
entry.save()
else:
course_id_string = parsed_entry.group('course_id_string')
try:
course_key = SlashSeparatedCourseKey.from_deprecated_string(course_id_string)
# course_key is the downcased version, get the normal cased one. loc_mapper() has no
# methods taking downcased SSCK; so, need to do it manually here
correct_course_key = self._map_downcased_ssck(course_key)
if correct_course_key is not None:
_migrate_users(correct_course_key, role, course_key.org)
except InvalidKeyError:
# old dotted format, try permutations
parts = course_id_string.split('.')
if len(parts) < 3:
hold.setdefault(course_id_string, []).append(group)
elif len(parts) == 3:
course_key = SlashSeparatedCourseKey(*parts)
correct_course_key = self._map_downcased_ssck(course_key)
if correct_course_key is None:
hold.setdefault(course_id_string, []).append(group)
else:
_migrate_users(correct_course_key, role, course_key.org)
else:
correct_course_key = self.divide_parts_find_key(parts)
if correct_course_key is None:
hold.setdefault(course_id_string, []).append(group)
else:
_migrate_users(correct_course_key, role, correct_course_key.org)
# see if any in hold were missed above
for held_auth_scope, groups in hold.iteritems():
# orgs indexed by downcased org
held_auth_scope = held_auth_scope.lower()
if held_auth_scope in orgs:
for group in groups:
role = self.GROUP_ENTRY_RE.match(group.name).group('role_id')
# they have org permission
for user in orm['auth.user'].objects.filter(groups=group).all():
entry = orm['student.courseaccessrole'](
role=role,
user=user,
org=orgs[held_auth_scope],
)
entry.save()
else:
# don't silently skip unexpected roles
            log.warning("Didn't convert roles %s", [group.name for group in groups])
def divide_parts_find_key(self, parts):
"""
Look for all possible org/course/run patterns from a possibly dotted source
"""
for org_stop, course_stop in itertools.combinations(range(1, len(parts)), 2):
org = '.'.join(parts[:org_stop])
course = '.'.join(parts[org_stop:course_stop])
run = '.'.join(parts[course_stop:])
course_key = SlashSeparatedCourseKey(org, course, run)
correct_course_key = self._map_downcased_ssck(course_key)
if correct_course_key is not None:
return correct_course_key
return None
def backwards(self, orm):
"Removes the new table."
        # Since this migration is non-destructive (monotonically adds information), I'm not sure what
        # the semantics of backwards should be other than perhaps clearing the table.
orm['student.courseaccessrole'].objects.all().delete()
def _map_downcased_ssck(self, downcased_ssck):
"""
Get the normal cased version of this downcased slash sep course key
"""
if self.mongostore is not None:
course_son = bson.son.SON([
('_id.tag', 'i4x'),
('_id.org', re.compile(ur'^{}$'.format(downcased_ssck.org), re.IGNORECASE | re.UNICODE)),
('_id.course', re.compile(ur'^{}$'.format(downcased_ssck.course), re.IGNORECASE | re.UNICODE)),
('_id.category', 'course'),
('_id.name', re.compile(ur'^{}$'.format(downcased_ssck.run), re.IGNORECASE | re.UNICODE)),
])
entry = self.mongostore.collection.find_one(course_son)
if entry:
idpart = entry['_id']
return SlashSeparatedCourseKey(idpart['org'], idpart['course'], idpart['name'])
if self.xmlstore is not None:
for course in self.xmlstore.get_courses():
if (
course.id.org.lower() == downcased_ssck.org and course.id.course.lower() == downcased_ssck.course
and course.id.run.lower() == downcased_ssck.run
):
return course.id
return None
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'student.anonymoususerid': {
'Meta': {'object_name': 'AnonymousUserId'},
'anonymous_user_id': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'}),
'course_id': ('xmodule_django.models.CourseKeyField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'student.courseaccessrole': {
'Meta': {'unique_together': "(('user', 'org', 'course_id', 'role'),)", 'object_name': 'CourseAccessRole'},
'course_id': ('xmodule_django.models.CourseKeyField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'org': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '64', 'blank': 'True'}),
'role': ('django.db.models.fields.CharField', [], {'max_length': '64', 'db_index': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'student.courseenrollment': {
'Meta': {'ordering': "('user', 'course_id')", 'unique_together': "(('user', 'course_id'),)", 'object_name': 'CourseEnrollment'},
'course_id': ('xmodule_django.models.CourseKeyField', [], {'max_length': '255', 'db_index': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'mode': ('django.db.models.fields.CharField', [], {'default': "'honor'", 'max_length': '100'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'student.courseenrollmentallowed': {
'Meta': {'unique_together': "(('email', 'course_id'),)", 'object_name': 'CourseEnrollmentAllowed'},
'auto_enroll': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'course_id': ('xmodule_django.models.CourseKeyField', [], {'max_length': '255', 'db_index': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'student.loginfailures': {
'Meta': {'object_name': 'LoginFailures'},
'failure_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lockout_until': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'student.passwordhistory': {
'Meta': {'object_name': 'PasswordHistory'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'time_set': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'student.pendingemailchange': {
'Meta': {'object_name': 'PendingEmailChange'},
'activation_key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new_email': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'student.pendingnamechange': {
'Meta': {'object_name': 'PendingNameChange'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'rationale': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'blank': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'student.registration': {
'Meta': {'object_name': 'Registration', 'db_table': "'auth_registration'"},
'activation_key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'student.userprofile': {
'Meta': {'object_name': 'UserProfile', 'db_table': "'auth_userprofile'"},
'allow_certificate': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'city': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'country': ('django_countries.fields.CountryField', [], {'max_length': '2', 'null': 'True', 'blank': 'True'}),
'courseware': ('django.db.models.fields.CharField', [], {'default': "'course.xml'", 'max_length': '255', 'blank': 'True'}),
'gender': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '6', 'null': 'True', 'blank': 'True'}),
'goals': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'level_of_education': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '6', 'null': 'True', 'blank': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'mailing_address': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'meta': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'profile'", 'unique': 'True', 'to': "orm['auth.User']"}),
'year_of_birth': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'})
},
'student.userstanding': {
'Meta': {'object_name': 'UserStanding'},
'account_status': ('django.db.models.fields.CharField', [], {'max_length': '31', 'blank': 'True'}),
'changed_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'standing_last_changed_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'standing'", 'unique': 'True', 'to': "orm['auth.User']"})
},
'student.usertestgroup': {
'Meta': {'object_name': 'UserTestGroup'},
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.User']", 'db_index': 'True', 'symmetrical': 'False'})
}
}
complete_apps = ['student']
symmetrical = True
| agpl-3.0 |
litchfield/django | tests/inspectdb/models.py | 208 | 2737 | # -*- encoding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
class People(models.Model):
name = models.CharField(max_length=255)
parent = models.ForeignKey('self', models.CASCADE)
class Message(models.Model):
from_field = models.ForeignKey(People, models.CASCADE, db_column='from_id')
class PeopleData(models.Model):
people_pk = models.ForeignKey(People, models.CASCADE, primary_key=True)
ssn = models.CharField(max_length=11)
class PeopleMoreData(models.Model):
people_unique = models.ForeignKey(People, models.CASCADE, unique=True)
license = models.CharField(max_length=255)
class DigitsInColumnName(models.Model):
all_digits = models.CharField(max_length=11, db_column='123')
leading_digit = models.CharField(max_length=11, db_column='4extra')
leading_digits = models.CharField(max_length=11, db_column='45extra')
class SpecialName(models.Model):
field = models.IntegerField(db_column='field')
# Underscores
field_field_0 = models.IntegerField(db_column='Field_')
field_field_1 = models.IntegerField(db_column='Field__')
field_field_2 = models.IntegerField(db_column='__field')
# Other chars
prc_x = models.IntegerField(db_column='prc(%) x')
non_ascii = models.IntegerField(db_column='tamaño')
class Meta:
db_table = "inspectdb_special.table name"
class ColumnTypes(models.Model):
id = models.AutoField(primary_key=True)
big_int_field = models.BigIntegerField()
bool_field = models.BooleanField(default=False)
null_bool_field = models.NullBooleanField()
char_field = models.CharField(max_length=10)
null_char_field = models.CharField(max_length=10, blank=True, null=True)
comma_separated_int_field = models.CommaSeparatedIntegerField(max_length=99)
date_field = models.DateField()
date_time_field = models.DateTimeField()
decimal_field = models.DecimalField(max_digits=6, decimal_places=1)
email_field = models.EmailField()
file_field = models.FileField(upload_to="unused")
file_path_field = models.FilePathField()
float_field = models.FloatField()
int_field = models.IntegerField()
gen_ip_adress_field = models.GenericIPAddressField(protocol="ipv4")
pos_int_field = models.PositiveIntegerField()
pos_small_int_field = models.PositiveSmallIntegerField()
slug_field = models.SlugField()
small_int_field = models.SmallIntegerField()
text_field = models.TextField()
time_field = models.TimeField()
url_field = models.URLField()
class UniqueTogether(models.Model):
field1 = models.IntegerField()
field2 = models.CharField(max_length=10)
class Meta:
unique_together = ('field1', 'field2')
| bsd-3-clause |
moonboots/tensorflow | tensorflow/python/kernel_tests/random_crop_test.py | 15 | 2626 | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for random_crop."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
class RandomCropTest(tf.test.TestCase):
def testNoOp(self):
# No random cropping is performed since the size is value.shape.
for shape in (2, 1, 1), (2, 1, 3), (4, 5, 3):
value = np.arange(0, np.prod(shape), dtype=np.int32).reshape(shape)
with self.test_session():
crop = tf.random_crop(value, shape).eval()
self.assertAllEqual(crop, value)
def testContains(self):
with self.test_session():
shape = (3, 5, 7)
target = (2, 3, 4)
value = np.random.randint(1000000, size=shape)
value_set = set(tuple(value[i:i + 2, j:j + 3, k:k + 4].ravel())
for i in range(2) for j in range(3) for k in range(4))
crop = tf.random_crop(value, size=target)
for _ in range(20):
y = crop.eval()
self.assertAllEqual(y.shape, target)
self.assertTrue(tuple(y.ravel()) in value_set)
def testRandomization(self):
# Run 1x1 crop num_samples times in an image and ensure that one finds each
# pixel 1/size of the time.
num_samples = 1000
shape = [5, 4, 1]
size = np.prod(shape)
single = [1, 1, 1]
value = np.arange(size).reshape(shape)
with self.test_session():
crop = tf.random_crop(value, single, seed=7)
counts = np.zeros(size, dtype=np.int32)
for _ in range(num_samples):
y = crop.eval()
self.assertAllEqual(y.shape, single)
counts[y] += 1
# Calculate the mean and 4 * standard deviation.
mean = np.repeat(num_samples / size, size)
four_stddev = 4.0 * np.sqrt(mean)
# Ensure that each entry is observed in 1/size of the samples
# within 4 standard deviations.
self.assertAllClose(counts, mean, atol=four_stddev)
if __name__ == '__main__':
tf.test.main()
| apache-2.0 |
arborh/tensorflow | tensorflow/python/autograph/core/unsupported_features_checker.py | 4 | 1842 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Checkers for detecting unsupported Python features."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gast
from tensorflow.python.autograph.pyct import errors
class UnsupportedFeaturesChecker(gast.NodeVisitor):
"""Quick check for Python features we know we don't support.
Any features detected will cause AutoGraph to not compile a function.
"""
def visit_Attribute(self, node):
if (node.attr is not None
and node.attr.startswith('__') and not node.attr.endswith('__')):
raise errors.UnsupportedLanguageElementError(
'mangled names are not yet supported by AutoGraph')
# These checks could potentially be replaced with inspect.isgeneratorfunction
# to avoid a getsource/parse/ast-walk round trip.
def visit_Yield(self, node):
raise errors.UnsupportedLanguageElementError(
'generators are not supported by AutoGraph')
def visit_YieldFrom(self, node):
raise errors.UnsupportedLanguageElementError(
'generators are not supported by AutoGraph')
def verify(node):
UnsupportedFeaturesChecker().visit(node)
| apache-2.0 |
FederatedAI/FATE | python/federatedml/optim/optimizer.py | 1 | 12739 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2019 The FATE Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from federatedml.linear_model.linear_model_weight import LinearModelWeights
from federatedml.util import LOGGER
from federatedml.util import consts
class _Optimizer(object):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu=0):
self.learning_rate = learning_rate
self.iters = 0
self.alpha = alpha
self.penalty = penalty
self.decay = decay
self.decay_sqrt = decay_sqrt
self.mu = mu
def decay_learning_rate(self):
if self.decay_sqrt:
lr = self.learning_rate / np.sqrt(1 + self.decay * self.iters)
else:
lr = self.learning_rate / (1 + self.decay * self.iters)
return lr
@property
def shrinkage_val(self):
this_step_size = self.learning_rate / np.sqrt(self.iters)
return self.alpha * this_step_size
def set_iters(self, iters):
self.iters = iters
def apply_gradients(self, grad):
raise NotImplementedError("Should not call here")
def _l1_updator(self, model_weights: LinearModelWeights, gradient):
coef_ = model_weights.coef_
if model_weights.fit_intercept:
gradient_without_intercept = gradient[: -1]
else:
gradient_without_intercept = gradient
new_weights = np.sign(coef_ - gradient_without_intercept) * np.maximum(0, np.abs(
coef_ - gradient_without_intercept) - self.shrinkage_val)
if model_weights.fit_intercept:
new_weights = np.append(new_weights, model_weights.intercept_)
new_weights[-1] -= gradient[-1]
new_param = LinearModelWeights(new_weights, model_weights.fit_intercept)
# LOGGER.debug("In _l1_updator, original weight: {}, new_weights: {}".format(
# model_weights.unboxed, new_weights
# ))
return new_param
def _l2_updator(self, lr_weights: LinearModelWeights, gradient):
"""
For l2 regularization, the regular term has been added in gradients.
"""
new_weights = lr_weights.unboxed - gradient
new_param = LinearModelWeights(new_weights, lr_weights.fit_intercept)
return new_param
def add_regular_to_grad(self, grad, lr_weights):
if self.penalty == consts.L2_PENALTY:
if lr_weights.fit_intercept:
gradient_without_intercept = grad[: -1]
gradient_without_intercept += self.alpha * lr_weights.coef_
new_grad = np.append(gradient_without_intercept, grad[-1])
else:
new_grad = grad + self.alpha * lr_weights.coef_
else:
new_grad = grad
return new_grad
def regularization_update(self, model_weights: LinearModelWeights, grad,
prev_round_weights: LinearModelWeights = None):
# LOGGER.debug(f"In regularization_update, input model_weights: {model_weights.unboxed}")
if self.penalty == consts.L1_PENALTY:
model_weights = self._l1_updator(model_weights, grad)
elif self.penalty == consts.L2_PENALTY:
model_weights = self._l2_updator(model_weights, grad)
else:
new_vars = model_weights.unboxed - grad
model_weights = LinearModelWeights(new_vars, model_weights.fit_intercept)
if prev_round_weights is not None: # additional proximal term
coef_ = model_weights.unboxed
if model_weights.fit_intercept:
coef_without_intercept = coef_[: -1]
else:
coef_without_intercept = coef_
coef_without_intercept -= self.mu * (model_weights.coef_ - prev_round_weights.coef_)
if model_weights.fit_intercept:
new_coef_ = np.append(coef_without_intercept, coef_[-1])
else:
new_coef_ = coef_without_intercept
model_weights = LinearModelWeights(new_coef_, model_weights.fit_intercept)
return model_weights
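The optional `mu` term above is a proximal penalty that pulls the current weights toward the previous round's weights (the idea commonly associated with FedProx in federated learning). A scalar-list sketch of just that step, with an invented helper name:

```python
def proximal_pull(coef, prev_coef, mu):
    # Gradient of (mu / 2) * ||w - w_prev||^2 is mu * (w - w_prev).
    return [w - mu * (w - p) for w, p in zip(coef, prev_coef)]

print(proximal_pull([1.0, -2.0], [0.0, 0.0], mu=0.5))  # [0.5, -1.0]
```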
def __l1_loss_norm(self, model_weights: LinearModelWeights):
coef_ = model_weights.coef_
loss_norm = np.sum(self.alpha * np.abs(coef_))
return loss_norm
def __l2_loss_norm(self, model_weights: LinearModelWeights):
coef_ = model_weights.coef_
loss_norm = 0.5 * self.alpha * np.dot(coef_, coef_)
return loss_norm
def __add_proximal(self, model_weights, prev_round_weights):
prev_round_coef_ = prev_round_weights.coef_
coef_ = model_weights.coef_
diff = coef_ - prev_round_coef_
loss_norm = self.mu * 0.5 * np.dot(diff, diff)
return loss_norm
def loss_norm(self, model_weights: LinearModelWeights, prev_round_weights: LinearModelWeights = None):
proximal_term = None
if prev_round_weights is not None:
proximal_term = self.__add_proximal(model_weights, prev_round_weights)
if self.penalty == consts.L1_PENALTY:
loss_norm_value = self.__l1_loss_norm(model_weights)
elif self.penalty == consts.L2_PENALTY:
loss_norm_value = self.__l2_loss_norm(model_weights)
else:
loss_norm_value = None
# additional proximal term
if loss_norm_value is None:
loss_norm_value = proximal_term
elif proximal_term is not None:
loss_norm_value += proximal_term
return loss_norm_value
def hess_vector_norm(self, delta_s: LinearModelWeights):
if self.penalty == consts.L1_PENALTY:
return LinearModelWeights(np.zeros_like(delta_s.unboxed), fit_intercept=delta_s.fit_intercept)
elif self.penalty == consts.L2_PENALTY:
return LinearModelWeights(self.alpha * np.array(delta_s.unboxed), fit_intercept=delta_s.fit_intercept)
else:
return LinearModelWeights(np.zeros_like(delta_s.unboxed), fit_intercept=delta_s.fit_intercept)
def update_model(self, model_weights: LinearModelWeights, grad, prev_round_weights: LinearModelWeights = None,
has_applied=True):
if not has_applied:
grad = self.add_regular_to_grad(grad, model_weights)
delta_grad = self.apply_gradients(grad)
else:
delta_grad = grad
model_weights = self.regularization_update(model_weights, delta_grad, prev_round_weights)
return model_weights
class _SgdOptimizer(_Optimizer):
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
delta_grad = learning_rate * grad
# LOGGER.debug("In sgd optimizer, learning_rate: {}, delta_grad: {}".format(learning_rate, delta_grad))
return delta_grad
class _RMSPropOptimizer(_Optimizer):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu):
super().__init__(learning_rate, alpha, penalty, decay, decay_sqrt)
self.rho = 0.99
self.opt_m = None
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
if self.opt_m is None:
self.opt_m = np.zeros_like(grad)
self.opt_m = self.rho * self.opt_m + (1 - self.rho) * np.square(grad)
self.opt_m = np.array(self.opt_m, dtype=np.float64)
delta_grad = learning_rate * grad / np.sqrt(self.opt_m + 1e-6)
return delta_grad
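A scalar version of the RMSProp step above, with the moving-average state threaded explicitly (note that `eps` sits inside the square root, matching the code):

```python
def rmsprop_step(grad, opt_m, lr, rho=0.99, eps=1e-6):
    # Exponential moving average of squared gradients; scale step by its root.
    opt_m = rho * opt_m + (1 - rho) * grad ** 2
    delta = lr * grad / (opt_m + eps) ** 0.5
    return delta, opt_m

delta, m = rmsprop_step(grad=2.0, opt_m=0.0, lr=0.1, rho=0.5, eps=0.0)
print(m)      # 2.0   (0.5 * 0 + 0.5 * 4)
print(delta)  # ≈ 0.1414  (0.1 * 2 / sqrt(2))
```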
class _AdaGradOptimizer(_Optimizer):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu):
super().__init__(learning_rate, alpha, penalty, decay, decay_sqrt)
self.opt_m = None
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
if self.opt_m is None:
self.opt_m = np.zeros_like(grad)
self.opt_m = self.opt_m + np.square(grad)
self.opt_m = np.array(self.opt_m, dtype=np.float64)
delta_grad = learning_rate * grad / (np.sqrt(self.opt_m) + 1e-7)
return delta_grad
class _NesterovMomentumSGDOpimizer(_Optimizer):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu):
super().__init__(learning_rate, alpha, penalty, decay, decay_sqrt)
self.nesterov_momentum_coeff = 0.9
self.opt_m = None
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
if self.opt_m is None:
self.opt_m = np.zeros_like(grad)
v = self.nesterov_momentum_coeff * self.opt_m - learning_rate * grad
delta_grad = self.nesterov_momentum_coeff * self.opt_m - (1 + self.nesterov_momentum_coeff) * v
self.opt_m = v
# LOGGER.debug('In nesterov_momentum, opt_m: {}, v: {}, delta_grad: {}'.format(
# self.opt_m, v, delta_grad
# ))
return delta_grad
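A scalar sketch of the Nesterov look-ahead update above, with the velocity returned explicitly instead of stored on the object:

```python
def nesterov_step(grad, opt_m, lr, coeff=0.9):
    # Look-ahead momentum: v is the new velocity, delta the applied step.
    v = coeff * opt_m - lr * grad
    delta = coeff * opt_m - (1 + coeff) * v
    return delta, v

# With zero initial momentum this reduces to delta = (1 + coeff) * lr * grad.
delta, v = nesterov_step(grad=1.0, opt_m=0.0, lr=0.5, coeff=0.5)
print(v)      # -0.5
print(delta)  # 0.75
```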
class _AdamOptimizer(_Optimizer):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu):
super().__init__(learning_rate, alpha, penalty, decay, decay_sqrt)
self.opt_beta1 = 0.9
self.opt_beta2 = 0.999
self.opt_beta1_decay = 1.0
self.opt_beta2_decay = 1.0
self.opt_m = None
self.opt_v = None
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
if self.opt_m is None:
self.opt_m = np.zeros_like(grad)
if self.opt_v is None:
self.opt_v = np.zeros_like(grad)
self.opt_beta1_decay = self.opt_beta1_decay * self.opt_beta1
self.opt_beta2_decay = self.opt_beta2_decay * self.opt_beta2
self.opt_m = self.opt_beta1 * self.opt_m + (1 - self.opt_beta1) * grad
self.opt_v = self.opt_beta2 * self.opt_v + (1 - self.opt_beta2) * np.square(grad)
opt_m_hat = self.opt_m / (1 - self.opt_beta1_decay)
opt_v_hat = self.opt_v / (1 - self.opt_beta2_decay)
opt_v_hat = np.array(opt_v_hat, dtype=np.float64)
delta_grad = learning_rate * opt_m_hat / (np.sqrt(opt_v_hat) + 1e-8)
return delta_grad
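The Adam step above can be reproduced on scalars; the bias-correction factors `1 - beta^t` are what keep early steps from being too small:

```python
def adam_step(grad, m, v, b1_decay, b2_decay, lr, b1=0.9, b2=0.999, eps=1e-8):
    # First/second moment EMAs with bias correction, as in the class above.
    b1_decay *= b1
    b2_decay *= b2
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1_decay)
    v_hat = v / (1 - b2_decay)
    delta = lr * m_hat / (v_hat ** 0.5 + eps)
    return delta, m, v, b1_decay, b2_decay

# On the first step bias correction gives m_hat == grad and v_hat == grad**2,
# so the step size is roughly lr regardless of the gradient's magnitude.
delta = adam_step(grad=3.0, m=0.0, v=0.0, b1_decay=1.0, b2_decay=1.0, lr=0.1, eps=0.0)[0]
print(delta)  # ≈ 0.1
```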
class _StochasticQuansiNewtonOptimizer(_Optimizer):
def __init__(self, learning_rate, alpha, penalty, decay, decay_sqrt, mu):
super().__init__(learning_rate, alpha, penalty, decay, decay_sqrt)
self.__opt_hess = None
def apply_gradients(self, grad):
learning_rate = self.decay_learning_rate()
# LOGGER.debug("__opt_hess is: {}".format(self.__opt_hess))
if self.__opt_hess is None:
delta_grad = learning_rate * grad
else:
delta_grad = learning_rate * self.__opt_hess.dot(grad)
# LOGGER.debug("In sqn updater, grad: {}, delta_grad: {}".format(grad, delta_grad))
return delta_grad
def set_hess_matrix(self, hess_matrix):
self.__opt_hess = hess_matrix
def optimizer_factory(param):
try:
optimizer_type = param.optimizer
learning_rate = param.learning_rate
alpha = param.alpha
penalty = param.penalty
decay = param.decay
decay_sqrt = param.decay_sqrt
if hasattr(param, 'mu'):
mu = param.mu
else:
mu = 0.0
init_params = [learning_rate, alpha, penalty, decay, decay_sqrt, mu]
except AttributeError:
raise AttributeError("Optimizer parameters have not been completely set")
LOGGER.debug("in optimizer_factory, optimizer_type: {}, learning_rate: {}, alpha: {}, penalty: {},"
"decay: {}, decay_sqrt: {}".format(optimizer_type, *init_params))
if optimizer_type == 'sgd':
return _SgdOptimizer(*init_params)
elif optimizer_type == 'nesterov_momentum_sgd':
return _NesterovMomentumSGDOpimizer(*init_params)
elif optimizer_type == 'rmsprop':
return _RMSPropOptimizer(*init_params)
elif optimizer_type == 'adam':
return _AdamOptimizer(*init_params)
elif optimizer_type == 'adagrad':
return _AdaGradOptimizer(*init_params)
elif optimizer_type == 'sqn':
return _StochasticQuansiNewtonOptimizer(*init_params)
else:
raise NotImplementedError("Optimize method cannot be recognized: {}".format(optimizer_type))
| apache-2.0 |
havard024/prego | crm/lib/python2.7/site-packages/PIL/ImageWin.py | 15 | 7664 | #
# The Python Imaging Library.
# $Id$
#
# a Windows DIB display interface
#
# History:
# 1996-05-20 fl Created
# 1996-09-20 fl Fixed subregion exposure
# 1997-09-21 fl Added draw primitive (for tzPrint)
# 2003-05-21 fl Added experimental Window/ImageWindow classes
# 2003-09-05 fl Added fromstring/tostring methods
#
# Copyright (c) Secret Labs AB 1997-2003.
# Copyright (c) Fredrik Lundh 1996-2003.
#
# See the README file for information on usage and redistribution.
#
import warnings
from PIL import Image
class HDC:
"""
Wraps a HDC integer. The resulting object can be passed to the
:py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose`
methods.
"""
def __init__(self, dc):
self.dc = dc
def __int__(self):
return self.dc
class HWND:
"""
Wraps a HWND integer. The resulting object can be passed to the
:py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose`
methods, instead of a DC.
"""
def __init__(self, wnd):
self.wnd = wnd
def __int__(self):
return self.wnd
class Dib:
"""
A Windows bitmap with the given mode and size. The mode can be one of "1",
"L", "P", or "RGB".
If the display requires a palette, this constructor creates a suitable
palette and associates it with the image. For an "L" image, 128 greylevels
are allocated. For an "RGB" image, a 6x6x6 colour cube is used, together
with 20 greylevels.
To make sure that palettes work properly under Windows, you must call the
**palette** method upon certain events from Windows.
:param image: Either a PIL image, or a mode string. If a mode string is
used, a size must also be given. The mode can be one of "1",
"L", "P", or "RGB".
:param size: If the first argument is a mode string, this
defines the size of the image.
"""
def __init__(self, image, size=None):
if hasattr(image, "mode") and hasattr(image, "size"):
mode = image.mode
size = image.size
else:
mode = image
image = None
if mode not in ["1", "L", "P", "RGB"]:
mode = Image.getmodebase(mode)
self.image = Image.core.display(mode, size)
self.mode = mode
self.size = size
if image:
self.paste(image)
def expose(self, handle):
"""
Copy the bitmap contents to a device context.
:param handle: Device context (HDC), cast to a Python integer, or a HDC
or HWND instance. In PythonWin, you can use the
:py:meth:`CDC.GetHandleAttrib` to get a suitable handle.
"""
if isinstance(handle, HWND):
dc = self.image.getdc(handle)
try:
result = self.image.expose(dc)
finally:
self.image.releasedc(handle, dc)
else:
result = self.image.expose(handle)
return result
def draw(self, handle, dst, src=None):
"""
Same as expose, but allows you to specify where to draw the image, and
what part of it to draw.
The destination and source areas are given as 4-tuple rectangles. If
the source is omitted, the entire image is copied. If the source and
the destination have different sizes, the image is resized as
necessary.
"""
if not src:
src = (0,0) + self.size
if isinstance(handle, HWND):
dc = self.image.getdc(handle)
try:
result = self.image.draw(dc, dst, src)
finally:
self.image.releasedc(handle, dc)
else:
result = self.image.draw(handle, dst, src)
return result
def query_palette(self, handle):
"""
Installs the palette associated with the image in the given device
context.
This method should be called upon **QUERYNEWPALETTE** and
**PALETTECHANGED** events from Windows. If this method returns a
non-zero value, one or more display palette entries were changed, and
the image should be redrawn.
:param handle: Device context (HDC), cast to a Python integer, or an
HDC or HWND instance.
:return: A true value if one or more entries were changed (this
indicates that the image should be redrawn).
"""
if isinstance(handle, HWND):
handle = self.image.getdc(handle)
try:
result = self.image.query_palette(handle)
finally:
self.image.releasedc(handle, handle)
else:
result = self.image.query_palette(handle)
return result
def paste(self, im, box=None):
"""
Paste a PIL image into the bitmap image.
:param im: A PIL image. The size must match the target region.
If the mode does not match, the image is converted to the
mode of the bitmap image.
:param box: A 4-tuple defining the left, upper, right, and
lower pixel coordinate. If None is given instead of a
tuple, all of the image is assumed.
"""
im.load()
if self.mode != im.mode:
im = im.convert(self.mode)
if box:
self.image.paste(im.im, box)
else:
self.image.paste(im.im)
def frombytes(self, buffer):
"""
Load display memory contents from byte data.
:param buffer: A buffer containing display data (usually
data returned from <b>tobytes</b>)
"""
return self.image.frombytes(buffer)
def tobytes(self):
"""
Copy display memory contents to bytes object.
:return: A bytes object containing display data.
"""
return self.image.tobytes()
##
# Deprecated aliases to frombytes & tobytes.
def fromstring(self, *args, **kw):
warnings.warn(
'fromstring() is deprecated. Please call frombytes() instead.',
DeprecationWarning,
stacklevel=2
)
return self.frombytes(*args, **kw)
def tostring(self):
warnings.warn(
'tostring() is deprecated. Please call tobytes() instead.',
DeprecationWarning,
stacklevel=2
)
return self.tobytes()
##
# Create a Window with the given title size.
class Window:
def __init__(self, title="PIL", width=None, height=None):
self.hwnd = Image.core.createwindow(
title, self.__dispatcher, width or 0, height or 0
)
def __dispatcher(self, action, *args):
return getattr(self, "ui_handle_" + action)(*args)
def ui_handle_clear(self, dc, x0, y0, x1, y1):
pass
def ui_handle_damage(self, x0, y0, x1, y1):
pass
def ui_handle_destroy(self):
pass
def ui_handle_repair(self, dc, x0, y0, x1, y1):
pass
def ui_handle_resize(self, width, height):
pass
def mainloop(self):
Image.core.eventloop()
##
# Create an image window which displays the given image.
class ImageWindow(Window):
def __init__(self, image, title="PIL"):
if not isinstance(image, Dib):
image = Dib(image)
self.image = image
width, height = image.size
Window.__init__(self, title, width=width, height=height)
def ui_handle_repair(self, dc, x0, y0, x1, y1):
self.image.draw(dc, (x0, y0, x1, y1))
| mit |
Weicong-Lin/pymo-global | android/pgs4a-0.9.6/python-install/lib/python2.7/test/test_compile.py | 29 | 17895 | import unittest
import sys
import _ast
from test import test_support
import textwrap
class TestSpecifics(unittest.TestCase):
def test_no_ending_newline(self):
compile("hi", "<test>", "exec")
compile("hi\r", "<test>", "exec")
def test_empty(self):
compile("", "<test>", "exec")
def test_other_newlines(self):
compile("\r\n", "<test>", "exec")
compile("\r", "<test>", "exec")
compile("hi\r\nstuff\r\ndef f():\n pass\r", "<test>", "exec")
compile("this_is\rreally_old_mac\rdef f():\n pass", "<test>", "exec")
def test_debug_assignment(self):
# catch assignments to __debug__
self.assertRaises(SyntaxError, compile, '__debug__ = 1', '?', 'single')
import __builtin__
prev = __builtin__.__debug__
setattr(__builtin__, '__debug__', 'sure')
setattr(__builtin__, '__debug__', prev)
def test_argument_handling(self):
# detect duplicate positional and keyword arguments
self.assertRaises(SyntaxError, eval, 'lambda a,a:0')
self.assertRaises(SyntaxError, eval, 'lambda a,a=1:0')
self.assertRaises(SyntaxError, eval, 'lambda a=1,a=1:0')
try:
exec 'def f(a, a): pass'
self.fail("duplicate arguments")
except SyntaxError:
pass
try:
exec 'def f(a = 0, a = 1): pass'
self.fail("duplicate keyword arguments")
except SyntaxError:
pass
try:
exec 'def f(a): global a; a = 1'
self.fail("variable is global and local")
except SyntaxError:
pass
def test_syntax_error(self):
self.assertRaises(SyntaxError, compile, "1+*3", "filename", "exec")
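The behavior this test pins down can be demonstrated directly; the snippet below runs unchanged on both Python 2 and 3 (`compiles` is an illustrative helper, not part of the test suite):

```python
def compiles(source):
    try:
        compile(source, "<test>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles("1+1"))   # True
print(compiles("1+*3"))  # False
```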
def test_none_keyword_arg(self):
self.assertRaises(SyntaxError, compile, "f(None=1)", "<string>", "exec")
def test_duplicate_global_local(self):
try:
exec 'def f(a): global a; a = 1'
self.fail("variable is global and local")
except SyntaxError:
pass
def test_exec_with_general_mapping_for_locals(self):
class M:
"Test mapping interface versus possible calls from eval()."
def __getitem__(self, key):
if key == 'a':
return 12
raise KeyError
def __setitem__(self, key, value):
self.results = (key, value)
def keys(self):
return list('xyz')
m = M()
g = globals()
exec 'z = a' in g, m
self.assertEqual(m.results, ('z', 12))
try:
exec 'z = b' in g, m
except NameError:
pass
else:
self.fail('Did not detect a KeyError')
exec 'z = dir()' in g, m
self.assertEqual(m.results, ('z', list('xyz')))
exec 'z = globals()' in g, m
self.assertEqual(m.results, ('z', g))
exec 'z = locals()' in g, m
self.assertEqual(m.results, ('z', m))
try:
exec 'z = b' in m
except TypeError:
pass
else:
self.fail('Did not validate globals as a real dict')
class A:
"Non-mapping"
pass
m = A()
try:
exec 'z = a' in g, m
except TypeError:
pass
else:
self.fail('Did not validate locals as a mapping')
# Verify that dict subclasses work as well
class D(dict):
def __getitem__(self, key):
if key == 'a':
return 12
return dict.__getitem__(self, key)
d = D()
exec 'z = a' in g, d
self.assertEqual(d['z'], 12)
def test_extended_arg(self):
longexpr = 'x = x or ' + '-x' * 2500
code = '''
def f(x):
%s
%s
%s
%s
%s
%s
%s
%s
%s
%s
# the expressions above have no effect, x == argument
while x:
x -= 1
# EXTENDED_ARG/JUMP_ABSOLUTE here
return x
''' % ((longexpr,)*10)
exec code
self.assertEqual(f(5), 0)
def test_complex_args(self):
with test_support.check_py3k_warnings(
("tuple parameter unpacking has been removed", SyntaxWarning)):
exec textwrap.dedent('''
def comp_args((a, b)):
return a,b
self.assertEqual(comp_args((1, 2)), (1, 2))
def comp_args((a, b)=(3, 4)):
return a, b
self.assertEqual(comp_args((1, 2)), (1, 2))
self.assertEqual(comp_args(), (3, 4))
def comp_args(a, (b, c)):
return a, b, c
self.assertEqual(comp_args(1, (2, 3)), (1, 2, 3))
def comp_args(a=2, (b, c)=(3, 4)):
return a, b, c
self.assertEqual(comp_args(1, (2, 3)), (1, 2, 3))
self.assertEqual(comp_args(), (2, 3, 4))
''')
def test_argument_order(self):
try:
exec 'def f(a=1, (b, c)): pass'
self.fail("non-default args after default")
except SyntaxError:
pass
def test_float_literals(self):
# testing bad float literals
self.assertRaises(SyntaxError, eval, "2e")
self.assertRaises(SyntaxError, eval, "2.0e+")
self.assertRaises(SyntaxError, eval, "1e-")
self.assertRaises(SyntaxError, eval, "3-4e/21")
def test_indentation(self):
# testing compile() of indented block w/o trailing newline
s = """
if 1:
if 2:
pass"""
compile(s, "<string>", "exec")
# This test is probably specific to CPython and may not generalize
# to other implementations. We are trying to ensure that when
# the first line of code starts after 256, correct line numbers
# in tracebacks are still produced.
def test_leading_newlines(self):
s256 = "".join(["\n"] * 256 + ["spam"])
co = compile(s256, 'fn', 'exec')
self.assertEqual(co.co_firstlineno, 257)
self.assertEqual(co.co_lnotab, '')
def test_literals_with_leading_zeroes(self):
for arg in ["077787", "0xj", "0x.", "0e", "090000000000000",
"080000000000000", "000000000000009", "000000000000008",
"0b42", "0BADCAFE", "0o123456789", "0b1.1", "0o4.2",
"0b101j2", "0o153j2", "0b100e1", "0o777e1", "0o8", "0o78"]:
self.assertRaises(SyntaxError, eval, arg)
self.assertEqual(eval("0777"), 511)
self.assertEqual(eval("0777L"), 511)
self.assertEqual(eval("000777"), 511)
self.assertEqual(eval("0xff"), 255)
self.assertEqual(eval("0xffL"), 255)
self.assertEqual(eval("0XfF"), 255)
self.assertEqual(eval("0777."), 777)
self.assertEqual(eval("0777.0"), 777)
self.assertEqual(eval("000000000000000000000000000000000000000000000000000777e0"), 777)
self.assertEqual(eval("0777e1"), 7770)
self.assertEqual(eval("0e0"), 0)
self.assertEqual(eval("0000E-012"), 0)
self.assertEqual(eval("09.5"), 9.5)
self.assertEqual(eval("0777j"), 777j)
self.assertEqual(eval("00j"), 0j)
self.assertEqual(eval("00.0"), 0)
self.assertEqual(eval("0e3"), 0)
self.assertEqual(eval("090000000000000."), 90000000000000.)
self.assertEqual(eval("090000000000000.0000000000000000000000"), 90000000000000.)
self.assertEqual(eval("090000000000000e0"), 90000000000000.)
self.assertEqual(eval("090000000000000e-0"), 90000000000000.)
self.assertEqual(eval("090000000000000j"), 90000000000000j)
self.assertEqual(eval("000000000000007"), 7)
self.assertEqual(eval("000000000000008."), 8.)
self.assertEqual(eval("000000000000009."), 9.)
self.assertEqual(eval("0b101010"), 42)
self.assertEqual(eval("-0b000000000010"), -2)
self.assertEqual(eval("0o777"), 511)
self.assertEqual(eval("-0o0000010"), -8)
self.assertEqual(eval("020000000000.0"), 20000000000.0)
self.assertEqual(eval("037777777777e0"), 37777777777.0)
self.assertEqual(eval("01000000000000000000000.0"),
1000000000000000000000.0)
def test_unary_minus(self):
# Verify treatment of unary minus on negative numbers SF bug #660455
if sys.maxint == 2147483647:
# 32-bit machine
all_one_bits = '0xffffffff'
self.assertEqual(eval(all_one_bits), 4294967295L)
self.assertEqual(eval("-" + all_one_bits), -4294967295L)
elif sys.maxint == 9223372036854775807:
# 64-bit machine
all_one_bits = '0xffffffffffffffff'
self.assertEqual(eval(all_one_bits), 18446744073709551615L)
self.assertEqual(eval("-" + all_one_bits), -18446744073709551615L)
else:
self.fail("How many bits *does* this machine have???")
# Verify treatment of constant folding on -(sys.maxint+1)
# i.e. -2147483648 on 32 bit platforms. Should return int, not long.
self.assertIsInstance(eval("%s" % (-sys.maxint - 1)), int)
self.assertIsInstance(eval("%s" % (-sys.maxint - 2)), long)
if sys.maxint == 9223372036854775807:
def test_32_63_bit_values(self):
a = +4294967296 # 1 << 32
b = -4294967296 # 1 << 32
c = +281474976710656 # 1 << 48
d = -281474976710656 # 1 << 48
e = +4611686018427387904 # 1 << 62
f = -4611686018427387904 # 1 << 62
g = +9223372036854775807 # 1 << 63 - 1
h = -9223372036854775807 # 1 << 63 - 1
for variable in self.test_32_63_bit_values.func_code.co_consts:
if variable is not None:
self.assertIsInstance(variable, int)
def test_sequence_unpacking_error(self):
# Verify sequence packing/unpacking with "or". SF bug #757818
i,j = (1, -1) or (-1, 1)
self.assertEqual(i, 1)
self.assertEqual(j, -1)
def test_none_assignment(self):
stmts = [
'None = 0',
'None += 0',
'__builtins__.None = 0',
'def None(): pass',
'class None: pass',
'(a, None) = 0, 0',
'for None in range(10): pass',
'def f(None): pass',
'import None',
'import x as None',
'from x import None',
'from x import y as None'
]
for stmt in stmts:
stmt += "\n"
self.assertRaises(SyntaxError, compile, stmt, 'tmp', 'single')
self.assertRaises(SyntaxError, compile, stmt, 'tmp', 'exec')
# This is ok.
compile("from None import x", "tmp", "exec")
compile("from x import None as y", "tmp", "exec")
compile("import None as x", "tmp", "exec")
def test_import(self):
succeed = [
'import sys',
'import os, sys',
'import os as bar',
'import os.path as bar',
'from __future__ import nested_scopes, generators',
'from __future__ import (nested_scopes,\ngenerators)',
'from __future__ import (nested_scopes,\ngenerators,)',
'from sys import stdin, stderr, stdout',
'from sys import (stdin, stderr,\nstdout)',
'from sys import (stdin, stderr,\nstdout,)',
'from sys import (stdin\n, stderr, stdout)',
'from sys import (stdin\n, stderr, stdout,)',
'from sys import stdin as si, stdout as so, stderr as se',
'from sys import (stdin as si, stdout as so, stderr as se)',
'from sys import (stdin as si, stdout as so, stderr as se,)',
]
fail = [
'import (os, sys)',
'import (os), (sys)',
'import ((os), (sys))',
'import (sys',
'import sys)',
'import (os,)',
'import os As bar',
'import os.path a bar',
'from sys import stdin As stdout',
'from sys import stdin a stdout',
'from (sys) import stdin',
'from __future__ import (nested_scopes',
'from __future__ import nested_scopes)',
'from __future__ import nested_scopes,\ngenerators',
'from sys import (stdin',
'from sys import stdin)',
'from sys import stdin, stdout,\nstderr',
'from sys import stdin si',
'from sys import stdin,',
'from sys import (*)',
'from sys import (stdin,, stdout, stderr)',
'from sys import (stdin, stdout),',
]
for stmt in succeed:
compile(stmt, 'tmp', 'exec')
for stmt in fail:
self.assertRaises(SyntaxError, compile, stmt, 'tmp', 'exec')
def test_for_distinct_code_objects(self):
# SF bug 1048870
def f():
f1 = lambda x=1: x
f2 = lambda x=2: x
return f1, f2
f1, f2 = f()
self.assertNotEqual(id(f1.func_code), id(f2.func_code))
def test_lambda_doc(self):
l = lambda: "foo"
self.assertIsNone(l.__doc__)
def test_unicode_encoding(self):
code = u"# -*- coding: utf-8 -*-\npass\n"
self.assertRaises(SyntaxError, compile, code, "tmp", "exec")
def test_subscripts(self):
# SF bug 1448804
# Class to make testing subscript results easy
class str_map(object):
def __init__(self):
self.data = {}
def __getitem__(self, key):
return self.data[str(key)]
def __setitem__(self, key, value):
self.data[str(key)] = value
def __delitem__(self, key):
del self.data[str(key)]
def __contains__(self, key):
return str(key) in self.data
d = str_map()
# Index
d[1] = 1
self.assertEqual(d[1], 1)
d[1] += 1
self.assertEqual(d[1], 2)
del d[1]
self.assertNotIn(1, d)
# Tuple of indices
d[1, 1] = 1
self.assertEqual(d[1, 1], 1)
d[1, 1] += 1
self.assertEqual(d[1, 1], 2)
del d[1, 1]
self.assertNotIn((1, 1), d)
# Simple slice
d[1:2] = 1
self.assertEqual(d[1:2], 1)
d[1:2] += 1
self.assertEqual(d[1:2], 2)
del d[1:2]
self.assertNotIn(slice(1, 2), d)
# Tuple of simple slices
d[1:2, 1:2] = 1
self.assertEqual(d[1:2, 1:2], 1)
d[1:2, 1:2] += 1
self.assertEqual(d[1:2, 1:2], 2)
del d[1:2, 1:2]
self.assertNotIn((slice(1, 2), slice(1, 2)), d)
# Extended slice
d[1:2:3] = 1
self.assertEqual(d[1:2:3], 1)
d[1:2:3] += 1
self.assertEqual(d[1:2:3], 2)
del d[1:2:3]
self.assertNotIn(slice(1, 2, 3), d)
# Tuple of extended slices
d[1:2:3, 1:2:3] = 1
self.assertEqual(d[1:2:3, 1:2:3], 1)
d[1:2:3, 1:2:3] += 1
self.assertEqual(d[1:2:3, 1:2:3], 2)
del d[1:2:3, 1:2:3]
self.assertNotIn((slice(1, 2, 3), slice(1, 2, 3)), d)
# Ellipsis
d[...] = 1
self.assertEqual(d[...], 1)
d[...] += 1
self.assertEqual(d[...], 2)
del d[...]
self.assertNotIn(Ellipsis, d)
# Tuple of Ellipses
d[..., ...] = 1
self.assertEqual(d[..., ...], 1)
d[..., ...] += 1
self.assertEqual(d[..., ...], 2)
del d[..., ...]
self.assertNotIn((Ellipsis, Ellipsis), d)
def test_mangling(self):
class A:
def f():
__mangled = 1
__not_mangled__ = 2
import __mangled_mod
import __package__.module
self.assertIn("_A__mangled", A.f.func_code.co_varnames)
self.assertIn("__not_mangled__", A.f.func_code.co_varnames)
self.assertIn("_A__mangled_mod", A.f.func_code.co_varnames)
self.assertIn("__package__", A.f.func_code.co_varnames)
def test_compile_ast(self):
fname = __file__
if fname.lower().endswith(('pyc', 'pyo')):
fname = fname[:-1]
with open(fname, 'r') as f:
fcontents = f.read()
sample_code = [
['<assign>', 'x = 5'],
['<print1>', 'print 1'],
['<printv>', 'print v'],
['<printTrue>', 'print True'],
['<printList>', 'print []'],
['<ifblock>', """if True:\n pass\n"""],
['<forblock>', """for n in [1, 2, 3]:\n print n\n"""],
['<deffunc>', """def foo():\n pass\nfoo()\n"""],
[fname, fcontents],
]
for fname, code in sample_code:
co1 = compile(code, '%s1' % fname, 'exec')
ast = compile(code, '%s2' % fname, 'exec', _ast.PyCF_ONLY_AST)
self.assertTrue(type(ast) == _ast.Module)
co2 = compile(ast, '%s3' % fname, 'exec')
self.assertEqual(co1, co2)
# the code object's filename comes from the second compilation step
self.assertEqual(co2.co_filename, '%s3' % fname)
# raise exception when node type doesn't match with compile mode
co1 = compile('print 1', '<string>', 'exec', _ast.PyCF_ONLY_AST)
self.assertRaises(TypeError, compile, co1, '<ast>', 'eval')
# raise exception when node type is no start node
self.assertRaises(TypeError, compile, _ast.If(), '<ast>', 'exec')
# raise exception when node has invalid children
ast = _ast.Module()
ast.body = [_ast.BoolOp()]
self.assertRaises(TypeError, compile, ast, '<ast>', 'exec')
def test_main():
test_support.run_unittest(TestSpecifics)
if __name__ == "__main__":
test_main()
| mit |
Cryptophobia/ansible | lib/ansible/parsing/yaml/dumper.py | 46 | 1780 | # (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import yaml
from ansible.compat.six import PY3
from ansible.parsing.yaml.objects import AnsibleUnicode, AnsibleSequence, AnsibleMapping
from ansible.vars.hostvars import HostVars
class AnsibleDumper(yaml.SafeDumper):
'''
A simple stub class that allows us to add representers
for our overridden object types.
'''
pass
def represent_hostvars(self, data):
return self.represent_dict(dict(data))
if PY3:
represent_unicode = yaml.representer.SafeRepresenter.represent_str
else:
represent_unicode = yaml.representer.SafeRepresenter.represent_unicode
AnsibleDumper.add_representer(
AnsibleUnicode,
represent_unicode,
)
AnsibleDumper.add_representer(
HostVars,
represent_hostvars,
)
AnsibleDumper.add_representer(
AnsibleSequence,
yaml.representer.SafeRepresenter.represent_list,
)
AnsibleDumper.add_representer(
AnsibleMapping,
yaml.representer.SafeRepresenter.represent_dict,
)
| gpl-3.0 |
ckarademir/foursquared.eclair | util/oget.py | 262 | 3416 | #!/usr/bin/python
"""
Pull an OAuth-protected page from foursquare.
Expects ~/.oget to contain (one on each line):
CONSUMER_KEY
CONSUMER_KEY_SECRET
USERNAME
PASSWORD
optionally followed by OAUTH_TOKEN and OAUTH_TOKEN_SECRET, which the script
writes back automatically after the first successful auth exchange.
Don't forget to chmod 600 the file!
"""
import httplib
import os
import re
import sys
import urllib
import urllib2
import urlparse
import user
from xml.dom import pulldom
from xml.dom import minidom
import oauth
"""From: http://groups.google.com/group/foursquare-api/web/oauth
@consumer = OAuth::Consumer.new("consumer_token","consumer_secret", {
:site => "http://foursquare.com",
:scheme => :header,
:http_method => :post,
:request_token_path => "/oauth/request_token",
:access_token_path => "/oauth/access_token",
:authorize_path => "/oauth/authorize"
})
"""
SERVER = 'api.foursquare.com:80'
CONTENT_TYPE_HEADER = {'Content-Type': 'application/x-www-form-urlencoded'}
SIGNATURE_METHOD = oauth.OAuthSignatureMethod_HMAC_SHA1()
AUTHEXCHANGE_URL = 'http://api.foursquare.com/v1/authexchange'
def parse_auth_response(auth_response):
return (
re.search('<oauth_token>(.*)</oauth_token>', auth_response).groups()[0],
re.search('<oauth_token_secret>(.*)</oauth_token_secret>',
auth_response).groups()[0]
)
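Given a response body shaped like the one this script expects, `parse_auth_response` pulls the token pair out with plain regexes. The sample XML below is invented for illustration; note the greedy `.*` is only safe because each closing tag occurs exactly once in the body:

```python
import re

sample = ("<credentials><oauth_token>abc123</oauth_token>"
          "<oauth_token_secret>s3cret</oauth_token_secret></credentials>")

token = re.search('<oauth_token>(.*)</oauth_token>', sample).groups()[0]
secret = re.search('<oauth_token_secret>(.*)</oauth_token_secret>', sample).groups()[0]
print(token, secret)  # abc123 s3cret
```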
def create_signed_oauth_request(username, password, consumer):
oauth_request = oauth.OAuthRequest.from_consumer_and_token(
consumer, http_method='POST', http_url=AUTHEXCHANGE_URL,
parameters=dict(fs_username=username, fs_password=password))
oauth_request.sign_request(SIGNATURE_METHOD, consumer, None)
return oauth_request
def main():
url = urlparse.urlparse(sys.argv[1])
# Nevermind that the query can have repeated keys.
parameters = dict(urlparse.parse_qsl(url.query))
password_file = open(os.path.join(user.home, '.oget'))
lines = [line.strip() for line in password_file.readlines()]
if len(lines) == 4:
cons_key, cons_key_secret, username, password = lines
access_token = None
else:
cons_key, cons_key_secret, username, password, token, secret = lines
access_token = oauth.OAuthToken(token, secret)
consumer = oauth.OAuthConsumer(cons_key, cons_key_secret)
if not access_token:
oauth_request = create_signed_oauth_request(username, password, consumer)
connection = httplib.HTTPConnection(SERVER)
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
connection.request(oauth_request.http_method, AUTHEXCHANGE_URL,
body=oauth_request.to_postdata(), headers=headers)
auth_response = connection.getresponse().read()
token = parse_auth_response(auth_response)
access_token = oauth.OAuthToken(*token)
open(os.path.join(user.home, '.oget'), 'w').write('\n'.join((
cons_key, cons_key_secret, username, password, token[0], token[1])))
oauth_request = oauth.OAuthRequest.from_consumer_and_token(consumer,
access_token, http_method='POST', http_url=url.geturl(),
parameters=parameters)
oauth_request.sign_request(SIGNATURE_METHOD, consumer, access_token)
connection = httplib.HTTPConnection(SERVER)
connection.request(oauth_request.http_method, oauth_request.to_url(),
body=oauth_request.to_postdata(), headers=CONTENT_TYPE_HEADER)
print connection.getresponse().read()
#print minidom.parse(connection.getresponse()).toprettyxml(indent=' ')
if __name__ == '__main__':
main()
| apache-2.0 |
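The script above pulls the OAuth token pair out of the authexchange XML body with two regexes. A minimal, self-contained sketch of that extraction (the `sample` body is hypothetical; the tag names match what `parse_auth_response` expects):

```python
import re

# Hypothetical sample of the XML returned by the authexchange endpoint.
sample = (
    "<credentials>"
    "<oauth_token>abc123</oauth_token>"
    "<oauth_token_secret>s3cret</oauth_token_secret>"
    "</credentials>"
)

def parse_auth_response(auth_response):
    # Same regex-based extraction as in the script above.
    return (
        re.search('<oauth_token>(.*)</oauth_token>',
                  auth_response).groups()[0],
        re.search('<oauth_token_secret>(.*)</oauth_token_secret>',
                  auth_response).groups()[0],
    )

token, secret = parse_auth_response(sample)
print(token, secret)  # abc123 s3cret
```

Regex scraping works here only because the response is a tiny, fixed-shape document; a real XML parser would be the safer choice if the payload ever grew.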
Azure/azure-sdk-for-python | sdk/storage/azure-mgmt-storage/azure/mgmt/storage/v2017_06_01/models/_storage_management_enums.py | 1 | 6553 | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from enum import Enum, EnumMeta
from six import with_metaclass
class _CaseInsensitiveEnumMeta(EnumMeta):
def __getitem__(self, name):
return super().__getitem__(name.upper())
def __getattr__(cls, name):
"""Return the enum member matching `name`
We use __getattr__ instead of descriptors or inserting into the enum
class' __dict__ in order to support `name` and `value` being both
properties for enum members (which live in the class' __dict__) and
enum members themselves.
"""
try:
return cls._member_map_[name.upper()]
except KeyError:
raise AttributeError(name)
class AccessTier(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Required for storage accounts where kind = BlobStorage. The access tier used for billing.
"""
HOT = "Hot"
COOL = "Cool"
class AccountStatus(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the status indicating whether the primary location of the storage account is available or
unavailable.
"""
AVAILABLE = "available"
UNAVAILABLE = "unavailable"
class Bypass(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Specifies whether traffic is bypassed for Logging/Metrics/AzureServices. Possible values are
any combination of Logging|Metrics|AzureServices (For example, "Logging, Metrics"), or None to
bypass none of those traffics.
"""
NONE = "None"
LOGGING = "Logging"
METRICS = "Metrics"
AZURE_SERVICES = "AzureServices"
class DefaultAction(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Specifies the default action of allow or deny when no other rules match.
"""
ALLOW = "Allow"
DENY = "Deny"
class HttpProtocol(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The protocol permitted for a request made with the account SAS.
"""
HTTPS_HTTP = "https,http"
HTTPS = "https"
class KeyPermission(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Permissions for the key -- read-only or full permissions.
"""
READ = "Read"
FULL = "Full"
class KeySource(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The encryption keySource (provider). Possible values (case-insensitive): Microsoft.Storage,
Microsoft.Keyvault
"""
MICROSOFT_STORAGE = "Microsoft.Storage"
MICROSOFT_KEYVAULT = "Microsoft.Keyvault"
class Kind(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Indicates the type of storage account.
"""
STORAGE = "Storage"
BLOB_STORAGE = "BlobStorage"
class Permissions(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The signed permissions for the account SAS. Possible values include: Read (r), Write (w),
Delete (d), List (l), Add (a), Create (c), Update (u) and Process (p).
"""
R = "r"
D = "d"
W = "w"
L = "l"
A = "a"
C = "c"
U = "u"
P = "p"
class ProvisioningState(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the status of the storage account at the time the operation was called.
"""
CREATING = "Creating"
RESOLVING_DNS = "ResolvingDNS"
SUCCEEDED = "Succeeded"
class Reason(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the reason that a storage account name could not be used. The Reason element is only
returned if NameAvailable is false.
"""
ACCOUNT_NAME_INVALID = "AccountNameInvalid"
ALREADY_EXISTS = "AlreadyExists"
class ReasonCode(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The reason for the restriction. As of now this can be "QuotaId" or
"NotAvailableForSubscription". Quota Id is set when the SKU has requiredQuotas parameter as the
subscription does not belong to that quota. The "NotAvailableForSubscription" is related to
capacity at DC.
"""
QUOTA_ID = "QuotaId"
NOT_AVAILABLE_FOR_SUBSCRIPTION = "NotAvailableForSubscription"
class Services(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The signed services accessible with the account SAS. Possible values include: Blob (b), Queue
(q), Table (t), File (f).
"""
B = "b"
Q = "q"
T = "t"
F = "f"
class SignedResource(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The signed services accessible with the service SAS. Possible values include: Blob (b),
Container (c), File (f), Share (s).
"""
B = "b"
C = "c"
F = "f"
S = "s"
class SignedResourceTypes(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The signed resource types that are accessible with the account SAS. Service (s): Access to
service-level APIs; Container (c): Access to container-level APIs; Object (o): Access to
object-level APIs for blobs, queue messages, table entities, and files.
"""
S = "s"
C = "c"
O = "o"
class SkuName(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets or sets the sku name. Required for account creation; optional for update. Note that in
older versions, sku name was called accountType.
"""
STANDARD_LRS = "Standard_LRS"
STANDARD_GRS = "Standard_GRS"
STANDARD_RAGRS = "Standard_RAGRS"
STANDARD_ZRS = "Standard_ZRS"
PREMIUM_LRS = "Premium_LRS"
class SkuTier(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the sku tier. This is based on the SKU name.
"""
STANDARD = "Standard"
PREMIUM = "Premium"
class State(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the state of virtual network rule.
"""
PROVISIONING = "provisioning"
DEPROVISIONING = "deprovisioning"
SUCCEEDED = "succeeded"
FAILED = "failed"
NETWORK_SOURCE_DELETED = "networkSourceDeleted"
class UsageUnit(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Gets the unit of measurement.
"""
COUNT = "Count"
BYTES = "Bytes"
SECONDS = "Seconds"
PERCENT = "Percent"
COUNTS_PER_SECOND = "CountsPerSecond"
BYTES_PER_SECOND = "BytesPerSecond"
| mit |
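The `_CaseInsensitiveEnumMeta` metaclass above makes both `Enum["name"]` item access and `Enum.name` attribute access fall back to the upper-cased member name. A condensed, runnable sketch of the same pattern (only `AccessTier` is reproduced; the other enums behave identically):

```python
from enum import Enum, EnumMeta

class _CaseInsensitiveEnumMeta(EnumMeta):
    # Item access: upper-case the key before the normal lookup.
    def __getitem__(cls, name):
        return super().__getitem__(name.upper())

    # Attribute access: fired only when normal lookup fails, so
    # AccessTier.HOT stays on the fast path and AccessTier.hot
    # falls through to the member map.
    def __getattr__(cls, name):
        try:
            return cls._member_map_[name.upper()]
        except KeyError:
            raise AttributeError(name)

class AccessTier(str, Enum, metaclass=_CaseInsensitiveEnumMeta):
    HOT = "Hot"
    COOL = "Cool"

print(AccessTier["hot"].name)   # HOT
print(AccessTier.cool.value)    # Cool
```

The `six.with_metaclass` wrapper in the original exists only for Python 2 compatibility; on Python 3 the `metaclass=` keyword shown here is equivalent.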
michaeljohnbarr/django-timezone-utils | tests/test_invalid_timezonefield.py | 1 | 1782 | # ==============================================================================
# IMPORTS
# ==============================================================================
# Python
import pytz
# Django
from django.core.exceptions import ValidationError
from django.test import TestCase
# App
from tests.models import (TZWithBadStringDefault, TZWithLowMaxLength)
# ==============================================================================
# TESTS
# ==============================================================================
class InvalidTimeZoneFieldTestCase(TestCase):
def test_location_max_length(self):
"""If a value is too low, we adjust it for convenience."""
self.assertEqual(
TZWithLowMaxLength._meta.get_field('timezone').max_length,
max(map(len, pytz.all_timezones)),
)
def test_bad_location_default_string(self):
with self.assertRaises(ValidationError):
TZWithBadStringDefault.objects.create()
def test_run_validators(self):
with self.assertRaises(ValidationError):
TZWithLowMaxLength._meta.get_field('timezone').run_validators('Bad')
def test_validate(self):
instance = TZWithLowMaxLength.objects.create(timezone='US/Eastern')
with self.assertRaises(ValidationError):
TZWithLowMaxLength._meta.get_field('timezone').validate(
value='Bad',
model_instance=instance
)
def test_validate_no_error(self):
instance = TZWithLowMaxLength.objects.create(timezone='US/Eastern')
self.assertIsNone(
obj=TZWithLowMaxLength._meta.get_field('timezone').validate(
value='US/Eastern',
model_instance=instance
)
)
| mit |
sivaramakrishnansr/ryu | ryu/services/protocols/bgp/operator/ssh.py | 12 | 16422 | # Copyright (C) 2013 Nippon Telegraph and Telephone Corporation.
# Copyright (C) 2013 YAMAMOTO Takashi <yamamoto at valinux co jp>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# a management cli application.
import logging
import paramiko
import sys
from copy import copy
import os.path
CONF = {
"ssh_port": 4990,
"ssh_host": "localhost",
"ssh_hostkey": None,
"ssh_username": "ryu",
"ssh_password": "ryu",
}
from ryu.lib import hub
from ryu import version
from ryu.services.protocols.bgp.operator.command import Command
from ryu.services.protocols.bgp.operator.command import CommandsResponse
from ryu.services.protocols.bgp.operator.commands.root import RootCmd
from ryu.services.protocols.bgp.operator.internal_api import InternalApi
from ryu.services.protocols.bgp.operator.command import STATUS_OK
from ryu.services.protocols.bgp.base import Activity
LOG = logging.getLogger('bgpspeaker.cli')
class SshServer(paramiko.ServerInterface):
TERM = "ansi"
PROMPT = "bgpd> "
WELCOME = """
Hello, this is Ryu BGP speaker (version %s).
""" % version
class HelpCmd(Command):
help_msg = 'show this help'
command = 'help'
def action(self, params):
return self.parent_cmd.question_mark()[0]
class QuitCmd(Command):
help_msg = 'exit this session'
command = 'quit'
def action(self, params):
self.api.sshserver.end_session()
return CommandsResponse(STATUS_OK, True)
def __init__(self, sock, addr):
super(SshServer, self).__init__()
# tweak InternalApi and RootCmd for non-bgp related commands
self.api = InternalApi(log_handler=logging.StreamHandler(sys.stderr))
setattr(self.api, 'sshserver', self)
self.root = RootCmd(self.api)
self.root.subcommands['help'] = self.HelpCmd
self.root.subcommands['quit'] = self.QuitCmd
transport = paramiko.Transport(sock)
transport.load_server_moduli()
host_key = self._find_ssh_server_key()
transport.add_server_key(host_key)
self.transport = transport
transport.start_server(server=self)
def _find_ssh_server_key(self):
if CONF["ssh_hostkey"]:
return paramiko.RSAKey.from_private_key_file(CONF["ssh_hostkey"])
elif os.path.exists("/etc/ssh_host_rsa_key"):
# OSX
return paramiko.RSAKey.from_private_key_file(
"/etc/ssh_host_rsa_key")
elif os.path.exists("/etc/ssh/ssh_host_rsa_key"):
# Linux
return paramiko.RSAKey.from_private_key_file(
"/etc/ssh/ssh_host_rsa_key")
else:
return paramiko.RSAKey.generate(1024)
def check_auth_none(self, username):
return paramiko.AUTH_SUCCESSFUL
def check_auth_password(self, username, password):
if username == CONF["ssh_username"] and \
password == CONF["ssh_password"]:
return paramiko.AUTH_SUCCESSFUL
return paramiko.AUTH_FAILED
def check_channel_request(self, kind, chanid):
if kind == 'session':
return paramiko.OPEN_SUCCEEDED
return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED
def check_channel_shell_request(self, chan):
hub.spawn(self._handle_shell_request)
return True
def check_channel_pty_request(self, chan, term, width, height,
pixelwidth, pixelheight, modes):
LOG.debug("termtype: %s", term)
self.TERM = term
return True
def check_channel_window_change_request(self, chan, width, height, pwidth,
pheight):
LOG.info("channel window change")
return True
def _is_echoable(self, c):
return not (c < chr(0x20) or c == chr(0x7F))
def _is_enter(self, c):
return c == chr(0x0d)
def _is_eof(self, c):
return c == chr(0x03)
def _is_esc(self, c):
return c == chr(0x1b)
def _is_hist(self, c):
return c == chr(0x10) or c == chr(0x0e)
def _is_del(self, c):
return c == chr(0x04) or c == chr(0x08) or c == chr(0x15) \
or c == chr(0x17) or c == chr(0x0c) or c == chr(0x7f)
def _is_curmov(self, c):
return c == chr(0x01) or c == chr(0x02) or c == chr(0x05) \
or c == chr(0x06)
def _is_cmpl(self, c):
return c == chr(0x09)
def _handle_csi_seq(self):
c = self.chan.recv(1)
if c == 'A':
self._lookup_hist_up()
elif c == 'B':
self._lookup_hist_down()
elif c == 'C':
self._movcursor(self.curpos + 1)
elif c == 'D':
self._movcursor(self.curpos - 1)
else:
LOG.error("unknown CSI sequence. do nothing: %c", c)
def _handle_esc_seq(self):
c = self.chan.recv(1)
if c == '[':
self._handle_csi_seq()
else:
LOG.error("non CSI sequence. do nothing")
def _send_csi_seq(self, cmd):
self.chan.send(b'\x1b[' + cmd)
def _movcursor(self, curpos):
if self.prompted and curpos < len(self.PROMPT):
self.curpos = len(self.PROMPT)
elif self.prompted and curpos > (len(self.PROMPT) + len(self.buf)):
self.curpos = len(self.PROMPT) + len(self.buf)
else:
self._send_csi_seq('%dG' % (curpos + 1))
self.curpos = curpos
def _clearscreen(self, prompt=None):
if not prompt and self.prompted:
prompt = self.PROMPT
# clear screen
self._send_csi_seq('2J')
# move cursor to the top
self._send_csi_seq('d')
# redraw prompt and buf
self._refreshline(prompt=prompt)
def _clearline(self, prompt=None):
if not prompt and self.prompted:
prompt = self.PROMPT
self.prompted = False
self._movcursor(0)
self._send_csi_seq('2K')
if prompt:
self.prompted = True
self.chan.send(prompt)
self._movcursor(len(prompt))
self.buf = []
def _refreshline(self, prompt=None):
if not prompt and self.prompted:
prompt = self.PROMPT
buf = copy(self.buf)
curpos = copy(self.curpos)
self._clearline(prompt=prompt)
self.chan.send(''.join(buf))
self.buf = buf
self.curpos = curpos
self._movcursor(curpos)
def _refreshnewline(self, prompt=None):
if not prompt and self.prompted:
prompt = self.PROMPT
buf = copy(self.buf)
curpos = copy(self.curpos)
self._startnewline(prompt)
self.chan.send(''.join(buf))
self.buf = buf
self.curpos = curpos
self._movcursor(curpos)
def _startnewline(self, prompt=None, buf=''):
if not prompt and self.prompted:
prompt = self.PROMPT
if isinstance(buf, str):
buf = list(buf)
if self.chan:
self.buf = buf
if prompt:
self.chan.send('\n\r' + prompt + ''.join(buf))
self.curpos = len(prompt) + len(buf)
self.prompted = True
else:
self.chan.send('\n\r' + ''.join(buf))
self.curpos = len(buf)
self.prompted = False
def _lookup_hist_up(self):
if len(self.history) == 0:
return
self.buf = self.history[self.histindex]
self.curpos = self.promptlen + len(self.buf)
self._refreshline()
if self.histindex + 1 < len(self.history):
self.histindex += 1
def _lookup_hist_down(self):
if self.histindex > 0:
self.histindex -= 1
self.buf = self.history[self.histindex]
self.curpos = self.promptlen + len(self.buf)
self._refreshline()
else:
self._clearline()
def _do_cmpl(self, buf, is_exec=False):
cmpleter = self.root
is_spaced = buf[-1] == ' ' if len(buf) > 0 else False
cmds = [tkn.strip() for tkn in ''.join(buf).split()]
ret = []
for i, cmd in enumerate(cmds):
subcmds = cmpleter.subcommands
matches = [x for x in subcmds.keys() if x.startswith(cmd)]
if len(matches) == 1:
cmpled_cmd = matches[0]
cmpleter = subcmds[cmpled_cmd](self.api)
if is_exec:
ret.append(cmpled_cmd)
continue
if (i + 1) == len(cmds):
if is_spaced:
result, cmd = cmpleter('?')
result = result.value.replace('\n', '\n\r').rstrip()
self.prompted = False
buf = copy(buf)
self._startnewline(buf=result)
self.prompted = True
self._startnewline(buf=buf)
else:
self.buf = buf[:(-1 * len(cmd))] + \
list(cmpled_cmd + ' ')
self.curpos += len(cmpled_cmd) - len(cmd) + 1
self._refreshline()
else:
self.prompted = False
buf = copy(self.buf)
if len(matches) == 0:
if cmpleter.param_help_msg:
self.prompted = True
ret.append(cmd)
continue
else:
self._startnewline(buf='Error: Not implemented')
else:
if (i + 1) < len(cmds):
self._startnewline(buf='Error: Ambiguous command')
else:
self._startnewline(buf=', '.join(matches))
ret = False
self.prompted = True
if not is_exec:
self._startnewline(buf=buf)
break
return ret
def _execute_cmd(self, cmds):
result, cmd = self.root(cmds)
LOG.debug("result: %s", result)
self.prompted = False
self._startnewline()
output = result.value.replace('\n', '\n\r').rstrip()
self.chan.send(output)
self.prompted = True
return result.status
def end_session(self):
self._startnewline(prompt=False, buf='bye.\n\r')
self.chan.close()
def _handle_shell_request(self):
LOG.info("session start")
chan = self.transport.accept(20)
if not chan:
LOG.info("transport.accept timed out")
return
self.chan = chan
self.buf = []
self.curpos = 0
self.history = []
self.histindex = 0
self.prompted = True
self.chan.send(self.WELCOME)
self._startnewline()
while True:
c = self.chan.recv(1)
if len(c) == 0:
break
LOG.debug("ord:%d, hex:0x%x", ord(c), ord(c))
self.promptlen = len(self.PROMPT) if self.prompted else 0
if c == '?':
cmpleter = self.root
cmds = [tkn.strip() for tkn in ''.join(self.buf).split()]
for i, cmd in enumerate(cmds):
subcmds = cmpleter.subcommands
matches = [x for x in subcmds.keys() if x.startswith(cmd)]
if len(matches) == 1:
cmpled_cmd = matches[0]
cmpleter = subcmds[cmpled_cmd](self.api)
result, cmd = cmpleter('?')
result = result.value.replace('\n', '\n\r').rstrip()
self.prompted = False
buf = copy(self.buf)
self._startnewline(buf=result)
self.prompted = True
self._startnewline(buf=buf)
elif self._is_echoable(c):
self.buf.insert(self.curpos - self.promptlen, c)
self.curpos += 1
self._refreshline()
elif self._is_esc(c):
self._handle_esc_seq()
elif self._is_eof(c):
self.end_session()
elif self._is_curmov(c):
# <C-a>
if c == chr(0x01):
self._movcursor(self.promptlen)
# <C-b>
elif c == chr(0x02):
self._movcursor(self.curpos - 1)
# <C-e>
elif c == chr(0x05):
self._movcursor(self.promptlen + len(self.buf))
# <C-f>
elif c == chr(0x06):
self._movcursor(self.curpos + 1)
else:
LOG.error("unknown cursor move cmd.")
continue
elif self._is_hist(c):
# <C-p>
if c == chr(0x10):
self._lookup_hist_up()
# <C-n>
elif c == chr(0x0e):
self._lookup_hist_down()
elif self._is_del(c):
# <C-d>
if c == chr(0x04):
if self.curpos < (self.promptlen + len(self.buf)):
self.buf.pop(self.curpos - self.promptlen)
self._refreshline()
# <C-h> or delete
elif c == chr(0x08) or c == chr(0x7f):
if self.curpos > self.promptlen:
self.buf.pop(self.curpos - self.promptlen - 1)
self.curpos -= 1
self._refreshline()
# <C-u>
elif c == chr(0x15):
self._clearline()
# <C-w>
elif c == chr(0x17):
pos = self.curpos - self.promptlen
i = pos
flag = False
for c in reversed(self.buf[:pos]):
if flag and c == ' ':
break
if c != ' ':
flag = True
i -= 1
del self.buf[i:pos]
self.curpos = self.promptlen + i
self._refreshline()
# <C-l>
elif c == chr(0x0c):
self._clearscreen()
elif self._is_cmpl(c):
self._do_cmpl(self.buf)
elif self._is_enter(c):
if len(''.join(self.buf).strip()) != 0:
# cmd line interpretation
cmds = self._do_cmpl(self.buf, is_exec=True)
if cmds:
self.history.insert(0, self.buf)
self.histindex = 0
self._execute_cmd(cmds)
else:
LOG.debug("blank buf. just start a new line.")
self._startnewline()
LOG.debug("curpos: %d, buf: %s, prompted: %s", self.curpos,
self.buf, self.prompted)
LOG.info("session end")
class SshServerFactory(object):
def __init__(self, *args, **kwargs):
super(SshServerFactory, self).__init__(*args, **kwargs)
def streamserver_handle(self, sock, addr):
SshServer(sock, addr)
class Cli(Activity):
def __init__(self):
super(Cli, self).__init__()
def _run(self, *args, **kwargs):
for k, v in kwargs.items():
if k in CONF:
CONF[k] = v
LOG.info("starting ssh server at %s:%d", CONF["ssh_host"],
CONF["ssh_port"])
factory = SshServerFactory()
server = hub.StreamServer((CONF["ssh_host"], CONF["ssh_port"]),
factory.streamserver_handle)
server.serve_forever()
SSH_CLI_CONTROLLER = Cli()
| apache-2.0 |
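The terminal handling in `SshServer` above is built on ANSI CSI escape sequences: `_send_csi_seq` prefixes a command with `ESC [`, and methods like `_movcursor`, `_clearline`, and `_clearscreen` pick the command letter. A minimal sketch of the byte strings involved (plain `str` here for clarity; the server itself writes them to the SSH channel):

```python
# CSI = ESC (0x1b) followed by '['; the trailing letter selects the action.
ESC = "\x1b"
CSI = ESC + "["

def move_to_column(col):
    # 'G' moves the cursor to an absolute (1-based) column,
    # matching _movcursor's '%dG'.
    return CSI + "%dG" % col

def clear_line():
    # '2K' erases the whole current line, as in _clearline.
    return CSI + "2K"

def clear_screen():
    # '2J' erases the display, as in _clearscreen.
    return CSI + "2J"

print(repr(move_to_column(6)))  # '\x1b[6G'
```

Arrow keys arrive over the same channel as CSI input (`ESC [ A` through `ESC [ D`), which is what `_handle_csi_seq` dispatches on.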
SergeyPirogov/ghost | node_modules/grunt-docker/node_modules/docker/node_modules/pygmentize-bundled/vendor/pygments/pygments/lexers/_sourcemodbuiltins.py | 274 | 21929 | # -*- coding: utf-8 -*-
"""
pygments.lexers._sourcemodbuiltins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This file contains the names of SourceMod functions.
It is able to re-generate itself.
Do not edit the FUNCTIONS list by hand.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
FUNCTIONS = ['TopMenuHandler',
'CreateTopMenu',
'LoadTopMenuConfig',
'AddToTopMenu',
'GetTopMenuInfoString',
'GetTopMenuObjName',
'RemoveFromTopMenu',
'DisplayTopMenu',
'FindTopMenuCategory',
'OnAdminMenuCreated',
'OnAdminMenuReady',
'GetAdminTopMenu',
'AddTargetsToMenu',
'AddTargetsToMenu2',
'RedisplayAdminMenu',
'TEHook',
'AddTempEntHook',
'RemoveTempEntHook',
'TE_Start',
'TE_IsValidProp',
'TE_WriteNum',
'TE_ReadNum',
'TE_WriteFloat',
'TE_ReadFloat',
'TE_WriteVector',
'TE_ReadVector',
'TE_WriteAngles',
'TE_WriteFloatArray',
'TE_Send',
'TE_WriteEncodedEnt',
'TE_SendToAll',
'TE_SendToClient',
'CreateKeyValues',
'KvSetString',
'KvSetNum',
'KvSetUInt64',
'KvSetFloat',
'KvSetColor',
'KvSetVector',
'KvGetString',
'KvGetNum',
'KvGetFloat',
'KvGetColor',
'KvGetUInt64',
'KvGetVector',
'KvJumpToKey',
'KvJumpToKeySymbol',
'KvGotoFirstSubKey',
'KvGotoNextKey',
'KvSavePosition',
'KvDeleteKey',
'KvDeleteThis',
'KvGoBack',
'KvRewind',
'KvGetSectionName',
'KvSetSectionName',
'KvGetDataType',
'KeyValuesToFile',
'FileToKeyValues',
'KvSetEscapeSequences',
'KvNodesInStack',
'KvCopySubkeys',
'KvFindKeyById',
'KvGetNameSymbol',
'KvGetSectionSymbol',
'TE_SetupSparks',
'TE_SetupSmoke',
'TE_SetupDust',
'TE_SetupMuzzleFlash',
'TE_SetupMetalSparks',
'TE_SetupEnergySplash',
'TE_SetupArmorRicochet',
'TE_SetupGlowSprite',
'TE_SetupExplosion',
'TE_SetupBloodSprite',
'TE_SetupBeamRingPoint',
'TE_SetupBeamPoints',
'TE_SetupBeamLaser',
'TE_SetupBeamRing',
'TE_SetupBeamFollow',
'HookEvent',
'HookEventEx',
'UnhookEvent',
'CreateEvent',
'FireEvent',
'CancelCreatedEvent',
'GetEventBool',
'SetEventBool',
'GetEventInt',
'SetEventInt',
'GetEventFloat',
'SetEventFloat',
'GetEventString',
'SetEventString',
'GetEventName',
'SetEventBroadcast',
'GetUserMessageId',
'GetUserMessageName',
'StartMessage',
'StartMessageEx',
'EndMessage',
'MsgHook',
'MsgPostHook',
'HookUserMessage',
'UnhookUserMessage',
'StartMessageAll',
'StartMessageOne',
'InactivateClient',
'ReconnectClient',
'GetMaxEntities',
'GetEntityCount',
'IsValidEntity',
'IsValidEdict',
'IsEntNetworkable',
'CreateEdict',
'RemoveEdict',
'GetEdictFlags',
'SetEdictFlags',
'GetEdictClassname',
'GetEntityNetClass',
'ChangeEdictState',
'GetEntData',
'SetEntData',
'GetEntDataFloat',
'SetEntDataFloat',
'GetEntDataEnt2',
'SetEntDataEnt2',
'GetEntDataVector',
'SetEntDataVector',
'GetEntDataString',
'SetEntDataString',
'FindSendPropOffs',
'FindSendPropInfo',
'FindDataMapOffs',
'GetEntSendPropOffs',
'GetEntProp',
'SetEntProp',
'GetEntPropFloat',
'SetEntPropFloat',
'GetEntPropEnt',
'SetEntPropEnt',
'GetEntPropVector',
'SetEntPropVector',
'GetEntPropString',
'SetEntPropString',
'GetEntPropArraySize',
'GetEntDataArray',
'SetEntDataArray',
'GetEntityClassname',
'float',
'FloatMul',
'FloatDiv',
'FloatAdd',
'FloatSub',
'FloatFraction',
'RoundToZero',
'RoundToCeil',
'RoundToFloor',
'RoundToNearest',
'FloatCompare',
'SquareRoot',
'Pow',
'Exponential',
'Logarithm',
'Sine',
'Cosine',
'Tangent',
'FloatAbs',
'ArcTangent',
'ArcCosine',
'ArcSine',
'ArcTangent2',
'RoundFloat',
'operator%',
'DegToRad',
'RadToDeg',
'GetURandomInt',
'GetURandomFloat',
'SetURandomSeed',
'SetURandomSeedSimple',
'RemovePlayerItem',
'GivePlayerItem',
'GetPlayerWeaponSlot',
'IgniteEntity',
'ExtinguishEntity',
'TeleportEntity',
'ForcePlayerSuicide',
'SlapPlayer',
'FindEntityByClassname',
'GetClientEyeAngles',
'CreateEntityByName',
'DispatchSpawn',
'DispatchKeyValue',
'DispatchKeyValueFloat',
'DispatchKeyValueVector',
'GetClientAimTarget',
'GetTeamCount',
'GetTeamName',
'GetTeamScore',
'SetTeamScore',
'GetTeamClientCount',
'SetEntityModel',
'GetPlayerDecalFile',
'GetServerNetStats',
'EquipPlayerWeapon',
'ActivateEntity',
'SetClientInfo',
'SetClientListeningFlags',
'GetClientListeningFlags',
'SetListenOverride',
'GetListenOverride',
'IsClientMuted',
'TR_GetPointContents',
'TR_GetPointContentsEnt',
'TR_TraceRay',
'TR_TraceHull',
'TR_TraceRayFilter',
'TR_TraceHullFilter',
'TR_TraceRayEx',
'TR_TraceHullEx',
'TR_TraceRayFilterEx',
'TR_TraceHullFilterEx',
'TR_GetFraction',
'TR_GetEndPosition',
'TR_GetEntityIndex',
'TR_DidHit',
'TR_GetHitGroup',
'TR_GetPlaneNormal',
'TR_PointOutsideWorld',
'SortIntegers',
'SortFloats',
'SortStrings',
'SortFunc1D',
'SortCustom1D',
'SortCustom2D',
'SortADTArray',
'SortFuncADTArray',
'SortADTArrayCustom',
'CompileRegex',
'MatchRegex',
'GetRegexSubString',
'SimpleRegexMatch',
'TF2_GetPlayerClass',
'TF2_SetPlayerClass',
'TF2_GetPlayerResourceData',
'TF2_SetPlayerResourceData',
'TF2_RemoveWeaponSlot',
'TF2_RemoveAllWeapons',
'TF2_IsPlayerInCondition',
'TF2_GetObjectType',
'TF2_GetObjectMode',
'NominateMap',
'RemoveNominationByMap',
'RemoveNominationByOwner',
'GetExcludeMapList',
'GetNominatedMapList',
'CanMapChooserStartVote',
'InitiateMapChooserVote',
'HasEndOfMapVoteFinished',
'EndOfMapVoteEnabled',
'OnNominationRemoved',
'OnMapVoteStarted',
'CreateTimer',
'KillTimer',
'TriggerTimer',
'GetTickedTime',
'GetMapTimeLeft',
'GetMapTimeLimit',
'ExtendMapTimeLimit',
'GetTickInterval',
'OnMapTimeLeftChanged',
'IsServerProcessing',
'CreateDataTimer',
'ByteCountToCells',
'CreateArray',
'ClearArray',
'CloneArray',
'ResizeArray',
'GetArraySize',
'PushArrayCell',
'PushArrayString',
'PushArrayArray',
'GetArrayCell',
'GetArrayString',
'GetArrayArray',
'SetArrayCell',
'SetArrayString',
'SetArrayArray',
'ShiftArrayUp',
'RemoveFromArray',
'SwapArrayItems',
'FindStringInArray',
'FindValueInArray',
'ProcessTargetString',
'ReplyToTargetError',
'MultiTargetFilter',
'AddMultiTargetFilter',
'RemoveMultiTargetFilter',
'OnBanClient',
'OnBanIdentity',
'OnRemoveBan',
'BanClient',
'BanIdentity',
'RemoveBan',
'CreateTrie',
'SetTrieValue',
'SetTrieArray',
'SetTrieString',
'GetTrieValue',
'GetTrieArray',
'GetTrieString',
'RemoveFromTrie',
'ClearTrie',
'GetTrieSize',
'GetFunctionByName',
'CreateGlobalForward',
'CreateForward',
'GetForwardFunctionCount',
'AddToForward',
'RemoveFromForward',
'RemoveAllFromForward',
'Call_StartForward',
'Call_StartFunction',
'Call_PushCell',
'Call_PushCellRef',
'Call_PushFloat',
'Call_PushFloatRef',
'Call_PushArray',
'Call_PushArrayEx',
'Call_PushString',
'Call_PushStringEx',
'Call_Finish',
'Call_Cancel',
'NativeCall',
'CreateNative',
'ThrowNativeError',
'GetNativeStringLength',
'GetNativeString',
'SetNativeString',
'GetNativeCell',
'GetNativeCellRef',
'SetNativeCellRef',
'GetNativeArray',
'SetNativeArray',
'FormatNativeString',
'OnRebuildAdminCache',
'DumpAdminCache',
'AddCommandOverride',
'GetCommandOverride',
'UnsetCommandOverride',
'CreateAdmGroup',
'FindAdmGroup',
'SetAdmGroupAddFlag',
'GetAdmGroupAddFlag',
'GetAdmGroupAddFlags',
'SetAdmGroupImmuneFrom',
'GetAdmGroupImmuneCount',
'GetAdmGroupImmuneFrom',
'AddAdmGroupCmdOverride',
'GetAdmGroupCmdOverride',
'RegisterAuthIdentType',
'CreateAdmin',
'GetAdminUsername',
'BindAdminIdentity',
'SetAdminFlag',
'GetAdminFlag',
'GetAdminFlags',
'AdminInheritGroup',
'GetAdminGroupCount',
'GetAdminGroup',
'SetAdminPassword',
'GetAdminPassword',
'FindAdminByIdentity',
'RemoveAdmin',
'FlagBitsToBitArray',
'FlagBitArrayToBits',
'FlagArrayToBits',
'FlagBitsToArray',
'FindFlagByName',
'FindFlagByChar',
'FindFlagChar',
'ReadFlagString',
'CanAdminTarget',
'CreateAuthMethod',
'SetAdmGroupImmunityLevel',
'GetAdmGroupImmunityLevel',
'SetAdminImmunityLevel',
'GetAdminImmunityLevel',
'FlagToBit',
'BitToFlag',
'ServerCommand',
'ServerCommandEx',
'InsertServerCommand',
'ServerExecute',
'ClientCommand',
'FakeClientCommand',
'FakeClientCommandEx',
'PrintToServer',
'PrintToConsole',
'ReplyToCommand',
'GetCmdReplySource',
'SetCmdReplySource',
'IsChatTrigger',
'ShowActivity2',
'ShowActivity',
'ShowActivityEx',
'FormatActivitySource',
'SrvCmd',
'RegServerCmd',
'ConCmd',
'RegConsoleCmd',
'RegAdminCmd',
'GetCmdArgs',
'GetCmdArg',
'GetCmdArgString',
'CreateConVar',
'FindConVar',
'ConVarChanged',
'HookConVarChange',
'UnhookConVarChange',
'GetConVarBool',
'SetConVarBool',
'GetConVarInt',
'SetConVarInt',
'GetConVarFloat',
'SetConVarFloat',
'GetConVarString',
'SetConVarString',
'ResetConVar',
'GetConVarDefault',
'GetConVarFlags',
'SetConVarFlags',
'GetConVarBounds',
'SetConVarBounds',
'GetConVarName',
'QueryClientConVar',
'GetCommandIterator',
'ReadCommandIterator',
'CheckCommandAccess',
'CheckAccess',
'IsValidConVarChar',
'GetCommandFlags',
'SetCommandFlags',
'FindFirstConCommand',
'FindNextConCommand',
'SendConVarValue',
'AddServerTag',
'RemoveServerTag',
'CommandListener',
'AddCommandListener',
'RemoveCommandListener',
'TF2_IgnitePlayer',
'TF2_RespawnPlayer',
'TF2_RegeneratePlayer',
'TF2_AddCondition',
'TF2_RemoveCondition',
'TF2_SetPlayerPowerPlay',
'TF2_DisguisePlayer',
'TF2_RemovePlayerDisguise',
'TF2_StunPlayer',
'TF2_MakeBleed',
'TF2_GetResourceEntity',
'TF2_GetClass',
'TF2_CalcIsAttackCritical',
'TF2_OnIsHolidayActive',
'TF2_IsPlayerInDuel',
'TF2_OnConditionAdded',
'TF2_OnConditionRemoved',
'TF2_OnWaitingForPlayersStart',
'TF2_OnWaitingForPlayersEnd',
'SQL_Connect',
'SQL_DefConnect',
'SQL_ConnectCustom',
'SQLite_UseDatabase',
'SQL_CheckConfig',
'SQL_GetDriver',
'SQL_ReadDriver',
'SQL_GetDriverIdent',
'SQL_GetDriverProduct',
'SQL_GetAffectedRows',
'SQL_GetInsertId',
'SQL_GetError',
'SQL_EscapeString',
'SQL_QuoteString',
'SQL_FastQuery',
'SQL_Query',
'SQL_PrepareQuery',
'SQL_FetchMoreResults',
'SQL_HasResultSet',
'SQL_GetRowCount',
'SQL_GetFieldCount',
'SQL_FieldNumToName',
'SQL_FieldNameToNum',
'SQL_FetchRow',
'SQL_MoreRows',
'SQL_Rewind',
'SQL_FetchString',
'SQL_FetchFloat',
'SQL_FetchInt',
'SQL_IsFieldNull',
'SQL_FetchSize',
'SQL_BindParamInt',
'SQL_BindParamFloat',
'SQL_BindParamString',
'SQL_Execute',
'SQL_LockDatabase',
'SQL_UnlockDatabase',
'SQLTCallback',
'SQL_IsSameConnection',
'SQL_TConnect',
'SQL_TQuery',
'CloseHandle',
'CloneHandle',
'MenuHandler',
'CreateMenu',
'DisplayMenu',
'DisplayMenuAtItem',
'AddMenuItem',
'InsertMenuItem',
'RemoveMenuItem',
'RemoveAllMenuItems',
'GetMenuItem',
'GetMenuSelectionPosition',
'GetMenuItemCount',
'SetMenuPagination',
'GetMenuPagination',
'GetMenuStyle',
'SetMenuTitle',
'GetMenuTitle',
'CreatePanelFromMenu',
'GetMenuExitButton',
'SetMenuExitButton',
'GetMenuExitBackButton',
'SetMenuExitBackButton',
'SetMenuNoVoteButton',
'CancelMenu',
'GetMenuOptionFlags',
'SetMenuOptionFlags',
'IsVoteInProgress',
'CancelVote',
'VoteMenu',
'VoteMenuToAll',
'VoteHandler',
'SetVoteResultCallback',
'CheckVoteDelay',
'IsClientInVotePool',
'RedrawClientVoteMenu',
'GetMenuStyleHandle',
'CreatePanel',
'CreateMenuEx',
'GetClientMenu',
'CancelClientMenu',
'GetMaxPageItems',
'GetPanelStyle',
'SetPanelTitle',
'DrawPanelItem',
'DrawPanelText',
'CanPanelDrawFlags',
'SetPanelKeys',
'SendPanelToClient',
'GetPanelTextRemaining',
'GetPanelCurrentKey',
'SetPanelCurrentKey',
'RedrawMenuItem',
'InternalShowMenu',
'GetMenuVoteInfo',
'IsNewVoteAllowed',
'PrefetchSound',
'EmitAmbientSound',
'FadeClientVolume',
'StopSound',
'EmitSound',
'EmitSentence',
'GetDistGainFromSoundLevel',
'AmbientSHook',
'NormalSHook',
'AddAmbientSoundHook',
'AddNormalSoundHook',
'RemoveAmbientSoundHook',
'RemoveNormalSoundHook',
'EmitSoundToClient',
'EmitSoundToAll',
'ATTN_TO_SNDLEVEL',
'strlen',
'StrContains',
'strcmp',
'strncmp',
'StrEqual',
'strcopy',
'Format',
'FormatEx',
'VFormat',
'StringToInt',
'StringToIntEx',
'IntToString',
'StringToFloat',
'StringToFloatEx',
'FloatToString',
'BreakString',
'TrimString',
'SplitString',
'ReplaceString',
'ReplaceStringEx',
'GetCharBytes',
'IsCharAlpha',
'IsCharNumeric',
'IsCharSpace',
'IsCharMB',
'IsCharUpper',
'IsCharLower',
'StripQuotes',
'CharToUpper',
'CharToLower',
'FindCharInString',
'StrCat',
'ExplodeString',
'ImplodeStrings',
'GetVectorLength',
'GetVectorDistance',
'GetVectorDotProduct',
'GetVectorCrossProduct',
'NormalizeVector',
'GetAngleVectors',
'GetVectorAngles',
'GetVectorVectors',
'AddVectors',
'SubtractVectors',
'ScaleVector',
'NegateVector',
'MakeVectorFromPoints',
'BaseComm_IsClientGagged',
'BaseComm_IsClientMuted',
'BaseComm_SetClientGag',
'BaseComm_SetClientMute',
'FormatUserLogText',
'FindPluginByFile',
'FindTarget',
'AcceptEntityInput',
'SetVariantBool',
'SetVariantString',
'SetVariantInt',
'SetVariantFloat',
'SetVariantVector3D',
'SetVariantPosVector3D',
'SetVariantColor',
'SetVariantEntity',
'GameRules_GetProp',
'GameRules_SetProp',
'GameRules_GetPropFloat',
'GameRules_SetPropFloat',
'GameRules_GetPropEnt',
'GameRules_SetPropEnt',
'GameRules_GetPropVector',
'GameRules_SetPropVector',
'GameRules_GetPropString',
'GameRules_SetPropString',
'GameRules_GetRoundState',
'OnClientConnect',
'OnClientConnected',
'OnClientPutInServer',
'OnClientDisconnect',
'OnClientDisconnect_Post',
'OnClientCommand',
'OnClientSettingsChanged',
'OnClientAuthorized',
'OnClientPreAdminCheck',
'OnClientPostAdminFilter',
'OnClientPostAdminCheck',
'GetMaxClients',
'GetClientCount',
'GetClientName',
'GetClientIP',
'GetClientAuthString',
'GetClientUserId',
'IsClientConnected',
'IsClientInGame',
'IsClientInKickQueue',
'IsClientAuthorized',
'IsFakeClient',
'IsClientSourceTV',
'IsClientReplay',
'IsClientObserver',
'IsPlayerAlive',
'GetClientInfo',
'GetClientTeam',
'SetUserAdmin',
'GetUserAdmin',
'AddUserFlags',
'RemoveUserFlags',
'SetUserFlagBits',
'GetUserFlagBits',
'CanUserTarget',
'RunAdminCacheChecks',
'NotifyPostAdminCheck',
'CreateFakeClient',
'SetFakeClientConVar',
'GetClientHealth',
'GetClientModel',
'GetClientWeapon',
'GetClientMaxs',
'GetClientMins',
'GetClientAbsAngles',
'GetClientAbsOrigin',
'GetClientArmor',
'GetClientDeaths',
'GetClientFrags',
'GetClientDataRate',
'IsClientTimingOut',
'GetClientTime',
'GetClientLatency',
'GetClientAvgLatency',
'GetClientAvgLoss',
'GetClientAvgChoke',
'GetClientAvgData',
'GetClientAvgPackets',
'GetClientOfUserId',
'KickClient',
'KickClientEx',
'ChangeClientTeam',
'GetClientSerial',
'GetClientFromSerial',
'FindStringTable',
'GetNumStringTables',
'GetStringTableNumStrings',
'GetStringTableMaxStrings',
'GetStringTableName',
'FindStringIndex',
'ReadStringTable',
'GetStringTableDataLength',
'GetStringTableData',
'SetStringTableData',
'AddToStringTable',
'LockStringTables',
'AddFileToDownloadsTable',
'GetEntityFlags',
'SetEntityFlags',
'GetEntityMoveType',
'SetEntityMoveType',
'GetEntityRenderMode',
'SetEntityRenderMode',
'GetEntityRenderFx',
'SetEntityRenderFx',
'SetEntityRenderColor',
'GetEntityGravity',
'SetEntityGravity',
'SetEntityHealth',
'GetClientButtons',
'EntityOutput',
'HookEntityOutput',
'UnhookEntityOutput',
'HookSingleEntityOutput',
'UnhookSingleEntityOutput',
'SMC_CreateParser',
'SMC_ParseFile',
'SMC_GetErrorString',
'SMC_ParseStart',
'SMC_SetParseStart',
'SMC_ParseEnd',
'SMC_SetParseEnd',
'SMC_NewSection',
'SMC_KeyValue',
'SMC_EndSection',
'SMC_SetReaders',
'SMC_RawLine',
'SMC_SetRawLine',
'BfWriteBool',
'BfWriteByte',
'BfWriteChar',
'BfWriteShort',
'BfWriteWord',
'BfWriteNum',
'BfWriteFloat',
'BfWriteString',
'BfWriteEntity',
'BfWriteAngle',
'BfWriteCoord',
'BfWriteVecCoord',
'BfWriteVecNormal',
'BfWriteAngles',
'BfReadBool',
'BfReadByte',
'BfReadChar',
'BfReadShort',
'BfReadWord',
'BfReadNum',
'BfReadFloat',
'BfReadString',
'BfReadEntity',
'BfReadAngle',
'BfReadCoord',
'BfReadVecCoord',
'BfReadVecNormal',
'BfReadAngles',
'BfGetNumBytesLeft',
'CreateProfiler',
'StartProfiling',
'StopProfiling',
'GetProfilerTime',
'OnPluginStart',
'AskPluginLoad2',
'OnPluginEnd',
'OnPluginPauseChange',
'OnGameFrame',
'OnMapStart',
'OnMapEnd',
'OnConfigsExecuted',
'OnAutoConfigsBuffered',
'OnAllPluginsLoaded',
'GetMyHandle',
'GetPluginIterator',
'MorePlugins',
'ReadPlugin',
'GetPluginStatus',
'GetPluginFilename',
'IsPluginDebugging',
'GetPluginInfo',
'FindPluginByNumber',
'SetFailState',
'ThrowError',
'GetTime',
'FormatTime',
'LoadGameConfigFile',
'GameConfGetOffset',
'GameConfGetKeyValue',
'GetSysTickCount',
'AutoExecConfig',
'RegPluginLibrary',
'LibraryExists',
'GetExtensionFileStatus',
'OnLibraryAdded',
'OnLibraryRemoved',
'ReadMapList',
'SetMapListCompatBind',
'OnClientFloodCheck',
'OnClientFloodResult',
'CanTestFeatures',
'GetFeatureStatus',
'RequireFeature',
'LoadFromAddress',
'StoreToAddress',
'CreateStack',
'PushStackCell',
'PushStackString',
'PushStackArray',
'PopStackCell',
'PopStackString',
'PopStackArray',
'IsStackEmpty',
'PopStack',
'OnPlayerRunCmd',
'BuildPath',
'OpenDirectory',
'ReadDirEntry',
'OpenFile',
'DeleteFile',
'ReadFileLine',
'ReadFile',
'ReadFileString',
'WriteFile',
'WriteFileString',
'WriteFileLine',
'ReadFileCell',
'WriteFileCell',
'IsEndOfFile',
'FileSeek',
'FilePosition',
'FileExists',
'RenameFile',
'DirExists',
'FileSize',
'FlushFile',
'RemoveDir',
'CreateDirectory',
'GetFileTime',
'LogToOpenFile',
'LogToOpenFileEx',
'SetNextMap',
'GetNextMap',
'ForceChangeLevel',
'GetMapHistorySize',
'GetMapHistory',
'GeoipCode2',
'GeoipCode3',
'GeoipCountry',
'MarkNativeAsOptional',
'RegClientCookie',
'FindClientCookie',
'SetClientCookie',
'GetClientCookie',
'SetAuthIdCookie',
'AreClientCookiesCached',
'OnClientCookiesCached',
'CookieMenuHandler',
'SetCookiePrefabMenu',
'SetCookieMenuItem',
'ShowCookieMenu',
'GetCookieIterator',
'ReadCookieIterator',
'GetCookieAccess',
'GetClientCookieTime',
'LoadTranslations',
'SetGlobalTransTarget',
'GetClientLanguage',
'GetServerLanguage',
'GetLanguageCount',
'GetLanguageInfo',
'SetClientLanguage',
'GetLanguageByCode',
'GetLanguageByName',
'CS_OnBuyCommand',
'CS_OnCSWeaponDrop',
'CS_OnGetWeaponPrice',
'CS_OnTerminateRound',
'CS_RespawnPlayer',
'CS_SwitchTeam',
'CS_DropWeapon',
'CS_TerminateRound',
'CS_GetTranslatedWeaponAlias',
'CS_GetWeaponPrice',
'CS_GetClientClanTag',
'CS_SetClientClanTag',
'LogToGame',
'SetRandomSeed',
'GetRandomFloat',
'GetRandomInt',
'IsMapValid',
'IsDedicatedServer',
'GetEngineTime',
'GetGameTime',
'GetGameTickCount',
'GetGameDescription',
'GetGameFolderName',
'GetCurrentMap',
'PrecacheModel',
'PrecacheSentenceFile',
'PrecacheDecal',
'PrecacheGeneric',
'IsModelPrecached',
'IsDecalPrecached',
'IsGenericPrecached',
'PrecacheSound',
'IsSoundPrecached',
'CreateDialog',
'GuessSDKVersion',
'PrintToChat',
'PrintToChatAll',
'PrintCenterText',
'PrintCenterTextAll',
'PrintHintText',
'PrintHintTextToAll',
'ShowVGUIPanel',
'CreateHudSynchronizer',
'SetHudTextParams',
'SetHudTextParamsEx',
'ShowSyncHudText',
'ClearSyncHud',
'ShowHudText',
'ShowMOTDPanel',
'DisplayAskConnectBox',
'EntIndexToEntRef',
'EntRefToEntIndex',
'MakeCompatEntRef',
'SetClientViewEntity',
'SetLightStyle',
'GetClientEyePosition',
'CreateDataPack',
'WritePackCell',
'WritePackFloat',
'WritePackString',
'ReadPackCell',
'ReadPackFloat',
'ReadPackString',
'ResetPack',
'GetPackPosition',
'SetPackPosition',
'IsPackReadable',
'LogMessage',
'LogMessageEx',
'LogToFile',
'LogToFileEx',
'LogAction',
'LogError',
'OnLogAction',
'GameLogHook',
'AddGameLogHook',
'RemoveGameLogHook',
'FindTeamByName',
'StartPrepSDKCall',
'PrepSDKCall_SetVirtual',
'PrepSDKCall_SetSignature',
'PrepSDKCall_SetFromConf',
'PrepSDKCall_SetReturnInfo',
'PrepSDKCall_AddParameter',
'EndPrepSDKCall',
'SDKCall']
if __name__ == '__main__':
import pprint
import re
import sys
import urllib
# urllib ends up wanting to import a module called 'math' -- if
# pygments/lexers is in the path, this ends badly.
for i in range(len(sys.path)-1, -1, -1):
if sys.path[i].endswith('/lexers'):
del sys.path[i]
def get_version():
f = urllib.urlopen('http://docs.sourcemod.net/api/index.php')
r = re.compile(r'SourceMod v\.<b>([\d\.]+)</td>')
for line in f:
m = r.search(line)
if m is not None:
return m.groups()[0]
def get_sm_functions():
f = urllib.urlopen('http://docs.sourcemod.net/api/SMfuncs.js')
r = re.compile(r'SMfunctions\[\d+\] = Array \("(?:public )?([^,]+)",".+"\);')
functions = []
for line in f:
m = r.match(line)
if m is not None:
functions.append(m.groups()[0])
return functions
def regenerate(filename, natives):
f = open(filename)
try:
content = f.read()
finally:
f.close()
header = content[:content.find('FUNCTIONS = [')]
footer = content[content.find("if __name__ == '__main__':"):]
f = open(filename, 'w')
f.write(header)
f.write('FUNCTIONS = %s\n\n' % pprint.pformat(natives))
f.write(footer)
f.close()
def run():
version = get_version()
print '> Downloading function index for SourceMod %s' % version
functions = get_sm_functions()
print '> %d functions found:' % len(functions)
functionlist = []
for full_function_name in functions:
print '>> %s' % full_function_name
functionlist.append(full_function_name)
regenerate(__file__, functionlist)
run()
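The maintenance script above (written for Python 2) harvests function names from SourceMod's `SMfuncs.js` index with a single regex. A small Python 3 check of that same pattern against a sample line in the format it expects (the sample line itself is made up for illustration):

```python
import re

# The extraction regex used by get_sm_functions() above, exercised against
# a hypothetical line in the SMfuncs.js index format it expects.
pattern = re.compile(
    r'SMfunctions\[\d+\] = Array \("(?:public )?([^,]+)",".+"\);')

sample = 'SMfunctions[42] = Array ("public OnPluginStart","forward");'
match = pattern.match(sample)
print(match.group(1))  # OnPluginStart
```

Note that the optional non-capturing `(?:public )?` group is what strips the `public ` prefix so only the bare function name is captured.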
| mit |
seanfisk/buzzword-bingo-server | django/core/management/commands/startproject.py | 322 | 1680 | from django.core.management.base import copy_helper, CommandError, LabelCommand
from django.utils.importlib import import_module
import os
import re
from random import choice
class Command(LabelCommand):
help = "Creates a Django project directory structure for the given project name in the current directory."
args = "[projectname]"
label = 'project name'
requires_model_validation = False
# Can't import settings during this command, because they haven't
# necessarily been created.
can_import_settings = False
def handle_label(self, project_name, **options):
# Determine the project_name a bit naively -- by looking at the name of
# the parent directory.
directory = os.getcwd()
# Check that the project_name cannot be imported.
try:
import_module(project_name)
except ImportError:
pass
else:
raise CommandError("%r conflicts with the name of an existing Python module and cannot be used as a project name. Please try another name." % project_name)
copy_helper(self.style, 'project', project_name, directory)
# Create a random SECRET_KEY hash, and put it in the main settings.
main_settings_file = os.path.join(directory, project_name, 'settings.py')
settings_contents = open(main_settings_file, 'r').read()
fp = open(main_settings_file, 'w')
secret_key = ''.join([choice('abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)') for i in range(50)])
settings_contents = re.sub(r"(?<=SECRET_KEY = ')'", secret_key + "'", settings_contents)
fp.write(settings_contents)
fp.close()
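The SECRET_KEY generation above hinges on a lookbehind assertion so that only the closing quote of the empty `SECRET_KEY = ''` line is replaced. A hedged Python 3 sketch of the same trick (the settings text here is a tiny stand-in, not a real Django settings file):

```python
import re
from random import choice

# Build a 50-character key and splice it into the empty SECRET_KEY using
# a lookbehind, mirroring the re.sub call in handle_label() above.
# (Safe as a replacement string: the charset contains no backslashes.)
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
secret_key = ''.join(choice(chars) for _ in range(50))

settings_contents = "DEBUG = True\nSECRET_KEY = ''\n"  # stand-in settings text
patched = re.sub(r"(?<=SECRET_KEY = ')'", secret_key + "'", settings_contents)
print(patched.startswith("DEBUG"))  # True
```

Because the lookbehind consumes nothing, the opening quote survives and the match (the closing quote) is rewritten to `<key>'`, yielding `SECRET_KEY = '<key>'`.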
| bsd-3-clause |
jupierce/openshift-tools | openshift/installer/vendored/openshift-ansible-3.3.46/roles/etcd_common/library/delegated_serial_command.py | 19 | 8831 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>, and others
# (c) 2016, Andrew Butcher <abutcher@redhat.com>
#
# This module is derived from the Ansible command module.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# pylint: disable=unused-wildcard-import,wildcard-import,unused-import,redefined-builtin
''' delegated_serial_command '''
import copy
import sys
import datetime
import glob
import traceback
import re
import shlex
import os
import fcntl
import time
import errno  # needed for the EAGAIN check in the flock retry loop
DOCUMENTATION = '''
---
module: delegated_serial_command
short_description: Executes a command on a remote node
version_added: historical
description:
- The M(command) module takes the command name followed by a list
of space-delimited arguments.
- The given command will be executed on all selected nodes. It
will not be processed through the shell, so variables like
C($HOME) and operations like C("<"), C(">"), C("|"), and C("&")
will not work (use the M(shell) module if you need these
features).
- Creates and maintains a lockfile such that this module will
wait for other invocations to proceed.
options:
command:
description:
- the command to run
required: true
default: null
creates:
description:
- a filename or (since 2.0) glob pattern, when it already
exists, this step will B(not) be run.
required: no
default: null
removes:
description:
- a filename or (since 2.0) glob pattern, when it does not
exist, this step will B(not) be run.
version_added: "0.8"
required: no
default: null
chdir:
description:
- cd into this directory before running the command
version_added: "0.6"
required: false
default: null
executable:
description:
- change the shell used to execute the command. Should be an
absolute path to the executable.
required: false
default: null
version_added: "0.9"
warn:
version_added: "1.8"
default: yes
description:
- if command warnings are on in ansible.cfg, do not warn about
this particular line if set to no/false.
required: false
lockfile:
default: yes
description:
- the lockfile that will be created
timeout:
default: yes
description:
- time in milliseconds to wait to obtain the lock
notes:
- If you want to run a command through the shell (say you are using C(<),
C(>), C(|), etc), you actually want the M(shell) module instead. The
M(command) module is much more secure as it's not affected by the user's
environment.
- " C(creates), C(removes), and C(chdir) can be specified after
the command. For instance, if you only want to run a command if
a certain file does not exist, use this."
author:
- Ansible Core Team
- Michael DeHaan
- Andrew Butcher
'''
EXAMPLES = '''
# Example from Ansible Playbooks.
- delegated_serial_command:
command: /sbin/shutdown -t now
# Run the command if the specified file does not exist.
- delegated_serial_command:
command: /usr/bin/make_database.sh arg1 arg2
creates: /path/to/database
'''
# Dict of options and their defaults
OPTIONS = {'chdir': None,
'creates': None,
'command': None,
'executable': None,
'NO_LOG': None,
'removes': None,
'warn': True,
'lockfile': None,
'timeout': None}
def check_command(commandline):
''' Check provided command '''
arguments = {'chown': 'owner', 'chmod': 'mode', 'chgrp': 'group',
'ln': 'state=link', 'mkdir': 'state=directory',
'rmdir': 'state=absent', 'rm': 'state=absent', 'touch': 'state=touch'}
commands = {'git': 'git', 'hg': 'hg', 'curl': 'get_url or uri', 'wget': 'get_url or uri',
'svn': 'subversion', 'service': 'service',
'mount': 'mount', 'rpm': 'yum, dnf or zypper', 'yum': 'yum', 'apt-get': 'apt',
'tar': 'unarchive', 'unzip': 'unarchive', 'sed': 'template or lineinfile',
'rsync': 'synchronize', 'dnf': 'dnf', 'zypper': 'zypper'}
become = ['sudo', 'su', 'pbrun', 'pfexec', 'runas']
warnings = list()
command = os.path.basename(commandline.split()[0])
# pylint: disable=line-too-long
if command in arguments:
warnings.append("Consider using file module with {0} rather than running {1}".format(arguments[command], command))
if command in commands:
warnings.append("Consider using {0} module rather than running {1}".format(commands[command], command))
if command in become:
warnings.append(
"Consider using 'become', 'become_method', and 'become_user' rather than running {0}".format(command,))
return warnings
# pylint: disable=too-many-statements,too-many-branches,too-many-locals
def main():
''' Main module function '''
module = AnsibleModule(
argument_spec=dict(
_uses_shell=dict(type='bool', default=False),
command=dict(required=True),
chdir=dict(),
executable=dict(),
creates=dict(),
removes=dict(),
warn=dict(type='bool', default=True),
lockfile=dict(default='/tmp/delegated_serial_command.lock'),
timeout=dict(type='int', default=30)
)
)
shell = module.params['_uses_shell']
chdir = module.params['chdir']
executable = module.params['executable']
command = module.params['command']
creates = module.params['creates']
removes = module.params['removes']
warn = module.params['warn']
lockfile = module.params['lockfile']
timeout = module.params['timeout']
if command.strip() == '':
module.fail_json(rc=256, msg="no command given")
iterated = 0
lockfd = open(lockfile, 'w+')
while iterated < timeout:
try:
fcntl.flock(lockfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
break
# pylint: disable=invalid-name
except IOError as e:
if e.errno != errno.EAGAIN:
module.fail_json(msg="I/O Error {0}: {1}".format(e.errno, e.strerror))
else:
iterated += 1
time.sleep(0.1)
if chdir:
chdir = os.path.abspath(os.path.expanduser(chdir))
os.chdir(chdir)
if creates:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
path = os.path.expanduser(creates)
if glob.glob(path):
module.exit_json(
cmd=command,
stdout="skipped, since %s exists" % path,
changed=False,
stderr=False,
rc=0
)
if removes:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of command executions.
path = os.path.expanduser(removes)
if not glob.glob(path):
module.exit_json(
cmd=command,
stdout="skipped, since %s does not exist" % path,
changed=False,
stderr=False,
rc=0
)
warnings = list()
if warn:
warnings = check_command(command)
if not shell:
command = shlex.split(command)
startd = datetime.datetime.now()
# pylint: disable=invalid-name
rc, out, err = module.run_command(command, executable=executable, use_unsafe_shell=shell)
fcntl.flock(lockfd, fcntl.LOCK_UN)
lockfd.close()
endd = datetime.datetime.now()
delta = endd - startd
if out is None:
out = ''
if err is None:
err = ''
module.exit_json(
cmd=command,
stdout=out.rstrip("\r\n"),
stderr=err.rstrip("\r\n"),
rc=rc,
start=str(startd),
end=str(endd),
delta=str(delta),
changed=True,
warnings=warnings,
iterated=iterated
)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.splitter import *
main()
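The serialization at the heart of this module is the non-blocking `flock` retry loop in `main()`. Isolated as a sketch (the lock path and helper name are illustrative, not part of the module):

```python
import fcntl
import os
import tempfile
import time

def acquire_serial_lock(path, timeout_iterations=30):
    """Try a non-blocking exclusive flock, retrying every 0.1 s, as the
    module's while-loop does. Returns the open file on success, None on
    timeout; the caller must LOCK_UN and close it."""
    lockfd = open(path, 'w+')
    iterated = 0
    while iterated < timeout_iterations:
        try:
            fcntl.flock(lockfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return lockfd
        except IOError:  # EAGAIN: another invocation holds the lock
            iterated += 1
            time.sleep(0.1)
    lockfd.close()
    return None

lock_path = os.path.join(tempfile.gettempdir(), 'demo_serial_command.lock')
fd = acquire_serial_lock(lock_path)
print(fd is not None)  # True when no other process holds the lock
if fd is not None:
    fcntl.flock(fd, fcntl.LOCK_UN)
    fd.close()
```

The timeout is counted in 0.1 s iterations, as in the module, so the default of 30 waits roughly three seconds before giving up.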
| apache-2.0 |
scotty007/libavg | src/samples/logsample.py | 6 | 1933 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
from libavg import avg, app
# Setup Python Logger
hdlr = logging.StreamHandler()
# category is added as an extra formatting key by libavg
formatter = logging.Formatter('[%(asctime)s][%(levelname)s][%(category)s] : %(message)s')
hdlr.setFormatter(formatter)
pyLogger = logging.getLogger(__name__)
pyLogger.addHandler(hdlr)
pyLogger.propagate = False
pyLogger.level = logging.DEBUG
class LoggingTest(app.MainDiv):
def onInit(self):
# Add the python logger to libavgs logger as a message sink
avg.logger.removeStdLogSink()
avg.logger.addSink(pyLogger)
avg.logger.debug("Hidden, unless AVG_LOG_CATEGORIES configured with APP:DEBUG")
avg.logger.configureCategory(avg.logger.Category.APP, avg.logger.Severity.INFO)
avg.logger.log("Custom Info level message", avg.logger.Category.APP,
avg.logger.Severity.INFO)
avg.logger.info("Info level message, with APP Category")
avg.logger.warning("Warn level message, with APP Category")
#Remove the logSink, no message should be logged now, if run with
#AVG_LOG_OMIT_STDERR=1
#avg.logger.removeSink(logging.getLogger("MY_APP"))
avg.logger.error("std::err - Error")
avg.logger.critical("std::err - Critical")
avg.logger.log("std::err - Log")
#Register custom log category
CUSTOM_LOG_CAT = avg.logger.configureCategory("My Custom Category",
avg.logger.Severity.INFO)
#Log with custom log category
avg.logger.log("Message with custom category", CUSTOM_LOG_CAT)
avg.logger.debug("Hidden message", CUSTOM_LOG_CAT)
avg.logger.configureCategory(CUSTOM_LOG_CAT, avg.logger.Severity.DBG)
avg.logger.debug("This will show up", CUSTOM_LOG_CAT)
if __name__ == '__main__':
app.App().run(LoggingTest(), app_resolution='140x140')
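The custom `%(category)s` key in the formatter above works because libavg's sink supplies `category` as an extra attribute on every log record. The same mechanism in plain `logging`, with no libavg involved:

```python
import io
import logging

# Any formatter key that is not a standard LogRecord attribute, like
# %(category)s above, must be supplied via the `extra` dict on each call.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('[%(levelname)s][%(category)s] %(message)s'))

logger = logging.getLogger('demo_category_logger')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.propagate = False

logger.info('hello', extra={'category': 'APP'})
print(stream.getvalue().strip())  # [INFO][APP] hello
```

If `extra` is omitted, formatting raises an error for the missing key, which is why every call routed through such a formatter must carry the attribute.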
| lgpl-2.1 |
seanli9jan/tensorflow | tensorflow/python/data/experimental/kernel_tests/serialization/map_dataset_serialization_test.py | 16 | 4602 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the MapDataset serialization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.data.experimental.kernel_tests.serialization import dataset_serialization_test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import function
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.platform import test
class MapDatasetSerializationTest(
dataset_serialization_test_base.DatasetSerializationTestBase):
def setUp(self):
self._tensor_slice_len = 7
self._num_epochs = 14
self._num_outputs = self._tensor_slice_len * self._num_epochs
def _build_ds(self, multiplier=37.0):
components = (np.arange(self._tensor_slice_len), np.array([[1, 2, 3]]) *
np.arange(self._tensor_slice_len)[:, np.newaxis],
np.array(multiplier) * np.arange(self._tensor_slice_len))
def _map_fn(x, y, z):
return math_ops.square(x), math_ops.square(y), math_ops.square(z)
return (
dataset_ops.Dataset.from_tensor_slices(components).map(_map_fn)
.repeat(self._num_epochs))
def testSaveRestoreCore(self):
self.run_core_tests(
self._build_ds,
lambda: self._build_ds(multiplier=15.0),
self._num_outputs)
def testSaveStatefulFunction(self):
def _build_ds():
def _map_fn(x):
return random_ops.random_uniform(
(), 0, 10, dtype=dtypes.int32) * math_ops.to_int32(x)
return dataset_ops.Dataset.range(100).map(_map_fn)
self.verify_error_on_save(_build_ds, 15, errors.InvalidArgumentError)
def testCaptureVariableInMapFn(self):
def _build_ds():
counter_var = variable_scope.get_variable(
"counter", (), dtypes.int32, use_resource=True)
return (dataset_ops.Dataset.from_tensors(0).repeat(10).map(
lambda _: counter_var.assign_add(1)))
self.verify_error_on_save(_build_ds, 15, errors.InvalidArgumentError)
def testCaptureConstantInMapFn(self):
def _build_ds():
constant_var = constant_op.constant(5)
return (dataset_ops.Dataset.from_tensors(0).repeat(10).map(
lambda x: x + constant_var))
self.run_core_tests(_build_ds, None, 10)
def testCaptureDefunInMapFn(self):
num_outputs = 100
def _build_ds():
@function.Defun(dtypes.int64)
def defun_fn(x):
return constant_op.constant(1000) + math_ops.to_int32(x)
return dataset_ops.Dataset.range(num_outputs).map(defun_fn)
self.run_core_tests(_build_ds, None, num_outputs)
def testBuildDefunInMapFn(self):
num_outputs = 100
def _build_ds():
@function.Defun(dtypes.int64)
def defun_fn(x):
@function.Defun(dtypes.int32)
def defun_fn_deep(x):
return constant_op.constant(1000) + math_ops.to_int32(x)
return constant_op.constant(11000) + defun_fn_deep(math_ops.to_int32(x))
return dataset_ops.Dataset.range(num_outputs).map(defun_fn)
self.run_core_tests(_build_ds, None, num_outputs)
def testSparseCore(self):
def _sparse(i):
return sparse_tensor.SparseTensorValue(
indices=np.array([[0, 0]]),
values=(i * np.array([1])),
dense_shape=np.array([1, 1]))
def _build_ds(num_outputs):
return dataset_ops.Dataset.range(num_outputs).map(_sparse)
num_outputs = 10
self.run_core_tests(lambda: _build_ds(num_outputs),
lambda: _build_ds(int(num_outputs / 2)), num_outputs)
if __name__ == "__main__":
test.main()
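For orientation, the pipeline `_build_ds` constructs (slice, then map, then repeat) can be emulated with plain Python iterators, with no TensorFlow, to see where the test's `_num_outputs` of 7 * 14 = 98 elements comes from:

```python
from itertools import chain, repeat

# Plain-Python emulation of from_tensor_slices -> map(square) -> repeat(epochs),
# using the first component of the test's data for simplicity.
tensor_slice_len = 7
num_epochs = 14

slices = list(range(tensor_slice_len))
mapped = [x * x for x in slices]                                 # .map(square)
outputs = list(chain.from_iterable(repeat(mapped, num_epochs)))  # .repeat(epochs)

print(len(outputs))  # 98
```

The serialization tests above then checkpoint and restore an iterator partway through those 98 elements and verify the remainder matches.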
| apache-2.0 |
gpapaz/eve-wspace | evewspace/POS/models.py | 3 | 10918 | # Eve W-Space
# Copyright 2014 Andrew Austin and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
from django.db import models
from django.conf import settings
import eveapi
from core.models import Type, Location
from API.models import CorpAPIKey
from core.models import Corporation, Alliance
from Map.models import System, MapSystem, Map
from API import cache_handler as handler
User = settings.AUTH_USER_MODEL
class POS(models.Model):
"""Represents a POS somewhere in space."""
system = models.ForeignKey(System, related_name="poses")
planet = models.IntegerField()
moon = models.IntegerField()
towertype = models.ForeignKey(Type, related_name="inspace")
corporation = models.ForeignKey(Corporation, related_name="poses")
posname = models.CharField(max_length=100, blank=True, null=True)
fitting = models.TextField(blank=True, null=True)
# Using CCP's status codes here for sanity with API checks
status = models.IntegerField(choices=((0, 'Unanchored'),
(1, 'Anchored'),
(2, 'Onlining'),
(3, 'Reinforced'),
(4, 'Online')))
# This should be the time the tower exits RF
# TODO: add a validator to make sure this is only set
# if status = 3 (Reinforced)
rftime = models.DateTimeField(null=True, blank=True)
updated = models.DateTimeField()
# These values will be set by the TSV parser from d-scan data if available
guns = models.IntegerField(null=True, blank=True)
ewar = models.IntegerField(null=True, blank=True)
sma = models.IntegerField(null=True, blank=True)
hardener = models.IntegerField(null=True, blank=True)
# This is a short comment that is displayed as a warning
warpin_notice = models.CharField(blank=True, null=True, max_length=64)
class Meta:
ordering = ['system__name', 'planet', 'moon']
@classmethod
def update_from_import_list(cls, system, import_list):
"""
Imports starbases from YAML importer.
"""
for pos in import_list:
planet = pos['planet']
moon = pos['moon']
warpin = pos['warpin']
status = pos['status']
rftime = pos['rftime']
name = pos['name']
tower = Type.objects.get(name=pos['tower'])
try:
owner = Corporation.objects.get(name=pos['owner'])
except Corporation.DoesNotExist:
from core import tasks
api = eveapi.EVEAPIConnection(cacheHandler=handler)
corp_id = api.eve.CharacterID(
names=pos['owner']).characters[0].characterID
owner = tasks.update_corporation(corp_id, True)
if POS.objects.filter(system=system, planet=planet,
moon=moon, corporation=owner).exists():
# Update first existing record
starbase = POS.objects.filter(system=system, planet=planet,
moon=moon,
corporation=owner).all()[0]
starbase.status = status
starbase.posname = name  # 'posname' is the model field; plain 'name' is not
starbase.towertype = tower
if status == 3:
starbase.rftime = rftime
starbase.warpin_notice = warpin
starbase.save()  # persist the updated fields; the new-POS branch below saves explicitly
else:
new_pos = POS(system=system, planet=planet, moon=moon,
corporation=owner, towertype=tower,
warpin_notice=warpin, status=status)
if status == 3:
new_pos.rftime = rftime
new_pos.save()
def as_dict(self):
data = {
'planet': self.planet, 'moon': self.moon,
'tower': self.towertype.name, 'owner': self.corporation.name,
'status': self.status, 'name': self.posname,
'rftime': self.rftime, 'warpin': self.warpin_notice,
}
return data
def clean(self):
from django.core.exceptions import ValidationError
if self.rftime and self.status != 3:
raise ValidationError("A POS cannot have an rftime unless "
"it is reinforced")
def __unicode__(self):
return self.posname
# override save to implement posname defaulting to towertype.name
def save(self, *args, **kwargs):
if not self.posname:
self.posname = self.towertype.name
# Mark tower as having been updated
from datetime import datetime
import pytz
self.updated = datetime.now(pytz.utc)
super(POS, self).save(*args, **kwargs)
def log(self, user, action, map_system):
"""
Records a log entry for POS updates and additions.
"""
map_system.map.add_log(
user,
"%s POS (Planet %s Moon %s, owner %s) in %s (%s), %s jumps out from root system."
%(action, self.planet, self.moon, self.corporation, map_system.system.name,
map_system.friendlyname, map_system.distance_from_root()))
def size(self):
"""
Returns the size of the tower, Small Medium or Large.
"""
if u'Small' in self.towertype.name:
return u'Small'
if u'Medium' in self.towertype.name:
return u'Medium'
return u'Large'
def fit_from_dscan(self, dscan):
"""
Fills in a POS's fitting from a copy / paste of d-scan results.
"""
return self.fit_from_iterable(csv.reader(dscan.splitlines(),
delimiter="\t"))
def fit_from_iterable(self, fit):
"""
Fills in a POS's fitting from an iterable (normally parsed d-scan)
"""
from core.models import Type
item_dict = dict()
# marketGroupIDs to consider guns, ewar, hardeners, and smas
guns_groups = [480, 479, 594, 595, 596]
ewar_groups = [481, 1009]
sma_groups = [484]
hardener_groups = [485]
towers = 0
self.sma = 0
self.hardener = 0
self.guns = 0
self.ewar = 0
for row in fit:
try:
item_type = Type.objects.get(name=row[1])
# odd bug where invalid items get into dscan
except Type.DoesNotExist:
continue
if item_type.marketgroup:
group_tree = []
parent = item_type.marketgroup
while parent:
group_tree.append(parent.id)
parent = parent.parentgroup
if item_type.marketgroup.id in guns_groups:
self.guns += 1
if item_type.marketgroup.id in ewar_groups:
self.ewar += 1
if item_type.marketgroup.id in sma_groups:
self.sma += 1
if item_type.marketgroup.id in hardener_groups:
self.hardener += 1
if item_type.marketgroup.id == 478:
towers += 1
towertype = item_type
posname = row[0]
if item_type.name in item_dict:
item_dict[item_type.name] += 1
elif 1285 in group_tree and 478 not in group_tree:
item_dict.update({item_type.name: 1})
self.fitting = "Imported from D-Scan:\n"
for itemtype in item_dict:
self.fitting += "\n%s : %s" % (itemtype, item_dict[itemtype])
if towers == 1 and self.towertype_id is None and self.posname is None:
self.towertype = towertype
self.posname = posname
if towers == 0 and self.towertype_id is None:
raise AttributeError('No POS in the D-Scan!')
elif towers <= 1:
self.save()
else:
raise AttributeError('Too many towers detected in the D-Scan!')
class CorpPOS(POS):
"""A corp-controlled POS with manager and password data."""
manager = models.ForeignKey(User, null=True, blank=True,
related_name='poses')
password = models.CharField(max_length=100)
description = models.TextField(null=True, blank=True)
# Let's store the CCP Item ID for the tower here to make API lookup easier
# If it is null, then we are not tracking this POS via API
apiitemid = models.BigIntegerField(null=True, blank=True)
apikey = models.ForeignKey(CorpAPIKey, null=True, blank=True,
related_name='poses')
class Meta:
permissions = (('can_see_pos_pw', 'Can see corp POS passwords.'),
('can_see_all_pos', 'Sees all corp POSes '
'regardless of manager.'),)
class POSApplication(models.Model):
"""Represents an application for a personal POS."""
applicant = models.ForeignKey(User, null=True, blank=True,
related_name='posapps')
towertype = models.ForeignKey(Type, null=True, blank=True,
related_name='posapps')
residents = models.ManyToManyField(User)
normalfit = models.TextField()
siegefit = models.TextField()
# Once it is approved, we will fill in these two to tie the records together
approved = models.DateTimeField(blank=True, null=True)
posrecord = models.ForeignKey(CorpPOS, blank=True, null=True,
related_name='application')
class Meta:
permissions = (('can_close_pos_app',
'Can dispose of corp POS applications.'),)
def __unicode__(self):
return 'Applicant: %s Tower: %s' % (self.applicant.username,
self.towertype.name)
class POSVote(models.Model):
"""Represents a vote on a personal POS application."""
application = models.ForeignKey(POSApplication, related_name='votes')
voter = models.ForeignKey(User, related_name='posvotes')
vote = models.IntegerField(choices=((0, 'Deny'),
(1, 'Approve'),
(2, 'Abstain')))
| apache-2.0 |
kangningyang/RobotAutoTest | setup.py | 3 | 1812 | #!/usr/bin/env python
import sys
from os.path import join, dirname
sys.path.append(join(dirname(__file__), 'src'))
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup
execfile(join(dirname(__file__), 'src', 'Selenium2Library', 'version.py'))
DESCRIPTION = """
Selenium2Library is a web testing library for Robot Framework
that leverages the Selenium 2 (WebDriver) libraries.
"""[1:-1]
setup(name = 'robotframework-selenium2library',
version = VERSION,
description = 'Web testing library for Robot Framework',
long_description = DESCRIPTION,
author = 'Ryan Tomac, Ed Manlove, Jeremy Johnson',
author_email = '<ryan@tomacfamily.com>, <devPyPlTw@verizon.net>, <jeremy@softworks.com.my>',
url = 'https://github.com/robotframework/Selenium2Library',
license = 'Apache License 2.0',
keywords = 'robotframework testing testautomation selenium selenium2 webdriver web',
platforms = 'any',
classifiers = [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Software Development :: Testing"
],
install_requires = [
'decorator >= 3.3.2',
'selenium >= 2.32.0',
'robotframework >= 2.6.0',
'docutils >= 0.8.1'
],
py_modules=['ez_setup'],
package_dir = {'' : 'src'},
packages = ['Selenium2Library','Selenium2Library.keywords','Selenium2Library.locators',
'Selenium2Library.utils'],
include_package_data = True,
)
| apache-2.0 |
qinjian623/emacs-config | elpa/jedi-20140321.1323/jediepcserver.py | 17 | 9444 | #!/usr/bin/env python
"""
Jedi EPC server.
Copyright (C) 2012 Takafumi Arakaki
Author: Takafumi Arakaki <aka.tkf at gmail.com>
This file is NOT part of GNU Emacs.
Jedi EPC server is free software: you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
Jedi EPC server is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with Jedi EPC server.
If not, see <http://www.gnu.org/licenses/>.
"""
import os
import sys
import re
import itertools
import logging
import site
jedi = None # I will load it later
PY3 = (sys.version_info[0] >= 3)
NEED_ENCODE = not PY3
def jedi_script(source, line, column, source_path):
if NEED_ENCODE:
source = source.encode('utf-8')
source_path = source_path and source_path.encode('utf-8')
return jedi.Script(source, line, column, source_path or '')
def candidate_symbol(comp):
"""
Return a character representing completion type.
:type comp: jedi.api.Completion
:arg comp: A completion object returned by `jedi.Script.complete`.
"""
try:
return comp.type[0].lower()
except (AttributeError, TypeError):
return '?'
def candidates_description(comp):
"""
Return `comp.description` in an appropriate format.
* Avoid returning the string 'None'.
* Strip off all newlines. This is required for using
`comp.description` as candidate summary.
"""
desc = comp.description
return _WHITESPACES_RE.sub(' ', desc) if desc and desc != 'None' else ''
_WHITESPACES_RE = re.compile(r'\s+')
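As a standalone sketch (with hypothetical inputs), the description cleanup performed by `candidates_description` and `_WHITESPACES_RE` above can be exercised like this:

```python
import re

# Same idea as candidates_description/_WHITESPACES_RE above:
# collapse any whitespace run to a single space, and never emit 'None'.
_ws = re.compile(r'\s+')

def normalize_description(desc):
    return _ws.sub(' ', desc) if desc and desc != 'None' else ''

print(normalize_description('param a,\n    param b'))  # -> param a, param b
print(normalize_description('None'))                   # -> (empty string)
```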
def complete(*args):
reply = []
for comp in jedi_script(*args).complete():
reply.append(dict(
word=comp.word,
doc=comp.doc,
description=candidates_description(comp),
symbol=candidate_symbol(comp),
))
return reply
def get_in_function_call(*args):
call_def = jedi_script(*args).get_in_function_call()
if call_def:
return dict(
# p.get_code(False) should do the job. But jedi-vim uses replace.
# So follow what jedi-vim does...
params=[p.get_code().replace('\n', '') for p in call_def.params],
index=call_def.index,
call_name=call_def.call_name,
)
else:
return [] # nil
def _goto(method, *args):
"""
Helper function for `goto` and `related_names`.
:arg method: `jedi.Script.goto` or `jedi.Script.related_names`
:arg args: Arguments to `jedi_script`
"""
# `definitions` is a list. Each element is an instances of
# `jedi.api_classes.BaseOutput` subclass, i.e.,
# `jedi.api_classes.RelatedName` or `jedi.api_classes.Definition`.
definitions = method(jedi_script(*args))
return [dict(
column=d.column,
line_nr=d.line_nr,
module_path=d.module_path if d.module_path != '__builtin__' else [],
module_name=d.module_name,
description=d.description,
) for d in definitions]
def goto(*args):
return _goto(jedi.Script.goto, *args)
def related_names(*args):
return _goto(jedi.Script.related_names, *args)
def definition_to_dict(d):
return dict(
doc=d.doc,
description=d.description,
desc_with_module=d.desc_with_module,
line_nr=d.line_nr,
column=d.column,
module_path=d.module_path,
name=getattr(d, 'name', []),
full_name=getattr(d, 'full_name', []),
type=getattr(d, 'type', []),
)
def get_definition(*args):
definitions = jedi_script(*args).get_definition()
return list(map(definition_to_dict, definitions))
def get_names_recursively(definition, parent=None):
"""
Fetch interesting defined names in sub-scopes under `definition`.
:type definition: jedi.api_classes.Definition
"""
d = definition_to_dict(definition)
try:
d['local_name'] = parent['local_name'] + '.' + d['name']
except (AttributeError, TypeError):
d['local_name'] = d['name']
if definition.type == 'class':
ds = definition.defined_names()
return [d] + [get_names_recursively(c, d) for c in ds]
else:
return [d]
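The recursion in `get_names_recursively` above builds dotted `local_name` values ("Class.method") by threading the parent's name through each call. A minimal sketch of that naming scheme, using a hypothetical `(name, children)` data shape instead of real Jedi definitions:

```python
def dotted_names(tree, parent=None):
    # Build dotted local names from nested (name, children) pairs,
    # mirroring how get_names_recursively threads the parent name.
    name, children = tree
    local = name if parent is None else parent + '.' + name
    return [local] + [n for c in children for n in dotted_names(c, local)]

print(dotted_names(('Foo', [('bar', []), ('baz', [])])))
# -> ['Foo', 'Foo.bar', 'Foo.baz']
```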
def defined_names(*args):
return list(map(get_names_recursively, jedi.api.defined_names(*args)))
def get_module_version(module):
try:
from pkg_resources import get_distribution, DistributionNotFound
try:
return get_distribution(module.__name__).version
except DistributionNotFound:
pass
except ImportError:
pass
notfound = object()
for key in ['__version__', 'version']:
version = getattr(module, key, notfound)
if version is not notfound:
return version
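The attribute-fallback half of `get_module_version` above can be tried in isolation; the demo modules below are synthetic, built with `types.ModuleType` rather than real installed packages:

```python
import types

def module_version(module):
    # Mirrors the fallback in get_module_version above: try
    # __version__, then version; return None when neither exists.
    notfound = object()
    for key in ('__version__', 'version'):
        value = getattr(module, key, notfound)
        if value is not notfound:
            return value
    return None

m = types.ModuleType('demo')
m.__version__ = '1.2.3'
print(module_version(m))                         # -> 1.2.3
print(module_version(types.ModuleType('bare')))  # -> None
```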
def get_jedi_version():
import epc
import sexpdata
return [dict(
name=module.__name__,
file=getattr(module, '__file__', []),
version=get_module_version(module) or [],
) for module in [sys, jedi, epc, sexpdata]]
def jedi_epc_server(address='localhost', port=0, port_file=sys.stdout,
sys_path=[], virtual_env=[],
debugger=None, log=None, log_level=None,
log_traceback=None):
add_virtualenv_path()
for p in virtual_env:
add_virtualenv_path(p)
sys_path = map(os.path.expandvars, map(os.path.expanduser, sys_path))
sys.path = [''] + list(filter(None, itertools.chain(sys_path, sys.path)))
# Work around Jedi's module cache. Use this workaround until Jedi
# gets an API to set module paths.
# See also: https://github.com/davidhalter/jedi/issues/36
import_jedi()
import epc.server
server = epc.server.EPCServer((address, port))
server.register_function(complete)
server.register_function(get_in_function_call)
server.register_function(goto)
server.register_function(related_names)
server.register_function(get_definition)
server.register_function(defined_names)
server.register_function(get_jedi_version)
@server.register_function
def toggle_log_traceback():
server.log_traceback = not server.log_traceback
return server.log_traceback
port_file.write(str(server.server_address[1])) # needed for Emacs client
port_file.write("\n")
port_file.flush()
if port_file is not sys.stdout:
port_file.close()
# This is not a supported Python-EPC API, but I am using it for
# backward compatibility with Python-EPC < 0.0.4. In the future,
# it should be passed to the constructor.
server.log_traceback = bool(log_traceback)
if log:
handler = logging.FileHandler(filename=log, mode='w')
if log_level:
log_level = getattr(logging, log_level.upper())
handler.setLevel(log_level)
server.logger.setLevel(log_level)
server.logger.addHandler(handler)
if debugger:
server.set_debugger(debugger)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
server.logger.addHandler(handler)
server.logger.setLevel(logging.DEBUG)
server.serve_forever()
server.logger.info('exit')
return server
def import_jedi():
global jedi
import jedi
import jedi.api
def add_virtualenv_path(venv=os.getenv('VIRTUAL_ENV')):
"""Add virtualenv's site-packages to `sys.path`."""
if not venv:
return
venv = os.path.abspath(venv)
path = os.path.join(
venv, 'lib', 'python%d.%d' % sys.version_info[:2], 'site-packages')
sys.path.insert(0, path)
site.addsitedir(path)
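The path that `add_virtualenv_path` above prepends to `sys.path` can be computed on its own; this sketch assumes the POSIX virtualenv layout used by the function (Windows virtualenvs use `Lib\site-packages` instead):

```python
import os
import sys

def venv_site_packages(venv):
    # The directory add_virtualenv_path above inserts into sys.path,
    # for the running interpreter's major.minor version.
    return os.path.join(
        os.path.abspath(venv),
        'lib', 'python%d.%d' % sys.version_info[:2], 'site-packages')
```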
def main(args=None):
import argparse
parser = argparse.ArgumentParser(
formatter_class=argparse.RawTextHelpFormatter,
description=__doc__)
parser.add_argument(
'--address', default='localhost')
parser.add_argument(
'--port', default=0, type=int)
parser.add_argument(
'--port-file', '-f', default='-', type=argparse.FileType('wt'),
help='file to write port on. default is stdout.')
parser.add_argument(
'--sys-path', '-p', default=[], action='append',
help='paths to be inserted at the top of `sys.path`.')
parser.add_argument(
'--virtual-env', '-v', default=[], action='append',
help='paths to be used as if VIRTUAL_ENV is set to it.')
parser.add_argument(
'--log', help='save server log to this file.')
parser.add_argument(
'--log-level',
choices=['CRITICAL', 'ERROR', 'WARN', 'INFO', 'DEBUG'],
help='logging level for log file.')
parser.add_argument(
'--log-traceback', action='store_true', default=False,
help='Include traceback in logging output.')
parser.add_argument(
'--pdb', dest='debugger', const='pdb', action='store_const',
help='start pdb when error occurs.')
parser.add_argument(
'--ipdb', dest='debugger', const='ipdb', action='store_const',
help='start ipdb when error occurs.')
ns = parser.parse_args(args)
jedi_epc_server(**vars(ns))
if __name__ == '__main__':
main()
| gpl-3.0 |
sgzsh269/django | tests/sitemaps_tests/test_http.py | 16 | 11105 | from __future__ import unicode_literals
import os
from datetime import date
from unittest import skipUnless
from django.apps import apps
from django.conf import settings
from django.contrib.sitemaps import GenericSitemap, Sitemap
from django.contrib.sites.models import Site
from django.core.exceptions import ImproperlyConfigured
from django.test import modify_settings, override_settings
from django.utils._os import upath
from django.utils.formats import localize
from django.utils.translation import activate, deactivate
from .base import SitemapTestsBase
from .models import TestModel
class HTTPSitemapTests(SitemapTestsBase):
def test_simple_sitemap_index(self):
"A simple sitemap index can be rendered"
response = self.client.get('/simple/index.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>%s/simple/sitemap-simple.xml</loc></sitemap>
</sitemapindex>
""" % self.base_url
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
@override_settings(TEMPLATES=[{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(os.path.dirname(upath(__file__)), 'templates')],
}])
def test_simple_sitemap_custom_index(self):
"A simple sitemap index can be rendered with a custom template"
response = self.client.get('/simple/custom-index.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<!-- This is a customised template -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>%s/simple/sitemap-simple.xml</loc></sitemap>
</sitemapindex>
""" % self.base_url
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_simple_sitemap_section(self):
"A simple sitemap section can be rendered"
response = self.client.get('/simple/sitemap-simple.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>%s/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % (self.base_url, date.today())
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_simple_sitemap(self):
"A simple sitemap can be rendered"
response = self.client.get('/simple/sitemap.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>%s/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % (self.base_url, date.today())
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
@override_settings(TEMPLATES=[{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(os.path.dirname(upath(__file__)), 'templates')],
}])
def test_simple_custom_sitemap(self):
"A simple sitemap can be rendered with a custom template"
response = self.client.get('/simple/custom-sitemap.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<!-- This is a customised template -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>%s/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % (self.base_url, date.today())
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_sitemap_last_modified(self):
"Tests that Last-Modified header is set correctly"
response = self.client.get('/lastmod/sitemap.xml')
self.assertEqual(response['Last-Modified'], 'Wed, 13 Mar 2013 10:00:00 GMT')
def test_sitemap_last_modified_date(self):
"""
The Last-Modified header should support dates (without time).
"""
response = self.client.get('/lastmod/date-sitemap.xml')
self.assertEqual(response['Last-Modified'], 'Wed, 13 Mar 2013 00:00:00 GMT')
def test_sitemap_last_modified_tz(self):
"""
The Last-Modified header should be converted from timezone aware dates
to GMT.
"""
response = self.client.get('/lastmod/tz-sitemap.xml')
self.assertEqual(response['Last-Modified'], 'Wed, 13 Mar 2013 15:00:00 GMT')
def test_sitemap_last_modified_missing(self):
"Tests that Last-Modified header is missing when sitemap has no lastmod"
response = self.client.get('/generic/sitemap.xml')
self.assertFalse(response.has_header('Last-Modified'))
def test_sitemap_last_modified_mixed(self):
"Tests that Last-Modified header is omitted when lastmod not on all items"
response = self.client.get('/lastmod-mixed/sitemap.xml')
self.assertFalse(response.has_header('Last-Modified'))
def test_sitemaps_lastmod_mixed_ascending_last_modified_missing(self):
"""
The Last-Modified header is omitted when lastmod isn't found in all
sitemaps. Test sitemaps are sorted by lastmod in ascending order.
"""
response = self.client.get('/lastmod-sitemaps/mixed-ascending.xml')
self.assertFalse(response.has_header('Last-Modified'))
def test_sitemaps_lastmod_mixed_descending_last_modified_missing(self):
"""
The Last-Modified header is omitted when lastmod isn't found in all
sitemaps. Test sitemaps are sorted by lastmod in descending order.
"""
response = self.client.get('/lastmod-sitemaps/mixed-descending.xml')
self.assertFalse(response.has_header('Last-Modified'))
def test_sitemaps_lastmod_ascending(self):
"""
The Last-Modified header is set to the most recent sitemap lastmod.
Test sitemaps are sorted by lastmod in ascending order.
"""
response = self.client.get('/lastmod-sitemaps/ascending.xml')
self.assertEqual(response['Last-Modified'], 'Sat, 20 Apr 2013 05:00:00 GMT')
def test_sitemaps_lastmod_descending(self):
"""
The Last-Modified header is set to the most recent sitemap lastmod.
Test sitemaps are sorted by lastmod in descending order.
"""
response = self.client.get('/lastmod-sitemaps/descending.xml')
self.assertEqual(response['Last-Modified'], 'Sat, 20 Apr 2013 05:00:00 GMT')
@skipUnless(settings.USE_I18N, "Internationalization is not enabled")
@override_settings(USE_L10N=True)
def test_localized_priority(self):
"The priority value should not be localized (Refs #14164)"
activate('fr')
self.assertEqual('0,3', localize(0.3))
# Retrieve the sitemap. Check that priorities
# haven't been rendered in localized format
response = self.client.get('/simple/sitemap.xml')
self.assertContains(response, '<priority>0.5</priority>')
self.assertContains(response, '<lastmod>%s</lastmod>' % date.today())
deactivate()
@modify_settings(INSTALLED_APPS={'remove': 'django.contrib.sites'})
def test_requestsite_sitemap(self):
# Make sure hitting the flatpages sitemap without the sites framework
# installed doesn't raise an exception.
response = self.client.get('/simple/sitemap.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://testserver/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % date.today()
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
@skipUnless(apps.is_installed('django.contrib.sites'),
"django.contrib.sites app not installed.")
def test_sitemap_get_urls_no_site_1(self):
"""
Check we get ImproperlyConfigured if we don't pass a site object to
Sitemap.get_urls and no Site objects exist
"""
Site.objects.all().delete()
with self.assertRaises(ImproperlyConfigured):
Sitemap().get_urls()
@modify_settings(INSTALLED_APPS={'remove': 'django.contrib.sites'})
def test_sitemap_get_urls_no_site_2(self):
"""
Check we get ImproperlyConfigured when we don't pass a site object to
Sitemap.get_urls if Site objects exist, but the sites framework is not
actually installed.
"""
with self.assertRaises(ImproperlyConfigured):
Sitemap().get_urls()
def test_sitemap_item(self):
"""
Check to make sure that the raw item is included with each
Sitemap.get_url() url result.
"""
test_sitemap = GenericSitemap({'queryset': TestModel.objects.order_by('pk').all()})
def is_testmodel(url):
return isinstance(url['item'], TestModel)
item_in_url_info = all(map(is_testmodel, test_sitemap.get_urls()))
self.assertTrue(item_in_url_info)
def test_cached_sitemap_index(self):
"""
Check that a cached sitemap index can be rendered (#2713).
"""
response = self.client.get('/cached/index.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>%s/cached/sitemap-simple.xml</loc></sitemap>
</sitemapindex>
""" % self.base_url
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_x_robots_sitemap(self):
response = self.client.get('/simple/index.xml')
self.assertEqual(response['X-Robots-Tag'], 'noindex, noodp, noarchive')
response = self.client.get('/simple/sitemap.xml')
self.assertEqual(response['X-Robots-Tag'], 'noindex, noodp, noarchive')
def test_empty_sitemap(self):
response = self.client.get('/empty/sitemap.xml')
self.assertEqual(response.status_code, 200)
@override_settings(LANGUAGES=(('en', 'English'), ('pt', 'Portuguese')))
def test_simple_i18nsitemap_index(self):
"A simple i18n sitemap index can be rendered"
response = self.client.get('/simple/i18n.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>{0}/en/i18n/testmodel/{1}/</loc><changefreq>never</changefreq><priority>0.5</priority></url><url><loc>{0}/pt/i18n/testmodel/{1}/</loc><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""".format(self.base_url, self.i18n_model.pk)
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_sitemap_without_entries(self):
response = self.client.get('/sitemap-without-entries/sitemap.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
</urlset>"""
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
| bsd-3-clause |
geekboxzone/lollipop_external_chromium_org | chrome/common/extensions/docs/server2/branch_utility_test.py | 77 | 7693 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import sys
import unittest
from branch_utility import BranchUtility, ChannelInfo
from fake_url_fetcher import FakeUrlFetcher
from object_store_creator import ObjectStoreCreator
from test_util import Server2Path
class BranchUtilityTest(unittest.TestCase):
def setUp(self):
self._branch_util = BranchUtility(
os.path.join('branch_utility', 'first.json'),
os.path.join('branch_utility', 'second.json'),
FakeUrlFetcher(Server2Path('test_data')),
ObjectStoreCreator.ForTest())
def testSplitChannelNameFromPath(self):
self.assertEquals(('stable', 'extensions/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'stable/extensions/stuff.html'))
self.assertEquals(('dev', 'extensions/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'dev/extensions/stuff.html'))
self.assertEquals(('beta', 'extensions/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'beta/extensions/stuff.html'))
self.assertEquals(('master', 'extensions/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'master/extensions/stuff.html'))
self.assertEquals((None, 'extensions/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'extensions/stuff.html'))
self.assertEquals((None, 'apps/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'apps/stuff.html'))
self.assertEquals((None, 'extensions/dev/stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'extensions/dev/stuff.html'))
self.assertEquals((None, 'stuff.html'),
self._branch_util.SplitChannelNameFromPath(
'stuff.html'))
def testNewestChannel(self):
self.assertEquals('master',
self._branch_util.NewestChannel(('master', 'dev', 'beta', 'stable')))
self.assertEquals('master',
self._branch_util.NewestChannel(('stable', 'beta', 'dev', 'master')))
self.assertEquals('dev',
self._branch_util.NewestChannel(('stable', 'beta', 'dev')))
self.assertEquals('dev',
self._branch_util.NewestChannel(('dev', 'beta', 'stable')))
self.assertEquals('beta',
self._branch_util.NewestChannel(('beta', 'stable')))
self.assertEquals('beta',
self._branch_util.NewestChannel(('stable', 'beta')))
self.assertEquals('stable', self._branch_util.NewestChannel(('stable',)))
self.assertEquals('beta', self._branch_util.NewestChannel(('beta',)))
self.assertEquals('dev', self._branch_util.NewestChannel(('dev',)))
self.assertEquals('master', self._branch_util.NewestChannel(('master',)))
def testNewer(self):
oldest_stable_info = ChannelInfo('stable', '963', 17)
older_stable_info = ChannelInfo('stable', '1025', 18)
old_stable_info = ChannelInfo('stable', '1084', 19)
sort_of_old_stable_info = ChannelInfo('stable', '1500', 28)
stable_info = ChannelInfo('stable', '1547', 29)
beta_info = ChannelInfo('beta', '1599', 30)
dev_info = ChannelInfo('dev', '1612', 31)
master_info = ChannelInfo('master', 'master', 'master')
self.assertEquals(older_stable_info,
self._branch_util.Newer(oldest_stable_info))
self.assertEquals(old_stable_info,
self._branch_util.Newer(older_stable_info))
self.assertEquals(stable_info,
self._branch_util.Newer(sort_of_old_stable_info))
self.assertEquals(beta_info, self._branch_util.Newer(stable_info))
self.assertEquals(dev_info, self._branch_util.Newer(beta_info))
self.assertEquals(master_info, self._branch_util.Newer(dev_info))
# Test the upper limit.
self.assertEquals(None, self._branch_util.Newer(master_info))
def testOlder(self):
master_info = ChannelInfo('master', 'master', 'master')
dev_info = ChannelInfo('dev', '1612', 31)
beta_info = ChannelInfo('beta', '1599', 30)
stable_info = ChannelInfo('stable', '1547', 29)
old_stable_info = ChannelInfo('stable', '1500', 28)
older_stable_info = ChannelInfo('stable', '1453', 27)
oldest_stable_info = ChannelInfo('stable', '396', 5)
self.assertEquals(dev_info, self._branch_util.Older(master_info))
self.assertEquals(beta_info, self._branch_util.Older(dev_info))
self.assertEquals(stable_info, self._branch_util.Older(beta_info))
self.assertEquals(old_stable_info, self._branch_util.Older(stable_info))
self.assertEquals(older_stable_info,
self._branch_util.Older(old_stable_info))
# Test the lower limit.
self.assertEquals(None, self._branch_util.Older(oldest_stable_info))
def testGetChannelInfo(self):
master_info = ChannelInfo('master', 'master', 'master')
self.assertEquals(master_info, self._branch_util.GetChannelInfo('master'))
dev_info = ChannelInfo('dev', '1612', 31)
self.assertEquals(dev_info, self._branch_util.GetChannelInfo('dev'))
beta_info = ChannelInfo('beta', '1599', 30)
self.assertEquals(beta_info, self._branch_util.GetChannelInfo('beta'))
stable_info = ChannelInfo('stable', '1547', 29)
self.assertEquals(stable_info, self._branch_util.GetChannelInfo('stable'))
def testGetLatestVersionNumber(self):
self.assertEquals(37, self._branch_util.GetLatestVersionNumber())
def testGetBranchForVersion(self):
self.assertEquals('1500',
self._branch_util.GetBranchForVersion(28))
self.assertEquals('1453',
self._branch_util.GetBranchForVersion(27))
self.assertEquals('1410',
self._branch_util.GetBranchForVersion(26))
self.assertEquals('1364',
self._branch_util.GetBranchForVersion(25))
self.assertEquals('1312',
self._branch_util.GetBranchForVersion(24))
self.assertEquals('1271',
self._branch_util.GetBranchForVersion(23))
self.assertEquals('1229',
self._branch_util.GetBranchForVersion(22))
self.assertEquals('1180',
self._branch_util.GetBranchForVersion(21))
self.assertEquals('1132',
self._branch_util.GetBranchForVersion(20))
self.assertEquals('1084',
self._branch_util.GetBranchForVersion(19))
self.assertEquals('1025',
self._branch_util.GetBranchForVersion(18))
self.assertEquals('963',
self._branch_util.GetBranchForVersion(17))
self.assertEquals('696',
self._branch_util.GetBranchForVersion(11))
self.assertEquals('396',
self._branch_util.GetBranchForVersion(5))
def testGetChannelForVersion(self):
self.assertEquals('master',
self._branch_util.GetChannelForVersion('master'))
self.assertEquals('dev',
self._branch_util.GetChannelForVersion(31))
self.assertEquals('beta',
self._branch_util.GetChannelForVersion(30))
self.assertEquals('stable',
self._branch_util.GetChannelForVersion(26))
self.assertEquals('stable',
self._branch_util.GetChannelForVersion(22))
self.assertEquals('stable',
self._branch_util.GetChannelForVersion(18))
self.assertEquals('stable',
self._branch_util.GetChannelForVersion(14))
self.assertEquals(None,
self._branch_util.GetChannelForVersion(32))
self.assertEquals(None,
self._branch_util.GetChannelForVersion(42))
if __name__ == '__main__':
unittest.main()
| bsd-3-clause |
jeboo/kernel_JB_ZSLS6_i777 | Documentation/networking/cxacru-cf.py | 14668 | 1626 | #!/usr/bin/env python
# Copyright 2009 Simon Arlott
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; either version 2 of the License, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# Usage: cxacru-cf.py < cxacru-cf.bin
# Output: values string suitable for the sysfs adsl_config attribute
#
# Warning: cxacru-cf.bin with MD5 hash cdbac2689969d5ed5d4850f117702110
# contains mis-aligned values which will stop the modem from being able
# to make a connection. If the first and last two bytes are removed then
# the values become valid, but the modulation will be forced to ANSI
# T1.413 only which may not be appropriate.
#
# The original binary format is a packed list of le32 values.
import sys
import struct
i = 0
while True:
buf = sys.stdin.read(4)
if len(buf) == 0:
break
elif len(buf) != 4:
sys.stdout.write("\n")
sys.stderr.write("Error: read {0} not 4 bytes\n".format(len(buf)))
sys.exit(1)
if i > 0:
sys.stdout.write(" ")
sys.stdout.write("{0:x}={1}".format(i, struct.unpack("<I", buf)[0]))
i += 1
sys.stdout.write("\n")
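The comment above describes the binary format as "a packed list of le32 values". A small round-trip sketch of that format (the helper names are illustrative, not part of the original script):

```python
import struct

def pack_config(values):
    # Inverse of the reader above: a packed list of little-endian
    # 32-bit values, i.e. the cxacru-cf.bin wire format.
    return b''.join(struct.pack('<I', v) for v in values)

def unpack_config(data):
    # Mirrors the read loop above, consuming 4 bytes per value.
    return [struct.unpack('<I', data[i:i + 4])[0]
            for i in range(0, len(data), 4)]

print(unpack_config(pack_config([7, 0x1f4])))  # -> [7, 500]
```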
| gpl-2.0 |
noslenfa/tdjangorest | uw/lib/python2.7/site-packages/IPython/parallel/engine/engine.py | 2 | 13047 | """A simple engine that talks to a controller over 0MQ.
it handles registration, etc. and launches a kernel
connected to the Controller's Schedulers.
Authors:
* Min RK
"""
#-----------------------------------------------------------------------------
# Copyright (C) 2010-2011 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
from __future__ import print_function
import sys
import time
from getpass import getpass
import zmq
from zmq.eventloop import ioloop, zmqstream
from IPython.external.ssh import tunnel
# internal
from IPython.utils.localinterfaces import LOCALHOST
from IPython.utils.traitlets import (
Instance, Dict, Integer, Type, Float, Unicode, CBytes, Bool
)
from IPython.utils.py3compat import cast_bytes
from IPython.parallel.controller.heartmonitor import Heart
from IPython.parallel.factory import RegistrationFactory
from IPython.parallel.util import disambiguate_url
from IPython.kernel.zmq.session import Message
from IPython.kernel.zmq.ipkernel import Kernel
from IPython.kernel.zmq.kernelapp import IPKernelApp
class EngineFactory(RegistrationFactory):
"""IPython engine"""
# configurables:
out_stream_factory=Type('IPython.kernel.zmq.iostream.OutStream', config=True,
help="""The OutStream for handling stdout/err.
Typically 'IPython.kernel.zmq.iostream.OutStream'""")
display_hook_factory=Type('IPython.kernel.zmq.displayhook.ZMQDisplayHook', config=True,
help="""The class for handling displayhook.
Typically 'IPython.kernel.zmq.displayhook.ZMQDisplayHook'""")
location=Unicode(config=True,
help="""The location (an IP address) of the controller. This is
used for disambiguating URLs, to determine whether
loopback should be used to connect or the public address.""")
timeout=Float(5.0, config=True,
help="""The time (in seconds) to wait for the Controller to respond
to registration requests before giving up.""")
max_heartbeat_misses=Integer(50, config=True,
help="""The maximum number of times a check for the heartbeat ping of a
controller can be missed before shutting down the engine.
If set to 0, the check is disabled.""")
sshserver=Unicode(config=True,
help="""The SSH server to use for tunneling connections to the Controller.""")
sshkey=Unicode(config=True,
help="""The SSH private key file to use when tunneling connections to the Controller.""")
paramiko=Bool(sys.platform == 'win32', config=True,
help="""Whether to use paramiko instead of openssh for tunnels.""")
# not configurable:
connection_info = Dict()
user_ns = Dict()
id = Integer(allow_none=True)
registrar = Instance('zmq.eventloop.zmqstream.ZMQStream')
kernel = Instance(Kernel)
hb_check_period=Integer()
# States for the heartbeat monitoring
# Initial values for monitored and pinged must satisfy "monitored > pinged == False" so that
# during the first check no "missed" ping is reported. Must be floats for Python 3 compatibility.
_hb_last_pinged = 0.0
_hb_last_monitored = 0.0
_hb_missed_beats = 0
# The zmq Stream which receives the pings from the Heart
_hb_listener = None
bident = CBytes()
ident = Unicode()
def _ident_changed(self, name, old, new):
self.bident = cast_bytes(new)
using_ssh=Bool(False)
def __init__(self, **kwargs):
super(EngineFactory, self).__init__(**kwargs)
self.ident = self.session.session
def init_connector(self):
"""construct connection function, which handles tunnels."""
self.using_ssh = bool(self.sshkey or self.sshserver)
if self.sshkey and not self.sshserver:
# We are using ssh directly to the controller, tunneling localhost to localhost
self.sshserver = self.url.split('://')[1].split(':')[0]
if self.using_ssh:
if tunnel.try_passwordless_ssh(self.sshserver, self.sshkey, self.paramiko):
password = False
else:
password = getpass("SSH Password for %s: "%self.sshserver)
else:
password = False
def connect(s, url):
url = disambiguate_url(url, self.location)
if self.using_ssh:
self.log.debug("Tunneling connection to %s via %s", url, self.sshserver)
return tunnel.tunnel_connection(s, url, self.sshserver,
keyfile=self.sshkey, paramiko=self.paramiko,
password=password,
)
else:
return s.connect(url)
def maybe_tunnel(url):
"""like connect, but don't complete the connection (for use by heartbeat)"""
url = disambiguate_url(url, self.location)
if self.using_ssh:
self.log.debug("Tunneling connection to %s via %s", url, self.sshserver)
url,tunnelobj = tunnel.open_tunnel(url, self.sshserver,
keyfile=self.sshkey, paramiko=self.paramiko,
password=password,
)
return str(url)
return connect, maybe_tunnel
def register(self):
"""send the registration_request"""
self.log.info("Registering with controller at %s"%self.url)
ctx = self.context
connect,maybe_tunnel = self.init_connector()
reg = ctx.socket(zmq.DEALER)
reg.setsockopt(zmq.IDENTITY, self.bident)
connect(reg, self.url)
self.registrar = zmqstream.ZMQStream(reg, self.loop)
content = dict(uuid=self.ident)
self.registrar.on_recv(lambda msg: self.complete_registration(msg, connect, maybe_tunnel))
# print (self.session.key)
self.session.send(self.registrar, "registration_request", content=content)
def _report_ping(self, msg):
"""Callback for when the heartmonitor.Heart receives a ping"""
#self.log.debug("Received a ping: %s", msg)
self._hb_last_pinged = time.time()
def complete_registration(self, msg, connect, maybe_tunnel):
# print msg
self._abort_dc.stop()
ctx = self.context
loop = self.loop
identity = self.bident
idents,msg = self.session.feed_identities(msg)
msg = self.session.unserialize(msg)
content = msg['content']
info = self.connection_info
def url(key):
"""get zmq url for given channel"""
return str(info["interface"] + ":%i" % info[key])
if content['status'] == 'ok':
self.id = int(content['id'])
# launch heartbeat
# possibly forward hb ports with tunnels
hb_ping = maybe_tunnel(url('hb_ping'))
hb_pong = maybe_tunnel(url('hb_pong'))
hb_monitor = None
if self.max_heartbeat_misses > 0:
# Add a monitor socket which will record the last time a ping was seen
mon = self.context.socket(zmq.SUB)
mport = mon.bind_to_random_port('tcp://%s' % LOCALHOST)
mon.setsockopt(zmq.SUBSCRIBE, b"")
self._hb_listener = zmqstream.ZMQStream(mon, self.loop)
self._hb_listener.on_recv(self._report_ping)
hb_monitor = "tcp://%s:%i" % (LOCALHOST, mport)
heart = Heart(hb_ping, hb_pong, hb_monitor, heart_id=identity)
heart.start()
# create Shell Connections (MUX, Task, etc.):
shell_addrs = url('mux'), url('task')
# Use only one shell stream for mux and tasks
stream = zmqstream.ZMQStream(ctx.socket(zmq.ROUTER), loop)
stream.setsockopt(zmq.IDENTITY, identity)
shell_streams = [stream]
for addr in shell_addrs:
connect(stream, addr)
# control stream:
control_addr = url('control')
control_stream = zmqstream.ZMQStream(ctx.socket(zmq.ROUTER), loop)
control_stream.setsockopt(zmq.IDENTITY, identity)
connect(control_stream, control_addr)
# create iopub stream:
iopub_addr = url('iopub')
iopub_socket = ctx.socket(zmq.PUB)
iopub_socket.setsockopt(zmq.IDENTITY, identity)
connect(iopub_socket, iopub_addr)
# disable history:
self.config.HistoryManager.hist_file = ':memory:'
# Redirect output streams and set a display hook.
if self.out_stream_factory:
sys.stdout = self.out_stream_factory(self.session, iopub_socket, u'stdout')
sys.stdout.topic = cast_bytes('engine.%i.stdout' % self.id)
sys.stderr = self.out_stream_factory(self.session, iopub_socket, u'stderr')
sys.stderr.topic = cast_bytes('engine.%i.stderr' % self.id)
if self.display_hook_factory:
sys.displayhook = self.display_hook_factory(self.session, iopub_socket)
sys.displayhook.topic = cast_bytes('engine.%i.pyout' % self.id)
self.kernel = Kernel(parent=self, int_id=self.id, ident=self.ident, session=self.session,
control_stream=control_stream, shell_streams=shell_streams, iopub_socket=iopub_socket,
loop=loop, user_ns=self.user_ns, log=self.log)
self.kernel.shell.display_pub.topic = cast_bytes('engine.%i.displaypub' % self.id)
# periodically check the heartbeat pings of the controller
# Should be started here and not in "start()" so that the right period can be taken
# from the hubs HeartBeatMonitor.period
if self.max_heartbeat_misses > 0:
# Use a slightly bigger check period than the hub signal period so we do not warn unnecessarily
self.hb_check_period = int(content['hb_period'])+10
self.log.info("Starting to monitor the heartbeat signal from the hub every %i ms.", self.hb_check_period)
self._hb_reporter = ioloop.PeriodicCallback(self._hb_monitor, self.hb_check_period, self.loop)
self._hb_reporter.start()
else:
self.log.info("Monitoring of the heartbeat signal from the hub is not enabled.")
# FIXME: This is a hack until IPKernelApp and IPEngineApp can be fully merged
app = IPKernelApp(parent=self, shell=self.kernel.shell, kernel=self.kernel, log=self.log)
app.init_profile_dir()
app.init_code()
self.kernel.start()
else:
self.log.fatal("Registration Failed: %s" % msg)
raise Exception("Registration Failed: %s" % msg)
self.log.info("Completed registration with id %i" % self.id)
def abort(self):
self.log.fatal("Registration timed out after %.1f seconds" % self.timeout)
if self.url.startswith('127.'):
self.log.fatal("""
If the controller and engines are not on the same machine,
you will have to instruct the controller to listen on an external IP (in ipcontroller_config.py):
c.HubFactory.ip='*' # for all interfaces, internal and external
c.HubFactory.ip='192.168.1.101' # or any interface that the engines can see
or tunnel connections via ssh.
""")
self.session.send(self.registrar, "unregistration_request", content=dict(id=self.id))
time.sleep(1)
sys.exit(255)
def _hb_monitor(self):
"""Callback to monitor the heartbeat from the controller"""
self._hb_listener.flush()
if self._hb_last_monitored > self._hb_last_pinged:
self._hb_missed_beats += 1
self.log.warn("No heartbeat in the last %s ms (%s time(s) in a row).", self.hb_check_period, self._hb_missed_beats)
else:
#self.log.debug("Heartbeat received (after missing %s beats).", self._hb_missed_beats)
self._hb_missed_beats = 0
if self._hb_missed_beats >= self.max_heartbeat_misses:
self.log.fatal("Maximum number of heartbeat misses reached (%s times %s ms), shutting down.",
self.max_heartbeat_misses, self.hb_check_period)
self.session.send(self.registrar, "unregistration_request", content=dict(id=self.id))
self.loop.stop()
self._hb_last_monitored = time.time()
def start(self):
dc = ioloop.DelayedCallback(self.register, 0, self.loop)
dc.start()
self._abort_dc = ioloop.DelayedCallback(self.abort, self.timeout*1000, self.loop)
self._abort_dc.start()
| apache-2.0 |
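The heartbeat bookkeeping in `_hb_monitor` above (compare the last ping timestamp against the last check timestamp and count consecutive misses) can be sketched as a standalone class. This is a minimal sketch with an injectable clock, not the IPython implementation; the `HeartbeatWatch` name and its default threshold are invented for illustration:

```python
import time


class HeartbeatWatch:
    """Count consecutive missed pings, mirroring the _hb_monitor logic."""

    def __init__(self, max_misses=3, clock=time.time):
        self.max_misses = max_misses
        self.clock = clock  # injectable for deterministic tests
        # Initial values satisfy "checked > pinged == False", so the very
        # first check never reports a miss (same invariant as the engine).
        self.last_pinged = 0.0
        self.last_checked = 0.0
        self.missed = 0

    def ping(self):
        # Call whenever a heartbeat ping arrives (cf. _report_ping).
        self.last_pinged = self.clock()

    def check(self):
        """Return True while the peer is considered alive (cf. _hb_monitor)."""
        if self.last_checked > self.last_pinged:
            # No ping arrived since the previous check: one more miss.
            self.missed += 1
        else:
            self.missed = 0
        self.last_checked = self.clock()
        return self.missed < self.max_misses
```

Wired to a periodic callback, `check()` returning False is the point where the engine above sends its `unregistration_request` and stops the loop.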
kaiweifan/vse-lbaas-plugin-poc | quantum/plugins/nicira/nicira_nvp_plugin/nicira_models.py | 7 | 2493 | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012 Nicira, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column, Enum, ForeignKey, Integer, String
from quantum.db.models_v2 import model_base
class NvpNetworkBinding(model_base.BASEV2):
"""Represents a binding of a virtual network with a transport zone.
This model class associates a Quantum network with a transport zone;
optionally a vlan ID might be used if the binding type is 'bridge'
"""
__tablename__ = 'nvp_network_bindings'
network_id = Column(String(36),
ForeignKey('networks.id', ondelete="CASCADE"),
primary_key=True)
# 'flat', 'vlan', stt' or 'gre'
binding_type = Column(Enum('flat', 'vlan', 'stt', 'gre', 'l3_ext',
name='nvp_network_bindings_binding_type'),
nullable=False)
phy_uuid = Column(String(36))
vlan_id = Column(Integer)
def __init__(self, network_id, binding_type, phy_uuid, vlan_id):
self.network_id = network_id
self.binding_type = binding_type
self.phy_uuid = phy_uuid
self.vlan_id = vlan_id
def __repr__(self):
return "<NetworkBinding(%s,%s,%s,%s)>" % (self.network_id,
self.binding_type,
self.phy_uuid,
self.vlan_id)
class QuantumNvpPortMapping(model_base.BASEV2):
"""Represents the mapping between quantum and nvp port uuids."""
__tablename__ = 'quantum_nvp_port_mapping'
quantum_id = Column(String(36),
ForeignKey('ports.id', ondelete="CASCADE"),
primary_key=True)
nvp_id = Column(String(36))
def __init__(self, quantum_id, nvp_id):
self.quantum_id = quantum_id
self.nvp_id = nvp_id
| apache-2.0 |
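The two model classes above are thin declarative mappings; stripped of the SQLAlchemy base, the constructor/`__repr__` contract of `NvpNetworkBinding` looks like this (a plain-Python stand-in for illustration, not importable from the plugin):

```python
class NetworkBindingSketch:
    """Plain-Python stand-in for NvpNetworkBinding (no SQLAlchemy machinery)."""

    def __init__(self, network_id, binding_type, phy_uuid, vlan_id):
        self.network_id = network_id
        # One of 'flat', 'vlan', 'stt', 'gre' or 'l3_ext' in the real model
        self.binding_type = binding_type
        self.phy_uuid = phy_uuid
        self.vlan_id = vlan_id  # only meaningful for 'vlan' bindings

    def __repr__(self):
        # Same format string as the mapped class above.
        return "<NetworkBinding(%s,%s,%s,%s)>" % (
            self.network_id, self.binding_type, self.phy_uuid, self.vlan_id)
```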
ujenmr/ansible | lib/ansible/modules/network/nxos/nxos_pim_rp_address.py | 68 | 8038 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_pim_rp_address
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages configuration of an PIM static RP address instance.
description:
- Manages configuration of an Protocol Independent Multicast (PIM) static
rendezvous point (RP) address instance.
author: Gabriele Gerbino (@GGabriele)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- C(state=absent) is currently not supported on all platforms.
options:
rp_address:
description:
- Configures a Protocol Independent Multicast (PIM) static
rendezvous point (RP) address. Valid values are
unicast addresses.
required: true
group_list:
description:
- Group range for static RP. Valid values are multicast addresses.
prefix_list:
description:
- Prefix list policy for static RP. Valid values are prefix-list
policy names.
route_map:
description:
- Route map policy for static RP. Valid values are route-map
policy names.
bidir:
description:
- Group range is treated in PIM bidirectional mode.
type: bool
state:
description:
- Specify desired state of the resource.
required: true
default: present
choices: ['present','absent','default']
'''
EXAMPLES = '''
- nxos_pim_rp_address:
rp_address: "10.1.1.20"
state: present
'''
RETURN = '''
commands:
description: commands sent to the device
returned: always
type: list
sample: ["ip pim rp-address 10.1.1.20"]
'''
import re
from ansible.module_utils.network.nxos.nxos import get_config, load_config
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.common.config import CustomNetworkConfig
def get_existing(module, args, gl):
existing = {}
config = str(get_config(module))
address = module.params['rp_address']
pim_address_re = r'ip pim rp-address (?P<value>.*)$'
for line in re.findall(pim_address_re, config, re.M):
values = line.split()
if values[0] != address:
continue
if gl and 'group-list' not in line:
continue
elif not gl and 'group-list' in line:
if '224.0.0.0/4' not in line: # ignore default group-list
continue
existing['bidir'] = existing.get('bidir') or 'bidir' in line
if len(values) > 2:
value = values[2]
if values[1] == 'route-map':
existing['route_map'] = value
elif values[1] == 'prefix-list':
existing['prefix_list'] = value
elif values[1] == 'group-list':
if value != '224.0.0.0/4': # ignore default group-list
existing['group_list'] = value
return existing
def state_present(module, existing, proposed, candidate):
address = module.params['rp_address']
command = 'ip pim rp-address {0}'.format(address)
if module.params['group_list'] and not proposed.get('group_list'):
command += ' group-list ' + module.params['group_list']
if module.params['prefix_list']:
if not proposed.get('prefix_list'):
command += ' prefix-list ' + module.params['prefix_list']
if module.params['route_map']:
if not proposed.get('route_map'):
command += ' route-map ' + module.params['route_map']
commands = build_command(proposed, command)
if commands:
candidate.add(commands, parents=[])
def build_command(param_dict, command):
for param in ['group_list', 'prefix_list', 'route_map']:
if param_dict.get(param):
command += ' {0} {1}'.format(
param.replace('_', '-'), param_dict.get(param))
if param_dict.get('bidir'):
command += ' bidir'
return [command]
def state_absent(module, existing, candidate):
address = module.params['rp_address']
commands = []
command = 'no ip pim rp-address {0}'.format(address)
if module.params['group_list'] == existing.get('group_list'):
commands = build_command(existing, command)
elif not module.params['group_list']:
commands = [command]
if commands:
candidate.add(commands, parents=[])
def get_proposed(pargs, existing):
proposed = {}
for key, value in pargs.items():
if key != 'rp_address':
if str(value).lower() == 'true':
value = True
elif str(value).lower() == 'false':
value = False
if existing.get(key) != value:
proposed[key] = value
return proposed
def main():
argument_spec = dict(
rp_address=dict(required=True, type='str'),
group_list=dict(required=False, type='str'),
prefix_list=dict(required=False, type='str'),
route_map=dict(required=False, type='str'),
bidir=dict(required=False, type='bool'),
state=dict(choices=['present', 'absent'], default='present', required=False),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=[['group_list', 'route_map'],
['group_list', 'prefix_list'],
['route_map', 'prefix_list']],
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
result = {'changed': False, 'commands': [], 'warnings': warnings}
state = module.params['state']
args = [
'rp_address',
'group_list',
'prefix_list',
'route_map',
'bidir'
]
proposed_args = dict((k, v) for k, v in module.params.items()
if v is not None and k in args)
if module.params['group_list']:
existing = get_existing(module, args, True)
proposed = get_proposed(proposed_args, existing)
else:
existing = get_existing(module, args, False)
proposed = get_proposed(proposed_args, existing)
candidate = CustomNetworkConfig(indent=3)
if state == 'present' and (proposed or not existing):
state_present(module, existing, proposed, candidate)
elif state == 'absent' and existing:
state_absent(module, existing, candidate)
if candidate:
candidate = candidate.items_text()
result['commands'] = candidate
result['changed'] = True
msgs = load_config(module, candidate, True)
if msgs:
for item in msgs:
if item:
if isinstance(item, dict):
err_str = item['clierror']
else:
err_str = item
if 'No policy was configured' in err_str:
if state == 'absent':
addr = module.params['rp_address']
new_cmd = 'no ip pim rp-address {0}'.format(addr)
load_config(module, new_cmd)
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
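`build_command` in the module above is a pure function; extracted standalone (same logic, no Ansible imports — the `build_pim_command` name is ours) it composes the CLI line like this:

```python
def build_pim_command(params, base):
    """Append group-list / prefix-list / route-map options, then bidir.

    Mirrors build_command() in the module above: parameter keys use
    underscores, while the emitted NX-OS options use hyphens.
    """
    for param in ('group_list', 'prefix_list', 'route_map'):
        if params.get(param):
            base += ' {0} {1}'.format(param.replace('_', '-'), params[param])
    if params.get('bidir'):
        base += ' bidir'
    return [base]
```

As in the module, the same helper serves both `state_present` (building on an `ip pim rp-address ...` base) and `state_absent` (building on a `no ip pim rp-address ...` base).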
w1ll1am23/home-assistant | tests/components/zha/test_number.py | 8 | 5098 | """Test zha analog output."""
from unittest.mock import call, patch
import pytest
import zigpy.profiles.zha
import zigpy.types
import zigpy.zcl.clusters.general as general
import zigpy.zcl.foundation as zcl_f
from homeassistant.components.number import DOMAIN
from homeassistant.const import STATE_UNAVAILABLE
from homeassistant.setup import async_setup_component
from .common import (
async_enable_traffic,
async_test_rejoin,
find_entity_id,
send_attributes_report,
)
from tests.common import mock_coro
@pytest.fixture
def zigpy_analog_output_device(zigpy_device_mock):
"""Zigpy analog_output device."""
endpoints = {
1: {
"device_type": zigpy.profiles.zha.DeviceType.LEVEL_CONTROL_SWITCH,
"in_clusters": [general.AnalogOutput.cluster_id, general.Basic.cluster_id],
"out_clusters": [],
}
}
return zigpy_device_mock(endpoints)
async def test_number(hass, zha_device_joined_restored, zigpy_analog_output_device):
"""Test zha number platform."""
cluster = zigpy_analog_output_device.endpoints.get(1).analog_output
cluster.PLUGGED_ATTR_READS = {
"present_value": 15.0,
"max_present_value": 100.0,
"min_present_value": 0.0,
"relinquish_default": 50.0,
"resolution": 1.0,
"description": "PWM1",
"engineering_units": 98,
"application_type": 4 * 0x10000,
}
zha_device = await zha_device_joined_restored(zigpy_analog_output_device)
# one for present_value and one for the rest configuration attributes
assert cluster.read_attributes.call_count == 2
assert "max_present_value" in cluster.read_attributes.call_args[0][0]
assert "min_present_value" in cluster.read_attributes.call_args[0][0]
assert "relinquish_default" in cluster.read_attributes.call_args[0][0]
assert "resolution" in cluster.read_attributes.call_args[0][0]
assert "description" in cluster.read_attributes.call_args[0][0]
assert "engineering_units" in cluster.read_attributes.call_args[0][0]
assert "application_type" in cluster.read_attributes.call_args[0][0]
entity_id = await find_entity_id(DOMAIN, zha_device, hass)
assert entity_id is not None
await async_enable_traffic(hass, [zha_device], enabled=False)
# test that the number was created and that it is unavailable
assert hass.states.get(entity_id).state == STATE_UNAVAILABLE
# allow traffic to flow through the gateway and device
assert cluster.read_attributes.call_count == 2
await async_enable_traffic(hass, [zha_device])
await hass.async_block_till_done()
assert cluster.read_attributes.call_count == 4
# test that the state has changed from unavailable to 15.0
assert hass.states.get(entity_id).state == "15.0"
# test attributes
assert hass.states.get(entity_id).attributes.get("min") == 0.0
assert hass.states.get(entity_id).attributes.get("max") == 100.0
assert hass.states.get(entity_id).attributes.get("step") == 1.0
assert hass.states.get(entity_id).attributes.get("icon") == "mdi:percent"
assert hass.states.get(entity_id).attributes.get("unit_of_measurement") == "%"
assert (
hass.states.get(entity_id).attributes.get("friendly_name")
== "FakeManufacturer FakeModel e769900a analog_output PWM1"
)
# change value from device
assert cluster.read_attributes.call_count == 4
await send_attributes_report(hass, cluster, {0x0055: 15})
assert hass.states.get(entity_id).state == "15.0"
# update value from device
await send_attributes_report(hass, cluster, {0x0055: 20})
assert hass.states.get(entity_id).state == "20.0"
# change value from HA
with patch(
"zigpy.zcl.Cluster.write_attributes",
return_value=mock_coro([zcl_f.Status.SUCCESS, zcl_f.Status.SUCCESS]),
):
# set value via UI
await hass.services.async_call(
DOMAIN, "set_value", {"entity_id": entity_id, "value": 30.0}, blocking=True
)
assert len(cluster.write_attributes.mock_calls) == 1
assert cluster.write_attributes.call_args == call({"present_value": 30.0})
cluster.PLUGGED_ATTR_READS["present_value"] = 30.0
# test rejoin
assert cluster.read_attributes.call_count == 4
await async_test_rejoin(hass, zigpy_analog_output_device, [cluster], (1,))
assert hass.states.get(entity_id).state == "30.0"
assert cluster.read_attributes.call_count == 6
# update device value with failed attribute report
cluster.PLUGGED_ATTR_READS["present_value"] = 40.0
# validate the entity still contains old value
assert hass.states.get(entity_id).state == "30.0"
await async_setup_component(hass, "homeassistant", {})
await hass.async_block_till_done()
await hass.services.async_call(
"homeassistant", "update_entity", {"entity_id": entity_id}, blocking=True
)
assert hass.states.get(entity_id).state == "40.0"
assert cluster.read_attributes.call_count == 7
assert "present_value" in cluster.read_attributes.call_args[0][0]
| apache-2.0 |
ukanga/SickRage | lib/sqlalchemy/orm/descriptor_props.py | 78 | 24571 | # orm/descriptor_props.py
# Copyright (C) 2005-2014 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Descriptor properties are more "auxiliary" properties
that exist as configurational elements, but don't participate
as actively in the load/persist ORM loop.
"""
from .interfaces import MapperProperty, PropComparator
from .util import _none_set
from . import attributes
from .. import util, sql, exc as sa_exc, event, schema
from ..sql import expression
from . import properties
from . import query
class DescriptorProperty(MapperProperty):
""":class:`.MapperProperty` which proxies access to a
user-defined descriptor."""
doc = None
def instrument_class(self, mapper):
prop = self
class _ProxyImpl(object):
accepts_scalar_loader = False
expire_missing = True
collection = False
def __init__(self, key):
self.key = key
if hasattr(prop, 'get_history'):
def get_history(self, state, dict_,
passive=attributes.PASSIVE_OFF):
return prop.get_history(state, dict_, passive)
if self.descriptor is None:
desc = getattr(mapper.class_, self.key, None)
if mapper._is_userland_descriptor(desc):
self.descriptor = desc
if self.descriptor is None:
def fset(obj, value):
setattr(obj, self.name, value)
def fdel(obj):
delattr(obj, self.name)
def fget(obj):
return getattr(obj, self.name)
self.descriptor = property(
fget=fget,
fset=fset,
fdel=fdel,
)
proxy_attr = attributes.\
create_proxied_attribute(self.descriptor)\
(
self.parent.class_,
self.key,
self.descriptor,
lambda: self._comparator_factory(mapper),
doc=self.doc,
original_property=self
)
proxy_attr.impl = _ProxyImpl(self.key)
mapper.class_manager.instrument_attribute(self.key, proxy_attr)
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class CompositeProperty(DescriptorProperty):
"""Defines a "composite" mapped attribute, representing a collection
of columns as one attribute.
:class:`.CompositeProperty` is constructed using the :func:`.composite`
function.
.. seealso::
:ref:`mapper_composite`
"""
def __init__(self, class_, *attrs, **kwargs):
"""Return a composite column-based property for use with a Mapper.
See the mapping documentation section :ref:`mapper_composite` for a full
usage example.
The :class:`.MapperProperty` returned by :func:`.composite`
is the :class:`.CompositeProperty`.
:param class\_:
The "composite type" class.
:param \*cols:
List of Column objects to be mapped.
:param active_history=False:
When ``True``, indicates that the "previous" value for a
scalar attribute should be loaded when replaced, if not
already loaded. See the same flag on :func:`.column_property`.
.. versionchanged:: 0.7
This flag specifically becomes meaningful
- previously it was a placeholder.
:param group:
A group name for this property when marked as deferred.
:param deferred:
When True, the column property is "deferred", meaning that it does not
load immediately, and is instead loaded when the attribute is first
accessed on an instance. See also :func:`~sqlalchemy.orm.deferred`.
:param comparator_factory: a class which extends
:class:`.CompositeProperty.Comparator` which provides custom SQL clause
generation for comparison operations.
:param doc:
optional string that will be applied as the doc on the
class-bound descriptor.
:param info: Optional data dictionary which will be populated into the
:attr:`.MapperProperty.info` attribute of this object.
.. versionadded:: 0.8
:param extension:
an :class:`.AttributeExtension` instance,
or list of extensions, which will be prepended to the list of
attribute listeners for the resulting descriptor placed on the class.
**Deprecated.** Please see :class:`.AttributeEvents`.
"""
self.attrs = attrs
self.composite_class = class_
self.active_history = kwargs.get('active_history', False)
self.deferred = kwargs.get('deferred', False)
self.group = kwargs.get('group', None)
self.comparator_factory = kwargs.pop('comparator_factory',
self.__class__.Comparator)
if 'info' in kwargs:
self.info = kwargs.pop('info')
util.set_creation_order(self)
self._create_descriptor()
def instrument_class(self, mapper):
super(CompositeProperty, self).instrument_class(mapper)
self._setup_event_handlers()
def do_init(self):
"""Initialization which occurs after the :class:`.CompositeProperty`
has been associated with its parent mapper.
"""
self._setup_arguments_on_columns()
def _create_descriptor(self):
"""Create the Python descriptor that will serve as
the access point on instances of the mapped class.
"""
def fget(instance):
dict_ = attributes.instance_dict(instance)
state = attributes.instance_state(instance)
if self.key not in dict_:
# key not present. Iterate through related
# attributes, retrieve their values. This
# ensures they all load.
values = [
getattr(instance, key)
for key in self._attribute_keys
]
# current expected behavior here is that the composite is
# created on access if the object is persistent or if
# col attributes have non-None. This would be better
# if the composite were created unconditionally,
# but that would be a behavioral change.
if self.key not in dict_ and (
state.key is not None or
not _none_set.issuperset(values)
):
dict_[self.key] = self.composite_class(*values)
state.manager.dispatch.refresh(state, None, [self.key])
return dict_.get(self.key, None)
def fset(instance, value):
dict_ = attributes.instance_dict(instance)
state = attributes.instance_state(instance)
attr = state.manager[self.key]
previous = dict_.get(self.key, attributes.NO_VALUE)
for fn in attr.dispatch.set:
value = fn(state, value, previous, attr.impl)
dict_[self.key] = value
if value is None:
for key in self._attribute_keys:
setattr(instance, key, None)
else:
for key, value in zip(
self._attribute_keys,
value.__composite_values__()):
setattr(instance, key, value)
def fdel(instance):
state = attributes.instance_state(instance)
dict_ = attributes.instance_dict(instance)
previous = dict_.pop(self.key, attributes.NO_VALUE)
attr = state.manager[self.key]
attr.dispatch.remove(state, previous, attr.impl)
for key in self._attribute_keys:
setattr(instance, key, None)
self.descriptor = property(fget, fset, fdel)
@util.memoized_property
def _comparable_elements(self):
return [
getattr(self.parent.class_, prop.key)
for prop in self.props
]
@util.memoized_property
def props(self):
props = []
for attr in self.attrs:
if isinstance(attr, str):
prop = self.parent.get_property(attr, _configure_mappers=False)
elif isinstance(attr, schema.Column):
prop = self.parent._columntoproperty[attr]
elif isinstance(attr, attributes.InstrumentedAttribute):
prop = attr.property
else:
raise sa_exc.ArgumentError(
"Composite expects Column objects or mapped "
"attributes/attribute names as arguments, got: %r"
% (attr,))
props.append(prop)
return props
@property
def columns(self):
return [a for a in self.attrs if isinstance(a, schema.Column)]
def _setup_arguments_on_columns(self):
"""Propagate configuration arguments made on this composite
to the target columns, for those that apply.
"""
for prop in self.props:
prop.active_history = self.active_history
if self.deferred:
prop.deferred = self.deferred
prop.strategy_class = prop._strategy_lookup(
("deferred", True),
("instrument", True))
prop.group = self.group
def _setup_event_handlers(self):
"""Establish events that populate/expire the composite attribute."""
def load_handler(state, *args):
dict_ = state.dict
if self.key in dict_:
return
# if column elements aren't loaded, skip.
# __get__() will initiate a load for those
# columns
for k in self._attribute_keys:
if k not in dict_:
return
#assert self.key not in dict_
dict_[self.key] = self.composite_class(
*[state.dict[key] for key in
self._attribute_keys]
)
def expire_handler(state, keys):
if keys is None or set(self._attribute_keys).intersection(keys):
state.dict.pop(self.key, None)
def insert_update_handler(mapper, connection, state):
"""After an insert or update, some columns may be expired due
to server side defaults, or re-populated due to client side
defaults. Pop out the composite value here so that it
recreates.
"""
state.dict.pop(self.key, None)
event.listen(self.parent, 'after_insert',
insert_update_handler, raw=True)
event.listen(self.parent, 'after_update',
insert_update_handler, raw=True)
event.listen(self.parent, 'load',
load_handler, raw=True, propagate=True)
event.listen(self.parent, 'refresh',
load_handler, raw=True, propagate=True)
event.listen(self.parent, 'expire',
expire_handler, raw=True, propagate=True)
# TODO: need a deserialize hook here
@util.memoized_property
def _attribute_keys(self):
return [
prop.key for prop in self.props
]
def get_history(self, state, dict_, passive=attributes.PASSIVE_OFF):
"""Provided for userland code that uses attributes.get_history()."""
added = []
deleted = []
has_history = False
for prop in self.props:
key = prop.key
hist = state.manager[key].impl.get_history(state, dict_)
if hist.has_changes():
has_history = True
non_deleted = hist.non_deleted()
if non_deleted:
added.extend(non_deleted)
else:
added.append(None)
if hist.deleted:
deleted.extend(hist.deleted)
else:
deleted.append(None)
if has_history:
return attributes.History(
[self.composite_class(*added)],
(),
[self.composite_class(*deleted)]
)
else:
return attributes.History(
(), [self.composite_class(*added)], ()
)
def _comparator_factory(self, mapper):
return self.comparator_factory(self, mapper)
class CompositeBundle(query.Bundle):
def __init__(self, property, expr):
self.property = property
super(CompositeProperty.CompositeBundle, self).__init__(
property.key, *expr)
def create_row_processor(self, query, procs, labels):
def proc(row, result):
return self.property.composite_class(*[proc(row, result) for proc in procs])
return proc
class Comparator(PropComparator):
"""Produce boolean, comparison, and other operators for
:class:`.CompositeProperty` attributes.
See the example in :ref:`composite_operations` for an overview
of usage, as well as the documentation for :class:`.PropComparator`.
See also:
:class:`.PropComparator`
:class:`.ColumnOperators`
:ref:`types_operators`
:attr:`.TypeEngine.comparator_factory`
"""
__hash__ = None
@property
def clauses(self):
return self.__clause_element__()
def __clause_element__(self):
return expression.ClauseList(group=False, *self._comparable_elements)
def _query_clause_element(self):
return CompositeProperty.CompositeBundle(self.prop, self.__clause_element__())
@util.memoized_property
def _comparable_elements(self):
if self._adapt_to_entity:
return [
getattr(
self._adapt_to_entity.entity,
prop.key
) for prop in self.prop._comparable_elements
]
else:
return self.prop._comparable_elements
def __eq__(self, other):
if other is None:
values = [None] * len(self.prop._comparable_elements)
else:
values = other.__composite_values__()
comparisons = [
a == b
for a, b in zip(self.prop._comparable_elements, values)
]
if self._adapt_to_entity:
comparisons = [self.adapter(x) for x in comparisons]
return sql.and_(*comparisons)
def __ne__(self, other):
return sql.not_(self.__eq__(other))
def __str__(self):
return str(self.parent.class_.__name__) + "." + self.key
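The stock usage of :func:`.composite`, per the :ref:`mapper_composite` section referenced in the docstring above, looks roughly like this. A sketch using the documentation's Point/Vertex example, not code from this file; it assumes SQLAlchemy 1.4+ for the `sqlalchemy.orm.declarative_base` import:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, composite, declarative_base

Base = declarative_base()


class Point(object):
    """Composite value type: must expose __composite_values__()."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __composite_values__(self):
        return self.x, self.y

    def __eq__(self, other):
        return isinstance(other, Point) and \
            (self.x, self.y) == (other.x, other.y)


class Vertex(Base):
    __tablename__ = 'vertices'
    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    # Two Integer columns presented as a single Point-valued attribute.
    start = composite(Point, x1, y1)
```

Assigning `v.start = Point(3, 4)` writes through to `x1`/`y1` via the `fset` closure built in `_create_descriptor` above; reading `v.start` rebuilds the `Point` from the column values.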
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class ConcreteInheritedProperty(DescriptorProperty):
"""A 'do nothing' :class:`.MapperProperty` that disables
an attribute on a concrete subclass that is only present
on the inherited mapper, not the concrete classes' mapper.
Cases where this occurs include:
* When the superclass mapper is mapped against a
"polymorphic union", which includes all attributes from
all subclasses.
* When a relationship() is configured on an inherited mapper,
but not on the subclass mapper. Concrete mappers require
that relationship() is configured explicitly on each
subclass.
"""
def _comparator_factory(self, mapper):
comparator_callable = None
for m in self.parent.iterate_to_root():
p = m._props[self.key]
if not isinstance(p, ConcreteInheritedProperty):
comparator_callable = p.comparator_factory
break
return comparator_callable
def __init__(self):
def warn():
raise AttributeError("Concrete %s does not implement "
"attribute %r at the instance level. Add this "
"property explicitly to %s." %
(self.parent, self.key, self.parent))
class NoninheritedConcreteProp(object):
def __set__(s, obj, value):
warn()
def __delete__(s, obj):
warn()
def __get__(s, obj, owner):
if obj is None:
return self.descriptor
warn()
self.descriptor = NoninheritedConcreteProp()
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class SynonymProperty(DescriptorProperty):
def __init__(self, name, map_column=None,
descriptor=None, comparator_factory=None,
doc=None):
"""Denote an attribute name as a synonym to a mapped property,
in that the attribute will mirror the value and expression behavior
of another attribute.
:param name: the name of the existing mapped property. This
can refer to the string name of any :class:`.MapperProperty`
configured on the class, including column-bound attributes
and relationships.
:param descriptor: a Python :term:`descriptor` that will be used
as a getter (and potentially a setter) when this attribute is
accessed at the instance level.
:param map_column: if ``True``, the :func:`.synonym` construct will
locate the existing named :class:`.MapperProperty` based on the
attribute name of this :func:`.synonym`, and assign it to a new
attribute linked to the name of this :func:`.synonym`.
That is, given a mapping like::
class MyClass(Base):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
job_status = Column(String(50))
job_status = synonym("_job_status", map_column=True)
The above class ``MyClass`` will now have the ``job_status``
:class:`.Column` object mapped to the attribute named ``_job_status``,
and the attribute named ``job_status`` will refer to the synonym
itself. This feature is typically used in conjunction with the
``descriptor`` argument in order to link a user-defined descriptor
as a "wrapper" for an existing column.
:param comparator_factory: A subclass of :class:`.PropComparator`
that will provide custom comparison behavior at the SQL expression
level.
.. note::
For the use case of providing an attribute which redefines both
Python-level and SQL-expression level behavior of an attribute,
please refer to the Hybrid attribute introduced at
:ref:`mapper_hybrids` for a more effective technique.
.. seealso::
:ref:`synonyms` - examples of functionality.
:ref:`mapper_hybrids` - Hybrids provide a better approach for
more complicated attribute-wrapping schemes than synonyms.
"""
self.name = name
self.map_column = map_column
self.descriptor = descriptor
self.comparator_factory = comparator_factory
self.doc = doc or (descriptor and descriptor.__doc__) or None
util.set_creation_order(self)
# TODO: when initialized, check _proxied_property,
# emit a warning if its not a column-based property
@util.memoized_property
def _proxied_property(self):
return getattr(self.parent.class_, self.name).property
def _comparator_factory(self, mapper):
prop = self._proxied_property
if self.comparator_factory:
comp = self.comparator_factory(prop, mapper)
else:
comp = prop.comparator_factory(prop, mapper)
return comp
def set_parent(self, parent, init):
if self.map_column:
# implement the 'map_column' option.
if self.key not in parent.mapped_table.c:
raise sa_exc.ArgumentError(
"Can't compile synonym '%s': no column on table "
"'%s' named '%s'"
% (self.name, parent.mapped_table.description, self.key))
elif parent.mapped_table.c[self.key] in \
parent._columntoproperty and \
parent._columntoproperty[
parent.mapped_table.c[self.key]
].key == self.name:
raise sa_exc.ArgumentError(
"Can't call map_column=True for synonym %r=%r, "
"a ColumnProperty already exists keyed to the name "
"%r for column %r" %
(self.key, self.name, self.name, self.key)
)
p = properties.ColumnProperty(parent.mapped_table.c[self.key])
parent._configure_property(
self.name, p,
init=init,
setparent=True)
p._mapped_by_synonym = self.key
self.parent = parent
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class ComparableProperty(DescriptorProperty):
"""Instruments a Python property for use in query expressions."""
def __init__(self, comparator_factory, descriptor=None, doc=None):
"""Provides a method of applying a :class:`.PropComparator`
to any Python descriptor attribute.
.. versionchanged:: 0.7
:func:`.comparable_property` is superseded by
the :mod:`~sqlalchemy.ext.hybrid` extension. See the example
at :ref:`hybrid_custom_comparators`.
Allows any Python descriptor to behave like a SQL-enabled
attribute when used at the class level in queries, allowing
redefinition of expression operator behavior.
In the example below we redefine :meth:`.PropComparator.operate`
to wrap both sides of an expression in ``func.lower()`` to produce
case-insensitive comparison::
from sqlalchemy.orm import comparable_property
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.sql import func
from sqlalchemy import Integer, String, Column
from sqlalchemy.ext.declarative import declarative_base
class CaseInsensitiveComparator(PropComparator):
def __clause_element__(self):
return self.prop
def operate(self, op, other):
return op(
func.lower(self.__clause_element__()),
func.lower(other)
)
Base = declarative_base()
class SearchWord(Base):
__tablename__ = 'search_word'
id = Column(Integer, primary_key=True)
word = Column(String)
word_insensitive = comparable_property(lambda prop, mapper:
CaseInsensitiveComparator(mapper.c.word, mapper)
)
A mapping like the above allows the ``word_insensitive`` attribute
to render an expression like::
>>> print SearchWord.word_insensitive == "Trucks"
lower(search_word.word) = lower(:lower_1)
:param comparator_factory:
A PropComparator subclass or factory that defines operator behavior
for this property.
:param descriptor:
Optional when used in a ``properties={}`` declaration. The Python
descriptor or property to layer comparison behavior on top of.
The like-named descriptor will be automatically retrieved from the
mapped class if left blank in a ``properties`` declaration.
"""
self.descriptor = descriptor
self.comparator_factory = comparator_factory
self.doc = doc or (descriptor and descriptor.__doc__) or None
util.set_creation_order(self)
def _comparator_factory(self, mapper):
return self.comparator_factory(self, mapper)
| gpl-3.0 |
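The `SynonymProperty` above mirrors the value and expression behavior of another mapped attribute. Stripped of SQLAlchemy's mapper machinery, the instance-level half of that idea is just a forwarding descriptor. A minimal standalone sketch (the `synonym_for` class here is hypothetical illustration, not SQLAlchemy API):

```python
class synonym_for:
    """Descriptor that mirrors reads and writes to a like-named attribute,
    similar in spirit to SynonymProperty's instance-level behavior."""

    def __init__(self, name):
        self.name = name

    def __get__(self, obj, owner):
        if obj is None:
            return self  # class-level access returns the descriptor itself
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        setattr(obj, self.name, value)


class Job:
    # 'status' mirrors the underlying '_job_status' attribute.
    status = synonym_for('_job_status')

    def __init__(self, status):
        self._job_status = status
```

SQLAlchemy's real synonym additionally proxies class-level access to the mapped property's comparator; this sketch covers only the attribute-mirroring part.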
FenceAtMHacks/flaskbackend | fence-api/flask/lib/python2.7/site-packages/pip/_vendor/requests/models.py | 43 | 28027 | # -*- coding: utf-8 -*-
"""
requests.models
~~~~~~~~~~~~~~~
This module contains the primary objects that power Requests.
"""
import collections
import datetime
from io import BytesIO, UnsupportedOperation
from .hooks import default_hooks
from .structures import CaseInsensitiveDict
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header
from .packages.urllib3.fields import RequestField
from .packages.urllib3.filepost import encode_multipart_formdata
from .packages.urllib3.util import parse_url
from .packages.urllib3.exceptions import (
DecodeError, ReadTimeoutError, ProtocolError)
from .exceptions import (
HTTPError, RequestException, MissingSchema, InvalidURL,
ChunkedEncodingError, ContentDecodingError, ConnectionError,
StreamConsumedError)
from .utils import (
guess_filename, get_auth_from_url, requote_uri,
stream_decode_response_unicode, to_key_val_list, parse_header_links,
iter_slices, guess_json_utf, super_len, to_native_string)
from .compat import (
cookielib, urlunparse, urlsplit, urlencode, str, bytes, StringIO,
is_py2, chardet, json, builtin_str, basestring)
from .status_codes import codes
#: The set of HTTP status codes that indicate an automatically
#: processable redirect.
REDIRECT_STATI = (
codes.moved, # 301
codes.found, # 302
codes.other, # 303
codes.temporary_redirect, # 307
codes.permanent_redirect, # 308
)
DEFAULT_REDIRECT_LIMIT = 30
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
json_dumps = json.dumps
class RequestEncodingMixin(object):
@property
def path_url(self):
"""Build the path URL to use."""
url = []
p = urlsplit(self.url)
path = p.path
if not path:
path = '/'
url.append(path)
query = p.query
if query:
url.append('?')
url.append(query)
return ''.join(url)
@staticmethod
def _encode_params(data):
"""Encode parameters in a piece of data.
Will successfully encode parameters when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if isinstance(data, (str, bytes)):
return data
elif hasattr(data, 'read'):
return data
elif hasattr(data, '__iter__'):
result = []
for k, vs in to_key_val_list(data):
if isinstance(vs, basestring) or not hasattr(vs, '__iter__'):
vs = [vs]
for v in vs:
if v is not None:
result.append(
(k.encode('utf-8') if isinstance(k, str) else k,
v.encode('utf-8') if isinstance(v, str) else v))
return urlencode(result, doseq=True)
else:
return data
@staticmethod
def _encode_files(files, data):
"""Build the body for a multipart/form-data request.
Will successfully encode files when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if (not files):
raise ValueError("Files must be provided.")
elif isinstance(data, basestring):
raise ValueError("Data must not be a string.")
new_fields = []
fields = to_key_val_list(data or {})
files = to_key_val_list(files or {})
for field, val in fields:
if isinstance(val, basestring) or not hasattr(val, '__iter__'):
val = [val]
for v in val:
if v is not None:
# Don't call str() on bytestrings: in Py3 it all goes wrong.
if not isinstance(v, bytes):
v = str(v)
new_fields.append(
(field.decode('utf-8') if isinstance(field, bytes) else field,
v.encode('utf-8') if isinstance(v, str) else v))
for (k, v) in files:
# support for explicit filename
ft = None
fh = None
if isinstance(v, (tuple, list)):
if len(v) == 2:
fn, fp = v
elif len(v) == 3:
fn, fp, ft = v
else:
fn, fp, ft, fh = v
else:
fn = guess_filename(v) or k
fp = v
if isinstance(fp, str):
fp = StringIO(fp)
if isinstance(fp, bytes):
fp = BytesIO(fp)
rf = RequestField(name=k, data=fp.read(),
filename=fn, headers=fh)
rf.make_multipart(content_type=ft)
new_fields.append(rf)
body, content_type = encode_multipart_formdata(new_fields)
return body, content_type
class RequestHooksMixin(object):
def register_hook(self, event, hook):
"""Properly register a hook."""
if event not in self.hooks:
raise ValueError('Unsupported event specified, with event name "%s"' % (event))
if isinstance(hook, collections.Callable):
self.hooks[event].append(hook)
elif hasattr(hook, '__iter__'):
self.hooks[event].extend(h for h in hook if isinstance(h, collections.Callable))
def deregister_hook(self, event, hook):
"""Deregister a previously registered hook.
Returns True if the hook existed, False if not.
"""
try:
self.hooks[event].remove(hook)
return True
except ValueError:
return False
class Request(RequestHooksMixin):
"""A user-created :class:`Request <Request>` object.
Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server.
:param method: HTTP method to use.
:param url: URL to send.
:param headers: dictionary of headers to send.
:param files: dictionary of {filename: fileobject} files to multipart upload.
:param data: the body to attach to the request. If a dictionary is provided, form-encoding will take place.
:param json: json for the body to attach to the request (if data is not specified).
:param params: dictionary of URL parameters to append to the URL.
:param auth: Auth handler or (user, pass) tuple.
:param cookies: dictionary or CookieJar of cookies to attach to this request.
:param hooks: dictionary of callback hooks, for internal usage.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
"""
def __init__(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None,
json=None):
# Default empty dicts for dict params.
data = [] if data is None else data
files = [] if files is None else files
headers = {} if headers is None else headers
params = {} if params is None else params
hooks = {} if hooks is None else hooks
self.hooks = default_hooks()
for (k, v) in list(hooks.items()):
self.register_hook(event=k, hook=v)
self.method = method
self.url = url
self.headers = headers
self.files = files
self.data = data
self.json = json
self.params = params
self.auth = auth
self.cookies = cookies
def __repr__(self):
return '<Request [%s]>' % (self.method)
def prepare(self):
"""Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it."""
p = PreparedRequest()
p.prepare(
method=self.method,
url=self.url,
headers=self.headers,
files=self.files,
data=self.data,
json=self.json,
params=self.params,
auth=self.auth,
cookies=self.cookies,
hooks=self.hooks,
)
return p
class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
"""The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
containing the exact bytes that will be sent to the server.
Generated from either a :class:`Request <Request>` object or manually.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> r = req.prepare()
<PreparedRequest [GET]>
>>> s = requests.Session()
>>> s.send(r)
<Response [200]>
"""
def __init__(self):
#: HTTP verb to send to the server.
self.method = None
#: HTTP URL to send the request to.
self.url = None
#: dictionary of HTTP headers.
self.headers = None
# The `CookieJar` used to create the Cookie header will be stored here
# after prepare_cookies is called
self._cookies = None
#: request body to send to the server.
self.body = None
#: dictionary of callback hooks, for internal usage.
self.hooks = default_hooks()
def prepare(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None,
json=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files, json)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
self.prepare_hooks(hooks)
def __repr__(self):
return '<PreparedRequest [%s]>' % (self.method)
def copy(self):
p = PreparedRequest()
p.method = self.method
p.url = self.url
p.headers = self.headers.copy() if self.headers is not None else None
p._cookies = self._cookies.copy() if self._cookies is not None else None
p.body = self.body
p.hooks = self.hooks
return p
def prepare_method(self, method):
"""Prepares the given HTTP method."""
self.method = method
if self.method is not None:
self.method = self.method.upper()
def prepare_url(self, url, params):
"""Prepares the given HTTP URL."""
#: Accept objects that have string representations.
        #: We're unable to blindly call unicode/str functions
#: as this will include the bytestring indicator (b'')
#: on python 3.x.
#: https://github.com/kennethreitz/requests/pull/2238
if isinstance(url, bytes):
url = url.decode('utf8')
else:
url = unicode(url) if is_py2 else str(url)
# Don't do any URL preparation for non-HTTP schemes like `mailto`,
# `data` etc to work around exceptions from `url_parse`, which
# handles RFC 3986 only.
if ':' in url and not url.lower().startswith('http'):
self.url = url
return
# Support for unicode domain names and paths.
scheme, auth, host, port, path, query, fragment = parse_url(url)
if not scheme:
raise MissingSchema("Invalid URL {0!r}: No schema supplied. "
"Perhaps you meant http://{0}?".format(url))
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
# Only want to apply IDNA to the hostname
try:
host = host.encode('idna').decode('utf-8')
except UnicodeError:
raise InvalidURL('URL has an invalid label.')
# Carefully reconstruct the network location
netloc = auth or ''
if netloc:
netloc += '@'
netloc += host
if port:
netloc += ':' + str(port)
# Bare domains aren't valid URLs.
if not path:
path = '/'
if is_py2:
if isinstance(scheme, str):
scheme = scheme.encode('utf-8')
if isinstance(netloc, str):
netloc = netloc.encode('utf-8')
if isinstance(path, str):
path = path.encode('utf-8')
if isinstance(query, str):
query = query.encode('utf-8')
if isinstance(fragment, str):
fragment = fragment.encode('utf-8')
enc_params = self._encode_params(params)
if enc_params:
if query:
query = '%s&%s' % (query, enc_params)
else:
query = enc_params
url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
self.url = url
def prepare_headers(self, headers):
"""Prepares the given HTTP headers."""
if headers:
self.headers = CaseInsensitiveDict((to_native_string(name), value) for name, value in headers.items())
else:
self.headers = CaseInsensitiveDict()
def prepare_body(self, data, files, json=None):
"""Prepares the given HTTP body data."""
# Check if file, fo, generator, iterator.
# If not, run through normal process.
# Nottin' on you.
body = None
content_type = None
length = None
if json is not None:
content_type = 'application/json'
body = json_dumps(json)
is_stream = all([
hasattr(data, '__iter__'),
not isinstance(data, (basestring, list, tuple, dict))
])
try:
length = super_len(data)
except (TypeError, AttributeError, UnsupportedOperation):
length = None
if is_stream:
body = data
if files:
raise NotImplementedError('Streamed bodies and files are mutually exclusive.')
if length is not None:
self.headers['Content-Length'] = builtin_str(length)
else:
self.headers['Transfer-Encoding'] = 'chunked'
else:
# Multi-part file uploads.
if files:
(body, content_type) = self._encode_files(files, data)
else:
if data and json is None:
body = self._encode_params(data)
if isinstance(data, basestring) or hasattr(data, 'read'):
content_type = None
else:
content_type = 'application/x-www-form-urlencoded'
self.prepare_content_length(body)
# Add content-type if it wasn't explicitly provided.
if content_type and ('content-type' not in self.headers):
self.headers['Content-Type'] = content_type
self.body = body
def prepare_content_length(self, body):
if hasattr(body, 'seek') and hasattr(body, 'tell'):
body.seek(0, 2)
self.headers['Content-Length'] = builtin_str(body.tell())
body.seek(0, 0)
elif body is not None:
l = super_len(body)
if l:
self.headers['Content-Length'] = builtin_str(l)
elif (self.method not in ('GET', 'HEAD')) and (self.headers.get('Content-Length') is None):
self.headers['Content-Length'] = '0'
def prepare_auth(self, auth, url=''):
"""Prepares the given HTTP auth data."""
# If no Auth is explicitly provided, extract it from the URL first.
if auth is None:
url_auth = get_auth_from_url(self.url)
auth = url_auth if any(url_auth) else None
if auth:
if isinstance(auth, tuple) and len(auth) == 2:
# special-case basic HTTP auth
auth = HTTPBasicAuth(*auth)
# Allow auth to make its changes.
r = auth(self)
# Update self to reflect the auth changes.
self.__dict__.update(r.__dict__)
# Recompute Content-Length
self.prepare_content_length(self.body)
def prepare_cookies(self, cookies):
"""Prepares the given HTTP cookie data."""
if isinstance(cookies, cookielib.CookieJar):
self._cookies = cookies
else:
self._cookies = cookiejar_from_dict(cookies)
cookie_header = get_cookie_header(self._cookies, self)
if cookie_header is not None:
self.headers['Cookie'] = cookie_header
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
for event in hooks:
self.register_hook(event, hooks[event])
class Response(object):
"""The :class:`Response <Response>` object, which contains a
server's response to an HTTP request.
"""
__attrs__ = [
'_content',
'status_code',
'headers',
'url',
'history',
'encoding',
'reason',
'cookies',
'elapsed',
'request',
]
def __init__(self):
super(Response, self).__init__()
self._content = False
self._content_consumed = False
#: Integer Code of responded HTTP Status, e.g. 404 or 200.
self.status_code = None
#: Case-insensitive Dictionary of Response Headers.
#: For example, ``headers['content-encoding']`` will return the
#: value of a ``'Content-Encoding'`` response header.
self.headers = CaseInsensitiveDict()
#: File-like object representation of response (for advanced usage).
#: Use of ``raw`` requires that ``stream=True`` be set on the request.
# This requirement does not apply for use internally to Requests.
self.raw = None
#: Final URL location of Response.
self.url = None
#: Encoding to decode with when accessing r.text.
self.encoding = None
#: A list of :class:`Response <Response>` objects from
#: the history of the Request. Any redirect responses will end
#: up here. The list is sorted from the oldest to the most recent request.
self.history = []
#: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
self.reason = None
#: A CookieJar of Cookies the server sent back.
self.cookies = cookiejar_from_dict({})
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta)
self.elapsed = datetime.timedelta(0)
#: The :class:`PreparedRequest <PreparedRequest>` object to which this
#: is a response.
self.request = None
def __getstate__(self):
# Consume everything; accessing the content attribute makes
# sure the content has been fully read.
if not self._content_consumed:
self.content
return dict(
(attr, getattr(self, attr, None))
for attr in self.__attrs__
)
def __setstate__(self, state):
for name, value in state.items():
setattr(self, name, value)
# pickled objects do not have .raw
setattr(self, '_content_consumed', True)
setattr(self, 'raw', None)
def __repr__(self):
return '<Response [%s]>' % (self.status_code)
def __bool__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __nonzero__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __iter__(self):
"""Allows you to use a response as an iterator."""
return self.iter_content(128)
@property
def ok(self):
try:
self.raise_for_status()
except RequestException:
return False
return True
@property
def is_redirect(self):
"""True if this Response is a well-formed HTTP redirect that could have
been processed automatically (by :meth:`Session.resolve_redirects`).
"""
return ('location' in self.headers and self.status_code in REDIRECT_STATI)
@property
def is_permanent_redirect(self):
"""True if this Response one of the permanant versions of redirect"""
return ('location' in self.headers and self.status_code in (codes.moved_permanently, codes.permanent_redirect))
@property
def apparent_encoding(self):
"""The apparent encoding, provided by the chardet library"""
return chardet.detect(self.content)['encoding']
def iter_content(self, chunk_size=1, decode_unicode=False):
"""Iterates over the response data. When stream=True is set on the
request, this avoids reading the content at once into memory for
large responses. The chunk size is the number of bytes it should
read into memory. This is not necessarily the length of each item
returned as decoding can take place.
If decode_unicode is True, content will be decoded using the best
available encoding based on the response.
"""
def generate():
try:
# Special case for urllib3.
try:
for chunk in self.raw.stream(chunk_size, decode_content=True):
yield chunk
except ProtocolError as e:
raise ChunkedEncodingError(e)
except DecodeError as e:
raise ContentDecodingError(e)
except ReadTimeoutError as e:
raise ConnectionError(e)
except AttributeError:
# Standard file-like object.
while True:
chunk = self.raw.read(chunk_size)
if not chunk:
break
yield chunk
self._content_consumed = True
if self._content_consumed and isinstance(self._content, bool):
raise StreamConsumedError()
# simulate reading small chunks of the content
reused_chunks = iter_slices(self._content, chunk_size)
stream_chunks = generate()
chunks = reused_chunks if self._content_consumed else stream_chunks
if decode_unicode:
chunks = stream_decode_response_unicode(chunks, self)
return chunks
def iter_lines(self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=None, delimiter=None):
"""Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.
"""
pending = None
for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
if pending is not None:
chunk = pending + chunk
if delimiter:
lines = chunk.split(delimiter)
else:
lines = chunk.splitlines()
if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
pending = lines.pop()
else:
pending = None
for line in lines:
yield line
if pending is not None:
yield pending
@property
def content(self):
"""Content of the response, in bytes."""
if self._content is False:
# Read the contents.
try:
if self._content_consumed:
raise RuntimeError(
'The content for this response was already consumed')
if self.status_code == 0:
self._content = None
else:
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
except AttributeError:
self._content = None
self._content_consumed = True
# don't need to release the connection; that's been handled by urllib3
# since we exhausted the data.
return self._content
@property
def text(self):
"""Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using
``chardet``.
The encoding of the response content is determined based solely on HTTP
headers, following RFC 2616 to the letter. If you can take advantage of
non-HTTP knowledge to make a better guess at the encoding, you should
set ``r.encoding`` appropriately before accessing this property.
"""
# Try charset from content-type
content = None
encoding = self.encoding
if not self.content:
return str('')
# Fallback to auto-detected encoding.
if self.encoding is None:
encoding = self.apparent_encoding
# Decode unicode from given encoding.
try:
content = str(self.content, encoding, errors='replace')
except (LookupError, TypeError):
# A LookupError is raised if the encoding was not found which could
# indicate a misspelling or similar mistake.
#
# A TypeError can be raised if encoding is None
#
            # So we fall back to decoding without an explicit encoding.
content = str(self.content, errors='replace')
return content
def json(self, **kwargs):
"""Returns the json-encoded content of a response, if any.
:param \*\*kwargs: Optional arguments that ``json.loads`` takes.
"""
if not self.encoding and len(self.content) > 3:
# No encoding set. JSON RFC 4627 section 3 states we should expect
# UTF-8, -16 or -32. Detect which one to use; If the detection or
# decoding fails, fall back to `self.text` (using chardet to make
# a best guess).
encoding = guess_json_utf(self.content)
if encoding is not None:
try:
return json.loads(self.content.decode(encoding), **kwargs)
except UnicodeDecodeError:
# Wrong UTF codec detected; usually because it's not UTF-8
# but some other 8-bit codec. This is an RFC violation,
# and the server didn't bother to tell us what codec *was*
# used.
pass
return json.loads(self.text, **kwargs)
@property
def links(self):
"""Returns the parsed header links of the response, if any."""
header = self.headers.get('link')
# l = MultiDict()
l = {}
if header:
links = parse_header_links(header)
for link in links:
key = link.get('rel') or link.get('url')
l[key] = link
return l
def raise_for_status(self):
"""Raises stored :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if 400 <= self.status_code < 500:
http_error_msg = '%s Client Error: %s' % (self.status_code, self.reason)
elif 500 <= self.status_code < 600:
http_error_msg = '%s Server Error: %s' % (self.status_code, self.reason)
if http_error_msg:
raise HTTPError(http_error_msg, response=self)
def close(self):
"""Releases the connection back to the pool. Once this method has been
called the underlying ``raw`` object must not be accessed again.
*Note: Should not normally need to be called explicitly.*
"""
return self.raw.release_conn()
| mit |
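`Response.iter_lines` above re-splits arbitrarily sized chunks into lines, carrying a `pending` buffer across chunk boundaries so a line cut off mid-chunk is not yielded early. The buffering logic can be sketched as a standalone generator (the name `iter_lines_from_chunks` is hypothetical, not part of requests):

```python
def iter_lines_from_chunks(chunks, delimiter=None):
    """Re-split a stream of string chunks into lines. A trailing partial
    line at a chunk boundary is held in `pending` until the next chunk."""
    pending = None
    for chunk in chunks:
        if pending is not None:
            chunk = pending + chunk
        lines = chunk.split(delimiter) if delimiter else chunk.splitlines()
        # If the chunk does not end exactly on a line boundary, the last
        # split piece may be incomplete: keep it back for the next chunk.
        if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
            pending = lines.pop()
        else:
            pending = None
        for line in lines:
            yield line
    if pending is not None:
        yield pending
```

For example, the chunks `['ab\ncd', 'ef\ngh']` yield `'ab'`, `'cdef'`, `'gh'`: the partial `'cd'` is joined with the next chunk rather than emitted as its own line.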
lennonchan/OgreSource | Tools/Blender2.5Export/mesh_properties.py | 6 | 5736 | # ##### BEGIN MIT LICENSE BLOCK #####
# Copyright (C) 2011 by Lih-Hern Pang
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# ##### END MIT LICENSE BLOCK #####
import bpy, os, sys, configparser
from bpy.props import *
# ##############################################
# Mesh Properties on the mesh objects
class MeshProperties(bpy.types.PropertyGroup):
# Enable/Disable export of this mesh.
exportEnabled = BoolProperty(
name = "Export",
description = "Export this mesh.",
default = True
)
requireMaterials_override = BoolProperty(
name = "Require Materials Override",
description = "Override global setting.",
default = False
)
requireMaterials = BoolProperty(
name = "Require Materials",
description = "Generate Error message when part of this mesh is not assigned with a material.",
default = True
)
skeletonNameFollowMesh_override = BoolProperty(
name = "Skeleton Name Follow Mesh Override",
description = "Override global setting.",
default = False
)
skeletonNameFollowMesh = BoolProperty(
name = "Skeleton Name Follow Mesh",
description = "Use mesh name for exported skeleton name instead of the armature name.",
default = True
)
applyModifiers_override = BoolProperty(
name = "Apply Modifiers Override",
description = "Override global setting.",
default = False
)
applyModifiers = BoolProperty(
name = "Apply Modifiers",
description = "Apply mesh modifiers before export. (Slow and may break vertex order for morph targets!)",
default = False
)
# ##############################################
# XML Converter specific Properties
extremityPoints_override = BoolProperty(
name = "Extremity Points Override",
description = "Override global setting.",
default = False
)
extremityPoints = IntProperty(
name = "Extremity Points",
description = "Generate no more than num eXtremes for every submesh. (For submesh render sorting when using alpha materials on submesh)",
soft_min = 0,
soft_max = 65536
)
edgeLists_override = BoolProperty(
name = "Edge Lists Override",
description = "Override global setting.",
default = False
)
edgeLists = BoolProperty(
name = "Edge Lists",
description = "Generate edge lists. (Useful for outlining or doing stencil shadows)",
default = False
)
tangent_override = BoolProperty(
name = "Tangent Override",
description = "Override global setting.",
default = False
)
tangent = BoolProperty(
name = "Tangent",
description = "Generate tangent.",
default = False
)
tangentSemantic_override = BoolProperty(
name = "Tangent Semantic Override",
description = "Override global setting.",
default = False
)
tangentSemantic = EnumProperty(
name = "Tangent Semantic",
description = "Tangent Semantic to use.",
items=(("uvw", "uvw", "Use UV semantic."),
("tangent", "tangent", "Use tangent semantic."),
),
default= "tangent"
)
tangentSize_override = BoolProperty(
name = "Tangent Size Override",
description = "Override global setting.",
default = False
)
tangentSize = EnumProperty(
name = "Tangent Size",
description = "Size of tangent.",
items=(("4", "4 component (parity)", "Use 4 component tangent where 4th component is parity."),
("3", "3 component", "Use 3 component tangent."),
),
default= "3"
)
splitMirrored_override = BoolProperty(
name = "Split Mirrored Override",
description = "Override global setting.",
default = False
)
splitMirrored = BoolProperty(
name = "Split Mirrored",
description = "Split tangent vertices at UV mirror points.",
default = False
)
splitRotated_override = BoolProperty(
name = "Split Rotated Override",
description = "Override global setting.",
default = False
)
splitRotated = BoolProperty(
name = "Split Rotated",
description = "Split tangent vertices where basis is rotated > 90 degrees.",
default = False
)
reorganiseVertBuff_override = BoolProperty(
name = "Reorganise Vertex Buffers Override",
description = "Override global setting.",
default = False
)
reorganiseVertBuff = BoolProperty(
name = "Reorganise Vertex Buffers",
description = "Reorganise vertex buffer to make it GPU vertex cache friendly.",
default = True
)
optimiseAnimation_override = BoolProperty(
name = "Optimise Animation Override",
description = "Override global setting.",
default = False
)
optimiseAnimation = BoolProperty(
name = "Optimise Animation",
description = "Optimise out redundant tracks & keyframes.",
default = True
)
# registering and menu integration
def register():
bpy.utils.register_module(__name__)
# unregistering and removing menus
def unregister():
bpy.utils.unregister_module(__name__)
if __name__ == "__main__":
register()
| mit |
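The property group above pairs each setting with an `*_override` flag, so a mesh either inherits the exporter's global value or supplies its own. A minimal, bpy-free sketch of that resolution logic (function and dict names are hypothetical, not from the addon):

```python
def resolve_setting(global_settings, mesh_settings, name):
    """Return the per-mesh value when its override flag is set,
    otherwise fall back to the exporter-wide default."""
    if mesh_settings.get(name + "_override", False):
        return mesh_settings[name]
    return global_settings[name]

global_settings = {"requireMaterials": True, "applyModifiers": False}
mesh_settings = {"requireMaterials_override": True, "requireMaterials": False}

# Override flag set -> the per-mesh value wins.
assert resolve_setting(global_settings, mesh_settings, "requireMaterials") is False
# No override flag -> the global default is used.
assert resolve_setting(global_settings, mesh_settings, "applyModifiers") is False
```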
andmos/ansible | lib/ansible/modules/network/illumos/dladm_etherstub.py | 52 | 4151 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Adam Števko <adam.stevko@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: dladm_etherstub
short_description: Manage etherstubs on Solaris/illumos systems.
description:
- Create or delete etherstubs on Solaris/illumos systems.
version_added: "2.2"
author: Adam Števko (@xen0l)
options:
name:
description:
- Etherstub name.
required: true
temporary:
description:
- Specifies that the etherstub is temporary. Temporary etherstubs
do not persist across reboots.
required: false
default: false
type: bool
state:
description:
- Create or delete Solaris/illumos etherstub.
required: false
default: "present"
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
# Create 'stub0' etherstub
- dladm_etherstub:
name: stub0
state: present
# Remove 'stub0' etherstub
- dladm_etherstub:
name: stub0
state: absent
'''
RETURN = '''
name:
description: etherstub name
returned: always
type: str
sample: "switch0"
state:
description: state of the target
returned: always
type: str
sample: "present"
temporary:
description: etherstub's persistence
returned: always
type: bool
sample: "True"
'''
from ansible.module_utils.basic import AnsibleModule
class Etherstub(object):
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.temporary = module.params['temporary']
self.state = module.params['state']
def etherstub_exists(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('show-etherstub')
cmd.append(self.name)
(rc, _, _) = self.module.run_command(cmd)
if rc == 0:
return True
else:
return False
def create_etherstub(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('create-etherstub')
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def delete_etherstub(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('delete-etherstub')
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
temporary=dict(default=False, type='bool'),
state=dict(default='present', choices=['absent', 'present']),
),
supports_check_mode=True
)
etherstub = Etherstub(module)
rc = None
out = ''
err = ''
result = {}
result['name'] = etherstub.name
result['state'] = etherstub.state
result['temporary'] = etherstub.temporary
if etherstub.state == 'absent':
if etherstub.etherstub_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = etherstub.delete_etherstub()
if rc != 0:
module.fail_json(name=etherstub.name, msg=err, rc=rc)
elif etherstub.state == 'present':
if not etherstub.etherstub_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = etherstub.create_etherstub()
if rc is not None and rc != 0:
module.fail_json(name=etherstub.name, msg=err, rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
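The module follows Ansible's usual idempotency pattern: it queries the current state with `show-etherstub`, runs `create-etherstub` or `delete-etherstub` only when the desired state differs, and reports `changed` accordingly. A command-free sketch of that decision logic (names assumed for illustration, not the module's API):

```python
def plan_action(exists, desired_state):
    """Return the dladm subcommand to run, or None when already converged."""
    if desired_state == "present" and not exists:
        return "create-etherstub"
    if desired_state == "absent" and exists:
        return "delete-etherstub"
    return None  # nothing to do; the module would report changed=False

assert plan_action(False, "present") == "create-etherstub"
assert plan_action(True, "absent") == "delete-etherstub"
assert plan_action(True, "present") is None
```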
canaltinova/servo | tests/wpt/web-platform-tests/conformance-checkers/tools/ins-del-datetime.py | 107 | 8420 | # -*- coding: utf-8 -*-
import os
ccdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
template = """<!DOCTYPE html>
<meta charset=utf-8>
"""
errors = {
"date-year-0000": "0000-12-09",
"date-month-00": "2002-00-15",
"date-month-13": "2002-13-15",
"date-0005-02-29": "0005-02-29",
"date-1969-02-29": "1969-02-29",
"date-1900-02-29": "1900-02-29",
"date-2100-02-29": "2100-02-29",
"date-2200-02-29": "2200-02-29",
"date-2014-02-29": "2014-02-29",
"date-day-04-31": "2002-04-31",
"date-day-06-31": "2002-06-31",
"date-day-09-31": "2002-09-31",
"date-day-11-31": "2002-11-31",
"date-day-01-32": "2002-01-32",
"date-day-03-32": "2002-03-32",
"date-day-05-32": "2002-05-32",
"date-day-07-32": "2002-07-32",
"date-day-08-32": "2002-08-32",
"date-day-10-32": "2002-10-32",
"date-day-12-32": "2002-12-32",
"date-iso8601-YYYYMMDD-no-hyphen": "20020929",
"date-leading-whitespace": " 2002-09-29",
"date-trailing-whitespace": "2002-09-29 ",
"date-month-one-digit": "2002-9-29",
"date-month-three-digits": "2002-011-29",
"date-year-three-digits": "782-09-29",
"date-day-one-digit": "2002-09-9",
"date-day-three-digits": "2002-11-009",
"date-day-missing-separator": "2014-0220",
"date-month-missing-separator": "201402-20",
"date-non-ascii-digit": "2002-09-29",
"date-trailing-U+0000": "2002-09-29�",
"date-trailing-pile-of-poo": "2002-09-29💩",
"date-wrong-day-separator": "2014-02:20",
"date-wrong-month-separator": "2014:02-20",
"date-year-negative": "-2002-09-29",
"date-leading-bom": "2002-09-29",
"global-date-and-time-60-minutes": "2011-11-12T00:60:00+08:00",
"global-date-and-time-60-seconds": "2011-11-12T00:00:60+08:00",
"global-date-and-time-2400": "2011-11-12T24:00:00+08:00",
"global-date-and-time-space-before-timezone": "2011-11-12T06:54:39 08:00",
"global-date-and-time-hour-one-digit": "2011-11-12T6:54:39-08:00",
"global-date-and-time-hour-three-digits": "2011-11-12T016:54:39-08:00",
"global-date-and-time-minutes-one-digit": "2011-11-12T16:4:39-08:00",
"global-date-and-time-minutes-three-digits": "2011-11-12T16:354:39-08:00",
"global-date-and-time-seconds-one-digit": "2011-11-12T16:54:9-08:00",
"global-date-and-time-seconds-three-digits": "2011-11-12T16:54:039-08:00",
"global-date-and-time-timezone-with-seconds": "2011-11-12T06:54:39-08:00:00",
"global-date-and-time-timezone-60-minutes": "2011-11-12T06:54:39-08:60",
"global-date-and-time-timezone-one-digit-hour": "2011-11-12T06:54:39-5:00",
"global-date-and-time-timezone-one-digit-minute": "2011-11-12T06:54:39-05:0",
"global-date-and-time-timezone-three-digit-hour": "2011-11-12T06:54:39-005:00",
"global-date-and-time-timezone-three-digit-minute": "2011-11-12T06:54:39-05:000",
    "global-date-and-time-nbsp": "2011-11-12 14:54Z",
"global-date-and-time-missing-minutes-separator": "2011-11-12T1454Z",
"global-date-and-time-missing-seconds-separator": "2011-11-12T14:5439Z",
"global-date-and-time-wrong-minutes-separator": "2011-11-12T14-54Z",
"global-date-and-time-wrong-seconds-separator": "2011-11-12T14:54-39Z",
"global-date-and-time-lowercase-z": "2011-11-12T14:54z",
"global-date-and-time-with-both-T-and-space": "2011-11-12T 14:54Z",
"global-date-and-time-zero-digit-fraction": "2011-11-12T06:54:39.-08:00",
"global-date-and-time-four-digit-fraction": "2011-11-12T06:54:39.9291-08:00",
"global-date-and-time-bad-fraction-separator": "2011-11-12T14:54:39,929+0000",
"global-date-and-time-timezone-non-T-character": "2011-11-12+14:54Z",
"global-date-and-time-timezone-lowercase-t": "2011-11-12t14:54Z",
    "global-date-and-time-timezone-multiple-spaces": "2011-11-12  14:54Z",
"global-date-and-time-timezone-offset-space-start": "2011-11-12T06:54:39.929 08:00",
"global-date-and-time-timezone-offset-colon-start": "2011-11-12T06:54:39.929:08:00",
    "global-date-and-time-timezone-plus-2400": "2011-11-12T06:54:39+24:00",
"global-date-and-time-timezone-minus-2400": "2011-11-12T06:54:39-24:00",
"global-date-and-time-timezone-iso8601-two-digit": "2011-11-12T06:54:39-08",
"global-date-and-time-iso8601-hhmmss-no-colon": "2011-11-12T145439Z",
"global-date-and-time-iso8601-hhmm-no-colon": "2011-11-12T1454Z",
"global-date-and-time-iso8601-hh": "2011-11-12T14Z",
"year": "2006",
"yearless-date": "07-15",
"month": "2011-11",
"week": "2011-W46",
"time": "14:54:39",
"local-date-and-time": "2011-11-12T14:54",
"duration-P-form": "PT4H18M3S",
"duration-time-component": "4h 18m 3s",
}
warnings = {
"global-date-and-time-timezone-plus-1500": "2011-11-12T00:00:00+1500",
"global-date-and-time-timezone-minus-1300": "2011-11-12T00:00:00-1300",
"global-date-and-time-timezone-minutes-15": "2011-11-12T00:00:00+08:15",
"date-0214-09-29": "0214-09-29",
"date-20014-09-29": "20014-09-29",
"date-0004-02-29": "0004-02-29",
"date-year-five-digits": "12014-09-29",
}
non_errors = {
"date": "2002-09-29",
"date-2000-02-29": "2000-02-29",
"date-2400-02-29": "2400-02-29",
"date-1968-02-29": "1968-02-29",
"date-1900-02-28": "1900-02-28",
"date-2100-02-28": "2100-02-28",
"date-2200-02-28": "2200-02-28",
"date-2014-02-28": "2014-02-28",
"date-day-01-31": "2002-01-31",
"date-day-03-31": "2002-03-31",
"date-day-05-31": "2002-05-31",
"date-day-07-31": "2002-07-31",
"date-day-08-31": "2002-08-31",
"date-day-10-31": "2002-10-31",
"date-day-12-31": "2002-12-31",
"date-day-04-30": "2002-04-30",
"date-day-06-30": "2002-06-30",
"date-day-09-30": "2002-09-30",
"date-day-11-30": "2002-11-30",
"global-date-and-time-no-seconds": "2011-11-12T14:54Z",
"global-date-and-time-with-seconds": "2011-11-12T14:54:39+0000",
"global-date-and-time-with-one-digit-fraction": "2011-11-12T06:54:39.9-08:00",
"global-date-and-time-with-two-digit-fraction": "2011-11-12T06:54:39.92+07:00",
"global-date-and-time-with-three-digit-fraction": "2011-11-12T06:54:39.929-06:00",
"global-date-and-time-space": "2011-11-12 14:54Z",
"global-date-and-time-timezone": "2011-11-12T06:54:39+0900",
"global-date-and-time-timezone-30": "2011-11-12T06:54:39-0830",
"global-date-and-time-timezone-45": "2011-11-12T06:54:39-0845",
"global-date-and-time-timezone-with-colon": "2011-11-12T06:54:39-08:00",
"global-date-and-time-timezone-without-colon": "2011-11-12T06:54:39-0800",
}
for key in errors.keys():
error = errors[key]
template_ins = template
template_del = template
template_ins += '<title>%s</title>\n' % key
template_del += '<title>%s</title>\n' % key
template_ins += '<ins datetime="%s"></ins>' % errors[key]
template_del += '<del datetime="%s"></del>' % errors[key]
ins_file = open(os.path.join(ccdir, "html/elements/ins/%s-novalid.html" % key), 'wb')
ins_file.write(template_ins)
ins_file.close()
del_file = open(os.path.join(ccdir, "html/elements/del/%s-novalid.html" % key), 'wb')
del_file.write(template_del)
del_file.close()
for key in warnings.keys():
non_error = warnings[key]
template_ins = template
template_del = template
template_ins += '<title>%s</title>\n' % key
template_del += '<title>%s</title>\n' % key
template_ins += '<ins datetime="%s"></ins>' % warnings[key]
template_del += '<del datetime="%s"></del>' % warnings[key]
ins_file = open(os.path.join(ccdir, "html/elements/ins/%s-haswarn.html" % key), 'wb')
ins_file.write(template_ins)
ins_file.close()
del_file = open(os.path.join(ccdir, "html/elements/del/%s-haswarn.html" % key), 'wb')
del_file.write(template_del)
del_file.close()
ins_file = open(os.path.join(ccdir, "html/elements/ins/datetime-isvalid.html"), 'wb')
del_file = open(os.path.join(ccdir, "html/elements/del/datetime-isvalid.html"), 'wb')
ins_file.write(template + '<title>valid datetime</title>\n')
del_file.write(template + '<title>valid datetime</title>\n')
for key in non_errors.keys():
non_error = non_errors[key]
ins_file.write('<ins datetime="%s"></ins> <!-- %s -->\n' % (non_errors[key], key))
del_file.write('<del datetime="%s"></del> <!-- %s -->\n' % (non_errors[key], key))
ins_file.close()
del_file.close()
# vim: ts=4:sw=4
| mpl-2.0 |
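Many of the `errors` entries above (e.g. `date-1900-02-29`, `date-2100-02-29`) encode the Gregorian leap-year rule that the HTML datetime microsyntax inherits: century years are leap years only when divisible by 400. Python's own `datetime` applies the same rule, which makes it easy to sanity-check those fixtures — a sketch independent of the generator itself:

```python
import datetime

def is_valid_date(s):
    """True when s is a valid YYYY-MM-DD proleptic-Gregorian date."""
    try:
        datetime.datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Century years are leap years only when divisible by 400.
assert is_valid_date("2000-02-29")      # listed under non_errors
assert not is_valid_date("1900-02-29")  # listed under errors
assert not is_valid_date("2002-04-31")  # April has only 30 days
```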
moorescloud/holideck | cherrypy/test/test_routes.py | 42 | 2411 | import os
curdir = os.path.join(os.getcwd(), os.path.dirname(__file__))
import cherrypy
from cherrypy.test import helper
import nose
class RoutesDispatchTest(helper.CPWebCase):
def setup_server():
try:
import routes
except ImportError:
raise nose.SkipTest('Install routes to test RoutesDispatcher code')
class Dummy:
def index(self):
return "I said good day!"
class City:
def __init__(self, name):
self.name = name
self.population = 10000
def index(self, **kwargs):
return "Welcome to %s, pop. %s" % (self.name, self.population)
index._cp_config = {'tools.response_headers.on': True,
'tools.response_headers.headers': [('Content-Language', 'en-GB')]}
def update(self, **kwargs):
self.population = kwargs['pop']
return "OK"
d = cherrypy.dispatch.RoutesDispatcher()
d.connect(action='index', name='hounslow', route='/hounslow',
controller=City('Hounslow'))
d.connect(name='surbiton', route='/surbiton', controller=City('Surbiton'),
action='index', conditions=dict(method=['GET']))
d.mapper.connect('/surbiton', controller='surbiton',
action='update', conditions=dict(method=['POST']))
d.connect('main', ':action', controller=Dummy())
conf = {'/': {'request.dispatch': d}}
cherrypy.tree.mount(root=None, config=conf)
setup_server = staticmethod(setup_server)
def test_Routes_Dispatch(self):
self.getPage("/hounslow")
self.assertStatus("200 OK")
self.assertBody("Welcome to Hounslow, pop. 10000")
self.getPage("/foo")
self.assertStatus("404 Not Found")
self.getPage("/surbiton")
self.assertStatus("200 OK")
self.assertBody("Welcome to Surbiton, pop. 10000")
self.getPage("/surbiton", method="POST", body="pop=1327")
self.assertStatus("200 OK")
self.assertBody("OK")
self.getPage("/surbiton")
self.assertStatus("200 OK")
self.assertHeader("Content-Language", "en-GB")
self.assertBody("Welcome to Surbiton, pop. 1327")
| mit |
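The test exercises one detail worth noting: two routes share the URL `/surbiton` and are disambiguated purely by HTTP method (`GET` shows the city, `POST` updates its population). A framework-free sketch of method-conditioned dispatch (a toy lookup table, not the Routes API):

```python
def dispatch(route_table, path, method):
    """Pick a handler keyed by (path, method); fall back to a 404 response."""
    handler = route_table.get((path, method))
    if handler is None:
        return "404 Not Found"
    return handler()

population = {"Surbiton": 10000}

route_table = {
    ("/surbiton", "GET"): lambda: "Welcome to Surbiton, pop. %s" % population["Surbiton"],
    ("/surbiton", "POST"): lambda: population.update(Surbiton=1327) or "OK",
}

assert dispatch(route_table, "/surbiton", "GET") == "Welcome to Surbiton, pop. 10000"
assert dispatch(route_table, "/surbiton", "POST") == "OK"
assert dispatch(route_table, "/surbiton", "GET") == "Welcome to Surbiton, pop. 1327"
assert dispatch(route_table, "/foo", "GET") == "404 Not Found"
```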
jart/tensorflow | tensorflow/python/util/example_parser_configuration_test.py | 157 | 2775 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for ExampleParserConfiguration."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from google.protobuf import text_format
from tensorflow.core.example import example_parser_configuration_pb2
from tensorflow.python.client import session
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import parsing_ops
from tensorflow.python.platform import test
from tensorflow.python.util.example_parser_configuration import extract_example_parser_configuration
BASIC_PROTO = """
feature_map {
key: "x"
value {
fixed_len_feature {
dtype: DT_FLOAT
shape {
dim {
size: 1
}
}
default_value {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 1
}
}
float_val: 33.0
}
values_output_tensor_name: "ParseExample/ParseExample:3"
}
}
}
feature_map {
key: "y"
value {
var_len_feature {
dtype: DT_STRING
values_output_tensor_name: "ParseExample/ParseExample:1"
indices_output_tensor_name: "ParseExample/ParseExample:0"
shapes_output_tensor_name: "ParseExample/ParseExample:2"
}
}
}
"""
class ExampleParserConfigurationTest(test.TestCase):
def testBasic(self):
golden_config = example_parser_configuration_pb2.ExampleParserConfiguration(
)
text_format.Parse(BASIC_PROTO, golden_config)
with session.Session() as sess:
examples = array_ops.placeholder(dtypes.string, shape=[1])
feature_to_type = {
'x': parsing_ops.FixedLenFeature([1], dtypes.float32, 33.0),
'y': parsing_ops.VarLenFeature(dtypes.string)
}
_ = parsing_ops.parse_example(examples, feature_to_type)
parse_example_op = sess.graph.get_operation_by_name(
'ParseExample/ParseExample')
config = extract_example_parser_configuration(parse_example_op, sess)
self.assertProtoEquals(golden_config, config)
if __name__ == '__main__':
test.main()
| apache-2.0 |
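The golden proto above reflects the `ParseExample` op's output ordering: each sparse feature gets an indices/values/shapes tensor triple first (`:0`–`:2` for `y`), and dense features get a values tensor afterwards (`:3` for `x`). A small sketch of that slot assignment (my reading of the proto's numbering, not TensorFlow code):

```python
def assign_output_slots(sparse_keys, dense_keys):
    """Map feature keys to ParseExample-style output slot indices:
    all sparse indices tensors, then sparse values, then sparse shapes,
    then one values tensor per dense feature."""
    n = len(sparse_keys)
    slots = {}
    for i, key in enumerate(sorted(sparse_keys)):
        slots[key] = {"indices": i, "values": n + i, "shapes": 2 * n + i}
    for j, key in enumerate(sorted(dense_keys)):
        slots[key] = {"values": 3 * n + j}
    return slots

# Matches the BASIC_PROTO above: y is sparse (:0, :1, :2), x is dense (:3).
slots = assign_output_slots(sparse_keys=["y"], dense_keys=["x"])
assert slots["y"] == {"indices": 0, "values": 1, "shapes": 2}
assert slots["x"] == {"values": 3}
```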
Universal-Model-Converter/UMC3.0a | data/Python/x86/Lib/HTMLParser.py | 85 | 16982 | """A parser for HTML and XHTML."""
# This file is based on sgmllib.py, but the API is slightly different.
# XXX There should be a way to distinguish between PCDATA (parsed
# character data -- the normal case), RCDATA (replaceable character
# data -- only char and entity references and end tags are special)
# and CDATA (character data -- only end tags are special).
import markupbase
import re
# Regular expressions used for parsing
interesting_normal = re.compile('[&<]')
incomplete = re.compile('&[a-zA-Z#]')
entityref = re.compile('&([a-zA-Z][-.a-zA-Z0-9]*)[^a-zA-Z0-9]')
charref = re.compile('&#(?:[0-9]+|[xX][0-9a-fA-F]+)[^0-9a-fA-F]')
starttagopen = re.compile('<[a-zA-Z]')
piclose = re.compile('>')
commentclose = re.compile(r'--\s*>')
tagfind = re.compile('([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*')
# see http://www.w3.org/TR/html5/tokenization.html#tag-open-state
# and http://www.w3.org/TR/html5/tokenization.html#tag-name-state
tagfind_tolerant = re.compile('[a-zA-Z][^\t\n\r\f />\x00]*')
attrfind = re.compile(
r'((?<=[\'"\s/])[^\s/>][^\s/=>]*)(\s*=+\s*'
r'(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?(?:\s|/(?!>))*')
locatestarttagend = re.compile(r"""
<[a-zA-Z][-.a-zA-Z0-9:_]* # tag name
(?:[\s/]* # optional whitespace before attribute name
(?:(?<=['"\s/])[^\s/>][^\s/=>]* # attribute name
(?:\s*=+\s* # value indicator
(?:'[^']*' # LITA-enclosed value
|"[^"]*" # LIT-enclosed value
|(?!['"])[^>\s]* # bare value
)
)?(?:\s|/(?!>))*
)*
)?
\s* # trailing whitespace
""", re.VERBOSE)
endendtag = re.compile('>')
# the HTML 5 spec, section 8.1.2.2, doesn't allow spaces between
# </ and the tag name, so maybe this should be fixed
endtagfind = re.compile('</\s*([a-zA-Z][-.a-zA-Z0-9:_]*)\s*>')
class HTMLParseError(Exception):
"""Exception raised for all parse errors."""
def __init__(self, msg, position=(None, None)):
assert msg
self.msg = msg
self.lineno = position[0]
self.offset = position[1]
def __str__(self):
result = self.msg
if self.lineno is not None:
result = result + ", at line %d" % self.lineno
if self.offset is not None:
result = result + ", column %d" % (self.offset + 1)
return result
class HTMLParser(markupbase.ParserBase):
"""Find tags and other markup and call handler functions.
Usage:
p = HTMLParser()
p.feed(data)
...
p.close()
Start tags are handled by calling self.handle_starttag() or
self.handle_startendtag(); end tags by self.handle_endtag(). The
data between tags is passed from the parser to the derived class
by calling self.handle_data() with the data as argument (the data
may be split up in arbitrary chunks). Entity references are
passed by calling self.handle_entityref() with the entity
reference as the argument. Numeric character references are
passed to self.handle_charref() with the string containing the
reference as the argument.
"""
CDATA_CONTENT_ELEMENTS = ("script", "style")
def __init__(self):
"""Initialize and reset this instance."""
self.reset()
def reset(self):
"""Reset this instance. Loses all unprocessed data."""
self.rawdata = ''
self.lasttag = '???'
self.interesting = interesting_normal
self.cdata_elem = None
markupbase.ParserBase.reset(self)
def feed(self, data):
r"""Feed data to the parser.
Call this as often as you want, with as little or as much text
as you want (may include '\n').
"""
self.rawdata = self.rawdata + data
self.goahead(0)
def close(self):
"""Handle any buffered data."""
self.goahead(1)
def error(self, message):
raise HTMLParseError(message, self.getpos())
__starttag_text = None
def get_starttag_text(self):
"""Return full source of start tag: '<...>'."""
return self.__starttag_text
def set_cdata_mode(self, elem):
self.cdata_elem = elem.lower()
self.interesting = re.compile(r'</\s*%s\s*>' % self.cdata_elem, re.I)
def clear_cdata_mode(self):
self.interesting = interesting_normal
self.cdata_elem = None
# Internal -- handle data as far as reasonable. May leave state
# and data to be processed by a subsequent call. If 'end' is
# true, force handling all data as if followed by EOF marker.
def goahead(self, end):
rawdata = self.rawdata
i = 0
n = len(rawdata)
while i < n:
match = self.interesting.search(rawdata, i) # < or &
if match:
j = match.start()
else:
if self.cdata_elem:
break
j = n
if i < j: self.handle_data(rawdata[i:j])
i = self.updatepos(i, j)
if i == n: break
startswith = rawdata.startswith
if startswith('<', i):
if starttagopen.match(rawdata, i): # < + letter
k = self.parse_starttag(i)
elif startswith("</", i):
k = self.parse_endtag(i)
elif startswith("<!--", i):
k = self.parse_comment(i)
elif startswith("<?", i):
k = self.parse_pi(i)
elif startswith("<!", i):
k = self.parse_html_declaration(i)
elif (i + 1) < n:
self.handle_data("<")
k = i + 1
else:
break
if k < 0:
if not end:
break
k = rawdata.find('>', i + 1)
if k < 0:
k = rawdata.find('<', i + 1)
if k < 0:
k = i + 1
else:
k += 1
self.handle_data(rawdata[i:k])
i = self.updatepos(i, k)
elif startswith("&#", i):
match = charref.match(rawdata, i)
if match:
name = match.group()[2:-1]
self.handle_charref(name)
k = match.end()
if not startswith(';', k-1):
k = k - 1
i = self.updatepos(i, k)
continue
else:
if ";" in rawdata[i:]: #bail by consuming &#
self.handle_data(rawdata[0:2])
i = self.updatepos(i, 2)
break
elif startswith('&', i):
match = entityref.match(rawdata, i)
if match:
name = match.group(1)
self.handle_entityref(name)
k = match.end()
if not startswith(';', k-1):
k = k - 1
i = self.updatepos(i, k)
continue
match = incomplete.match(rawdata, i)
if match:
# match.group() will contain at least 2 chars
if end and match.group() == rawdata[i:]:
self.error("EOF in middle of entity or char ref")
# incomplete
break
elif (i + 1) < n:
# not the end of the buffer, and can't be confused
# with some other construct
self.handle_data("&")
i = self.updatepos(i, i + 1)
else:
break
else:
assert 0, "interesting.search() lied"
# end while
if end and i < n and not self.cdata_elem:
self.handle_data(rawdata[i:n])
i = self.updatepos(i, n)
self.rawdata = rawdata[i:]
# Internal -- parse html declarations, return length or -1 if not terminated
# See w3.org/TR/html5/tokenization.html#markup-declaration-open-state
# See also parse_declaration in _markupbase
def parse_html_declaration(self, i):
rawdata = self.rawdata
if rawdata[i:i+2] != '<!':
self.error('unexpected call to parse_html_declaration()')
if rawdata[i:i+4] == '<!--':
# this case is actually already handled in goahead()
return self.parse_comment(i)
elif rawdata[i:i+3] == '<![':
return self.parse_marked_section(i)
elif rawdata[i:i+9].lower() == '<!doctype':
# find the closing >
gtpos = rawdata.find('>', i+9)
if gtpos == -1:
return -1
self.handle_decl(rawdata[i+2:gtpos])
return gtpos+1
else:
return self.parse_bogus_comment(i)
# Internal -- parse bogus comment, return length or -1 if not terminated
# see http://www.w3.org/TR/html5/tokenization.html#bogus-comment-state
def parse_bogus_comment(self, i, report=1):
rawdata = self.rawdata
if rawdata[i:i+2] not in ('<!', '</'):
self.error('unexpected call to parse_comment()')
pos = rawdata.find('>', i+2)
if pos == -1:
return -1
if report:
self.handle_comment(rawdata[i+2:pos])
return pos + 1
# Internal -- parse processing instr, return end or -1 if not terminated
def parse_pi(self, i):
rawdata = self.rawdata
assert rawdata[i:i+2] == '<?', 'unexpected call to parse_pi()'
match = piclose.search(rawdata, i+2) # >
if not match:
return -1
j = match.start()
self.handle_pi(rawdata[i+2: j])
j = match.end()
return j
# Internal -- handle starttag, return end or -1 if not terminated
def parse_starttag(self, i):
self.__starttag_text = None
endpos = self.check_for_whole_start_tag(i)
if endpos < 0:
return endpos
rawdata = self.rawdata
self.__starttag_text = rawdata[i:endpos]
# Now parse the data between i+1 and j into a tag and attrs
attrs = []
match = tagfind.match(rawdata, i+1)
assert match, 'unexpected call to parse_starttag()'
k = match.end()
self.lasttag = tag = match.group(1).lower()
while k < endpos:
m = attrfind.match(rawdata, k)
if not m:
break
attrname, rest, attrvalue = m.group(1, 2, 3)
if not rest:
attrvalue = None
elif attrvalue[:1] == '\'' == attrvalue[-1:] or \
attrvalue[:1] == '"' == attrvalue[-1:]:
attrvalue = attrvalue[1:-1]
if attrvalue:
attrvalue = self.unescape(attrvalue)
attrs.append((attrname.lower(), attrvalue))
k = m.end()
end = rawdata[k:endpos].strip()
if end not in (">", "/>"):
lineno, offset = self.getpos()
if "\n" in self.__starttag_text:
lineno = lineno + self.__starttag_text.count("\n")
offset = len(self.__starttag_text) \
- self.__starttag_text.rfind("\n")
else:
offset = offset + len(self.__starttag_text)
self.handle_data(rawdata[i:endpos])
return endpos
if end.endswith('/>'):
# XHTML-style empty tag: <span attr="value" />
self.handle_startendtag(tag, attrs)
else:
self.handle_starttag(tag, attrs)
if tag in self.CDATA_CONTENT_ELEMENTS:
self.set_cdata_mode(tag)
return endpos
# Internal -- check to see if we have a complete starttag; return end
# or -1 if incomplete.
def check_for_whole_start_tag(self, i):
rawdata = self.rawdata
m = locatestarttagend.match(rawdata, i)
if m:
j = m.end()
next = rawdata[j:j+1]
if next == ">":
return j + 1
if next == "/":
if rawdata.startswith("/>", j):
return j + 2
if rawdata.startswith("/", j):
# buffer boundary
return -1
# else bogus input
self.updatepos(i, j + 1)
self.error("malformed empty start tag")
if next == "":
# end of input
return -1
if next in ("abcdefghijklmnopqrstuvwxyz=/"
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
# end of input in or before attribute value, or we have the
# '/' from a '/>' ending
return -1
if j > i:
return j
else:
return i + 1
raise AssertionError("we should not get here!")
# Internal -- parse endtag, return end or -1 if incomplete
def parse_endtag(self, i):
rawdata = self.rawdata
assert rawdata[i:i+2] == "</", "unexpected call to parse_endtag"
match = endendtag.search(rawdata, i+1) # >
if not match:
return -1
gtpos = match.end()
match = endtagfind.match(rawdata, i) # </ + tag + >
if not match:
if self.cdata_elem is not None:
self.handle_data(rawdata[i:gtpos])
return gtpos
# find the name: w3.org/TR/html5/tokenization.html#tag-name-state
namematch = tagfind_tolerant.match(rawdata, i+2)
if not namematch:
# w3.org/TR/html5/tokenization.html#end-tag-open-state
if rawdata[i:i+3] == '</>':
return i+3
else:
return self.parse_bogus_comment(i)
tagname = namematch.group().lower()
# consume and ignore other stuff between the name and the >
# Note: this is not 100% correct, since we might have things like
        # </tag attr=">">, but looking for > after the name should cover
# most of the cases and is much simpler
gtpos = rawdata.find('>', namematch.end())
self.handle_endtag(tagname)
return gtpos+1
elem = match.group(1).lower() # script or style
if self.cdata_elem is not None:
if elem != self.cdata_elem:
self.handle_data(rawdata[i:gtpos])
return gtpos
self.handle_endtag(elem)
self.clear_cdata_mode()
return gtpos
# Overridable -- finish processing of start+end tag: <tag.../>
def handle_startendtag(self, tag, attrs):
self.handle_starttag(tag, attrs)
self.handle_endtag(tag)
# Overridable -- handle start tag
def handle_starttag(self, tag, attrs):
pass
# Overridable -- handle end tag
def handle_endtag(self, tag):
pass
# Overridable -- handle character reference
def handle_charref(self, name):
pass
# Overridable -- handle entity reference
def handle_entityref(self, name):
pass
# Overridable -- handle data
def handle_data(self, data):
pass
# Overridable -- handle comment
def handle_comment(self, data):
pass
# Overridable -- handle declaration
def handle_decl(self, decl):
pass
# Overridable -- handle processing instruction
def handle_pi(self, data):
pass
def unknown_decl(self, data):
pass
# Internal -- helper to remove special character quoting
entitydefs = None
def unescape(self, s):
if '&' not in s:
return s
def replaceEntities(s):
s = s.groups()[0]
try:
if s[0] == "#":
s = s[1:]
if s[0] in ['x','X']:
c = int(s[1:], 16)
else:
c = int(s)
return unichr(c)
except ValueError:
return '&#'+s+';'
else:
# Cannot use name2codepoint directly, because HTMLParser supports apos,
# which is not part of HTML 4
import htmlentitydefs
if HTMLParser.entitydefs is None:
entitydefs = HTMLParser.entitydefs = {'apos':u"'"}
for k, v in htmlentitydefs.name2codepoint.iteritems():
entitydefs[k] = unichr(v)
try:
return self.entitydefs[s]
except KeyError:
return '&'+s+';'
return re.sub(r"&(#?[xX]?(?:[0-9a-fA-F]+|\w{1,8}));", replaceEntities, s)
| mit |
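The docstring's `feed()`/`close()` workflow is easiest to see with a small subclass that overrides the handler callbacks. The sketch below uses Python 3's `html.parser` (the successor of the Python 2 module shown above, with the same `feed`/handler API); the subclass name and collected fields are illustrative:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> start tags and all text data."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

p = LinkCollector()
p.feed('<p>See <a href="/docs">the docs</a>')
p.feed(' and <a href="/faq">FAQ</a>.</p>')  # data may arrive in chunks
p.close()

assert p.links == ["/docs", "/faq"]
assert "".join(p.text) == "See the docs and FAQ."
```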
ychen820/microblog | y/google-cloud-sdk/platform/gsutil/gslib/addlhelp/acls.py | 16 | 9354 | # -*- coding: utf-8 -*-
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Additional help about Access Control Lists."""
from __future__ import absolute_import
from gslib.help_provider import HelpProvider
_DETAILED_HELP_TEXT = ("""
<B>OVERVIEW</B>
Access Control Lists (ACLs) allow you to control who can read and write
your data, and who can read and write the ACLs themselves.
If not specified at the time an object is uploaded (e.g., via the gsutil cp
-a option), objects will be created with a default object ACL set on the
bucket (see "gsutil help defacl"). You can replace the ACL on an object
or bucket using the "gsutil acl set" command, or
modify the existing ACL using the "gsutil acl ch" command (see "gsutil help
acl").
<B>BUCKET VS OBJECT ACLS</B>
In Google Cloud Storage, the bucket ACL works as follows:
- Users granted READ access are allowed to list the bucket contents.
- Users granted WRITE access are allowed READ access and also are
allowed to write and delete objects in that bucket -- including
overwriting previously written objects.
- Users granted OWNER access are allowed WRITE access and also
are allowed to read and write the bucket's ACL.
The object ACL works as follows:
- Users granted READ access are allowed to read the object's data and
metadata.
- Users granted OWNER access are allowed READ access and also
are allowed to read and write the object's ACL.
A couple of points are worth noting, that sometimes surprise users:
1. There is no WRITE access for objects; attempting to set an ACL with WRITE
permission for an object will result in an error.
2. The bucket ACL plays no role in determining who can read objects; only the
object ACL matters for that purpose. This is different from how things
work in Linux file systems, where both the file and directory permissions
control file read access. It also means, for example, that someone with
OWNER over the bucket may not have read access to objects in
the bucket. This is by design, and supports useful cases. For example,
you might want to set up bucket ownership so that a small group of
administrators have OWNER on the bucket (with the ability to
delete data to control storage costs), but not grant those users read
access to the object data (which might be sensitive data that should
only be accessed by a different specific group of users).
<B>CANNED ACLS</B>
The simplest way to set an ACL on a bucket or object is using a "canned
ACL". The available canned ACLs are:
project-private
Gives permission to the project team based on their roles. Anyone who is
part of the team has READ permission, and project owners and project editors
have OWNER permission. This is the default ACL for newly created
buckets. This is also the default ACL for newly created objects unless the
default object ACL for that bucket has been changed. For more details see
"gsutil help projects".
private
Gives the requester (and only the requester) OWNER permission for a
bucket or object.
public-read
Gives all users (whether logged in or anonymous) READ permission. When
you apply this to an object, anyone on the Internet can read the object
without authenticating.
NOTE: By default, publicly readable objects are served with a Cache-Control
header allowing such objects to be cached for 3600 seconds. If you need to
ensure that updates become visible immediately, you should set a
Cache-Control header of "Cache-Control:private, max-age=0, no-transform" on
such objects. For help doing this, see 'gsutil help setmeta'.
NOTE: Setting a bucket ACL to public-read will remove all OWNER and WRITE
permissions from everyone except the project owner group. Setting an object
ACL to public-read will remove all OWNER and WRITE permissions from
everyone except the object owner. For this reason, we recommend using
the "acl ch" command to make these changes; see "gsutil help acl ch" for
details.
public-read-write
Gives all users READ and WRITE permission. This ACL applies only to buckets.
NOTE: Setting a bucket to public-read-write will allow anyone on the
Internet to upload anything to your bucket. You will be responsible for this
content.
NOTE: Setting a bucket ACL to public-read-write will remove all OWNER
permissions from everyone except the project owner group. Setting an object
ACL to public-read-write will remove all OWNER permissions from
everyone except the object owner. For this reason, we recommend using
the "acl ch" command to make these changes; see "gsutil help acl ch" for
details.
authenticated-read
Gives the requester OWNER permission and gives all authenticated
Google account holders READ permission.
bucket-owner-read
Gives the requester OWNER permission and gives the bucket owner READ
permission. This is used only with objects.
bucket-owner-full-control
Gives the requester OWNER permission and gives the bucket owner
OWNER permission. This is used only with objects.
<B>ACL JSON</B>
When you use a canned ACL, it is translated into a JSON representation
that can later be retrieved and edited to specify more fine-grained
detail about who can read and write buckets and objects. By running
the "gsutil acl get" command you can retrieve the ACL JSON, and edit it to
customize the permissions.
As an example, if you create an object in a bucket that has no default
object ACL set and then retrieve the ACL on the object, it will look
something like this:
[
{
"entity": "group-00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7",
"entityId": "00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7",
"role": "OWNER"
},
{
"entity": "group-00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51",
"entityId": "00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51",
"role": "OWNER"
},
{
"entity": "00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71",
"entityId": "00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71",
"role": "READER"
}
]
The ACL consists of a collection of elements, each of which specifies an Entity
and a Role. Entities are the way you specify an individual or group of
individuals, and Roles specify what access they're permitted.
This particular ACL grants OWNER to two groups (which means members
of those groups are allowed to read the object and read and write the ACL),
and READ permission to a third group. The project groups are (in order)
the project owners group, editors group, and viewers group.
The 64 digit hex identifiers (following any prefixes like "group-") used in
this ACL are called canonical IDs. They are used to identify predefined
groups associated with the project that owns the bucket: the Project Owners,
Project Editors, and All Project Team Members groups. For more information about
the permissions and roles of these project groups, see "gsutil help projects".
Here's an example of an ACL specified using the group-by-email and
group-by-domain entities:
[
  {
    "entity": "group-travel-companion-owners@googlegroups.com",
    "email": "travel-companion-owners@googlegroups.com",
    "role": "OWNER"
  },
  {
    "domain": "example.com",
    "entity": "domain-example.com",
    "role": "READER"
  }
]
This ACL grants members of an email group OWNER, and grants READ
access to any user in a domain (which must be a Google Apps for Business
domain). By applying email group grants to a collection of objects
you can edit access control for large numbers of objects at once via
http://groups.google.com. That way, for example, you can easily and quickly
change access to a group of company objects when employees join and leave
your company (i.e., without having to individually change ACLs across
potentially millions of objects).
<B>SHARING SCENARIOS</B>
For more detailed examples of how to achieve various useful sharing use
cases, see https://developers.google.com/storage/docs/collaboration
""")
class CommandOptions(HelpProvider):
"""Additional help about Access Control Lists."""
# Help specification. See help_provider.py for documentation.
help_spec = HelpProvider.HelpSpec(
help_name='acls',
help_name_aliases=[
'ACL', 'access control', 'access control list', 'authorization',
'canned', 'canned acl'],
help_type='additional_help',
help_one_line_summary='Working With Access Control Lists',
help_text=_DETAILED_HELP_TEXT,
subcommand_help_text={},
)
| bsd-3-clause |
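The help text above describes ACL JSON as a list of elements, each pairing an Entity with a Role. A minimal sketch (assuming only that shape — a list of dicts with `"entity"` and `"role"` keys, as in the examples above) of grouping entities by role, e.g. to audit who holds OWNER on an object:

```python
# Sketch (assumption: ACL JSON shaped as in the gsutil help text --
# a list of {"entity": ..., "role": ...} dicts) that groups entities
# by role, e.g. to audit who holds OWNER on an object.
import json
from collections import defaultdict

def entities_by_role(acl_json):
    """Map each role (OWNER/WRITER/READER) to the entities holding it."""
    grouped = defaultdict(list)
    for entry in json.loads(acl_json):
        grouped[entry["role"]].append(entry["entity"])
    return dict(grouped)

acl = '''[
  {"entity": "group-admins@example.com", "role": "OWNER"},
  {"entity": "domain-example.com", "role": "READER"}
]'''
print(entities_by_role(acl))
```

The group names here are illustrative, not real canonical IDs; the input would typically come from `gsutil acl get`.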
mkieszek/odoo | addons/auth_ldap/users_ldap.py | 17 | 9858 | # Part of Odoo. See LICENSE file for full copyright and licensing details.
import ldap
import logging
from ldap.filter import filter_format
import openerp.exceptions
from openerp import tools
from openerp.osv import fields, osv
from openerp import SUPERUSER_ID
from openerp.modules.registry import RegistryManager
_logger = logging.getLogger(__name__)
class CompanyLDAP(osv.osv):
_name = 'res.company.ldap'
_order = 'sequence'
_rec_name = 'ldap_server'
def get_ldap_dicts(self, cr, ids=None):
"""
Retrieve res_company_ldap resources from the database in dictionary
format.
:param list ids: Valid ids of model res_company_ldap. If not \
specified, process all resources (unlike other ORM methods).
:return: ldap configurations
:rtype: list of dictionaries
"""
if ids:
id_clause = 'AND id IN (%s)'
args = [tuple(ids)]
else:
id_clause = ''
args = []
cr.execute("""
SELECT id, company, ldap_server, ldap_server_port, ldap_binddn,
ldap_password, ldap_filter, ldap_base, "user", create_user,
ldap_tls
FROM res_company_ldap
WHERE ldap_server != '' """ + id_clause + """ ORDER BY sequence
""", args)
return cr.dictfetchall()
def connect(self, conf):
"""
Connect to an LDAP server specified by an ldap
configuration dictionary.
:param dict conf: LDAP configuration
:return: an LDAP object
"""
uri = 'ldap://%s:%d' % (conf['ldap_server'],
conf['ldap_server_port'])
connection = ldap.initialize(uri)
if conf['ldap_tls']:
connection.start_tls_s()
return connection
def authenticate(self, conf, login, password):
"""
Authenticate a user against the specified LDAP server.
In order to prevent an unintended 'unauthenticated authentication',
which is an anonymous bind with a valid dn and a blank password,
check for empty passwords explicitly (:rfc:`4513#section-6.3.1`)
:param dict conf: LDAP configuration
:param login: username
:param password: Password for the LDAP user
:return: LDAP entry of authenticated user or False
:rtype: dictionary of attributes
"""
if not password:
return False
entry = False
filter = filter_format(conf['ldap_filter'], (login,))
try:
results = self.query(conf, filter)
# Get rid of (None, attrs) for searchResultReference replies
results = [i for i in results if i[0]]
if results and len(results) == 1:
dn = results[0][0]
conn = self.connect(conf)
conn.simple_bind_s(dn, password.encode('utf-8'))
conn.unbind()
entry = results[0]
except ldap.INVALID_CREDENTIALS:
return False
except ldap.LDAPError, e:
_logger.error('An LDAP exception occurred: %s', e)
return entry
def query(self, conf, filter, retrieve_attributes=None):
"""
Query an LDAP server with the filter argument and scope subtree.
Allow for all authentication methods of the simple authentication
method:
- authenticated bind (non-empty binddn + valid password)
- anonymous bind (empty binddn + empty password)
- unauthenticated authentication (non-empty binddn + empty password)
.. seealso::
:rfc:`4513#section-5.1` - LDAP: Simple Authentication Method.
:param dict conf: LDAP configuration
:param filter: valid LDAP filter
:param list retrieve_attributes: LDAP attributes to be retrieved. \
If not specified, return all attributes.
:return: ldap entries
:rtype: list of tuples (dn, attrs)
"""
results = []
try:
conn = self.connect(conf)
ldap_password = conf['ldap_password'] or ''
ldap_binddn = conf['ldap_binddn'] or ''
conn.simple_bind_s(ldap_binddn.encode('utf-8'), ldap_password.encode('utf-8'))
results = conn.search_st(conf['ldap_base'], ldap.SCOPE_SUBTREE,
filter, retrieve_attributes, timeout=60)
conn.unbind()
except ldap.INVALID_CREDENTIALS:
_logger.error('LDAP bind failed.')
except ldap.LDAPError, e:
_logger.error('An LDAP exception occurred: %s', e)
return results
def map_ldap_attributes(self, cr, uid, conf, login, ldap_entry):
"""
Compose values for a new resource of model res_users,
based upon the retrieved ldap entry and the LDAP settings.
:param dict conf: LDAP configuration
:param login: the new user's login
:param tuple ldap_entry: single LDAP result (dn, attrs)
:return: parameters for a new resource of model res_users
:rtype: dict
"""
values = { 'name': ldap_entry[1]['cn'][0],
'login': login,
'company_id': conf['company']
}
return values
def get_or_create_user(self, cr, uid, conf, login, ldap_entry,
context=None):
"""
Retrieve an active resource of model res_users with the specified
login. Create the user if it is not initially found.
:param dict conf: LDAP configuration
:param login: the user's login
:param tuple ldap_entry: single LDAP result (dn, attrs)
:return: res_users id
:rtype: int
"""
user_id = False
login = tools.ustr(login.lower().strip())
cr.execute("SELECT id, active FROM res_users WHERE lower(login)=%s", (login,))
res = cr.fetchone()
if res:
if res[1]:
user_id = res[0]
elif conf['create_user']:
_logger.debug('Creating new Odoo user "%s" from LDAP', login)
user_obj = self.pool['res.users']
values = self.map_ldap_attributes(cr, uid, conf, login, ldap_entry)
if conf['user']:
values['active'] = True
user_id = user_obj.copy(cr, SUPERUSER_ID, conf['user'],
default=values)
else:
user_id = user_obj.create(cr, SUPERUSER_ID, values)
return user_id
_columns = {
'sequence': fields.integer('Sequence'),
'company': fields.many2one('res.company', 'Company', required=True,
ondelete='cascade'),
'ldap_server': fields.char('LDAP Server address', required=True),
'ldap_server_port': fields.integer('LDAP Server port', required=True),
'ldap_binddn': fields.char('LDAP binddn',
help=("The user account on the LDAP server that is used to query "
"the directory. Leave empty to connect anonymously.")),
'ldap_password': fields.char('LDAP password',
help=("The password of the user account on the LDAP server that is "
"used to query the directory.")),
'ldap_filter': fields.char('LDAP filter', required=True),
'ldap_base': fields.char('LDAP base', required=True),
'user': fields.many2one('res.users', 'Template User',
help="User to copy when creating new users"),
'create_user': fields.boolean('Create user',
help="Automatically create local user accounts for new users authenticating via LDAP"),
'ldap_tls': fields.boolean('Use TLS',
help="Request secure TLS/SSL encryption when connecting to the LDAP server. "
"This option requires a server with STARTTLS enabled, "
"otherwise all authentication attempts will fail."),
}
_defaults = {
'ldap_server': '127.0.0.1',
'ldap_server_port': 389,
'sequence': 10,
'create_user': True,
}
class res_company(osv.osv):
_inherit = "res.company"
_columns = {
'ldaps': fields.one2many(
'res.company.ldap', 'company', 'LDAP Parameters', copy=True, groups="base.group_system"),
}
class users(osv.osv):
_inherit = "res.users"
def _login(self, db, login, password):
user_id = super(users, self)._login(db, login, password)
if user_id:
return user_id
registry = RegistryManager.get(db)
with registry.cursor() as cr:
cr.execute("SELECT id FROM res_users WHERE lower(login)=%s", (login,))
res = cr.fetchone()
if res:
return False
ldap_obj = registry.get('res.company.ldap')
for conf in ldap_obj.get_ldap_dicts(cr):
entry = ldap_obj.authenticate(conf, login, password)
if entry:
user_id = ldap_obj.get_or_create_user(
cr, SUPERUSER_ID, conf, login, entry)
if user_id:
break
return user_id
def check_credentials(self, cr, uid, password):
try:
super(users, self).check_credentials(cr, uid, password)
except openerp.exceptions.AccessDenied:
cr.execute('SELECT login FROM res_users WHERE id=%s AND active=TRUE',
(int(uid),))
res = cr.fetchone()
if res:
ldap_obj = self.pool['res.company.ldap']
for conf in ldap_obj.get_ldap_dicts(cr):
if ldap_obj.authenticate(conf, res[0], password):
return
raise
| agpl-3.0 |
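The `authenticate` docstring above calls out the "unauthenticated authentication" pitfall: a non-empty bind DN with an empty password can be treated by an LDAP server as a successful anonymous bind. A minimal sketch of that guard in isolation (no real LDAP connection; just the precondition check the method performs before binding):

```python
# Sketch of the "unauthenticated authentication" guard described in
# CompanyLDAP.authenticate(): an empty password must be rejected before
# any bind attempt, because many LDAP servers treat a valid DN plus a
# blank password as an anonymous bind that nevertheless "succeeds"
# (RFC 4513, section 6.3.1). No real LDAP connection is made here.

def may_attempt_bind(binddn, password):
    """Return True only for credential pairs that are safe to bind with."""
    if not password:    # empty password => unauthenticated-auth risk
        return False
    if not binddn:      # no DN to bind as
        return False
    return True

print(may_attempt_bind("cn=alice,dc=example,dc=com", ""))        # False
print(may_attempt_bind("cn=alice,dc=example,dc=com", "s3cret"))  # True
```

The DN here is hypothetical; in the module above the DN comes from the LDAP search result and the check is simply `if not password: return False`.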
Spookz0r/Temporal_Insanity | Gabriel/Code/Keras/mnist_dataset_sorted.py | 1 | 1969 | import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, pooling
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from keras.models import load_model
from temporal_insanity import *
'''Choose the batch size, number of output classes and number of epochs.
The batch size is the number of training samples processed per gradient
update; the network weights are updated once per batch.
'''
batch_size = 128
nb_classes = 10
nb_epoch = 1
''' Choose image dimension for input '''
img_rows = 28
img_cols = 28
''' Number of convolutional filters (not including maxpool and relu etc...)
A larger number of filters makes it possible to detect more features
'''
nb_filters = 12
''' Size of the pooling area for max pooling, almost always 2x2 '''
pool_size = (2,2)
''' Convolution kernel size, ie the size of the matrix sliding over the image
detecting features '''
kernel_size = (3,3)
''' Split up the data in train and test sets '''
(X_train, y_train),(X_test, y_test) = mnist.load_data()
#zero_array = [ np.array(0), np.array(0)]
zero_images = []
one_images = []
two_images = []
three_images = []
four_images = []
five_images = []
six_images = []
seven_images = []
eight_images = []
nine_images = []
digit_lists = [zero_images, one_images, two_images, three_images, four_images,
               five_images, six_images, seven_images, eight_images, nine_images]
# Route each training image into the list for its label; equivalent to an
# if/elif chain over all ten digits, but driven by the label value itself.
for i, item in enumerate(y_train):
    digit_lists[item].append(X_train[i])
print("Done")
| mit |
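The per-digit grouping done in the script above can also be expressed with one boolean-mask selection per label. A hedged sketch on synthetic data (so no MNIST download is needed):

```python
# Sketch: grouping samples by label with NumPy boolean masks instead of
# appending inside a loop. Synthetic labels and "images" stand in for
# the real MNIST arrays.
import numpy as np

y_train = np.array([0, 1, 0, 2, 1, 0])
X_train = np.arange(6 * 4).reshape(6, 4)   # six fake 4-pixel "images"

grouped = [X_train[y_train == digit] for digit in range(3)]
print([g.shape[0] for g in grouped])  # [3, 2, 1]
```

With the real data this would be `range(10)` over `y_train` from `mnist.load_data()`; each entry of `grouped` is an array rather than a Python list.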
dvcolgan/ludumdare27 | urls.py | 1 | 1279 | from django.conf.urls import patterns, include, url
from django.contrib import admin
from django.core.urlresolvers import reverse
from django.conf import settings
from django.views.generic import TemplateView
#from library.forms import *
admin.autodiscover()
urlpatterns = patterns('',
url(r'^admin/', include(admin.site.urls)),
url(r'', include('game.urls')),
url(r'^qunit-tests/', TemplateView.as_view(template_name='qunit-tests.html'), name='qunit-tests'),
url(r'^login/$', 'django.contrib.auth.views.login', {
'template_name': 'login.html',
}, 'login'),
#url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
url(r'^password-change/$', 'django.contrib.auth.views.password_change', {
'template_name': 'password_change.html',
}, 'password_change'),
url(r'^password-reset/$', 'django.contrib.auth.views.password_reset', {
'template_name': 'password_change.html',
}, 'password_reset'),
url(r'^logout/$', 'django.contrib.auth.views.logout_then_login', name='logout'),
url(r'', include('django.contrib.auth.urls')),
)
if settings.DEBUG:
urlpatterns += patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
)
| mit |
zulip/django | django/contrib/admin/templatetags/log.py | 499 | 2080 | from django import template
from django.contrib.admin.models import LogEntry
register = template.Library()
class AdminLogNode(template.Node):
def __init__(self, limit, varname, user):
self.limit, self.varname, self.user = limit, varname, user
def __repr__(self):
return "<GetAdminLog Node>"
def render(self, context):
if self.user is None:
entries = LogEntry.objects.all()
else:
user_id = self.user
if not user_id.isdigit():
user_id = context[self.user].pk
entries = LogEntry.objects.filter(user__pk=user_id)
context[self.varname] = entries.select_related('content_type', 'user')[:int(self.limit)]
return ''
@register.tag
def get_admin_log(parser, token):
"""
Populates a template variable with the admin log for the given criteria.
Usage::
{% get_admin_log [limit] as [varname] for_user [context_var_containing_user_obj] %}
Examples::
{% get_admin_log 10 as admin_log for_user 23 %}
{% get_admin_log 10 as admin_log for_user user %}
{% get_admin_log 10 as admin_log %}
Note that ``context_var_containing_user_obj`` can be a hard-coded integer
(user ID) or the name of a template context variable containing the user
object whose ID you want.
"""
tokens = token.contents.split()
if len(tokens) < 4:
raise template.TemplateSyntaxError(
"'get_admin_log' statements require two arguments")
if not tokens[1].isdigit():
raise template.TemplateSyntaxError(
"First argument to 'get_admin_log' must be an integer")
if tokens[2] != 'as':
raise template.TemplateSyntaxError(
"Second argument to 'get_admin_log' must be 'as'")
if len(tokens) > 4:
if tokens[4] != 'for_user':
raise template.TemplateSyntaxError(
"Fourth argument to 'get_admin_log' must be 'for_user'")
return AdminLogNode(limit=tokens[1], varname=tokens[3], user=(tokens[5] if len(tokens) > 5 else None))
| bsd-3-clause |
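`get_admin_log` above validates the positional token layout `{% get_admin_log [limit] as [varname] for_user [user] %}`. The same checks can be exercised on a plain string, outside Django (a hypothetical helper, not part of the library):

```python
# Sketch of the token-layout checks get_admin_log performs, applied to a
# plain string instead of a Django template Token (hypothetical helper,
# not part of Django itself).
def parse_get_admin_log(contents):
    tokens = contents.split()
    if len(tokens) < 4:
        raise ValueError("'get_admin_log' statements require two arguments")
    if not tokens[1].isdigit():
        raise ValueError("First argument must be an integer")
    if tokens[2] != 'as':
        raise ValueError("Second argument must be 'as'")
    if len(tokens) > 4 and tokens[4] != 'for_user':
        raise ValueError("Fourth argument must be 'for_user'")
    user = tokens[5] if len(tokens) > 5 else None
    return {'limit': tokens[1], 'varname': tokens[3], 'user': user}

print(parse_get_admin_log("get_admin_log 10 as admin_log for_user user"))
```

This mirrors why `{% get_admin_log 10 as admin_log %}` is legal (the `for_user` clause is optional) while `{% get_admin_log ten as admin_log %}` is not.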
CCPorg/BOS-BossCoin-Ver-1-Copy | share/qt/make_spinner.py | 4415 | 1035 | #!/usr/bin/env python
# W.J. van der Laan, 2011
# Make spinning .mng animation from a .png
# Requires imagemagick 6.7+
from __future__ import division
from os import path
from PIL import Image
from subprocess import Popen
SRC='img/reload_scaled.png'
DST='../../src/qt/res/movies/update_spinner.mng'
TMPDIR='/tmp'
TMPNAME='tmp-%03i.png'
NUMFRAMES=35
FRAMERATE=10.0
CONVERT='convert'
CLOCKWISE=True
DSIZE=(16,16)
im_src = Image.open(SRC)
if CLOCKWISE:
im_src = im_src.transpose(Image.FLIP_LEFT_RIGHT)
def frame_to_filename(frame):
return path.join(TMPDIR, TMPNAME % frame)
frame_files = []
for frame in xrange(NUMFRAMES):
rotation = (frame + 0.5) / NUMFRAMES * 360.0
if CLOCKWISE:
rotation = -rotation
im_new = im_src.rotate(rotation, Image.BICUBIC)
im_new.thumbnail(DSIZE, Image.ANTIALIAS)
outfile = frame_to_filename(frame)
im_new.save(outfile, 'png')
frame_files.append(outfile)
p = Popen([CONVERT, "-delay", str(FRAMERATE), "-dispose", "2"] + frame_files + [DST])
p.communicate()
| mit |
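The rotation formula in make_spinner.py, `(frame + 0.5) / NUMFRAMES * 360.0`, centres each frame's angle in its slice of the full circle (negated for clockwise spin). It can be checked in isolation:

```python
# Sketch: the per-frame angle formula from make_spinner.py, checked in
# isolation with a small frame count. Each frame's rotation is centred
# in its 1/NUMFRAMES slice of 360 degrees.
NUMFRAMES = 4

def rotation_for(frame, clockwise=True):
    angle = (frame + 0.5) / NUMFRAMES * 360.0
    return -angle if clockwise else angle

print([rotation_for(f, clockwise=False) for f in range(NUMFRAMES)])
# [45.0, 135.0, 225.0, 315.0]
```

The `+ 0.5` offset means no frame sits at exactly 0° or 360°, so the looping animation has no duplicated endpoint frame.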
Pulecz/BGames_tools | CONST.py | 2 | 35371 | fallout4_utils = {
"game" : "Fallout 4",
"utilities" : [
{
"_comment_" : ["https://sourceforge.net/projects/modorganizer/",
"https://github.com/TanninOne/modorganizer/releases",
"https://github.com/LePresidente/modorganizer/releases-https://github.com/LePresidente/modorganizer/releases/download/v2.0.8.3b/Mod.Organizer-2.0.8.3.exe"],
"name": "Mod Organizer",
"version": "2.0.7",
"download": "https://github.com/TanninOne/modorganizer/releases/download/v2.0.7b/Mod.Organizer-2.0.7.exe",
"sha1": "34d69d98b67c6ebd59088206451fc8422d75a721",
"install_path": "%FO4Path%\\Mods\\ModOrganizer"
},
{
"_comment_" : "http://f4se.silverlock.org/",
"name": "SKSE",
"version": "0.3.0",
"download": "http://f4se.silverlock.org/beta/f4se_0_03_00.7z",
"sha1": "2d197c658131d96fe525c651038aff1609cec943",
"install_path": "%FO4Path%\\MODS"
},
{
"_comment_" : "http://enbdev.com/download_mod_fallout4.htm",
"name": "ENB",
"version": "0.311",
"download": "http://enbdev.com/enbseries_fallout4_v0311.zip",
"sha1": "98093ad5d03580f869dc79fab3f0f717fad58b82",
"install_path": "%FO4Path%\\MODS"
},
{
"_comment_" : "http://www.nexusmods.com/fallout4/mods/1822/?",
"name": "Shadow Boost",
"version": "1.9.4.0",
"download": "http://www.dev-c.com/files/ShadowBoost_1.9.4.0.zip",
"sha1": "14385a3412944d0eea686680566b3f93a44ca077",
"install_path": "%FO4Path%\\MODS"
},
{
"_comment_" : "https://github.com/loot/loot/releases",
"name": "LOOT",
"version": "0.10.3",
"download": "https://github.com/loot/loot/releases/download/0.10.3/loot_0.10.3-0-g0fcf788_dev_Win32.7z",
"sha1": "62e2de9277bd98527ba6848f253f89682b3950c8",
"install_path": "%FO4Path%\\Mods\\LOOT"
},
{
"_comment_" : "https://github.com/TES5Edit",
"name": "FO4Edit",
"version": "3.1.3",
"download": "https://github.com/TES5Edit/TES5Edit/releases/download/FO4Edit-fa3cf0b/FO4Edit.3.1.3.-.fa3cf0b.7z",
"sha1": "3ba6078c45cf5a9d7d1c0f0f447496aeeac8d49b",
"install_path": "%FO4Path%\\Mods\\FO4Edit"
},
{
"_comment_" : "https://github.com/matortheeternal/smash/releases",
"name": "Mator Smash",
"version": "0.4",
"download": "https://github.com/matortheeternal/smash/releases/download/0.4/MatorSmash.zip",
"sha1": "e437683c5a380bc972154d2817758f1fc4ac5053",
"install_path": "%FO4Path%\\Mods\\MatorSmash"
},
{
"_comment_" : "https://github.com/wrye-bash/wrye-bash/releases",
"name": "Wrye Bash",
"version": "0.306",
"download": "https://github.com/wrye-bash/wrye-bash/releases/download/v306/Wrye.Bash.306.-.Standalone.Executable.7z",
"sha1": "05b373772bee61d8f13a9732950b8036e8daea67",
"install_path": "%FO4Path%\\Mods\\WryeBash"
},
{
"_comment_" : "https://github.com/matortheeternal/mod-analyzer/releases",
"name": "Mod Analyzer",
"version": "2.0.5",
"download": "https://github.com/matortheeternal/mod-analyzer/releases/download/2.0.5/ModAnalyzer.zip",
"sha1": "4f4e3f40a9393a916c451dccc16145ce81e4db23",
"install_path": "%FO4Path%\\Mods\\ModAnalyzer"
}
],
"ModOrganizer.ini": {
"[Plugins]": [
"Basic%20diagnosis%20plugin\\check_modorder=false"
],
"[customExecutables]": [
{
"title": "F4SE",
"custom": "false",
"toolbar": "false",
"ownicon": "false",
"binary": "%FO4Path%\\f4se_loader.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Fallout 4",
"custom": "false",
"toolbar": "false",
"ownicon": "false",
"binary": "%FO4Path%\\Fallout4.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Fallout 4 Launcher",
"custom": "false",
"toolbar": "false",
"ownicon": "false",
"binary": "%FO4Path%\\Fallout4Launcher.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "FO4Edit",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%FO4Path%\\Mods\\FO4Edit\\FO4Edit.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "LOOT",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%FO4Path%\\Mods\\LOOT\\LOOT.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Smash",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%FO4Path%\\Mods\\MatorSmash\\MatorSmash.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "WryeBash",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%FO4Path%\\Mods\\WryeBash\\Wrye Bash.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Mod Analyzer",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%FO4Path%\\Mods\\ModAnalyzer\\ModAnalyzer.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
}
]
}
}
fallout4_99_ids = { "0" : [None,None, None, None],
"1" : [None,"X2c - Improved Eyes at Fallout 4 Nexus - Mods and community", "17", "Body, Face, and Hair"],
"2" : [None,None, None, None],
"3" : [["Todd.7z-3-.7z"],"Todd Howard Icon for Fallout 4 at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"4" : [None,None, None, None],
"5" : [["Icon Pack-5-.7z"],"Fallout 4 Icon Pack at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"6" : [None,None, None, None],
"7" : [["Both Files .ico format (3.0)-7-1-0.7z","Tallhout 4 Icon (God Howard Edition) -7-1-0.7z","Tallhout 4 Icon (God Howard Edition 2.2) -7-1-0.7z"],"\"Toddout 4\" Icon at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"8" : [["GECK Icon-8-1-0.7z","GECK Icon replacer-8-1-0.7z"],"GECK Icon at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"9" : [["CBM - Survival of the Fittest-9-1-1.zip"],"CBM - Survival of the Fittest at Fallout 4 Nexus - Mods and community", "15", "Gameplay Effects and Changes"],
"10" : [None,None, None, None],
"11" : [None,None, None, None],
"12" : [None,None, None, None],
"13" : [None,None, None, None],
"14" : [None,None, None, None],
"15" : [["Caliente's Beautiful Bodies Enhancer - v2.3-15-2-3.7z","CBBE Body and Hands texture Source and Options v1.0-15-1-0.7z","Furry Undergarments Fix-15-1-0.7z","Modder's Resource-15-1-0.7z","CBBE Body and Hands texture Source and Options v1.0-15-1-0.7z"], "Caliente's Beautiful Bodies Enhancer -CBBE", "19", "Models and Textures"],
"16" : [["Patcher-16-1-9-4.zip"],"Patcher at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"17" : [["T60 Icon-17-FINAL.zip","T60 PNG Version-17-FINAL.zip","T60 PNG-17-FINAL.zip"],"Fallout 4 T60 Icon at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"18" : [None,None, None, None],
"19" : [None,None, None, None],
"20" : [None,None, None, None],
"21" : [None,None, None, None],
"22" : [None,None, None, None],
"23" : [None,None, None, None],
"24" : [None,None, None, None],
"25" : [["BodySlide and Outfit Studio - v4.2.3-25-4-2-3.7z"],"BodySlide and Outfit Studio at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"26" : [["New Vegas Amber Hud Instructions-26-1.zip","New Vegas Amber Hud Instructions-26-2.zip"],"New Vegas Amber Hud at Fallout 4 Nexus - Mods and community", "37", "User Interface"],
"27" : [["Time Lapse Main Menu Theme - Long-27-1.zip","Time Lapse Main Menu Theme - Short-27-1.zip"],"Time Lapse Main Menu Replacer at Fallout 4 Nexus - Mods and community", "37", "User Interface"],
"28" : [None,None, None, None],
"29" : [None,None, None, None],
"30" : [None,None, None, None],
"31" : [["Alternate alternate font 1-31-1.zip","Alternate Alternate Font 2-31-1.zip","Main-31-2.zip","Latin Extended Support Fonts-31-3.zip","Main font only-31-1.zip"],"Alternate UI Fonts at Fallout 4 Nexus - Mods and community", "37", "User Interface"],
"32" : [["Mutant toad howard-32-1-0.rar"],"Mutant Toad Howard icon at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"33" : [None,None, None, None],
"34" : [None,None, None, None],
"35" : [["Fallout 4 Icon - 8k Resolution -35-1-0.7z","Hi-Res Fallout 4 Icon-35-1-0.7z"],"Hi-Res Fallout 4 Icon - 8k at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"36" : [None,None, None, None],
"37" : [None,None, None, None],
"38" : [None,None, None, None],
"39" : [["IcoPack-39-1-8.7z"],"24 AllRez Fallout4 IconSet at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"40" : [["FALLOUT 4 - Enhanced Wasteland Preset v3.0-40-3-0.zip","FALLOUT 4 - Enhanced Wasteland v5.0-40-5-0.zip"],"Enhanced Wasteland Preset at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"41" : [None,None, None, None],
"42" : [["VOGUE ENB v0.19.1 - COPYRIGHT EDITION-42-v0-19-1.zip","VOGUE ENB v0.19.1 - LITE-42-v0-19-1L.zip","VOGUE ENB v0.19.1 - No DoF Effect-42-v0-19-1n.zip"],"VOGUE ENB - Realism at Fallout 4 Nexus - Mods and community", "13", "ENB Presets"],
"43" : [["PerkPoster full version.-43-1-0.zip","PerkPoster Patch-43-1-0.zip"],"Unofficial Fallout 4 PerkPoster Patch at Fallout 4 Nexus - Mods and community", "25", "Patches"],
"44" : [["Godd Howard Icon-44-1-00.zip"],"Godd Howard Icon for Fallout 4 at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"45" : [None,None, None, None],
"46" : [["The Mod-46-.rar"],"Garage Icon - Custom Photo at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"47" : [None,None, None, None],
"48" : [None,None, None, None],
"49" : [None,None, None, None],
"50" : [None,None, None, None],
"51" : [["Fo4 icon-51-.rar","Fo4 icon 2-51-.rar"],"Fallout 4 T-60 icons at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"52" : [None,None, None, None],
"53" : [["Geralt Voice resources-53-1-0.7z","Long version before cuting the lines -53-1-0a.7z"],"Geralt The Witcher Voice Resources at Fallout 4 Nexus - Mods and community", "18", "Modders Resources and Tutorials"],
"54" : [["Fallout 4 Language Filter-54-0-40.7z"],"Fallout 4 Language Filter at Fallout 4 Nexus - Mods and community", "36", "Audio - Voice"],
"55" : [None,None, None, None],
"56" : [None,None, None, None],
"57" : [None,None, None, None],
"58" : [["K-putt's_Fallout_ReShade_1.5 'Basic'-58-1-5.rar","K-putt's_Fallout_ReShade_1.5 'Performance'-58-1-5.rar","K-putt's_Fallout_ReShade_1.5 'DOF and AO'-58-.rar"],"K-putt's Config 1.5 at Fallout 4 Nexus - Mods and community", "55", "ReShade Presets"],
"59" : [None,None, None, None],
"60" : [None,None, None, None],
"61" : [["Screenshot Tools-61-.zip"],"Screenshot Tools at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"62" : [["SeamlessPack1-62-1-0.rar","SeamlessPack2-62-1-0.rar","SeamlessPack3-62-1-0.rar"],"Jesters Seamless Texture Pack ( MODDERS RESOURCE ) at Fallout 4 Nexus - Mods and community", "18", "Modders Resources and Tutorials"],
"63" : [["max-63-2.rar"],"Maxed. S.P.E.C.I.A.L .bat at Fallout 4 Nexus - Mods and community", "8", "Cheats and God items"],
"64" : [["Settlement Helper-64-1-0.7z","Settlement Helper-64-1-1.7z","Settlement Helper-64-1-2.7z"],"Settlement Helper at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"65" : [None,None, None, None],
"66" : [["Chris2012 Realistic SweetFX Preset 1.0-66-1-0.rar","Chris2012 Realistic SweetFX Preset 1.1-66-1-1.rar"],"Chris2012's Realistic SweetFX Preset at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"67" : [None,None, None, None],
"68" : [["Batch File all the junk-68-1-6.rar","Batch File all the ammo-68-1-4.rar"],"Batch File all the junk at Fallout 4 Nexus - Mods and community", "8", "Cheats and God items"],
"69" : [["Lyssa Vagabond save 1.1-69-1-1.rar","lyssa vagabond after vault -69-.rar"],"Lyssa Vagabond Character at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"70" : [["Vault 111 Quickstart - Female-70-V2.zip","Vault 111 Quickstart - Male-70-V2.zip"],"Vault 111 Quickstart - Male And Female at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"71" : [None,None, None, None],
"72" : [None,None, None, None],
"73" : [["ULG 1.2-73-1-2.zip"],"ULG - Ultra Low Graphics for low-end PC's (F4) at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"74" : [None,None, None, None],
"75" : [None,None, None, None],
"76" : [None,None, None, None],
"77" : [["All Items Save-77-1-1-0-0.rar"],"All Items Containers at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"78" : [["BAE v0.10-78-0-10.7z"],"B.A.E. - Bethesda Archive Extractor at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"79" : [["Savegame 1-79-1.rar","Savegame 2-79-1.rar"],"Savegame Female at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"80" : [["The love of Eli-80-1-0.7z"],"The love of Eli - a dystopic Reshade at Fallout 4 Nexus - Mods and community", "55", "ReShade Presets"],
"81" : [["Backup Files-81-.7z"],"Fallout 4 Config ini at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"82" : [["English Translation-82-0-1.zip","English Translation-82-0-2.zip"],"English Translation at Fallout 4 Nexus - Mods and community", "25", "Patches"],
"83" : [None,None, None, None],
"84" : [["Bahuda - Kim - Fallout 4-84-.7z"],"Kim - Face - Character Model at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"85" : [["MoreOrLessXP-85-1-3.zip","Rules for MoreOrLessXP-85-1-3.zip"],"More or Less XP at Fallout 4 Nexus - Mods and community", "2", "Miscellaneous"],
"86" : [["Stalker Lights 4.0-86-4-0.rar","Stalker Lights 4.1 red error fix-86-4-1.rar","Stalker Lights 4.2 les brightnes-86-4-2.rar"],"Stalker Lights Sweet Fx and ENB at Fallout 4 Nexus - Mods and community", "55", "ReShade Presets"],
"87" : [["EMMA-87-.rar"],"Crappy Emma Stone at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"88" : [["Experience Points for Level 50-88-1-0.rar"],"Experience Points at Fallout 4 Nexus - Mods and community", "46", "Skills and Leveling"],
"89" : [["Claire-89-.rar"],"Character Save - Claire at Fallout 4 Nexus - Mods and community", "32", "Saved Games"],
"90" : [["Ultimate Engine-90-.rar","Ultimate Engine-90-1-3.rar"],"Ultimate Engine at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"91" : [None,None, None, None],
"92" : [["Fallout 4 Performance Optimization (My Specs)-92-2-0.zip","Fallout 4 Performance Optimization (Weak Shader Compute Cores)-92-1-3.7z","Fallout 4 Performance Optimization - Final-92-1-4.zip","Fallout 4 Performance Optimization Med-Hi Set (Test)-92-1-2.zip","Fallout 4 Performance Optimization Update (Test 3)-92-1-1-3.zip","Fallout 4 Performance Optimization Update (Test Updated)-92-1-1-2.zip","Fallout 4 Performance Optimization-92-1-0.zip"],"Fallout 4 Performance Optimization at Fallout 4 Nexus - Mods and community", "40", "Visuals and Graphics"],
"93" : [["Guidelines 2015 V5.3.2-93-5-3-2.exe"],"Pyros Software - Guidelines ReadMe and Description Page generator at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"94" : [["Ruined City FX V1.2 (performance)-94-1-2.zip","Ruined City FX V1.2-94-1-2.zip"],"Ruined City FX at Fallout 4 Nexus - Mods and community", "55", "ReShade Presets"],
"95" : [None,None, None, None],
"96" : [["2.Power Fan Edition-96-V1-06895.rar","3 Boston Irish Edition-96-V1-04385.rar","4 Rad Sox - Jungle Camo-96-V1-028Camo.rar","5 Rad Sox - Urban Camo-96-V1-09Urb.rar","A. Solid Black-96-V1-05432.rar","B. Jungle Camo-96-V1-048Camo.rar","C. Urban Camo-96-V1-9432.rar","D. Magic Unicorn Rainbows-96-V1-05MURBO.rar","E Vault 111 Standard Issue Panties and Bra-96-V-V111.rar","F. Cow Girl-96-V1-COW.rar","G. Strawberries-96-V1-02Berry.rar","H. SweetHeart Type 1-96-V1-SH1.rar","Immersive Attire Fix-96-V1-09alpha.rar","K. Shinji Swag Edition-96-VShinji.rar","K. SweetHeart Type 2-96-V1-SH2.rar","M.1 Classic Rad Sox for Him (Male)-96-VM-102C.rar","M.2 Black with White Elastic (Ceo Brand Edition) For Him (Male)-96-VBECB.rar","M.3 Magic Rainbow Unicorns for Him (Male)-96-VMUB1.rar","M.A Black with White Elastic for Him (Male)-96-VMBE4.rar","Source Files - PSD V2.01-96-V2-01.rar","SPECIAL Rad Sox Classic Hat Retexture-96-V3-23HAT.rar","zFor Troubleshooting. My Ini files-96-VMyIni.rar"],"Immersive Attire Fix at Fallout 4 Nexus - Mods and community", "19", "Models and Textures"],
"97" : [["English Strings for Fallout 4 - DEF_UI and FDI Compatibility Version-97-1-9.7z","English Strings for Fallout 4 - DEF_UI Compatibility Version-97-1-9.7z","English Strings for Fallout 4 - Full Dialogue Interface (Paraphrase) Compatibility Version-97-1-9.7z","English Strings for Fallout 4 - Full Dialogue Interface Compatibility Version-97-1-9.7z","English Strings for Fallout 4-97-1-9.7z"],"English Strings for Fallout 4 at Fallout 4 Nexus - Mods and community", "37", "User Interface"],
"98" : [["f4pexdump.7z-98-1-0.7z"],"F4 Papyrus Dumper at Fallout 4 Nexus - Mods and community", "38", "Utilities"],
"99" : [["F4R Saved Characters - Caucasian Redhead-99-1-2.rar","F4R Saved Characters - Nora and Nath-99-1-1.rar","F4R Saved Characters-99-1-2.rar"],"Fallout 4 Radioactive - Saved Characters at Fallout 4 Nexus - Mods and community", "32", "Saved Games"]}
fallout4_default_categories_dat = "1|Ammo|3|0\n2|Animation|4|0\n3|Armour|5|0\n4|Audio - Misc|35|0\n5|Audio - Music|34|0\n6|Audio - SFX|33|0\n7|Audio - Voice|36|0\n8|Body, Face, and Hair|17|0\n9|Bug Fixes|6|0\n10|Buildings|7|0\n11|Character Presets|58|0\n12|Cheats and God items|8|0\n13|Clothing|9|0\n14|Clothing - Backpacks|49|0\n15|Collectibles, Treasure Hunts, and Puzzles|10|0\n16|Companions|11|0\n17|Crafting - Equipment|44|0\n18|Crafting - Home/Settlement|45|0\n19|Crafting - Other|50|0\n20|Creatures|12|0\n21|ENB Presets|13|0\n22|Environment|14|0\n23|Factions|16|0\n24|Gameplay Effects and Changes|15|0\n25|Immersion|51|0\n26|Items (Food, Drinks, Chems, etc)|43|0\n27|Locations - New|21|0\n28|Locations - Vanilla|47|0\n29|Miscellaneous|2|0\n30|Modders Resources and Tutorials|18|0\n31|Models and Textures|19|0\n32|New Lands|20|0\n33|NPC|22|0\n34|NPC - Vendors|23|0\n35|Overhauls|24|0\n36|Patches|25|0\n37|Performance|26|0\n38|Perks|27|0\n39|Pip-Boy|52|0\n40|Player Homes|28|0\n41|Player Settlement|48|0\n42|Poses|29|0\n43|Power Armour|53|0\n44|Quests and Adventures|30|0\n45|Radio|31|0\n46|ReShade Presets|55|0\n47|Save Games|32|0\n48|Skills and Leveling|46|0\n49|Tattoos|57|0\n50|User Interface|37|0\n51|Utilities|38|0\n52|Vehicles|39|0\n53|Visuals and Graphics|40|0\n54|Weapons|41|0\n55|Weapons and Armour|42|0\n56|Weather|56|0" #MO2_FO4_categories.dat
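The `.dat` string above packs one category per line in a pipe-separated layout; the fields appear to be `local_id|name|nexus_category_id|parent_id` (these field names are inferred from the data, not documented in the source). A minimal parsing sketch:

```python
# Parse a pipe-separated categories .dat string into a dict keyed by local id.
# Field names (name, nexus_id, parent) are inferred from the data layout.
sample = "1|Ammo|3|0\n2|Animation|4|0\n29|Miscellaneous|2|0"
categories = {}
for line in sample.split("\n"):
    local_id, name, nexus_id, parent = line.split("|")
    categories[local_id] = (name, nexus_id, parent)

print(categories["29"])  # ('Miscellaneous', '2', '0')
```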
skyrim_utils = {
"game" : "Skyrim",
"utilities" : [
{
"_comment_" : ["https://sourceforge.net/projects/modorganizer/",
"https://github.com/TanninOne/modorganizer/releases"],
"name": "Mod Organizer",
"version": "1.3.10",
"download": "http://iweb.dl.sourceforge.net/project/modorganizer/ModOrganizer_v1.3.10.7z",
"sha1": "a5953ac567f04055ebeb3971916f2ccc107e7172",
"install_path": "%SkyrimPath%\\Mods\\ModOrganizer"
},
{
"_comment_" : "http://skse.silverlock.org/",
"name": "SKSE",
"version": "1.7.3",
"download": "http://skse.silverlock.org/beta/skse_1_07_03.7z",
"sha1": "8a0b2a766327103fa25e9b0129647125f7a9a6e6",
"install_path": "%SkyrimPath%"
},
{
"_comment_" : "http://enbdev.com/download_mod_tesskyrim.html",
"name": "ENB",
"version": "0.308",
"download": "http://enbdev.com/enbseries_skyrim_v0308.zip",
"sha1": "cec5527abaa285870514377bcc98fe971ed60284",
"install_path": "%SkyrimPath%"
},
{
"_comment_" : "https://github.com/TES5Edit/TES5Edit/releases",
"name": "TES5Edit",
"version": "3.1.2",
"download": "https://github.com/TES5Edit/TES5Edit/releases/download/xedit-3.1.2/TES5Edit_3_1_2.7z",
"sha1": "f8db29d2144282156b991b0cb5671d62b0a616cc",
"install_path": "%SkyrimPath%\\Mods\\TES5Edit"
},
{
"_comment_" : "https://github.com/matortheeternal/smash/releases",
"name": "Mator Smash",
"version": "0.4",
"download": "https://github.com/matortheeternal/smash/releases/download/0.4/MatorSmash.zip",
"sha1": "e437683c5a380bc972154d2817758f1fc4ac5053",
"install_path": "%SkyrimPath%\\Mods\\MatorSmash"
},
{
"_comment_" : "https://github.com/wrye-bash/wrye-bash/releases",
"name": "Wrye Bash",
"version": "0.306",
"download": "https://github.com/wrye-bash/wrye-bash/releases/download/v306/Wrye.Bash.306.-.Standalone.Executable.7z",
"sha1": "05b373772bee61d8f13a9732950b8036e8daea67",
"install_path": "%SkyrimPath%\\Mods\\WryeBash"
},
{
"_comment_" : "https://github.com/matortheeternal/mod-analyzer/releases",
"name": "Mod Analyzer",
"version": "2.0.5",
"download": "https://github.com/matortheeternal/mod-analyzer/releases/download/2.0.5/ModAnalyzer.zip",
"sha1": "4f4e3f40a9393a916c451dccc16145ce81e4db23",
"install_path": "%SkyrimPath%\\Mods\\ModAnalyzer"
}
],
"ModOrganizer.ini": {
"[Plugins]": [
"BSA%20Extractor\\enabled=true",
"Basic%20diagnosis%20plugin\\check_modorder=false"
],
"[customExecutables]": [
{
"title": "SKSE",
"custom": "false",
"toolbar": "false",
"ownicon": "false"
},
{
"title": "Skyrim",
"custom": "false",
"toolbar": "false",
"ownicon": "false"
},
{
"title": "Skyrim Launcher",
"custom": "false",
"toolbar": "false",
"ownicon": "false"
},
{
"title": "TES5Edit",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%SkyrimPath%\\Mods\\TES5Edit\\TES5Edit.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "LOOT",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%SkyrimPath%\\Mods\\LOOT\\LOOT.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Smash",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%SkyrimPath%\\Mods\\MatorSmash\\MatorSmash.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "WryeBash",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%SkyrimPath%\\Mods\\WryeBash\\Wrye Bash.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
},
{
"title": "Mod Analyzer",
"custom": "true",
"toolbar": "true",
"ownicon": "true",
"binary": "%SkyrimPath%\\Mods\\ModAnalyzer\\ModAnalyzer.exe",
"arguments": "",
"workingDirectory": "",
"closeOnStart": "false",
"steamAppID": ""
}
]
}
}
skyrim_99_ids = { "0": [None,None,None,None],
"1": [None,None,None,None],
"2": [None,None,None,None],
"3": [None,None,None,None],
"4": [None,None,None,None],
"5": [None,None,None,None],
"6": [None,None,None,None],
"7": [None,None,None,None],
"8": [["1680x1050-8-1.rar"],"Skyrim wallpaper without logo","28","Miscellaneous"],
"9": [["tes v skyrim quickly wallpaper by hectrol-9-1.rar"],"Hectrol Quickly Wallpaper -1680x1050- -March 1-","28","Miscellaneous"],
"10": [None,None,None,None],
"11": [["STEP Vanilla Optimized Textures - Standard-11-1-2.7z","STEP Optimized Vanilla Textures - Performance Version 1.2-11-1-2.7z","Enhanced Distant Terrain v1.1.1-11-1-1-1.7z","The Ruffled Feather v4.4-11-4-4.7z","STEP Compilation Installer - High Res - 2292m-11-2-2-9-2m.7z","STEP Compilation Installer - Normal Res - 2292m-11-2-2-9-2m.7z"],"STEP - Skyrim Total Enhancement Project","79","Overhauls"],
"12": [["Green eye retexture-12-1.rar"],"Uguublin eye retextures","29","Models and Textures"],
"13": [["Skyrim Incremental Saver 136-13-1-36.zip"],"Skyrim Incremental Saver","39","Utilities"],
"14": [None,None,None,None],
"15": [["Savegame Manager v1_1-15-1-1.zip"],"TES V Savegame Manager","39","Utilities"],
"16": [None,None,None,None],
"17": [["CreatePlayer-17-1-0.zip"],"Quick Start","43","Save Games"],
"18": [["Main File - Caucasian races for now only-18-1-0.zip","Main file - Includes clean hands -18-1-1.zip"],"No Dirty Bodies - Caucasian Female Races","29","Models and Textures"],
"19": [["Unofficial Skyrim Patch-19-2-1-3b.7z ","SMPC Overwrite Fixes-19-1-0-5.7z"],"Unofficial Skyrim Patch","84","Patches"],
"20": [None,None,None,None],
"21": [None,None,None,None],
"22": [["OTEOYN Skyrim Music Conversion V2-22-2.7z","blah-22-2.7z","StuffOne-22-2.7z","Goo-22-2.7z"],"ON THE EDGE OF YOUR NERVES - Skyrim Music Conversion","61","Audio - SFX, Music, and Voice"],
"23": [["I Wash ver 93-23-93.zip"],"I Wash - New Skin for Skyrim","29","Models and Textures"],
"24": [["Borderless Windowed Fullscreen 1-1-24-1-1.7z"],"Borderless Windowed Fullscreen - AutoHotkey Script","24","Gameplay Effects and Changes"],
"25": [["New Moons Over Skyrim-25.7z","Deathstar over Skyrim-25-WIP.zip"],"New Moons over Skyrim","29","Models and Textures"],
"26": [["Detailed_Faces-2_00-26-2-0.7z","Detailed_Faces-2_00-Lite-26-2-0L.zip"],"Detailed Faces v2","29","Models and Textures"],
"27": [["TruePC Bundle-27-v2-0a.exe","Skyrim-Prefs INI Base-27.zip"],"TruePC by Nekroze","62","Visuals and Graphics"],
"28": [None,None,None,None],
"29": [None,None,None,None],
"30": [["No_More_Blocky_Faces-1_50-30-1-5.7z","Smoothed_Dunmer-1_40-30.7z","Merged_With_LUEF-1_00 -30.7z"],"No More Blocky Faces","29","Models and Textures"],
"31": [["nvidiaInspector_v1955-31-1.zip"],"Skyrim - NVIDIA Ambient Occlusion","45","Videos and Trailers"],
"32": [None,None,None,None],
"33": [["No Kill Moves v1_11-33-1-11.7z"],"No Kill Moves","24","Gameplay Effects and Changes"],
"34": [["FPS Limiter-34-V1-01.rar"],"FPS Limiter","39","Utilities"],
"35": [None,None,None,None],
"36": [["No Blood Splatter on your HUD-36-1-1.zip","No Blood Spurts-36-1-2.zip","No Blood-36-1-3.zip"],"No Blood","62","Visuals and Graphics"],
"37": [["Imperial Green Outfit-37-1-0.zip"],"Outfits Recolor","29","Models and Textures"],
"38": [["Nude Females - Barbie Doll-38-1-1.7z"],"Nude Females - Barbie Doll","29","Models and Textures"],
"39": [["Kaylee EyeShadow Version-39-1-1.rar","Kaylee No EyeShadow Version-39-1-1.rar"],"Kaylee Female Gamesave","43","Save Games"],
"40": [["Eldothas dunmer savegame-40-1.rar"],"Eldothas Dunmer Male Savefile","43","Save Games"],
"41": [None,None,None,None],
"42": [["Optional Red-42-1-0.7z","PSD-42-1.7z","White N Dirty-42-1-2.7z"],"Retextured female underwear","60","Clothing"],
"43": [["Proper Clean Female Body-43-1-0.zip"],"Proper Clean Female Body","29","Models and Textures"],
"44": [["SkyrimFemaleMuscleMod-44-1-0.zip","Diesel v1_4 -44-1-4.zip","Female Muscle Mod - DIESEL-44-1-3.zip","FemaleMuscleModv1_1_Chiseled-44.zip","FemaleMuscleModv1_1_Soft-44.zip","FemaleMuscleModv1_2_Chiseled-44.zip","FemaleMuscleModv1_2_Soft-44.zip","4096x4096 v1_5 Diesel-44-1-5.zip","4096x4096 v1_5 Hardcore-44-1-5.zip"],"Female Muscle Mod","29","Models and Textures"],
"45": [["Obligatory Chainmail Bikini-45-1.zip"],"Obligatory Chainmail Bikini","60","Clothing"],
"46": [None,None,None,None],
"47": [["TESV Reduced Texture-47.7z","Readme-47-1-0.7z"],"TESV Reduced Texture Pack","29","Models and Textures"],
"48": [["fix for ATI AMD video cards-48-1-0.rar"],"framerate fix for ATI cards","62","Visuals and Graphics"],
"49": [["Slower Skills-49.zip"],"Slower Skill Gain","24","Gameplay Effects and Changes"],
"50": [["Steel Sword Texture-50-1.zip"],"Alternate Steel Sword Texture","29","Models and Textures"],
"51": [["Masser Versions-51-2.rar","secunda versions-51-2.rar"],"20 Moons - replacer pack","74","Environmental"],
"52": [["Max Out All Levels-52.7z","Max Out Perks-52-1-1.7z"],"Max Out All Levels and Perks","39","Utilities"],
"53": [None,None,None,None],
"54": [["Small retexture of the Steel Great Sword-54-1.zip","Steel GreatSword Re-Texture gold version-54-1-5.zip"],"Steel GreatSword Re-Texture","29","Models and Textures"],
"55": [["less dirty female and male faces-55-2-0.zip","less dirty male faces-55-1.zip"],"Less dirty male and female faces","29","Models and Textures"],
"56": [["start with cheats savegame-56-1.rar"],"start with cheats savegame","43","Save Games"],
"57": [["steelswordtexture_v1-57-1-0.zip"],"HQ Leather Handle Steel Sword Re-Texture","55","Weapons"],
"58": [["FemaleSkins-58-1.zip","FemaleSkins_CompressedBodyTextures-58.zip"],"Improved Female Skin Textures","29","Models and Textures"],
"59": [["Steel War Axe Re-Texture-59-1.zip"],"Steel War Axe Re-texture","29","Models and Textures"],
"60": [["-Enhanced Blood Textures 3_6c NMM -60-3-6c.7z","Other Blood ESP_tweaks-60-.rar","Old Splatter Textures-60.rar","No Screen Blood-60-1.rar"],"Enhanced Blood Textures","62","Visuals and Graphics"],
"61": [["Arrow keys lockpicking-61-1.rar"],"Arrow keys Lockpicking","42","User Interface"],
"62": [["Dragon Texture Pack-62-1-0.zip"],"Improved Dragon Textures","83","Creatures"],
"63": [None,None,None,None],
"64": [["Lokir saved game-64-1.rar","Lokir saved game V2-64-2.rar"],"Lokir saved game","43","Save Games"],
"65": [None,None,None,None],
"66": [None,"Deutsche Wegweiser","29","Models and Textures"],
"67": [["Casual Clothing Retexture-67-1.7z"],"Slightly Improved Casual Clothing 01","29","Models and Textures"],
"68": [None,None,None,None],
"69": [["SkyRim Creature Alive Version 025CK-69-0-25.7z","SkyRimCreaturesProject-69.7z"],"Skyrim Creatures Alive","79","Overhauls"],
"70": [["Nude Females - XCE-70.7z","Nude Females v1-5 Photoshop Files-70.7z","Nude Females v1-5-70-1-5.7z"],"Nude Females","29","Models and Textures"],
"71": [["Twiggys Colourful Barkeeps Textures-71.rar","Optional bouncy mesh-71-1.zip"],"Twiggys Colourful Barkeeps and CHSBHC bouncing meshes","60","Clothing"],
"72": [["Grey-72-1-0.7z","Ivory-72-1-0.7z","Red-72-1-0.7z","White N Dirty-72-1-1.7z"],"Male underwear retexture","29","Models and Textures"],
"73": [["Traduction des panneaux avec normal mapping-73-1-2.zip"],"Roadsigns - French translation","29","Models and Textures"],
"74": [None,None,None,None],
"75": [["Carine Gamesave-75.rar"],"Carine Female Breton Gamesave","43","Save Games"],
"76": [["1_4 Beta-76-1-4B.zip","Alchemy Assistant 1_32-Czech Fix-76-1-32.zip"],"Alchemy Assistant","39","Utilities"],
"77": [["Nude Patch-77-1-0.zip"],"Nude Patch - Women Only","29","Models and Textures"],
"78": [None,None,None,None],
"79": [["Underworld vampire eyes-79-1.zip"],"Underworld vampire eyes","29","Models and Textures"],
"80": [["Alternate Skyrim cursors v1-80.zip","Alternate Skyrim cursors v3-80-3.zip"],"KenMOD - Alternate Skyrim cursors","42","User Interface"],
"81": [["Imperial Studded Leather Armor-81-1.7z","ImperialChainmailv1-81-1.7z"],"Imperial Chainmail Armor","54","Armour"],
"82": [["Skyrim-PC-controls-mod2-82-2-0.zip"],"Magic or items left and right hand fix","95","Bug Fixes"],
"83": [["Carteles en Espanol 2_1-83-2-1.zip","Source PSD-83-1-0.zip"],"Carteles en Espanol -spanish signs-","29","Models and Textures"],
"84": [["25 kgs x lvl-84-1-01b.rar","500 kgs-84-1-0b.rar"],"25 - 500 kgs x lvl - BETA","24","Gameplay Effects and Changes"],
"85": [["Enhanced Night Skyrim v04 Aquamarine Galaxy-85-0-4.zip","Enhanced Night Skyrim v04 Blue Galaxy-85-0-4.zip","Enhanced Night Skyrim v04 Color Galaxy-85-0-4.zip","Enhanced Night Skyrim v04 Green Galaxy-85-0-4.zip","Enhanced Night Skyrim v04 High Stars-85-0-4.zip","Enhanced Night Skyrim v04 Low Stars-85-0-4.zip","Enhanced Night Skyrim v04 Lower Stars-85-0-4.zip","Enhanced Night Skyrim v04 Medium Stars-85-0-4.zip"],"Enhanced Night Skyrim","29","Models and Textures"],
"86": [None,None,None,None],
"87": [["Commands for Alternate Skills and Perks -CAPS--87-1.zip","Unlock all perks-87-1-0.zip"],"Unlock all perks","40","Cheats and God items"],
"88": [["Numpad Bindable-88-3-2.zip","Reduced Lockpick-88-3-2.zip"],"Interface Hard Coded Key Tweaks","42","User Interface"],
"89": [["Respawn rate 1 day-89-1-1.zip","Respawn rate 2 days-89-1-1.zip","Respawn rate 3 days-89-1-1.zip","Respawn rate 4 days-89-1-1.zip","Respawn rate 7 days-89-1-1.zip","SkyrimKeyHelper_2000-88-2-0-0-0.zip","Standard Tweaks-88-3-2.zip"],"Skyrim Respawn","24","Gameplay Effects and Changes"],
"90": [["hairier texture beta-90-1.rar","hairier texture-90-1.rar"],"Hairier male texture","29","Models and Textures"],
"91": [None,None,None,None],
"92": [["Correct-PS3-Button-Icons-1_0-92.zip"],"Correct PS3 Button Icons","42","User Interface"],
"93": [None,None,None,None],
"94": [None,None,None,None],
"95": [["Andalus Font-95-2-0.rar","Centaur - Large-95.rar","Centaur Font-95-2-0.rar","Fertigo Pro Font-95-2-0.rar","Magic Cards Font-95-2-0.rar","Morpheus - Large-95.rar","Morpheus Font-95.rar","Tara Type-95-2-0.rar"],["Fertigo Pro Font-95-2-0.rar"],"Main Font Replacement","42","User Interface"],
"96": [None,None,None,None],
"97": [["Green eye recolor high-res-97-.rar"],"Highelf green eye recolor","29","Models and Textures"],
"98": [["Time on loading v5-98.zip"],"KenMOD - Time on loading screen","42","User Interface"],
"99": [["Female Clean Face v02-99-0-2.zip","Female Clean Face-99-0-1.zip"],"Human Female Clean Face","29","Models and Textures"]}
skyrim_default_categories_dat = "1|Animations|51|0\n2|Armour|54|0\n3|Sound & Music|61|0\n5|Clothing|60|0\n6|Collectables|92|0\n28|Companions|66,96|0\n7|Creatures & Mounts|83,65|0\n8|Factions|25|0\n9|Gameplay|24|0\n10|Hair|26|0\n11|Items|27,85|0\n32|Mercantile|69|0\n19|Weapons|55|11\n36|Weapon & Armour Sets|39|11\n12|Locations|22,30,70,88,89,90,91|0\n31|Landscape Changes|58|0\n4|Cities|53|12\n29|Environment|74|0\n30|Immersion|78|0\n25|Castles & Mansions|68|23\n20|Magic|75,93,94|0\n21|Models & Textures|29|0\n33|Modders resources|82|0\n13|NPCs|33|0\n14|Patches|79,84|0\n24|Bugfixes|95|0\n35|Utilities|38,39|0\n26|Cheats|40|0\n23|Player Homes|67|0\n15|Quests|35|0\n16|Races & Classes|34|0\n27|Combat|77|0\n22|Skills|73|0\n34|Stealth|76|0\n17|UI|42|0\n18|Visuals|62|0" #MO_default_categories.dat
| gpl-2.0 |
patrioticcow/MessagesForSkype | packages/win32/bundle/MessagesForSkype/modules/python/1.3.1-beta/Lib/distutils/versionpredicate.py | 397 | 5095 | """Module for parsing and testing package version predicate strings.
"""
import re
import distutils.version
import operator
re_validPackage = re.compile(r"(?i)^\s*([a-z_]\w*(?:\.[a-z_]\w*)*)(.*)")
# (package) (rest)
re_paren = re.compile(r"^\s*\((.*)\)\s*$") # (list) inside of parentheses
re_splitComparison = re.compile(r"^\s*(<=|>=|<|>|!=|==)\s*([^\s,]+)\s*$")
# (comp) (version)
def splitUp(pred):
"""Parse a single version comparison.
Return (comparison string, StrictVersion)
"""
res = re_splitComparison.match(pred)
if not res:
raise ValueError("bad package restriction syntax: %r" % pred)
comp, verStr = res.groups()
return (comp, distutils.version.StrictVersion(verStr))
compmap = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
">": operator.gt, ">=": operator.ge, "!=": operator.ne}
class VersionPredicate:
"""Parse and test package version predicates.
>>> v = VersionPredicate('pyepat.abc (>1.0, <3333.3a1, !=1555.1b3)')
The `name` attribute provides the full dotted name that is given::
>>> v.name
'pyepat.abc'
The str() of a `VersionPredicate` provides a normalized
human-readable version of the expression::
>>> print v
pyepat.abc (> 1.0, < 3333.3a1, != 1555.1b3)
The `satisfied_by()` method can be used to determine with a given
version number is included in the set described by the version
restrictions::
>>> v.satisfied_by('1.1')
True
>>> v.satisfied_by('1.4')
True
>>> v.satisfied_by('1.0')
False
>>> v.satisfied_by('4444.4')
False
>>> v.satisfied_by('1555.1b3')
False
`VersionPredicate` is flexible in accepting extra whitespace::
>>> v = VersionPredicate(' pat( == 0.1 ) ')
>>> v.name
'pat'
>>> v.satisfied_by('0.1')
True
>>> v.satisfied_by('0.2')
False
If any version numbers passed in do not conform to the
restrictions of `StrictVersion`, a `ValueError` is raised::
>>> v = VersionPredicate('p1.p2.p3.p4(>=1.0, <=1.3a1, !=1.2zb3)')
Traceback (most recent call last):
...
ValueError: invalid version number '1.2zb3'
It the module or package name given does not conform to what's
allowed as a legal module or package name, `ValueError` is
raised::
>>> v = VersionPredicate('foo-bar')
Traceback (most recent call last):
...
ValueError: expected parenthesized list: '-bar'
>>> v = VersionPredicate('foo bar (12.21)')
Traceback (most recent call last):
...
ValueError: expected parenthesized list: 'bar (12.21)'
"""
def __init__(self, versionPredicateStr):
"""Parse a version predicate string.
"""
# Fields:
# name: package name
# pred: list of (comparison string, StrictVersion)
versionPredicateStr = versionPredicateStr.strip()
if not versionPredicateStr:
raise ValueError("empty package restriction")
match = re_validPackage.match(versionPredicateStr)
if not match:
raise ValueError("bad package name in %r" % versionPredicateStr)
self.name, paren = match.groups()
paren = paren.strip()
if paren:
match = re_paren.match(paren)
if not match:
raise ValueError("expected parenthesized list: %r" % paren)
str = match.groups()[0]
self.pred = [splitUp(aPred) for aPred in str.split(",")]
if not self.pred:
raise ValueError("empty parenthesized list in %r"
% versionPredicateStr)
else:
self.pred = []
def __str__(self):
if self.pred:
seq = [cond + " " + str(ver) for cond, ver in self.pred]
return self.name + " (" + ", ".join(seq) + ")"
else:
return self.name
def satisfied_by(self, version):
"""True if version is compatible with all the predicates in self.
The parameter version must be acceptable to the StrictVersion
constructor. It may be either a string or StrictVersion.
"""
for cond, ver in self.pred:
if not compmap[cond](version, ver):
return False
return True
_provision_rx = None
def split_provision(value):
"""Return the name and optional version number of a provision.
The version number, if given, will be returned as a `StrictVersion`
instance, otherwise it will be `None`.
>>> split_provision('mypkg')
('mypkg', None)
>>> split_provision(' mypkg( 1.2 ) ')
('mypkg', StrictVersion ('1.2'))
"""
global _provision_rx
if _provision_rx is None:
_provision_rx = re.compile(
"([a-zA-Z_]\w*(?:\.[a-zA-Z_]\w*)*)(?:\s*\(\s*([^)\s]+)\s*\))?$")
value = value.strip()
m = _provision_rx.match(value)
if not m:
raise ValueError("illegal provides specification: %r" % value)
ver = m.group(2) or None
if ver:
ver = distutils.version.StrictVersion(ver)
return m.group(1), ver
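The predicate-matching machinery above (a regex split feeding the `compmap` operator dispatch) can be sketched in a self-contained form. Here a simplified dotted-integer parser stands in for `distutils.version.StrictVersion`, and the helper names are illustrative; the real module additionally validates package names and handles pre-release suffixes such as `3333.3a1`, which this sketch does not:

```python
import operator
import re

# Comparison operators keyed by their textual form, as in compmap above.
COMPMAP = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
           ">": operator.gt, ">=": operator.ge, "!=": operator.ne}

# Split one predicate like " >= 1.2 " into its operator and version parts.
_SPLIT = re.compile(r"^\s*(<=|>=|<|>|!=|==)\s*([^\s,]+)\s*$")

def parse_version(s):
    # Simplistic numeric dotted-version parser (stand-in for StrictVersion).
    return tuple(int(part) for part in s.split("."))

def satisfied_by(version, predicate_list):
    """True if `version` meets every comma-separated predicate."""
    ver = parse_version(version)
    for pred in predicate_list.split(","):
        comp, target = _SPLIT.match(pred).groups()
        if not COMPMAP[comp](ver, parse_version(target)):
            return False
    return True

print(satisfied_by("1.1", ">1.0, <2.0"))  # True
print(satisfied_by("1.0", ">1.0"))        # False
```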
| mit |
Edraak/circleci-edx-platform | lms/djangoapps/ccx/tests/test_utils.py | 27 | 1683 | """
test utils
"""
from nose.plugins.attrib import attr
from lms.djangoapps.ccx.tests.factories import CcxFactory
from student.roles import CourseCcxCoachRole
from student.tests.factories import (
AdminFactory,
)
from xmodule.modulestore.tests.django_utils import (
ModuleStoreTestCase,
TEST_DATA_SPLIT_MODULESTORE)
from xmodule.modulestore.tests.factories import CourseFactory
from ccx_keys.locator import CCXLocator
@attr('shard_1')
class TestGetCCXFromCCXLocator(ModuleStoreTestCase):
"""Verify that get_ccx_from_ccx_locator functions properly"""
MODULESTORE = TEST_DATA_SPLIT_MODULESTORE
def setUp(self):
"""Set up a course, coach, ccx and user"""
super(TestGetCCXFromCCXLocator, self).setUp()
self.course = CourseFactory.create()
coach = self.coach = AdminFactory.create()
role = CourseCcxCoachRole(self.course.id)
role.add_users(coach)
def call_fut(self, course_id):
"""call the function under test in this test case"""
from lms.djangoapps.ccx.utils import get_ccx_from_ccx_locator
return get_ccx_from_ccx_locator(course_id)
def test_non_ccx_locator(self):
"""verify that nothing is returned if locator is not a ccx locator
"""
result = self.call_fut(self.course.id)
self.assertEqual(result, None)
def test_ccx_locator(self):
"""verify that the ccx is retuned if using a ccx locator
"""
ccx = CcxFactory(course_id=self.course.id, coach=self.coach)
course_key = CCXLocator.from_course_locator(self.course.id, ccx.id)
result = self.call_fut(course_key)
self.assertEqual(result, ccx)
| agpl-3.0 |
paulmadore/Eric-IDE | 6-6.0.9/eric/ThirdParty/CharDet/chardet/big5freq.py | 3133 | 82594 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
# Big5 frequency table
# by Taiwan's Mandarin Promotion Council
# <http://www.edu.tw:81/mandr/>
#
# 128 --> 0.42261
# 256 --> 0.57851
# 512 --> 0.74851
# 1024 --> 0.89384
# 2048 --> 0.97583
#
# Ideal Distribution Ratio = 0.74851/(1-0.74851) =2.98
# Random Distribution Ration = 512/(5401-512)=0.105
#
# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR
BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75
#Char to FreqOrder table
BIG5_TABLE_SIZE = 5376
Big5CharToFreqOrder = (
1,1801,1506, 255,1431, 198, 9, 82, 6,5008, 177, 202,3681,1256,2821, 110, # 16
3814, 33,3274, 261, 76, 44,2114, 16,2946,2187,1176, 659,3971, 26,3451,2653, # 32
1198,3972,3350,4202, 410,2215, 302, 590, 361,1964, 8, 204, 58,4510,5009,1932, # 48
63,5010,5011, 317,1614, 75, 222, 159,4203,2417,1480,5012,3555,3091, 224,2822, # 64
3682, 3, 10,3973,1471, 29,2787,1135,2866,1940, 873, 130,3275,1123, 312,5013, # 80
4511,2052, 507, 252, 682,5014, 142,1915, 124, 206,2947, 34,3556,3204, 64, 604, # 96
5015,2501,1977,1978, 155,1991, 645, 641,1606,5016,3452, 337, 72, 406,5017, 80, # 112
630, 238,3205,1509, 263, 939,1092,2654, 756,1440,1094,3453, 449, 69,2987, 591, # 128
179,2096, 471, 115,2035,1844, 60, 50,2988, 134, 806,1869, 734,2036,3454, 180, # 144
995,1607, 156, 537,2907, 688,5018, 319,1305, 779,2145, 514,2379, 298,4512, 359, # 160
2502, 90,2716,1338, 663, 11, 906,1099,2553, 20,2441, 182, 532,1716,5019, 732, # 176
1376,4204,1311,1420,3206, 25,2317,1056, 113, 399, 382,1950, 242,3455,2474, 529, # 192
3276, 475,1447,3683,5020, 117, 21, 656, 810,1297,2300,2334,3557,5021, 126,4205, # 208
706, 456, 150, 613,4513, 71,1118,2037,4206, 145,3092, 85, 835, 486,2115,1246, # 224
1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,5022,2128,2359, 347,3815, 221, # 240
3558,3135,5023,1956,1153,4207, 83, 296,1199,3093, 192, 624, 93,5024, 822,1898, # 256
2823,3136, 795,2065, 991,1554,1542,1592, 27, 43,2867, 859, 139,1456, 860,4514, # 272
437, 712,3974, 164,2397,3137, 695, 211,3037,2097, 195,3975,1608,3559,3560,3684, # 288
3976, 234, 811,2989,2098,3977,2233,1441,3561,1615,2380, 668,2077,1638, 305, 228, # 304
1664,4515, 467, 415,5025, 262,2099,1593, 239, 108, 300, 200,1033, 512,1247,2078, # 320
5026,5027,2176,3207,3685,2682, 593, 845,1062,3277, 88,1723,2038,3978,1951, 212, # 336
266, 152, 149, 468,1899,4208,4516, 77, 187,5028,3038, 37, 5,2990,5029,3979, # 352
5030,5031, 39,2524,4517,2908,3208,2079, 55, 148, 74,4518, 545, 483,1474,1029, # 368
1665, 217,1870,1531,3138,1104,2655,4209, 24, 172,3562, 900,3980,3563,3564,4519, # 384
32,1408,2824,1312, 329, 487,2360,2251,2717, 784,2683, 4,3039,3351,1427,1789, # 400
188, 109, 499,5032,3686,1717,1790, 888,1217,3040,4520,5033,3565,5034,3352,1520, # 416
3687,3981, 196,1034, 775,5035,5036, 929,1816, 249, 439, 38,5037,1063,5038, 794, # 432
3982,1435,2301, 46, 178,3278,2066,5039,2381,5040, 214,1709,4521, 804, 35, 707, # 448
324,3688,1601,2554, 140, 459,4210,5041,5042,1365, 839, 272, 978,2262,2580,3456, # 464
2129,1363,3689,1423, 697, 100,3094, 48, 70,1231, 495,3139,2196,5043,1294,5044, # 480
2080, 462, 586,1042,3279, 853, 256, 988, 185,2382,3457,1698, 434,1084,5045,3458, # 496
314,2625,2788,4522,2335,2336, 569,2285, 637,1817,2525, 757,1162,1879,1616,3459, # 512
287,1577,2116, 768,4523,1671,2868,3566,2526,1321,3816, 909,2418,5046,4211, 933, # 528
3817,4212,2053,2361,1222,4524, 765,2419,1322, 786,4525,5047,1920,1462,1677,2909, # 544
1699,5048,4526,1424,2442,3140,3690,2600,3353,1775,1941,3460,3983,4213, 309,1369, # 560
1130,2825, 364,2234,1653,1299,3984,3567,3985,3986,2656, 525,1085,3041, 902,2001, # 576
1475, 964,4527, 421,1845,1415,1057,2286, 940,1364,3141, 376,4528,4529,1381, 7, # 592
2527, 983,2383, 336,1710,2684,1846, 321,3461, 559,1131,3042,2752,1809,1132,1313, # 608
265,1481,1858,5049, 352,1203,2826,3280, 167,1089, 420,2827, 776, 792,1724,3568, # 624
4214,2443,3281,5050,4215,5051, 446, 229, 333,2753, 901,3818,1200,1557,4530,2657, # 640
1921, 395,2754,2685,3819,4216,1836, 125, 916,3209,2626,4531,5052,5053,3820,5054, # 656
5055,5056,4532,3142,3691,1133,2555,1757,3462,1510,2318,1409,3569,5057,2146, 438, # 672
2601,2910,2384,3354,1068, 958,3043, 461, 311,2869,2686,4217,1916,3210,4218,1979, # 688
383, 750,2755,2627,4219, 274, 539, 385,1278,1442,5058,1154,1965, 384, 561, 210, # 704
98,1295,2556,3570,5059,1711,2420,1482,3463,3987,2911,1257, 129,5060,3821, 642, # 720
523,2789,2790,2658,5061, 141,2235,1333, 68, 176, 441, 876, 907,4220, 603,2602, # 736
710, 171,3464, 404, 549, 18,3143,2398,1410,3692,1666,5062,3571,4533,2912,4534, # 752
5063,2991, 368,5064, 146, 366, 99, 871,3693,1543, 748, 807,1586,1185, 22,2263, # 768
379,3822,3211,5065,3212, 505,1942,2628,1992,1382,2319,5066, 380,2362, 218, 702, # 784
1818,1248,3465,3044,3572,3355,3282,5067,2992,3694, 930,3283,3823,5068, 59,5069, # 800
585, 601,4221, 497,3466,1112,1314,4535,1802,5070,1223,1472,2177,5071, 749,1837, # 816
690,1900,3824,1773,3988,1476, 429,1043,1791,2236,2117, 917,4222, 447,1086,1629, # 832
5072, 556,5073,5074,2021,1654, 844,1090, 105, 550, 966,1758,2828,1008,1783, 686, # 848
1095,5075,2287, 793,1602,5076,3573,2603,4536,4223,2948,2302,4537,3825, 980,2503, # 864
544, 353, 527,4538, 908,2687,2913,5077, 381,2629,1943,1348,5078,1341,1252, 560, # 880
3095,5079,3467,2870,5080,2054, 973, 886,2081, 143,4539,5081,5082, 157,3989, 496, # 896
4224, 57, 840, 540,2039,4540,4541,3468,2118,1445, 970,2264,1748,1966,2082,4225, # 912
3144,1234,1776,3284,2829,3695, 773,1206,2130,1066,2040,1326,3990,1738,1725,4226, # 928
279,3145, 51,1544,2604, 423,1578,2131,2067, 173,4542,1880,5083,5084,1583, 264, # 944
610,3696,4543,2444, 280, 154,5085,5086,5087,1739, 338,1282,3096, 693,2871,1411, # 960
1074,3826,2445,5088,4544,5089,5090,1240, 952,2399,5091,2914,1538,2688, 685,1483, # 976
4227,2475,1436, 953,4228,2055,4545, 671,2400, 79,4229,2446,3285, 608, 567,2689, # 992
3469,4230,4231,1691, 393,1261,1792,2401,5092,4546,5093,5094,5095,5096,1383,1672, # 1008
3827,3213,1464, 522,1119, 661,1150, 216, 675,4547,3991,1432,3574, 609,4548,2690, # 1024
2402,5097,5098,5099,4232,3045, 0,5100,2476, 315, 231,2447, 301,3356,4549,2385, # 1040
5101, 233,4233,3697,1819,4550,4551,5102, 96,1777,1315,2083,5103, 257,5104,1810, # 1056
3698,2718,1139,1820,4234,2022,1124,2164,2791,1778,2659,5105,3097, 363,1655,3214, # 1072
5106,2993,5107,5108,5109,3992,1567,3993, 718, 103,3215, 849,1443, 341,3357,2949, # 1088
1484,5110,1712, 127, 67, 339,4235,2403, 679,1412, 821,5111,5112, 834, 738, 351, # 1104
2994,2147, 846, 235,1497,1881, 418,1993,3828,2719, 186,1100,2148,2756,3575,1545, # 1120
1355,2950,2872,1377, 583,3994,4236,2581,2995,5113,1298,3699,1078,2557,3700,2363, # 1136
78,3829,3830, 267,1289,2100,2002,1594,4237, 348, 369,1274,2197,2178,1838,4552, # 1152
1821,2830,3701,2757,2288,2003,4553,2951,2758, 144,3358, 882,4554,3995,2759,3470, # 1168
4555,2915,5114,4238,1726, 320,5115,3996,3046, 788,2996,5116,2831,1774,1327,2873, # 1184
3997,2832,5117,1306,4556,2004,1700,3831,3576,2364,2660, 787,2023, 506, 824,3702, # 1200
534, 323,4557,1044,3359,2024,1901, 946,3471,5118,1779,1500,1678,5119,1882,4558, # 1216
165, 243,4559,3703,2528, 123, 683,4239, 764,4560, 36,3998,1793, 589,2916, 816, # 1232
626,1667,3047,2237,1639,1555,1622,3832,3999,5120,4000,2874,1370,1228,1933, 891, # 1248
2084,2917, 304,4240,5121, 292,2997,2720,3577, 691,2101,4241,1115,4561, 118, 662, # 1264
5122, 611,1156, 854,2386,1316,2875, 2, 386, 515,2918,5123,5124,3286, 868,2238, # 1280
1486, 855,2661, 785,2216,3048,5125,1040,3216,3578,5126,3146, 448,5127,1525,5128, # 1296
2165,4562,5129,3833,5130,4242,2833,3579,3147, 503, 818,4001,3148,1568, 814, 676, # 1312
1444, 306,1749,5131,3834,1416,1030, 197,1428, 805,2834,1501,4563,5132,5133,5134, # 1328
1994,5135,4564,5136,5137,2198, 13,2792,3704,2998,3149,1229,1917,5138,3835,2132, # 1344
5139,4243,4565,2404,3580,5140,2217,1511,1727,1120,5141,5142, 646,3836,2448, 307, # 1360
5143,5144,1595,3217,5145,5146,5147,3705,1113,1356,4002,1465,2529,2530,5148, 519, # 1376
5149, 128,2133, 92,2289,1980,5150,4003,1512, 342,3150,2199,5151,2793,2218,1981, # 1392
3360,4244, 290,1656,1317, 789, 827,2365,5152,3837,4566, 562, 581,4004,5153, 401, # 1408
4567,2252, 94,4568,5154,1399,2794,5155,1463,2025,4569,3218,1944,5156, 828,1105, # 1424
4245,1262,1394,5157,4246, 605,4570,5158,1784,2876,5159,2835, 819,2102, 578,2200, # 1440
2952,5160,1502, 436,3287,4247,3288,2836,4005,2919,3472,3473,5161,2721,2320,5162, # 1456
5163,2337,2068, 23,4571, 193, 826,3838,2103, 699,1630,4248,3098, 390,1794,1064, # 1472
3581,5164,1579,3099,3100,1400,5165,4249,1839,1640,2877,5166,4572,4573, 137,4250, # 1488
598,3101,1967, 780, 104, 974,2953,5167, 278, 899, 253, 402, 572, 504, 493,1339, # 1504
5168,4006,1275,4574,2582,2558,5169,3706,3049,3102,2253, 565,1334,2722, 863, 41, # 1520
5170,5171,4575,5172,1657,2338, 19, 463,2760,4251, 606,5173,2999,3289,1087,2085, # 1536
1323,2662,3000,5174,1631,1623,1750,4252,2691,5175,2878, 791,2723,2663,2339, 232, # 1552
2421,5176,3001,1498,5177,2664,2630, 755,1366,3707,3290,3151,2026,1609, 119,1918, # 1568
3474, 862,1026,4253,5178,4007,3839,4576,4008,4577,2265,1952,2477,5179,1125, 817, # 1584
4254,4255,4009,1513,1766,2041,1487,4256,3050,3291,2837,3840,3152,5180,5181,1507, # 1600
5182,2692, 733, 40,1632,1106,2879, 345,4257, 841,2531, 230,4578,3002,1847,3292, # 1616
3475,5183,1263, 986,3476,5184, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562, # 1632
4010,4011,2954, 967,2761,2665,1349, 592,2134,1692,3361,3003,1995,4258,1679,4012, # 1648
1902,2188,5185, 739,3708,2724,1296,1290,5186,4259,2201,2202,1922,1563,2605,2559, # 1664
1871,2762,3004,5187, 435,5188, 343,1108, 596, 17,1751,4579,2239,3477,3709,5189, # 1680
4580, 294,3582,2955,1693, 477, 979, 281,2042,3583, 643,2043,3710,2631,2795,2266, # 1696
1031,2340,2135,2303,3584,4581, 367,1249,2560,5190,3585,5191,4582,1283,3362,2005, # 1712
240,1762,3363,4583,4584, 836,1069,3153, 474,5192,2149,2532, 268,3586,5193,3219, # 1728
1521,1284,5194,1658,1546,4260,5195,3587,3588,5196,4261,3364,2693,1685,4262, 961, # 1744
1673,2632, 190,2006,2203,3841,4585,4586,5197, 570,2504,3711,1490,5198,4587,2633, # 1760
3293,1957,4588, 584,1514, 396,1045,1945,5199,4589,1968,2449,5200,5201,4590,4013, # 1776
619,5202,3154,3294, 215,2007,2796,2561,3220,4591,3221,4592, 763,4263,3842,4593, # 1792
5203,5204,1958,1767,2956,3365,3712,1174, 452,1477,4594,3366,3155,5205,2838,1253, # 1808
2387,2189,1091,2290,4264, 492,5206, 638,1169,1825,2136,1752,4014, 648, 926,1021, # 1824
1324,4595, 520,4596, 997, 847,1007, 892,4597,3843,2267,1872,3713,2405,1785,4598, # 1840
1953,2957,3103,3222,1728,4265,2044,3714,4599,2008,1701,3156,1551, 30,2268,4266, # 1856
5207,2027,4600,3589,5208, 501,5209,4267, 594,3478,2166,1822,3590,3479,3591,3223, # 1872
829,2839,4268,5210,1680,3157,1225,4269,5211,3295,4601,4270,3158,2341,5212,4602, # 1888
4271,5213,4015,4016,5214,1848,2388,2606,3367,5215,4603, 374,4017, 652,4272,4273, # 1904
375,1140, 798,5216,5217,5218,2366,4604,2269, 546,1659, 138,3051,2450,4605,5219, # 1920
2254, 612,1849, 910, 796,3844,1740,1371, 825,3845,3846,5220,2920,2562,5221, 692, # 1936
444,3052,2634, 801,4606,4274,5222,1491, 244,1053,3053,4275,4276, 340,5223,4018, # 1952
1041,3005, 293,1168, 87,1357,5224,1539, 959,5225,2240, 721, 694,4277,3847, 219, # 1968
1478, 644,1417,3368,2666,1413,1401,1335,1389,4019,5226,5227,3006,2367,3159,1826, # 1984
730,1515, 184,2840, 66,4607,5228,1660,2958, 246,3369, 378,1457, 226,3480, 975, # 2000
4020,2959,1264,3592, 674, 696,5229, 163,5230,1141,2422,2167, 713,3593,3370,4608, # 2016
4021,5231,5232,1186, 15,5233,1079,1070,5234,1522,3224,3594, 276,1050,2725, 758, # 2032
1126, 653,2960,3296,5235,2342, 889,3595,4022,3104,3007, 903,1250,4609,4023,3481, # 2048
3596,1342,1681,1718, 766,3297, 286, 89,2961,3715,5236,1713,5237,2607,3371,3008, # 2064
5238,2962,2219,3225,2880,5239,4610,2505,2533, 181, 387,1075,4024, 731,2190,3372, # 2080
5240,3298, 310, 313,3482,2304, 770,4278, 54,3054, 189,4611,3105,3848,4025,5241, # 2096
1230,1617,1850, 355,3597,4279,4612,3373, 111,4280,3716,1350,3160,3483,3055,4281, # 2112
2150,3299,3598,5242,2797,4026,4027,3009, 722,2009,5243,1071, 247,1207,2343,2478, # 2128
1378,4613,2010, 864,1437,1214,4614, 373,3849,1142,2220, 667,4615, 442,2763,2563, # 2144
3850,4028,1969,4282,3300,1840, 837, 170,1107, 934,1336,1883,5244,5245,2119,4283, # 2160
2841, 743,1569,5246,4616,4284, 582,2389,1418,3484,5247,1803,5248, 357,1395,1729, # 2176
3717,3301,2423,1564,2241,5249,3106,3851,1633,4617,1114,2086,4285,1532,5250, 482, # 2192
2451,4618,5251,5252,1492, 833,1466,5253,2726,3599,1641,2842,5254,1526,1272,3718, # 2208
4286,1686,1795, 416,2564,1903,1954,1804,5255,3852,2798,3853,1159,2321,5256,2881, # 2224
4619,1610,1584,3056,2424,2764, 443,3302,1163,3161,5257,5258,4029,5259,4287,2506, # 2240
3057,4620,4030,3162,2104,1647,3600,2011,1873,4288,5260,4289, 431,3485,5261, 250, # 2256
97, 81,4290,5262,1648,1851,1558, 160, 848,5263, 866, 740,1694,5264,2204,2843, # 2272
3226,4291,4621,3719,1687, 950,2479, 426, 469,3227,3720,3721,4031,5265,5266,1188, # 2288
424,1996, 861,3601,4292,3854,2205,2694, 168,1235,3602,4293,5267,2087,1674,4622, # 2304
3374,3303, 220,2565,1009,5268,3855, 670,3010, 332,1208, 717,5269,5270,3603,2452, # 2320
4032,3375,5271, 513,5272,1209,2882,3376,3163,4623,1080,5273,5274,5275,5276,2534, # 2336
3722,3604, 815,1587,4033,4034,5277,3605,3486,3856,1254,4624,1328,3058,1390,4035, # 2352
1741,4036,3857,4037,5278, 236,3858,2453,3304,5279,5280,3723,3859,1273,3860,4625, # 2368
5281, 308,5282,4626, 245,4627,1852,2480,1307,2583, 430, 715,2137,2454,5283, 270, # 2384
199,2883,4038,5284,3606,2727,1753, 761,1754, 725,1661,1841,4628,3487,3724,5285, # 2400
5286, 587, 14,3305, 227,2608, 326, 480,2270, 943,2765,3607, 291, 650,1884,5287, # 2416
1702,1226, 102,1547, 62,3488, 904,4629,3489,1164,4294,5288,5289,1224,1548,2766, # 2432
391, 498,1493,5290,1386,1419,5291,2056,1177,4630, 813, 880,1081,2368, 566,1145, # 2448
4631,2291,1001,1035,2566,2609,2242, 394,1286,5292,5293,2069,5294, 86,1494,1730, # 2464
4039, 491,1588, 745, 897,2963, 843,3377,4040,2767,2884,3306,1768, 998,2221,2070, # 2480
397,1827,1195,1970,3725,3011,3378, 284,5295,3861,2507,2138,2120,1904,5296,4041, # 2496
2151,4042,4295,1036,3490,1905, 114,2567,4296, 209,1527,5297,5298,2964,2844,2635, # 2512
2390,2728,3164, 812,2568,5299,3307,5300,1559, 737,1885,3726,1210, 885, 28,2695, # 2528
3608,3862,5301,4297,1004,1780,4632,5302, 346,1982,2222,2696,4633,3863,1742, 797, # 2544
1642,4043,1934,1072,1384,2152, 896,4044,3308,3727,3228,2885,3609,5303,2569,1959, # 2560
4634,2455,1786,5304,5305,5306,4045,4298,1005,1308,3728,4299,2729,4635,4636,1528, # 2576
2610, 161,1178,4300,1983, 987,4637,1101,4301, 631,4046,1157,3229,2425,1343,1241, # 2592
1016,2243,2570, 372, 877,2344,2508,1160, 555,1935, 911,4047,5307, 466,1170, 169, # 2608
1051,2921,2697,3729,2481,3012,1182,2012,2571,1251,2636,5308, 992,2345,3491,1540, # 2624
2730,1201,2071,2406,1997,2482,5309,4638, 528,1923,2191,1503,1874,1570,2369,3379, # 2640
3309,5310, 557,1073,5311,1828,3492,2088,2271,3165,3059,3107, 767,3108,2799,4639, # 2656
1006,4302,4640,2346,1267,2179,3730,3230, 778,4048,3231,2731,1597,2667,5312,4641, # 2672
5313,3493,5314,5315,5316,3310,2698,1433,3311, 131, 95,1504,4049, 723,4303,3166, # 2688
1842,3610,2768,2192,4050,2028,2105,3731,5317,3013,4051,1218,5318,3380,3232,4052, # 2704
4304,2584, 248,1634,3864, 912,5319,2845,3732,3060,3865, 654, 53,5320,3014,5321, # 2720
1688,4642, 777,3494,1032,4053,1425,5322, 191, 820,2121,2846, 971,4643, 931,3233, # 2736
135, 664, 783,3866,1998, 772,2922,1936,4054,3867,4644,2923,3234, 282,2732, 640, # 2752
1372,3495,1127, 922, 325,3381,5323,5324, 711,2045,5325,5326,4055,2223,2800,1937, # 2768
4056,3382,2224,2255,3868,2305,5327,4645,3869,1258,3312,4057,3235,2139,2965,4058, # 2784
4059,5328,2225, 258,3236,4646, 101,1227,5329,3313,1755,5330,1391,3314,5331,2924, # 2800
2057, 893,5332,5333,5334,1402,4305,2347,5335,5336,3237,3611,5337,5338, 878,1325, # 2816
1781,2801,4647, 259,1385,2585, 744,1183,2272,4648,5339,4060,2509,5340, 684,1024, # 2832
4306,5341, 472,3612,3496,1165,3315,4061,4062, 322,2153, 881, 455,1695,1152,1340, # 2848
660, 554,2154,4649,1058,4650,4307, 830,1065,3383,4063,4651,1924,5342,1703,1919, # 2864
5343, 932,2273, 122,5344,4652, 947, 677,5345,3870,2637, 297,1906,1925,2274,4653, # 2880
2322,3316,5346,5347,4308,5348,4309, 84,4310, 112, 989,5349, 547,1059,4064, 701, # 2896
3613,1019,5350,4311,5351,3497, 942, 639, 457,2306,2456, 993,2966, 407, 851, 494, # 2912
4654,3384, 927,5352,1237,5353,2426,3385, 573,4312, 680, 921,2925,1279,1875, 285, # 2928
790,1448,1984, 719,2168,5354,5355,4655,4065,4066,1649,5356,1541, 563,5357,1077, # 2944
5358,3386,3061,3498, 511,3015,4067,4068,3733,4069,1268,2572,3387,3238,4656,4657, # 2960
5359, 535,1048,1276,1189,2926,2029,3167,1438,1373,2847,2967,1134,2013,5360,4313, # 2976
1238,2586,3109,1259,5361, 700,5362,2968,3168,3734,4314,5363,4315,1146,1876,1907, # 2992
4658,2611,4070, 781,2427, 132,1589, 203, 147, 273,2802,2407, 898,1787,2155,4071, # 3008
4072,5364,3871,2803,5365,5366,4659,4660,5367,3239,5368,1635,3872, 965,5369,1805, # 3024
2699,1516,3614,1121,1082,1329,3317,4073,1449,3873, 65,1128,2848,2927,2769,1590, # 3040
3874,5370,5371, 12,2668, 45, 976,2587,3169,4661, 517,2535,1013,1037,3240,5372, # 3056
3875,2849,5373,3876,5374,3499,5375,2612, 614,1999,2323,3877,3110,2733,2638,5376, # 3072
2588,4316, 599,1269,5377,1811,3735,5378,2700,3111, 759,1060, 489,1806,3388,3318, # 3088
1358,5379,5380,2391,1387,1215,2639,2256, 490,5381,5382,4317,1759,2392,2348,5383, # 3104
4662,3878,1908,4074,2640,1807,3241,4663,3500,3319,2770,2349, 874,5384,5385,3501, # 3120
3736,1859, 91,2928,3737,3062,3879,4664,5386,3170,4075,2669,5387,3502,1202,1403, # 3136
3880,2969,2536,1517,2510,4665,3503,2511,5388,4666,5389,2701,1886,1495,1731,4076, # 3152
2370,4667,5390,2030,5391,5392,4077,2702,1216, 237,2589,4318,2324,4078,3881,4668, # 3168
4669,2703,3615,3504, 445,4670,5393,5394,5395,5396,2771, 61,4079,3738,1823,4080, # 3184
5397, 687,2046, 935, 925, 405,2670, 703,1096,1860,2734,4671,4081,1877,1367,2704, # 3200
3389, 918,2106,1782,2483, 334,3320,1611,1093,4672, 564,3171,3505,3739,3390, 945, # 3216
2641,2058,4673,5398,1926, 872,4319,5399,3506,2705,3112, 349,4320,3740,4082,4674, # 3232
3882,4321,3741,2156,4083,4675,4676,4322,4677,2408,2047, 782,4084, 400, 251,4323, # 3248
1624,5400,5401, 277,3742, 299,1265, 476,1191,3883,2122,4324,4325,1109, 205,5402, # 3264
2590,1000,2157,3616,1861,5403,5404,5405,4678,5406,4679,2573, 107,2484,2158,4085, # 3280
3507,3172,5407,1533, 541,1301, 158, 753,4326,2886,3617,5408,1696, 370,1088,4327, # 3296
4680,3618, 579, 327, 440, 162,2244, 269,1938,1374,3508, 968,3063, 56,1396,3113, # 3312
2107,3321,3391,5409,1927,2159,4681,3016,5410,3619,5411,5412,3743,4682,2485,5413, # 3328
2804,5414,1650,4683,5415,2613,5416,5417,4086,2671,3392,1149,3393,4087,3884,4088, # 3344
5418,1076, 49,5419, 951,3242,3322,3323, 450,2850, 920,5420,1812,2805,2371,4328, # 3360
1909,1138,2372,3885,3509,5421,3243,4684,1910,1147,1518,2428,4685,3886,5422,4686, # 3376
2393,2614, 260,1796,3244,5423,5424,3887,3324, 708,5425,3620,1704,5426,3621,1351, # 3392
1618,3394,3017,1887, 944,4329,3395,4330,3064,3396,4331,5427,3744, 422, 413,1714, # 3408
3325, 500,2059,2350,4332,2486,5428,1344,1911, 954,5429,1668,5430,5431,4089,2409, # 3424
4333,3622,3888,4334,5432,2307,1318,2512,3114, 133,3115,2887,4687, 629, 31,2851, # 3440
2706,3889,4688, 850, 949,4689,4090,2970,1732,2089,4335,1496,1853,5433,4091, 620, # 3456
3245, 981,1242,3745,3397,1619,3746,1643,3326,2140,2457,1971,1719,3510,2169,5434, # 3472
3246,5435,5436,3398,1829,5437,1277,4690,1565,2048,5438,1636,3623,3116,5439, 869, # 3488
2852, 655,3890,3891,3117,4092,3018,3892,1310,3624,4691,5440,5441,5442,1733, 558, # 3504
4692,3747, 335,1549,3065,1756,4336,3748,1946,3511,1830,1291,1192, 470,2735,2108, # 3520
2806, 913,1054,4093,5443,1027,5444,3066,4094,4693, 982,2672,3399,3173,3512,3247, # 3536
3248,1947,2807,5445, 571,4694,5446,1831,5447,3625,2591,1523,2429,5448,2090, 984, # 3552
4695,3749,1960,5449,3750, 852, 923,2808,3513,3751, 969,1519, 999,2049,2325,1705, # 3568
5450,3118, 615,1662, 151, 597,4095,2410,2326,1049, 275,4696,3752,4337, 568,3753, # 3584
3626,2487,4338,3754,5451,2430,2275, 409,3249,5452,1566,2888,3514,1002, 769,2853, # 3600
194,2091,3174,3755,2226,3327,4339, 628,1505,5453,5454,1763,2180,3019,4096, 521, # 3616
1161,2592,1788,2206,2411,4697,4097,1625,4340,4341, 412, 42,3119, 464,5455,2642, # 3632
4698,3400,1760,1571,2889,3515,2537,1219,2207,3893,2643,2141,2373,4699,4700,3328, # 3648
1651,3401,3627,5456,5457,3628,2488,3516,5458,3756,5459,5460,2276,2092, 460,5461, # 3664
4701,5462,3020, 962, 588,3629, 289,3250,2644,1116, 52,5463,3067,1797,5464,5465, # 3680
5466,1467,5467,1598,1143,3757,4342,1985,1734,1067,4702,1280,3402, 465,4703,1572, # 3696
510,5468,1928,2245,1813,1644,3630,5469,4704,3758,5470,5471,2673,1573,1534,5472, # 3712
5473, 536,1808,1761,3517,3894,3175,2645,5474,5475,5476,4705,3518,2929,1912,2809, # 3728
5477,3329,1122, 377,3251,5478, 360,5479,5480,4343,1529, 551,5481,2060,3759,1769, # 3744
2431,5482,2930,4344,3330,3120,2327,2109,2031,4706,1404, 136,1468,1479, 672,1171, # 3760
3252,2308, 271,3176,5483,2772,5484,2050, 678,2736, 865,1948,4707,5485,2014,4098, # 3776
2971,5486,2737,2227,1397,3068,3760,4708,4709,1735,2931,3403,3631,5487,3895, 509, # 3792
2854,2458,2890,3896,5488,5489,3177,3178,4710,4345,2538,4711,2309,1166,1010, 552, # 3808
681,1888,5490,5491,2972,2973,4099,1287,1596,1862,3179, 358, 453, 736, 175, 478, # 3824
1117, 905,1167,1097,5492,1854,1530,5493,1706,5494,2181,3519,2292,3761,3520,3632, # 3840
4346,2093,4347,5495,3404,1193,2489,4348,1458,2193,2208,1863,1889,1421,3331,2932, # 3856
3069,2182,3521, 595,2123,5496,4100,5497,5498,4349,1707,2646, 223,3762,1359, 751, # 3872
3121, 183,3522,5499,2810,3021, 419,2374, 633, 704,3897,2394, 241,5500,5501,5502, # 3888
838,3022,3763,2277,2773,2459,3898,1939,2051,4101,1309,3122,2246,1181,5503,1136, # 3904
2209,3899,2375,1446,4350,2310,4712,5504,5505,4351,1055,2615, 484,3764,5506,4102, # 3920
625,4352,2278,3405,1499,4353,4103,5507,4104,4354,3253,2279,2280,3523,5508,5509, # 3936
2774, 808,2616,3765,3406,4105,4355,3123,2539, 526,3407,3900,4356, 955,5510,1620, # 3952
4357,2647,2432,5511,1429,3766,1669,1832, 994, 928,5512,3633,1260,5513,5514,5515, # 3968
1949,2293, 741,2933,1626,4358,2738,2460, 867,1184, 362,3408,1392,5516,5517,4106, # 3984
4359,1770,1736,3254,2934,4713,4714,1929,2707,1459,1158,5518,3070,3409,2891,1292, # 4000
1930,2513,2855,3767,1986,1187,2072,2015,2617,4360,5519,2574,2514,2170,3768,2490, # 4016
3332,5520,3769,4715,5521,5522, 666,1003,3023,1022,3634,4361,5523,4716,1814,2257, # 4032
574,3901,1603, 295,1535, 705,3902,4362, 283, 858, 417,5524,5525,3255,4717,4718, # 4048
3071,1220,1890,1046,2281,2461,4107,1393,1599, 689,2575, 388,4363,5526,2491, 802, # 4064
5527,2811,3903,2061,1405,2258,5528,4719,3904,2110,1052,1345,3256,1585,5529, 809, # 4080
5530,5531,5532, 575,2739,3524, 956,1552,1469,1144,2328,5533,2329,1560,2462,3635, # 4096
3257,4108, 616,2210,4364,3180,2183,2294,5534,1833,5535,3525,4720,5536,1319,3770, # 4112
3771,1211,3636,1023,3258,1293,2812,5537,5538,5539,3905, 607,2311,3906, 762,2892, # 4128
1439,4365,1360,4721,1485,3072,5540,4722,1038,4366,1450,2062,2648,4367,1379,4723, # 4144
2593,5541,5542,4368,1352,1414,2330,2935,1172,5543,5544,3907,3908,4724,1798,1451, # 4160
5545,5546,5547,5548,2936,4109,4110,2492,2351, 411,4111,4112,3637,3333,3124,4725, # 4176
1561,2674,1452,4113,1375,5549,5550, 47,2974, 316,5551,1406,1591,2937,3181,5552, # 4192
1025,2142,3125,3182, 354,2740, 884,2228,4369,2412, 508,3772, 726,3638, 996,2433, # 4208
3639, 729,5553, 392,2194,1453,4114,4726,3773,5554,5555,2463,3640,2618,1675,2813, # 4224
919,2352,2975,2353,1270,4727,4115, 73,5556,5557, 647,5558,3259,2856,2259,1550, # 4240
1346,3024,5559,1332, 883,3526,5560,5561,5562,5563,3334,2775,5564,1212, 831,1347, # 4256
4370,4728,2331,3909,1864,3073, 720,3910,4729,4730,3911,5565,4371,5566,5567,4731, # 4272
5568,5569,1799,4732,3774,2619,4733,3641,1645,2376,4734,5570,2938, 669,2211,2675, # 4288
2434,5571,2893,5572,5573,1028,3260,5574,4372,2413,5575,2260,1353,5576,5577,4735, # 4304
3183, 518,5578,4116,5579,4373,1961,5580,2143,4374,5581,5582,3025,2354,2355,3912, # 4320
516,1834,1454,4117,2708,4375,4736,2229,2620,1972,1129,3642,5583,2776,5584,2976, # 4336
1422, 577,1470,3026,1524,3410,5585,5586, 432,4376,3074,3527,5587,2594,1455,2515, # 4352
2230,1973,1175,5588,1020,2741,4118,3528,4737,5589,2742,5590,1743,1361,3075,3529, # 4368
2649,4119,4377,4738,2295, 895, 924,4378,2171, 331,2247,3076, 166,1627,3077,1098, # 4384
5591,1232,2894,2231,3411,4739, 657, 403,1196,2377, 542,3775,3412,1600,4379,3530, # 4400
5592,4740,2777,3261, 576, 530,1362,4741,4742,2540,2676,3776,4120,5593, 842,3913, # 4416
5594,2814,2032,1014,4121, 213,2709,3413, 665, 621,4380,5595,3777,2939,2435,5596, # 4432
2436,3335,3643,3414,4743,4381,2541,4382,4744,3644,1682,4383,3531,1380,5597, 724, # 4448
2282, 600,1670,5598,1337,1233,4745,3126,2248,5599,1621,4746,5600, 651,4384,5601, # 4464
1612,4385,2621,5602,2857,5603,2743,2312,3078,5604, 716,2464,3079, 174,1255,2710, # 4480
4122,3645, 548,1320,1398, 728,4123,1574,5605,1891,1197,3080,4124,5606,3081,3082, # 4496
3778,3646,3779, 747,5607, 635,4386,4747,5608,5609,5610,4387,5611,5612,4748,5613, # 4512
3415,4749,2437, 451,5614,3780,2542,2073,4388,2744,4389,4125,5615,1764,4750,5616, # 4528
4390, 350,4751,2283,2395,2493,5617,4391,4126,2249,1434,4127, 488,4752, 458,4392, # 4544
4128,3781, 771,1330,2396,3914,2576,3184,2160,2414,1553,2677,3185,4393,5618,2494, # 4560
2895,2622,1720,2711,4394,3416,4753,5619,2543,4395,5620,3262,4396,2778,5621,2016, # 4576
2745,5622,1155,1017,3782,3915,5623,3336,2313, 201,1865,4397,1430,5624,4129,5625, # 4592
5626,5627,5628,5629,4398,1604,5630, 414,1866, 371,2595,4754,4755,3532,2017,3127, # 4608
4756,1708, 960,4399, 887, 389,2172,1536,1663,1721,5631,2232,4130,2356,2940,1580, # 4624
5632,5633,1744,4757,2544,4758,4759,5634,4760,5635,2074,5636,4761,3647,3417,2896, # 4640
4400,5637,4401,2650,3418,2815, 673,2712,2465, 709,3533,4131,3648,4402,5638,1148, # 4656
502, 634,5639,5640,1204,4762,3649,1575,4763,2623,3783,5641,3784,3128, 948,3263, # 4672
121,1745,3916,1110,5642,4403,3083,2516,3027,4132,3785,1151,1771,3917,1488,4133, # 4688
1987,5643,2438,3534,5644,5645,2094,5646,4404,3918,1213,1407,2816, 531,2746,2545, # 4704
3264,1011,1537,4764,2779,4405,3129,1061,5647,3786,3787,1867,2897,5648,2018, 120, # 4720
4406,4407,2063,3650,3265,2314,3919,2678,3419,1955,4765,4134,5649,3535,1047,2713, # 4736
1266,5650,1368,4766,2858, 649,3420,3920,2546,2747,1102,2859,2679,5651,5652,2000, # 4752
5653,1111,3651,2977,5654,2495,3921,3652,2817,1855,3421,3788,5655,5656,3422,2415, # 4768
2898,3337,3266,3653,5657,2577,5658,3654,2818,4135,1460, 856,5659,3655,5660,2899, # 4784
2978,5661,2900,3922,5662,4408, 632,2517, 875,3923,1697,3924,2296,5663,5664,4767, # 4800
3028,1239, 580,4768,4409,5665, 914, 936,2075,1190,4136,1039,2124,5666,5667,5668, # 4816
5669,3423,1473,5670,1354,4410,3925,4769,2173,3084,4137, 915,3338,4411,4412,3339, # 4832
1605,1835,5671,2748, 398,3656,4413,3926,4138, 328,1913,2860,4139,3927,1331,4414, # 4848
3029, 937,4415,5672,3657,4140,4141,3424,2161,4770,3425, 524, 742, 538,3085,1012, # 4864
5673,5674,3928,2466,5675, 658,1103, 225,3929,5676,5677,4771,5678,4772,5679,3267, # 4880
1243,5680,4142, 963,2250,4773,5681,2714,3658,3186,5682,5683,2596,2332,5684,4774, # 4896
5685,5686,5687,3536, 957,3426,2547,2033,1931,2941,2467, 870,2019,3659,1746,2780, # 4912
2781,2439,2468,5688,3930,5689,3789,3130,3790,3537,3427,3791,5690,1179,3086,5691, # 4928
3187,2378,4416,3792,2548,3188,3131,2749,4143,5692,3428,1556,2549,2297, 977,2901, # 4944
2034,4144,1205,3429,5693,1765,3430,3189,2125,1271, 714,1689,4775,3538,5694,2333, # 4960
3931, 533,4417,3660,2184, 617,5695,2469,3340,3539,2315,5696,5697,3190,5698,5699, # 4976
3932,1988, 618, 427,2651,3540,3431,5700,5701,1244,1690,5702,2819,4418,4776,5703, # 4992
3541,4777,5704,2284,1576, 473,3661,4419,3432, 972,5705,3662,5706,3087,5707,5708, # 5008
4778,4779,5709,3793,4145,4146,5710, 153,4780, 356,5711,1892,2902,4420,2144, 408, # 5024
803,2357,5712,3933,5713,4421,1646,2578,2518,4781,4782,3934,5714,3935,4422,5715, # 5040
2416,3433, 752,5716,5717,1962,3341,2979,5718, 746,3030,2470,4783,4423,3794, 698, # 5056
4784,1893,4424,3663,2550,4785,3664,3936,5719,3191,3434,5720,1824,1302,4147,2715, # 5072
3937,1974,4425,5721,4426,3192, 823,1303,1288,1236,2861,3542,4148,3435, 774,3938, # 5088
5722,1581,4786,1304,2862,3939,4787,5723,2440,2162,1083,3268,4427,4149,4428, 344, # 5104
1173, 288,2316, 454,1683,5724,5725,1461,4788,4150,2597,5726,5727,4789, 985, 894, # 5120
5728,3436,3193,5729,1914,2942,3795,1989,5730,2111,1975,5731,4151,5732,2579,1194, # 5136
425,5733,4790,3194,1245,3796,4429,5734,5735,2863,5736, 636,4791,1856,3940, 760, # 5152
1800,5737,4430,2212,1508,4792,4152,1894,1684,2298,5738,5739,4793,4431,4432,2213, # 5168
479,5740,5741, 832,5742,4153,2496,5743,2980,2497,3797, 990,3132, 627,1815,2652, # 5184
4433,1582,4434,2126,2112,3543,4794,5744, 799,4435,3195,5745,4795,2113,1737,3031, # 5200
1018, 543, 754,4436,3342,1676,4796,4797,4154,4798,1489,5746,3544,5747,2624,2903, # 5216
4155,5748,5749,2981,5750,5751,5752,5753,3196,4799,4800,2185,1722,5754,3269,3270, # 5232
1843,3665,1715, 481, 365,1976,1857,5755,5756,1963,2498,4801,5757,2127,3666,3271, # 5248
433,1895,2064,2076,5758, 602,2750,5759,5760,5761,5762,5763,3032,1628,3437,5764, # 5264
3197,4802,4156,2904,4803,2519,5765,2551,2782,5766,5767,5768,3343,4804,2905,5769, # 5280
4805,5770,2864,4806,4807,1221,2982,4157,2520,5771,5772,5773,1868,1990,5774,5775, # 5296
5776,1896,5777,5778,4808,1897,4158, 318,5779,2095,4159,4437,5780,5781, 485,5782, # 5312
938,3941, 553,2680, 116,5783,3942,3667,5784,3545,2681,2783,3438,3344,2820,5785, # 5328
3668,2943,4160,1747,2944,2983,5786,5787, 207,5788,4809,5789,4810,2521,5790,3033, # 5344
890,3669,3943,5791,1878,3798,3439,5792,2186,2358,3440,1652,5793,5794,5795, 941, # 5360
2299, 208,3546,4161,2020, 330,4438,3944,2906,2499,3799,4439,4811,5796,5797,5798, # 5376 #last 512
# Everything below is of no interest for detection purpose
2522,1613,4812,5799,3345,3945,2523,5800,4162,5801,1637,4163,2471,4813,3946,5802, # 5392
2500,3034,3800,5803,5804,2195,4814,5805,2163,5806,5807,5808,5809,5810,5811,5812, # 5408
5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824,5825,5826,5827,5828, # 5424
5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840,5841,5842,5843,5844, # 5440
5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856,5857,5858,5859,5860, # 5456
5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872,5873,5874,5875,5876, # 5472
5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888,5889,5890,5891,5892, # 5488
5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904,5905,5906,5907,5908, # 5504
5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920,5921,5922,5923,5924, # 5520
5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936,5937,5938,5939,5940, # 5536
5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952,5953,5954,5955,5956, # 5552
5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968,5969,5970,5971,5972, # 5568
5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984,5985,5986,5987,5988, # 5584
5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000,6001,6002,6003,6004, # 5600
6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016,6017,6018,6019,6020, # 5616
6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032,6033,6034,6035,6036, # 5632
6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048,6049,6050,6051,6052, # 5648
6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064,6065,6066,6067,6068, # 5664
6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080,6081,6082,6083,6084, # 5680
6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096,6097,6098,6099,6100, # 5696
6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112,6113,6114,6115,6116, # 5712
6117,6118,6119,6120,6121,6122,6123,6124,6125,6126,6127,6128,6129,6130,6131,6132, # 5728
6133,6134,6135,6136,6137,6138,6139,6140,6141,6142,6143,6144,6145,6146,6147,6148, # 5744
6149,6150,6151,6152,6153,6154,6155,6156,6157,6158,6159,6160,6161,6162,6163,6164, # 5760
6165,6166,6167,6168,6169,6170,6171,6172,6173,6174,6175,6176,6177,6178,6179,6180, # 5776
6181,6182,6183,6184,6185,6186,6187,6188,6189,6190,6191,6192,6193,6194,6195,6196, # 5792
6197,6198,6199,6200,6201,6202,6203,6204,6205,6206,6207,6208,6209,6210,6211,6212, # 5808
6213,6214,6215,6216,6217,6218,6219,6220,6221,6222,6223,3670,6224,6225,6226,6227, # 5824
6228,6229,6230,6231,6232,6233,6234,6235,6236,6237,6238,6239,6240,6241,6242,6243, # 5840
6244,6245,6246,6247,6248,6249,6250,6251,6252,6253,6254,6255,6256,6257,6258,6259, # 5856
6260,6261,6262,6263,6264,6265,6266,6267,6268,6269,6270,6271,6272,6273,6274,6275, # 5872
6276,6277,6278,6279,6280,6281,6282,6283,6284,6285,4815,6286,6287,6288,6289,6290, # 5888
6291,6292,4816,6293,6294,6295,6296,6297,6298,6299,6300,6301,6302,6303,6304,6305, # 5904
6306,6307,6308,6309,6310,6311,4817,4818,6312,6313,6314,6315,6316,6317,6318,4819, # 5920
6319,6320,6321,6322,6323,6324,6325,6326,6327,6328,6329,6330,6331,6332,6333,6334, # 5936
6335,6336,6337,4820,6338,6339,6340,6341,6342,6343,6344,6345,6346,6347,6348,6349, # 5952
6350,6351,6352,6353,6354,6355,6356,6357,6358,6359,6360,6361,6362,6363,6364,6365, # 5968
6366,6367,6368,6369,6370,6371,6372,6373,6374,6375,6376,6377,6378,6379,6380,6381, # 5984
6382,6383,6384,6385,6386,6387,6388,6389,6390,6391,6392,6393,6394,6395,6396,6397, # 6000
6398,6399,6400,6401,6402,6403,6404,6405,6406,6407,6408,6409,6410,3441,6411,6412, # 6016
6413,6414,6415,6416,6417,6418,6419,6420,6421,6422,6423,6424,6425,4440,6426,6427, # 6032
6428,6429,6430,6431,6432,6433,6434,6435,6436,6437,6438,6439,6440,6441,6442,6443, # 6048
6444,6445,6446,6447,6448,6449,6450,6451,6452,6453,6454,4821,6455,6456,6457,6458, # 6064
6459,6460,6461,6462,6463,6464,6465,6466,6467,6468,6469,6470,6471,6472,6473,6474, # 6080
6475,6476,6477,3947,3948,6478,6479,6480,6481,3272,4441,6482,6483,6484,6485,4442, # 6096
6486,6487,6488,6489,6490,6491,6492,6493,6494,6495,6496,4822,6497,6498,6499,6500, # 6112
6501,6502,6503,6504,6505,6506,6507,6508,6509,6510,6511,6512,6513,6514,6515,6516, # 6128
6517,6518,6519,6520,6521,6522,6523,6524,6525,6526,6527,6528,6529,6530,6531,6532, # 6144
6533,6534,6535,6536,6537,6538,6539,6540,6541,6542,6543,6544,6545,6546,6547,6548, # 6160
6549,6550,6551,6552,6553,6554,6555,6556,2784,6557,4823,6558,6559,6560,6561,6562, # 6176
6563,6564,6565,6566,6567,6568,6569,3949,6570,6571,6572,4824,6573,6574,6575,6576, # 6192
6577,6578,6579,6580,6581,6582,6583,4825,6584,6585,6586,3950,2785,6587,6588,6589, # 6208
6590,6591,6592,6593,6594,6595,6596,6597,6598,6599,6600,6601,6602,6603,6604,6605, # 6224
6606,6607,6608,6609,6610,6611,6612,4826,6613,6614,6615,4827,6616,6617,6618,6619, # 6240
6620,6621,6622,6623,6624,6625,4164,6626,6627,6628,6629,6630,6631,6632,6633,6634, # 6256
3547,6635,4828,6636,6637,6638,6639,6640,6641,6642,3951,2984,6643,6644,6645,6646, # 6272
6647,6648,6649,4165,6650,4829,6651,6652,4830,6653,6654,6655,6656,6657,6658,6659, # 6288
6660,6661,6662,4831,6663,6664,6665,6666,6667,6668,6669,6670,6671,4166,6672,4832, # 6304
3952,6673,6674,6675,6676,4833,6677,6678,6679,4167,6680,6681,6682,3198,6683,6684, # 6320
6685,6686,6687,6688,6689,6690,6691,6692,6693,6694,6695,6696,6697,4834,6698,6699, # 6336
6700,6701,6702,6703,6704,6705,6706,6707,6708,6709,6710,6711,6712,6713,6714,6715, # 6352
6716,6717,6718,6719,6720,6721,6722,6723,6724,6725,6726,6727,6728,6729,6730,6731, # 6368
6732,6733,6734,4443,6735,6736,6737,6738,6739,6740,6741,6742,6743,6744,6745,4444, # 6384
6746,6747,6748,6749,6750,6751,6752,6753,6754,6755,6756,6757,6758,6759,6760,6761, # 6400
6762,6763,6764,6765,6766,6767,6768,6769,6770,6771,6772,6773,6774,6775,6776,6777, # 6416
6778,6779,6780,6781,4168,6782,6783,3442,6784,6785,6786,6787,6788,6789,6790,6791, # 6432
4169,6792,6793,6794,6795,6796,6797,6798,6799,6800,6801,6802,6803,6804,6805,6806, # 6448
6807,6808,6809,6810,6811,4835,6812,6813,6814,4445,6815,6816,4446,6817,6818,6819, # 6464
6820,6821,6822,6823,6824,6825,6826,6827,6828,6829,6830,6831,6832,6833,6834,6835, # 6480
3548,6836,6837,6838,6839,6840,6841,6842,6843,6844,6845,6846,4836,6847,6848,6849, # 6496
6850,6851,6852,6853,6854,3953,6855,6856,6857,6858,6859,6860,6861,6862,6863,6864, # 6512
6865,6866,6867,6868,6869,6870,6871,6872,6873,6874,6875,6876,6877,3199,6878,6879, # 6528
6880,6881,6882,4447,6883,6884,6885,6886,6887,6888,6889,6890,6891,6892,6893,6894, # 6544
6895,6896,6897,6898,6899,6900,6901,6902,6903,6904,4170,6905,6906,6907,6908,6909, # 6560
6910,6911,6912,6913,6914,6915,6916,6917,6918,6919,6920,6921,6922,6923,6924,6925, # 6576
6926,6927,4837,6928,6929,6930,6931,6932,6933,6934,6935,6936,3346,6937,6938,4838, # 6592
6939,6940,6941,4448,6942,6943,6944,6945,6946,4449,6947,6948,6949,6950,6951,6952, # 6608
6953,6954,6955,6956,6957,6958,6959,6960,6961,6962,6963,6964,6965,6966,6967,6968, # 6624
6969,6970,6971,6972,6973,6974,6975,6976,6977,6978,6979,6980,6981,6982,6983,6984, # 6640
6985,6986,6987,6988,6989,6990,6991,6992,6993,6994,3671,6995,6996,6997,6998,4839, # 6656
6999,7000,7001,7002,3549,7003,7004,7005,7006,7007,7008,7009,7010,7011,7012,7013, # 6672
7014,7015,7016,7017,7018,7019,7020,7021,7022,7023,7024,7025,7026,7027,7028,7029, # 6688
7030,4840,7031,7032,7033,7034,7035,7036,7037,7038,4841,7039,7040,7041,7042,7043, # 6704
7044,7045,7046,7047,7048,7049,7050,7051,7052,7053,7054,7055,7056,7057,7058,7059, # 6720
7060,7061,7062,7063,7064,7065,7066,7067,7068,7069,7070,2985,7071,7072,7073,7074, # 6736
7075,7076,7077,7078,7079,7080,4842,7081,7082,7083,7084,7085,7086,7087,7088,7089, # 6752
7090,7091,7092,7093,7094,7095,7096,7097,7098,7099,7100,7101,7102,7103,7104,7105, # 6768
7106,7107,7108,7109,7110,7111,7112,7113,7114,7115,7116,7117,7118,4450,7119,7120, # 6784
7121,7122,7123,7124,7125,7126,7127,7128,7129,7130,7131,7132,7133,7134,7135,7136, # 6800
7137,7138,7139,7140,7141,7142,7143,4843,7144,7145,7146,7147,7148,7149,7150,7151, # 6816
7152,7153,7154,7155,7156,7157,7158,7159,7160,7161,7162,7163,7164,7165,7166,7167, # 6832
7168,7169,7170,7171,7172,7173,7174,7175,7176,7177,7178,7179,7180,7181,7182,7183, # 6848
7184,7185,7186,7187,7188,4171,4172,7189,7190,7191,7192,7193,7194,7195,7196,7197, # 6864
7198,7199,7200,7201,7202,7203,7204,7205,7206,7207,7208,7209,7210,7211,7212,7213, # 6880
7214,7215,7216,7217,7218,7219,7220,7221,7222,7223,7224,7225,7226,7227,7228,7229, # 6896
7230,7231,7232,7233,7234,7235,7236,7237,7238,7239,7240,7241,7242,7243,7244,7245, # 6912
7246,7247,7248,7249,7250,7251,7252,7253,7254,7255,7256,7257,7258,7259,7260,7261, # 6928
7262,7263,7264,7265,7266,7267,7268,7269,7270,7271,7272,7273,7274,7275,7276,7277, # 6944
7278,7279,7280,7281,7282,7283,7284,7285,7286,7287,7288,7289,7290,7291,7292,7293, # 6960
7294,7295,7296,4844,7297,7298,7299,7300,7301,7302,7303,7304,7305,7306,7307,7308, # 6976
7309,7310,7311,7312,7313,7314,7315,7316,4451,7317,7318,7319,7320,7321,7322,7323, # 6992
7324,7325,7326,7327,7328,7329,7330,7331,7332,7333,7334,7335,7336,7337,7338,7339, # 7008
7340,7341,7342,7343,7344,7345,7346,7347,7348,7349,7350,7351,7352,7353,4173,7354, # 7024
7355,4845,7356,7357,7358,7359,7360,7361,7362,7363,7364,7365,7366,7367,7368,7369, # 7040
7370,7371,7372,7373,7374,7375,7376,7377,7378,7379,7380,7381,7382,7383,7384,7385, # 7056
7386,7387,7388,4846,7389,7390,7391,7392,7393,7394,7395,7396,7397,7398,7399,7400, # 7072
7401,7402,7403,7404,7405,3672,7406,7407,7408,7409,7410,7411,7412,7413,7414,7415, # 7088
7416,7417,7418,7419,7420,7421,7422,7423,7424,7425,7426,7427,7428,7429,7430,7431, # 7104
7432,7433,7434,7435,7436,7437,7438,7439,7440,7441,7442,7443,7444,7445,7446,7447, # 7120
7448,7449,7450,7451,7452,7453,4452,7454,3200,7455,7456,7457,7458,7459,7460,7461, # 7136
7462,7463,7464,7465,7466,7467,7468,7469,7470,7471,7472,7473,7474,4847,7475,7476, # 7152
7477,3133,7478,7479,7480,7481,7482,7483,7484,7485,7486,7487,7488,7489,7490,7491, # 7168
7492,7493,7494,7495,7496,7497,7498,7499,7500,7501,7502,3347,7503,7504,7505,7506, # 7184
7507,7508,7509,7510,7511,7512,7513,7514,7515,7516,7517,7518,7519,7520,7521,4848, # 7200
7522,7523,7524,7525,7526,7527,7528,7529,7530,7531,7532,7533,7534,7535,7536,7537, # 7216
7538,7539,7540,7541,7542,7543,7544,7545,7546,7547,7548,7549,3801,4849,7550,7551, # 7232
7552,7553,7554,7555,7556,7557,7558,7559,7560,7561,7562,7563,7564,7565,7566,7567, # 7248
7568,7569,3035,7570,7571,7572,7573,7574,7575,7576,7577,7578,7579,7580,7581,7582, # 7264
7583,7584,7585,7586,7587,7588,7589,7590,7591,7592,7593,7594,7595,7596,7597,7598, # 7280
7599,7600,7601,7602,7603,7604,7605,7606,7607,7608,7609,7610,7611,7612,7613,7614, # 7296
7615,7616,4850,7617,7618,3802,7619,7620,7621,7622,7623,7624,7625,7626,7627,7628, # 7312
7629,7630,7631,7632,4851,7633,7634,7635,7636,7637,7638,7639,7640,7641,7642,7643, # 7328
7644,7645,7646,7647,7648,7649,7650,7651,7652,7653,7654,7655,7656,7657,7658,7659, # 7344
7660,7661,7662,7663,7664,7665,7666,7667,7668,7669,7670,4453,7671,7672,7673,7674, # 7360
7675,7676,7677,7678,7679,7680,7681,7682,7683,7684,7685,7686,7687,7688,7689,7690, # 7376
7691,7692,7693,7694,7695,7696,7697,3443,7698,7699,7700,7701,7702,4454,7703,7704, # 7392
7705,7706,7707,7708,7709,7710,7711,7712,7713,2472,7714,7715,7716,7717,7718,7719, # 7408
7720,7721,7722,7723,7724,7725,7726,7727,7728,7729,7730,7731,3954,7732,7733,7734, # 7424
7735,7736,7737,7738,7739,7740,7741,7742,7743,7744,7745,7746,7747,7748,7749,7750, # 7440
3134,7751,7752,4852,7753,7754,7755,4853,7756,7757,7758,7759,7760,4174,7761,7762, # 7456
7763,7764,7765,7766,7767,7768,7769,7770,7771,7772,7773,7774,7775,7776,7777,7778, # 7472
7779,7780,7781,7782,7783,7784,7785,7786,7787,7788,7789,7790,7791,7792,7793,7794, # 7488
7795,7796,7797,7798,7799,7800,7801,7802,7803,7804,7805,4854,7806,7807,7808,7809, # 7504
7810,7811,7812,7813,7814,7815,7816,7817,7818,7819,7820,7821,7822,7823,7824,7825, # 7520
4855,7826,7827,7828,7829,7830,7831,7832,7833,7834,7835,7836,7837,7838,7839,7840, # 7536
7841,7842,7843,7844,7845,7846,7847,3955,7848,7849,7850,7851,7852,7853,7854,7855, # 7552
7856,7857,7858,7859,7860,3444,7861,7862,7863,7864,7865,7866,7867,7868,7869,7870, # 7568
7871,7872,7873,7874,7875,7876,7877,7878,7879,7880,7881,7882,7883,7884,7885,7886, # 7584
7887,7888,7889,7890,7891,4175,7892,7893,7894,7895,7896,4856,4857,7897,7898,7899, # 7600
7900,2598,7901,7902,7903,7904,7905,7906,7907,7908,4455,7909,7910,7911,7912,7913, # 7616
7914,3201,7915,7916,7917,7918,7919,7920,7921,4858,7922,7923,7924,7925,7926,7927, # 7632
7928,7929,7930,7931,7932,7933,7934,7935,7936,7937,7938,7939,7940,7941,7942,7943, # 7648
7944,7945,7946,7947,7948,7949,7950,7951,7952,7953,7954,7955,7956,7957,7958,7959, # 7664
7960,7961,7962,7963,7964,7965,7966,7967,7968,7969,7970,7971,7972,7973,7974,7975, # 7680
7976,7977,7978,7979,7980,7981,4859,7982,7983,7984,7985,7986,7987,7988,7989,7990, # 7696
7991,7992,7993,7994,7995,7996,4860,7997,7998,7999,8000,8001,8002,8003,8004,8005, # 7712
8006,8007,8008,8009,8010,8011,8012,8013,8014,8015,8016,4176,8017,8018,8019,8020, # 7728
8021,8022,8023,4861,8024,8025,8026,8027,8028,8029,8030,8031,8032,8033,8034,8035, # 7744
8036,4862,4456,8037,8038,8039,8040,4863,8041,8042,8043,8044,8045,8046,8047,8048, # 7760
8049,8050,8051,8052,8053,8054,8055,8056,8057,8058,8059,8060,8061,8062,8063,8064, # 7776
8065,8066,8067,8068,8069,8070,8071,8072,8073,8074,8075,8076,8077,8078,8079,8080, # 7792
8081,8082,8083,8084,8085,8086,8087,8088,8089,8090,8091,8092,8093,8094,8095,8096, # 7808
8097,8098,8099,4864,4177,8100,8101,8102,8103,8104,8105,8106,8107,8108,8109,8110, # 7824
8111,8112,8113,8114,8115,8116,8117,8118,8119,8120,4178,8121,8122,8123,8124,8125, # 7840
8126,8127,8128,8129,8130,8131,8132,8133,8134,8135,8136,8137,8138,8139,8140,8141, # 7856
8142,8143,8144,8145,4865,4866,8146,8147,8148,8149,8150,8151,8152,8153,8154,8155, # 7872
8156,8157,8158,8159,8160,8161,8162,8163,8164,8165,4179,8166,8167,8168,8169,8170, # 7888
8171,8172,8173,8174,8175,8176,8177,8178,8179,8180,8181,4457,8182,8183,8184,8185, # 7904
8186,8187,8188,8189,8190,8191,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201, # 7920
8202,8203,8204,8205,8206,8207,8208,8209,8210,8211,8212,8213,8214,8215,8216,8217, # 7936
8218,8219,8220,8221,8222,8223,8224,8225,8226,8227,8228,8229,8230,8231,8232,8233, # 7952
8234,8235,8236,8237,8238,8239,8240,8241,8242,8243,8244,8245,8246,8247,8248,8249, # 7968
8250,8251,8252,8253,8254,8255,8256,3445,8257,8258,8259,8260,8261,8262,4458,8263, # 7984
8264,8265,8266,8267,8268,8269,8270,8271,8272,4459,8273,8274,8275,8276,3550,8277, # 8000
8278,8279,8280,8281,8282,8283,8284,8285,8286,8287,8288,8289,4460,8290,8291,8292, # 8016
8293,8294,8295,8296,8297,8298,8299,8300,8301,8302,8303,8304,8305,8306,8307,4867, # 8032
8308,8309,8310,8311,8312,3551,8313,8314,8315,8316,8317,8318,8319,8320,8321,8322, # 8048
8323,8324,8325,8326,4868,8327,8328,8329,8330,8331,8332,8333,8334,8335,8336,8337, # 8064
8338,8339,8340,8341,8342,8343,8344,8345,8346,8347,8348,8349,8350,8351,8352,8353, # 8080
8354,8355,8356,8357,8358,8359,8360,8361,8362,8363,4869,4461,8364,8365,8366,8367, # 8096
8368,8369,8370,4870,8371,8372,8373,8374,8375,8376,8377,8378,8379,8380,8381,8382, # 8112
8383,8384,8385,8386,8387,8388,8389,8390,8391,8392,8393,8394,8395,8396,8397,8398, # 8128
8399,8400,8401,8402,8403,8404,8405,8406,8407,8408,8409,8410,4871,8411,8412,8413, # 8144
8414,8415,8416,8417,8418,8419,8420,8421,8422,4462,8423,8424,8425,8426,8427,8428, # 8160
8429,8430,8431,8432,8433,2986,8434,8435,8436,8437,8438,8439,8440,8441,8442,8443, # 8176
8444,8445,8446,8447,8448,8449,8450,8451,8452,8453,8454,8455,8456,8457,8458,8459, # 8192
8460,8461,8462,8463,8464,8465,8466,8467,8468,8469,8470,8471,8472,8473,8474,8475, # 8208
8476,8477,8478,4180,8479,8480,8481,8482,8483,8484,8485,8486,8487,8488,8489,8490, # 8224
8491,8492,8493,8494,8495,8496,8497,8498,8499,8500,8501,8502,8503,8504,8505,8506, # 8240
8507,8508,8509,8510,8511,8512,8513,8514,8515,8516,8517,8518,8519,8520,8521,8522, # 8256
8523,8524,8525,8526,8527,8528,8529,8530,8531,8532,8533,8534,8535,8536,8537,8538, # 8272
8539,8540,8541,8542,8543,8544,8545,8546,8547,8548,8549,8550,8551,8552,8553,8554, # 8288
8555,8556,8557,8558,8559,8560,8561,8562,8563,8564,4872,8565,8566,8567,8568,8569, # 8304
8570,8571,8572,8573,4873,8574,8575,8576,8577,8578,8579,8580,8581,8582,8583,8584, # 8320
8585,8586,8587,8588,8589,8590,8591,8592,8593,8594,8595,8596,8597,8598,8599,8600, # 8336
8601,8602,8603,8604,8605,3803,8606,8607,8608,8609,8610,8611,8612,8613,4874,3804, # 8352
8614,8615,8616,8617,8618,8619,8620,8621,3956,8622,8623,8624,8625,8626,8627,8628, # 8368
8629,8630,8631,8632,8633,8634,8635,8636,8637,8638,2865,8639,8640,8641,8642,8643, # 8384
8644,8645,8646,8647,8648,8649,8650,8651,8652,8653,8654,8655,8656,4463,8657,8658, # 8400
8659,4875,4876,8660,8661,8662,8663,8664,8665,8666,8667,8668,8669,8670,8671,8672, # 8416
8673,8674,8675,8676,8677,8678,8679,8680,8681,4464,8682,8683,8684,8685,8686,8687, # 8432
8688,8689,8690,8691,8692,8693,8694,8695,8696,8697,8698,8699,8700,8701,8702,8703, # 8448
8704,8705,8706,8707,8708,8709,2261,8710,8711,8712,8713,8714,8715,8716,8717,8718, # 8464
8719,8720,8721,8722,8723,8724,8725,8726,8727,8728,8729,8730,8731,8732,8733,4181, # 8480
8734,8735,8736,8737,8738,8739,8740,8741,8742,8743,8744,8745,8746,8747,8748,8749, # 8496
8750,8751,8752,8753,8754,8755,8756,8757,8758,8759,8760,8761,8762,8763,4877,8764, # 8512
8765,8766,8767,8768,8769,8770,8771,8772,8773,8774,8775,8776,8777,8778,8779,8780, # 8528
8781,8782,8783,8784,8785,8786,8787,8788,4878,8789,4879,8790,8791,8792,4880,8793, # 8544
8794,8795,8796,8797,8798,8799,8800,8801,4881,8802,8803,8804,8805,8806,8807,8808, # 8560
8809,8810,8811,8812,8813,8814,8815,3957,8816,8817,8818,8819,8820,8821,8822,8823, # 8576
8824,8825,8826,8827,8828,8829,8830,8831,8832,8833,8834,8835,8836,8837,8838,8839, # 8592
8840,8841,8842,8843,8844,8845,8846,8847,4882,8848,8849,8850,8851,8852,8853,8854, # 8608
8855,8856,8857,8858,8859,8860,8861,8862,8863,8864,8865,8866,8867,8868,8869,8870, # 8624
8871,8872,8873,8874,8875,8876,8877,8878,8879,8880,8881,8882,8883,8884,3202,8885, # 8640
8886,8887,8888,8889,8890,8891,8892,8893,8894,8895,8896,8897,8898,8899,8900,8901, # 8656
8902,8903,8904,8905,8906,8907,8908,8909,8910,8911,8912,8913,8914,8915,8916,8917, # 8672
8918,8919,8920,8921,8922,8923,8924,4465,8925,8926,8927,8928,8929,8930,8931,8932, # 8688
4883,8933,8934,8935,8936,8937,8938,8939,8940,8941,8942,8943,2214,8944,8945,8946, # 8704
8947,8948,8949,8950,8951,8952,8953,8954,8955,8956,8957,8958,8959,8960,8961,8962, # 8720
8963,8964,8965,4884,8966,8967,8968,8969,8970,8971,8972,8973,8974,8975,8976,8977, # 8736
8978,8979,8980,8981,8982,8983,8984,8985,8986,8987,8988,8989,8990,8991,8992,4885, # 8752
8993,8994,8995,8996,8997,8998,8999,9000,9001,9002,9003,9004,9005,9006,9007,9008, # 8768
9009,9010,9011,9012,9013,9014,9015,9016,9017,9018,9019,9020,9021,4182,9022,9023, # 8784
9024,9025,9026,9027,9028,9029,9030,9031,9032,9033,9034,9035,9036,9037,9038,9039, # 8800
9040,9041,9042,9043,9044,9045,9046,9047,9048,9049,9050,9051,9052,9053,9054,9055, # 8816
9056,9057,9058,9059,9060,9061,9062,9063,4886,9064,9065,9066,9067,9068,9069,4887, # 8832
9070,9071,9072,9073,9074,9075,9076,9077,9078,9079,9080,9081,9082,9083,9084,9085, # 8848
9086,9087,9088,9089,9090,9091,9092,9093,9094,9095,9096,9097,9098,9099,9100,9101, # 8864
9102,9103,9104,9105,9106,9107,9108,9109,9110,9111,9112,9113,9114,9115,9116,9117, # 8880
9118,9119,9120,9121,9122,9123,9124,9125,9126,9127,9128,9129,9130,9131,9132,9133, # 8896
9134,9135,9136,9137,9138,9139,9140,9141,3958,9142,9143,9144,9145,9146,9147,9148, # 8912
9149,9150,9151,4888,9152,9153,9154,9155,9156,9157,9158,9159,9160,9161,9162,9163, # 8928
9164,9165,9166,9167,9168,9169,9170,9171,9172,9173,9174,9175,4889,9176,9177,9178, # 8944
9179,9180,9181,9182,9183,9184,9185,9186,9187,9188,9189,9190,9191,9192,9193,9194, # 8960
9195,9196,9197,9198,9199,9200,9201,9202,9203,4890,9204,9205,9206,9207,9208,9209, # 8976
9210,9211,9212,9213,9214,9215,9216,9217,9218,9219,9220,9221,9222,4466,9223,9224, # 8992
9225,9226,9227,9228,9229,9230,9231,9232,9233,9234,9235,9236,9237,9238,9239,9240, # 9008
9241,9242,9243,9244,9245,4891,9246,9247,9248,9249,9250,9251,9252,9253,9254,9255, # 9024
9256,9257,4892,9258,9259,9260,9261,4893,4894,9262,9263,9264,9265,9266,9267,9268, # 9040
9269,9270,9271,9272,9273,4467,9274,9275,9276,9277,9278,9279,9280,9281,9282,9283, # 9056
9284,9285,3673,9286,9287,9288,9289,9290,9291,9292,9293,9294,9295,9296,9297,9298, # 9072
9299,9300,9301,9302,9303,9304,9305,9306,9307,9308,9309,9310,9311,9312,9313,9314, # 9088
9315,9316,9317,9318,9319,9320,9321,9322,4895,9323,9324,9325,9326,9327,9328,9329, # 9104
9330,9331,9332,9333,9334,9335,9336,9337,9338,9339,9340,9341,9342,9343,9344,9345, # 9120
9346,9347,4468,9348,9349,9350,9351,9352,9353,9354,9355,9356,9357,9358,9359,9360, # 9136
9361,9362,9363,9364,9365,9366,9367,9368,9369,9370,9371,9372,9373,4896,9374,4469, # 9152
9375,9376,9377,9378,9379,4897,9380,9381,9382,9383,9384,9385,9386,9387,9388,9389, # 9168
9390,9391,9392,9393,9394,9395,9396,9397,9398,9399,9400,9401,9402,9403,9404,9405, # 9184
9406,4470,9407,2751,9408,9409,3674,3552,9410,9411,9412,9413,9414,9415,9416,9417, # 9200
9418,9419,9420,9421,4898,9422,9423,9424,9425,9426,9427,9428,9429,3959,9430,9431, # 9216
9432,9433,9434,9435,9436,4471,9437,9438,9439,9440,9441,9442,9443,9444,9445,9446, # 9232
9447,9448,9449,9450,3348,9451,9452,9453,9454,9455,9456,9457,9458,9459,9460,9461, # 9248
9462,9463,9464,9465,9466,9467,9468,9469,9470,9471,9472,4899,9473,9474,9475,9476, # 9264
9477,4900,9478,9479,9480,9481,9482,9483,9484,9485,9486,9487,9488,3349,9489,9490, # 9280
9491,9492,9493,9494,9495,9496,9497,9498,9499,9500,9501,9502,9503,9504,9505,9506, # 9296
9507,9508,9509,9510,9511,9512,9513,9514,9515,9516,9517,9518,9519,9520,4901,9521, # 9312
9522,9523,9524,9525,9526,4902,9527,9528,9529,9530,9531,9532,9533,9534,9535,9536, # 9328
9537,9538,9539,9540,9541,9542,9543,9544,9545,9546,9547,9548,9549,9550,9551,9552, # 9344
9553,9554,9555,9556,9557,9558,9559,9560,9561,9562,9563,9564,9565,9566,9567,9568, # 9360
9569,9570,9571,9572,9573,9574,9575,9576,9577,9578,9579,9580,9581,9582,9583,9584, # 9376
3805,9585,9586,9587,9588,9589,9590,9591,9592,9593,9594,9595,9596,9597,9598,9599, # 9392
9600,9601,9602,4903,9603,9604,9605,9606,9607,4904,9608,9609,9610,9611,9612,9613, # 9408
9614,4905,9615,9616,9617,9618,9619,9620,9621,9622,9623,9624,9625,9626,9627,9628, # 9424
9629,9630,9631,9632,4906,9633,9634,9635,9636,9637,9638,9639,9640,9641,9642,9643, # 9440
4907,9644,9645,9646,9647,9648,9649,9650,9651,9652,9653,9654,9655,9656,9657,9658, # 9456
9659,9660,9661,9662,9663,9664,9665,9666,9667,9668,9669,9670,9671,9672,4183,9673, # 9472
9674,9675,9676,9677,4908,9678,9679,9680,9681,4909,9682,9683,9684,9685,9686,9687, # 9488
9688,9689,9690,4910,9691,9692,9693,3675,9694,9695,9696,2945,9697,9698,9699,9700, # 9504
9701,9702,9703,9704,9705,4911,9706,9707,9708,9709,9710,9711,9712,9713,9714,9715, # 9520
9716,9717,9718,9719,9720,9721,9722,9723,9724,9725,9726,9727,9728,9729,9730,9731, # 9536
9732,9733,9734,9735,4912,9736,9737,9738,9739,9740,4913,9741,9742,9743,9744,9745, # 9552
9746,9747,9748,9749,9750,9751,9752,9753,9754,9755,9756,9757,9758,4914,9759,9760, # 9568
9761,9762,9763,9764,9765,9766,9767,9768,9769,9770,9771,9772,9773,9774,9775,9776, # 9584
9777,9778,9779,9780,9781,9782,4915,9783,9784,9785,9786,9787,9788,9789,9790,9791, # 9600
9792,9793,4916,9794,9795,9796,9797,9798,9799,9800,9801,9802,9803,9804,9805,9806, # 9616
9807,9808,9809,9810,9811,9812,9813,9814,9815,9816,9817,9818,9819,9820,9821,9822, # 9632
9823,9824,9825,9826,9827,9828,9829,9830,9831,9832,9833,9834,9835,9836,9837,9838, # 9648
9839,9840,9841,9842,9843,9844,9845,9846,9847,9848,9849,9850,9851,9852,9853,9854, # 9664
9855,9856,9857,9858,9859,9860,9861,9862,9863,9864,9865,9866,9867,9868,4917,9869, # 9680
9870,9871,9872,9873,9874,9875,9876,9877,9878,9879,9880,9881,9882,9883,9884,9885, # 9696
9886,9887,9888,9889,9890,9891,9892,4472,9893,9894,9895,9896,9897,3806,9898,9899, # 9712
9900,9901,9902,9903,9904,9905,9906,9907,9908,9909,9910,9911,9912,9913,9914,4918, # 9728
9915,9916,9917,4919,9918,9919,9920,9921,4184,9922,9923,9924,9925,9926,9927,9928, # 9744
9929,9930,9931,9932,9933,9934,9935,9936,9937,9938,9939,9940,9941,9942,9943,9944, # 9760
9945,9946,4920,9947,9948,9949,9950,9951,9952,9953,9954,9955,4185,9956,9957,9958, # 9776
9959,9960,9961,9962,9963,9964,9965,4921,9966,9967,9968,4473,9969,9970,9971,9972, # 9792
9973,9974,9975,9976,9977,4474,9978,9979,9980,9981,9982,9983,9984,9985,9986,9987, # 9808
9988,9989,9990,9991,9992,9993,9994,9995,9996,9997,9998,9999,10000,10001,10002,10003, # 9824
10004,10005,10006,10007,10008,10009,10010,10011,10012,10013,10014,10015,10016,10017,10018,10019, # 9840
10020,10021,4922,10022,4923,10023,10024,10025,10026,10027,10028,10029,10030,10031,10032,10033, # 9856
10034,10035,10036,10037,10038,10039,10040,10041,10042,10043,10044,10045,10046,10047,10048,4924, # 9872
10049,10050,10051,10052,10053,10054,10055,10056,10057,10058,10059,10060,10061,10062,10063,10064, # 9888
10065,10066,10067,10068,10069,10070,10071,10072,10073,10074,10075,10076,10077,10078,10079,10080, # 9904
10081,10082,10083,10084,10085,10086,10087,4475,10088,10089,10090,10091,10092,10093,10094,10095, # 9920
10096,10097,4476,10098,10099,10100,10101,10102,10103,10104,10105,10106,10107,10108,10109,10110, # 9936
10111,2174,10112,10113,10114,10115,10116,10117,10118,10119,10120,10121,10122,10123,10124,10125, # 9952
10126,10127,10128,10129,10130,10131,10132,10133,10134,10135,10136,10137,10138,10139,10140,3807, # 9968
4186,4925,10141,10142,10143,10144,10145,10146,10147,4477,4187,10148,10149,10150,10151,10152, # 9984
10153,4188,10154,10155,10156,10157,10158,10159,10160,10161,4926,10162,10163,10164,10165,10166, #10000
10167,10168,10169,10170,10171,10172,10173,10174,10175,10176,10177,10178,10179,10180,10181,10182, #10016
10183,10184,10185,10186,10187,10188,10189,10190,10191,10192,3203,10193,10194,10195,10196,10197, #10032
10198,10199,10200,4478,10201,10202,10203,10204,4479,10205,10206,10207,10208,10209,10210,10211, #10048
10212,10213,10214,10215,10216,10217,10218,10219,10220,10221,10222,10223,10224,10225,10226,10227, #10064
10228,10229,10230,10231,10232,10233,10234,4927,10235,10236,10237,10238,10239,10240,10241,10242, #10080
10243,10244,10245,10246,10247,10248,10249,10250,10251,10252,10253,10254,10255,10256,10257,10258, #10096
10259,10260,10261,10262,10263,10264,10265,10266,10267,10268,10269,10270,10271,10272,10273,4480, #10112
4928,4929,10274,10275,10276,10277,10278,10279,10280,10281,10282,10283,10284,10285,10286,10287, #10128
10288,10289,10290,10291,10292,10293,10294,10295,10296,10297,10298,10299,10300,10301,10302,10303, #10144
10304,10305,10306,10307,10308,10309,10310,10311,10312,10313,10314,10315,10316,10317,10318,10319, #10160
10320,10321,10322,10323,10324,10325,10326,10327,10328,10329,10330,10331,10332,10333,10334,4930, #10176
10335,10336,10337,10338,10339,10340,10341,10342,4931,10343,10344,10345,10346,10347,10348,10349, #10192
10350,10351,10352,10353,10354,10355,3088,10356,2786,10357,10358,10359,10360,4189,10361,10362, #10208
10363,10364,10365,10366,10367,10368,10369,10370,10371,10372,10373,10374,10375,4932,10376,10377, #10224
10378,10379,10380,10381,10382,10383,10384,10385,10386,10387,10388,10389,10390,10391,10392,4933, #10240
10393,10394,10395,4934,10396,10397,10398,10399,10400,10401,10402,10403,10404,10405,10406,10407, #10256
10408,10409,10410,10411,10412,3446,10413,10414,10415,10416,10417,10418,10419,10420,10421,10422, #10272
10423,4935,10424,10425,10426,10427,10428,10429,10430,4936,10431,10432,10433,10434,10435,10436, #10288
10437,10438,10439,10440,10441,10442,10443,4937,10444,10445,10446,10447,4481,10448,10449,10450, #10304
10451,10452,10453,10454,10455,10456,10457,10458,10459,10460,10461,10462,10463,10464,10465,10466, #10320
10467,10468,10469,10470,10471,10472,10473,10474,10475,10476,10477,10478,10479,10480,10481,10482, #10336
10483,10484,10485,10486,10487,10488,10489,10490,10491,10492,10493,10494,10495,10496,10497,10498, #10352
10499,10500,10501,10502,10503,10504,10505,4938,10506,10507,10508,10509,10510,2552,10511,10512, #10368
10513,10514,10515,10516,3447,10517,10518,10519,10520,10521,10522,10523,10524,10525,10526,10527, #10384
10528,10529,10530,10531,10532,10533,10534,10535,10536,10537,10538,10539,10540,10541,10542,10543, #10400
4482,10544,4939,10545,10546,10547,10548,10549,10550,10551,10552,10553,10554,10555,10556,10557, #10416
10558,10559,10560,10561,10562,10563,10564,10565,10566,10567,3676,4483,10568,10569,10570,10571, #10432
10572,3448,10573,10574,10575,10576,10577,10578,10579,10580,10581,10582,10583,10584,10585,10586, #10448
10587,10588,10589,10590,10591,10592,10593,10594,10595,10596,10597,10598,10599,10600,10601,10602, #10464
10603,10604,10605,10606,10607,10608,10609,10610,10611,10612,10613,10614,10615,10616,10617,10618, #10480
10619,10620,10621,10622,10623,10624,10625,10626,10627,4484,10628,10629,10630,10631,10632,4940, #10496
10633,10634,10635,10636,10637,10638,10639,10640,10641,10642,10643,10644,10645,10646,10647,10648, #10512
10649,10650,10651,10652,10653,10654,10655,10656,4941,10657,10658,10659,2599,10660,10661,10662, #10528
10663,10664,10665,10666,3089,10667,10668,10669,10670,10671,10672,10673,10674,10675,10676,10677, #10544
10678,10679,10680,4942,10681,10682,10683,10684,10685,10686,10687,10688,10689,10690,10691,10692, #10560
10693,10694,10695,10696,10697,4485,10698,10699,10700,10701,10702,10703,10704,4943,10705,3677, #10576
10706,10707,10708,10709,10710,10711,10712,4944,10713,10714,10715,10716,10717,10718,10719,10720, #10592
10721,10722,10723,10724,10725,10726,10727,10728,4945,10729,10730,10731,10732,10733,10734,10735, #10608
10736,10737,10738,10739,10740,10741,10742,10743,10744,10745,10746,10747,10748,10749,10750,10751, #10624
10752,10753,10754,10755,10756,10757,10758,10759,10760,10761,4946,10762,10763,10764,10765,10766, #10640
10767,4947,4948,10768,10769,10770,10771,10772,10773,10774,10775,10776,10777,10778,10779,10780, #10656
10781,10782,10783,10784,10785,10786,10787,10788,10789,10790,10791,10792,10793,10794,10795,10796, #10672
10797,10798,10799,10800,10801,10802,10803,10804,10805,10806,10807,10808,10809,10810,10811,10812, #10688
10813,10814,10815,10816,10817,10818,10819,10820,10821,10822,10823,10824,10825,10826,10827,10828, #10704
10829,10830,10831,10832,10833,10834,10835,10836,10837,10838,10839,10840,10841,10842,10843,10844, #10720
10845,10846,10847,10848,10849,10850,10851,10852,10853,10854,10855,10856,10857,10858,10859,10860, #10736
10861,10862,10863,10864,10865,10866,10867,10868,10869,10870,10871,10872,10873,10874,10875,10876, #10752
10877,10878,4486,10879,10880,10881,10882,10883,10884,10885,4949,10886,10887,10888,10889,10890, #10768
10891,10892,10893,10894,10895,10896,10897,10898,10899,10900,10901,10902,10903,10904,10905,10906, #10784
10907,10908,10909,10910,10911,10912,10913,10914,10915,10916,10917,10918,10919,4487,10920,10921, #10800
10922,10923,10924,10925,10926,10927,10928,10929,10930,10931,10932,4950,10933,10934,10935,10936, #10816
10937,10938,10939,10940,10941,10942,10943,10944,10945,10946,10947,10948,10949,4488,10950,10951, #10832
10952,10953,10954,10955,10956,10957,10958,10959,4190,10960,10961,10962,10963,10964,10965,10966, #10848
10967,10968,10969,10970,10971,10972,10973,10974,10975,10976,10977,10978,10979,10980,10981,10982, #10864
10983,10984,10985,10986,10987,10988,10989,10990,10991,10992,10993,10994,10995,10996,10997,10998, #10880
10999,11000,11001,11002,11003,11004,11005,11006,3960,11007,11008,11009,11010,11011,11012,11013, #10896
11014,11015,11016,11017,11018,11019,11020,11021,11022,11023,11024,11025,11026,11027,11028,11029, #10912
11030,11031,11032,4951,11033,11034,11035,11036,11037,11038,11039,11040,11041,11042,11043,11044, #10928
11045,11046,11047,4489,11048,11049,11050,11051,4952,11052,11053,11054,11055,11056,11057,11058, #10944
4953,11059,11060,11061,11062,11063,11064,11065,11066,11067,11068,11069,11070,11071,4954,11072, #10960
11073,11074,11075,11076,11077,11078,11079,11080,11081,11082,11083,11084,11085,11086,11087,11088, #10976
11089,11090,11091,11092,11093,11094,11095,11096,11097,11098,11099,11100,11101,11102,11103,11104, #10992
11105,11106,11107,11108,11109,11110,11111,11112,11113,11114,11115,3808,11116,11117,11118,11119, #11008
11120,11121,11122,11123,11124,11125,11126,11127,11128,11129,11130,11131,11132,11133,11134,4955, #11024
11135,11136,11137,11138,11139,11140,11141,11142,11143,11144,11145,11146,11147,11148,11149,11150, #11040
11151,11152,11153,11154,11155,11156,11157,11158,11159,11160,11161,4956,11162,11163,11164,11165, #11056
11166,11167,11168,11169,11170,11171,11172,11173,11174,11175,11176,11177,11178,11179,11180,4957, #11072
11181,11182,11183,11184,11185,11186,4958,11187,11188,11189,11190,11191,11192,11193,11194,11195, #11088
11196,11197,11198,11199,11200,3678,11201,11202,11203,11204,11205,11206,4191,11207,11208,11209, #11104
11210,11211,11212,11213,11214,11215,11216,11217,11218,11219,11220,11221,11222,11223,11224,11225, #11120
11226,11227,11228,11229,11230,11231,11232,11233,11234,11235,11236,11237,11238,11239,11240,11241, #11136
11242,11243,11244,11245,11246,11247,11248,11249,11250,11251,4959,11252,11253,11254,11255,11256, #11152
11257,11258,11259,11260,11261,11262,11263,11264,11265,11266,11267,11268,11269,11270,11271,11272, #11168
11273,11274,11275,11276,11277,11278,11279,11280,11281,11282,11283,11284,11285,11286,11287,11288, #11184
11289,11290,11291,11292,11293,11294,11295,11296,11297,11298,11299,11300,11301,11302,11303,11304, #11200
11305,11306,11307,11308,11309,11310,11311,11312,11313,11314,3679,11315,11316,11317,11318,4490, #11216
11319,11320,11321,11322,11323,11324,11325,11326,11327,11328,11329,11330,11331,11332,11333,11334, #11232
11335,11336,11337,11338,11339,11340,11341,11342,11343,11344,11345,11346,11347,4960,11348,11349, #11248
11350,11351,11352,11353,11354,11355,11356,11357,11358,11359,11360,11361,11362,11363,11364,11365, #11264
11366,11367,11368,11369,11370,11371,11372,11373,11374,11375,11376,11377,3961,4961,11378,11379, #11280
11380,11381,11382,11383,11384,11385,11386,11387,11388,11389,11390,11391,11392,11393,11394,11395, #11296
11396,11397,4192,11398,11399,11400,11401,11402,11403,11404,11405,11406,11407,11408,11409,11410, #11312
11411,4962,11412,11413,11414,11415,11416,11417,11418,11419,11420,11421,11422,11423,11424,11425, #11328
11426,11427,11428,11429,11430,11431,11432,11433,11434,11435,11436,11437,11438,11439,11440,11441, #11344
11442,11443,11444,11445,11446,11447,11448,11449,11450,11451,11452,11453,11454,11455,11456,11457, #11360
11458,11459,11460,11461,11462,11463,11464,11465,11466,11467,11468,11469,4963,11470,11471,4491, #11376
11472,11473,11474,11475,4964,11476,11477,11478,11479,11480,11481,11482,11483,11484,11485,11486, #11392
11487,11488,11489,11490,11491,11492,4965,11493,11494,11495,11496,11497,11498,11499,11500,11501, #11408
11502,11503,11504,11505,11506,11507,11508,11509,11510,11511,11512,11513,11514,11515,11516,11517, #11424
11518,11519,11520,11521,11522,11523,11524,11525,11526,11527,11528,11529,3962,11530,11531,11532, #11440
11533,11534,11535,11536,11537,11538,11539,11540,11541,11542,11543,11544,11545,11546,11547,11548, #11456
11549,11550,11551,11552,11553,11554,11555,11556,11557,11558,11559,11560,11561,11562,11563,11564, #11472
4193,4194,11565,11566,11567,11568,11569,11570,11571,11572,11573,11574,11575,11576,11577,11578, #11488
11579,11580,11581,11582,11583,11584,11585,11586,11587,11588,11589,11590,11591,4966,4195,11592, #11504
11593,11594,11595,11596,11597,11598,11599,11600,11601,11602,11603,11604,3090,11605,11606,11607, #11520
11608,11609,11610,4967,11611,11612,11613,11614,11615,11616,11617,11618,11619,11620,11621,11622, #11536
11623,11624,11625,11626,11627,11628,11629,11630,11631,11632,11633,11634,11635,11636,11637,11638, #11552
11639,11640,11641,11642,11643,11644,11645,11646,11647,11648,11649,11650,11651,11652,11653,11654, #11568
11655,11656,11657,11658,11659,11660,11661,11662,11663,11664,11665,11666,11667,11668,11669,11670, #11584
11671,11672,11673,11674,4968,11675,11676,11677,11678,11679,11680,11681,11682,11683,11684,11685, #11600
11686,11687,11688,11689,11690,11691,11692,11693,3809,11694,11695,11696,11697,11698,11699,11700, #11616
11701,11702,11703,11704,11705,11706,11707,11708,11709,11710,11711,11712,11713,11714,11715,11716, #11632
11717,11718,3553,11719,11720,11721,11722,11723,11724,11725,11726,11727,11728,11729,11730,4969, #11648
11731,11732,11733,11734,11735,11736,11737,11738,11739,11740,4492,11741,11742,11743,11744,11745, #11664
11746,11747,11748,11749,11750,11751,11752,4970,11753,11754,11755,11756,11757,11758,11759,11760, #11680
11761,11762,11763,11764,11765,11766,11767,11768,11769,11770,11771,11772,11773,11774,11775,11776, #11696
11777,11778,11779,11780,11781,11782,11783,11784,11785,11786,11787,11788,11789,11790,4971,11791, #11712
11792,11793,11794,11795,11796,11797,4972,11798,11799,11800,11801,11802,11803,11804,11805,11806, #11728
11807,11808,11809,11810,4973,11811,11812,11813,11814,11815,11816,11817,11818,11819,11820,11821, #11744
11822,11823,11824,11825,11826,11827,11828,11829,11830,11831,11832,11833,11834,3680,3810,11835, #11760
11836,4974,11837,11838,11839,11840,11841,11842,11843,11844,11845,11846,11847,11848,11849,11850, #11776
11851,11852,11853,11854,11855,11856,11857,11858,11859,11860,11861,11862,11863,11864,11865,11866, #11792
11867,11868,11869,11870,11871,11872,11873,11874,11875,11876,11877,11878,11879,11880,11881,11882, #11808
11883,11884,4493,11885,11886,11887,11888,11889,11890,11891,11892,11893,11894,11895,11896,11897, #11824
11898,11899,11900,11901,11902,11903,11904,11905,11906,11907,11908,11909,11910,11911,11912,11913, #11840
11914,11915,4975,11916,11917,11918,11919,11920,11921,11922,11923,11924,11925,11926,11927,11928, #11856
11929,11930,11931,11932,11933,11934,11935,11936,11937,11938,11939,11940,11941,11942,11943,11944, #11872
11945,11946,11947,11948,11949,4976,11950,11951,11952,11953,11954,11955,11956,11957,11958,11959, #11888
11960,11961,11962,11963,11964,11965,11966,11967,11968,11969,11970,11971,11972,11973,11974,11975, #11904
11976,11977,11978,11979,11980,11981,11982,11983,11984,11985,11986,11987,4196,11988,11989,11990, #11920
11991,11992,4977,11993,11994,11995,11996,11997,11998,11999,12000,12001,12002,12003,12004,12005, #11936
12006,12007,12008,12009,12010,12011,12012,12013,12014,12015,12016,12017,12018,12019,12020,12021, #11952
12022,12023,12024,12025,12026,12027,12028,12029,12030,12031,12032,12033,12034,12035,12036,12037, #11968
12038,12039,12040,12041,12042,12043,12044,12045,12046,12047,12048,12049,12050,12051,12052,12053, #11984
12054,12055,12056,12057,12058,12059,12060,12061,4978,12062,12063,12064,12065,12066,12067,12068, #12000
12069,12070,12071,12072,12073,12074,12075,12076,12077,12078,12079,12080,12081,12082,12083,12084, #12016
12085,12086,12087,12088,12089,12090,12091,12092,12093,12094,12095,12096,12097,12098,12099,12100, #12032
12101,12102,12103,12104,12105,12106,12107,12108,12109,12110,12111,12112,12113,12114,12115,12116, #12048
12117,12118,12119,12120,12121,12122,12123,4979,12124,12125,12126,12127,12128,4197,12129,12130, #12064
12131,12132,12133,12134,12135,12136,12137,12138,12139,12140,12141,12142,12143,12144,12145,12146, #12080
12147,12148,12149,12150,12151,12152,12153,12154,4980,12155,12156,12157,12158,12159,12160,4494, #12096
12161,12162,12163,12164,3811,12165,12166,12167,12168,12169,4495,12170,12171,4496,12172,12173, #12112
12174,12175,12176,3812,12177,12178,12179,12180,12181,12182,12183,12184,12185,12186,12187,12188, #12128
12189,12190,12191,12192,12193,12194,12195,12196,12197,12198,12199,12200,12201,12202,12203,12204, #12144
12205,12206,12207,12208,12209,12210,12211,12212,12213,12214,12215,12216,12217,12218,12219,12220, #12160
12221,4981,12222,12223,12224,12225,12226,12227,12228,12229,12230,12231,12232,12233,12234,12235, #12176
4982,12236,12237,12238,12239,12240,12241,12242,12243,12244,12245,4983,12246,12247,12248,12249, #12192
4984,12250,12251,12252,12253,12254,12255,12256,12257,12258,12259,12260,12261,12262,12263,12264, #12208
4985,12265,4497,12266,12267,12268,12269,12270,12271,12272,12273,12274,12275,12276,12277,12278, #12224
12279,12280,12281,12282,12283,12284,12285,12286,12287,4986,12288,12289,12290,12291,12292,12293, #12240
12294,12295,12296,2473,12297,12298,12299,12300,12301,12302,12303,12304,12305,12306,12307,12308, #12256
12309,12310,12311,12312,12313,12314,12315,12316,12317,12318,12319,3963,12320,12321,12322,12323, #12272
12324,12325,12326,12327,12328,12329,12330,12331,12332,4987,12333,12334,12335,12336,12337,12338, #12288
12339,12340,12341,12342,12343,12344,12345,12346,12347,12348,12349,12350,12351,12352,12353,12354, #12304
12355,12356,12357,12358,12359,3964,12360,12361,12362,12363,12364,12365,12366,12367,12368,12369, #12320
12370,3965,12371,12372,12373,12374,12375,12376,12377,12378,12379,12380,12381,12382,12383,12384, #12336
12385,12386,12387,12388,12389,12390,12391,12392,12393,12394,12395,12396,12397,12398,12399,12400, #12352
12401,12402,12403,12404,12405,12406,12407,12408,4988,12409,12410,12411,12412,12413,12414,12415, #12368
12416,12417,12418,12419,12420,12421,12422,12423,12424,12425,12426,12427,12428,12429,12430,12431, #12384
12432,12433,12434,12435,12436,12437,12438,3554,12439,12440,12441,12442,12443,12444,12445,12446, #12400
12447,12448,12449,12450,12451,12452,12453,12454,12455,12456,12457,12458,12459,12460,12461,12462, #12416
12463,12464,4989,12465,12466,12467,12468,12469,12470,12471,12472,12473,12474,12475,12476,12477, #12432
12478,12479,12480,4990,12481,12482,12483,12484,12485,12486,12487,12488,12489,4498,12490,12491, #12448
12492,12493,12494,12495,12496,12497,12498,12499,12500,12501,12502,12503,12504,12505,12506,12507, #12464
12508,12509,12510,12511,12512,12513,12514,12515,12516,12517,12518,12519,12520,12521,12522,12523, #12480
12524,12525,12526,12527,12528,12529,12530,12531,12532,12533,12534,12535,12536,12537,12538,12539, #12496
12540,12541,12542,12543,12544,12545,12546,12547,12548,12549,12550,12551,4991,12552,12553,12554, #12512
12555,12556,12557,12558,12559,12560,12561,12562,12563,12564,12565,12566,12567,12568,12569,12570, #12528
12571,12572,12573,12574,12575,12576,12577,12578,3036,12579,12580,12581,12582,12583,3966,12584, #12544
12585,12586,12587,12588,12589,12590,12591,12592,12593,12594,12595,12596,12597,12598,12599,12600, #12560
12601,12602,12603,12604,12605,12606,12607,12608,12609,12610,12611,12612,12613,12614,12615,12616, #12576
12617,12618,12619,12620,12621,12622,12623,12624,12625,12626,12627,12628,12629,12630,12631,12632, #12592
12633,12634,12635,12636,12637,12638,12639,12640,12641,12642,12643,12644,12645,12646,4499,12647, #12608
12648,12649,12650,12651,12652,12653,12654,12655,12656,12657,12658,12659,12660,12661,12662,12663, #12624
12664,12665,12666,12667,12668,12669,12670,12671,12672,12673,12674,12675,12676,12677,12678,12679, #12640
12680,12681,12682,12683,12684,12685,12686,12687,12688,12689,12690,12691,12692,12693,12694,12695, #12656
12696,12697,12698,4992,12699,12700,12701,12702,12703,12704,12705,12706,12707,12708,12709,12710, #12672
12711,12712,12713,12714,12715,12716,12717,12718,12719,12720,12721,12722,12723,12724,12725,12726, #12688
12727,12728,12729,12730,12731,12732,12733,12734,12735,12736,12737,12738,12739,12740,12741,12742, #12704
12743,12744,12745,12746,12747,12748,12749,12750,12751,12752,12753,12754,12755,12756,12757,12758, #12720
12759,12760,12761,12762,12763,12764,12765,12766,12767,12768,12769,12770,12771,12772,12773,12774, #12736
12775,12776,12777,12778,4993,2175,12779,12780,12781,12782,12783,12784,12785,12786,4500,12787, #12752
12788,12789,12790,12791,12792,12793,12794,12795,12796,12797,12798,12799,12800,12801,12802,12803, #12768
12804,12805,12806,12807,12808,12809,12810,12811,12812,12813,12814,12815,12816,12817,12818,12819, #12784
12820,12821,12822,12823,12824,12825,12826,4198,3967,12827,12828,12829,12830,12831,12832,12833, #12800
12834,12835,12836,12837,12838,12839,12840,12841,12842,12843,12844,12845,12846,12847,12848,12849, #12816
12850,12851,12852,12853,12854,12855,12856,12857,12858,12859,12860,12861,4199,12862,12863,12864, #12832
12865,12866,12867,12868,12869,12870,12871,12872,12873,12874,12875,12876,12877,12878,12879,12880, #12848
12881,12882,12883,12884,12885,12886,12887,4501,12888,12889,12890,12891,12892,12893,12894,12895, #12864
12896,12897,12898,12899,12900,12901,12902,12903,12904,12905,12906,12907,12908,12909,12910,12911, #12880
12912,4994,12913,12914,12915,12916,12917,12918,12919,12920,12921,12922,12923,12924,12925,12926, #12896
12927,12928,12929,12930,12931,12932,12933,12934,12935,12936,12937,12938,12939,12940,12941,12942, #12912
12943,12944,12945,12946,12947,12948,12949,12950,12951,12952,12953,12954,12955,12956,1772,12957, #12928
12958,12959,12960,12961,12962,12963,12964,12965,12966,12967,12968,12969,12970,12971,12972,12973, #12944
12974,12975,12976,12977,12978,12979,12980,12981,12982,12983,12984,12985,12986,12987,12988,12989, #12960
12990,12991,12992,12993,12994,12995,12996,12997,4502,12998,4503,12999,13000,13001,13002,13003, #12976
4504,13004,13005,13006,13007,13008,13009,13010,13011,13012,13013,13014,13015,13016,13017,13018, #12992
13019,13020,13021,13022,13023,13024,13025,13026,13027,13028,13029,3449,13030,13031,13032,13033, #13008
13034,13035,13036,13037,13038,13039,13040,13041,13042,13043,13044,13045,13046,13047,13048,13049, #13024
13050,13051,13052,13053,13054,13055,13056,13057,13058,13059,13060,13061,13062,13063,13064,13065, #13040
13066,13067,13068,13069,13070,13071,13072,13073,13074,13075,13076,13077,13078,13079,13080,13081, #13056
13082,13083,13084,13085,13086,13087,13088,13089,13090,13091,13092,13093,13094,13095,13096,13097, #13072
13098,13099,13100,13101,13102,13103,13104,13105,13106,13107,13108,13109,13110,13111,13112,13113, #13088
13114,13115,13116,13117,13118,3968,13119,4995,13120,13121,13122,13123,13124,13125,13126,13127, #13104
4505,13128,13129,13130,13131,13132,13133,13134,4996,4506,13135,13136,13137,13138,13139,4997, #13120
13140,13141,13142,13143,13144,13145,13146,13147,13148,13149,13150,13151,13152,13153,13154,13155, #13136
13156,13157,13158,13159,4998,13160,13161,13162,13163,13164,13165,13166,13167,13168,13169,13170, #13152
13171,13172,13173,13174,13175,13176,4999,13177,13178,13179,13180,13181,13182,13183,13184,13185, #13168
13186,13187,13188,13189,13190,13191,13192,13193,13194,13195,13196,13197,13198,13199,13200,13201, #13184
13202,13203,13204,13205,13206,5000,13207,13208,13209,13210,13211,13212,13213,13214,13215,13216, #13200
13217,13218,13219,13220,13221,13222,13223,13224,13225,13226,13227,4200,5001,13228,13229,13230, #13216
13231,13232,13233,13234,13235,13236,13237,13238,13239,13240,3969,13241,13242,13243,13244,3970, #13232
13245,13246,13247,13248,13249,13250,13251,13252,13253,13254,13255,13256,13257,13258,13259,13260, #13248
13261,13262,13263,13264,13265,13266,13267,13268,3450,13269,13270,13271,13272,13273,13274,13275, #13264
13276,5002,13277,13278,13279,13280,13281,13282,13283,13284,13285,13286,13287,13288,13289,13290, #13280
13291,13292,13293,13294,13295,13296,13297,13298,13299,13300,13301,13302,3813,13303,13304,13305, #13296
13306,13307,13308,13309,13310,13311,13312,13313,13314,13315,13316,13317,13318,13319,13320,13321, #13312
13322,13323,13324,13325,13326,13327,13328,4507,13329,13330,13331,13332,13333,13334,13335,13336, #13328
13337,13338,13339,13340,13341,5003,13342,13343,13344,13345,13346,13347,13348,13349,13350,13351, #13344
13352,13353,13354,13355,13356,13357,13358,13359,13360,13361,13362,13363,13364,13365,13366,13367, #13360
5004,13368,13369,13370,13371,13372,13373,13374,13375,13376,13377,13378,13379,13380,13381,13382, #13376
13383,13384,13385,13386,13387,13388,13389,13390,13391,13392,13393,13394,13395,13396,13397,13398, #13392
13399,13400,13401,13402,13403,13404,13405,13406,13407,13408,13409,13410,13411,13412,13413,13414, #13408
13415,13416,13417,13418,13419,13420,13421,13422,13423,13424,13425,13426,13427,13428,13429,13430, #13424
13431,13432,4508,13433,13434,13435,4201,13436,13437,13438,13439,13440,13441,13442,13443,13444, #13440
13445,13446,13447,13448,13449,13450,13451,13452,13453,13454,13455,13456,13457,5005,13458,13459, #13456
13460,13461,13462,13463,13464,13465,13466,13467,13468,13469,13470,4509,13471,13472,13473,13474, #13472
13475,13476,13477,13478,13479,13480,13481,13482,13483,13484,13485,13486,13487,13488,13489,13490, #13488
13491,13492,13493,13494,13495,13496,13497,13498,13499,13500,13501,13502,13503,13504,13505,13506, #13504
13507,13508,13509,13510,13511,13512,13513,13514,13515,13516,13517,13518,13519,13520,13521,13522, #13520
13523,13524,13525,13526,13527,13528,13529,13530,13531,13532,13533,13534,13535,13536,13537,13538, #13536
13539,13540,13541,13542,13543,13544,13545,13546,13547,13548,13549,13550,13551,13552,13553,13554, #13552
13555,13556,13557,13558,13559,13560,13561,13562,13563,13564,13565,13566,13567,13568,13569,13570, #13568
13571,13572,13573,13574,13575,13576,13577,13578,13579,13580,13581,13582,13583,13584,13585,13586, #13584
13587,13588,13589,13590,13591,13592,13593,13594,13595,13596,13597,13598,13599,13600,13601,13602, #13600
13603,13604,13605,13606,13607,13608,13609,13610,13611,13612,13613,13614,13615,13616,13617,13618, #13616
13619,13620,13621,13622,13623,13624,13625,13626,13627,13628,13629,13630,13631,13632,13633,13634, #13632
13635,13636,13637,13638,13639,13640,13641,13642,5006,13643,13644,13645,13646,13647,13648,13649, #13648
13650,13651,5007,13652,13653,13654,13655,13656,13657,13658,13659,13660,13661,13662,13663,13664, #13664
13665,13666,13667,13668,13669,13670,13671,13672,13673,13674,13675,13676,13677,13678,13679,13680, #13680
13681,13682,13683,13684,13685,13686,13687,13688,13689,13690,13691,13692,13693,13694,13695,13696, #13696
13697,13698,13699,13700,13701,13702,13703,13704,13705,13706,13707,13708,13709,13710,13711,13712, #13712
13713,13714,13715,13716,13717,13718,13719,13720,13721,13722,13723,13724,13725,13726,13727,13728, #13728
13729,13730,13731,13732,13733,13734,13735,13736,13737,13738,13739,13740,13741,13742,13743,13744, #13744
13745,13746,13747,13748,13749,13750,13751,13752,13753,13754,13755,13756,13757,13758,13759,13760, #13760
13761,13762,13763,13764,13765,13766,13767,13768,13769,13770,13771,13772,13773,13774,3273,13775, #13776
13776,13777,13778,13779,13780,13781,13782,13783,13784,13785,13786,13787,13788,13789,13790,13791, #13792
13792,13793,13794,13795,13796,13797,13798,13799,13800,13801,13802,13803,13804,13805,13806,13807, #13808
13808,13809,13810,13811,13812,13813,13814,13815,13816,13817,13818,13819,13820,13821,13822,13823, #13824
13824,13825,13826,13827,13828,13829,13830,13831,13832,13833,13834,13835,13836,13837,13838,13839, #13840
13840,13841,13842,13843,13844,13845,13846,13847,13848,13849,13850,13851,13852,13853,13854,13855, #13856
13856,13857,13858,13859,13860,13861,13862,13863,13864,13865,13866,13867,13868,13869,13870,13871, #13872
13872,13873,13874,13875,13876,13877,13878,13879,13880,13881,13882,13883,13884,13885,13886,13887, #13888
13888,13889,13890,13891,13892,13893,13894,13895,13896,13897,13898,13899,13900,13901,13902,13903, #13904
13904,13905,13906,13907,13908,13909,13910,13911,13912,13913,13914,13915,13916,13917,13918,13919, #13920
13920,13921,13922,13923,13924,13925,13926,13927,13928,13929,13930,13931,13932,13933,13934,13935, #13936
13936,13937,13938,13939,13940,13941,13942,13943,13944,13945,13946,13947,13948,13949,13950,13951, #13952
13952,13953,13954,13955,13956,13957,13958,13959,13960,13961,13962,13963,13964,13965,13966,13967, #13968
13968,13969,13970,13971,13972) #13973
# flake8: noqa
| gpl-3.0 |
Akasurde/virt-manager | virtinst/devicechar.py | 3 | 10828 | #
# Copyright 2009, 2013 Red Hat, Inc.
# Cole Robinson <crobinso@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301 USA.
from .device import VirtualDevice
from .xmlbuilder import XMLProperty
class _VirtualCharDevice(VirtualDevice):
"""
Base class for all character devices. Shouldn't be instantiated
directly.
"""
TYPE_PTY = "pty"
TYPE_DEV = "dev"
TYPE_STDIO = "stdio"
TYPE_PIPE = "pipe"
TYPE_FILE = "file"
TYPE_VC = "vc"
TYPE_NULL = "null"
TYPE_TCP = "tcp"
TYPE_UDP = "udp"
TYPE_UNIX = "unix"
TYPE_SPICEVMC = "spicevmc"
TYPE_SPICEPORT = "spiceport"
# We don't list the non-UI friendly types here
_TYPES_FOR_ALL = [TYPE_PTY, TYPE_DEV, TYPE_FILE,
TYPE_TCP, TYPE_UDP, TYPE_UNIX]
_TYPES_FOR_CHANNEL = [TYPE_SPICEVMC, TYPE_SPICEPORT]
TYPES = _TYPES_FOR_ALL
MODE_CONNECT = "connect"
MODE_BIND = "bind"
MODES = [MODE_CONNECT, MODE_BIND]
PROTOCOL_RAW = "raw"
PROTOCOL_TELNET = "telnet"
PROTOCOLS = [PROTOCOL_RAW, PROTOCOL_TELNET]
CHANNEL_TARGET_GUESTFWD = "guestfwd"
CHANNEL_TARGET_VIRTIO = "virtio"
CHANNEL_TARGETS = [CHANNEL_TARGET_GUESTFWD,
CHANNEL_TARGET_VIRTIO]
CONSOLE_TARGET_SERIAL = "serial"
CONSOLE_TARGET_UML = "uml"
CONSOLE_TARGET_XEN = "xen"
CONSOLE_TARGET_VIRTIO = "virtio"
CONSOLE_TARGETS = [CONSOLE_TARGET_SERIAL, CONSOLE_TARGET_UML,
CONSOLE_TARGET_XEN, CONSOLE_TARGET_VIRTIO]
CHANNEL_NAME_SPICE = "com.redhat.spice.0"
CHANNEL_NAME_QEMUGA = "org.qemu.guest_agent.0"
CHANNEL_NAME_LIBGUESTFS = "org.libguestfs.channel.0"
CHANNEL_NAME_SPICE_WEBDAV = "org.spice-space.webdav.0"
CHANNEL_NAMES = [CHANNEL_NAME_SPICE,
CHANNEL_NAME_QEMUGA,
CHANNEL_NAME_LIBGUESTFS,
CHANNEL_NAME_SPICE_WEBDAV]
@staticmethod
def pretty_channel_name(val):
if val == _VirtualCharDevice.CHANNEL_NAME_SPICE:
return "spice"
if val == _VirtualCharDevice.CHANNEL_NAME_QEMUGA:
return "qemu-ga"
if val == _VirtualCharDevice.CHANNEL_NAME_LIBGUESTFS:
return "libguestfs"
if val == _VirtualCharDevice.CHANNEL_NAME_SPICE_WEBDAV:
return "spice-webdav"
return None
@staticmethod
def pretty_type(ctype):
"""
Return a human readable description of the passed char type
"""
desc = ""
if ctype == _VirtualCharDevice.TYPE_PTY:
desc = _("Pseudo TTY")
elif ctype == _VirtualCharDevice.TYPE_DEV:
desc = _("Physical host character device")
elif ctype == _VirtualCharDevice.TYPE_STDIO:
desc = _("Standard input/output")
elif ctype == _VirtualCharDevice.TYPE_PIPE:
desc = _("Named pipe")
elif ctype == _VirtualCharDevice.TYPE_FILE:
desc = _("Output to a file")
elif ctype == _VirtualCharDevice.TYPE_VC:
desc = _("Virtual console")
elif ctype == _VirtualCharDevice.TYPE_NULL:
desc = _("Null device")
elif ctype == _VirtualCharDevice.TYPE_TCP:
desc = _("TCP net console")
elif ctype == _VirtualCharDevice.TYPE_UDP:
desc = _("UDP net console")
elif ctype == _VirtualCharDevice.TYPE_UNIX:
desc = _("Unix socket")
elif ctype == _VirtualCharDevice.TYPE_SPICEVMC:
desc = _("Spice agent")
elif ctype == _VirtualCharDevice.TYPE_SPICEPORT:
desc = _("Spice port")
return desc
@staticmethod
def pretty_mode(char_mode):
"""
Return a human readable description of the passed char mode
"""
desc = ""
if char_mode == _VirtualCharDevice.MODE_CONNECT:
desc = _("Client mode")
elif char_mode == _VirtualCharDevice.MODE_BIND:
desc = _("Server mode")
return desc
def supports_property(self, propname, ro=False):
"""
Whether the character dev type supports the passed property name
"""
users = {
"source_path" : [self.TYPE_FILE, self.TYPE_UNIX,
self.TYPE_DEV, self.TYPE_PIPE],
"source_mode" : [self.TYPE_UNIX, self.TYPE_TCP],
"source_host" : [self.TYPE_TCP, self.TYPE_UDP],
"source_port" : [self.TYPE_TCP, self.TYPE_UDP],
"source_channel": [self.TYPE_SPICEPORT],
"protocol" : [self.TYPE_TCP],
"bind_host" : [self.TYPE_UDP],
"bind_port" : [self.TYPE_UDP],
}
if ro:
users["source_path"] += [self.TYPE_PTY]
if users.get(propname):
return self.type in users[propname]
return hasattr(self, propname)
def set_defaults(self, guest):
ignore = guest
if not self.source_mode and self.supports_property("source_mode"):
self.source_mode = self.MODE_BIND
def _set_host_helper(self, hostparam, portparam, val):
def parse_host(val):
host, ignore, port = (val or "").partition(":")
return host or None, port or None
host, port = parse_host(val)
if not host:
host = "127.0.0.1"
if host:
setattr(self, hostparam, host)
if port:
setattr(self, portparam, port)
def set_friendly_source(self, val):
self._set_host_helper("source_host", "source_port", val)
def set_friendly_bind(self, val):
self._set_host_helper("bind_host", "bind_port", val)
def set_friendly_target(self, val):
self._set_host_helper("target_address", "target_port", val)
_XML_PROP_ORDER = ["type", "_has_mode_bind", "_has_mode_connect",
"bind_host", "bind_port",
"source_mode", "source_host", "source_port",
"_source_path", "source_channel",
"target_type", "target_name"]
type = XMLProperty("./@type",
doc=_("Method used to expose character device in the host."))
_tty = XMLProperty("./@tty")
_source_path = XMLProperty("./source/@path",
doc=_("Host input path to attach to the guest."))
def _get_source_path(self):
source = self._source_path
if source is None and self._tty:
return self._tty
return source
def _set_source_path(self, val):
self._source_path = val
source_path = property(_get_source_path, _set_source_path)
source_channel = XMLProperty("./source/@channel",
doc=_("Source channel name."))
###################
# source handling #
###################
source_mode = XMLProperty("./source/@mode")
_has_mode_connect = XMLProperty("./source[@mode='connect']/@mode")
_has_mode_bind = XMLProperty("./source[@mode='bind']/@mode")
def _set_source_validate(self, val):
if val is None:
return None
self._has_mode_connect = self.MODE_CONNECT
return val
source_host = XMLProperty("./source[@mode='connect']/@host",
doc=_("Host address to connect to."),
set_converter=_set_source_validate)
source_port = XMLProperty("./source[@mode='connect']/@service",
doc=_("Host port to connect to."),
set_converter=_set_source_validate,
is_int=True)
def _set_bind_validate(self, val):
if val is None:
return None
self._has_mode_bind = self.MODE_BIND
return val
bind_host = XMLProperty("./source[@mode='bind']/@host",
doc=_("Host address to bind to."),
set_converter=_set_bind_validate)
bind_port = XMLProperty("./source[@mode='bind']/@service",
doc=_("Host port to bind to."),
set_converter=_set_bind_validate,
is_int=True)
#######################
# Remaining XML props #
#######################
def _get_default_protocol(self):
if not self.supports_property("protocol"):
return None
return self.PROTOCOL_RAW
protocol = XMLProperty("./protocol/@type",
doc=_("Format used when sending data."),
default_cb=_get_default_protocol)
def _get_default_target_type(self):
if self.virtual_device_type == "channel":
return self.CHANNEL_TARGET_VIRTIO
return None
target_type = XMLProperty("./target/@type",
doc=_("Channel type as exposed in the guest."),
default_cb=_get_default_target_type)
target_address = XMLProperty("./target/@address",
doc=_("Guest forward channel address in the guest."))
target_port = XMLProperty("./target/@port", is_int=True,
doc=_("Guest forward channel port in the guest."))
def _default_target_name(self):
if self.type == self.TYPE_SPICEVMC:
return self.CHANNEL_NAME_SPICE
return None
target_name = XMLProperty("./target/@name",
doc=_("Sysfs name of virtio port in the guest"),
default_cb=_default_target_name)
class VirtualConsoleDevice(_VirtualCharDevice):
virtual_device_type = "console"
TYPES = [_VirtualCharDevice.TYPE_PTY]
class VirtualSerialDevice(_VirtualCharDevice):
virtual_device_type = "serial"
class VirtualParallelDevice(_VirtualCharDevice):
virtual_device_type = "parallel"
class VirtualChannelDevice(_VirtualCharDevice):
virtual_device_type = "channel"
TYPES = (_VirtualCharDevice._TYPES_FOR_CHANNEL +
_VirtualCharDevice._TYPES_FOR_ALL)
VirtualConsoleDevice.register_type()
VirtualSerialDevice.register_type()
VirtualParallelDevice.register_type()
VirtualChannelDevice.register_type()
| gpl-2.0 |
NeonMercury/python-lua | runtests.py | 1 | 2759 | #!/usr/bin/env python3
"""Run all tests from the tests folder"""
import os
import re
import subprocess
import sys
from tempfile import mkstemp
from colorama import init, Fore, Style
from pythonlua.translator import Translator
TESTS_FOLDER = "tests"
LUA_PATH = "lua"
EXPECTED_FORMAT = "{}.expected"
def get_all_tests(folder):
"""Get all test filenames"""
filenames = [os.path.join(folder, f) for f in os.listdir(folder)
if re.match(r".*\.py$", f)]
for fname in filenames:
test_name = fname
expected = EXPECTED_FORMAT.format(test_name)
if not os.path.isfile(fname):
raise RuntimeError("Object '{}' is not a file.".format(test_name))
if not os.path.isfile(expected):
raise RuntimeError("Expected output in a file '{}'.".format(expected))
return filenames
def make_test(filename):
"""Make test"""
print("Testing file: {}".format(filename), end=" ")
content = None
expected = None
with open(filename) as file:
content = file.read()
with open(EXPECTED_FORMAT.format(filename)) as file:
expected = file.read()
if content is None or expected is None:
return False
result = None
try:
translator = Translator()
lua_code = translator.translate(content)
file_desc, filename = mkstemp()
tmp_file = os.fdopen(file_desc, "w")
tmp_file.write(Translator.get_luainit() + "\n")
tmp_file.write(lua_code)
tmp_file.flush()
tmp_file.close()
output = []
proc = subprocess.Popen([LUA_PATH, filename],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,)
while True:
line = proc.stdout.readline()
if line == b"":
break
else:
output.append(line.decode("utf-8"))
output = "".join(output)
os.remove(filename)
output = [item.strip() for item in output.split("\n")]
expected = [item.strip() for item in expected.split("\n")]
if output != expected:
print("output: ", output)
print("expected: ", expected)
result = output == expected
except RuntimeError:
result = False
print(Fore.GREEN + "PASSED" if result else Fore.RED + "FAILED")
print(Style.RESET_ALL, end="")
return result
def main():
"""Main tests entrypoint"""
init()
tests = get_all_tests(TESTS_FOLDER)
passed = 0
for test in tests:
if make_test(test):
passed += 1
print("=" * 80)
print("Passed: {}/{}".format(passed, len(tests)))
return 0
if __name__ == "__main__":
sys.exit(main()) | apache-2.0 |
akintolga/superdesk-core | apps/publish/enqueue/enqueue_corrected.py | 1 | 3860 | # -*- coding: utf-8; -*-
#
# This file is part of Superdesk.
#
# Copyright 2013, 2014, 2015 Sourcefabric z.u. and contributors.
#
# For the full copyright and license information, please see the
# AUTHORS and LICENSE files distributed with this source code, or
# at https://www.sourcefabric.org/superdesk/license
from superdesk import get_resource_service
from superdesk.metadata.item import EMBARGO, CONTENT_STATE
from superdesk.publish import SUBSCRIBER_TYPES
from superdesk.utc import utcnow
from apps.archive.common import get_utc_schedule
from eve.utils import config
from apps.publish.enqueue.enqueue_service import EnqueueService
class EnqueueCorrectedService(EnqueueService):
publish_type = 'correct'
published_state = 'corrected'
def get_subscribers(self, doc, target_media_type):
"""
Get the subscribers for this document based on the target_media_type for article Correction.
1. The article is sent to Subscribers (digital and wire) who has received the article previously.
2. For subsequent takes, only published to previously published wire clients. Digital clients don't get
individual takes but digital client takes package.
3. If the item has embargo and is a future date then fetch active Wire Subscribers.
Otherwise fetch Active Subscribers. After fetching exclude those who received the article previously from
active subscribers list.
4. If article has 'targeted_for' property then exclude subscribers of type Internet from Subscribers list.
5. Filter the subscriber that have not received the article previously against publish filters
and global filters for this document.
:param doc: Document to correct
:param target_media_type: dictates if the doc being queued is a Takes Package or an Individual Article.
Valid values are - Wire, Digital. If Digital then the doc being queued is a Takes Package and if Wire
then the doc being queued is an Individual Article.
:return: (list, list, dict) List of filtered subscribers, list of subscribers that have not received the
item previously, and the dict of subscriber codes
"""
subscribers, subscribers_yet_to_receive = [], []
# step 1
query = {'$and': [{'item_id': doc['item_id']},
{'publishing_action': {'$in': [CONTENT_STATE.PUBLISHED, CONTENT_STATE.CORRECTED]}}]}
subscribers, subscriber_codes = self._get_subscribers_for_previously_sent_items(query)
if subscribers:
# step 2
if not self.takes_package_service.get_take_package_id(doc):
# Step 3
query = {'is_active': True}
if doc.get(EMBARGO) and get_utc_schedule(doc, EMBARGO) > utcnow():
query['subscriber_type'] = SUBSCRIBER_TYPES.WIRE
active_subscribers = list(get_resource_service('subscribers').get(req=None, lookup=query))
subscribers_yet_to_receive = [a for a in active_subscribers
if not any(a[config.ID_FIELD] == s[config.ID_FIELD]
for s in subscribers)]
if len(subscribers_yet_to_receive) > 0:
# Step 4
if doc.get('targeted_for'):
subscribers_yet_to_receive = list(self.non_digital(subscribers_yet_to_receive))
# Step 5
subscribers_yet_to_receive, codes = \
self.filter_subscribers(doc, subscribers_yet_to_receive,
SUBSCRIBER_TYPES.WIRE if doc.get('targeted_for') else target_media_type)
if codes:
subscriber_codes.update(codes)
return subscribers, subscribers_yet_to_receive, subscriber_codes
| agpl-3.0 |
bjarniegill/Cordova-Survey | csv_parser/.env/lib/python2.7/site-packages/pip/_vendor/requests/packages/__init__.py | 838 | 1384 | '''
Debian and other distributions "unbundle" requests' vendored dependencies, and
rewrite all imports to use the global versions of ``urllib3`` and ``chardet``.
The problem with this is that not only requests itself imports those
dependencies, but third-party code outside of the distros' control too.
In reaction to these problems, the distro maintainers replaced
``requests.packages`` with a magical "stub module" that imports the correct
modules. The implementations were varying in quality and all had severe
problems. For example, a symlink (or hardlink) that links the correct modules
into place introduces problems regarding object identity, since you now have
two modules in `sys.modules` with the same API, but different identities::
requests.packages.urllib3 is not urllib3
With version ``2.5.2``, requests started to maintain its own stub, so that
distro-specific breakage would be reduced to a minimum, even though the whole
issue is not requests' fault in the first place. See
https://github.com/kennethreitz/requests/pull/2375 for the corresponding pull
request.
'''
from __future__ import absolute_import
import sys
try:
from . import urllib3
except ImportError:
import urllib3
sys.modules['%s.urllib3' % __name__] = urllib3
try:
from . import chardet
except ImportError:
import chardet
sys.modules['%s.chardet' % __name__] = chardet
| mit |
sauloal/arduino | control/firmata/autoglade/build/lib/autoglade/autoglade.py | 2 | 86702 | #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""
$Id: autoglade.py 88 2009-02-03 15:35:03Z dtmilano $
"""
__license__ = """
Copyright (C) 2007-2009 Diego Torres Milano <diego@codtech.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
USA
"""
__version__ = '0.5'
__rev__ = '$Rev: 88 $'
"""
AutoGlade
@var AUTO_INVOKE_RE: Default regular expression to map Glade widget names
@type AUTO_INVOKE_RE: str
@var AUTO_RADIOBUTTON_NAME_RE: Default regular expression to obtain the
main radio button name from the full name
@type AUTO_RADIOBUTTON_NAME_RE: str
@var AUTO_INVOKE_WIDGET: Default position of the widget name group in L{AUTO_INVOKE_RE}
@type AUTO_INVOKE_WIDGET: int
@var AUTO_INVOKE_METHOD: Default position of the group indicating the method name to invoke
@type AUTO_INVOKE_METHOD: int
@var AGO_POSTPONED: Constant to indicate that some initialization is
deferred because the value is not yet available.
Used in L{AUtoGladeObject}.
@type AGO_POSTPONED: int
"""
AUTO_DELIMITER = ':'
# greedy
#AUTO_INVOKE_RE = '(.*)' + AUTO_DELIMITER + 'auto' + AUTO_DELIMITER + \
# '([^' + AUTO_DELIMITER + ']*)(' + AUTO_DELIMITER +'(.+))?'
# non greedy
# to allow more than one 'auto' in name
AUTO_INVOKE_RE = '(.*?)' + AUTO_DELIMITER + 'auto' + AUTO_DELIMITER + \
'([^' + AUTO_DELIMITER + ']*)(' + AUTO_DELIMITER +'(.+))?'
AUTO_INVOKE_WIDGET = 1 # the position containing widget name
# FIXME:
# this was AUTO_INVOKE_OBJECT, and perhaps that is a better name, because
# it's possible that objects are being passed instead of methods.
# Verify its usage.
AUTO_INVOKE_METHOD = 2 # the position of the group inside the re containing method name
AUTO_INVOKE_HASARGS = 3 # the position of the group inside the re containing method args
AUTO_INVOKE_ARGS = 4 # the position of the group inside the re containing the arg
AUTO_TREEVIEW_SET_CELL_RE = 'setTreeview(.+)Cell(\d+)'
AUTO_RADIOBUTTON_NAME_RE = '(.+\D)\d+$'
AGO_POSTPONED = -2 # AutoGladeObject POSTPONED
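As a quick check of the naming convention above, the two patterns can be exercised on their own; this is a standalone sketch (no GTK needed) using a hypothetical widget name `entry1:auto:init:env`:

```python
import re

# same patterns as defined above, repeated so the sketch is self-contained
AUTO_DELIMITER = ':'
AUTO_INVOKE_RE = ('(.*?)' + AUTO_DELIMITER + 'auto' + AUTO_DELIMITER +
                  '([^' + AUTO_DELIMITER + ']*)(' + AUTO_DELIMITER + '(.+))?')
AUTO_RADIOBUTTON_NAME_RE = r'(.+\D)\d+$'

# 'entry1:auto:init:env' maps to widget 'entry1', method 'init', args 'env'
m = re.compile(AUTO_INVOKE_RE).match('entry1:auto:init:env')
print('%s %s %s' % (m.group(1), m.group(2), m.group(4)))  # entry1 init env

# 'radiobutton12' strips the trailing digits to get the radio group base name
m2 = re.compile(AUTO_RADIOBUTTON_NAME_RE).match('radiobutton12')
print(m2.group(1))  # radiobutton
```

The non-greedy `(.*?)` in the first group is what allows more than one `auto` command chained in a single widget name, as the comment above notes.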
AGO_DIALOG_PREFERENCES = 'dialogPreferences'
AGO_BUTTON_PREFERENCES = 'buttonPreferences'
AGO_MENU_ITEM_PREFERENCES = 'menuItemPreferences'
AGO_TOOL_BUTTON_PREFERENCES = 'toolButtonPreferences'
AGO_BUTTON_NEW = 'buttonNew'
AGO_MENU_ITEM_NEW = 'menuItemNew'
AGO_TOOL_BUTTON_NEW = 'toolButtonNew'
AGO_BUTTON_OPEN = 'buttonOpen'
AGO_MENU_ITEM_OPEN = 'menuItemOpen'
AGO_TOOL_BUTTON_OPEN = 'toolButtonOpen'
AGO_BUTTON_CLOSE = 'buttonClose'
AGO_MENU_ITEM_CLOSE = 'menuItemClose'
AGO_TOOL_BUTTON_CLOSE = 'toolButtonClose'
AGO_BUTTON_SAVE = 'buttonSave'
AGO_MENU_ITEM_SAVE = 'menuItemSave'
AGO_TOOL_BUTTON_SAVE = 'toolButtonSave'
AGO_MENU_ITEM_SAVE_AS = 'menuItemSaveas' # should be SaveAs, but...
AGO_BUTTON_COPY = 'buttonCopy'
AGO_MENU_ITEM_COPY = 'menuItemCopy'
AGO_TOOL_BUTTON_COPY = 'toolButtonCopy'
AGO_BUTTON_CUT = 'buttonCut'
AGO_MENU_ITEM_CUT = 'menuItemCut'
AGO_TOOL_BUTTON_CUT = 'toolButtonCut'
AGO_BUTTON_PASTE = 'buttonPaste'
AGO_MENU_ITEM_PASTE = 'menuItemPaste'
AGO_TOOL_BUTTON_PASTE = 'toolButtonPaste'
AGO_BUTTON_DELETE = 'buttonDelete'
AGO_MENU_ITEM_DELETE = 'menuItemDelete'
AGO_TOOL_BUTTON_DELETE = 'toolButtonDelete'
AGO_BUTTON_QUIT = 'buttonQuit'
AGO_MENU_ITEM_QUIT = 'menuItemQuit'
AGO_TOOL_BUTTON_QUIT = 'toolButtonQuit'
AGO_DIALOG_ABOUT = 'dialogAbout'
AGO_BUTTON_ABOUT = 'buttonAbout'
AGO_MENU_ITEM_ABOUT = 'menuItemAbout'
AGO_TOOL_BUTTON_ABOUT = 'toolButtonAbout'
AGOS = [
# file
AGO_BUTTON_NEW,
AGO_MENU_ITEM_NEW,
AGO_TOOL_BUTTON_NEW,
AGO_BUTTON_OPEN,
AGO_MENU_ITEM_OPEN,
AGO_TOOL_BUTTON_OPEN,
AGO_BUTTON_CLOSE,
AGO_MENU_ITEM_CLOSE,
AGO_TOOL_BUTTON_CLOSE,
AGO_BUTTON_SAVE,
AGO_MENU_ITEM_SAVE,
AGO_TOOL_BUTTON_SAVE,
AGO_MENU_ITEM_SAVE_AS,
# clipboard
AGO_BUTTON_COPY,
AGO_MENU_ITEM_COPY,
AGO_TOOL_BUTTON_COPY,
AGO_BUTTON_CUT,
AGO_MENU_ITEM_CUT,
AGO_TOOL_BUTTON_CUT,
AGO_BUTTON_PASTE,
AGO_MENU_ITEM_PASTE,
AGO_TOOL_BUTTON_PASTE,
AGO_BUTTON_DELETE,
AGO_MENU_ITEM_DELETE,
AGO_TOOL_BUTTON_DELETE,
# actions
AGO_BUTTON_QUIT,
AGO_MENU_ITEM_QUIT,
AGO_TOOL_BUTTON_QUIT,
# dialogs
AGO_DIALOG_ABOUT,
AGO_BUTTON_ABOUT,
AGO_MENU_ITEM_ABOUT,
AGO_TOOL_BUTTON_ABOUT,
AGO_DIALOG_PREFERENCES,
AGO_BUTTON_PREFERENCES,
AGO_MENU_ITEM_PREFERENCES,
AGO_TOOL_BUTTON_PREFERENCES,
]
ASI_STOCK = 0
ASI_GTKCLASS = 1
import sys
import os
import re
import gobject
import gtk.glade
try:
import gconf
except:
pass
import warnings
import traceback
from optparse import OptionParser
from xml.dom import minidom
prog = os.path.basename(sys.argv[0])
version = __version__
revision = __rev__
#DEBUG = None
DEBUG = [ '__map.*' ]
#DEBUG = [ '__map.*', '__autoConnect.*' ]
#DEBUG = [ '__autoConnectAGO' ]
#DEBUG = [ '__autoConnect', 'autoCopy', 'autoOpen', 'open', 'autoNew',
# 'autoSaveas', 'save']
#WARNING = [ '__autoConnect' ]
WARNING = None
colors = {"default":"",
"blue": "\x1b[01;34m",
"cyan": "\x1b[01;36m",
"green": "\x1b[01;32m",
"red": "\x1b[01;05;37;41m",
"magenta": "\x1b[01;35m",
"sgr0": "\x1b[m\x1b(B"
}
CYAN = colors['cyan']
RED = colors['red']
BLUE = colors['blue']
GREEN = colors['green']
MAGENTA = colors['magenta']
SGR0 = colors['sgr0']
def FN():
"""
Get the function name from the previous (calling) frame
"""
return sys._getframe(1).f_code.co_name
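FN() relies on CPython frame introspection to tag debug output with the caller's name; a tiny standalone sketch (with a hypothetical `greeter` function) shows what it returns:

```python
import sys

def FN():
    # name of the function that called FN(), one frame up the stack
    return sys._getframe(1).f_code.co_name

def greeter():
    return FN()

print(greeter())  # greeter
```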
def warning(str, *cond):
if WARNING:
if cond:
for c in cond:
if c in WARNING:
break
return
warnings.warn(str, RuntimeWarning)
#def debug(__str, cond=FN()):
def debug(__str, cond=True):
DEBUGDEBUG = False
if DEBUGDEBUG:
print >>sys.stderr, "debug(%s, %s)" % (__str, cond)
if DEBUG:
if cond:
found = False
if isinstance(cond, str):
# don't iterate over str (would get chars)
cond = [ cond ]
try:
for c in cond:
if DEBUGDEBUG:
print >>sys.stderr, "&&& debug: testing %s in %s" % (c, DEBUG)
if c in DEBUG:
if DEBUGDEBUG:
print >>sys.stderr, "&&& debug: true"
found = True
break
for d in DEBUG:
if DEBUGDEBUG:
print >>sys.stderr, "&&& debug: testing %s match re %s" % (c, d)
if re.compile(d).match(c):
if DEBUGDEBUG:
print >>sys.stderr, "&&& debug: true"
found = True
break
if not found:
return
cond = " ".join(cond)
except TypeError:
# not iterable, should be a bool (True)
pass
print >>sys.stderr, "%s: %s" % (cond, __str)
EMPTY_GLADE = """<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE glade-interface SYSTEM "glade-2.0.dtd">
<!--Generated with glade3 3.2.0 on Tue Sep 25 21:27:17 2007 by diego@bruce-->
<glade-interface>
</glade-interface>
"""
INPUT_CLASS = ['GtkRadioButton', 'GtkCheckButton',
'GtkToggleButton', 'GtkEntry', 'GtkHScale', 'GtkVScale',
'GtkSpinButton', 'GtkComboBox', 'GtkFileChooserButton',
'GtkFontButton', 'GtkColorButton', 'GtkCalendar',
'GtkCurve', 'GtkGammaCurve',
'GtkTreeView',
]
class AutoGladeAttributeError(AttributeError):
"""
AutoGladeAttributeError is an identifiable AttributeError
"""
def __init__(self, key):
"""
Constructor
@param key: key not found
@type key: str
"""
AttributeError.__init__(self, key)
class AutoGladeRuntimeError(RuntimeError):
"""
AutoGladeRuntimeError
"""
def __init__(self, msg):
RuntimeError.__init__(self, msg)
class AutoGladeItemNotFoundException(Exception):
"""
AutoGladeItemNotFoundException
"""
def __init__(self, msg):
Exception.__init__(self, msg)
class AutoGladeObject:
"""
AutoGladeObject is a utility class that ties together the Glade
widget, its name and the XML element in a particular autoglade instance.
FIXME:
Composite pattern should be applied to be able to handle a widget or
a list of widgets, when there's more than one.
This could be when there's more than one stock item
(see tests/new-double.glade)
"""
DEBUG = None
__autoglade = None
__name = None
__widget = None
__element = None
def __init__(self, autoglade, name=None, widget=None, element=None):
"""
Constructor
@param autoglade: Autoglade object to which this object is related
@type autoglade: AutoGlade.AutoGlade
@param name: Name of the AutoGladeObject
@type name: str
@param widget: Widget of the AutoGladeObject. This accepts the
special value L{autoglade.AGO_POSTPONED} to defer the
initialization of the widget until some conditions are met.
@type widget: L{gtk.Widget}
@param element: Element of the AutoGladeObject
@type element: str
@raise AutoGladeRuntimeError: If the AutoGladeObject cannot be
initialized because some values are missing this L{Exception}
is raised.
"""
cond = FN()
debug("%s: AutoGladeObject(autoglade=%s, name=%s, " \
"element=%s, widget=%s)" % \
(cond, autoglade, name, element, widget), cond)
self.__autoglade = None
self.__name = None
self.__widget = None
self.__element = None
self.__autoglade = autoglade
if name:
self.__name = name
if widget:
self.__widget = widget
if element:
self.__element = element
if not self.__element:
if not self.__name:
if self.__widget:
self.__name = self.__widget.get_name()
if not self.__name:
raise AutoGladeRuntimeError("Cannot get widget name for widget=%s" % self.__widget)
else:
raise AutoGladeRuntimeError("Cannot determine element unless name or widget are specified.")
self.__element = self.__getElementByName(self.__name)
if not self.__widget:
if not self.__name:
self.__name = self.__element.getAttribute('id')
else:
self.__widget = self.__autoglade.__getattr__(self.__name)
if not (self.__name and self.__widget and self.__element):
raise AutoGladeRuntimeError("One attribute is missing (name=%s, element=%s, widget=%s)" %
(self.__name, self.__element, self.__widget))
def getName(self):
return self.__name
def getWidget(self):
debug("AutoGladeObject::getWidget()\n" +
"\t__widget = %s\n" % self.__widget +
"\t__name = %s" % self.__name,
FN())
if self.__widget != AGO_POSTPONED:
return self.__widget
else:
self.__widget = self.__autoglade.__getattr__(self.__name)
return self.__widget
def getElement(self):
return self.__element
def __getElementByName(self, name):
"""
Get the DOM element by name.
@param name: Element name to find
@type name: str
@raise AutoGladeRuntimeError: If there's no element matching name
"""
cond = FN()
for element in self.__autoglade.getGladeInterface().getElementsByTagName('widget'):
debug("__getElementByName::Looking for %s found %s" % (name,
element.getAttribute('id')), cond)
if element.getAttribute('id') == name:
return element
debug("\tNOT FOUND !", cond)
raise AutoGladeRuntimeError("Couldn't find element for name=%s" %
name)
def connectIfNotConnected(self, signal, handler, *args):
"""
Connect the specified signal to handler if no other
handler is already defined.
@param signal: Signal name
@type signal: str
@param handler: Signal handler
@type handler: Callable
@param args: Extra arguments
@type args: List
"""
cond = FN()
debug("%s(signal=%s, handler=%s)" % (cond, signal, handler), cond)
debug("\telement = %s" % self.__element, cond)
debug("\tsignal = %s" % self.__element.getElementsByTagName('signal'), cond)
debug("\tname = %s" % self.getName(), cond)
# check for signals handlers defined in glade XML file
for child in self.__element.childNodes:
if child.localName == 'signal' and child.attributes['name'].value == signal:
return
debug("\tconnecting signal " + signal + " to auto handler", cond)
debug("\twidget %s" % self.getWidget(), cond)
debug("\thandler %s" % handler, cond)
if args:
self.getWidget().connect(signal, handler, args[0])
else:
self.getWidget().connect(signal, handler)
class AutoTreeviewSetCell:
"""
AutoTreeviewSetCell helper class.
This is a helper class to set cells in L{gtk.TreeView}.
Usually, to set cell properties the call is:
C{cell.set_property('pixbuf', treeModel.get_value(iter, B{0}))}
But to avoid having to hardcode the cell index, 0 in this
case, this helper class is used.
It is used in conjunction with L{AutoGlade.__getitem__}, which returns
an instance of L{AutoTreeviewSetCell} if C{key} matches
C{AUTO_TREEVIEW_SET_CELL_RE}.
It is most frequently used in initialization functions
(see L{AutoGlade.autoInit}) like this
C{tvcolumn.set_cell_data_func(cellPixbuf, self.setTreeviewPixbufCell0)}
or
C{tvcolumn.set_cell_data_func(cellText, self.setTreeviewTextCell1)}
"""
DEBUG = False
def __init__(self, cellType, cellIndex):
"""
Constructor
@param cellType: the cell type (i.e.: text, pixbuf, activatable)
@type cellType: str
@param cellIndex: the cell (column) index inside the Treeview
@type cellIndex: int
"""
debug("AutoTreeviewSetCell::__init__(cellType=%s, cellIndex=%s)" % (
cellType, cellIndex))
if cellType == 'toggle':
cellType = 'active'
self.__cellType = cellType
self.__cellIndex = int(cellIndex)
def __call__(self, column, cell, treeModel, iter, *data):
debug("AutoTreeviewSetCell::__call__(column=%s, cell=%s, treeModel=%s, iter=%s)" % (column, cell, treeModel, iter))
cell.set_property(self.__cellType, treeModel.get_value(iter,
self.__cellIndex))
return
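The cell-setter lookup hinges on AUTO_TREEVIEW_SET_CELL_RE; the parse step can be sketched without GTK, using the method name from the docstring above:

```python
import re

# same pattern as defined above
AUTO_TREEVIEW_SET_CELL_RE = r'setTreeview(.+)Cell(\d+)'

# 'setTreeviewPixbufCell0' -> cell type 'pixbuf', cell (column) index 0
mo = re.compile(AUTO_TREEVIEW_SET_CELL_RE).match('setTreeviewPixbufCell0')
cell_type, cell_index = mo.group(1).lower(), int(mo.group(2))
print('%s %d' % (cell_type, cell_index))  # pixbuf 0
```

These are exactly the two values passed to the AutoTreeviewSetCell constructor by `__getitem__`.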
class AutoGlade:
"""
AutoGlade main class.
This is the main AutoGlade class.
Conventions
===========
These are the conventions used to relate the Python class to its Glade file.
* The glade definition file should have the same name as the parent
class or should be specified by C{glade=filename} parameter
* Signal handler names must start with C{'on_'} @see{__getitem__}
* For C{on_automenuitem_activate} to work, in the Glade interface
designer the signal handler for this menu item should be
C{on_menuitem_activate} and user data must be the name of the
widget to call (i.e.: dialog)
@cvar DEBUG: Set debugging output
@type DEBUG: bool
@ivar __reAutoInvoke: The regular expression to parse the Glade widget name
@type __reAutoInvoke: str
@ivar __menuItemAbout: Stock menu item about
@type __menuItemAbout: L{gtk.Widget}
@ivar __menuItemQuit: Stock menu item quit
@type __menuItemQuit: L{gtk.Widget}
@ivar __menuItemPreferences: Stock menu item preferences
@type __menuItemPreferences: L{gtk.Widget}
"""
DEBUG = False
__reAutoInvoke = re.compile(AUTO_INVOKE_RE)
__reSetTreeviewCell = re.compile(AUTO_TREEVIEW_SET_CELL_RE)
__reRadioButtonName = re.compile(AUTO_RADIOBUTTON_NAME_RE)
__topLevelWidgetNames = []
__mainTopLevelWidget = None # AutoGladeObject
__autoGladeObjects = {} # AutoGladeObjetcs
for ago in AGOS:
__autoGladeObjects[ago] = None
__topLevelWidgets = {}
__signalHandlers = {}
__autoDumpMap = {}
__gconf = None
__dump = {}
__autoArgs = ''
__autoProperties = {}
__autoStockItems = {}
__autoStockItems[AGO_BUTTON_PREFERENCES] = ['gtk-preferences', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_PREFERENCES] = ['gtk-preferences', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_PREFERENCES] = ['gtk-preferences', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_NEW] = ['gtk-new', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_NEW] = ['gtk-new', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_NEW] = ['gtk-new', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_OPEN] = ['gtk-open', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_OPEN] = ['gtk-open', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_OPEN] = ['gtk-open', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_SAVE] = ['gtk-save', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_SAVE] = ['gtk-save', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_SAVE] = ['gtk-save', gtk.ToolButton]
__autoStockItems[AGO_MENU_ITEM_SAVE_AS] = ['gtk-save-as', gtk.MenuItem]
__autoStockItems[AGO_BUTTON_COPY] = ['gtk-copy', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_COPY] = ['gtk-copy', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_COPY] = ['gtk-copy', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_CUT] = ['gtk-cut', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_CUT] = ['gtk-cut', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_CUT] = ['gtk-cut', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_PASTE] = ['gtk-paste', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_PASTE] = ['gtk-paste', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_PASTE] = ['gtk-paste', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_DELETE] = ['gtk-delete', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_DELETE] = ['gtk-delete', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_DELETE] = ['gtk-delete', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_QUIT] = ['gtk-quit', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_QUIT] = ['gtk-quit', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_QUIT] = ['gtk-quit', gtk.ToolButton]
__autoStockItems[AGO_BUTTON_ABOUT] = ['gtk-about', gtk.Button]
__autoStockItems[AGO_MENU_ITEM_ABOUT] = ['gtk-about', gtk.MenuItem]
__autoStockItems[AGO_TOOL_BUTTON_ABOUT] = ['gtk-about', gtk.ToolButton]
cellText = gtk.CellRendererText()
cellPixbuf = gtk.CellRendererPixbuf()
cellToggle = gtk.CellRendererToggle()
# FIXME
# WARNING
# autorun default value changed from False to True
def __init__(self, glade=None, root=None, autorun=True, autoinit=None,
autoinitSplit=':', autodump='text'):
"""
Constructor
Constructs the AutoGlade object based on the arguments passed.
@param glade: The glade filename, defaults to the name of the class.
Default C{None}
@type glade: str
@param root: The root widget name. Default C{None}.
@type root: str
@param autorun: Will autoglade auto run the GUI ?
@type autorun: boolean
@param autoinit: Autoinit initialization string
@type autoinit: str
@param autodump: Autodump type output format (i.e.: text, shell)
@type autodump: str
"""
debug("AutoGlade::__init__(glade=%s, root=%s, autorun=%s, autoinit=%s, autodump=%s)" % (glade, root, autorun, autoinit, autodump))
cn = self.__class__.__name__
if not glade :
if cn != 'AutoGlade':
glade = cn + '.glade'
#else:
# raise RuntimeError('Parameter glade is not set and no class name to obtain glade file.')
debug('Should open %s' % glade)
debug('\tworking directory: %s' % os.getcwdu())
self.__glade = glade
# FIXME
# perhaps the program name should come from somewhere
# an extra argument for programs
# the name of the main widget for autorun
self.__programName = cn
self.__autoDump = autodump
self.__autoDumpMap = {
"text": self.autoDumpText,
"shell": self.autoDumpShell,
}
if self.__glade:
self.__dom = minidom.parse(self.__glade)
else:
self.__dom = minidom.parseString(EMPTY_GLADE)
self.__gladeInterface = self.__dom.documentElement
try:
self.__gconf = gconf.client_get_default()
except NameError:
self.__gconf = None
self.abbreviations()
if autorun:
try:
import gnome
properties = {gnome.PARAM_APP_DATADIR : '/usr/share'}
# FIXME
# perhaps, gnome.program_init should be invoked with the
# main top level widget (__mainTopLevelWidget) name as its
# first arg
# FIXME
# version is constant
gnome.program_init(self.__programName, 'version',
properties=properties)
except:
pass
if root:
self.__topLevelWidgetNames = [root]
else:
self.__getTopLevelWidgetNames()
self.__getTopLevelWidgets()
self.__mapAutoInvokeWidgetNames()
self.__getSignalHandlers()
self.__getStockItems()
self.__fixComboBoxNotShowingActiveItem()
self.__autoConnect()
self.__autoinitSplit = autoinitSplit
exitval = 0
# FIXME
# the try except block here is to support methods that don't return
# an integer value
try:
exitval = self.autoInit(autoinit)
except Exception, ex:
print >>sys.stderr, "Exception: ", ex
print >>sys.stderr, sys.exc_info()[0]
print >>sys.stderr, sys.exc_info()[1]
print >>sys.stderr, sys.exc_info()[2]
print >>sys.stderr, ''.join(traceback.format_exception(
*sys.exc_info())[-2:]).strip().replace('\n',': ')
except:
print >>sys.stderr, "Unexpected error in autoInit:", \
sys.exc_info()[0]
pass
if autorun:
debug("autorun")
# now exitval holds the return value of self.autoInit, which
# was called before. See comments in autoInit
#exitval = 0
# FIXME
# TEST ONLY
# this function, should be added in some way in autoinit
# the name of the progress bar should be specified somewhere
# maybe setting a class attribute in autoinit
#self.__timerId = gobject.timeout_add(1000, self.autoProgressBar)
#self.__timerId = gobject.timeout_add(1000, self.autoProgressBar, ['progressbar1'])
resp = self.autoRun()
debug("autorun resp=%s" % resp)
if resp:
debug("autorun exitval=%s" % exitval)
if resp == gtk.RESPONSE_OK:
self.autoDumpValues(None)
else:
exitval = -resp
debug("autorun exit=%s" % exitval)
sys.exit(exitval)
# Provides self['name'].method()
def __getitem__(self, key):
"""
__getitem__
Provides B{self['name'].method()} access.
If the C{key} starts with C{on_} then the corresponding method is
executed instead of returning the attribute value.
@param key: The key to search
@type key: str
@return: if key starts with 'on_' returns the value of the execution
of B{self.key}, if key matches C{AUTO_TREEVIEW_SET_CELL_RE} (or
whatever C{self.__reSetTreeviewCell} has compiled in) then returns an
instance of L{AutoTreeviewSetCell} or returns the corresponding
widget if exists, otherwise raise an L{AutoGladeAttributeError}.
@raise AutoGladeAttributeError: If the key is not found
"""
cond = FN()
debug('__getitem__(%s, %s)' % (self.__class__.__name__, key.__str__()), cond)
if key:
if key[0:3] == 'on_':
try:
# a 'return' cannot appear inside 'exec'; evaluate the attribute instead
return eval('self.' + key)
except (AttributeError, SyntaxError):
raise AutoGladeAttributeError("method " + key +
" not defined")
else:
debug("\tchecking if key=%s matches reSetTreeviewCell RE" % key, cond)
mo = self.__reSetTreeviewCell.match(key)
if mo:
return AutoTreeviewSetCell(mo.group(1).lower(), mo.group(2))
w = None
for g in self.__topLevelWidgets.itervalues():
w = g.get_widget(key)
if w:
"""
This was taken from Mitch Chapman's article in LJ
Cache the widget to speed up future lookups. If multiple
widgets in a hierarchy have the same name, the lookup
behavior is non-deterministic just as for libglade.
"""
setattr(self, key, w)
debug("__getitem__: FOUND", cond)
return w
raise AutoGladeAttributeError(key)
# Provides self.name.method()
def __getattr__(self, name):
"""
__getattr__
Provides B{self.name.method()} access
@param name: Item name
@type name: L{str}
@return: Returns L{__getitem__}C{(name)}
"""
return self.__getitem__(name)
def __call__(self, *args):
warning("__call__(%s)" % (args,))
# Misc method
def __getTopLevelWidgetNames(self):
"""
Get the top level widget names.
IMPORTANT: Glade XML files have not been parsed yet.
"""
cond = FN()
debug("start", cond)
first = True
for element in self.__gladeInterface.getElementsByTagName('widget'):
debug("\twidget:%s" % element.getAttribute('id'), cond)
if element.parentNode == self.__gladeInterface:
name = element.getAttribute('id')
self.__topLevelWidgetNames.append(name)
debug("\tappending " + name, cond)
if first:
debug("\t\tfirst", cond)
self.__mainTopLevelWidget = \
AutoGladeObject(self, name=name, widget=AGO_POSTPONED)
first = False
debug("\ttop level name: %s" % element.getAttribute('id'), cond)
debug("finish", cond)
def __getSignalHandlers(self):
"""
Get signal handlers from the Glade XML file
C{self.__signalHandlers} dictionary is filled with signal handlers
found, using the handler name as key and a tuple containing the
signal name and widget name as value.
"""
for element in self.__gladeInterface.getElementsByTagName('signal'):
self.__signalHandlers[element.getAttribute('handler')] = (
element.getAttribute('name'),
element.parentNode.getAttribute('id')
)
#debug("signal handler: %s" % element.getAttribute('handler'))
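A self-contained sketch of the extraction performed by __getSignalHandlers, run against a hypothetical two-widget Glade fragment (the widget ids and handler names are illustration only):

```python
from xml.dom import minidom

GLADE = """<glade-interface>
  <widget class="GtkWindow" id="window1">
    <signal name="destroy" handler="on_window1_destroy"/>
    <child>
      <widget class="GtkButton" id="button1">
        <signal name="clicked" handler="on_button1_clicked"/>
      </widget>
    </child>
  </widget>
</glade-interface>"""

# handler name -> (signal name, owning widget id), as in __signalHandlers
handlers = {}
dom = minidom.parseString(GLADE)
for element in dom.documentElement.getElementsByTagName('signal'):
    handlers[element.getAttribute('handler')] = (
        element.getAttribute('name'),
        element.parentNode.getAttribute('id'))
print(sorted(handlers))  # ['on_button1_clicked', 'on_window1_destroy']
```

Note that `getElementsByTagName` descends recursively, so nested widgets' signals are picked up in one pass.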
def __getSignalHandlerFromAGOKey(self, agokey):
"""
Get signal handler from Auto Glade Object key
This method obtains the signal handler from the Auto Glade Object
key in X{camel case}, assuming the handler method name is formed from
the last component of the camel case key, capitalized and with the
prefix C{auto} prepended.
Examples::
agokey = menuItemOpen
method = autoOpen
agokey = toolButtonSaveas # note the lowercase in as
method = autoSaveas
@param agokey: The Auto Glade Object key
@type agokey: str
@return: The signal handler method instance or None if there's no
match
"""
handler = None
m = re.compile('(.*)([A-Z][a-z0-9]*)').match(agokey)
if m:
if m.group(1) == 'dialog':
method = 'autoDialog'
else:
method = 'auto' + m.group(2)
handler = getattr(self, method)
return handler
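The camel-case mapping above can be sketched standalone; this helper returns the method name rather than the bound method, an illustration-only simplification of __getSignalHandlerFromAGOKey:

```python
import re

def handler_name(agokey):
    # take the last camel-case component and prepend 'auto';
    # all dialog* keys share the single 'autoDialog' handler
    m = re.compile('(.*)([A-Z][a-z0-9]*)').match(agokey)
    if not m:
        return None
    if m.group(1) == 'dialog':
        return 'autoDialog'
    return 'auto' + m.group(2)

print(handler_name('menuItemOpen'))       # autoOpen
print(handler_name('toolButtonSaveas'))   # autoSaveas
print(handler_name('dialogPreferences'))  # autoDialog
```

The greedy `(.*)` forces group 2 to start at the last uppercase letter, which is what isolates the final camel-case component.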
def __autoConnectAGO(self, agokey, handler='auto', signal='auto'):
cond = FN()
debug("\n%s(%s, %s, %s) start" % (cond, agokey, handler, signal),
cond)
try:
ago = self.__autoGladeObjects[agokey]
#print ">>>>>>> ago[%s]=%s" % (agokey, ago)
if not ago:
# None because this specific AGO key was not found in the
# Glade definition file (i.e.: there's no preferences button)
return
except Exception, ex:
# FIXME
print >>sys.stderr, "\n"
print >>sys.stderr, "*" * 70
print >>sys.stderr, "Unhandled exception in %s" % FN()
print >>sys.stderr, "Exception: ", ex
print >>sys.stderr, sys.exc_info()[0]
print >>sys.stderr, sys.exc_info()[1]
print >>sys.stderr, sys.exc_info()[2]
print >>sys.stderr, ''.join(traceback.format_exception(
*sys.exc_info())[-2:]).strip().replace('\n',': ')
print >>sys.stderr, "*" * 70
print >>sys.stderr, "\n"
return
except:
print >>sys.stderr, "\n"
print >>sys.stderr, "*" * 70
print >>sys.stderr, "Unexpected error in %s: %s" % (FN(),
sys.exc_info()[0])
print >>sys.stderr, "*" * 70
print >>sys.stderr, "\n"
return
if handler == 'auto':
handler = self.__getSignalHandlerFromAGOKey(agokey)
debug("%s: handler=%s" % (cond, handler), cond)
if signal == 'auto':
widget = ago.getWidget()
if isinstance(widget, gtk.Button) or isinstance(widget, gtk.ToolButton):
signal = 'clicked'
elif isinstance(widget, gtk.MenuItem):
signal = 'activate'
elif isinstance(widget, gtk.Dialog):
signal = 'response'
else:
print >>sys.stderr, ">>>>>> NOT IMPLEMENTED: %s" % widget
signal = None
debug("%s: signal=%s" % (cond, signal), cond)
if signal:
debug("%s: autoconnecting" % cond, cond)
ago.connectIfNotConnected(signal, handler)
debug("%s finish" % (cond), cond)
def __autoConnect(self):
cond = FN()
debug("__autoConnect start", cond)
for tlwn,tlw in self.__topLevelWidgets.iteritems():
debug("\tautoConnecting top level: " + tlwn, cond)
tlw.signal_autoconnect(self)
# connect quit to destroy of main widget
# there's a special case where there's no top level widget when the
# interface is empty
if self.__mainTopLevelWidget:
debug("\tautoConnecting 'destroy' if not connected", cond)
self.__mainTopLevelWidget.connectIfNotConnected('destroy',
self.autoQuit)
for agokey in AGOS:
self.__autoConnectAGO(agokey)
debug("__autoConnect finish")
def __getTopLevelWidgets(self):
"""
Get the top level widgets parsing the glade XML file.
The widget trees (one for every top level widget) are also created
in this operation.
"""
cond = FN()
debug("start", cond)
for tl in self.__topLevelWidgetNames:
debug('toplevel=%s' % tl, cond)
w = gtk.glade.XML(self.__glade, tl)
self.__topLevelWidgets[tl] = w
debug("\tw=%s tlw=%s" % (w, self.__topLevelWidgets), cond)
debug("finish", cond)
def __mapAutoInvokeWidgetNames(self):
"""
Map the autoInvoke names (widget:auto:method) to their plain form
(widget).
Invoke 'auto:init' method if present.
Connect 'auto:sensitize' signals.
Connect signal for menu items (and tool buttons ?) if not connected
"""
cond = FN()
debug("start", cond)
# in normal situations initMethod is initialized below in this
# method, so this copes with some missing cases
initMethod = None
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# FIXME
# this should involve root (top level widgets)
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
for element in self.__gladeInterface.getElementsByTagName('widget'):
debug("widget:%s" % (element.getAttribute('id')), cond)
name = element.getAttribute('id')
widgetClass = element.getAttribute('class')
tail = name
while tail:
debug("%%%% tail:%s %%%%" % (tail), cond)
m = self.__reAutoInvoke.match(tail)
# FIXME
# tail is set in every case to contain the tail of the list
# of :auto: commands and args
tail = None
debug("widget:%s matches auto invoke:%s" % (name, m), cond)
if m:
# this is the case of one widget named as the autoInvoke
# convention widget:auto:method
# because of this name it cannot be invoked as self.name
# so a workaround should be used
# another alternative would be to use a '_' instead of ':'
# get the widget
widget = self.__getitem__(name)
# n is the 'real' widget name, the first part of the mapped
# name widget:auto:method
n = m.group(AUTO_INVOKE_WIDGET)
setattr(self, n, self.__getitem__(name))
method = m.group(AUTO_INVOKE_METHOD)
args = m.group(AUTO_INVOKE_ARGS)
debug("\tname:%s" % (n), cond)
debug("\tmethod:%s" % (method), cond)
debug("\targs:%s" % (args), cond)
m = self.__reAutoInvoke.match(args or "")
if m:
args = m.group(AUTO_INVOKE_WIDGET)
debug("\targs:%s #####" % (args), cond)
tail = n + ':auto:' + m.group(AUTO_INVOKE_METHOD) + ':' + m.group(AUTO_INVOKE_ARGS)
debug("\ttail:%s #####" % (tail), cond)
# special methods handling
if method == 'init':
val = None
if args == 'env':
# widgetname:auto:init:env will initialize the widget
# value from the environment variable 'widgetname', or
# with an empty value if 'widgetname' does not exist.
try:
val = os.environ[n]
except KeyError:
debug("**** looking for '%s' in environ, but not found ****" % (n))
# FIXME
# there's a special case handled in the if widgetClass
# section below, because for example for radio buttons
# we have to strip the last number to find the variable
# name
val = None
# FIXME
# this is (was ?) the same as in 'gconf'
if widgetClass in ['GtkFileChooser',
'GtkFileChooserButton']:
initMethod = self[name].set_filename
elif widgetClass in ['GtkHScale', 'GtkVScale',
'GtkSpinButton']:
initMethod = self[name].set_value
# thanks to python for this dynamic typing
val = float(val and val or 0)
elif widgetClass == 'GtkLabel':
# FIXME
# overwrites previous markup, which could not be
# the intention
initMethod = self[name].set_markup
if not val:
val = ""
elif widgetClass == 'GtkEntry':
initMethod = self[name].set_text
if not val:
val = ""
elif widgetClass == 'GtkCheckButton':
# if val = 'True' (str) active, inactive otherwise
initMethod = self[name].set_active
val = int((val == 'True') or 0)
elif widgetClass == 'GtkRadioButton':
initMethod = self[name].set_active
nnn = self.__reRadioButtonName.match(n).group(1)
debug("GtkRadioButton: name=%s n=%s nnn=%s val=%s" % (
name, n, nnn, val))
try:
debug("GtkRadioButton: name=%s val=%s nnn=%s label=%s" % (name, val, nnn, self[name].get_label()))
debug("GtkRadioButton: env[%s]=%s" % (nnn, os.environ[nnn]))
val = (self[name].get_label() == os.environ[nnn])
debug("GtkRadioButton: val=%s" % (val))
except KeyError:
# not in environment, we can't do anything
# just don't set it as active
val = False
debug("GtkRadioButton: (final) name=%s val=%s nnn=%s label=%s" % (name, val, nnn, self[name].get_label()))
elif widgetClass == 'GtkComboBox':
initMethod = self[name].set_active
model = self[name].get_model()
try:
val = [ item[0] for item in model ].index(val)
except ValueError:
# not in environment, we can't do anything
# just set first as active
val = 0
except TypeError:
# combobox model not defined
val = 0
elif widgetClass == 'GtkCurve':
# FIXME
# test only
initMethod = self[name].set_vector
#initMethod = self[name].set_gamma
if val:
val = [float(x) for x in re.split("[,() ]+", val)[1:-1]]
val = tuple(val)
#val = float(val)
else:
val = ()
#val = float(1)
elif widgetClass == 'GtkGammaCurve':
initMethod = self[name].curve.set_vector
if val:
val = [float(x) for x in re.split("[,() ]+", val)[1:-1]]
val = tuple(val)
else:
val = ()
elif widgetClass == 'GtkImage':
initMethod = self[name].set_from_file
elif widgetClass == 'GtkColorButton':
initMethod = self[name].set_color
if not val:
val = "#000"
val = gtk.gdk.color_parse(val)
elif widgetClass == 'GtkTreeView':
initMethod = self[name].set_model
splitter = re.compile('" ?"?')
# strip parenthesis, and then empty strings
list = splitter.split(val[1:-1])[1:-1]
# header is the first element
header = list.pop(0)
debug("list=%s" % list)
tvcs = []
i = 0
cmd = "model = gtk.ListStore("
first = True
# FIXME: [0] constant, means only 1 column
# there should be a way to specify more columns
# perhaps nested parens, e.g. (("a" "b") ("c" "d"))
for l in [0]:
if not first:
cmd += ","
else:
first = False
cmd += "gobject.TYPE_STRING"
tvcs.append(gtk.TreeViewColumn(header,
gtk.CellRendererText(), text=i))
cmd += ")"
debug("cmd=%s" % cmd)
exec cmd
for tvc in tvcs:
self[name].append_column(tvc)
for s in list:
model.append([s])
val = model
else:
debug("**** No initMethod defined to initialize from environment for this widget class: %s (%s) ****" % (widgetClass, type(self[name])))
initMethod = None
elif args == 'gconf':
baseKey = '/apps/%s/autoglade/init/%s' % (self.__topLevelWidgetNames[0], n)
if self.DEBUG:
print >>sys.stderr, '@' * 50
print >>sys.stderr, '@' + ' should initialize the widget %s using method %s with args %s' % (n, method, args)
print >>sys.stderr, '@' + ' gconf baseKey: %s' % baseKey
key = baseKey + '/'
# FIXME
# this is the same as in 'env'
if widgetClass in ['GtkFileChooser',
'GtkFileChooserButton']:
key += 'filename'
initMethod = self[name].set_filename
val = self.__gconf.get_string(key)
elif re.match('cmdlineargs\[(\d+)\]', args):
m2 = re.match('cmdlineargs\[(\d+)\]', args)
n2 = int(m2.group(1))
if widgetClass in ['GtkTextView']:
self[name].get_buffer().set_text(open(cmdlineargs[n2]).read())
#if val and initMethod:
# perhaps I want to set val=False or val=None
if initMethod:
debug("\n\n**** initMethod=%s val=%s (%s) ****\n\n" % (initMethod, val, type(val)))
initMethod(val)
# FIXME
# debug only
if n == 'curve1':
debug("\n\n**** after init=%s val=%s ****\n\n" % (name, self[name].get_vector(-1)))
#self[name].reset()
# FIXME
# debug only
elif n == 'gammacurve1':
self[name].curve.reset()
self[name].curve.set_gamma(1.5)
elif method in ['show', 'sensitize' ]:
debug("show/sensitize: widget%s %s method=%s args=%s" % (widget, widgetClass, method, args), cond)
if args:
h = getattr(self, 'on_auto' + method)
if isinstance(widget, gtk.ToggleButton) or isinstance(widget, gtk.CheckMenuItem):
for targetWidget in args.split(AUTO_DELIMITER):
debug('Connecting %s' % (targetWidget), cond)
try:
widget.connect('toggled', h,
self.__getitem__(targetWidget))
except AutoGladeAttributeError:
# I should use the long name !!!
# see an example in tests/chain
# FIXME
# this is the second time in this method that
# we iterate over the interface elements
# it should be cached
for element in self.__gladeInterface.getElementsByTagName('widget'):
__twln = element.getAttribute('id')
if __twln.startswith(targetWidget):
widget.connect('toggled', h,
self.__getitem__(__twln))
elif method == 'dump':
self.__dump[n] = args
elif method == 'property':
val = "?"
nnn = self.__reRadioButtonName.match(n).group(1)
debug("\n\n!!!!!!!!!!!!!!!!!!!!! property not implemented: %s = %s => %s !!!!!!!!!!!!\n\n" % (nnn, val, args))
if widgetClass in ['GtkRadioButton']:
#val = self[name].get_label()
property = nnn
(target, val) = args.split(AUTO_DELIMITER)
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('toggled', self.autoProperty, (property, val, target))
else:
debug("!!!!! NOT IMPLEMENTED !!!!")
elif method == 'exec':
if isinstance(widget, gtk.Button):
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('clicked', self.autoExec, args)
else:
############################################################
# FIXME
# This is not working as intended because
# connectIfNotConnected
# in next statement is called BEFORE __autoConnect is called
# so no signals are connected yet
# It must be tested what happens if the call to this method
# is positioned after __autoConnect
############################################################
# FIXME
# Connect the signal for menu items, tool buttons.
# Don't know what to do with other classes yet
############################################################
if widgetClass in ['GtkMenuItem', 'GtkImageMenuItem',
'GtkCheckMenuItem']:
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('activate', self.on_automenuitem_activate, args)
# FIXME
# Analyze the case for GtkToolButton, should it connect
# on_automenuitem_activate or on_autobutton_clicked ?
elif widgetClass == 'GtkToolButton':
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('clicked', self.on_automenuitem_activate, args)
elif widgetClass == 'GtkFontButton':
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('font-set', self.on_autofontbutton_font_set, method)
# FIXME
# When is more convenient to use gtk classes like here
# gtk.Button or string class names 'GtkButton' ?
# Using gtk classes we have inheritance.
elif isinstance(widget, gtk.Button):
if self.DEBUG:
print >>sys.stderr, "Button %s" % name
print >>sys.stderr, "args %s" % args
print >>sys.stderr, "type %s" % args.__class__.__name__
# FIXME
# perhaps this should be applied to every other case
if isinstance(args, str) or isinstance(args, unicode):
if self.DEBUG:
print >>sys.stderr, "Splitting args=%s" % args
args = tuple(args.split(AUTO_DELIMITER))
if self.DEBUG:
print >>sys.stderr, "args=%s" % args
AutoGladeObject(self, name=name, widget=widget).connectIfNotConnected('clicked', self.on_autobutton_clicked, args)
elif isinstance(widget, gtk.ProgressBar):
if self.DEBUG:
print >>sys.stderr, "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%"
print >>sys.stderr, "ProgressBar %s" % name
print >>sys.stderr, "args %s" % args
print >>sys.stderr, "type %s" % args.__class__.__name__
arglist = [name]
if args:
for arg in args.split(':'):
arglist.append(arg)
self.__timerId = gobject.timeout_add(1000, self.autoProgressBar, arglist)
def __expandAutoProperties(self):
"""
Expand properties values (property:auto:method) to its
corresponding value.
"""
#################################################################
# under construction
#
#
#
cond = FN()
debug("%s() start" % cond, cond)
def __fixComboBoxNotShowingActiveItem(self):
"""
Fix a problem found with Combo Box widgets.
Is this a libglade bug ?
"""
warning("FIXING COMBOBOX PROBLEM")
for element in self.__gladeInterface.getElementsByTagName('widget'):
if element.getAttribute('class') == 'GtkComboBox':
name = element.getAttribute('id')
a = -1
for property in element.getElementsByTagName('property'):
if property.getAttribute('name') == 'active':
for node in property.childNodes:
if node.nodeType == node.TEXT_NODE:
a = int(node.data)
# get the widget
widget = self.__getitem__(name)
if widget.get_active() == -1:
if a != -1:
widget.set_active(a)
def __getAboutDialog(self):
"""
Get the about dialog from the internal list of top level widgets
and set the L{AutoGladeObject} accordingly.
"""
for n,g in self.__topLevelWidgets.iteritems():
w = self.__getattr__(n)
if isinstance(w, gtk.AboutDialog):
"""
In the special case of gtk.AboutDialog, get_name() doesn't
return the widget name but the application name set by gnome
application.
What the developers were thinking ?
So, to compensate from this unusual an not orthogonal behavior
next AutoGladeObject creation includes the name too.
"""
self.__autoGladeObjects[AGO_DIALOG_ABOUT] = \
AutoGladeObject(self, widget=w, name=n)
return
warning("About dialog not found")
def __getPreferencesDialog(self):
"""
Get the preferences dialog from the internal list of top level
widgets and set the L{AutoGladeObject} accordingly.
To find it, widget name is matched against 'preferences' ignoring
case.
"""
for n,g in self.__topLevelWidgets.iteritems():
w = self.__getattr__(n)
if isinstance(w, gtk.Dialog) and \
re.search('preferences', n, re.IGNORECASE):
debug("Setting preferences dialog to '%s': %s" % (n, w))
self.__autoGladeObjects[AGO_DIALOG_PREFERENCES] = \
AutoGladeObject(self, widget=w, name=n)
return
warning("Preferences dialog not found")
def __getStockItem(self, agokey, stock, gtkClass=gtk.Widget):
try:
self.__autoGladeObjects[agokey] = self.__findStockItem(stock, gtkClass)
except AutoGladeItemNotFoundException:
warning("Stock item not found: ago=%s stock=%s gtkClass=%s" % (
agokey, stock, gtkClass))
self.__autoGladeObjects[agokey] = None
def __getStockItems(self):
"""
Get stock items.
"""
# special cases
self.__getAboutDialog()
self.__getPreferencesDialog()
# generic cases
for asi in self.__autoStockItems:
self.__getStockItem(asi, self.__autoStockItems[asi][ASI_STOCK],
self.__autoStockItems[asi][ASI_GTKCLASS])
def __findStockItem(self, stock, gtkClass=gtk.Widget):
"""
Find a stock item in the elements tree.
WARNING: Right now this only finds the first widget when more
than one satisfies the conditions
@param stock: The stock item to find
@type stock: str
"""
cond = FN()
debug("%s(%s, %s) start" % (cond, stock, gtkClass), cond)
# a bit too much: all the properties in the glade file !
for element in self.__gladeInterface.getElementsByTagName('property'):
if element.getAttribute('name') == 'stock_id' and \
element.childNodes[0].nodeValue == stock:
debug("%s: value=%s" % (cond, element.childNodes[0].nodeValue),
cond)
parent = element.parentNode
name = parent.getAttribute('id')
widget = self.__getattr__(name)
debug("%s: testing if %s (%s) is an instance of %s" % (cond,
name, widget, gtkClass), cond)
if isinstance(widget, gtkClass):
debug("%s: FOUND", cond)
return AutoGladeObject(self, name=name, element=element,
widget=widget)
elif element.getAttribute('name') == 'label':
for node in element.childNodes:
if node.nodeType == node.TEXT_NODE:
if node.data == stock:
parent = element.parentNode
debug("%s: parent = %s" % (cond, parent), cond)
if parent.tagName != 'widget':
raise RuntimeError('Parent is not widget')
name = parent.getAttribute('id')
element = parent
widget = self.__getattr__(name)
if isinstance(widget, gtkClass):
debug("%s: FOUND %s" % (cond, name), cond)
return AutoGladeObject(self, name=name,
element=element, widget=widget)
raise AutoGladeItemNotFoundException("Stock item %s not found" % stock)
# Getters and setters
def getGladeInterface(self):
return self.__gladeInterface
def getDom(self):
return self.__dom
def getTopLevelWidgets(self):
return self.__topLevelWidgets
def getSignalHandlers(self):
#return self.__signalHandlers.keys()
return self.__signalHandlers
def getWidgetNames(self, widgetClassFilter=None,
widgetCanonicalNames=False):
"""
List the widget names, possibly filtered.
This method was an idea suggested by Charles Edward Pax and
Christopher Pax from Gladex project (http://www.openphysics.org/~gladex/)
"""
wn = []
wcf = None
if widgetClassFilter:
if isinstance(widgetClassFilter, str):
if widgetClassFilter == 'input':
wcf = INPUT_CLASS
else:
wcf = [ widgetClassFilter ]
elif isinstance(widgetClassFilter, list):
wcf = widgetClassFilter
else:
raise TypeError('Invalid widget class filter')
for element in self.__gladeInterface.getElementsByTagName('widget'):
if wcf:
widgetClass = element.getAttribute('class')
if not widgetClass in wcf:
continue
name = element.getAttribute('id')
if widgetCanonicalNames:
m = self.__reAutoInvoke.match(name)
if m:
name = m.group(AUTO_INVOKE_WIDGET)
wn.append(name)
return wn
# Methods
def autoInit(self, autoinit=None):
"""
Default autoInit method, can be overridden by children.
@param autoinit: The string containing autoinit commands
@type autoinit: L{str}
"""
retval = 0
if autoinit:
if self.DEBUG:
print >>sys.stderr, '$$$$ executing: self.' + autoinit
# FIXME
# here ':' is used as a separator between autoinit statements
# but there is a possibility that something is intended
# for a widget that contains ':' in its name
if self.__autoinitSplit != 'NONE':
for init in autoinit.split(self.__autoinitSplit):
# FIXME
# this is a little tricky. To obtain the return value of
# multiple autoinit sequences we are subtracting the values
# because return values are mostly negatives from gtk.RESPONSE
# values. This is mainly to obtain the response from an
# autoinit question dialog, for example.
# FIXME
# If autoinit calls a method that has a different return type
# and it is used in this operation, it raises an exception
exec 'retval -= self.' + init
else:
exec 'retval -= self.' + autoinit
return retval
# Default handlers
def on_cancelbutton_clicked(self, widget):
"""
Default handler for B{Cancel} buttons C{clicked} signal.
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
"""
if self.DEBUG:
print >>sys.stderr, BLUE + "on_cancelbutton_clicked" + SGR0
gtk.main_quit()
def on_autosensitize(self, widget, targetWidget):
"""
Toggle the 'sensitive' property on a target widget
"""
# Please !!!!
# Tell me why there's no get_sensitive !!!!
#s = targetWidget.get_sensitive()
s = targetWidget.get_property('sensitive')
targetWidget.set_sensitive(not s)
def on_autoshow(self, widget, targetWidget):
"""
Toggle the 'visible' property on a target widget
"""
cond = FN()
debug("%s(%s, %s)" % (cond, widget, targetWidget), cond)
if targetWidget.get_property('visible'):
targetWidget.hide()
else:
targetWidget.show()
# FIXME
# This is perhaps not true, but it's a good assumption,
# if widgets were showed or hidden it's a good opportunity to
# resize
# Actually it seems to be false, when resized the window returns
# to its original size, not its current size if it was resized
self.__mainTopLevelWidget.getWidget().resize(1,1)
def on_automenuitem_activate(self, widget, *args):
"""
Default handler for menu items C{activate} signal
This is a handler method intended to be a simple menu item handler.
The idea is to simplify handling menu items usually connected to
dialog boxes.
activate signal on the menu item object must point to this function
and user data parameter of this signal must point to the object to
call.
In the case of a dialog, user data parameter is the dialog object
which this method will run.
This can also be used (and it's used by autoInvoke) in
L{gtk.ToolButton} objects.
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
"""
cond = FN()
debug("%s(%s, %s)" % (cond, widget, args), cond)
if isinstance(widget, gtk.MenuItem):
self.autoInvoke(widget)
elif isinstance(widget, gtk.ToolButton):
self.autoInvoke(widget)
elif isinstance(widget, gtk.Dialog):
widget.run()
else:
warning("Not implemented yet: %s" % widget)
def on_autobutton_clicked(self, widget, *args):
"""
on_autobutton_clicked
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
"""
cond = FN()
debug("%s(%s, %s)" % (cond, widget, args), cond)
if isinstance(widget, gtk.Button):
if args:
self.autoInvoke(widget, args[0])
else:
self.autoInvoke(widget)
elif isinstance(widget, gtk.ToolButton):
self.autoInvoke(widget)
elif isinstance(widget, gtk.Dialog):
widget.run()
else:
warning("%s: Not implemented yet: %s" % (cond, widget))
def on_autotoolbutton_clicked(self, widget):
"""
on_autotoolbutton_clicked
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
"""
return self.on_autobutton_clicked(widget)
def on_autofontbutton_font_set(self, widget, *args):
"""
on_autofontbutton_font_set
"""
cond = FN()
debug("%s(%s, %s)" % (cond, widget, args), cond)
fontButton = widget
targetWidget = self[args[0]]
fontName = fontButton.get_font_name()
try:
import pango
pangoFont = pango.FontDescription(fontName)
except:
pangoFont = None
targetWidget.modify_font(pangoFont)
def autoInvoke(self, widget, *args):
"""
Auto invoke the method codified in widget name
Auto invoke the method codified and described in the Glade widget name.
The pattern is described by the regular expression in X{self.__reAutoInvoke},
typically '(.*):auto:(.*)': everything before ':auto:' is the
standard widget name, and everything after is the method name or widget (in
the case of L{gtk.Dialog}) to be invoked.
The methods C{name}Pre, C{name} and C{name}Post are invoked in order (if
they exist), each only if its predecessor returns C{True}.
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
"""
cond = FN()
debug("%s start" % cond, cond)
name = widget.get_name()
m = self.__reAutoInvoke.match(name)
if m:
f = m.group(AUTO_INVOKE_METHOD)
debug("%s: should invoke method '%s' with args %s" % (cond, f,
args), cond)
# Pre
pre = True
try:
pre = getattr(self, f + 'Pre')(args)
except AutoGladeAttributeError, ex:
if self.DEBUG:
print >>sys.stderr, 50*'@'
print >>sys.stderr, 'Not raising exception because it should correspond to a ' \
    'not yet implemented method self.' + f + 'Pre.'
print >>sys.stderr, f + "Pre() undefined. ex=%s pre=%s" % (ex, pre)
print >>sys.stderr, ex.__str__()
print >>sys.stderr, 50*'@'
except NameError, ex:
if self.DEBUG:
print >>sys.stderr, 50*'@'
print >>sys.stderr, 'Not raising exception because it should correspond to a ' \
    'not yet implemented method self.' + f
print >>sys.stderr, f + "Pre() undefined. ex=%s pre=%s" % (ex, pre)
print >>sys.stderr, ex.__str__()
print >>sys.stderr, 50*'@'
except Exception, ex:
raise ex
post = False
# Method
if pre:
try:
# try to find 'f' as an attribute
obj = getattr(self, f)
debug("%s: obj=%s class=%s" % (cond, obj,
obj.__class__.__name__), cond)
if callable(obj):
post = obj(args)
elif isinstance(obj, gtk.Dialog):
# FIXME:
# I'm not sure that post should be assigned here and that
# it receives the correct value after assignement
resp = obj.run()
debug("%s: resp=%s" % (cond, resp), cond)
# FIXME:
# Not sure about this hide
obj.hide()
except AutoGladeAttributeError, ex:
raise RuntimeError("Method '%s' not implemented yet or 'attribute not found exception' inside this method: %s" % (f, ex))
except Exception, ex:
raise ex
# Post
if post:
try:
getattr(self, f + 'Post')(args)
except:
pass
else:
"""
Sometimes C{aboutdialog} is not added as C{user data} parameter to
the C{on_automenuitem_activate} handler, so here is a fallback.
Is this a libglade or gtk.Glade bug ?
"""
debug("%s: autoInvoke: NO MATCH" % cond, cond)
if len(args) >= 1 and isinstance(args[0], gtk.Dialog):
# FIXME
# really goes here?
debug("%s: fallback running dialog %s (%s)"% (cond, name,
args[0]))
return args[0].run()
def on_autodialog_response(self, widget, response, *args):
"""
Default handler for L{gtk.Dialog} C{response} signal
This is a handler method intended to be a simple dialog handler.
response signal of widget must be connected to this method and the
user data parameter must be left untouched (as of Glade 3.0 and
libglade 2).
Note:
Perhaps this method should set a Singleton object value to the response
received
gtk response values
===================
These are the response values::
gtk.RESPONSE_NONE=-1
gtk.RESPONSE_REJECT=-2
gtk.RESPONSE_ACCEPT=-3
gtk.RESPONSE_DELETE_EVENT=-4
gtk.RESPONSE_OK=-5
gtk.RESPONSE_CANCEL=-6
gtk.RESPONSE_CLOSE=-7
gtk.RESPONSE_YES=-8
gtk.RESPONSE_NO=-9
gtk.RESPONSE_APPLY=-10
gtk.RESPONSE_HELP=-11
@param widget: The widget receiving the signal
@type widget: L{gtk.Widget}
@param response: The dialog response (i.e.: button pressed)
@type response: int
"""
cond = FN()
debug("%s(%s, %s, %s)" % (cond, widget, response, args), cond)
if response in [gtk.RESPONSE_CLOSE, gtk.RESPONSE_OK,
gtk.RESPONSE_CANCEL,
gtk.RESPONSE_ACCEPT, gtk.RESPONSE_REJECT,
gtk.RESPONSE_DELETE_EVENT, gtk.RESPONSE_NONE]:
debug('\thiding widget=%s' % widget)
widget.hide()
def on_autobuttonexpandall_clicked(self, widget):
cond = FN()
debug('%s' % cond, cond)
widget.expand_all()
def on_autobuttoncollapseall_clicked(self, widget):
cond = FN()
debug('%s' % cond, cond)
widget.collapse_all()
def autoRun(self):
"""
auto run the graphical user interface
Runs the graphical user interface automatically.
There are some special cases contemplated.
1. If there's no L{__mainTopLevelWidget} then it does nothing
2. If the L{__mainTopLevelWidget} is a L{gtk.Dialog} then the
dialog box is run. Loops forever until one of the values of a
valid response is received, then return this value
3. if the L{__mainTopLevelWidget} is not a L{gtk.Dialog} then the
main GTK loop is entered
"""
if self.__mainTopLevelWidget:
if self.DEBUG:
print >>sys.stderr, "main widget: %s" % \
self.__mainTopLevelWidget.getName()
mw = self.__mainTopLevelWidget.getWidget()
mw.show()
if isinstance(mw, gtk.Dialog):
if self.DEBUG:
print >>sys.stderr, "It's a dialog instance, running it until one of the valid responses is received..."
while True:
# we could have done this if gtk.Dialog implemented
# __call__(self):
# self.run()
#resp = mw()
resp = mw.run()
if self.DEBUG:
print >>sys.stderr, "\tresp=", resp
if resp in [gtk.RESPONSE_CLOSE, gtk.RESPONSE_OK,
gtk.RESPONSE_CANCEL,
gtk.RESPONSE_DELETE_EVENT, gtk.RESPONSE_NONE]:
return resp
elif resp == gtk.RESPONSE_HELP:
self.autoHelp()
else:
gtk.main()
def autoProperty(self, widget, args):
debug("autoProperty(%s, %s)" % (widget, args))
property = args[0]
val = args[1]
target = args[2]
targetWidget = self[target]
method = getattr(targetWidget, 'set_' + property)
debug("autoProperty: method=%s val=%s (%s)" % (method, int(val), type(val)))
method(int(val))
def autoExec(self, widget, args):
debug("autoExec(%s, %s)" % (widget, args))
def autoErrorDialog(self, ex):
m = gtk.MessageDialog(type=gtk.MESSAGE_ERROR,
buttons=gtk.BUTTONS_OK, message_format=None)
m.set_title("Error")
m.set_markup(ex.__str__())
m.run()
m.hide()
def autoWarningDialog(self, msg, message_format=None):
w = gtk.MessageDialog(type=gtk.MESSAGE_WARNING,
buttons=gtk.BUTTONS_OK, message_format=message_format)
w.set_title("Warning")
w.set_markup(msg)
w.run()
w.hide()
def autoQuestionDialog(self, msg, buttons=gtk.BUTTONS_YES_NO):
if self.DEBUG:
print >>sys.stderr, "autoQuestionDialog(%s)" % msg
q = gtk.MessageDialog(type=gtk.MESSAGE_QUESTION,
buttons=buttons, message_format=None)
q.set_title("Question")
q.set_markup(msg)
resp = q.run()
if self.DEBUG:
print >>sys.stderr, "\tresp=%s" % resp
q.hide()
return resp
def autoInfoDialog(self, msg="No message", message_format=None):
i = gtk.MessageDialog(type=gtk.MESSAGE_INFO,
buttons=gtk.BUTTONS_OK, message_format=message_format)
i.set_title("Info")
# msg must be flattened
if isinstance(msg, tuple):
msg = msg[0]
i.set_markup(msg)
resp = i.run()
if self.DEBUG:
print >>sys.stderr, "**** autoInfoDialog: resp=%s" % resp
i.hide()
return resp
def autoAddTimeout(self, msecs=1000, method=None, *args):
if isinstance(method, str):
method = getattr(self, method)
self.__timerId = gobject.timeout_add(msecs, method, args)
def autoProgressBar(self, args):
if self.DEBUG:
print >>sys.stderr, ">>>> autoProgressBar (BEGIN) args='%s'" % args
# flatten args
if not args:
return False
name = args[0]
if len(args) > 1:
okButton = args[1]
else:
okButton = None
if len(args) > 2:
cancelButton = args[2]
else:
cancelButton = None
line = sys.stdin.readline()
if not line:
return False
if self.DEBUG:
print >>sys.stderr, "\tline='%s'" % line
n = 0
try:
n = int(line)
except:
pass
v = n/100.0
if self.DEBUG:
print >>sys.stderr, "\tupdating n=%s %s" % (n, v)
self[name].set_fraction(v)
self[name].set_text("%s %%" % n)
if v >= 1.0:
if okButton:
if self.DEBUG:
print >>sys.stderr, "\tsetting sensitive on %s" % (okButton)
self[okButton].set_sensitive(True)
if cancelButton:
if self.DEBUG:
print >>sys.stderr, "\tsetting sensitive on %s" % (cancelButton)
self[cancelButton].set_sensitive(False)
return False
if self.DEBUG:
print >>sys.stderr, ">>>> autoProgressBar (END)"
return True
def isInputClass(self, widgetClass):
return widgetClass in INPUT_CLASS
def autoDumpText(self, var, val):
print "%s:%s" % (var, val)
def autoDumpShell(self, var, val):
if var != 'autoargs':
if self.__autoArgs:
self.__autoArgs += ' '
self.__autoArgs += '$' + var
# FIXME
# special characters in var should be mapped
# single quotes in val should be mapped or escaped
print "%s='%s'" % (var, val)
def autoDumpValues(self, dummy):
# if autoDumpValues is used in a widget as widget:auto:autoDumpValues
# then a None value is added because there's no parameters
# specified, that's why a 'dummy' is needed here
cond = FN()
debug("autoDumpValues(%s)" % (dummy), cond)
# FIXME
# this may not be needed
self.__autoArgs = ''
for element in self.__gladeInterface.getElementsByTagName('widget'):
widgetClass = element.getAttribute('class')
if self.isInputClass(widgetClass):
name = element.getAttribute('id')
m = self.__reAutoInvoke.match(name)
if m:
# this is the case of one widget named as the autoInvoke
# convention widget:auto:method
# because of this name it cannot be invoked as self.name
# so a workaround should be used
# another alternative would be to use a '_' instead of ':'
# n is the 'real' widget name, the first part of the mapped
# name widget:auto:method
n = m.group(AUTO_INVOKE_WIDGET)
else:
n = name
if self.DEBUG:
print >>sys.stderr, '%s\tshould dump %s (%s)%s' % (RED, n, name, SGR0)
if widgetClass == 'GtkRadioButton':
# it seems that there's no group leader, or even there's no
# such concept in glade, and we don't have a way to obtain
# the group leader to obtain its name and use it to
# print the values (label values)
# to solve this a convention is used, all of the
# radiobuttons must be named name<n> and the number is
# stripped when printed
for rb in self[name].get_group():
if self.DEBUG:
print >>sys.stderr, "group contains rb: %s %s" % (
rb.get_name(), rb.get_active())
n = rb.get_name()
if n == name and rb.get_active():
if self.DEBUG:
print >>sys.stderr, "rb %s is active" % n
# remove the last number from widget name (group)
m = self.__reAutoInvoke.match(n)
if m:
n = m.group(AUTO_INVOKE_WIDGET)
nnn = re.match(r'(.+\D)\d+$', n).group(1)
debug("**** n=%s" % n)
debug("***** nnn=%s" % nnn)
debug("***** name=%s" % name)
v = None
try:
v = self.__dump[n]
except:
v = rb.get_label().replace('_','')
self.__autoDumpMap[self.__autoDump](nnn, v)
# rb.get_label().replace('_',''))
elif widgetClass in ['GtkCheckButton', 'GtkToggleButton']:
v = self[name].get_active()
d = None
try:
d = self.__dump[n]
except:
pass
if v:
if d:
v = d
else:
if d:
v = ''
self.__autoDumpMap[self.__autoDump](n, v)
elif widgetClass == 'GtkEntry':
w = self[name]
if isinstance(self[name], gtk.ComboBoxEntry):
w = w.child
self.__autoDumpMap[self.__autoDump](n, w.get_text())
elif widgetClass == 'GtkComboBox':
self.__autoDumpMap[self.__autoDump](n, self[name].get_active_text())
elif widgetClass in ['GtkHScale', 'GtkVScale', 'GtkSpinButton']:
fmt = "%d"
if self[name].get_digits() > 0:
fmt = "%f"
self.__autoDumpMap[self.__autoDump](n, fmt % self[name].get_value())
elif widgetClass == 'GtkFileChooserButton':
self.__autoDumpMap[self.__autoDump](n, self[name].get_filename())
elif widgetClass == 'GtkColorButton':
color = self[name].get_color()
self.__autoDumpMap[self.__autoDump](n,
"#%04x%04x%04x" % (color.red, color.green, color.blue))
elif widgetClass == 'GtkFontButton':
self.__autoDumpMap[self.__autoDump](n, self[name].get_font_name())
elif widgetClass == 'GtkCalendar':
self.__autoDumpMap[self.__autoDump](n, self[name].get_date())
elif widgetClass == 'GtkCurve':
self.__autoDumpMap[self.__autoDump](n, self[name].get_vector(size=-1))
elif widgetClass == 'GtkGammaCurve':
self.__autoDumpMap[self.__autoDump](n, self[name].curve.get_vector(size=-1))
elif widgetClass == 'GtkTreeView':
(model, paths) = self[name].get_selection().get_selected_rows()
s = "("
for path in paths:
first = True
for c in model[path]:
if not first:
s += " "
else:
first = False
s += '"' + c + '"'
s += ")"
self.__autoDumpMap[self.__autoDump](n, s)
else:
print >>sys.stderr, "autoDumpValues: Not implemented: %s" % widgetClass
self.__autoDumpMap[self.__autoDump]('autoargs', self.__autoArgs)
def autoDialog(self, widget, *args):
cond = FN()
debug("%s: widget=%s args=%s" % (cond, widget, args), cond)
self.on_autodialog_response(widget, *args)
def autoHelp(self):
if self.DEBUG:
print >>sys.stderr, "=================================="
print >>sys.stderr, "HELP"
print >>sys.stderr, "=================================="
try:
import gnome
gnome.help_display(self.__programName)
except:
pass
def autoQuit(self, widget, *args):
gtk.main_quit()
def autoAbout(self, widget, *args):
cond = FN()
debug("%s: widget=%s args=%s" % (cond, widget, args), cond)
ago = self.__autoGladeObjects[AGO_DIALOG_ABOUT]
if ago:
dialog = ago.getWidget()
if dialog:
dialog.run()
def autoNew(self, widget, *args):
cond = FN()
debug("%s: widget=%s args=%s" % (cond, widget, args), cond)
autoNewMethod = None
methodOrWidgetName = None
name = widget.get_name()
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
if isinstance(methodOrWidget, gtk.TextView):
tv = methodOrWidget
widget = tv
if tv.get_buffer().get_modified():
debug("autoNew: buffer was modified, should save !", cond)
# FIXME
# this is copied from autoOpen (refactor!)
try:
filename = self.__autoProperties[methodOrWidgetName]['filename']
message_format = "Do you want to save changes to %s ?" % \
filename
except:
filename = None
message_format = "Do you want to save changes ?"
dialog = gtk.MessageDialog(parent=None,
type=gtk.MESSAGE_QUESTION,
buttons=gtk.BUTTONS_YES_NO,
message_format=message_format)
resp = dialog.run()
debug("%s: resp=%s" % (cond, resp), cond)
dialog.hide()
     if resp == gtk.RESPONSE_YES:  # BUTTONS_YES_NO dialogs answer RESPONSE_YES/NO, not ACCEPT
self.autoSave(tv, args)
debug("%s: new(%s)" % (cond, widget), cond)
self.new(widget)
try:
del self.__autoProperties[methodOrWidgetName]['filename']
debug("%s: deleting property filename for %s" % (cond, methodOrWidgetName), cond)
except Exception, ex:
   # re-raise unless it was just the 'filename' property missing because it was never set
if ex.message != methodOrWidgetName:
raise Exception(ex)
def autoOpen(self, widget, *args):
cond = FN()
autoOpenMethod = None
name = widget.get_name()
debug("%s: name: %s" % (cond, name), cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("%s: method or widget: %s" % (cond, methodOrWidget), cond)
if isinstance(methodOrWidget, gtk.TextView):
widget = methodOrWidget
if widget.get_buffer().get_modified():
debug("%s: buffer was modified, should save !" % cond, cond)
     # gtk.BUTTONS_* constants pre-define the dialog's button set; if none fit, use gtk.BUTTONS_NONE and call add_buttons()
try:
filename = self.__autoProperties[methodOrWidgetName]['filename']
message_format = "Do you want to save changes to %s ?" % \
filename
except:
filename = None
message_format = "Do you want to save changes ?"
dialog = gtk.MessageDialog(parent=None,
type=gtk.MESSAGE_QUESTION,
buttons=gtk.BUTTONS_YES_NO,
message_format=message_format)
resp = dialog.run()
debug("%s: resp=%s" % (cond, resp), cond)
dialog.hide()
     if resp == gtk.RESPONSE_YES:  # BUTTONS_YES_NO dialogs answer RESPONSE_YES/NO, not ACCEPT
self.autoSave(widget, args)
fcd = gtk.FileChooserDialog(parent=None,
buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_REJECT,
gtk.STOCK_OPEN, gtk.RESPONSE_ACCEPT))
# FIXME
  # why allow multiple selection when it is then discarded in self.open() below?
#fcd.set_select_multiple(True)
fcd.set_select_multiple(False)
resp = fcd.run()
fcd.hide()
if resp == gtk.RESPONSE_ACCEPT:
self.open(fcd.get_filenames()[0], widget)
def autoSaveas(self, widget, *args):
cond = FN()
autoOpenMethod = None
name = widget.get_name()
debug("%s: name: %s" % (cond, name), cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
fcd = gtk.FileChooserDialog(parent=None,
action=gtk.FILE_CHOOSER_ACTION_SAVE,
buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_REJECT,
gtk.STOCK_SAVE, gtk.RESPONSE_ACCEPT))
fcd.set_select_multiple(False)
try:
fcd.set_filename(self.__autoProperties[methodOrWidgetName]['filename'])
except:
pass
resp = fcd.run()
fcd.hide()
if resp == gtk.RESPONSE_ACCEPT:
if m:
methodOrWidget = getattr(self, methodOrWidgetName)
debug("autoOpen: method or widget: %s" % methodOrWidget, cond)
if isinstance(methodOrWidget, gtk.TextView):
widget = methodOrWidget
name = methodOrWidgetName
filename = fcd.get_filenames()[0]
self.save(filename, widget)
self.__autoProperties.setdefault(methodOrWidgetName, {}).update(
{'filename':filename})
def autoSave(self, widget, *args):
cond = FN()
autoOpenMethod = None
name = widget.get_name()
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
try:
filename = self.__autoProperties[methodOrWidgetName]['filename']
except:
return self.autoSaveas(widget, args)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("%s: method or widget: %s" % (cond, methodOrWidget), cond)
if isinstance(methodOrWidget, gtk.TextView):
widget = methodOrWidget
self.save(filename, widget)
def autoCopy(self, widget, *args):
cond = FN()
name = widget.get_name()
debug("autoCopy: name: %s" % name, cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("autoCopy: method: %s" % methodOrWidget, cond)
if isinstance(methodOrWidget, gtk.TextView):
tv = methodOrWidget
tv.get_buffer().copy_clipboard(gtk.Clipboard())
def autoCut(self, widget, *args):
cond = FN()
name = widget.get_name()
debug("autoCut: name: %s" % name, cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("autoCut: method: %s" % methodOrWidget, cond)
if isinstance(methodOrWidget, gtk.TextView):
tv = methodOrWidget
tv.get_buffer().cut_clipboard(gtk.Clipboard(), tv.get_editable())
def autoPaste(self, widget, *args):
cond = FN()
name = widget.get_name()
debug("autoCopy: name: %s" % name, cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("autoCopy: method: %s" % methodOrWidget, cond)
if isinstance(methodOrWidget, gtk.TextView):
tv = methodOrWidget
tv.get_buffer().paste_clipboard(gtk.Clipboard(), None,
tv.get_editable())
def autoDelete(self, widget, *args):
cond = FN()
name = widget.get_name()
debug("autoCopy: name: %s" % name, cond)
m = self.__reAutoInvoke.match(name)
if m:
methodOrWidgetName = m.group(AUTO_INVOKE_METHOD)
methodOrWidget = getattr(self, methodOrWidgetName)
debug("autoCopy: method: %s" % methodOrWidget, cond)
if isinstance(methodOrWidget, gtk.TextView):
tv = methodOrWidget
tv.get_buffer().delete_selection(True, tv.get_editable())
def autoPreferences(self, widget, *args):
if self.DEBUG:
print >>sys.stderr, "autoPreferences: name: %s" % self.__menuItemPreferences.getName()
print >>sys.stderr, "autoPreferences: ***** SEMI IMPLEMENTED *****"
# do something with preferences values
# FIXME
# this code is copied form autoRun
# needs to be refactored
#resp = self.__preferencesDialog.getWidget().run()
ago = self.__autoGladeObjects[AGO_DIALOG_PREFERENCES]
if ago:
dialog = ago.getWidget()
resp = dialog.run()
if self.DEBUG:
print >>sys.stderr, "\tresp=", resp
if resp in [gtk.RESPONSE_CLOSE, gtk.RESPONSE_OK,
gtk.RESPONSE_CANCEL,
gtk.RESPONSE_DELETE_EVENT, gtk.RESPONSE_NONE]:
dialog.hide()
return resp
elif resp == gtk.RESPONSE_HELP:
self.autoHelp()
def printval(self, *args):
str = "No value"
exitval = -1
if self.DEBUG:
print "1) printval: args=%s len=%d" % (args, len(args))
# msg must be flattened
if isinstance(args, tuple):
args = args[0]
if isinstance(args, tuple):
args = args[0]
if args and len(args) > 0:
str = args[0]
if args and len(args) > 1:
exitval = int(args[1])
# print the str
print str
if exitval >= 0:
sys.exit(exitval)
def new(self, widget):
if widget:
if isinstance(widget, gtk.TextView):
tv = widget
tv.get_buffer().set_text("")
tv.get_buffer().set_modified(False)
def open(self, filename, widget):
cond = FN()
debug("%s: open filename=%s" % (cond, filename), cond)
if widget:
name = widget.get_name()
debug("%s: and set %s" % (cond, name), cond)
if isinstance(widget, gtk.TextView):
tv = widget
f = open(filename)
tv.get_buffer().set_text(f.read())
f.close()
tv.get_buffer().set_modified(False)
self.__autoProperties.setdefault(name, {}).update(
{'filename':filename})
def save(self, filename, widget):
cond = FN()
debug("%s: save filename=%s" % (cond, filename), cond)
if widget:
debug("%s: from %s" % (cond, widget.get_name()), cond)
if isinstance(widget, gtk.TextView):
tv = widget
f = open(filename, "w")
buf = tv.get_buffer()
(start, end) = buf.get_bounds()
f.write(buf.get_text(start, end))
f.close()
tv.get_buffer().set_modified(False)
def abbreviations(self):
self.aed = self.autoErrorDialog
self.aid = self.autoInfoDialog
self.aqd = self.autoQuestionDialog
self.awd = self.autoWarningDialog
self.apb = self.autoProgressBar
self.aat = self.autoAddTimeout
# Utility methods
# not AutoGlade class members
def treeview_toogle_expansion(treeview, path):
if treeview.row_expanded(path):
treeview.collapse_row(path)
else:
treeview.expand_row(path, open_all=False)
usage = "usage: autoglade [options] [file.glade]"
if __name__ == "__main__":
autorun = True
parser = OptionParser(usage=usage,
version = "%s version %s (%s)" % (prog, version, revision))
parser.add_option("-?", action="help",
help="show this help message and exit")
parser.add_option("-V", "--long-version", action="store_true",
dest="longversion", help="Get the long version message")
parser.add_option("-i", "--autoinit", type="string",
dest="autoinit",
help="Pass an autoinit sequence")
parser.add_option("", "--autoinit-split", type="string",
dest="autoinitSplit", default=':',
help="Split autoinit sequence at specified delimiter (NONE to avoid splitting")
parser.add_option("-d", "--autodump", type="string",
dest="autodump", default='shell',
help="Use the specified syntax type for autodump (shell, text)")
parser.add_option("", "--get-widget-names", action="store_true",
dest="getWidgetNames", default=False,
help="Get the list of widget names in the glade file")
parser.add_option("", "--get-signal-handlers", action="store_true",
dest="getSignalHandlers", default=False,
help="Get the list of signal handlers in the glade file")
parser.add_option("", "--widget-class-filter", type="string",
dest="widgetClassFilter", default=None,
help="Specifies a widget class filter for some operations")
parser.add_option("", "--widget-canonical-names", action="store_true",
dest="widgetCanonicalNames", default=False,
help="Widget's canonical names instead of autoglade full names")
parser.add_option("-r", "--root", type="string",
dest="root", default=None,
help="Name of the root widget")
parser.add_option("-x", "--debug", type="string",
dest="debug", help="Print debug messages", default=DEBUG)
(options, args) = parser.parse_args()
DEBUG.append(options.debug)
if options.longversion:
print "autoglade version %s (%s)" % (version, revision)
print __license__
sys.exit(0)
l = len(args)
if l >= 1:
glade = args[0]
cmdlineargs = args
elif l == 0:
glade = None
cmdlineargs = None
if options.getWidgetNames or options.getSignalHandlers:
print >>sys.stderr, "ERROR: to obtain the list of widget names or signal handlers a glade file must be specified"
sys.exit(1)
if not options.autoinit:
print >>sys.stderr, "WARNING: autoinit is empty and no glade file specified."
else:
print >>sys.stderr, usage
sys.exit(1)
debug("root=%s" % options.root, True)
debug("autoinit=%s" % options.autoinit)
debug("autonitsplit=%s" % options.autoinitSplit)
debug("autodump=%s" % options.autodump)
if options.getWidgetNames or options.getSignalHandlers:
autorun = False
ag = AutoGlade(glade, autorun=autorun, root=options.root,
autoinit=options.autoinit, autoinitSplit=options.autoinitSplit,
autodump=options.autodump)
if not autorun:
if options.getWidgetNames:
for wn in ag.getWidgetNames(options.widgetClassFilter,
options.widgetCanonicalNames):
print wn
elif options.getSignalHandlers:
for (sh, s) in ag.getSignalHandlers().iteritems():
print "%s: %s %s" % (sh, s[1], s[0])
| mit |
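The `autoDump` machinery in the file above serializes widget values in a shell- or text-style syntax via `self.__autoDumpMap[self.__autoDump]`. A minimal standalone sketch of a shell-style dump (Python 3; the function name and the name/value pairs are illustrative assumptions, not autoglade's API):

```python
import shlex

def dump_shell(pairs):
    # Emit one name=value line per widget, shell-quoting each value
    # (a toy stand-in for autoglade's shell-syntax autodump).
    return "\n".join("%s=%s" % (name, shlex.quote(str(value)))
                     for name, value in pairs)

print(dump_shell([("entry1", "hello world"), ("check1", True)]))
# prints:
# entry1='hello world'
# check1=True
```

This only illustrates the output shape; the real dump walks widget classes (GtkEntry, GtkComboBox, ...) as shown in `autoDumpValues` above.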
youdar/work | work/Clashes/Old work/collect_pdb_files_with_different_clashscore.py | 1 | 4379 | from __future__ import division
import cPickle as pickle
import os
'''
Collect pdb files that have very different clashscore and nb_clashscore,
so we can look at the cause and evaluate the difference between the scoring methods.
'''
def get_data():
''' () -> dict,dict,list
 Read results of the clash score survey for PROBE clashscore and restraints manager nonbonded clashscore
c:\Phenix\Dev\Work\work\Clashes\Data\clashscore_compare_ready_set-12-5-13_dict
pdb_clash_scores = list([score_with_hydrogen,score_without_hydrogen]...)
 pdb_clash_score_and_name = list([score_with_hydrogen,score_without_hydrogen,experiment_type,file_name]...)
 pdb_clash_score_dict[file_name] = [score_with_hydrogen,score_without_hydrogen,experiment_type]
Returns:
data_dict: data_dict[pdb_file_name] = [total_nb_clashscore,without_sym_nb_clashscore,clashscore_probe]
experiment_type_dict: experiment_type_dict[experiment_type] = list of pdb_file_name
pdb_file_list: a list of all files that were compared (those are all the pdb files with clashscore < 50)
>>>experiment_type_dict['NMR'][:10]
['103d', '124d', '141d', '142d', '169d', '175d', '1a1d', '1ac3', '1al5', '1anp']
>>>data_dict['142d']
[0.0, 0.0, 0.0]
'''
datapath = os.path.realpath('c:\Phenix\Dev\Work\work\Clashes\Data')
#data_dict_file = 'clashscore_compare_reduce_12_6_2013_dict' # Probe O vdw is 1.4
data_dict_file = 'clashscore_compare_reduce_12_11_2013_dict' # Probe O vdw is 1.52
experiment_dict_file = 'experiment_type_to_files_dict' # source for experiment_type_dict
# Get data
data_dict = pickle.load(open(os.path.join(datapath,data_dict_file),'r'))
experiment_type_dict = pickle.load(open(os.path.join(datapath,experiment_dict_file),'r'))
# Collect all files that we compared
pdb_file_list = [key for key in data_dict]
return data_dict,experiment_type_dict,pdb_file_list
def run():
data_dict,experiment_type_dict,pdb_file_list = get_data()
 # Collect files where abs(clashscore - nb_clashscore) exceeds the delta/ratio thresholds below
results = {}
delta = 30
ratio = 0.3
print 'The number of files with abs(clashscore - without_sym_nb_clashscore) > {}: '.format(delta)
print '-'*85
for experiment_type in experiment_type_dict:
temp = [x for x in experiment_type_dict[experiment_type]
if data_dict.has_key(x) and (abs(data_dict[x][1]-data_dict[x][2]) > delta)]
results[experiment_type] = temp
print '{0:30} has {1:>5} out of {2:>5}'.format(
experiment_type,len(temp),len(experiment_type_dict[experiment_type]))
if experiment_type == 'X-RAY DIFFRACTION':
i = 0
for x in temp:
if data_dict[x][1] > data_dict[x][2]:
print x,data_dict[x]
i += 1
if i>15: break
#for x in temp[50:55]:
#print x,data_dict[x]
print '='*85
print 'The number of files with abs(clashscore - nb_clashscore) > {}: '.format(delta)
print '-'*85
for experiment_type in experiment_type_dict:
temp = [x for x in experiment_type_dict[experiment_type]
if data_dict.has_key(x) and (abs(data_dict[x][0]-data_dict[x][2]) > delta)]
results[experiment_type] = temp
print '{0:30} has {1:>5} out of {2:>5}'.format(
experiment_type,len(temp),len(experiment_type_dict[experiment_type]))
print '='*85
print 'The number of files with abs(clashscore - nb_clashscore)/(clashscore + 0.001) > {}: '.format(ratio)
print '-'*85
for experiment_type in experiment_type_dict:
temp = [x for x in experiment_type_dict[experiment_type]
if data_dict.has_key(x) and (abs(data_dict[x][0]-data_dict[x][2])/(data_dict[x][2] + 0.001)> ratio)]
results[experiment_type] = temp
print '{0:30} has {1:>5} out of {2:>5}'.format(
experiment_type,len(temp),len(experiment_type_dict[experiment_type]))
print '='*85
print 'The number of files with abs(total_nb_clashscore - without_sym_nb_clashscore) > {}: '.format(delta)
print '-'*85
for experiment_type in experiment_type_dict:
temp = [x for x in experiment_type_dict[experiment_type]
if data_dict.has_key(x) and (abs(data_dict[x][0]-data_dict[x][1]) > delta)]
results[experiment_type] = temp
print '{0:30} has {1:>5} out of {2:>5}'.format(
experiment_type,len(temp),len(experiment_type_dict[experiment_type]))
print '='*85
print 'Done...'
if __name__=='__main__':
run() | mit |
deepaks4077/myjam | node_modules/socket.io-client/node_modules/engine.io-client/node_modules/engine.io-parser/node_modules/utf8/tests/generate-test-data.py | 2214 | 1347 | #!/usr/bin/env python
import re
import json
# http://mathiasbynens.be/notes/javascript-encoding#surrogate-formulae
# http://stackoverflow.com/a/13436167/96656
def unisymbol(codePoint):
if codePoint >= 0x0000 and codePoint <= 0xFFFF:
return unichr(codePoint)
elif codePoint >= 0x010000 and codePoint <= 0x10FFFF:
highSurrogate = int((codePoint - 0x10000) / 0x400) + 0xD800
lowSurrogate = int((codePoint - 0x10000) % 0x400) + 0xDC00
return unichr(highSurrogate) + unichr(lowSurrogate)
else:
return 'Error'
def hexify(codePoint):
return 'U+' + hex(codePoint)[2:].upper().zfill(6)
def writeFile(filename, contents):
print filename
with open(filename, 'w') as f:
f.write(contents.strip() + '\n')
data = []
for codePoint in range(0x000000, 0x10FFFF + 1):
symbol = unisymbol(codePoint)
# http://stackoverflow.com/a/17199950/96656
bytes = symbol.encode('utf8').decode('latin1')
data.append({
'codePoint': codePoint,
'decoded': symbol,
'encoded': bytes
})
jsonData = json.dumps(data, sort_keys=False, indent=2, separators=(',', ': '))
# Use tabs instead of double spaces for indentation
jsonData = jsonData.replace(' ', '\t')
# Escape hexadecimal digits in escape sequences
jsonData = re.sub(
r'\\u([a-fA-F0-9]{4})',
lambda match: r'\u{}'.format(match.group(1).upper()),
jsonData
)
writeFile('data.json', jsonData)
| mit |
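The `unisymbol` helper above applies the standard surrogate-pair formulae linked in its comments. The same split can be sketched standalone in Python 3 (`>> 10` and `& 0x3FF` are the bit-level equivalents of the script's `/ 0x400` and `% 0x400`):

```python
def to_surrogate_pair(code_point):
    # Split an astral code point (U+10000..U+10FFFF) into its UTF-16
    # surrogate pair, mirroring the arithmetic in unisymbol() above.
    assert 0x10000 <= code_point <= 0x10FFFF
    offset = code_point - 0x10000
    high = 0xD800 + (offset >> 10)    # top 10 bits of the offset
    low = 0xDC00 + (offset & 0x3FF)   # bottom 10 bits of the offset
    return high, low

print([hex(u) for u in to_surrogate_pair(0x1F600)])
# prints ['0xd83d', '0xde00']
```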
uw-it-aca/myuw | myuw/views/api/__init__.py | 1 | 1385 | # Copyright 2021 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
import json
import re
from django.http import HttpResponse
from django.views import View
from django.utils.decorators import method_decorator
from django.contrib.auth.decorators import login_required
from myuw.views import prefetch_resources
SPACE_PATTERN = r'%20'
AMP_PATTERN = r'%26'
def unescape_curriculum_abbr(cur_abb):
if re.search(SPACE_PATTERN, cur_abb):
cur_abb = re.sub(SPACE_PATTERN, ' ', cur_abb)
if re.search(AMP_PATTERN, cur_abb):
cur_abb = re.sub(AMP_PATTERN, '&', cur_abb)
return cur_abb
def json_serializer(obj):
if hasattr(obj, 'isoformat'):
return obj.isoformat()
class OpenAPI(View):
"""
Default MyUW API class, does not require AuthN.
"""
def json_response(self, content='', status=200):
return HttpResponse(json.dumps(content, default=str),
status=status,
content_type='application/json')
def html_response(self, content='', status=200):
return HttpResponse(content,
status=status,
content_type='text/html')
@method_decorator(login_required, name='dispatch')
class ProtectedAPI(OpenAPI):
"""
Protected MyUW API class that adds login AuthN requirement.
"""
pass
| apache-2.0 |
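`OpenAPI.json_response` above passes `default=str` to `json.dumps`, while the module's `json_serializer` helper shows the isoformat variant of the same hook. A small self-contained sketch of how such a `default` callback behaves (the payload is an illustrative assumption; unlike the helper above, this version raises for unsupported types rather than falling through):

```python
import datetime
import json

def json_serializer(obj):
    # Objects with isoformat() (date, datetime, time) become ISO-8601 strings.
    if hasattr(obj, 'isoformat'):
        return obj.isoformat()
    raise TypeError("not JSON serializable: %r" % (obj,))

payload = {"quarter": "spring", "start": datetime.date(2021, 3, 29)}
print(json.dumps(payload, default=json_serializer, sort_keys=True))
# prints {"quarter": "spring", "start": "2021-03-29"}
```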
dimaspivak/docker-py | docker/utils/ports.py | 5 | 2796 | import re
PORT_SPEC = re.compile(
 r"^" # Match full string
 r"(" # External part
 r"((?P<host>[a-fA-F\d.:]+):)?" # Address
 r"(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:" # External range
 r")?"
 r"(?P<int>[\d]+)(-(?P<int_end>[\d]+))?" # Internal range
 r"(?P<proto>/(udp|tcp))?" # Protocol
 r"$" # Match full string
 )
def add_port_mapping(port_bindings, internal_port, external):
if internal_port in port_bindings:
port_bindings[internal_port].append(external)
else:
port_bindings[internal_port] = [external]
def add_port(port_bindings, internal_port_range, external_range):
if external_range is None:
for internal_port in internal_port_range:
add_port_mapping(port_bindings, internal_port, None)
else:
ports = zip(internal_port_range, external_range)
for internal_port, external_port in ports:
add_port_mapping(port_bindings, internal_port, external_port)
def build_port_bindings(ports):
port_bindings = {}
for port in ports:
internal_port_range, external_range = split_port(port)
add_port(port_bindings, internal_port_range, external_range)
return port_bindings
def _raise_invalid_port(port):
raise ValueError('Invalid port "%s", should be '
'[[remote_ip:]remote_port[-remote_port]:]'
'port[/protocol]' % port)
def port_range(start, end, proto, randomly_available_port=False):
if not start:
return start
if not end:
return [start + proto]
if randomly_available_port:
return ['{}-{}'.format(start, end) + proto]
return [str(port) + proto for port in range(int(start), int(end) + 1)]
def split_port(port):
if hasattr(port, 'legacy_repr'):
# This is the worst hack, but it prevents a bug in Compose 1.14.0
# https://github.com/docker/docker-py/issues/1668
# TODO: remove once fixed in Compose stable
port = port.legacy_repr()
port = str(port)
match = PORT_SPEC.match(port)
if match is None:
_raise_invalid_port(port)
parts = match.groupdict()
host = parts['host']
proto = parts['proto'] or ''
internal = port_range(parts['int'], parts['int_end'], proto)
external = port_range(
parts['ext'], parts['ext_end'], '', len(internal) == 1)
if host is None:
if external is not None and len(internal) != len(external):
raise ValueError('Port ranges don\'t match in length')
return internal, external
else:
if not external:
external = [None] * len(internal)
elif len(internal) != len(external):
raise ValueError('Port ranges don\'t match in length')
return internal, [(host, ext_port) for ext_port in external]
| apache-2.0 |
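The `PORT_SPEC` regex above is what `split_port` builds on. A standalone sketch (the pattern is copied from the module; the sample port strings are illustrative) showing how the named groups decompose a binding:

```python
import re

# Same pattern as docker-py's PORT_SPEC above, written with raw strings.
PORT_SPEC = re.compile(
    r"^"
    r"("                                        # external part
    r"((?P<host>[a-fA-F\d.:]+):)?"              # optional address
    r"(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:"    # external port range
    r")?"
    r"(?P<int>[\d]+)(-(?P<int_end>[\d]+))?"     # internal port range
    r"(?P<proto>/(udp|tcp))?"                   # optional protocol
    r"$"
)

parts = PORT_SPEC.match("127.0.0.1:8080:80/tcp").groupdict()
print(parts["host"], parts["ext"], parts["int"], parts["proto"])
# prints 127.0.0.1 8080 80 /tcp
```

A non-matching string makes `match` return `None`, which is what `split_port` turns into its "Invalid port" error.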
petecummings/django | tests/foreign_object/test_forms.py | 379 | 1098 | import datetime
from django import forms
from django.test import TestCase
from .models import Article
class FormsTests(TestCase):
# ForeignObjects should not have any form fields, currently the user needs
# to manually deal with the foreignobject relation.
class ArticleForm(forms.ModelForm):
class Meta:
model = Article
fields = '__all__'
def test_foreign_object_form(self):
# A very crude test checking that the non-concrete fields do not get form fields.
form = FormsTests.ArticleForm()
self.assertIn('id_pub_date', form.as_table())
self.assertNotIn('active_translation', form.as_table())
form = FormsTests.ArticleForm(data={'pub_date': str(datetime.date.today())})
self.assertTrue(form.is_valid())
a = form.save()
self.assertEqual(a.pub_date, datetime.date.today())
form = FormsTests.ArticleForm(instance=a, data={'pub_date': '2013-01-01'})
a2 = form.save()
self.assertEqual(a.pk, a2.pk)
self.assertEqual(a2.pub_date, datetime.date(2013, 1, 1))
| bsd-3-clause |
shakamunyi/neutron | neutron/tests/unit/services/metering/test_metering_plugin.py | 6 | 21497 | # Copyright (C) 2013 eNovance SAS <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import timeutils
from neutron.api.v2 import attributes as attr
from neutron.common import constants as n_constants
from neutron.common import topics
from neutron import context
from neutron.db import agents_db
from neutron.db import l3_agentschedulers_db
from neutron.db.metering import metering_rpc
from neutron.extensions import l3 as ext_l3
from neutron.extensions import metering as ext_metering
from neutron import manager
from neutron.openstack.common import uuidutils
from neutron.plugins.common import constants
from neutron.tests.unit.db.metering import test_metering_db
from neutron.tests.unit.db import test_db_base_plugin_v2
from neutron.tests.unit.extensions import test_l3
_uuid = uuidutils.generate_uuid
METERING_SERVICE_PLUGIN_KLASS = (
"neutron.services.metering."
"metering_plugin.MeteringPlugin"
)
class MeteringTestExtensionManager(object):
def get_resources(self):
attr.RESOURCE_ATTRIBUTE_MAP.update(ext_metering.RESOURCE_ATTRIBUTE_MAP)
attr.RESOURCE_ATTRIBUTE_MAP.update(ext_l3.RESOURCE_ATTRIBUTE_MAP)
l3_res = ext_l3.L3.get_resources()
metering_res = ext_metering.Metering.get_resources()
return l3_res + metering_res
def get_actions(self):
return []
def get_request_extensions(self):
return []
class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
test_l3.L3NatTestCaseMixin,
test_metering_db.MeteringPluginDbTestCaseMixin):
resource_prefix_map = dict(
(k.replace('_', '-'), constants.COMMON_PREFIXES[constants.METERING])
for k in ext_metering.RESOURCE_ATTRIBUTE_MAP.keys()
)
def setUp(self):
plugin = 'neutron.tests.unit.extensions.test_l3.TestL3NatIntPlugin'
service_plugins = {'metering_plugin_name':
METERING_SERVICE_PLUGIN_KLASS}
ext_mgr = MeteringTestExtensionManager()
super(TestMeteringPlugin, self).setUp(plugin=plugin, ext_mgr=ext_mgr,
service_plugins=service_plugins)
self.uuid = '654f6b9d-0f36-4ae5-bd1b-01616794ca60'
uuid = 'neutron.openstack.common.uuidutils.generate_uuid'
self.uuid_patch = mock.patch(uuid, return_value=self.uuid)
self.mock_uuid = self.uuid_patch.start()
self.tenant_id = 'a7e61382-47b8-4d40-bae3-f95981b5637b'
self.ctx = context.Context('', self.tenant_id, is_admin=True)
self.context_patch = mock.patch('neutron.context.Context',
return_value=self.ctx)
self.mock_context = self.context_patch.start()
self.topic = 'metering_agent'
add = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.add_metering_label')
self.add_patch = mock.patch(add)
self.mock_add = self.add_patch.start()
remove = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.remove_metering_label')
self.remove_patch = mock.patch(remove)
self.mock_remove = self.remove_patch.start()
update = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.update_metering_label_rules')
self.update_patch = mock.patch(update)
self.mock_update = self.update_patch.start()
add_rule = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.add_metering_label_rule')
self.add_rule_patch = mock.patch(add_rule)
self.mock_add_rule = self.add_rule_patch.start()
remove_rule = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.remove_metering_label_rule')
self.remove_rule_patch = mock.patch(remove_rule)
self.mock_remove_rule = self.remove_rule_patch.start()
def test_add_metering_label_rpc_call(self):
second_uuid = 'e27fe2df-376e-4ac7-ae13-92f050a21f84'
expected = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': self.uuid}],
'id': self.uuid}]
tenant_id_2 = '8a268a58-1610-4890-87e0-07abb8231206'
self.mock_uuid.return_value = second_uuid
with self.router(name='router2', tenant_id=tenant_id_2,
set_context=True):
self.mock_uuid.return_value = self.uuid
with self.router(name='router1', tenant_id=self.tenant_id,
set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True):
self.mock_add.assert_called_with(self.ctx, expected)
def test_add_metering_label_shared_rpc_call(self):
second_uuid = 'e27fe2df-376e-4ac7-ae13-92f050a21f84'
expected = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': self.uuid},
{'rules': [],
'id': second_uuid}],
'id': self.uuid}]
tenant_id_2 = '8a268a58-1610-4890-87e0-07abb8231206'
with self.router(name='router1', tenant_id=self.tenant_id,
set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True):
self.mock_uuid.return_value = second_uuid
with self.metering_label(tenant_id=tenant_id_2, shared=True,
set_context=True):
self.mock_add.assert_called_with(self.ctx, expected)
def test_remove_metering_label_rpc_call(self):
expected = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': self.uuid}],
'id': self.uuid}]
with self.router(tenant_id=self.tenant_id, set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True) as label:
self.mock_add.assert_called_with(self.ctx, expected)
self._delete('metering-labels',
label['metering_label']['id'])
self.mock_remove.assert_called_with(self.ctx, expected)
def test_remove_one_metering_label_rpc_call(self):
second_uuid = 'e27fe2df-376e-4ac7-ae13-92f050a21f84'
expected_add = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': self.uuid},
{'rules': [],
'id': second_uuid}],
'id': self.uuid}]
expected_remove = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': second_uuid}],
'id': self.uuid}]
with self.router(tenant_id=self.tenant_id, set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True):
self.mock_uuid.return_value = second_uuid
with self.metering_label(tenant_id=self.tenant_id,
set_context=True) as label:
self.mock_add.assert_called_with(self.ctx, expected_add)
self._delete('metering-labels',
label['metering_label']['id'])
self.mock_remove.assert_called_with(self.ctx, expected_remove)
def test_add_and_remove_metering_label_rule_rpc_call(self):
second_uuid = 'e27fe2df-376e-4ac7-ae13-92f050a21f84'
expected_add = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rule': {
'remote_ip_prefix': '10.0.0.0/24',
'direction': 'ingress',
'metering_label_id': self.uuid,
'excluded': False,
'id': second_uuid},
'id': self.uuid}],
'id': self.uuid}]
expected_del = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rule': {
'remote_ip_prefix': '10.0.0.0/24',
'direction': 'ingress',
'metering_label_id': self.uuid,
'excluded': False,
'id': second_uuid},
'id': self.uuid}],
'id': self.uuid}]
with self.router(tenant_id=self.tenant_id, set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True) as label:
l = label['metering_label']
self.mock_uuid.return_value = second_uuid
with self.metering_label_rule(l['id']):
self.mock_add_rule.assert_called_with(self.ctx,
expected_add)
self._delete('metering-label-rules', second_uuid)
self.mock_remove_rule.assert_called_with(self.ctx,
expected_del)
def test_delete_metering_label_does_not_clear_router_tenant_id(self):
tenant_id = '654f6b9d-0f36-4ae5-bd1b-01616794ca60'
with self.metering_label(tenant_id=tenant_id) as metering_label:
with self.router(tenant_id=tenant_id, set_context=True) as r:
router = self._show('routers', r['router']['id'])
self.assertEqual(tenant_id, router['router']['tenant_id'])
metering_label_id = metering_label['metering_label']['id']
self._delete('metering-labels', metering_label_id, 204)
router = self._show('routers', r['router']['id'])
self.assertEqual(tenant_id, router['router']['tenant_id'])
class TestMeteringPluginL3AgentScheduler(
l3_agentschedulers_db.L3AgentSchedulerDbMixin,
test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
test_l3.L3NatTestCaseMixin,
test_metering_db.MeteringPluginDbTestCaseMixin):
resource_prefix_map = dict(
(k.replace('_', '-'), constants.COMMON_PREFIXES[constants.METERING])
for k in ext_metering.RESOURCE_ATTRIBUTE_MAP.keys()
)
def setUp(self, plugin_str=None, service_plugins=None, scheduler=None):
if not plugin_str:
plugin_str = ('neutron.tests.unit.extensions.test_l3.'
'TestL3NatIntAgentSchedulingPlugin')
if not service_plugins:
service_plugins = {'metering_plugin_name':
METERING_SERVICE_PLUGIN_KLASS}
if not scheduler:
scheduler = plugin_str
ext_mgr = MeteringTestExtensionManager()
super(TestMeteringPluginL3AgentScheduler,
self).setUp(plugin=plugin_str, ext_mgr=ext_mgr,
service_plugins=service_plugins)
self.uuid = '654f6b9d-0f36-4ae5-bd1b-01616794ca60'
uuid = 'neutron.openstack.common.uuidutils.generate_uuid'
self.uuid_patch = mock.patch(uuid, return_value=self.uuid)
self.mock_uuid = self.uuid_patch.start()
self.tenant_id = 'a7e61382-47b8-4d40-bae3-f95981b5637b'
self.ctx = context.Context('', self.tenant_id, is_admin=True)
self.context_patch = mock.patch('neutron.context.Context',
return_value=self.ctx)
self.mock_context = self.context_patch.start()
self.l3routers_patch = mock.patch(scheduler +
'.get_l3_agents_hosting_routers')
self.l3routers_mock = self.l3routers_patch.start()
self.topic = 'metering_agent'
add = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.add_metering_label')
self.add_patch = mock.patch(add)
self.mock_add = self.add_patch.start()
remove = ('neutron.api.rpc.agentnotifiers.' +
'metering_rpc_agent_api.MeteringAgentNotifyAPI' +
'.remove_metering_label')
self.remove_patch = mock.patch(remove)
self.mock_remove = self.remove_patch.start()
def test_add_metering_label_rpc_call(self):
second_uuid = 'e27fe2df-376e-4ac7-ae13-92f050a21f84'
expected = [{'status': 'ACTIVE',
'name': 'router1',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': second_uuid}],
'id': self.uuid},
{'status': 'ACTIVE',
'name': 'router2',
'gw_port_id': None,
'admin_state_up': True,
'tenant_id': self.tenant_id,
'_metering_labels': [
{'rules': [],
'id': second_uuid}],
'id': second_uuid}]
# bind each router to a specific agent
agent1 = agents_db.Agent(host='agent1')
agent2 = agents_db.Agent(host='agent2')
agents = {self.uuid: agent1,
second_uuid: agent2}
def side_effect(context, routers, admin_state_up, active):
return [agents[routers[0]]]
self.l3routers_mock.side_effect = side_effect
with self.router(name='router1', tenant_id=self.tenant_id,
set_context=True):
self.mock_uuid.return_value = second_uuid
with self.router(name='router2', tenant_id=self.tenant_id,
set_context=True):
with self.metering_label(tenant_id=self.tenant_id,
set_context=True):
self.mock_add.assert_called_with(self.ctx, expected)
class TestMeteringPluginL3AgentSchedulerServicePlugin(
TestMeteringPluginL3AgentScheduler):
"""Unit tests for the case where separate service plugin
implements L3 routing.
"""
def setUp(self):
l3_plugin = ('neutron.tests.unit.extensions.test_l3.'
'TestL3NatAgentSchedulingServicePlugin')
service_plugins = {'metering_plugin_name':
METERING_SERVICE_PLUGIN_KLASS,
'l3_plugin_name': l3_plugin}
plugin_str = ('neutron.tests.unit.extensions.test_l3.'
'TestNoL3NatPlugin')
super(TestMeteringPluginL3AgentSchedulerServicePlugin, self).setUp(
plugin_str=plugin_str, service_plugins=service_plugins,
scheduler=l3_plugin)
class TestMeteringPluginRpcFromL3Agent(
test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
test_l3.L3NatTestCaseMixin,
test_metering_db.MeteringPluginDbTestCaseMixin):
resource_prefix_map = dict(
(k.replace('_', '-'), constants.COMMON_PREFIXES[constants.METERING])
for k in ext_metering.RESOURCE_ATTRIBUTE_MAP
)
def setUp(self):
service_plugins = {'metering_plugin_name':
METERING_SERVICE_PLUGIN_KLASS}
plugin = ('neutron.tests.unit.extensions.test_l3.'
'TestL3NatIntAgentSchedulingPlugin')
ext_mgr = MeteringTestExtensionManager()
super(TestMeteringPluginRpcFromL3Agent,
self).setUp(plugin=plugin, service_plugins=service_plugins,
ext_mgr=ext_mgr)
self.meter_plugin = manager.NeutronManager.get_service_plugins().get(
constants.METERING)
self.tenant_id = 'admin_tenant_id'
self.tenant_id_1 = 'tenant_id_1'
self.tenant_id_2 = 'tenant_id_2'
self.adminContext = context.get_admin_context()
self._register_l3_agent('agent1')
def _register_l3_agent(self, host):
agent = {
'binary': 'neutron-l3-agent',
'host': host,
'topic': topics.L3_AGENT,
'configurations': {},
'agent_type': n_constants.AGENT_TYPE_L3,
'start_flag': True
}
callback = agents_db.AgentExtRpcCallback()
callback.report_state(self.adminContext,
agent_state={'agent_state': agent},
time=timeutils.strtime())
def test_get_sync_data_metering(self):
with self.subnet() as subnet:
s = subnet['subnet']
self._set_net_external(s['network_id'])
with self.router(name='router1', subnet=subnet) as router:
r = router['router']
self._add_external_gateway_to_router(r['id'], s['network_id'])
with self.metering_label(tenant_id=r['tenant_id']):
callbacks = metering_rpc.MeteringRpcCallbacks(
self.meter_plugin)
data = callbacks.get_sync_data_metering(self.adminContext,
host='agent1')
self.assertEqual('router1', data[0]['name'])
self._register_l3_agent('agent2')
data = callbacks.get_sync_data_metering(self.adminContext,
host='agent2')
self.assertFalse(data)
self._remove_external_gateway_from_router(
r['id'], s['network_id'])
def test_get_sync_data_metering_shared(self):
with self.router(name='router1', tenant_id=self.tenant_id_1):
with self.router(name='router2', tenant_id=self.tenant_id_2):
with self.metering_label(tenant_id=self.tenant_id,
shared=True):
callbacks = metering_rpc.MeteringRpcCallbacks(
self.meter_plugin)
data = callbacks.get_sync_data_metering(self.adminContext)
routers = [router['name'] for router in data]
self.assertIn('router1', routers)
self.assertIn('router2', routers)
def test_get_sync_data_metering_not_shared(self):
with self.router(name='router1', tenant_id=self.tenant_id_1):
with self.router(name='router2', tenant_id=self.tenant_id_2):
with self.metering_label(tenant_id=self.tenant_id):
callbacks = metering_rpc.MeteringRpcCallbacks(
self.meter_plugin)
data = callbacks.get_sync_data_metering(self.adminContext)
routers = [router['name'] for router in data]
self.assertEqual([], routers)
| apache-2.0 |
Samsung/TizenRT | external/protobuf/python/google/protobuf/internal/api_implementation.py | 19 | 7070 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Determine which implementation of the protobuf API is used in this process.
"""
import os
import warnings
import sys
try:
# pylint: disable=g-import-not-at-top
from google.protobuf.internal import _api_implementation
# The compile-time constants in the _api_implementation module can be used to
# switch to a certain implementation of the Python API at build time.
_api_version = _api_implementation.api_version
_proto_extension_modules_exist_in_build = True
except ImportError:
_api_version = -1 # Unspecified by compiler flags.
_proto_extension_modules_exist_in_build = False
if _api_version == 1:
raise ValueError('api_version=1 is no longer supported.')
if _api_version < 0: # Still unspecified?
try:
# The presence of this module in a build allows the proto implementation to
# be upgraded merely via build deps rather than a compiler flag or the
# runtime environment variable.
# pylint: disable=g-import-not-at-top
from google.protobuf import _use_fast_cpp_protos
# Work around a known issue in the classic bootstrap .par import hook.
if not _use_fast_cpp_protos:
raise ImportError('_use_fast_cpp_protos import succeeded but was None')
del _use_fast_cpp_protos
_api_version = 2
except ImportError:
try:
# pylint: disable=g-import-not-at-top
from google.protobuf.internal import use_pure_python
del use_pure_python # Avoids a pylint error and namespace pollution.
except ImportError:
if _proto_extension_modules_exist_in_build:
if sys.version_info[0] >= 3: # Python 3 defaults to C++ impl v2.
_api_version = 2
# TODO(b/17427486): Make Python 2 default to C++ impl v2.
_default_implementation_type = (
'python' if _api_version <= 0 else 'cpp')
# This environment variable can be used to switch to a certain implementation
# of the Python API, overriding the compile-time constants in the
# _api_implementation module. Right now only 'python' and 'cpp' are valid
# values. Any other value will be ignored.
_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
_default_implementation_type)
if _implementation_type != 'python':
_implementation_type = 'cpp'
if 'PyPy' in sys.version and _implementation_type == 'cpp':
warnings.warn('PyPy does not work yet with cpp protocol buffers. '
'Falling back to the python implementation.')
_implementation_type = 'python'
# This environment variable can be used to switch between the two
# 'cpp' implementations, overriding the compile-time constants in the
# _api_implementation module. Right now only '2' is supported. Any other
# value will cause an error to be raised.
_implementation_version_str = os.getenv(
'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION', '2')
if _implementation_version_str != '2':
raise ValueError(
'unsupported PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION: "' +
_implementation_version_str + '" (supported versions: 2)'
)
_implementation_version = int(_implementation_version_str)
# Detect if serialization should be deterministic by default
try:
# The presence of this module in a build allows the proto implementation to
# be upgraded merely via build deps.
#
# NOTE: Merely importing this automatically enables deterministic proto
# serialization for C++ code, but we still need to export it as a boolean so
# that we can do the same for `_implementation_type == 'python'`.
#
# NOTE2: It is possible for C++ code to enable deterministic serialization by
# default _without_ affecting Python code, if the C++ implementation is not in
# use by this module. That is intended behavior, so we don't actually expose
# this boolean outside of this module.
#
# pylint: disable=g-import-not-at-top,unused-import
from google.protobuf import enable_deterministic_proto_serialization
_python_deterministic_proto_serialization = True
except ImportError:
_python_deterministic_proto_serialization = False
# Usage of this function is discouraged. Clients shouldn't care which
# implementation of the API is in use. Note that there is no guarantee
# that differences between APIs will be maintained.
# Please don't use this function if possible.
def Type():
return _implementation_type
# See comment on 'Type' above.
def Version():
return _implementation_version
# For internal use only
def IsPythonDefaultSerializationDeterministic():
return _python_deterministic_proto_serialization
# DO NOT USE: For migration and testing only. Will be removed when Proto3
# defaults to preserve unknowns.
if _implementation_type == 'cpp':
try:
# pylint: disable=g-import-not-at-top
from google.protobuf.pyext import _message
def GetPythonProto3PreserveUnknownsDefault():
return _message.GetPythonProto3PreserveUnknownsDefault()
def SetPythonProto3PreserveUnknownsDefault(preserve):
_message.SetPythonProto3PreserveUnknownsDefault(preserve)
except ImportError:
# Unrecognized cpp implementation. Skipping the unknown fields APIs.
pass
else:
_python_proto3_preserve_unknowns_default = True
def GetPythonProto3PreserveUnknownsDefault():
return _python_proto3_preserve_unknowns_default
def SetPythonProto3PreserveUnknownsDefault(preserve):
global _python_proto3_preserve_unknowns_default
_python_proto3_preserve_unknowns_default = preserve
| apache-2.0 |
Chilipp/psyplot | psyplot/warning.py | 1 | 3411 | # coding: utf-8
"""Warning module of the psyplot python module
This module controls the warning behaviour of the module via the python
builtin warnings module and introduces three new warning classes:
.. autosummary::
PsyPlotRuntimeWarning
PsyPlotWarning
PsyPlotCritical"""
import warnings
import logging
# disable a warning about "comparison to 'None' in backend_pdf which occurs
# in the matplotlib.backends.backend_pdf.PdfPages class
warnings.filterwarnings(
'ignore', 'comparison', FutureWarning, 'matplotlib.backends.backend_pdf',
2264)
# disable a warning about "np.array_split" that occurs for certain numpy
# versions
warnings.filterwarnings(
'ignore', 'in the future np.array_split will retain', FutureWarning,
'numpy.lib.shape_base', 431)
# disable a warning about "elementwise comparison of a string" in the
# matplotlib.collection.Collection.get_edgecolor method that occurs for certain
# matplotlib and numpy versions
warnings.filterwarnings(
'ignore', 'elementwise comparison failed', FutureWarning,
'matplotlib.collections', 590)
logger = logging.getLogger(__name__)
class PsyPlotRuntimeWarning(RuntimeWarning):
"""Runtime warning that appears only ones"""
pass
class PsyPlotWarning(UserWarning):
"""Normal UserWarning for psyplot module"""
pass
class PsyPlotCritical(UserWarning):
"""Critical UserWarning for psyplot module"""
pass
warnings.simplefilter('always', PsyPlotWarning, append=True)
warnings.simplefilter('always', PsyPlotCritical, append=True)
def disable_warnings(critical=False):
"""Function that disables all warnings and all critical warnings (if
critical evaluates to True) related to the psyplot Module.
Please note that you can also configure the warnings via the
psyplot.warning logger (logging.getLogger('psyplot.warning'))."""
warnings.filterwarnings('ignore', r'\w', PsyPlotWarning, 'psyplot', 0)
if critical:
warnings.filterwarnings('ignore', r'\w', PsyPlotCritical, 'psyplot', 0)
def warn(message, category=PsyPlotWarning, logger=None):
"""wrapper around the warnings.warn function for non-critical warnings.
logger may be a logging.Logger instance"""
if logger is not None:
message = "[Warning by %s]\n%s" % (logger.name, message)
warnings.warn(message, category, stacklevel=3)
def critical(message, category=PsyPlotCritical, logger=None):
"""wrapper around the warnings.warn function for critical warnings.
logger may be a logging.Logger instance"""
if logger is not None:
message = "[Critical warning by %s]\n%s" % (logger.name, message)
warnings.warn(message, category, stacklevel=2)
old_showwarning = warnings.showwarning
def customwarn(message, category, filename, lineno, *args, **kwargs):
"""Use the psyplot.warning logger for categories being out of
PsyPlotWarning and PsyPlotCritical and the default warnings.showwarning
function for all the others."""
if category is PsyPlotWarning:
logger.warning(warnings.formatwarning(
"\n%s" % message, category, filename, lineno))
elif category is PsyPlotCritical:
logger.critical(warnings.formatwarning(
"\n%s" % message, category, filename, lineno),
exc_info=True)
else:
old_showwarning(message, category, filename, lineno, *args, **kwargs)
warnings.showwarning = customwarn
| gpl-2.0 |
mvj3/leetcode | 225-implement-stack-using-queues.py | 1 | 3205 | """
Question:
Implement Stack using Queues
Implement the following operations of a stack using queues.
push(x) -- Push element x onto stack.
pop() -- Removes the element on top of the stack.
top() -- Get the top element.
empty() -- Return whether the stack is empty.
Notes:
You must use only standard operations of a queue -- which means only push to back, peek/pop from front, size, and is empty operations are valid.
Depending on your language, queue may not be supported natively. You may simulate a queue by using a list or deque (double-ended queue), as long as you use only standard operations of a queue.
You may assume that all operations are valid (for example, no pop or top operations will be called on an empty stack).
Update (2015-06-11):
The class name of the Java function had been updated to MyStack instead of Stack.
Credits:
Special thanks to @jianchao.li.fighter for adding this problem and all test cases.
Performance:
1. Total Accepted: 19217 Total Submissions: 63106 Difficulty: Easy
Annotation:
1. refer to 232-implement-queue-using-stacks.py
2. Your runtime beats 99.44% of python submissions.
"""
class Queue(object):
def __init__(self):
self.queue = list()
def push(self, x):
self.queue.append(x)
def pop(self):
if self.empty():
return None
first_item = self.queue[0]
self.queue = self.queue[1:]
return first_item
def peek(self):
if self.queue:
return self.queue[0]
return None
def empty(self):
return not self.queue
def __repr__(self):
return str(self.queue)
class Stack(object):
def __init__(self):
"""
initialize your data structure here.
"""
self.master_queue = Queue()
self.slave_queue = Queue()
def __repr__(self):
return "<Stack {}>".format(self.master_queue)
def push(self, x):
"""
:type x: int
:rtype: nothing
"""
self.master_queue.push(x)
def pop(self):
"""
:rtype: int
"""
return self.common_pop_top(False)
def top(self):
"""
:rtype: int
"""
return self.common_pop_top(True)
def common_pop_top(self, keep):
top_item = None
while not self.master_queue.empty(): # loop over the master_queue
item = self.master_queue.pop()
if self.master_queue.empty():
top_item = item # select the last one in the queue
if keep or (top_item != item):
self.slave_queue.push(item)
self.master_queue, self.slave_queue = self.slave_queue, self.master_queue
return top_item
def empty(self):
"""
:rtype: bool
"""
return self.master_queue.empty()
s = Stack()
assert s.empty() is True
s.push(1)
assert s.top() == 1
assert s.empty() is False
assert s.pop() == 1
assert s.empty() is True
s.push(2)
s.push(3)
s.push(4)
assert s.empty() is False
v = s.top()
assert v == 4, v
assert s.pop() == 4
assert s.pop() == 3
assert s.pop() == 2
assert s.empty() is True
| mit |
postlund/home-assistant | homeassistant/components/command_line/sensor.py | 2 | 6076 | """Allows to configure custom shell commands to turn a value for a sensor."""
import collections
from datetime import timedelta
import json
import logging
import shlex
import subprocess
import voluptuous as vol
from homeassistant.components.sensor import PLATFORM_SCHEMA
from homeassistant.const import (
CONF_COMMAND,
CONF_NAME,
CONF_UNIT_OF_MEASUREMENT,
CONF_VALUE_TEMPLATE,
STATE_UNKNOWN,
)
from homeassistant.exceptions import TemplateError
from homeassistant.helpers import template
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.entity import Entity
_LOGGER = logging.getLogger(__name__)
CONF_COMMAND_TIMEOUT = "command_timeout"
CONF_JSON_ATTRIBUTES = "json_attributes"
DEFAULT_NAME = "Command Sensor"
DEFAULT_TIMEOUT = 15
SCAN_INTERVAL = timedelta(seconds=60)
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_COMMAND): cv.string,
vol.Optional(CONF_COMMAND_TIMEOUT, default=DEFAULT_TIMEOUT): cv.positive_int,
vol.Optional(CONF_JSON_ATTRIBUTES): cv.ensure_list_csv,
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
vol.Optional(CONF_UNIT_OF_MEASUREMENT): cv.string,
vol.Optional(CONF_VALUE_TEMPLATE): cv.template,
}
)
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the Command Sensor."""
name = config.get(CONF_NAME)
command = config.get(CONF_COMMAND)
unit = config.get(CONF_UNIT_OF_MEASUREMENT)
value_template = config.get(CONF_VALUE_TEMPLATE)
command_timeout = config.get(CONF_COMMAND_TIMEOUT)
if value_template is not None:
value_template.hass = hass
json_attributes = config.get(CONF_JSON_ATTRIBUTES)
data = CommandSensorData(hass, command, command_timeout)
add_entities(
[CommandSensor(hass, data, name, unit, value_template, json_attributes)], True
)
class CommandSensor(Entity):
"""Representation of a sensor that is using shell commands."""
def __init__(
self, hass, data, name, unit_of_measurement, value_template, json_attributes
):
"""Initialize the sensor."""
self._hass = hass
self.data = data
self._attributes = None
self._json_attributes = json_attributes
self._name = name
self._state = None
self._unit_of_measurement = unit_of_measurement
self._value_template = value_template
@property
def name(self):
"""Return the name of the sensor."""
return self._name
@property
def unit_of_measurement(self):
"""Return the unit the value is expressed in."""
return self._unit_of_measurement
@property
def state(self):
"""Return the state of the device."""
return self._state
@property
def device_state_attributes(self):
"""Return the state attributes."""
return self._attributes
def update(self):
"""Get the latest data and updates the state."""
self.data.update()
value = self.data.value
if self._json_attributes:
self._attributes = {}
if value:
try:
json_dict = json.loads(value)
if isinstance(json_dict, collections.Mapping):
self._attributes = {
k: json_dict[k]
for k in self._json_attributes
if k in json_dict
}
else:
_LOGGER.warning("JSON result was not a dictionary")
except ValueError:
_LOGGER.warning("Unable to parse output as JSON: %s", value)
else:
_LOGGER.warning("Empty reply found when expecting JSON data")
if value is None:
value = STATE_UNKNOWN
elif self._value_template is not None:
self._state = self._value_template.render_with_possible_json_value(
value, STATE_UNKNOWN
)
else:
self._state = value
class CommandSensorData:
"""The class for handling the data retrieval."""
def __init__(self, hass, command, command_timeout):
"""Initialize the data object."""
self.value = None
self.hass = hass
self.command = command
self.timeout = command_timeout
def update(self):
"""Get the latest data with a shell command."""
command = self.command
# Persist parsed commands between updates; a plain local dict would be
# recreated on every call and defeat the caching.
if not hasattr(self, '_parse_cache'):
self._parse_cache = {}
cache = self._parse_cache
if command in cache:
prog, args, args_compiled = cache[command]
elif " " not in command:
prog = command
args = None
args_compiled = None
cache[command] = (prog, args, args_compiled)
else:
prog, args = command.split(" ", 1)
args_compiled = template.Template(args, self.hass)
cache[command] = (prog, args, args_compiled)
if args_compiled:
try:
args_to_render = {"arguments": args}
rendered_args = args_compiled.render(args_to_render)
except TemplateError as ex:
_LOGGER.exception("Error rendering command template: %s", ex)
return
else:
rendered_args = None
if rendered_args == args:
# No template used. default behavior
pass
else:
# Template used. Construct the string used in the shell
command = str(" ".join([prog] + shlex.split(rendered_args)))
try:
_LOGGER.debug("Running command: %s", command)
return_value = subprocess.check_output(
command, shell=True, timeout=self.timeout # nosec # shell by design
)
self.value = return_value.strip().decode("utf-8")
except subprocess.CalledProcessError:
_LOGGER.error("Command failed: %s", command)
except subprocess.TimeoutExpired:
_LOGGER.error("Timeout for command: %s", command)
| apache-2.0 |
GauravSahu/odoo | addons/l10n_multilang/__init__.py | 438 | 1082 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import account
import l10n_multilang
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
Coolexe/shooteru-ics-crc-3.0.16-e733189 | scripts/gcc-wrapper.py | 484 | 3824 | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2011, Code Aurora Forum. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of Code Aurora nor
# the names of its contributors may be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Invoke gcc, looking for warnings, and causing a failure if there are
# non-whitelisted warnings.
import errno
import re
import os
import sys
import subprocess
# Note that gcc uses unicode, which may depend on the locale. TODO:
# force LANG to be set to en_US.UTF-8 to get consistent warnings.
allowed_warnings = set([
"alignment.c:720",
"async.c:122",
"async.c:270",
"dir.c:43",
"dm.c:1053",
"dm.c:1080",
"dm-table.c:1120",
"dm-table.c:1126",
"drm_edid.c:1303",
"eventpoll.c:1143",
"f_mass_storage.c:3368",
"inode.c:72",
"inode.c:73",
"inode.c:74",
"msm_sdcc.c:126",
"msm_sdcc.c:128",
"nf_conntrack_netlink.c:790",
"nf_nat_standalone.c:118",
"return_address.c:62",
"soc-core.c:1719",
"xt_log.h:50",
"vx6953.c:3124",
])
# Capture the name of the object file, so we can find it.
ofile = None
warning_re = re.compile(r'''(.*/|)([^/]+\.[a-z]+:\d+):(\d+:)? warning:''')
def interpret_warning(line):
"""Decode the message from gcc. The messages we care about have a filename, and a warning"""
line = line.rstrip('\n')
m = warning_re.match(line)
if m and m.group(2) not in allowed_warnings:
print "error, forbidden warning:", m.group(2)
# If there is a warning, remove any object if it exists.
if ofile:
try:
os.remove(ofile)
except OSError:
pass
sys.exit(1)
def run_gcc():
args = sys.argv[1:]
# Look for -o
try:
i = args.index('-o')
global ofile
ofile = args[i+1]
except (ValueError, IndexError):
pass
compiler = sys.argv[0]
try:
proc = subprocess.Popen(args, stderr=subprocess.PIPE)
for line in proc.stderr:
print line,
interpret_warning(line)
result = proc.wait()
except OSError as e:
result = e.errno
if result == errno.ENOENT:
print args[0] + ':',e.strerror
print 'Is your PATH set correctly?'
else:
print ' '.join(args), str(e)
return result
if __name__ == '__main__':
status = run_gcc()
sys.exit(status)
| gpl-2.0 |
dzbarsky/servo | tests/wpt/web-platform-tests/webdriver/command_contexts/open_and_close_window_test.py | 141 | 2529 | import os
import sys
import random
import unittest
sys.path.insert(1, os.path.abspath(os.path.join(__file__, "../..")))
import base_test
repo_root = os.path.abspath(os.path.join(__file__, "../../.."))
sys.path.insert(1, os.path.join(repo_root, "tools", "webdriver"))
from webdriver import exceptions
class OpenAndCloseWindowTest(base_test.WebDriverBaseTest):
def setUp(self):
self.driver.get(self.webserver.where_is("command_contexts/res/first-page.html"))
def tearDown(self):
handles = self.driver.get_window_handles()
for i in range(len(handles) - 1):
self.driver.switch_to_window(handles[i])
self.driver.close()
self.driver.switch_to_window(self.driver.get_window_handles()[0])
def test_open_new_window(self):
handles = self.driver.get_window_handles()
self.driver.find_element_by_id("open_new_window").click()
self.assertEquals(len(handles) + 1, len(self.driver.get_window_handles()))
def test_get_window_handles_returns_the_windows_that_have_been_opened(self):
self.driver.find_element_by_id("open_new_window").click()
handles = self.driver.get_window_handles()
self.driver.switch_to_window(handles[0])
url1 = self.driver.get_current_url()
self.driver.switch_to_window(handles[1])
url2 = self.driver.get_current_url()
if url1 == self.webserver.where_is("controlling_windows/res/other-page.html"):
self.assertEquals(url2, self.webserver.where_is("controlling_windows/res/first-page.html"))
elif url1 == self.webserver.where_is("controlling_windows/res/first-page.html"):
self.assertEquals(url2, self.webserver.where_is("controlling_windows/res/other-page.html"))
else:
self.fail("The wrong set of URLs were returned")
def test_close_window(self):
open_windows = len(self.driver.get_window_handles())
self.driver.find_element_by_id("open_new_window").click()
self.assertEquals(1 + open_windows, len(self.driver.get_window_handles()))
self.driver.close()
self.assertEquals(open_windows, len(self.driver.get_window_handles()))
def test_command_sent_to_closed_window_returns_no_such_window_exception(self):
self.driver.find_element_by_id("open_new_window").click()
self.driver.close()
with self.assertRaises(exceptions.NoSuchWindowException):
self.driver.get_window_handle()
if __name__ == "__main__":
unittest.main()
| mpl-2.0 |
Hubert51/AutoGrading | learning/web_Haotian/venv/Lib/site-packages/pip/utils/appdirs.py | 340 | 8811 | """
This code was taken from https://github.com/ActiveState/appdirs and modified
to suit our purposes.
"""
from __future__ import absolute_import
import os
import sys
from pip.compat import WINDOWS, expanduser
from pip._vendor.six import PY2, text_type
def user_cache_dir(appname):
r"""
Return full path to the user-specific cache dir for this application.
"appname" is the name of application.
Typical user cache directories are:
macOS: ~/Library/Caches/<AppName>
Unix: ~/.cache/<AppName> (XDG default)
Windows: C:\Users\<username>\AppData\Local\<AppName>\Cache
On Windows the only suggestion in the MSDN docs is that local settings go
in the `CSIDL_LOCAL_APPDATA` directory. This is identical to the
non-roaming app data dir (the default returned by `user_data_dir`). Apps
typically put cache data somewhere *under* the given dir here. Some
examples:
...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
...\Acme\SuperApp\Cache\1.0
OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
"""
if WINDOWS:
# Get the base path
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
# When using Python 2, return paths as bytes on Windows like we do on
# other operating systems. See helper function docs for more details.
if PY2 and isinstance(path, text_type):
path = _win_path_to_bytes(path)
# Add our app name and Cache directory to it
path = os.path.join(path, appname, "Cache")
elif sys.platform == "darwin":
# Get the base path
path = expanduser("~/Library/Caches")
# Add our app name to it
path = os.path.join(path, appname)
else:
# Get the base path
path = os.getenv("XDG_CACHE_HOME", expanduser("~/.cache"))
# Add our app name to it
path = os.path.join(path, appname)
return path
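The non-Windows, non-darwin branch of `user_cache_dir` reduces to two lines of path logic. A minimal standalone sketch (the `unix_cache_dir` helper name is ours, not pip's):

```python
import os

# Hedged sketch, not part of the original module: the Unix branch of
# user_cache_dir above -- $XDG_CACHE_HOME falls back to ~/.cache, then
# the application name is appended.
def unix_cache_dir(appname):
    base = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return os.path.join(base, appname)

print(unix_cache_dir("pip"))
```

With `XDG_CACHE_HOME` unset this prints something like `/home/<user>/.cache/pip`, matching the XDG default documented in the docstring above.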
def user_data_dir(appname, roaming=False):
"""
Return full path to the user-specific data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
macOS: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in
$XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\ ...
...Application Data\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local ...
...Settings\Application Data\<AppName>
Win 7 (not roaming): C:\\Users\<username>\AppData\Local\<AppName>
Win 7 (roaming): C:\\Users\<username>\AppData\Roaming\<AppName>
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
That means, by default "~/.local/share/<AppName>".
"""
if WINDOWS:
const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
path = os.path.join(os.path.normpath(_get_win_folder(const)), appname)
elif sys.platform == "darwin":
path = os.path.join(
expanduser('~/Library/Application Support/'),
appname,
)
else:
path = os.path.join(
os.getenv('XDG_DATA_HOME', expanduser("~/.local/share")),
appname,
)
return path
def user_config_dir(appname, roaming=True):
"""Return full path to the user-specific config dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"roaming" (boolean, default True) can be set False to not use the
Windows roaming appdata directory. That means that for users on a
Windows network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
macOS: same as user_data_dir
Unix: ~/.config/<AppName>
Win *: same as user_data_dir
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
That means, by default "~/.config/<AppName>".
"""
if WINDOWS:
path = user_data_dir(appname, roaming=roaming)
elif sys.platform == "darwin":
path = user_data_dir(appname)
else:
path = os.getenv('XDG_CONFIG_HOME', expanduser("~/.config"))
path = os.path.join(path, appname)
return path
# for the discussion regarding site_config_dirs locations
# see <https://github.com/pypa/pip/issues/1733>
def site_config_dirs(appname):
"""Return a list of potential user-shared config dirs for this application.
"appname" is the name of application.
Typical user config directories are:
macOS: /Library/Application Support/<AppName>/
Unix: /etc or $XDG_CONFIG_DIRS[i]/<AppName>/ for each value in
$XDG_CONFIG_DIRS
Win XP: C:\Documents and Settings\All Users\Application ...
...Data\<AppName>\
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory
on Vista.)
Win 7: Hidden, but writeable on Win 7:
C:\ProgramData\<AppName>\
"""
if WINDOWS:
path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
pathlist = [os.path.join(path, appname)]
elif sys.platform == 'darwin':
pathlist = [os.path.join('/Library/Application Support', appname)]
else:
# try looking in $XDG_CONFIG_DIRS
xdg_config_dirs = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
if xdg_config_dirs:
pathlist = [
os.path.join(expanduser(x), appname)
for x in xdg_config_dirs.split(os.pathsep)
]
else:
pathlist = []
# always look in /etc directly as well
pathlist.append('/etc')
return pathlist
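The Unix branch of `site_config_dirs` can be sketched on its own; the helper name and the `xdg_config_dirs` parameter below are ours, added so the sketch takes the environment value as an argument instead of reading `os.getenv` directly:

```python
import os

# Hedged sketch of the Unix branch above: split $XDG_CONFIG_DIRS on the
# platform path separator, append the app name to each entry, and always
# add /etc as the final fallback.
def unix_site_config_dirs(appname, xdg_config_dirs="/etc/xdg"):
    dirs = [os.path.join(os.path.expanduser(d), appname)
            for d in xdg_config_dirs.split(os.pathsep)]
    dirs.append('/etc')
    return dirs

print(unix_site_config_dirs("pip"))
```

With the `/etc/xdg` default this yields `['/etc/xdg/pip', '/etc']` on POSIX systems, mirroring the pathlist the function above builds.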
# -- Windows support functions --
def _get_win_folder_from_registry(csidl_name):
"""
This is a fallback technique at best. I'm not sure if using the
registry for this guarantees us the correct answer for all CSIDL_*
names.
"""
import _winreg
shell_folder_name = {
"CSIDL_APPDATA": "AppData",
"CSIDL_COMMON_APPDATA": "Common AppData",
"CSIDL_LOCAL_APPDATA": "Local AppData",
}[csidl_name]
key = _winreg.OpenKey(
_winreg.HKEY_CURRENT_USER,
r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
)
directory, _type = _winreg.QueryValueEx(key, shell_folder_name)
return directory
def _get_win_folder_with_ctypes(csidl_name):
csidl_const = {
"CSIDL_APPDATA": 26,
"CSIDL_COMMON_APPDATA": 35,
"CSIDL_LOCAL_APPDATA": 28,
}[csidl_name]
buf = ctypes.create_unicode_buffer(1024)
ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
# Downgrade to short path name if have highbit chars. See
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
has_high_char = False
for c in buf:
if ord(c) > 255:
has_high_char = True
break
if has_high_char:
buf2 = ctypes.create_unicode_buffer(1024)
if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
buf = buf2
return buf.value
if WINDOWS:
try:
import ctypes
_get_win_folder = _get_win_folder_with_ctypes
except ImportError:
_get_win_folder = _get_win_folder_from_registry
def _win_path_to_bytes(path):
"""Encode Windows paths to bytes. Only used on Python 2.
Motivation is to be consistent with other operating systems where paths
are also returned as bytes. This avoids problems mixing bytes and Unicode
elsewhere in the codebase. For more details and discussion see
<https://github.com/pypa/pip/issues/3463>.
If encoding using ASCII and MBCS fails, return the original Unicode path.
"""
for encoding in ('ASCII', 'MBCS'):
try:
return path.encode(encoding)
except (UnicodeEncodeError, LookupError):
pass
return path
| mit |
40223235/w16b_test | static/Brython3.1.1-20150328-091302/Lib/multiprocessing/dummy/connection.py | 707 | 3049 | #
# Analogue of `multiprocessing.connection` which uses queues instead of sockets
#
# multiprocessing/dummy/connection.py
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of author nor the names of any contributors may be
# used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
__all__ = [ 'Client', 'Listener', 'Pipe' ]
from queue import Queue
families = [None]
class Listener(object):
def __init__(self, address=None, family=None, backlog=1):
self._backlog_queue = Queue(backlog)
def accept(self):
return Connection(*self._backlog_queue.get())
def close(self):
self._backlog_queue = None
address = property(lambda self: self._backlog_queue)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
def Client(address):
_in, _out = Queue(), Queue()
address.put((_out, _in))
return Connection(_in, _out)
def Pipe(duplex=True):
a, b = Queue(), Queue()
return Connection(a, b), Connection(b, a)
class Connection(object):
def __init__(self, _in, _out):
self._out = _out
self._in = _in
self.send = self.send_bytes = _out.put
self.recv = self.recv_bytes = _in.get
def poll(self, timeout=0.0):
if self._in.qsize() > 0:
return True
if timeout <= 0.0:
return False
self._in.not_empty.acquire()
self._in.not_empty.wait(timeout)
self._in.not_empty.release()
return self._in.qsize() > 0
def close(self):
pass
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
| agpl-3.0 |
hwjworld/xiaodun-platform | lms/djangoapps/shoppingcart/views.py | 18 | 9527 | import logging
import datetime
import pytz
from django.conf import settings
from django.contrib.auth.models import Group
from django.http import (HttpResponse, HttpResponseRedirect, HttpResponseNotFound,
HttpResponseBadRequest, HttpResponseForbidden, Http404)
from django.utils.translation import ugettext as _
from django.views.decorators.http import require_POST
from django.core.urlresolvers import reverse
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth.decorators import login_required
from edxmako.shortcuts import render_to_response
from shoppingcart.reports import RefundReport, ItemizedPurchaseReport, UniversityRevenueShareReport, CertificateStatusReport
from student.models import CourseEnrollment
from .exceptions import ItemAlreadyInCartException, AlreadyEnrolledInCourseException, CourseDoesNotExistException, ReportTypeDoesNotExistException
from .models import Order, PaidCourseRegistration, OrderItem
from .processors import process_postpay_callback, render_purchase_form_html
log = logging.getLogger("shoppingcart")
EVENT_NAME_USER_UPGRADED = 'edx.course.enrollment.upgrade.succeeded'
REPORT_TYPES = [
("refund_report", RefundReport),
("itemized_purchase_report", ItemizedPurchaseReport),
("university_revenue_share", UniversityRevenueShareReport),
("certificate_status", CertificateStatusReport),
]
def initialize_report(report_type, start_date, end_date, start_letter=None, end_letter=None):
"""
Creates the appropriate type of Report object based on the string report_type.
"""
for item in REPORT_TYPES:
if report_type in item:
return item[1](start_date, end_date, start_letter, end_letter)
raise ReportTypeDoesNotExistException
@require_POST
def add_course_to_cart(request, course_id):
"""
Adds course specified by course_id to the cart. The model function add_to_order does all the
heavy lifting (logging, error checking, etc)
"""
if not request.user.is_authenticated():
log.info("Anon user trying to add course {} to cart".format(course_id))
return HttpResponseForbidden(_('You must be logged-in to add to a shopping cart'))
cart = Order.get_cart_for_user(request.user)
# All logging from here handled by the model
try:
PaidCourseRegistration.add_to_order(cart, course_id)
except CourseDoesNotExistException:
return HttpResponseNotFound(_('The course you requested does not exist.'))
except ItemAlreadyInCartException:
return HttpResponseBadRequest(_('The course {0} is already in your cart.'.format(course_id)))
except AlreadyEnrolledInCourseException:
return HttpResponseBadRequest(_('You are already registered in course {0}.'.format(course_id)))
return HttpResponse(_("Course added to cart."))
@login_required
def show_cart(request):
cart = Order.get_cart_for_user(request.user)
total_cost = cart.total_cost
cart_items = cart.orderitem_set.all()
form_html = render_purchase_form_html(cart)
return render_to_response("shoppingcart/list.html",
{'shoppingcart_items': cart_items,
'amount': total_cost,
'form_html': form_html,
})
@login_required
def clear_cart(request):
cart = Order.get_cart_for_user(request.user)
cart.clear()
return HttpResponse('Cleared')
@login_required
def remove_item(request):
item_id = request.REQUEST.get('id', '-1')
try:
item = OrderItem.objects.get(id=item_id, status='cart')
if item.user == request.user:
item.delete()
except OrderItem.DoesNotExist:
log.exception('Cannot remove cart OrderItem id={0}. DoesNotExist or item is already purchased'.format(item_id))
return HttpResponse('OK')
@csrf_exempt
@require_POST
def postpay_callback(request):
"""
Receives the POST-back from processor.
Mainly this calls the processor-specific code to check if the payment was accepted, and to record the order
if it was, and to generate an error page.
If successful this function should have the side effect of changing the "cart" into a full "order" in the DB.
The cart can then render a success page which links to receipt pages.
If unsuccessful the order will be left untouched and HTML messages giving more detailed error info will be
returned.
"""
params = request.POST.dict()
result = process_postpay_callback(params)
if result['success']:
return HttpResponseRedirect(reverse('shoppingcart.views.show_receipt', args=[result['order'].id]))
else:
return render_to_response('shoppingcart/error.html', {'order': result['order'],
'error_html': result['error_html']})
@login_required
def show_receipt(request, ordernum):
"""
Displays a receipt for a particular order.
404 if order is not yet purchased or request.user != order.user
"""
try:
order = Order.objects.get(id=ordernum)
except Order.DoesNotExist:
raise Http404('Order not found!')
if order.user != request.user or order.status != 'purchased':
raise Http404('Order not found!')
order_items = OrderItem.objects.filter(order=order).select_subclasses()
any_refunds = any(i.status == "refunded" for i in order_items)
receipt_template = 'shoppingcart/receipt.html'
__, instructions = order.generate_receipt_instructions()
# we want to have the ability to override the default receipt page when
# there is only one item in the order
context = {
'order': order,
'order_items': order_items,
'any_refunds': any_refunds,
'instructions': instructions,
}
if order_items.count() == 1:
receipt_template = order_items[0].single_item_receipt_template
context.update(order_items[0].single_item_receipt_context)
# Only orders where order_items.count() == 1 might be attempting to upgrade
attempting_upgrade = request.session.get('attempting_upgrade', False)
if attempting_upgrade:
course_enrollment = CourseEnrollment.get_or_create_enrollment(request.user, order_items[0].course_id)
course_enrollment.emit_event(EVENT_NAME_USER_UPGRADED)
request.session['attempting_upgrade'] = False
return render_to_response(receipt_template, context)
def _can_download_report(user):
"""
Tests if the user can download the payments report, based on membership in a group whose name is determined
in settings. If the group does not exist, denies all access
"""
try:
access_group = Group.objects.get(name=settings.PAYMENT_REPORT_GENERATOR_GROUP)
except Group.DoesNotExist:
return False
return access_group in user.groups.all()
def _get_date_from_str(date_input):
"""
Gets date from the date input string. Lets the ValueError raised by invalid strings be processed by the caller
"""
return datetime.datetime.strptime(date_input.strip(), "%Y-%m-%d").replace(tzinfo=pytz.UTC)
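`_get_date_from_str` is a small pure function that strips the input, parses it as `%Y-%m-%d`, and attaches a UTC timezone. A standalone stand-in (using the stdlib's `datetime.timezone.utc` instead of `pytz` so the sketch has no third-party dependency; the function name is ours):

```python
import datetime

# Hedged stand-in for _get_date_from_str above; tzinfo comes from the
# stdlib rather than pytz, which is equivalent for plain UTC.
def parse_report_date(date_input):
    return datetime.datetime.strptime(date_input.strip(), "%Y-%m-%d").replace(
        tzinfo=datetime.timezone.utc)

print(parse_report_date(" 2014-01-31 "))
```

A badly formatted string raises `ValueError`, which is exactly the error the caller in `csv_report` catches to re-render the form with `date_fmt_error=True`.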
def _render_report_form(start_str, end_str, start_letter, end_letter, report_type, total_count_error=False, date_fmt_error=False):
"""
Helper function that renders the purchase form. Reduces repetition
"""
context = {
'total_count_error': total_count_error,
'date_fmt_error': date_fmt_error,
'start_date': start_str,
'end_date': end_str,
'start_letter': start_letter,
'end_letter': end_letter,
'requested_report': report_type,
}
return render_to_response('shoppingcart/download_report.html', context)
@login_required
def csv_report(request):
"""
Downloads csv reporting of orderitems
"""
if not _can_download_report(request.user):
return HttpResponseForbidden(_('You do not have permission to view this page.'))
if request.method == 'POST':
start_date = request.POST.get('start_date', '')
end_date = request.POST.get('end_date', '')
start_letter = request.POST.get('start_letter', '')
end_letter = request.POST.get('end_letter', '')
report_type = request.POST.get('requested_report', '')
try:
start_date = _get_date_from_str(start_date) + datetime.timedelta(days=0)
end_date = _get_date_from_str(end_date) + datetime.timedelta(days=1)
except ValueError:
# Error case: there was a badly formatted user-input date string
return _render_report_form(start_date, end_date, start_letter, end_letter, report_type, date_fmt_error=True)
report = initialize_report(report_type, start_date, end_date, start_letter, end_letter)
items = report.rows()
response = HttpResponse(mimetype='text/csv')
filename = "purchases_report_{}.csv".format(datetime.datetime.now(pytz.UTC).strftime("%Y-%m-%d-%H-%M-%S"))
response['Content-Disposition'] = 'attachment; filename="{}"'.format(filename)
report.write_csv(response)
return response
elif request.method == 'GET':
end_date = datetime.datetime.now(pytz.UTC)
start_date = end_date - datetime.timedelta(days=30)
start_letter = ""
end_letter = ""
return _render_report_form(start_date.strftime("%Y-%m-%d"), end_date.strftime("%Y-%m-%d"), start_letter, end_letter, report_type="")
else:
return HttpResponseBadRequest("HTTP Method Not Supported")
| agpl-3.0 |
wakatime/wakatime | wakatime/packages/py27/pygments/lexers/theorem.py | 3 | 18908 | # -*- coding: utf-8 -*-
"""
pygments.lexers.theorem
~~~~~~~~~~~~~~~~~~~~~~~
Lexers for theorem-proving languages.
:copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, default, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Generic
__all__ = ['CoqLexer', 'IsabelleLexer', 'LeanLexer']
class CoqLexer(RegexLexer):
"""
For the `Coq <http://coq.inria.fr/>`_ theorem prover.
.. versionadded:: 1.5
"""
name = 'Coq'
aliases = ['coq']
filenames = ['*.v']
mimetypes = ['text/x-coq']
keywords1 = (
# Vernacular commands
'Section', 'Module', 'End', 'Require', 'Import', 'Export', 'Variable',
'Variables', 'Parameter', 'Parameters', 'Axiom', 'Hypothesis',
'Hypotheses', 'Notation', 'Local', 'Tactic', 'Reserved', 'Scope',
'Open', 'Close', 'Bind', 'Delimit', 'Definition', 'Let', 'Ltac',
'Fixpoint', 'CoFixpoint', 'Morphism', 'Relation', 'Implicit',
'Arguments', 'Set', 'Unset', 'Contextual', 'Strict', 'Prenex',
'Implicits', 'Inductive', 'CoInductive', 'Record', 'Structure',
'Canonical', 'Coercion', 'Theorem', 'Lemma', 'Corollary',
'Proposition', 'Fact', 'Remark', 'Example', 'Proof', 'Goal', 'Save',
'Qed', 'Defined', 'Hint', 'Resolve', 'Rewrite', 'View', 'Search',
'Show', 'Print', 'Printing', 'All', 'Graph', 'Projections', 'inside',
'outside', 'Check', 'Global', 'Instance', 'Class', 'Existing',
'Universe', 'Polymorphic', 'Monomorphic', 'Context'
)
keywords2 = (
# Gallina
'forall', 'exists', 'exists2', 'fun', 'fix', 'cofix', 'struct',
'match', 'end', 'in', 'return', 'let', 'if', 'is', 'then', 'else',
'for', 'of', 'nosimpl', 'with', 'as',
)
keywords3 = (
# Sorts
'Type', 'Prop',
)
keywords4 = (
# Tactics
'pose', 'set', 'move', 'case', 'elim', 'apply', 'clear', 'hnf', 'intro',
'intros', 'generalize', 'rename', 'pattern', 'after', 'destruct',
'induction', 'using', 'refine', 'inversion', 'injection', 'rewrite',
'congr', 'unlock', 'compute', 'ring', 'field', 'replace', 'fold',
'unfold', 'change', 'cutrewrite', 'simpl', 'have', 'suff', 'wlog',
'suffices', 'without', 'loss', 'nat_norm', 'assert', 'cut', 'trivial',
'revert', 'bool_congr', 'nat_congr', 'symmetry', 'transitivity', 'auto',
'split', 'left', 'right', 'autorewrite', 'tauto', 'setoid_rewrite',
'intuition', 'eauto', 'eapply', 'econstructor', 'etransitivity',
'constructor', 'erewrite', 'red', 'cbv', 'lazy', 'vm_compute',
'native_compute', 'subst',
)
keywords5 = (
# Terminators
'by', 'done', 'exact', 'reflexivity', 'tauto', 'romega', 'omega',
'assumption', 'solve', 'contradiction', 'discriminate',
'congruence',
)
keywords6 = (
# Control
'do', 'last', 'first', 'try', 'idtac', 'repeat',
)
# 'as', 'assert', 'begin', 'class', 'constraint', 'do', 'done',
# 'downto', 'else', 'end', 'exception', 'external', 'false',
# 'for', 'fun', 'function', 'functor', 'if', 'in', 'include',
# 'inherit', 'initializer', 'lazy', 'let', 'match', 'method',
# 'module', 'mutable', 'new', 'object', 'of', 'open', 'private',
# 'raise', 'rec', 'sig', 'struct', 'then', 'to', 'true', 'try',
# 'type', 'val', 'virtual', 'when', 'while', 'with'
keyopts = (
'!=', '#', '&', '&&', r'\(', r'\)', r'\*', r'\+', ',', '-', r'-\.',
'->', r'\.', r'\.\.', ':', '::', ':=', ':>', ';', ';;', '<', '<-',
'<->', '=', '>', '>]', r'>\}', r'\?', r'\?\?', r'\[', r'\[<', r'\[>',
r'\[\|', ']', '_', '`', r'\{', r'\{<', r'\|', r'\|]', r'\}', '~', '=>',
r'/\\', r'\\/', r'\{\|', r'\|\}',
u'Π', u'λ',
)
operators = r'[!$%&*+\./:<=>?@^|~-]'
prefix_syms = r'[!?~]'
infix_syms = r'[=<>@^|&+\*/$%-]'
tokens = {
'root': [
(r'\s+', Text),
(r'false|true|\(\)|\[\]', Name.Builtin.Pseudo),
(r'\(\*', Comment, 'comment'),
(words(keywords1, prefix=r'\b', suffix=r'\b'), Keyword.Namespace),
(words(keywords2, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keywords3, prefix=r'\b', suffix=r'\b'), Keyword.Type),
(words(keywords4, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keywords5, prefix=r'\b', suffix=r'\b'), Keyword.Pseudo),
(words(keywords6, prefix=r'\b', suffix=r'\b'), Keyword.Reserved),
# (r'\b([A-Z][\w\']*)(\.)', Name.Namespace, 'dotted'),
(r'\b([A-Z][\w\']*)', Name),
(r'(%s)' % '|'.join(keyopts[::-1]), Operator),
(r'(%s|%s)?%s' % (infix_syms, prefix_syms, operators), Operator),
(r"[^\W\d][\w']*", Name),
(r'\d[\d_]*', Number.Integer),
(r'0[xX][\da-fA-F][\da-fA-F_]*', Number.Hex),
(r'0[oO][0-7][0-7_]*', Number.Oct),
(r'0[bB][01][01_]*', Number.Bin),
(r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)', Number.Float),
(r"'(?:(\\[\\\"'ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))'",
String.Char),
(r"'.'", String.Char),
(r"'", Keyword), # a stray quote is another syntax element
(r'"', String.Double, 'string'),
(r'[~?][a-z][\w\']*:', Name),
],
'comment': [
(r'[^(*)]+', Comment),
(r'\(\*', Comment, '#push'),
(r'\*\)', Comment, '#pop'),
(r'[(*)]', Comment),
],
'string': [
(r'[^"]+', String.Double),
(r'""', String.Double),
(r'"', String.Double, '#pop'),
],
'dotted': [
(r'\s+', Text),
(r'\.', Punctuation),
(r'[A-Z][\w\']*(?=\s*\.)', Name.Namespace),
(r'[A-Z][\w\']*', Name.Class, '#pop'),
(r'[a-z][a-z0-9_\']*', Name, '#pop'),
default('#pop')
],
}
def analyse_text(text):
if text.startswith('(*'):
return True
class IsabelleLexer(RegexLexer):
"""
For the `Isabelle <http://isabelle.in.tum.de/>`_ proof assistant.
.. versionadded:: 2.0
"""
name = 'Isabelle'
aliases = ['isabelle']
filenames = ['*.thy']
mimetypes = ['text/x-isabelle']
keyword_minor = (
'and', 'assumes', 'attach', 'avoids', 'binder', 'checking',
'class_instance', 'class_relation', 'code_module', 'congs',
'constant', 'constrains', 'datatypes', 'defines', 'file', 'fixes',
'for', 'functions', 'hints', 'identifier', 'if', 'imports', 'in',
'includes', 'infix', 'infixl', 'infixr', 'is', 'keywords', 'lazy',
'module_name', 'monos', 'morphisms', 'no_discs_sels', 'notes',
'obtains', 'open', 'output', 'overloaded', 'parametric', 'permissive',
'pervasive', 'rep_compat', 'shows', 'structure', 'type_class',
'type_constructor', 'unchecked', 'unsafe', 'where',
)
keyword_diag = (
'ML_command', 'ML_val', 'class_deps', 'code_deps', 'code_thms',
'display_drafts', 'find_consts', 'find_theorems', 'find_unused_assms',
'full_prf', 'help', 'locale_deps', 'nitpick', 'pr', 'prf',
'print_abbrevs', 'print_antiquotations', 'print_attributes',
'print_binds', 'print_bnfs', 'print_bundles',
'print_case_translations', 'print_cases', 'print_claset',
'print_classes', 'print_codeproc', 'print_codesetup',
'print_coercions', 'print_commands', 'print_context',
'print_defn_rules', 'print_dependencies', 'print_facts',
'print_induct_rules', 'print_inductives', 'print_interps',
'print_locale', 'print_locales', 'print_methods', 'print_options',
'print_orders', 'print_quot_maps', 'print_quotconsts',
'print_quotients', 'print_quotientsQ3', 'print_quotmapsQ3',
'print_rules', 'print_simpset', 'print_state', 'print_statement',
'print_syntax', 'print_theorems', 'print_theory', 'print_trans_rules',
'prop', 'pwd', 'quickcheck', 'refute', 'sledgehammer', 'smt_status',
'solve_direct', 'spark_status', 'term', 'thm', 'thm_deps', 'thy_deps',
'try', 'try0', 'typ', 'unused_thms', 'value', 'values', 'welcome',
'print_ML_antiquotations', 'print_term_bindings', 'values_prolog',
)
keyword_thy = ('theory', 'begin', 'end')
keyword_section = ('header', 'chapter')
keyword_subsection = (
'section', 'subsection', 'subsubsection', 'sect', 'subsect',
'subsubsect',
)
keyword_theory_decl = (
'ML', 'ML_file', 'abbreviation', 'adhoc_overloading', 'arities',
'atom_decl', 'attribute_setup', 'axiomatization', 'bundle',
'case_of_simps', 'class', 'classes', 'classrel', 'codatatype',
'code_abort', 'code_class', 'code_const', 'code_datatype',
'code_identifier', 'code_include', 'code_instance', 'code_modulename',
'code_monad', 'code_printing', 'code_reflect', 'code_reserved',
'code_type', 'coinductive', 'coinductive_set', 'consts', 'context',
'datatype', 'datatype_new', 'datatype_new_compat', 'declaration',
'declare', 'default_sort', 'defer_recdef', 'definition', 'defs',
'domain', 'domain_isomorphism', 'domaindef', 'equivariance',
'export_code', 'extract', 'extract_type', 'fixrec', 'fun',
'fun_cases', 'hide_class', 'hide_const', 'hide_fact', 'hide_type',
'import_const_map', 'import_file', 'import_tptp', 'import_type_map',
'inductive', 'inductive_set', 'instantiation', 'judgment', 'lemmas',
'lifting_forget', 'lifting_update', 'local_setup', 'locale',
'method_setup', 'nitpick_params', 'no_adhoc_overloading',
'no_notation', 'no_syntax', 'no_translations', 'no_type_notation',
'nominal_datatype', 'nonterminal', 'notation', 'notepad', 'oracle',
'overloading', 'parse_ast_translation', 'parse_translation',
'partial_function', 'primcorec', 'primrec', 'primrec_new',
'print_ast_translation', 'print_translation', 'quickcheck_generator',
'quickcheck_params', 'realizability', 'realizers', 'recdef', 'record',
'refute_params', 'setup', 'setup_lifting', 'simproc_setup',
'simps_of_case', 'sledgehammer_params', 'spark_end', 'spark_open',
'spark_open_siv', 'spark_open_vcg', 'spark_proof_functions',
'spark_types', 'statespace', 'syntax', 'syntax_declaration', 'text',
'text_raw', 'theorems', 'translations', 'type_notation',
'type_synonym', 'typed_print_translation', 'typedecl', 'hoarestate',
'install_C_file', 'install_C_types', 'wpc_setup', 'c_defs', 'c_types',
'memsafe', 'SML_export', 'SML_file', 'SML_import', 'approximate',
'bnf_axiomatization', 'cartouche', 'datatype_compat',
'free_constructors', 'functor', 'nominal_function',
'nominal_termination', 'permanent_interpretation',
'binds', 'defining', 'smt2_status', 'term_cartouche',
'boogie_file', 'text_cartouche',
)
keyword_theory_script = ('inductive_cases', 'inductive_simps')
keyword_theory_goal = (
'ax_specification', 'bnf', 'code_pred', 'corollary', 'cpodef',
'crunch', 'crunch_ignore',
'enriched_type', 'function', 'instance', 'interpretation', 'lemma',
'lift_definition', 'nominal_inductive', 'nominal_inductive2',
'nominal_primrec', 'pcpodef', 'primcorecursive',
'quotient_definition', 'quotient_type', 'recdef_tc', 'rep_datatype',
'schematic_corollary', 'schematic_lemma', 'schematic_theorem',
'spark_vc', 'specification', 'subclass', 'sublocale', 'termination',
'theorem', 'typedef', 'wrap_free_constructors',
)
keyword_qed = ('by', 'done', 'qed')
keyword_abandon_proof = ('sorry', 'oops')
keyword_proof_goal = ('have', 'hence', 'interpret')
keyword_proof_block = ('next', 'proof')
keyword_proof_chain = (
'finally', 'from', 'then', 'ultimately', 'with',
)
keyword_proof_decl = (
'ML_prf', 'also', 'include', 'including', 'let', 'moreover', 'note',
'txt', 'txt_raw', 'unfolding', 'using', 'write',
)
keyword_proof_asm = ('assume', 'case', 'def', 'fix', 'presume')
keyword_proof_asm_goal = ('guess', 'obtain', 'show', 'thus')
keyword_proof_script = (
'apply', 'apply_end', 'apply_trace', 'back', 'defer', 'prefer',
)
operators = (
'::', ':', '(', ')', '[', ']', '_', '=', ',', '|',
'+', '-', '!', '?',
)
proof_operators = ('{', '}', '.', '..')
tokens = {
'root': [
(r'\s+', Text),
(r'\(\*', Comment, 'comment'),
(r'\{\*', Comment, 'text'),
(words(operators), Operator),
(words(proof_operators), Operator.Word),
(words(keyword_minor, prefix=r'\b', suffix=r'\b'), Keyword.Pseudo),
(words(keyword_diag, prefix=r'\b', suffix=r'\b'), Keyword.Type),
(words(keyword_thy, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_theory_decl, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_section, prefix=r'\b', suffix=r'\b'), Generic.Heading),
(words(keyword_subsection, prefix=r'\b', suffix=r'\b'), Generic.Subheading),
(words(keyword_theory_goal, prefix=r'\b', suffix=r'\b'), Keyword.Namespace),
(words(keyword_theory_script, prefix=r'\b', suffix=r'\b'), Keyword.Namespace),
(words(keyword_abandon_proof, prefix=r'\b', suffix=r'\b'), Generic.Error),
(words(keyword_qed, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_goal, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_block, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_decl, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_chain, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_asm, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_asm_goal, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keyword_proof_script, prefix=r'\b', suffix=r'\b'), Keyword.Pseudo),
(r'\\<\w*>', Text.Symbol),
(r"[^\W\d][.\w']*", Name),
(r"\?[^\W\d][.\w']*", Name),
(r"'[^\W\d][.\w']*", Name.Type),
(r'\d[\d_]*', Name), # display numbers as name
(r'0[xX][\da-fA-F][\da-fA-F_]*', Number.Hex),
(r'0[oO][0-7][0-7_]*', Number.Oct),
(r'0[bB][01][01_]*', Number.Bin),
(r'"', String, 'string'),
(r'`', String.Other, 'fact'),
],
'comment': [
(r'[^(*)]+', Comment),
(r'\(\*', Comment, '#push'),
(r'\*\)', Comment, '#pop'),
(r'[(*)]', Comment),
],
'text': [
(r'[^*}]+', Comment),
(r'\*\}', Comment, '#pop'),
(r'\*', Comment),
(r'\}', Comment),
],
'string': [
(r'[^"\\]+', String),
(r'\\<\w*>', String.Symbol),
(r'\\"', String),
(r'\\', String),
(r'"', String, '#pop'),
],
'fact': [
(r'[^`\\]+', String.Other),
(r'\\<\w*>', String.Symbol),
(r'\\`', String.Other),
(r'\\', String.Other),
(r'`', String.Other, '#pop'),
],
}
class LeanLexer(RegexLexer):
"""
For the `Lean <https://github.com/leanprover/lean>`_
theorem prover.
.. versionadded:: 2.0
"""
name = 'Lean'
aliases = ['lean']
filenames = ['*.lean']
mimetypes = ['text/x-lean']
flags = re.MULTILINE | re.UNICODE
keywords1 = (
'import', 'abbreviation', 'opaque_hint', 'tactic_hint', 'definition',
'renaming', 'inline', 'hiding', 'exposing', 'parameter', 'parameters',
'conjecture', 'hypothesis', 'lemma', 'corollary', 'variable', 'variables',
'theorem', 'axiom', 'inductive', 'structure', 'universe', 'alias',
'help', 'options', 'precedence', 'postfix', 'prefix', 'calc_trans',
'calc_subst', 'calc_refl', 'infix', 'infixl', 'infixr', 'notation', 'eval',
'check', 'exit', 'coercion', 'end', 'private', 'using', 'namespace',
'including', 'instance', 'section', 'context', 'protected', 'expose',
'export', 'set_option', 'add_rewrite', 'extends', 'open', 'example',
'constant', 'constants', 'print', 'opaque', 'reducible', 'irreducible',
)
keywords2 = (
'forall', 'fun', 'Pi', 'obtain', 'from', 'have', 'show', 'assume',
'take', 'let', 'if', 'else', 'then', 'by', 'in', 'with', 'begin',
'proof', 'qed', 'calc', 'match',
)
keywords3 = (
# Sorts
'Type', 'Prop',
)
operators = (
u'!=', u'#', u'&', u'&&', u'*', u'+', u'-', u'/', u'@', u'!', u'`',
u'-.', u'->', u'.', u'..', u'...', u'::', u':>', u';', u';;', u'<',
u'<-', u'=', u'==', u'>', u'_', u'|', u'||', u'~', u'=>', u'<=', u'>=',
u'/\\', u'\\/', u'∀', u'Π', u'λ', u'↔', u'∧', u'∨', u'≠', u'≤', u'≥',
u'¬', u'⁻¹', u'⬝', u'▸', u'→', u'∃', u'ℕ', u'ℤ', u'≈', u'×', u'⌞',
u'⌟', u'≡', u'⟨', u'⟩', u'^',
)
punctuation = (u'(', u')', u':', u'{', u'}', u'[', u']', u'⦃', u'⦄',
u':=', u',')
tokens = {
'root': [
(r'\s+', Text),
(r'/-', Comment, 'comment'),
(r'--.*?$', Comment.Single),
(words(keywords1, prefix=r'\b', suffix=r'\b'), Keyword.Namespace),
(words(keywords2, prefix=r'\b', suffix=r'\b'), Keyword),
(words(keywords3, prefix=r'\b', suffix=r'\b'), Keyword.Type),
(words(operators), Name.Builtin.Pseudo),
(words(punctuation), Operator),
(u"[A-Za-z_\u03b1-\u03ba\u03bc-\u03fb\u1f00-\u1ffe\u2100-\u214f]"
u"[A-Za-z_'\u03b1-\u03ba\u03bc-\u03fb\u1f00-\u1ffe\u2070-\u2079"
u"\u207f-\u2089\u2090-\u209c\u2100-\u214f0-9]*", Name),
(r'\d+', Number.Integer),
(r'"', String.Double, 'string'),
(r'[~?][a-z][\w\']*:', Name.Variable)
],
'comment': [
# Multiline Comments
(r'[^/-]', Comment.Multiline),
(r'/-', Comment.Multiline, '#push'),
(r'-/', Comment.Multiline, '#pop'),
(r'[/-]', Comment.Multiline)
],
'string': [
(r'[^\\"]+', String.Double),
(r'\\[n"\\]', String.Escape),
('"', String.Double, '#pop'),
],
}
| bsd-3-clause |
JazzeYoung/VeryDeepAutoEncoder | pylearn2/scripts/tests/test_print_monitor_cv.py | 48 | 1927 | """
Test print_monitor_cv.py by training on a short TrainCV YAML file and
analyzing the output pickle.
"""
import os
import tempfile
from pylearn2.config import yaml_parse
from pylearn2.scripts import print_monitor_cv
from pylearn2.testing.skip import skip_if_no_sklearn
def test_print_monitor_cv():
"""Test print_monitor_cv.py."""
skip_if_no_sklearn()
handle, filename = tempfile.mkstemp()
trainer = yaml_parse.load(test_print_monitor_cv_yaml %
{'filename': filename})
trainer.main_loop()
# run print_monitor_cv.py main
print_monitor_cv.main(filename)
# run print_monitor_cv.py main with all=True
print_monitor_cv.main(filename, all=True)
# cleanup
os.remove(filename)
test_print_monitor_cv_yaml = """
!obj:pylearn2.cross_validation.TrainCV {
dataset_iterator:
!obj:pylearn2.cross_validation.dataset_iterators.DatasetKFold {
dataset:
!obj:pylearn2.testing.datasets.random_one_hot_dense_design_matrix
{
rng: !obj:numpy.random.RandomState { seed: 1 },
num_examples: 10,
dim: 10,
num_classes: 2,
},
},
model: !obj:pylearn2.models.mlp.MLP {
layers: [
!obj:pylearn2.models.mlp.Sigmoid {
layer_name: h0,
dim: 8,
irange: 0.05,
},
!obj:pylearn2.models.mlp.Softmax {
layer_name: y,
n_classes: 2,
irange: 0.05,
},
],
nvis: 10,
},
algorithm: !obj:pylearn2.training_algorithms.bgd.BGD {
batch_size: 5,
line_search_mode: 'exhaustive',
conjugate: 1,
termination_criterion:
!obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 1,
},
},
save_path: %(filename)s,
}
"""
| bsd-3-clause |
ofrobots/grpc | src/python/grpcio_test/grpc_test/framework/face/testing/service.py | 17 | 11617 | # Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Private interfaces implemented by data sets used in Face-layer tests."""
import abc
# interfaces is referenced from specification in this module.
from grpc.framework.face import interfaces as face_interfaces # pylint: disable=unused-import
from grpc_test.framework.face.testing import interfaces
class UnaryUnaryTestMethodImplementation(interfaces.Method):
"""A controllable implementation of a unary-unary RPC method."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def service(self, request, response_callback, context, control):
"""Services an RPC that accepts one message and produces one message.
Args:
request: The single request message for the RPC.
response_callback: A callback to be called to accept the response message
of the RPC.
context: An face_interfaces.RpcContext object.
control: A test_control.Control to control execution of this method.
Raises:
abandonment.Abandoned: May or may not be raised when the RPC has been
aborted.
"""
raise NotImplementedError()
class UnaryUnaryTestMessages(object):
"""A type for unary-request-unary-response message pairings."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def request(self):
"""Affords a request message.
Implementations of this method should return a different message with each
call so that multiple test executions of the test method may be made with
different inputs.
Returns:
A request message.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify(self, request, response, test_case):
"""Verifies that the computed response matches the given request.
Args:
request: A request message.
response: A response message.
test_case: A unittest.TestCase object affording useful assertion methods.
Raises:
AssertionError: If the request and response do not match, indicating that
there was some problem executing the RPC under test.
"""
raise NotImplementedError()
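As a purely illustrative sketch of the unary-unary contract documented above (this toy pairing does not subclass the real `interfaces.Method`, and it ignores the `context` and `control` parameters that a real implementation would use to simulate RPC abortion):

```python
class EchoUnaryUnaryMethod(object):
    """Toy unary-unary method: the response is the request, uppercased."""

    def service(self, request, response_callback, context, control):
        # A real test implementation would consult `control` to inject
        # delays or abort the RPC; this sketch only computes the response.
        response_callback(request.upper())


class EchoUnaryUnaryMessages(object):
    """Toy message pairing matching EchoUnaryUnaryMethod."""

    def __init__(self):
        self._counter = 0

    def request(self):
        # Return a different message on each call, as required above.
        self._counter += 1
        return 'request-%d' % self._counter

    def verify(self, request, response, test_case):
        test_case.assertEqual(request.upper(), response)
```

Here `verify` simply checks that the response is the uppercased request, mirroring how the framework pairs each scenario's messages object with its method implementation.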
class UnaryStreamTestMethodImplementation(interfaces.Method):
"""A controllable implementation of a unary-stream RPC method."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def service(self, request, response_consumer, context, control):
"""Services an RPC that takes one message and produces a stream of messages.
Args:
request: The single request message for the RPC.
response_consumer: A stream.Consumer to be called to accept the response
messages of the RPC.
context: A face_interfaces.RpcContext object.
control: A test_control.Control to control execution of this method.
Raises:
abandonment.Abandoned: May or may not be raised when the RPC has been
aborted.
"""
raise NotImplementedError()
class UnaryStreamTestMessages(object):
"""A type for unary-request-stream-response message pairings."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def request(self):
"""Affords a request message.
Implementations of this method should return a different message with each
call so that multiple test executions of the test method may be made with
different inputs.
Returns:
A request message.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify(self, request, responses, test_case):
"""Verifies that the computed responses match the given request.
Args:
request: A request message.
responses: A sequence of response messages.
test_case: A unittest.TestCase object affording useful assertion methods.
Raises:
AssertionError: If the request and responses do not match, indicating that
there was some problem executing the RPC under test.
"""
raise NotImplementedError()
class StreamUnaryTestMethodImplementation(interfaces.Method):
"""A controllable implementation of a stream-unary RPC method."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def service(self, response_callback, context, control):
"""Services an RPC that takes a stream of messages and produces one message.
Args:
response_callback: A callback to be called to accept the response message
of the RPC.
context: A face_interfaces.RpcContext object.
control: A test_control.Control to control execution of this method.
Returns:
A stream.Consumer with which to accept the request messages of the RPC.
The consumer returned from this method may or may not be invoked to
completion: in the case of RPC abortion, RPC Framework will simply stop
passing messages to this object. Implementations must not assume that
this object will be called to completion of the request stream or even
called at all.
Raises:
abandonment.Abandoned: May or may not be raised when the RPC has been
aborted.
"""
raise NotImplementedError()
class StreamUnaryTestMessages(object):
"""A type for stream-request-unary-response message pairings."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def requests(self):
"""Affords a sequence of request messages.
    Implementations of this method should return a different sequence with each
call so that multiple test executions of the test method may be made with
different inputs.
Returns:
A sequence of request messages.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify(self, requests, response, test_case):
"""Verifies that the computed response matches the given requests.
Args:
requests: A sequence of request messages.
response: A response message.
test_case: A unittest.TestCase object affording useful assertion methods.
Raises:
AssertionError: If the requests and response do not match, indicating that
there was some problem executing the RPC under test.
"""
raise NotImplementedError()
class StreamStreamTestMethodImplementation(interfaces.Method):
"""A controllable implementation of a stream-stream RPC method."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def service(self, response_consumer, context, control):
"""Services an RPC that accepts and produces streams of messages.
Args:
response_consumer: A stream.Consumer to be called to accept the response
messages of the RPC.
context: A face_interfaces.RpcContext object.
control: A test_control.Control to control execution of this method.
Returns:
A stream.Consumer with which to accept the request messages of the RPC.
The consumer returned from this method may or may not be invoked to
completion: in the case of RPC abortion, RPC Framework will simply stop
passing messages to this object. Implementations must not assume that
this object will be called to completion of the request stream or even
called at all.
Raises:
abandonment.Abandoned: May or may not be raised when the RPC has been
aborted.
"""
raise NotImplementedError()
class StreamStreamTestMessages(object):
"""A type for stream-request-stream-response message pairings."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def requests(self):
"""Affords a sequence of request messages.
    Implementations of this method should return a different sequence with each
call so that multiple test executions of the test method may be made with
different inputs.
Returns:
A sequence of request messages.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify(self, requests, responses, test_case):
    """Verifies that the computed responses match the given requests.
Args:
requests: A sequence of request messages.
responses: A sequence of response messages.
test_case: A unittest.TestCase object affording useful assertion methods.
Raises:
AssertionError: If the requests and responses do not match, indicating
that there was some problem executing the RPC under test.
"""
raise NotImplementedError()
class TestService(object):
"""A specification of implemented RPC methods to use in tests."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def name(self):
"""Identifies the RPC service name used during the test.
Returns:
The RPC service name to be used for the test.
"""
raise NotImplementedError()
@abc.abstractmethod
def unary_unary_scenarios(self):
"""Affords unary-request-unary-response test methods and their messages.
Returns:
A dict from method name to pair. The first element of the pair
is a UnaryUnaryTestMethodImplementation object and the second element
is a sequence of UnaryUnaryTestMethodMessages objects.
"""
raise NotImplementedError()
@abc.abstractmethod
def unary_stream_scenarios(self):
"""Affords unary-request-stream-response test methods and their messages.
Returns:
A dict from method name to pair. The first element of the pair is a
UnaryStreamTestMethodImplementation object and the second element is a
sequence of UnaryStreamTestMethodMessages objects.
"""
raise NotImplementedError()
@abc.abstractmethod
def stream_unary_scenarios(self):
"""Affords stream-request-unary-response test methods and their messages.
Returns:
A dict from method name to pair. The first element of the pair is a
StreamUnaryTestMethodImplementation object and the second element is a
sequence of StreamUnaryTestMethodMessages objects.
"""
raise NotImplementedError()
@abc.abstractmethod
def stream_stream_scenarios(self):
"""Affords stream-request-stream-response test methods and their messages.
Returns:
A dict from method name to pair. The first element of the pair is a
StreamStreamTestMethodImplementation object and the second element is a
sequence of StreamStreamTestMethodMessages objects.
"""
raise NotImplementedError()
| bsd-3-clause |
asiersarasua/QGIS | python/plugins/processing/algs/qgis/Ruggedness.py | 8 | 3299 | # -*- coding: utf-8 -*-
"""
***************************************************************************
Ruggedness.py
---------------------
Date : October 2016
Copyright : (C) 2016 by Alexander Bruy
Email : alexander dot bruy at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Alexander Bruy'
__date__ = 'October 2016'
__copyright__ = '(C) 2016, Alexander Bruy'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import os
from qgis.PyQt.QtGui import QIcon
from qgis.analysis import QgsRuggednessFilter
from qgis.core import (QgsRasterFileWriter,
QgsProcessingParameterRasterLayer,
QgsProcessingParameterNumber,
QgsProcessingParameterRasterDestination)
from processing.algs.qgis.QgisAlgorithm import QgisAlgorithm
pluginPath = os.path.split(os.path.split(os.path.dirname(__file__))[0])[0]
class Ruggedness(QgisAlgorithm):
INPUT = 'INPUT'
Z_FACTOR = 'Z_FACTOR'
OUTPUT = 'OUTPUT'
def icon(self):
return QIcon(os.path.join(pluginPath, 'images', 'dem.png'))
def group(self):
return self.tr('Raster terrain analysis')
def groupId(self):
return 'rasterterrainanalysis'
def __init__(self):
super().__init__()
def initAlgorithm(self, config=None):
self.addParameter(QgsProcessingParameterRasterLayer(self.INPUT,
self.tr('Elevation layer')))
self.addParameter(QgsProcessingParameterNumber(self.Z_FACTOR,
self.tr('Z factor'),
QgsProcessingParameterNumber.Double,
1, False, 0.00))
self.addParameter(QgsProcessingParameterRasterDestination(self.OUTPUT, self.tr('Ruggedness')))
def name(self):
return 'ruggednessindex'
def displayName(self):
return self.tr('Ruggedness index')
def processAlgorithm(self, parameters, context, feedback):
inputFile = self.parameterAsRasterLayer(parameters, self.INPUT, context).source()
zFactor = self.parameterAsDouble(parameters, self.Z_FACTOR, context)
outputFile = self.parameterAsOutputLayer(parameters, self.OUTPUT, context)
outputFormat = QgsRasterFileWriter.driverForExtension(os.path.splitext(outputFile)[1])
ruggedness = QgsRuggednessFilter(inputFile, outputFile, outputFormat)
ruggedness.setZFactor(zFactor)
ruggedness.processRaster(feedback)
return {self.OUTPUT: outputFile}
| gpl-2.0 |
bbc/kamaelia | Sketches/MPS/BugReports/FixTests/Kamaelia/Examples/PythonInterpreter/ServerBasedPythonInterpreter.py | 6 | 1350 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2010 British Broadcasting Corporation and Kamaelia Contributors(1)
#
# (1) Kamaelia Contributors are listed in the AUTHORS file and at
# http://www.kamaelia.org/AUTHORS - please extend this file,
# not this notice.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from Kamaelia.Chassis.Pipeline import Pipeline
from Kamaelia.Chassis.ConnectedServer import ServerCore
from Kamaelia.Util.PureTransformer import PureTransformer
from Kamaelia.Experimental.PythonInterpreter import InterpreterTransformer
def NetInterpreter(*args, **argv):
return Pipeline(
PureTransformer(lambda x: str(x).rstrip()),
InterpreterTransformer(),
PureTransformer(lambda x: str(x)+"\r\n>>> "),
)
ServerCore(protocol=NetInterpreter, port=1236).run()
| apache-2.0 |
formiano/enigma2-4.4 | lib/python/Components/TuneTest.py | 13 | 10730 | from enigma import eDVBFrontendParametersSatellite, eDVBFrontendParametersTerrestrial, eDVBFrontendParametersCable, eDVBFrontendParameters, eDVBResourceManager, eTimer
class Tuner:
def __init__(self, frontend, ignore_rotor=False):
self.frontend = frontend
self.ignore_rotor = ignore_rotor
# transponder = (frequency, symbolrate, polarisation, fec, inversion, orbpos, system, modulation, rolloff, pilot, tsid, onid)
# 0 1 2 3 4 5 6 7 8 9 10 11
def tune(self, transponder):
if self.frontend:
print "[TuneTest] tuning to transponder with data", transponder
parm = eDVBFrontendParametersSatellite()
parm.frequency = transponder[0] * 1000
parm.symbol_rate = transponder[1] * 1000
parm.polarisation = transponder[2]
parm.fec = transponder[3]
parm.inversion = transponder[4]
parm.orbital_position = transponder[5]
parm.system = transponder[6]
parm.modulation = transponder[7]
parm.rolloff = transponder[8]
parm.pilot = transponder[9]
self.tuneSatObj(parm)
def tuneSatObj(self, transponderObj):
if self.frontend:
feparm = eDVBFrontendParameters()
feparm.setDVBS(transponderObj, self.ignore_rotor)
self.lastparm = feparm
self.frontend.tune(feparm)
def tuneTerr(self, frequency,
inversion=2, bandwidth = 7000000, fechigh = 6, feclow = 6,
modulation = 2, transmission = 2, guard = 4,
hierarchy = 4, system = 0, plpid = 0):
if self.frontend:
print "[TuneTest] tuning to transponder with data", [frequency, inversion, bandwidth, fechigh, feclow, modulation, transmission, guard, hierarchy, system, plpid]
parm = eDVBFrontendParametersTerrestrial()
parm.frequency = frequency
parm.inversion = inversion
parm.bandwidth = bandwidth
parm.code_rate_HP = fechigh
parm.code_rate_LP = feclow
parm.modulation = modulation
parm.transmission_mode = transmission
parm.guard_interval = guard
parm.hierarchy = hierarchy
parm.system = system
parm.plpid = plpid
self.tuneTerrObj(parm)
def tuneTerrObj(self, transponderObj):
if self.frontend:
feparm = eDVBFrontendParameters()
feparm.setDVBT(transponderObj)
self.lastparm = feparm
self.frontend.tune(feparm)
def tuneCab(self, transponder):
if self.frontend:
print "[TuneTest] tuning to transponder with data", transponder
parm = eDVBFrontendParametersCable()
parm.frequency = transponder[0]
parm.symbol_rate = transponder[1]
parm.modulation = transponder[2]
parm.fec_inner = transponder[3]
parm.inversion = transponder[4]
#parm.system = transponder[5]
self.tuneCabObj(parm)
def tuneCabObj(self, transponderObj):
if self.frontend:
feparm = eDVBFrontendParameters()
feparm.setDVBC(transponderObj)
self.lastparm = feparm
self.frontend.tune(feparm)
def retune(self):
if self.frontend:
self.frontend.tune(self.lastparm)
def getTransponderData(self):
ret = { }
if self.frontend:
self.frontend.getTransponderData(ret, True)
return ret
# tunes a list of transponders and checks, if they lock and optionally checks the onid/tsid combination
# 1) add transponders with addTransponder()
# 2) call run(<checkPIDs = True>)
# 3) finishedChecking() is called, when the run is finished
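As a rough, standalone illustration of the 12-element transponder tuple layout described in the comments of this file (the named fields below are only for readability and are not part of the API; the values are hypothetical):

```python
from collections import namedtuple

# Field order follows the positional comment used by Tuner/TuneTest:
# (frequency, symbolrate, polarisation, fec, inversion, orbpos,
#  system, modulation, rolloff, pilot, tsid, onid)
Transponder = namedtuple('Transponder', [
    'frequency', 'symbolrate', 'polarisation', 'fec', 'inversion',
    'orbpos', 'system', 'modulation', 'rolloff', 'pilot', 'tsid', 'onid'])

# Hypothetical values, only to show the shape addTransponder() expects:
example = Transponder(11778, 27500, 1, 3, 2, 192, 0, 1, 0, 2, 1101, 1)
assert example.tsid == example[10] and example.onid == example[11]
```

With checkPIDs=True, only entries whose tsid and onid are not -1 are considered when picking transponders to tune.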
class TuneTest:
def __init__(self, feid, stopOnSuccess = -1, stopOnError = -1):
self.stopOnSuccess = stopOnSuccess
self.stopOnError = stopOnError
self.feid = feid
self.transponderlist = []
self.currTuned = None
print "TuneTest for feid %d" % self.feid
if not self.openFrontend():
self.oldref = self.session.nav.getCurrentlyPlayingServiceOrGroup()
self.session.nav.stopService() # try to disable foreground service
if not self.openFrontend():
if self.session.pipshown: # try to disable pip
if hasattr(self.session, 'infobar'):
if self.session.infobar.servicelist.dopipzap:
self.session.infobar.servicelist.togglePipzap()
if hasattr(self.session, 'pip'):
del self.session.pip
self.session.pipshown = False
if not self.openFrontend():
self.frontend = None # in normal case this should not happen
self.tuner = Tuner(self.frontend)
self.timer = eTimer()
self.timer.callback.append(self.updateStatus)
def gotTsidOnid(self, tsid, onid):
print "******** got tsid, onid:", tsid, onid
		if tsid != -1 and onid != -1:
self.pidStatus = self.INTERNAL_PID_STATUS_SUCCESSFUL
self.tsid = tsid
self.onid = onid
else:
self.pidStatus = self.INTERNAL_PID_STATUS_FAILED
self.tsid = -1
self.onid = -1
self.timer.start(100, True)
def updateStatus(self):
dict = {}
self.frontend.getFrontendStatus(dict)
stop = False
print "status:", dict
if dict["tuner_state"] == "TUNING":
print "TUNING"
self.timer.start(100, True)
self.progressCallback((self.getProgressLength(), self.tuningtransponder, self.STATUS_TUNING, self.currTuned))
elif self.checkPIDs and self.pidStatus == self.INTERNAL_PID_STATUS_NOOP:
print "2nd choice"
if dict["tuner_state"] == "LOCKED":
print "acquiring TSID/ONID"
self.raw_channel.receivedTsidOnid.get().append(self.gotTsidOnid)
self.raw_channel.requestTsidOnid()
self.pidStatus = self.INTERNAL_PID_STATUS_WAITING
else:
self.pidStatus = self.INTERNAL_PID_STATUS_FAILED
elif self.checkPIDs and self.pidStatus == self.INTERNAL_PID_STATUS_WAITING:
print "waiting for pids"
else:
if dict["tuner_state"] == "LOSTLOCK" or dict["tuner_state"] == "FAILED":
self.tuningtransponder = self.nextTransponder()
				self.failedTune.append([self.currTuned, self.oldTuned, "tune_failed", dict]) # last parameter is the frontend status
if self.stopOnError != -1 and self.stopOnError <= len(self.failedTune):
stop = True
elif dict["tuner_state"] == "LOCKED":
pidsFailed = False
if self.checkPIDs:
if self.currTuned is not None:
if self.tsid != self.currTuned[10] or self.onid != self.currTuned[11]:
self.failedTune.append([self.currTuned, self.oldTuned, "pids_failed", {"real": (self.tsid, self.onid), "expected": (self.currTuned[10], self.currTuned[11])}, dict]) # last parameter is the frontend status
pidsFailed = True
else:
self.successfullyTune.append([self.currTuned, self.oldTuned, dict]) # 3rd parameter is the frontend status
if self.stopOnSuccess != -1 and self.stopOnSuccess <= len(self.successfullyTune):
stop = True
				elif not self.checkPIDs or (self.checkPIDs and not pidsFailed):
self.successfullyTune.append([self.currTuned, self.oldTuned, dict]) # 3rd parameter is the frontend status
if self.stopOnSuccess != -1 and self.stopOnSuccess <= len(self.successfullyTune):
stop = True
self.tuningtransponder = self.nextTransponder()
else:
print "************* tuner_state:", dict["tuner_state"]
self.progressCallback((self.getProgressLength(), self.tuningtransponder, self.STATUS_NOOP, self.currTuned))
if not stop:
self.tune()
if self.tuningtransponder < len(self.transponderlist) and not stop:
if self.pidStatus != self.INTERNAL_PID_STATUS_WAITING:
self.timer.start(100, True)
print "restart timer"
else:
print "not restarting timers (waiting for pids)"
else:
self.progressCallback((self.getProgressLength(), len(self.transponderlist), self.STATUS_DONE, self.currTuned))
print "finishedChecking"
self.finishedChecking()
def firstTransponder(self):
print "firstTransponder:"
index = 0
if self.checkPIDs:
print "checkPIDs-loop"
# check for tsid != -1 and onid != -1
print "index:", index
print "len(self.transponderlist):", len(self.transponderlist)
while index < len(self.transponderlist) and (self.transponderlist[index][10] == -1 or self.transponderlist[index][11] == -1):
index += 1
print "FirstTransponder final index:", index
return index
def nextTransponder(self):
print "getting next transponder", self.tuningtransponder
index = self.tuningtransponder + 1
if self.checkPIDs:
print "checkPIDs-loop"
# check for tsid != -1 and onid != -1
print "index:", index
print "len(self.transponderlist):", len(self.transponderlist)
while index < len(self.transponderlist) and (self.transponderlist[index][10] == -1 or self.transponderlist[index][11] == -1):
index += 1
print "next transponder index:", index
return index
def finishedChecking(self):
print "finished testing"
print "successfull:", self.successfullyTune
print "failed:", self.failedTune
def openFrontend(self):
res_mgr = eDVBResourceManager.getInstance()
if res_mgr:
self.raw_channel = res_mgr.allocateRawChannel(self.feid)
if self.raw_channel:
self.frontend = self.raw_channel.getFrontend()
if self.frontend:
return True
else:
print "getFrontend failed"
else:
print "getRawChannel failed"
else:
print "getResourceManager instance failed"
return False
def tune(self):
print "tuning to", self.tuningtransponder
if self.tuningtransponder < len(self.transponderlist):
self.pidStatus = self.INTERNAL_PID_STATUS_NOOP
self.oldTuned = self.currTuned
self.currTuned = self.transponderlist[self.tuningtransponder]
self.tuner.tune(self.transponderlist[self.tuningtransponder])
INTERNAL_PID_STATUS_NOOP = 0
INTERNAL_PID_STATUS_WAITING = 1
INTERNAL_PID_STATUS_SUCCESSFUL = 2
INTERNAL_PID_STATUS_FAILED = 3
def run(self, checkPIDs = False):
self.checkPIDs = checkPIDs
self.pidStatus = self.INTERNAL_PID_STATUS_NOOP
self.failedTune = []
self.successfullyTune = []
self.tuningtransponder = self.firstTransponder()
self.tune()
self.progressCallback((self.getProgressLength(), self.tuningtransponder, self.STATUS_START, self.currTuned))
self.timer.start(100, True)
# transponder = (frequency, symbolrate, polarisation, fec, inversion, orbpos, <system>, <modulation>, <rolloff>, <pilot>, <tsid>, <onid>)
# 0 1 2 3 4 5 6 7 8 9 10 11
def addTransponder(self, transponder):
self.transponderlist.append(transponder)
def clearTransponder(self):
self.transponderlist = []
def getProgressLength(self):
count = 0
if self.stopOnError == -1:
count = len(self.transponderlist)
else:
if count < self.stopOnError:
count = self.stopOnError
if self.stopOnSuccess == -1:
count = len(self.transponderlist)
else:
if count < self.stopOnSuccess:
count = self.stopOnSuccess
return count
STATUS_START = 0
STATUS_TUNING = 1
STATUS_DONE = 2
STATUS_NOOP = 3
# can be overwritten
# progress = (range, value, status, transponder)
def progressCallback(self, progress):
pass
| gpl-2.0 |
mzizzi/ansible | lib/ansible/modules/web_infrastructure/jenkins_script.py | 9 | 6571 | #!/usr/bin/python
# encoding: utf-8
# (c) 2016, James Hogarth <james.hogarth@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
author: James Hogarth
module: jenkins_script
short_description: Executes a groovy script in the jenkins instance
version_added: '2.3'
description:
- The C(jenkins_script) module takes a script plus a dict of values
to use within the script and returns the result of the script being run.
options:
script:
description:
- The groovy script to be executed.
This gets passed as a string Template if args is defined.
required: true
default: null
url:
description:
- The jenkins server to execute the script against. The default is a local
jenkins instance that is not being proxied through a webserver.
required: false
default: http://localhost:8080
validate_certs:
description:
- If set to C(no), the SSL certificates will not be validated.
This should only set to C(no) used on personally controlled sites
using self-signed certificates as it avoids verifying the source site.
required: false
default: True
user:
description:
- The username to connect to the jenkins server with.
required: false
default: null
password:
description:
- The password to connect to the jenkins server with.
required: false
default: null
timeout:
description:
- The request timeout in seconds
required: false
default: 10
version_added: "2.4"
args:
description:
- A dict of key-value pairs used in formatting the script using string.Template (see https://docs.python.org/2/library/string.html#template-strings).
required: false
default: null
notes:
  - Since the script can do anything, this module does not report on changes.
    Because the script is still executed, it is important to set changed_when
    so the Ansible output is clear about any alterations made.
'''
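The `args` substitution described above relies on the standard library's `string.Template`; a minimal sketch of the mechanics (the script and values here are illustrative, not taken from a live Jenkins instance):

```python
from string import Template

script = 'instance.setMode(${jenkins_mode})'
rendered = Template(script).substitute({'jenkins_mode': 'Node.Mode.EXCLUSIVE'})
assert rendered == 'instance.setMode(Node.Mode.EXCLUSIVE)'
```

An undefined placeholder raises KeyError, so every `${name}` in the script must have a matching key in args.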
EXAMPLES = '''
- name: Obtaining a list of plugins
jenkins_script:
script: 'println(Jenkins.instance.pluginManager.plugins)'
user: admin
password: admin
- name: Setting master using a variable to hold a more complicate script
vars:
setmaster_mode: |
import jenkins.model.*
instance = Jenkins.getInstance()
instance.setMode(${jenkins_mode})
instance.save()
- name: use the variable as the script
jenkins_script:
script: "{{ setmaster_mode }}"
args:
jenkins_mode: Node.Mode.EXCLUSIVE
- name: interacting with an untrusted HTTPS connection
jenkins_script:
script: "println(Jenkins.instance.pluginManager.plugins)"
user: admin
password: admin
url: https://localhost
validate_certs: no
'''
RETURN = '''
output:
description: Result of script
returned: success
type: string
sample: 'Result: true'
'''
import json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
def is_csrf_protection_enabled(module):
resp, info = fetch_url(module,
module.params['url'] + '/api/json',
method='GET')
if info["status"] != 200:
module.fail_json(msg="HTTP error " + str(info["status"]) + " " + info["msg"])
content = resp.read()
return json.loads(content).get('useCrumbs', False)
def get_crumb(module):
resp, info = fetch_url(module,
module.params['url'] + '/crumbIssuer/api/json',
method='GET')
if info["status"] != 200:
module.fail_json(msg="HTTP error " + str(info["status"]) + " " + info["msg"])
content = resp.read()
return json.loads(content)
def main():
module = AnsibleModule(
argument_spec=dict(
script=dict(required=True, type="str"),
url=dict(required=False, type="str", default="http://localhost:8080"),
validate_certs=dict(required=False, type="bool", default=True),
user=dict(required=False, no_log=True, type="str", default=None),
password=dict(required=False, no_log=True, type="str", default=None),
timeout=dict(required=False, type="int", default=10),
args=dict(required=False, type="dict", default=None)
)
)
if module.params['user'] is not None:
if module.params['password'] is None:
module.fail_json(msg="password required when user provided")
module.params['url_username'] = module.params['user']
module.params['url_password'] = module.params['password']
module.params['force_basic_auth'] = True
if module.params['args'] is not None:
from string import Template
script_contents = Template(module.params['script']).substitute(module.params['args'])
else:
script_contents = module.params['script']
headers = {}
if is_csrf_protection_enabled(module):
crumb = get_crumb(module)
headers = {crumb['crumbRequestField']: crumb['crumb']}
resp, info = fetch_url(module,
module.params['url'] + "/scriptText",
data=urlencode({'script': script_contents}),
headers=headers,
method="POST",
timeout=module.params['timeout'])
if info["status"] != 200:
module.fail_json(msg="HTTP error " + str(info["status"]) + " " + info["msg"])
result = resp.read()
if 'Exception:' in result and 'at java.lang.Thread' in result:
module.fail_json(msg="script failed with stacktrace:\n " + result)
module.exit_json(
output=result,
)
if __name__ == '__main__':
main()
| gpl-3.0 |
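The `args` option of the module above is applied through Python's `string.Template` before the script is POSTed to `/scriptText`. A minimal standalone sketch of that substitution step (the script body and argument values here are illustrative, matching the EXAMPLES section, not output captured from Jenkins):

```python
from string import Template

# A Groovy script with a Template placeholder, as the module's 'args' option expects
script = 'println(Jenkins.instance.setMode(${jenkins_mode}))'
args = {'jenkins_mode': 'Node.Mode.EXCLUSIVE'}

# This mirrors the substitution the module performs when 'args' is provided
rendered = Template(script).substitute(args)
print(rendered)  # println(Jenkins.instance.setMode(Node.Mode.EXCLUSIVE))
```

Note that `substitute` raises `KeyError` on a missing argument, which is why the module only runs it when `args` is not `None`.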
prisae/pelican-plugins | optimize_images/optimize_images.py | 68 | 1719 | # -*- coding: utf-8 -*-
"""
Optimize images (jpg and png)
Assumes that jpegtran and optipng are installed on the path.
http://jpegclub.org/jpegtran/
http://optipng.sourceforge.net/
Copyright (c) 2012 Irfan Ahmad (http://i.com.pk)
"""
import logging
import os
from subprocess import call
from pelican import signals
logger = logging.getLogger(__name__)
# Display command output on DEBUG and TRACE
SHOW_OUTPUT = logger.getEffectiveLevel() <= logging.DEBUG
# A list of file types with their respective commands
COMMANDS = {
# '.ext': ('command {flags} {filename}', 'silent_flag', 'verbose_flag')
'.jpg': ('jpegtran {flags} -copy none -optimize -outfile "{filename}" "{filename}"', '', '-v'),
'.png': ('optipng {flags} "{filename}"', '--quiet', ''),
}
def optimize_images(pelican):
"""
Optimize jpg and png images in the Pelican output path.
:param pelican: The Pelican instance
"""
for dirpath, _, filenames in os.walk(pelican.settings['OUTPUT_PATH']):
for name in filenames:
if os.path.splitext(name)[1] in COMMANDS.keys():
optimize(dirpath, name)
def optimize(dirpath, filename):
"""
Run the registered optimization command for the given file.
:param dirpath: Directory containing the file to be optimized
:param filename: Name of the file to be optimized
"""
filepath = os.path.join(dirpath, filename)
logger.info('optimizing %s', filepath)
ext = os.path.splitext(filename)[1]
command, silent, verbose = COMMANDS[ext]
flags = verbose if SHOW_OUTPUT else silent
command = command.format(filename=filepath, flags=flags)
call(command, shell=True)
def register():
signals.finalized.connect(optimize_images)
| agpl-3.0 |
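In `optimize()` above, the command template for a file's extension is expanded with `str.format(filename=..., flags=...)` before being passed to `subprocess.call`. A small sketch of that expansion for the png entry (the path is illustrative, and the template assumes the usual `{filename}` placeholder the `format` call expects):

```python
# One COMMANDS entry: (template, silent_flag, verbose_flag)
command_tpl, silent, verbose = ('optipng {flags} "{filename}"', '--quiet', '')

show_output = False  # assume logging is not at DEBUG level
flags = verbose if show_output else silent

# Expand the template exactly as optimize() does before calling the shell
command = command_tpl.format(filename='output/images/logo.png', flags=flags)
print(command)  # optipng --quiet "output/images/logo.png"
```

Quoting `{filename}` in the template matters because output paths may contain spaces, and the command is run with `shell=True`.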
dreibh/planetlab-lxc-nodemanager | sliver_lxc.py | 1 | 18070 | #
"""LXC slivers"""
import subprocess
import sys
import time
import os, os.path
import grp
from pwd import getpwnam
from string import Template
# vsys probably should not be a plugin
# the thing is, the right way to handle stuff would be that
# if slivers get created by doing a,b,c
# then they should be deleted by doing c,b,a
# the current ordering model for vsys plugins completely fails to capture that
from plugins.vsys import removeSliverFromVsys, startService as vsysStartService
import libvirt
import logger
import plnode.bwlimit as bwlimit
from initscript import Initscript
from account import Account
from sliver_libvirt import Sliver_Libvirt
BTRFS_TIMEOUT = 15*60
class Sliver_LXC(Sliver_Libvirt, Initscript):
"""This class wraps LXC commands"""
SHELL = '/usr/sbin/vsh'
TYPE = 'sliver.LXC'
# Need to add a tag at myplc to actually use this account
# type = 'sliver.LXC'
REF_IMG_BASE_DIR = '/vservers/.lvref'
CON_BASE_DIR = '/vservers'
def __init__(self, rec):
name = rec['name']
Sliver_Libvirt.__init__(self, rec)
Initscript.__init__(self, name)
def configure(self, rec):
logger.log('========== sliver_lxc.configure {}'.format(self.name))
Sliver_Libvirt.configure(self, rec)
# in case we update nodemanager..
self.install_and_enable_vinit()
# do the configure part from Initscript
Initscript.configure(self, rec)
# remember configure() always gets called *before* start()
# in particular the slice initscript
# is expected to be in place already at this point
def start(self, delay=0):
logger.log('==================== sliver_lxc.start {}'.format(self.name))
if 'enabled' in self.rspec and self.rspec['enabled'] <= 0:
logger.log('sliver_lxc: not starting {}, is not enabled'.format(self.name))
return
# the generic /etc/init.d/vinit script is permanently refreshed, and enabled
self.install_and_enable_vinit()
# expose .ssh for omf_friendly slivers
if 'tags' in self.rspec and 'omf_control' in self.rspec['tags']:
Account.mount_ssh_dir(self.name)
# logger.log("NM is exiting for debug - just about to start {}".format(self.name))
# exit(0)
Sliver_Libvirt.start(self, delay)
def rerun_slice_vinit(self):
"""This is called at startup, and whenever the initscript code changes"""
logger.log("sliver_lxc.rerun_slice_vinit {}".format(self.name))
plain = "virsh -c lxc:/// lxc-enter-namespace --noseclabel -- {} /usr/bin/systemctl --system daemon-reload"\
.format(self.name)
command = plain.split()
logger.log_call(command, timeout=3)
plain = "virsh -c lxc:/// lxc-enter-namespace --noseclabel -- {} /usr/bin/systemctl restart vinit.service"\
.format(self.name)
command = plain.split()
logger.log_call(command, timeout=3)
@staticmethod
def create(name, rec=None):
'''
Create dirs, copy fs image, lxc_create
'''
logger.verbose('sliver_lxc: {} create'.format(name))
conn = Sliver_Libvirt.getConnection(Sliver_LXC.TYPE)
vref = rec['vref']
if vref is None:
vref = "lxc-f24-x86_64"
logger.log("sliver_libvirt: {}: WARNING - no vref attached, using hard-wired default {}"
.format(name, vref))
# compute guest arch from vref
# essentially we want x86_64 (default) or i686 here for libvirt
try:
(x, y, arch) = vref.split('-')
arch = "x86_64" if arch.find("64") >= 0 else "i686"
except:
arch = 'x86_64'
# Get the type of image from vref myplc tags specified as:
# pldistro = lxc
# fcdistro = squeeze
# arch x86_64
arch = 'x86_64'
tags = rec['rspec']['tags']
if 'arch' in tags:
arch = tags['arch']
if arch == 'i386':
arch = 'i686'
refImgDir = os.path.join(Sliver_LXC.REF_IMG_BASE_DIR, vref)
containerDir = os.path.join(Sliver_LXC.CON_BASE_DIR, name)
# check the template exists -- there's probably a better way..
if not os.path.isdir(refImgDir):
logger.log('sliver_lxc: {}: ERROR Could not create sliver - reference image {} not found'
.format(name, vref))
logger.log('sliver_lxc: {}: ERROR Expected reference image in {}'.format(name, refImgDir))
return
# during some time this fragment had been commented out
# but we're seeing cases where this code might actually be useful, so..
# this hopefully should be fixed now
# # in fedora20 we have some difficulty in properly cleaning up /vservers/<slicename>
# # also note that running e.g. btrfs subvolume create /vservers/.lvref/image /vservers/foo
# # behaves differently, whether /vservers/foo exists or not:
# # if /vservers/foo does not exist, it creates /vservers/foo
# # but if it does exist, then it creates /vservers/foo/image !!
# # so we need to check the expected container rootfs does not exist yet
# # this hopefully could be removed in a future release
if os.path.exists (containerDir):
logger.log("sliver_lxc: {}: WARNING cleaning up pre-existing {}".format(name, containerDir))
command = ['btrfs', 'subvolume', 'delete', containerDir]
logger.log_call(command, BTRFS_TIMEOUT)
# re-check
if os.path.exists (containerDir):
logger.log('sliver_lxc: {}: ERROR Could not create sliver - could not clean up empty {}'
.format(name, containerDir))
return
# Snapshot the reference image fs
# this assumes the reference image is in its own subvolume
command = ['btrfs', 'subvolume', 'snapshot', refImgDir, containerDir]
if not logger.log_call(command, timeout=BTRFS_TIMEOUT):
logger.log('sliver_lxc: ERROR Could not create BTRFS snapshot at {}'
.format(containerDir))
return
command = ['chmod', '755', containerDir]
logger.log_call(command)
# TODO: set quotas...
# Set hostname. A valid hostname cannot have '_'
#with open(os.path.join(containerDir, 'etc/hostname'), 'w') as f:
# print >>f, name.replace('_', '-')
# Add slices group if not already present
try:
group = grp.getgrnam('slices')
except:
command = ['/usr/sbin/groupadd', 'slices']
logger.log_call(command)
# Add unix account (TYPE is specified in the subclass)
command = ['/usr/sbin/useradd', '-g', 'slices', '-s', Sliver_LXC.SHELL, name, '-p', '*']
logger.log_call(command)
command = ['mkdir', '/home/{}/.ssh'.format(name)]
logger.log_call(command)
# Create PK pair keys to connect from the host to the guest without
# password... maybe remove the need for authentication inside the
# guest?
command = ['su', '-s', '/bin/bash', '-c',
'ssh-keygen -t rsa -N "" -f /home/{}/.ssh/id_rsa'.format(name)]
logger.log_call(command)
command = ['chown', '-R', '{}:slices'.format(name), '/home/{}/.ssh'.format(name)]
logger.log_call(command)
command = ['mkdir', '{}/root/.ssh'.format(containerDir)]
logger.log_call(command)
command = ['cp', '/home/{}/.ssh/id_rsa.pub'.format(name),
'{}/root/.ssh/authorized_keys'.format(containerDir)]
logger.log_call(command)
logger.log("creating /etc/slicename file in {}".format(os.path.join(containerDir, 'etc/slicename')))
try:
with open(os.path.join(containerDir, 'etc/slicename'), 'w') as f:
f.write(name)
except:
logger.log_exc("exception while creating /etc/slicename")
try:
with open(os.path.join(containerDir, 'etc/slicefamily'), 'w') as f:
f.write(vref)
except:
logger.log_exc("exception while creating /etc/slicefamily")
uid = None
try:
uid = getpwnam(name).pw_uid
except KeyError:
# keyerror will happen if user id was not created successfully
logger.log_exc("exception while getting user id")
if uid is not None:
logger.log("uid is {}".format(uid))
command = ['mkdir', '{}/home/{}'.format(containerDir, name)]
logger.log_call(command)
command = ['chown', name, '{}/home/{}'.format(containerDir, name)]
logger.log_call(command)
etcpasswd = os.path.join(containerDir, 'etc/passwd')
etcgroup = os.path.join(containerDir, 'etc/group')
if os.path.exists(etcpasswd):
# create all accounts with gid=1001 - i.e. 'slices' like it is in the root context
slices_gid = 1001
logger.log("adding user {name} id {uid} gid {slices_gid} to {etcpasswd}"
.format(**(locals())))
try:
with open(etcpasswd, 'a') as passwdfile:
passwdfile.write("{name}:x:{uid}:{slices_gid}::/home/{name}:/bin/bash\n"
.format(**locals()))
except:
logger.log_exc("exception while updating {}".format(etcpasswd))
logger.log("adding group slices with gid {slices_gid} to {etcgroup}"
.format(**locals()))
try:
with open(etcgroup, 'a') as groupfile:
groupfile.write("slices:x:{slices_gid}\n"
.format(**locals()))
except:
logger.log_exc("exception while updating {}".format(etcgroup))
sudoers = os.path.join(containerDir, 'etc/sudoers')
if os.path.exists(sudoers):
try:
with open(sudoers, 'a') as f:
f.write("{} ALL=(ALL) NOPASSWD: ALL\n".format(name))
except:
logger.log_exc("exception while updating /etc/sudoers")
# customizations for the user environment - root or slice uid
# we save the whole business in /etc/planetlab.profile
# and source this file for both root and the slice uid's .profile
# prompt for slice owner, + LD_PRELOAD for transparently wrap bind
pl_profile = os.path.join(containerDir, "etc/planetlab.profile")
ld_preload_text = """# by default, we define this setting so that calls to bind(2),
# when invoked on 0.0.0.0, get transparently redirected to the public interface of this node
# see https://svn.planet-lab.org/wiki/LxcPortForwarding"""
usrmove_path_text = """# VM's before Features/UsrMove need /bin and /sbin in their PATH"""
usrmove_path_code = """
pathmunge () {
if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
fi
}
pathmunge /bin after
pathmunge /sbin after
unset pathmunge
"""
with open(pl_profile, 'w') as f:
f.write("export PS1='{}@\H \$ '\n".format(name))
f.write("{}\n".format(ld_preload_text))
f.write("if [ -e /etc/planetlab/lib/bind_public.so ] ; then # Only preload bind_public if it exists.\n")
f.write(" export LD_PRELOAD=/etc/planetlab/lib/bind_public.so\n")
f.write("fi\n")
f.write("{}\n".format(usrmove_path_text))
f.write("{}\n".format(usrmove_path_code))
# make sure this file is sourced from both root's and slice's .profile
enforced_line = "[ -f /etc/planetlab.profile ] && source /etc/planetlab.profile\n"
for path in [ 'root/.profile', 'home/{}/.profile'.format(name) ]:
from_root = os.path.join(containerDir, path)
# if dir is not yet existing let's forget it for now
if not os.path.isdir(os.path.dirname(from_root)): continue
found = False
try:
with open(from_root) as f:
contents = f.readlines()
for content in contents:
if content == enforced_line:
found = True
except IOError:
pass
if not found:
with open(from_root, "a") as user_profile:
user_profile.write(enforced_line)
# in case we create the slice's .profile when writing
if from_root.find("/home") >= 0:
command = ['chown', '{}:slices'.format(name), from_root]
logger.log_call(command)
# Lookup for xid and create template after the user is created so we
# can get the correct xid based on the name of the slice
xid = bwlimit.get_xid(name)
# Template for libvirt sliver configuration
template_filename_sliceimage = os.path.join(Sliver_LXC.REF_IMG_BASE_DIR, 'lxc_template.xml')
if os.path.isfile(template_filename_sliceimage):
logger.verbose("Using XML template {}".format(template_filename_sliceimage))
template_filename = template_filename_sliceimage
else:
logger.log("Cannot find XML template {}".format(template_filename_sliceimage))
return
interfaces = Sliver_Libvirt.get_interfaces_xml(rec)
try:
with open(template_filename) as f:
template = Template(f.read())
xml = template.substitute(name=name, xid=xid, interfaces=interfaces, arch=arch)
except IOError:
logger.log('Failed to parse or use XML template file {}'.format(template_filename))
return
# Lookup for the sliver before actually
# defining it, just in case it was already defined.
try:
dom = conn.lookupByName(name)
except:
dom = conn.defineXML(xml)
logger.verbose('lxc_create: {} -> {}'.format(name, Sliver_Libvirt.dom_details(dom)))
@staticmethod
def destroy(name):
# umount .ssh directory - only if mounted
Account.umount_ssh_dir(name)
logger.verbose ('sliver_lxc: {} destroy'.format(name))
conn = Sliver_Libvirt.getConnection(Sliver_LXC.TYPE)
containerDir = os.path.join(Sliver_LXC.CON_BASE_DIR, name)
try:
# Destroy libvirt domain
dom = conn.lookupByName(name)
except:
logger.verbose('sliver_lxc.destroy: Domain {} does not exist!'.format(name))
return
# Slivers with vsys running will fail the subvolume delete
# removeSliverFromVsys return True if it stops vsys, telling us to start it again later
vsys_stopped = removeSliverFromVsys (name)
try:
logger.log("sliver_lxc.destroy: destroying domain {}".format(name))
dom.destroy()
except:
logger.verbose("sliver_lxc.destroy: Domain {} not running... continuing.".format(name))
try:
logger.log("sliver_lxc.destroy: undefining domain {}".format(name))
dom.undefine()
except:
logger.verbose('sliver_lxc.destroy: Domain {} is not defined... continuing.'.format(name))
# Remove user after destroy domain to force logout
command = ['/usr/sbin/userdel', '-f', '-r', name]
logger.log_call(command)
# Remove rootfs of destroyed domain
command = ['/usr/bin/rm', '-rf', containerDir]
logger.log_call(command, timeout=BTRFS_TIMEOUT)
# ???
logger.log("-TMP-ls-l {}".format(name))
command = ['ls', '-lR', containerDir]
logger.log_call(command)
logger.log("-TMP-vsys-status")
command = ['/usr/bin/systemctl', 'status', 'vsys']
logger.log_call(command)
# ???
# Remove rootfs of destroyed domain
command = ['btrfs', 'subvolume', 'delete', containerDir]
logger.log_call(command, timeout=BTRFS_TIMEOUT)
# For some reason I am seeing this :
#log_call: running command btrfs subvolume delete /vservers/inri_sl1
#log_call: ERROR: cannot delete '/vservers/inri_sl1' - Device or resource busy
#log_call: Delete subvolume '/vservers/inri_sl1'
#log_call:end command (btrfs subvolume delete /vservers/inri_sl1) returned with code 1
#
# something must have an open handle to a file in there, but I can't find out what it is
# the following code aims at gathering data on what is going on in the system at this point in time
# note that some time later (typically when the sliver gets re-created) the same
# attempt at deleting the subvolume does work
# also lsof never shows anything relevant; this is painful..
if not os.path.exists(containerDir):
logger.log('sliver_lxc.destroy: {} cleanly destroyed.'.format(name))
else:
# we're in /
#logger.log("-TMP-cwd {} : {}".format(name, os.getcwd()))
# also lsof never shows anything relevant; this is painful..
#logger.log("-TMP-lsof {}".format(name))
#command = ['lsof']
#logger.log_call(command)
logger.log("-TMP-ls-l {}".format(name))
command = ['ls', '-lR', containerDir]
logger.log_call(command)
logger.log("-TMP-lsof")
command = ['lsof']
logger.log_call(command)
if os.path.exists(containerDir):
logger.log('sliver_lxc.destroy: ERROR could not cleanly destroy {} - giving up'.format(name))
if vsys_stopped:
vsysStartService()
| bsd-3-clause |
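The `.profile` handling in `Sliver_LXC.create()` appends the `source /etc/planetlab.profile` line only when an identical line is not already present, so repeated sliver creations stay idempotent. The same check-then-append pattern can be sketched standalone (using a scratch temp file rather than the real container paths):

```python
import os
import tempfile

enforced_line = "[ -f /etc/planetlab.profile ] && source /etc/planetlab.profile\n"

def ensure_line(path, line):
    """Append `line` to `path` unless an identical line is already present."""
    try:
        with open(path) as f:
            if line in f.readlines():
                return False  # already enforced, nothing to do
    except IOError:
        pass  # file does not exist yet; the append below creates it
    with open(path, "a") as f:
        f.write(line)
    return True

# Demonstrate idempotence on a scratch file
path = os.path.join(tempfile.mkdtemp(), ".profile")
first = ensure_line(path, enforced_line)   # file missing: line gets written
second = ensure_line(path, enforced_line)  # line present: no-op
print(first, second)  # True False
```

Comparing whole lines (including the trailing newline) is what makes the check exact; a substring match could be fooled by a commented-out copy of the line.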