| repo_name (string, 5–100 chars) | ref (string, 12–67 chars) | path (string, 4–244 chars) | copies (string, 1–8 chars) | content (string, 0–1.05M chars, nullable ⌀) |
|---|---|---|---|---|
rossburton/yocto-autobuilder
|
refs/heads/ross
|
lib/python2.7/site-packages/SQLAlchemy-0.7.0-py2.7-linux-x86_64.egg/sqlalchemy/ext/compiler.py
|
8
|
# ext/compiler.py
# Copyright (C) 2005-2011 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Provides an API for creation of custom ClauseElements and compilers.
Synopsis
========
Usage involves the creation of one or more :class:`~sqlalchemy.sql.expression.ClauseElement`
subclasses and one or more callables defining their compilation::
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause
class MyColumn(ColumnClause):
pass
@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
return "[%s]" % element.name
Above, ``MyColumn`` extends :class:`~sqlalchemy.sql.expression.ColumnClause`,
the base expression element for named column objects. The ``compiles``
decorator registers itself with the ``MyColumn`` class so that it is invoked
when the object is compiled to a string::
from sqlalchemy import select
s = select([MyColumn('x'), MyColumn('y')])
print str(s)
Produces::
SELECT [x], [y]
Dialect-specific compilation rules
==================================
Compilers can also be made dialect-specific. The appropriate compiler will be
invoked for the dialect in use::
from sqlalchemy.schema import DDLElement
class AlterColumn(DDLElement):
def __init__(self, column, cmd):
self.column = column
self.cmd = cmd
@compiles(AlterColumn)
def visit_alter_column(element, compiler, **kw):
return "ALTER COLUMN %s ..." % element.column.name
@compiles(AlterColumn, 'postgresql')
def visit_alter_column(element, compiler, **kw):
return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name, element.column.name)
The second ``visit_alter_column`` will be invoked when the ``postgresql`` dialect is used.
Compiling sub-elements of a custom expression construct
=======================================================
The ``compiler`` argument is the :class:`~sqlalchemy.engine.base.Compiled`
object in use. This object can be inspected for any information about the
in-progress compilation, including ``compiler.dialect``,
``compiler.statement`` etc. The :class:`~sqlalchemy.sql.compiler.SQLCompiler`
and :class:`~sqlalchemy.sql.compiler.DDLCompiler` both include a ``process()``
method which can be used for compilation of embedded attributes::
from sqlalchemy.sql.expression import Executable, ClauseElement
class InsertFromSelect(Executable, ClauseElement):
def __init__(self, table, select):
self.table = table
self.select = select
@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
return "INSERT INTO %s (%s)" % (
compiler.process(element.table, asfrom=True),
compiler.process(element.select)
)
insert = InsertFromSelect(t1, select([t1]).where(t1.c.x>5))
print insert
Produces::
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z FROM mytable WHERE mytable.x > :x_1)"
Cross Compiling between SQL and DDL compilers
---------------------------------------------
SQL and DDL constructs are each compiled using different base compilers - ``SQLCompiler``
and ``DDLCompiler``. A common need is to access the compilation rules of SQL expressions
from within a DDL expression. The ``DDLCompiler`` includes an accessor ``sql_compiler`` for this purpose, as in the example below, where we generate a CHECK
constraint that embeds a SQL expression::
@compiles(MyConstraint)
def compile_my_constraint(constraint, ddlcompiler, **kw):
return "CONSTRAINT %s CHECK (%s)" % (
constraint.name,
ddlcompiler.sql_compiler.process(constraint.expression)
)
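``MyConstraint`` above is assumed to be a user-defined construct; a minimal
sketch (not part of the original documentation) might look like::
    from sqlalchemy.schema import DDLElement
    class MyConstraint(DDLElement):
        def __init__(self, name, expression):
            self.name = name
            self.expression = expression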
Changing the default compilation of existing constructs
=======================================================
The compiler extension applies just as well to existing constructs. When overriding
the compilation of a built-in SQL construct, the @compiles decorator is invoked upon
the appropriate class (be sure to use the class, i.e. ``Insert`` or ``Select``, instead of the creation function such as ``insert()`` or ``select()``).
Within the new compilation function, to get at the "original" compilation routine,
use the appropriate visit_XXX method - this is because compiler.process() will call upon the
overriding routine and cause an endless loop. For example, to add a "prefix" to all insert statements::
from sqlalchemy.sql.expression import Insert
@compiles(Insert)
def prefix_inserts(insert, compiler, **kw):
return compiler.visit_insert(insert.prefix_with("some prefix"), **kw)
The above compiler will prefix all INSERT statements with "some prefix" when compiled.
.. _type_compilation_extension:
Changing Compilation of Types
=============================
The compiler extension works for types, too, such as below where we implement the MS-SQL specific 'max' keyword for ``String``/``VARCHAR``::
@compiles(String, 'mssql')
@compiles(VARCHAR, 'mssql')
def compile_varchar(element, compiler, **kw):
if element.length == 'max':
return "VARCHAR('max')"
else:
return compiler.visit_VARCHAR(element, **kw)
foo = Table('foo', metadata,
Column('data', VARCHAR('max'))
)
Subclassing Guidelines
======================
A big part of using the compiler extension is subclassing SQLAlchemy
expression constructs. To make this easier, the expression and
schema packages feature a set of "bases" intended for common tasks.
A synopsis is as follows:
* :class:`~sqlalchemy.sql.expression.ClauseElement` - This is the root
expression class. Any SQL expression can be derived from this base, and is
probably the best choice for longer constructs such as specialized INSERT
statements.
* :class:`~sqlalchemy.sql.expression.ColumnElement` - The root of all
"column-like" elements. Anything that you'd place in the "columns" clause of
a SELECT statement (as well as order by and group by) can derive from this -
the object will automatically have Python "comparison" behavior.
:class:`~sqlalchemy.sql.expression.ColumnElement` classes want to have a
``type`` member which is the expression's return type. This can be established
at the instance level in the constructor, or at the class level if it's
generally constant::
class timestamp(ColumnElement):
type = TIMESTAMP()
* :class:`~sqlalchemy.sql.expression.FunctionElement` - This is a hybrid of a
``ColumnElement`` and a "from clause" like object, and represents a SQL
function or stored procedure type of call. Since most databases support
statements along the lines of "SELECT FROM <some function>",
``FunctionElement`` adds in the ability to be used in the FROM clause of a
``select()`` construct::
from sqlalchemy.sql.expression import FunctionElement
class coalesce(FunctionElement):
name = 'coalesce'
@compiles(coalesce)
def compile(element, compiler, **kw):
return "coalesce(%s)" % compiler.process(element.clauses)
@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
if len(element.clauses) > 2:
raise TypeError("coalesce only supports two arguments on Oracle")
return "nvl(%s)" % compiler.process(element.clauses)
* :class:`~sqlalchemy.schema.DDLElement` - The root of all DDL expressions,
like CREATE TABLE, ALTER TABLE, etc. Compilation of ``DDLElement``
subclasses is issued by a ``DDLCompiler`` instead of a ``SQLCompiler``.
``DDLElement`` also features ``Table`` and ``MetaData`` event hooks via the
``execute_at()`` method, allowing the construct to be invoked during CREATE
TABLE and DROP TABLE sequences.
* :class:`~sqlalchemy.sql.expression.Executable` - This is a mixin which should be
used with any expression class that represents a "standalone" SQL statement that
can be passed directly to an ``execute()`` method. It is already implicit
within ``DDLElement`` and ``FunctionElement``.
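As an illustrative sketch (mirroring the ``InsertFromSelect`` example above,
and assuming a PostgreSQL-style REINDEX command), a standalone statement
using this mixin might look like::
    from sqlalchemy.sql.expression import Executable, ClauseElement
    class ReindexTable(Executable, ClauseElement):
        def __init__(self, table):
            self.table = table
    @compiles(ReindexTable)
    def visit_reindex_table(element, compiler, **kw):
        return "REINDEX TABLE %s" % compiler.process(element.table, asfrom=True)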
Further Examples
================
"UTC timestamp" function
-------------------------
A function that works like "CURRENT_TIMESTAMP" except that it applies the appropriate conversions
so that the time is in UTC. Timestamps are best stored in relational databases
as UTC, without time zones: UTC so that your database doesn't think time has gone
backwards in the hour when daylight saving time ends, and without time zones because time zones
are like character encodings - they're best applied only at the endpoints of an
application (i.e. convert to UTC upon user input, re-apply the desired time zone upon display).
For Postgresql and Microsoft SQL Server::
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import DateTime
class utcnow(expression.FunctionElement):
type = DateTime()
@compiles(utcnow, 'postgresql')
def pg_utcnow(element, compiler, **kw):
return "TIMEZONE('utc', CURRENT_TIMESTAMP)"
@compiles(utcnow, 'mssql')
def ms_utcnow(element, compiler, **kw):
return "GETUTCDATE()"
Example usage::
from sqlalchemy import (
Table, Column, Integer, String, DateTime, MetaData
)
metadata = MetaData()
event = Table("event", metadata,
Column("id", Integer, primary_key=True),
Column("description", String(50), nullable=False),
Column("timestamp", DateTime, server_default=utcnow())
)
"GREATEST" function
-------------------
The "GREATEST" function is given any number of arguments and returns the one that is
of the highest value - it's equivalent to Python's ``max`` function. A SQL
standard version versus a CASE based version which only accommodates two
arguments::
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import Numeric
class greatest(expression.FunctionElement):
type = Numeric()
name = 'greatest'
@compiles(greatest)
def default_greatest(element, compiler, **kw):
return compiler.visit_function(element)
@compiles(greatest, 'sqlite')
@compiles(greatest, 'mssql')
@compiles(greatest, 'oracle')
def case_greatest(element, compiler, **kw):
arg1, arg2 = list(element.clauses)
return "CASE WHEN %s > %s THEN %s ELSE %s END" % (
compiler.process(arg1),
compiler.process(arg2),
compiler.process(arg1),
compiler.process(arg2),
)
Example usage::
Session.query(Account).\\
filter(
greatest(
Account.checking_balance,
Account.savings_balance) > 10000
)
"false" expression
------------------
Render a "false" constant expression, rendering as "0" on platforms that don't have a "false" constant::
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
class sql_false(expression.ColumnElement):
pass
@compiles(sql_false)
def default_false(element, compiler, **kw):
return "false"
@compiles(sql_false, 'mssql')
@compiles(sql_false, 'mysql')
@compiles(sql_false, 'oracle')
def int_false(element, compiler, **kw):
return "0"
Example usage::
from sqlalchemy import select, union_all
exp = union_all(
select([users.c.name, sql_false().label("enrolled")]),
select([customers.c.name, customers.c.enrolled])
)
"""
def compiles(class_, *specs):
def decorate(fn):
existing = class_.__dict__.get('_compiler_dispatcher', None)
existing_dispatch = class_.__dict__.get('_compiler_dispatch')
if not existing:
existing = _dispatcher()
if existing_dispatch:
existing.specs['default'] = existing_dispatch
# TODO: why is the lambda needed ?
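# (Likely answer: a _dispatcher instance assigned directly as a class
# attribute is not a function, so the descriptor protocol would not bind it,
# and the element would then not be passed as the first argument. Wrapping it
# in a lambda restores normal method binding.)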
setattr(class_, '_compiler_dispatch', lambda *arg, **kw: existing(*arg, **kw))
setattr(class_, '_compiler_dispatcher', existing)
if specs:
for s in specs:
existing.specs[s] = fn
else:
existing.specs['default'] = fn
return fn
return decorate
class _dispatcher(object):
def __init__(self):
self.specs = {}
def __call__(self, element, compiler, **kw):
# TODO: yes, this could also switch off of DBAPI in use.
fn = self.specs.get(compiler.dialect.name, None)
if not fn:
fn = self.specs['default']
return fn(element, compiler, **kw)
|
acsone/stock-logistics-warehouse
|
refs/heads/8.0
|
stock_optional_valuation/__init__.py
|
23
|
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from . import stock
|
jonathanslenders/python-prompt-toolkit
|
refs/heads/master
|
prompt_toolkit/document.py
|
1
|
"""
The `Document` that implements all the text operations/querying.
"""
import bisect
import re
import string
import weakref
from typing import Callable, Dict, Iterable, List, Optional, Pattern, Tuple, cast
from .clipboard import ClipboardData
from .filters import vi_mode
from .selection import PasteMode, SelectionState, SelectionType
__all__ = [
"Document",
]
# Regex for finding "words" in documents. (We consider a group of alnum
# characters a word, but also a group of special characters a word, as long as
# it doesn't contain a space.)
# (This is a 'word' in Vi.)
_FIND_WORD_RE = re.compile(r"([a-zA-Z0-9_]+|[^a-zA-Z0-9_\s]+)")
_FIND_CURRENT_WORD_RE = re.compile(r"^([a-zA-Z0-9_]+|[^a-zA-Z0-9_\s]+)")
_FIND_CURRENT_WORD_INCLUDE_TRAILING_WHITESPACE_RE = re.compile(
r"^(([a-zA-Z0-9_]+|[^a-zA-Z0-9_\s]+)\s*)"
)
# Regex for finding "WORDS" in documents.
# (This is a 'WORD in Vi.)
_FIND_BIG_WORD_RE = re.compile(r"([^\s]+)")
_FIND_CURRENT_BIG_WORD_RE = re.compile(r"^([^\s]+)")
_FIND_CURRENT_BIG_WORD_INCLUDE_TRAILING_WHITESPACE_RE = re.compile(r"^([^\s]+\s*)")
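# For example, _FIND_WORD_RE.findall("foo_bar(baz)") gives
# ['foo_bar', '(', 'baz', ')']: runs of alphanumerics/underscores and runs of
# other non-whitespace characters each count as a separate word.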
# Share the Document._cache between all Document instances.
# (Document instances are considered immutable. That means that if another
# `Document` is constructed with the same text, it should have the same
# `_DocumentCache`.)
_text_to_document_cache: Dict[str, "_DocumentCache"] = cast(
Dict[str, "_DocumentCache"],
weakref.WeakValueDictionary(), # Maps document.text to DocumentCache instance.
)
class _ImmutableLineList(list):
"""
Some protection for our 'lines' list, which is assumed to be immutable in the cache.
(Useful for detecting obvious bugs.)
"""
def _error(self, *a: object, **kw: object) -> None:
raise NotImplementedError("Attempt to modify an immutable list.")
__setitem__ = _error # type: ignore
append = _error
clear = _error
extend = _error
insert = _error
pop = _error
remove = _error
reverse = _error
sort = _error # type: ignore
class _DocumentCache:
def __init__(self) -> None:
#: List of lines for the Document text.
self.lines: Optional[_ImmutableLineList] = None
#: List of index positions, pointing to the start of all the lines.
self.line_indexes: Optional[List[int]] = None
class Document:
"""
This is an immutable class around the text and cursor position, and contains
methods for querying this data, e.g. to give the text before the cursor.
This class is usually instantiated by a :class:`~prompt_toolkit.buffer.Buffer`
object, and accessed as the `document` property of that class.
:param text: string
:param cursor_position: int
:param selection: :class:`.SelectionState`
"""
__slots__ = ("_text", "_cursor_position", "_selection", "_cache")
def __init__(
self,
text: str = "",
cursor_position: Optional[int] = None,
selection: Optional[SelectionState] = None,
) -> None:
# Check cursor position. It can also be right after the end. (Where we
# insert text.)
assert cursor_position is None or cursor_position <= len(text), AssertionError(
"cursor_position=%r, len_text=%r" % (cursor_position, len(text))
)
# By default, if no cursor position was given, make sure to put the
# cursor position at the end of the document. This is what makes
# sense in most places.
if cursor_position is None:
cursor_position = len(text)
# Keep these attributes private. A `Document` really has to be
# considered to be immutable, because otherwise the caching will break
# things. Because of that, we wrap these into read-only properties.
self._text = text
self._cursor_position = cursor_position
self._selection = selection
# Cache for lines/indexes. (Shared with other Document instances that
# contain the same text.)
try:
self._cache = _text_to_document_cache[self.text]
except KeyError:
self._cache = _DocumentCache()
_text_to_document_cache[self.text] = self._cache
# XX: For some reason, above, we can't use 'WeakValueDictionary.setdefault'.
# This fails in Pypy3. `self._cache` becomes None, because that's what
# 'setdefault' returns.
# self._cache = _text_to_document_cache.setdefault(self.text, _DocumentCache())
# assert self._cache
def __repr__(self) -> str:
return "%s(%r, %r)" % (self.__class__.__name__, self.text, self.cursor_position)
def __eq__(self, other: object) -> bool:
if not isinstance(other, Document):
return False
return (
self.text == other.text
and self.cursor_position == other.cursor_position
and self.selection == other.selection
)
@property
def text(self) -> str:
" The document text. "
return self._text
@property
def cursor_position(self) -> int:
" The document cursor position. "
return self._cursor_position
@property
def selection(self) -> Optional[SelectionState]:
" :class:`.SelectionState` object. "
return self._selection
@property
def current_char(self) -> str:
""" Return character under cursor or an empty string. """
return self._get_char_relative_to_cursor(0) or ""
@property
def char_before_cursor(self) -> str:
""" Return character before the cursor or an empty string. """
return self._get_char_relative_to_cursor(-1) or ""
@property
def text_before_cursor(self) -> str:
return self.text[: self.cursor_position]
@property
def text_after_cursor(self) -> str:
return self.text[self.cursor_position :]
@property
def current_line_before_cursor(self) -> str:
""" Text from the start of the line until the cursor. """
_, _, text = self.text_before_cursor.rpartition("\n")
return text
@property
def current_line_after_cursor(self) -> str:
""" Text from the cursor until the end of the line. """
text, _, _ = self.text_after_cursor.partition("\n")
return text
@property
def lines(self) -> List[str]:
"""
Array of all the lines.
"""
# Cache, because this one is reused very often.
if self._cache.lines is None:
self._cache.lines = _ImmutableLineList(self.text.split("\n"))
return self._cache.lines
@property
def _line_start_indexes(self) -> List[int]:
"""
Array pointing to the start indexes of all the lines.
"""
# Cache, because this is often reused. (If it is used, it's often used
# many times. And this has to be fast for editing big documents!)
if self._cache.line_indexes is None:
# Create list of line lengths.
line_lengths = map(len, self.lines)
# Calculate cumulative sums.
indexes = [0]
append = indexes.append
pos = 0
for line_length in line_lengths:
pos += line_length + 1
append(pos)
# Remove the last item. (This is not a new line.)
if len(indexes) > 1:
indexes.pop()
self._cache.line_indexes = indexes
return self._cache.line_indexes
@property
def lines_from_current(self) -> List[str]:
"""
Array of the lines starting from the current line, until the last line.
"""
return self.lines[self.cursor_position_row :]
@property
def line_count(self) -> int:
r"""Return the number of lines in this document. If the document ends
with a trailing \n, that counts as the beginning of a new line."""
return len(self.lines)
@property
def current_line(self) -> str:
"""Return the text on the line where the cursor is. (when the input
consists of just one line, it equals `text`."""
return self.current_line_before_cursor + self.current_line_after_cursor
@property
def leading_whitespace_in_current_line(self) -> str:
""" The leading whitespace in the left margin of the current line. """
current_line = self.current_line
length = len(current_line) - len(current_line.lstrip())
return current_line[:length]
def _get_char_relative_to_cursor(self, offset: int = 0) -> str:
"""
Return character relative to cursor position, or empty string
"""
try:
return self.text[self.cursor_position + offset]
except IndexError:
return ""
@property
def on_first_line(self) -> bool:
"""
True when we are at the first line.
"""
return self.cursor_position_row == 0
@property
def on_last_line(self) -> bool:
"""
True when we are at the last line.
"""
return self.cursor_position_row == self.line_count - 1
@property
def cursor_position_row(self) -> int:
"""
Current row. (0-based.)
"""
row, _ = self._find_line_start_index(self.cursor_position)
return row
@property
def cursor_position_col(self) -> int:
"""
Current column. (0-based.)
"""
# (Don't use self.text_before_cursor to calculate this. Creating
# substrings and doing rsplit is too expensive for getting the cursor
# position.)
_, line_start_index = self._find_line_start_index(self.cursor_position)
return self.cursor_position - line_start_index
def _find_line_start_index(self, index: int) -> Tuple[int, int]:
"""
For the index of a character at a certain line, calculate the index of
the first character on that line.
Return (row, index) tuple.
"""
indexes = self._line_start_indexes
pos = bisect.bisect_right(indexes, index) - 1
return pos, indexes[pos]
def translate_index_to_position(self, index: int) -> Tuple[int, int]:
"""
Given an index for the text, return the corresponding (row, col) tuple.
(0-based. Returns (0, 0) for index=0.)
"""
# Find start of this line.
row, row_index = self._find_line_start_index(index)
col = index - row_index
return row, col
def translate_row_col_to_index(self, row: int, col: int) -> int:
"""
Given a (row, col) tuple, return the corresponding index.
(Row and col params are 0-based.)
Negative row/col values are turned into zero.
"""
try:
result = self._line_start_indexes[row]
line = self.lines[row]
except IndexError:
if row < 0:
result = self._line_start_indexes[0]
line = self.lines[0]
else:
result = self._line_start_indexes[-1]
line = self.lines[-1]
result += max(0, min(col, len(line)))
# Keep in range. (len(self.text) is included, because the cursor can be
# right after the end of the text as well.)
result = max(0, min(result, len(self.text)))
return result
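# Illustrative example: for the text "ab\ncd", _line_start_indexes is [0, 3],
# so translate_index_to_position(4) == (1, 1) and, conversely,
# translate_row_col_to_index(1, 1) == 4.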
@property
def is_cursor_at_the_end(self) -> bool:
""" True when the cursor is at the end of the text. """
return self.cursor_position == len(self.text)
@property
def is_cursor_at_the_end_of_line(self) -> bool:
""" True when the cursor is at the end of this line. """
return self.current_char in ("\n", "")
def has_match_at_current_position(self, sub: str) -> bool:
"""
`True` when this substring is found at the cursor position.
"""
return self.text.find(sub, self.cursor_position) == self.cursor_position
def find(
self,
sub: str,
in_current_line: bool = False,
include_current_position: bool = False,
ignore_case: bool = False,
count: int = 1,
) -> Optional[int]:
"""
Find `text` after the cursor, return position relative to the cursor
position. Return `None` if nothing was found.
:param count: Find the n-th occurrence.
"""
assert isinstance(ignore_case, bool)
if in_current_line:
text = self.current_line_after_cursor
else:
text = self.text_after_cursor
if not include_current_position:
if len(text) == 0:
return None # (Otherwise, we always get a match for the empty string.)
else:
text = text[1:]
flags = re.IGNORECASE if ignore_case else 0
iterator = re.finditer(re.escape(sub), text, flags)
try:
for i, match in enumerate(iterator):
if i + 1 == count:
if include_current_position:
return match.start(0)
else:
return match.start(0) + 1
except StopIteration:
pass
return None
def find_all(self, sub: str, ignore_case: bool = False) -> List[int]:
"""
Find all occurrences of the substring. Return a list of absolute
positions in the document.
"""
flags = re.IGNORECASE if ignore_case else 0
return [a.start() for a in re.finditer(re.escape(sub), self.text, flags)]
def find_backwards(
self,
sub: str,
in_current_line: bool = False,
ignore_case: bool = False,
count: int = 1,
) -> Optional[int]:
"""
Find `text` before the cursor, return position relative to the cursor
position. Return `None` if nothing was found.
:param count: Find the n-th occurrence.
"""
if in_current_line:
before_cursor = self.current_line_before_cursor[::-1]
else:
before_cursor = self.text_before_cursor[::-1]
flags = re.IGNORECASE if ignore_case else 0
iterator = re.finditer(re.escape(sub[::-1]), before_cursor, flags)
try:
for i, match in enumerate(iterator):
if i + 1 == count:
return -match.start(0) - len(sub)
except StopIteration:
pass
return None
def get_word_before_cursor(
self, WORD: bool = False, pattern: Optional[Pattern[str]] = None
) -> str:
"""
Give the word before the cursor.
If we have whitespace before the cursor this returns an empty string.
:param pattern: (None or compiled regex). When given, use this regex
pattern.
"""
if self._is_word_before_cursor_complete(WORD=WORD, pattern=pattern):
# Space before the cursor or no text before cursor.
return ""
text_before_cursor = self.text_before_cursor
start = self.find_start_of_previous_word(WORD=WORD, pattern=pattern) or 0
return text_before_cursor[len(text_before_cursor) + start :]
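# Illustrative example: for Document("hello wor", cursor_position=9),
# get_word_before_cursor() returns "wor"; when the character before the
# cursor is whitespace, it returns "".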
def _is_word_before_cursor_complete(
self, WORD: bool = False, pattern: Optional[Pattern[str]] = None
) -> bool:
if pattern:
return self.find_start_of_previous_word(WORD=WORD, pattern=pattern) is None
else:
return (
self.text_before_cursor == "" or self.text_before_cursor[-1:].isspace()
)
def find_start_of_previous_word(
self, count: int = 1, WORD: bool = False, pattern: Optional[Pattern[str]] = None
) -> Optional[int]:
"""
Return an index relative to the cursor position pointing to the start
of the previous word. Return `None` if nothing was found.
:param pattern: (None or compiled regex). When given, use this regex
pattern.
"""
assert not (WORD and pattern)
# Reverse the text before the cursor, in order to do an efficient
# backwards search.
text_before_cursor = self.text_before_cursor[::-1]
if pattern:
regex = pattern
elif WORD:
regex = _FIND_BIG_WORD_RE
else:
regex = _FIND_WORD_RE
iterator = regex.finditer(text_before_cursor)
try:
for i, match in enumerate(iterator):
if i + 1 == count:
return -match.end(0)
except StopIteration:
pass
return None
def find_boundaries_of_current_word(
self,
WORD: bool = False,
include_leading_whitespace: bool = False,
include_trailing_whitespace: bool = False,
) -> Tuple[int, int]:
"""
Return the relative boundaries (startpos, endpos) of the current word under the
cursor. (This is at the current line, because line boundaries obviously
don't belong to any word.)
If the cursor is not on a word, this returns (0, 0).
"""
text_before_cursor = self.current_line_before_cursor[::-1]
text_after_cursor = self.current_line_after_cursor
def get_regex(include_whitespace: bool) -> Pattern[str]:
return {
(False, False): _FIND_CURRENT_WORD_RE,
(False, True): _FIND_CURRENT_WORD_INCLUDE_TRAILING_WHITESPACE_RE,
(True, False): _FIND_CURRENT_BIG_WORD_RE,
(True, True): _FIND_CURRENT_BIG_WORD_INCLUDE_TRAILING_WHITESPACE_RE,
}[(WORD, include_whitespace)]
match_before = get_regex(include_leading_whitespace).search(text_before_cursor)
match_after = get_regex(include_trailing_whitespace).search(text_after_cursor)
# When there is a match before and after, and we're not looking for
# WORDs, make sure that both the part before and after the cursor are
# either in the [a-zA-Z_] alphabet or not. Otherwise, drop the part
# before the cursor.
if not WORD and match_before and match_after:
c1 = self.text[self.cursor_position - 1]
c2 = self.text[self.cursor_position]
alphabet = string.ascii_letters + "0123456789_"
if (c1 in alphabet) != (c2 in alphabet):
match_before = None
return (
-match_before.end(1) if match_before else 0,
match_after.end(1) if match_after else 0,
)
def get_word_under_cursor(self, WORD: bool = False) -> str:
"""
Return the word, currently below the cursor.
This returns an empty string when the cursor is on a whitespace region.
"""
start, end = self.find_boundaries_of_current_word(WORD=WORD)
return self.text[self.cursor_position + start : self.cursor_position + end]
def find_next_word_beginning(
self, count: int = 1, WORD: bool = False
) -> Optional[int]:
"""
Return an index relative to the cursor position pointing to the start
of the next word. Return `None` if nothing was found.
"""
if count < 0:
return self.find_previous_word_beginning(count=-count, WORD=WORD)
regex = _FIND_BIG_WORD_RE if WORD else _FIND_WORD_RE
iterator = regex.finditer(self.text_after_cursor)
try:
for i, match in enumerate(iterator):
# Take first match, unless it's the word on which we're right now.
if i == 0 and match.start(1) == 0:
count += 1
if i + 1 == count:
return match.start(1)
except StopIteration:
pass
return None
def find_next_word_ending(
self, include_current_position: bool = False, count: int = 1, WORD: bool = False
) -> Optional[int]:
"""
Return an index relative to the cursor position pointing to the end
of the next word. Return `None` if nothing was found.
"""
if count < 0:
return self.find_previous_word_ending(count=-count, WORD=WORD)
if include_current_position:
text = self.text_after_cursor
else:
text = self.text_after_cursor[1:]
regex = _FIND_BIG_WORD_RE if WORD else _FIND_WORD_RE
iterable = regex.finditer(text)
try:
for i, match in enumerate(iterable):
if i + 1 == count:
value = match.end(1)
if include_current_position:
return value
else:
return value + 1
except StopIteration:
pass
return None
def find_previous_word_beginning(
self, count: int = 1, WORD: bool = False
) -> Optional[int]:
"""
Return an index relative to the cursor position pointing to the start
of the previous word. Return `None` if nothing was found.
"""
if count < 0:
return self.find_next_word_beginning(count=-count, WORD=WORD)
regex = _FIND_BIG_WORD_RE if WORD else _FIND_WORD_RE
iterator = regex.finditer(self.text_before_cursor[::-1])
try:
for i, match in enumerate(iterator):
if i + 1 == count:
return -match.end(1)
except StopIteration:
pass
return None
def find_previous_word_ending(
self, count: int = 1, WORD: bool = False
) -> Optional[int]:
"""
Return an index relative to the cursor position pointing to the end
of the previous word. Return `None` if nothing was found.
"""
if count < 0:
return self.find_next_word_ending(count=-count, WORD=WORD)
text_before_cursor = self.text_after_cursor[:1] + self.text_before_cursor[::-1]
regex = _FIND_BIG_WORD_RE if WORD else _FIND_WORD_RE
iterator = regex.finditer(text_before_cursor)
try:
for i, match in enumerate(iterator):
# Take first match, unless it's the word on which we're right now.
if i == 0 and match.start(1) == 0:
count += 1
if i + 1 == count:
return -match.start(1) + 1
except StopIteration:
pass
return None
def find_next_matching_line(
self, match_func: Callable[[str], bool], count: int = 1
) -> Optional[int]:
"""
Look downwards for lines matching `match_func` (typically empty lines).
Return the line index, relative to the current line.
"""
result = None
for index, line in enumerate(self.lines[self.cursor_position_row + 1 :]):
if match_func(line):
result = 1 + index
count -= 1
if count == 0:
break
return result
def find_previous_matching_line(
self, match_func: Callable[[str], bool], count: int = 1
) -> Optional[int]:
"""
Look upwards for lines matching `match_func` (typically empty lines).
Return the line index, relative to the current line.
"""
result = None
for index, line in enumerate(self.lines[: self.cursor_position_row][::-1]):
if match_func(line):
result = -1 - index
count -= 1
if count == 0:
break
return result
def get_cursor_left_position(self, count: int = 1) -> int:
"""
Relative position for cursor left.
"""
if count < 0:
return self.get_cursor_right_position(-count)
return -min(self.cursor_position_col, count)
def get_cursor_right_position(self, count: int = 1) -> int:
"""
Relative position for cursor_right.
"""
if count < 0:
return self.get_cursor_left_position(-count)
return min(count, len(self.current_line_after_cursor))
def get_cursor_up_position(
self, count: int = 1, preferred_column: Optional[int] = None
) -> int:
"""
Return the relative cursor position (character index) where we would be if the
user pressed the arrow-up button.
:param preferred_column: When given, go to this column instead of
staying at the current column.
"""
assert count >= 1
column = (
self.cursor_position_col if preferred_column is None else preferred_column
)
return (
self.translate_row_col_to_index(
max(0, self.cursor_position_row - count), column
)
- self.cursor_position
)
def get_cursor_down_position(
self, count: int = 1, preferred_column: Optional[int] = None
) -> int:
"""
Return the relative cursor position (character index) where we would be if the
user pressed the arrow-down button.
:param preferred_column: When given, go to this column instead of
staying at the current column.
"""
assert count >= 1
column = (
self.cursor_position_col if preferred_column is None else preferred_column
)
return (
self.translate_row_col_to_index(self.cursor_position_row + count, column)
- self.cursor_position
)
def find_enclosing_bracket_right(
self, left_ch: str, right_ch: str, end_pos: Optional[int] = None
) -> Optional[int]:
"""
Find the right bracket enclosing current position. Return the relative
position to the cursor position.
When `end_pos` is given, don't look past the position.
"""
if self.current_char == right_ch:
return 0
if end_pos is None:
end_pos = len(self.text)
else:
end_pos = min(len(self.text), end_pos)
stack = 1
# Look forward.
for i in range(self.cursor_position + 1, end_pos):
c = self.text[i]
if c == left_ch:
stack += 1
elif c == right_ch:
stack -= 1
if stack == 0:
return i - self.cursor_position
return None
def find_enclosing_bracket_left(
self, left_ch: str, right_ch: str, start_pos: Optional[int] = None
) -> Optional[int]:
"""
Find the left bracket enclosing current position. Return the relative
position to the cursor position.
When `start_pos` is given, don't look past the position.
"""
if self.current_char == left_ch:
return 0
if start_pos is None:
start_pos = 0
else:
start_pos = max(0, start_pos)
stack = 1
# Look backward.
for i in range(self.cursor_position - 1, start_pos - 1, -1):
c = self.text[i]
if c == right_ch:
stack += 1
elif c == left_ch:
stack -= 1
if stack == 0:
return i - self.cursor_position
return None
def find_matching_bracket_position(
self, start_pos: Optional[int] = None, end_pos: Optional[int] = None
) -> int:
"""
Return relative cursor position of matching [, (, { or < bracket.
When `start_pos` or `end_pos` are given, don't look past those positions.
"""
# Look for a match.
for pair in "()", "[]", "{}", "<>":
A = pair[0]
B = pair[1]
if self.current_char == A:
return self.find_enclosing_bracket_right(A, B, end_pos=end_pos) or 0
elif self.current_char == B:
return self.find_enclosing_bracket_left(A, B, start_pos=start_pos) or 0
return 0
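# Illustrative example: for the text "(a(b)c)" with the cursor on the first
# "(", find_matching_bracket_position() returns 6, the relative offset of
# the matching ")".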
def get_start_of_document_position(self) -> int:
""" Relative position for the start of the document. """
return -self.cursor_position
def get_end_of_document_position(self) -> int:
""" Relative position for the end of the document. """
return len(self.text) - self.cursor_position
def get_start_of_line_position(self, after_whitespace: bool = False) -> int:
""" Relative position for the start of this line. """
if after_whitespace:
current_line = self.current_line
return (
len(current_line)
- len(current_line.lstrip())
- self.cursor_position_col
)
else:
return -len(self.current_line_before_cursor)
def get_end_of_line_position(self) -> int:
""" Relative position for the end of this line. """
return len(self.current_line_after_cursor)
def last_non_blank_of_current_line_position(self) -> int:
"""
Relative position for the last non blank character of this line.
"""
return len(self.current_line.rstrip()) - self.cursor_position_col - 1
def get_column_cursor_position(self, column: int) -> int:
"""
Return the relative cursor position for this column at the current
line. (It will stay between the boundaries of the line in case of a
larger number.)
"""
line_length = len(self.current_line)
current_column = self.cursor_position_col
column = max(0, min(line_length, column))
return column - current_column
def selection_range(
self,
) -> Tuple[
int, int
]: # XXX: shouldn't this return `None` if there is no selection???
"""
Return (from, to) tuple of the selection.
start and end position are included.
This doesn't take the selection type into account. Use
`selection_ranges` instead.
"""
if self.selection:
from_, to = sorted(
[self.cursor_position, self.selection.original_cursor_position]
)
else:
from_, to = self.cursor_position, self.cursor_position
return from_, to
def selection_ranges(self) -> Iterable[Tuple[int, int]]:
"""
Return a list of `(from, to)` tuples for the selection, or nothing if
nothing was selected. The upper boundary is not included.
This will yield several (from, to) tuples in case of a BLOCK selection.
This will yield zero-width ranges, like (8, 8), for empty lines in a block
selection.
"""
if self.selection:
from_, to = sorted(
[self.cursor_position, self.selection.original_cursor_position]
)
if self.selection.type == SelectionType.BLOCK:
from_line, from_column = self.translate_index_to_position(from_)
to_line, to_column = self.translate_index_to_position(to)
from_column, to_column = sorted([from_column, to_column])
lines = self.lines
if vi_mode():
to_column += 1
for l in range(from_line, to_line + 1):
line_length = len(lines[l])
if from_column <= line_length:
yield (
self.translate_row_col_to_index(l, from_column),
self.translate_row_col_to_index(
l, min(line_length, to_column)
),
)
else:
# In case of a LINES selection, go to the start/end of the lines.
if self.selection.type == SelectionType.LINES:
from_ = max(0, self.text.rfind("\n", 0, from_) + 1)
if self.text.find("\n", to) >= 0:
to = self.text.find("\n", to)
else:
to = len(self.text) - 1
# In Vi mode, the upper boundary is always included. For Emacs,
# that's not the case.
if vi_mode():
to += 1
yield from_, to
def selection_range_at_line(self, row: int) -> Optional[Tuple[int, int]]:
"""
If the selection spans a portion of the given line, return a (from, to) tuple.
The returned upper boundary is not included in the selection, so
`(0, 0)` is an empty selection. `(0, 1)`, is a one character selection.
Returns None if the selection doesn't cover this line at all.
"""
if self.selection:
line = self.lines[row]
row_start = self.translate_row_col_to_index(row, 0)
row_end = self.translate_row_col_to_index(row, len(line))
from_, to = sorted(
[self.cursor_position, self.selection.original_cursor_position]
)
# Take the intersection of the current line and the selection.
intersection_start = max(row_start, from_)
intersection_end = min(row_end, to)
if intersection_start <= intersection_end:
if self.selection.type == SelectionType.LINES:
intersection_start = row_start
intersection_end = row_end
elif self.selection.type == SelectionType.BLOCK:
_, col1 = self.translate_index_to_position(from_)
_, col2 = self.translate_index_to_position(to)
col1, col2 = sorted([col1, col2])
if col1 > len(line):
return None # Block selection doesn't cross this line.
intersection_start = self.translate_row_col_to_index(row, col1)
intersection_end = self.translate_row_col_to_index(row, col2)
_, from_column = self.translate_index_to_position(intersection_start)
_, to_column = self.translate_index_to_position(intersection_end)
# In Vi mode, the upper boundary is always included. For Emacs
# mode, that's not the case.
if vi_mode():
to_column += 1
return from_column, to_column
return None
def cut_selection(self) -> Tuple["Document", ClipboardData]:
"""
Return a (:class:`.Document`, :class:`.ClipboardData`) tuple, where the
document represents the new document when the selection is cut, and the
clipboard data represents whatever has to be put on the clipboard.
"""
if self.selection:
cut_parts = []
remaining_parts = []
new_cursor_position = self.cursor_position
last_to = 0
for from_, to in self.selection_ranges():
if last_to == 0:
new_cursor_position = from_
remaining_parts.append(self.text[last_to:from_])
cut_parts.append(self.text[from_:to])
last_to = to
remaining_parts.append(self.text[last_to:])
cut_text = "\n".join(cut_parts)
remaining_text = "".join(remaining_parts)
# In case of a LINES selection, don't include the trailing newline.
if self.selection.type == SelectionType.LINES and cut_text.endswith("\n"):
cut_text = cut_text[:-1]
return (
Document(text=remaining_text, cursor_position=new_cursor_position),
ClipboardData(cut_text, self.selection.type),
)
else:
return self, ClipboardData("")
def paste_clipboard_data(
self,
data: ClipboardData,
paste_mode: PasteMode = PasteMode.EMACS,
count: int = 1,
) -> "Document":
"""
Return a new :class:`.Document` instance which contains the result of
pasting this data at the current cursor position.
:param paste_mode: Where to paste. (Before/after/emacs.)
:param count: When >1, Paste multiple times.
"""
before = paste_mode == PasteMode.VI_BEFORE
after = paste_mode == PasteMode.VI_AFTER
if data.type == SelectionType.CHARACTERS:
if after:
new_text = (
self.text[: self.cursor_position + 1]
+ data.text * count
+ self.text[self.cursor_position + 1 :]
)
else:
new_text = (
self.text_before_cursor + data.text * count + self.text_after_cursor
)
new_cursor_position = self.cursor_position + len(data.text) * count
if before:
new_cursor_position -= 1
elif data.type == SelectionType.LINES:
l = self.cursor_position_row
if before:
lines = self.lines[:l] + [data.text] * count + self.lines[l:]
new_text = "\n".join(lines)
new_cursor_position = len("".join(self.lines[:l])) + l
else:
lines = self.lines[: l + 1] + [data.text] * count + self.lines[l + 1 :]
new_cursor_position = len("".join(self.lines[: l + 1])) + l + 1
new_text = "\n".join(lines)
elif data.type == SelectionType.BLOCK:
lines = self.lines[:]
start_line = self.cursor_position_row
start_column = self.cursor_position_col + (0 if before else 1)
for i, line in enumerate(data.text.split("\n")):
index = i + start_line
if index >= len(lines):
lines.append("")
lines[index] = lines[index].ljust(start_column)
lines[index] = (
lines[index][:start_column]
+ line * count
+ lines[index][start_column:]
)
new_text = "\n".join(lines)
new_cursor_position = self.cursor_position + (0 if before else 1)
return Document(text=new_text, cursor_position=new_cursor_position)
def empty_line_count_at_the_end(self) -> int:
"""
Return number of empty lines at the end of the document.
"""
count = 0
for line in self.lines[::-1]:
if not line or line.isspace():
count += 1
else:
break
return count
def start_of_paragraph(self, count: int = 1, before: bool = False) -> int:
"""
Return the start of the current paragraph. (Relative cursor position.)
"""
def match_func(text: str) -> bool:
return not text or text.isspace()
line_index = self.find_previous_matching_line(
match_func=match_func, count=count
)
if line_index:
add = 0 if before else 1
return min(0, self.get_cursor_up_position(count=-line_index) + add)
else:
return -self.cursor_position
def end_of_paragraph(self, count: int = 1, after: bool = False) -> int:
"""
Return the end of the current paragraph. (Relative cursor position.)
"""
def match_func(text: str) -> bool:
return not text or text.isspace()
line_index = self.find_next_matching_line(match_func=match_func, count=count)
if line_index:
add = 0 if after else 1
return max(0, self.get_cursor_down_position(count=line_index) - add)
else:
return len(self.text_after_cursor)
# Modifiers.
def insert_after(self, text: str) -> "Document":
"""
Create a new document, with this text inserted after the buffer.
It keeps selection ranges and cursor position in sync.
"""
return Document(
text=self.text + text,
cursor_position=self.cursor_position,
selection=self.selection,
)
def insert_before(self, text: str) -> "Document":
"""
Create a new document, with this text inserted before the buffer.
It keeps selection ranges and cursor position in sync.
"""
selection_state = self.selection
if selection_state:
selection_state = SelectionState(
original_cursor_position=selection_state.original_cursor_position
+ len(text),
type=selection_state.type,
)
return Document(
text=text + self.text,
cursor_position=self.cursor_position + len(text),
selection=selection_state,
)
|
neavouli/yournextrepresentative
|
refs/heads/release-neavouli
|
candidates/tests/test_date_parsing.py
|
1
|
from __future__ import unicode_literals
from django.test import TestCase
from django.test.utils import override_settings
from django.utils.translation import override
from django_date_extensions.fields import ApproximateDate
from candidates.models import parse_approximate_date
# These tests supplement the doctests; they're not done as
# doctests because we need to override settings to pick
# either US or non-US day/month default ordering:
class DateParsingTests(TestCase):
def test_only_year(self):
parsed = parse_approximate_date('1977')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1977-00-00')
def test_iso_8601(self):
parsed = parse_approximate_date('1977-04-01')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1977-04-01')
def test_nonsense(self):
with self.assertRaises(ValueError):
parse_approximate_date('12345678')
def test_dd_mm_yyyy_with_slashes(self):
parsed = parse_approximate_date('1/4/1977')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1977-04-01')
@override_settings(DD_MM_DATE_FORMAT_PREFERRED=False)
def test_mm_dd_yyyy_with_slashes(self):
parsed = parse_approximate_date('4/1/1977')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1977-04-01')
def test_dd_mm_yyyy_with_dashes(self):
parsed = parse_approximate_date('1-4-1977')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1977-04-01')
def test_natural_date_string(self):
parsed = parse_approximate_date('31st December 1999')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1999-12-31')
def test_empty_string(self):
with self.assertRaises(ValueError):
parse_approximate_date('')
def test_expanded_natural_date_string(self):
parsed = parse_approximate_date('31st of December 1999')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1999-12-31')
def test_nonsense_string(self):
with self.assertRaises(ValueError):
parse_approximate_date('this is not a date')
def test_spanish_date_string(self):
with self.assertRaises(ValueError):
parsed = parse_approximate_date('20 febrero 1954 ')
with override('es'):
parsed = parse_approximate_date('20 febrero 1954 ')
self.assertEqual(type(parsed), ApproximateDate)
self.assertEqual(repr(parsed), '1954-02-20')
|
temasek/android_external_chromium_org
|
refs/heads/cm-11.0
|
third_party/protobuf/python/google/protobuf/internal/message_listener.py
|
590
|
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Defines a listener interface for observing certain
state transitions on Message objects.
Also defines a null implementation of this interface.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class MessageListener(object):
"""Listens for modifications made to a message. Meant to be registered via
Message._SetListener().
Attributes:
dirty: If True, then calling Modified() would be a no-op. This can be
used to avoid these calls entirely in the common case.
"""
def Modified(self):
"""Called every time the message is modified in such a way that the parent
message may need to be updated. This currently means either:
(a) The message was modified for the first time, so the parent message
should henceforth mark the message as present.
(b) The message's cached byte size became dirty -- i.e. the message was
modified for the first time after a previous call to ByteSize().
Therefore the parent should also mark its byte size as dirty.
Note that (a) implies (b), since new objects start out with a client cached
size (zero). However, we document (a) explicitly because it is important.
Modified() will *only* be called in response to one of these two events --
not every time the sub-message is modified.
Note that if the listener's |dirty| attribute is true, then calling
Modified at the moment would be a no-op, so it can be skipped. Performance-
sensitive callers should check this attribute directly before calling since
it will be true most of the time.
"""
raise NotImplementedError
class NullMessageListener(object):
"""No-op MessageListener implementation."""
def Modified(self):
pass
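# Illustrative sketch (not part of this module): a minimal concrete listener
# that simply records that a modification happened could look like:
#
#   class DirtyFlagListener(object):
#     def __init__(self):
#       self.dirty = False
#     def Modified(self):
#       self.dirty = True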
|
jamespcole/home-assistant
|
refs/heads/master
|
homeassistant/components/onvif/__init__.py
|
15
|
"""The onvif component."""
|
davidgbe/scikit-learn
|
refs/heads/master
|
examples/decomposition/plot_pca_vs_fa_model_selection.py
|
142
|
"""
===============================================================
Model selection with Probabilistic PCA and Factor Analysis (FA)
===============================================================
Probabilistic PCA and Factor Analysis are probabilistic models.
The consequence is that the likelihood of new data can be used
for model selection and covariance estimation.
Here we compare PCA and FA with cross-validation on low rank data corrupted
with homoscedastic noise (noise variance
is the same for each feature) or heteroscedastic noise (noise variance
is different for each feature). In a second step we compare the model
likelihood to the likelihoods obtained from shrinkage covariance estimators.
One can observe that with homoscedastic noise both FA and PCA succeed
in recovering the size of the low rank subspace. The likelihood with PCA
is higher than FA in this case. However PCA fails and overestimates
the rank when heteroscedastic noise is present. Under appropriate
circumstances the low rank models are more likely than shrinkage models.
The automatic estimation from
Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604
by Thomas P. Minka is also compared.
"""
print(__doc__)
# Authors: Alexandre Gramfort
# Denis A. Engemann
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.covariance import ShrunkCovariance, LedoitWolf
from sklearn.cross_validation import cross_val_score
from sklearn.grid_search import GridSearchCV
###############################################################################
# Create the data
n_samples, n_features, rank = 1000, 50, 10
sigma = 1.
rng = np.random.RandomState(42)
U, _, _ = linalg.svd(rng.randn(n_features, n_features))
X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T)
# Adding homoscedastic noise
X_homo = X + sigma * rng.randn(n_samples, n_features)
# Adding heteroscedastic noise
sigmas = sigma * rng.rand(n_features) + sigma / 2.
X_hetero = X + rng.randn(n_samples, n_features) * sigmas
###############################################################################
# Fit the models
n_components = np.arange(0, n_features, 5) # options for n_components
def compute_scores(X):
pca = PCA()
fa = FactorAnalysis()
pca_scores, fa_scores = [], []
for n in n_components:
pca.n_components = n
fa.n_components = n
pca_scores.append(np.mean(cross_val_score(pca, X)))
fa_scores.append(np.mean(cross_val_score(fa, X)))
return pca_scores, fa_scores
def shrunk_cov_score(X):
shrinkages = np.logspace(-2, 0, 30)
cv = GridSearchCV(ShrunkCovariance(), {'shrinkage': shrinkages})
return np.mean(cross_val_score(cv.fit(X).best_estimator_, X))
def lw_score(X):
return np.mean(cross_val_score(LedoitWolf(), X))
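# Note: cross_val_score here relies on each estimator's score() method, which
# for PCA, FactorAnalysis, ShrunkCovariance and LedoitWolf returns the average
# log-likelihood of held-out data, so the CV curves and the covariance
# baselines below are directly comparable.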
for X, title in [(X_homo, 'Homoscedastic Noise'),
(X_hetero, 'Heteroscedastic Noise')]:
pca_scores, fa_scores = compute_scores(X)
n_components_pca = n_components[np.argmax(pca_scores)]
n_components_fa = n_components[np.argmax(fa_scores)]
pca = PCA(n_components='mle')
pca.fit(X)
n_components_pca_mle = pca.n_components_
print("best n_components by PCA CV = %d" % n_components_pca)
print("best n_components by FactorAnalysis CV = %d" % n_components_fa)
print("best n_components by PCA MLE = %d" % n_components_pca_mle)
plt.figure()
plt.plot(n_components, pca_scores, 'b', label='PCA scores')
plt.plot(n_components, fa_scores, 'r', label='FA scores')
plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-')
plt.axvline(n_components_pca, color='b',
label='PCA CV: %d' % n_components_pca, linestyle='--')
plt.axvline(n_components_fa, color='r',
label='FactorAnalysis CV: %d' % n_components_fa, linestyle='--')
plt.axvline(n_components_pca_mle, color='k',
label='PCA MLE: %d' % n_components_pca_mle, linestyle='--')
# compare with other covariance estimators
plt.axhline(shrunk_cov_score(X), color='violet',
label='Shrunk Covariance MLE', linestyle='-.')
plt.axhline(lw_score(X), color='orange',
label='LedoitWolf MLE', linestyle='-.')
plt.xlabel('nb of components')
plt.ylabel('CV scores')
plt.legend(loc='lower right')
plt.title(title)
plt.show()
|
PyWaw/pywaw.org
|
refs/heads/master
|
pywaw/meetups/forms.py
|
1
|
from django import forms
from django.core.exceptions import ValidationError
from meetups.constants import EITHER_EXISTING_OR_NEW_SPEAKER_ERROR
from . import models
class TalkProposalForm(forms.ModelForm):
talk_title = forms.CharField()
talk_description = forms.CharField(max_length=1000, widget=forms.Textarea)
speaker = forms.ModelChoiceField(
queryset=models.Speaker.objects.filter(talks__meetup__isnull=False).distinct(),
required=False,
)
speaker_first_name = forms.CharField(required=False, max_length=30)
speaker_last_name = forms.CharField(required=False, max_length=30)
speaker_website = forms.URLField(required=False)
speaker_phone = forms.CharField(required=False, max_length=30)
speaker_email = forms.EmailField(required=False)
speaker_biography = forms.CharField(required=False, widget=forms.Textarea)
speaker_photo = forms.ImageField(required=False)
REQUIRED_SPEAKER_FIELDS = [
'speaker_first_name',
'speaker_last_name',
'speaker_phone',
'speaker_email',
'speaker_biography',
'speaker_photo',
]
class Meta:
model = models.TalkProposal
fields = ('message',)
def save(self, *args, **kwargs):
talk_proposal = super().save(commit=False)
talk = models.Talk.objects.create(
title=self.cleaned_data['talk_title'],
description=self.cleaned_data['talk_description'],
)
if self.cleaned_data['speaker']:
speaker = self.cleaned_data['speaker']
else:
speaker = models.Speaker.objects.create(
first_name=self.cleaned_data['speaker_first_name'],
last_name=self.cleaned_data['speaker_last_name'],
website=self.cleaned_data['speaker_website'],
phone=self.cleaned_data['speaker_phone'],
email=self.cleaned_data['speaker_email'],
biography=self.cleaned_data['speaker_biography'],
photo=self.cleaned_data['speaker_photo'],
)
talk.speakers.add(speaker)
talk_proposal.talk = talk
talk_proposal.save()
return talk_proposal
def clean(self):
if self._existing_speaker_field_empty():
if self._all_required_new_speaker_fields_empty():
raise ValidationError(EITHER_EXISTING_OR_NEW_SPEAKER_ERROR)
elif not self._all_required_new_speaker_fields_empty():
for field_name in self.REQUIRED_SPEAKER_FIELDS:
if self.cleaned_data.get(field_name) in ['', None]:
self._errors[field_name] = self.fields[field_name].error_messages['required']
return self.cleaned_data
def _all_required_new_speaker_fields_empty(self):
        return all(self.cleaned_data.get(field_name) in ('', None) for field_name in self.REQUIRED_SPEAKER_FIELDS)
def _existing_speaker_field_empty(self):
return not self.cleaned_data['speaker']
|
ricardogtx/estudoDjango
|
refs/heads/master
|
simplemooc/core/tests.py
|
1
|
from django.test import TestCase
from django.core import mail
from django.test.client import Client
from django.urls import reverse
class HomeViewTest(TestCase):
def test_home_status_code(self):
client = Client()
response = client.get(reverse('core:home'))
self.assertEqual(response.status_code, 200)
def test_home_template_used(self):
client = Client()
response = client.get(reverse('core:home'))
self.assertTemplateUsed(response, 'home.html')
self.assertTemplateUsed(response, 'base.html')
|
micolous/intuition
|
refs/heads/master
|
src/intuition/protocol.py
|
1
|
"""
intuition/protocol.py - Twisted protocol library for OWL Intuition's multicast UDP energy monitoring protocol.
Copyright 2013-2014 Michael Farrell <micolous+git@gmail.com>
Copyright 2013 Johan van den Dorpe <johan.vandendorpe@gmail.com>
This library is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this library. If not, see <http://www.gnu.org/licenses/>.
"""
from warnings import warn
from twisted.internet.protocol import DatagramProtocol
from lxml import objectify
from decimal import Decimal
MCAST_ADDR = '224.192.32.19'
MCAST_PORT = 22600
class OwlBaseMessage(object):
@property
def mac(self):
"""
MAC Address of the Network Owl. This information comes from the
body of the multicast UDP traffic (not the ethernet headers), so may
be spoofed.
"""
return self._mac
@property
def rssi(self):
"""
		Receive signal strength (dBm).
"""
return self._rssi
@property
def lqi(self):
"""
Link quality, with 0 being best.
"""
return self._lqi
class OwlChannel(object):
# structure for storing data about electricity channels
def __init__(self, channel_id, current_w, daily_wh):
self._channel_id = channel_id
self._current_w = Decimal(current_w)
self._daily_wh = Decimal(daily_wh)
@property
def channel_id(self):
return self._channel_id
@property
def current_w(self):
return self._current_w
@property
def daily_wh(self):
return self._daily_wh
def __str__(self):
return '<OwlChannel: id=%s, current=%s, today=%s>' % (
self.channel_id,
self.current_w,
self.daily_wh
)
class OwlTemperature(object):
def __init__(self, zone, current_temp, required_temp):
self._zone = zone
self._current_temp = Decimal(current_temp)
self._required_temp = Decimal(required_temp)
@property
def zone_id(self):
"""
Unique identifier for the zone.
"""
return self._zone
@property
def current_temp(self):
"""
The current temperature in the zone. Units are undefined by the
		documentation, but appear to be degrees Celsius.
"""
return self._current_temp
@property
def required_temp(self):
"""
The desired temperature in the zone. Units are undefined by the
		documentation, but appear to be degrees Celsius.
"""
return self._required_temp
def __str__(self):
return '<OwlTemperature: id=%s, current=%s C, required=%s C>' % (
self.zone_id,
self.current_temp,
self.required_temp
)
class OwlHeating(OwlBaseMessage):
def __init__(self, datagram):
# TODO: support Network Owl 2.3
assert (datagram.tag == 'heating'), ('OwlHeating XML must have `heating` root node (got %r).' % datagram.tag)
self._mac = datagram.attrib['id']
# read signal information for the sensor's 433MHz link
self._rssi = Decimal(datagram.signal[0].attrib['rssi'])
self._lqi = Decimal(datagram.signal[0].attrib['lqi'])
# read battery information from the sensor.
self._battery_mv = Decimal(datagram.battery[0].attrib['level'][:-2])
self._zones = {}
for temp in datagram.temperature:
assert temp.attrib['zone'] not in self._zones
self._zones[temp.attrib['zone']] = OwlTemperature(temp.attrib['zone'], temp.current[0].text, temp.required[0].text)
@property
def battery_mv(self):
"""
Voltage level of the battery in the sensor, in millivolts.
"""
return self._battery_mv
@property
def zones(self):
"""
Zones defined for managing heating.
"""
return self._zones
def __str__(self):
return '<OwlHeating: rssi=%s, lqi=%s, battery=%s mV, zones=%s>' % (
self.rssi,
self.lqi,
self.battery_mv,
', '.join((str(x) for x in self.zones.itervalues()))
)
class OwlElectricity(OwlBaseMessage):
def __init__(self, datagram):
assert (datagram.tag == 'electricity'), ('OwlElectricity XML must have `electricity` root node (got %r).' % datagram.tag)
self._mac = datagram.attrib['id']
# read signal information for the sensor's 433MHz link
self._rssi = Decimal(datagram.signal[0].attrib['rssi'])
self._lqi = Decimal(datagram.signal[0].attrib['lqi'])
# read battery information from the sensor.
self._battery_pc = Decimal(datagram.battery[0].attrib['level'][:-1])
# read sensors (channels)
self._channels = {}
for channel in datagram.chan:
assert channel.attrib['id'] not in self._channels, 'Channel duplicate'
assert channel.curr[0].attrib['units'] == 'w', 'Current units must be watts'
assert channel.day[0].attrib['units'] == 'wh', 'Daily usage must be watthours'
# we're good and done our tests, create a channel
self._channels[channel.attrib['id']] = OwlChannel(channel.attrib['id'], channel.curr[0].text, channel.day[0].text)
@property
def battery_pc(self):
"""
Percentage of battery remaining.
Only on OwlElectricity messages.
"""
return self._battery_pc
@property
def battery(self):
"""
Deprecated: use OwlElectricity.battery_pc instead.
"""
warn('Use OwlElectricity.battery_pc instead.', DeprecationWarning, stacklevel=2)
return self.battery_pc
@property
def channels(self):
return self._channels
def __str__(self):
return '<OwlElectricity: rssi=%s, lqi=%s, battery=%s%%, channels=%s>' % (
self.rssi,
self.lqi,
self.battery_pc,
', '.join((str(x) for x in self.channels.itervalues()))
)
def parse_datagram(datagram):
"""
Parses a Network Owl datagram.
"""
xml = objectify.fromstring(datagram)
if xml.tag == 'electricity':
msg = OwlElectricity(xml)
elif xml.tag == 'heating':
# note: requires network owl 2.2
# TODO: implement network owl 2.3 support
msg = OwlHeating(xml)
else:
		raise NotImplementedError('Message type %r not implemented.' % xml.tag)
return msg
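# A minimal sketch (hypothetical payload, not from the Network Owl docs)
# showing what parse_datagram() does with a typical 'electricity' message.
def _example_parse(): # pragma: no cover
	sample = ('<electricity id="443719000001">'
		'<signal rssi="-42" lqi="15"/>'
		'<battery level="100%"/>'
		'<chan id="0"><curr units="w">305.00</curr>'
		'<day units="wh">4182.43</day></chan>'
		'</electricity>')
	msg = parse_datagram(sample)
	assert msg.channels['0'].current_w == Decimal('305.00')
	print msg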
class OwlIntuitionProtocol(DatagramProtocol):
def __init__(self, iface=''):
"""
		Protocol for Owl Intuition (Network Owl) multicast UDP.
		:param iface: Name of the interface to use to communicate with the Network Owl. If not specified, uses the default network interface on the host.
:type iface: str
"""
self.iface = iface
def startProtocol(self):
self.transport.joinGroup(MCAST_ADDR, self.iface)
def datagramReceived(self, datagram, address):
msg = parse_datagram(datagram)
self.owlReceived(address, msg)
def owlReceived(self, address, msg):
# for the test program
print '%s: %s' % (address, msg)
if __name__ == '__main__': # pragma: no cover
# Simple test program!
from twisted.internet import reactor
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('-i', '--iface', dest='iface', default='', help='Network interface to use for getting data.')
options = parser.parse_args()
protocol = OwlIntuitionProtocol(iface=options.iface)
reactor.listenMulticast(MCAST_PORT, protocol, listenMultiple=True)
reactor.run()
|
camptocamp/c2c-rd-addons
|
refs/heads/8.0
|
c2c_account_closing_remarks/__openerp__.py
|
4
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2011 Tiny SPRL (<http://tiny.be>).
# Copyright (C) 2010-2011 ChriCar Beteiligungs- und Beratungs- GmbH (<http://www.camptocamp.at>)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{ 'sequence': 500,
'name': 'Account Closing Remarks',
'version': '0.7',
'category': 'Accounting & Finance',
'description': """
Add a per year textbox for description of account closing remarks
""",
'author': 'ChriCar Beteiligungs- und Beratungs- GmbH',
'depends': [ 'account' ],
'data': ['account_view.xml','security/ir.model.access.csv'
],
'demo_xml': [],
'installable': False,
'active': False,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
shibumi/yowsup
|
refs/heads/master
|
src/Yowsup/Contacts/contacts.py
|
18
|
'''
Copyright (c) <2012> Tarek Galal <tare2.galal@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
from Yowsup.Common.Http.warequest import WARequest
from Yowsup.Common.Http.waresponseparser import JSONResponseParser
from hashlib import md5
import random, sys
from Yowsup.Common.utilities import Utilities
class WAContactsSyncRequest():
def __init__(self, username, password, contacts):
self.username = username
self.password = password
self.contacts = contacts
self.authReq = WAContactsSyncAuth(username, password)
def setCredentials(self, username, password):
self.username = username
self.password = password
self.authReq = WAContactsSyncAuth(username, password)
def setContacts(self, contacts):
self.contacts = contacts
def send(self):
auth = self.authReq.send()
if not auth["message"] == "next token":
return auth
response = self.authReq.response
respH = response.getheader("www-authenticate")
self.authReq._d(respH)
tmp = respH[respH.index('nonce="')+len('nonce="'):]
nonce = tmp[:tmp.index('"')]
q = WAContactsSyncQuery(self.username, self.password, nonce, self.contacts)
resp = q.send()
return resp
class WAContactsSyncAuth(WARequest):
nc = "00000001"
realm = "s.whatsapp.net"
qop = "auth"
digestUri = "WAWA/s.whatsapp.net"
charSet = "utf-8"
authMethod = "X-WAWA"
authTemplate = '{auth_method}: username="{username}",realm="{realm}",nonce="{nonce}",cnonce="{cnonce}",nc="{nc}",qop="auth",\
digest-uri="{digest_uri}",response="{response}",charset="utf-8"'
def __init__(self, username, password, nonce = "0"):
super(WAContactsSyncAuth, self).__init__();
self.url = "sro.whatsapp.net/v2/sync/a"
self.type = "POST"
cnonce = Utilities.str(random.randint(100000000000000,1000000000000000), 36)
credentials = bytearray((username+":s.whatsapp.net:").encode())
credentials.extend(password)
if sys.version_info >= (3, 0):
buf = lambda x: bytes(x, 'iso-8859-1') if type(x) is str else bytes(x)
else:
buf = buffer
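		# SASL DIGEST-MD5 (RFC 2831) response computation:
		#   A1 = MD5(username ":" realm ":" password) ":" nonce ":" cnonce
		#   A2 = "AUTHENTICATE:" digest-uri
		#   response = HEX(MD5(HEX(MD5(A1)) ":" nonce ":" nc ":" cnonce ":auth:" HEX(MD5(A2))))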
response = self.encode(
self.md5(
self.encode(
self.md5(
self.md5( buf ( credentials ) )
+ (":" + nonce + ":" + cnonce).encode()
)
)
+ (":"+nonce+":" + WAContactsSyncAuth.nc+":" + cnonce + ":auth:").encode()
+ self.encode(
self.md5(("AUTHENTICATE:"+WAContactsSyncAuth.digestUri).encode())
))).decode()
authField = WAContactsSyncAuth.authTemplate.format(auth_method = WAContactsSyncAuth.authMethod,
username = username,
realm = WAContactsSyncAuth.realm,
nonce = nonce,
cnonce = cnonce,
nc= WAContactsSyncAuth.nc,
digest_uri = WAContactsSyncAuth.digestUri,
response = response)
self.addHeaderField("Authorization", authField)
self.pvars = ["message"]
self.setParser(JSONResponseParser())
def md5(self, data):
return md5(data).digest();
def getResponseDigest(self):
pass
def encode(self, inp):
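		# Hex-encode a byte sequence: _enc maps each nibble to its ASCII
		# hex digit (0-9 -> '0'-'9', 10-15 -> 'a'-'f'), giving the lowercase
		# hex representation used in the digest response.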
res = []
def _enc(n):
if n < 10:
return n + 48
return n + 87
for c in inp:
if type(inp) is str:
c = ord(c)
if c < 0: c += 256
res.append(_enc(c >> 4))
res.append(_enc(c % 16))
return "".join(map(chr, res)).encode();
class WAContactsSyncQuery(WAContactsSyncAuth):
def __init__(self, username, password, nonce, contacts):
super(WAContactsSyncQuery, self).__init__(username, password, nonce)
self.url = "sro.whatsapp.net/v2/sync/q"
self.pvars = ["c"]
self.addParam("ut", "all")
#self.addParam("ut", "wa")
self.addParam("t", "c")
#self.addParam("t", "w")
for c in contacts:
self.addParam("u[]", c)
|
mandeepdhami/nova
|
refs/heads/master
|
nova/tests/unit/virt/libvirt/test_lvm.py
|
17
|
# Copyright 2012 NTT Data. All Rights Reserved.
# Copyright 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import mock
from oslo_concurrency import processutils
from oslo_config import cfg
from nova import exception
from nova import test
from nova import utils
from nova.virt.libvirt import lvm
from nova.virt.libvirt import utils as libvirt_utils
CONF = cfg.CONF
class LvmTestCase(test.NoDBTestCase):
def test_get_volume_size(self):
executes = []
def fake_execute(*cmd, **kwargs):
executes.append(cmd)
return 123456789, None
expected_commands = [('blockdev', '--getsize64', '/dev/foo')]
self.stubs.Set(utils, 'execute', fake_execute)
size = lvm.get_volume_size('/dev/foo')
self.assertEqual(expected_commands, executes)
self.assertEqual(size, 123456789)
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr=('blockdev: cannot open /dev/foo: '
'No such device or address')))
def test_get_volume_size_not_found(self, mock_execute):
self.assertRaises(exception.VolumeBDMPathNotFound,
lvm.get_volume_size, '/dev/foo')
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr=('blockdev: cannot open /dev/foo: '
'No such file or directory')))
def test_get_volume_size_not_found_file(self, mock_execute):
self.assertRaises(exception.VolumeBDMPathNotFound,
lvm.get_volume_size, '/dev/foo')
@mock.patch.object(libvirt_utils, 'path_exists', return_value=True)
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr='blockdev: i am sad in other ways'))
    def test_get_volume_size_unexpected_error(self, mock_execute,
mock_path_exists):
self.assertRaises(processutils.ProcessExecutionError,
lvm.get_volume_size, '/dev/foo')
def test_lvm_clear(self):
def fake_lvm_size(path):
return lvm_size
def fake_execute(*cmd, **kwargs):
executes.append(cmd)
self.stubs.Set(lvm, 'get_volume_size', fake_lvm_size)
self.stubs.Set(utils, 'execute', fake_execute)
# Test the correct dd commands are run for various sizes
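        # clear_volume zeroes in up to three passes: 1 MiB blocks with
        # O_DIRECT, then 1 KiB blocks, then single bytes for any remainder,
        # so the expected dd invocations depend on the reported volume size.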
lvm_size = 1
executes = []
expected_commands = [('dd', 'bs=1', 'if=/dev/zero', 'of=/dev/v1',
'seek=0', 'count=1', 'conv=fdatasync')]
lvm.clear_volume('/dev/v1')
self.assertEqual(expected_commands, executes)
lvm_size = 1024
executes = []
expected_commands = [('dd', 'bs=1024', 'if=/dev/zero', 'of=/dev/v2',
'seek=0', 'count=1', 'conv=fdatasync')]
lvm.clear_volume('/dev/v2')
self.assertEqual(expected_commands, executes)
lvm_size = 1025
executes = []
expected_commands = [('dd', 'bs=1024', 'if=/dev/zero', 'of=/dev/v3',
'seek=0', 'count=1', 'conv=fdatasync')]
expected_commands += [('dd', 'bs=1', 'if=/dev/zero', 'of=/dev/v3',
'seek=1024', 'count=1', 'conv=fdatasync')]
lvm.clear_volume('/dev/v3')
self.assertEqual(expected_commands, executes)
lvm_size = 1048576
executes = []
expected_commands = [('dd', 'bs=1048576', 'if=/dev/zero', 'of=/dev/v4',
'seek=0', 'count=1', 'oflag=direct')]
lvm.clear_volume('/dev/v4')
self.assertEqual(expected_commands, executes)
lvm_size = 1048577
executes = []
expected_commands = [('dd', 'bs=1048576', 'if=/dev/zero', 'of=/dev/v5',
'seek=0', 'count=1', 'oflag=direct')]
expected_commands += [('dd', 'bs=1', 'if=/dev/zero', 'of=/dev/v5',
'seek=1048576', 'count=1', 'conv=fdatasync')]
lvm.clear_volume('/dev/v5')
self.assertEqual(expected_commands, executes)
lvm_size = 1234567
executes = []
expected_commands = [('dd', 'bs=1048576', 'if=/dev/zero', 'of=/dev/v6',
'seek=0', 'count=1', 'oflag=direct')]
expected_commands += [('dd', 'bs=1024', 'if=/dev/zero', 'of=/dev/v6',
'seek=1024', 'count=181', 'conv=fdatasync')]
expected_commands += [('dd', 'bs=1', 'if=/dev/zero', 'of=/dev/v6',
'seek=1233920', 'count=647', 'conv=fdatasync')]
lvm.clear_volume('/dev/v6')
self.assertEqual(expected_commands, executes)
# Test volume_clear_size limits the size
lvm_size = 10485761
CONF.set_override('volume_clear_size', '1', 'libvirt')
executes = []
expected_commands = [('dd', 'bs=1048576', 'if=/dev/zero', 'of=/dev/v7',
'seek=0', 'count=1', 'oflag=direct')]
lvm.clear_volume('/dev/v7')
self.assertEqual(expected_commands, executes)
CONF.set_override('volume_clear_size', '2', 'libvirt')
lvm_size = 1048576
executes = []
expected_commands = [('dd', 'bs=1048576', 'if=/dev/zero', 'of=/dev/v9',
'seek=0', 'count=1', 'oflag=direct')]
lvm.clear_volume('/dev/v9')
self.assertEqual(expected_commands, executes)
# Test volume_clear=shred
CONF.set_override('volume_clear', 'shred', 'libvirt')
CONF.set_override('volume_clear_size', '0', 'libvirt')
lvm_size = 1048576
executes = []
expected_commands = [('shred', '-n3', '-s1048576', '/dev/va')]
lvm.clear_volume('/dev/va')
self.assertEqual(expected_commands, executes)
CONF.set_override('volume_clear', 'shred', 'libvirt')
CONF.set_override('volume_clear_size', '1', 'libvirt')
lvm_size = 10485761
executes = []
expected_commands = [('shred', '-n3', '-s1048576', '/dev/vb')]
lvm.clear_volume('/dev/vb')
self.assertEqual(expected_commands, executes)
# Test volume_clear=none does nothing
CONF.set_override('volume_clear', 'none', 'libvirt')
executes = []
expected_commands = []
lvm.clear_volume('/dev/vc')
self.assertEqual(expected_commands, executes)
@mock.patch.object(utils, 'execute',
side_effect=processutils.ProcessExecutionError(
stderr=('blockdev: cannot open /dev/foo: '
'No such file or directory')))
def test_lvm_clear_ignore_lvm_not_found(self, mock_execute):
lvm.clear_volume('/dev/foo')
def test_fail_remove_all_logical_volumes(self):
def fake_execute(*args, **kwargs):
if 'vol2' in args:
raise processutils.ProcessExecutionError('Error')
with contextlib.nested(
mock.patch.object(lvm, 'clear_volume'),
mock.patch.object(libvirt_utils, 'execute',
side_effect=fake_execute)) as (mock_clear, mock_execute):
self.assertRaises(exception.VolumesNotRemoved,
lvm.remove_volumes,
['vol1', 'vol2', 'vol3'])
self.assertEqual(3, mock_execute.call_count)
|
FFMG/myoddweb.piger
|
refs/heads/master
|
monitor/api/python/Python-3.7.2/Lib/test/final_b.py
|
103
|
"""
Fodder for module finalization tests in test_module.
"""
import shutil
import test.final_a
x = 'b'
class C:
def __del__(self):
# Inspect module globals and builtins
print("x =", x)
print("final_a.x =", test.final_a.x)
print("shutil.rmtree =", getattr(shutil.rmtree, '__name__', None))
print("len =", getattr(len, '__name__', None))
c = C()
_underscored = C()
|
StrellaGroup/erpnext
|
refs/heads/develop
|
erpnext/hr/report/bank_remittance/bank_remittance.py
|
6
|
# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.utils import formatdate
import itertools
from frappe import _, get_all
def execute(filters=None):
columns = [
{
"label": _("Payroll Number"),
"fieldtype": "Link",
"fieldname": "payroll_no",
"options": "Payroll Entry",
"width": 150
},
{
"label": _("Debit A/C Number"),
"fieldtype": "Int",
"fieldname": "debit_account",
"hidden": 1,
"width": 200
},
{
"label": _("Payment Date"),
"fieldtype": "Data",
"fieldname": "payment_date",
"width": 100
},
{
"label": _("Employee Name"),
"fieldtype": "Link",
"fieldname": "employee_name",
"options": "Employee",
"width": 200
},
{
"label": _("Bank Name"),
"fieldtype": "Data",
"fieldname": "bank_name",
"width": 50
},
{
"label": _("Employee A/C Number"),
"fieldtype": "Int",
"fieldname": "employee_account_no",
"width": 50
},
{
"label": _("IFSC Code"),
"fieldtype": "Data",
"fieldname": "bank_code",
"width": 100
},
{
"label": _("Currency"),
"fieldtype": "Data",
"fieldname": "currency",
"width": 50
},
{
"label": _("Net Salary Amount"),
"fieldtype": "Currency",
"options": "currency",
"fieldname": "amount",
"width": 100
}
]
data = []
accounts = get_bank_accounts()
payroll_entries = get_payroll_entries(accounts, filters)
salary_slips = get_salary_slips(payroll_entries)
get_emp_bank_ifsc_code(salary_slips)
for salary in salary_slips:
if salary.bank_name and salary.bank_account_no and salary.debit_acc_no and salary.status in ["Submitted", "Paid"]:
row = {
"payroll_no": salary.payroll_entry,
"debit_account": salary.debit_acc_no,
"payment_date": frappe.utils.formatdate(salary.modified.strftime('%Y-%m-%d')),
"bank_name": salary.bank_name,
"employee_account_no": salary.bank_account_no,
"bank_code": salary.ifsc_code,
"employee_name": salary.employee+": " + salary.employee_name,
"currency": frappe.get_cached_value('Company', filters.company, 'default_currency'),
"amount": salary.net_pay,
}
data.append(row)
return columns, data
def get_bank_accounts():
accounts = [d.name for d in get_all("Account", filters={"account_type": "Bank"})]
return accounts
def get_payroll_entries(accounts, filters):
payroll_filter = [
('payment_account', 'IN', accounts),
('number_of_employees', '>', 0),
('Company', '=', filters.company)
]
if filters.to_date:
payroll_filter.append(('posting_date', '<', filters.to_date))
if filters.from_date:
payroll_filter.append(('posting_date', '>', filters.from_date))
entries = get_all("Payroll Entry", payroll_filter, ["name", "payment_account"])
payment_accounts = [d.payment_account for d in entries]
set_company_account(payment_accounts, entries)
return entries
def get_salary_slips(payroll_entries):
payroll = [d.name for d in payroll_entries]
salary_slips = get_all("Salary Slip", filters = [("payroll_entry", "IN", payroll)],
fields = ["modified", "net_pay", "bank_name", "bank_account_no", "payroll_entry", "employee", "employee_name", "status"]
)
payroll_entry_map = {}
for entry in payroll_entries:
payroll_entry_map[entry.name] = entry
# appending company debit accounts
for slip in salary_slips:
slip["debit_acc_no"] = payroll_entry_map[slip.payroll_entry]['company_account']
return salary_slips
def get_emp_bank_ifsc_code(salary_slips):
emp_names = [d.employee for d in salary_slips]
ifsc_codes = get_all("Employee", [("name", "IN", emp_names)], ["ifsc_code", "name"])
ifsc_codes_map = {}
for code in ifsc_codes:
ifsc_codes_map[code.name] = code
for slip in salary_slips:
slip["ifsc_code"] = ifsc_codes_map[code.name]['ifsc_code']
return salary_slips
def set_company_account(payment_accounts, payroll_entries):
company_accounts = get_all("Bank Account", [("account", "in", payment_accounts)], ["account", "bank_account_no"])
company_accounts_map = {}
for acc in company_accounts:
company_accounts_map[acc.account] = acc
for entry in payroll_entries:
entry["company_account"] = company_accounts_map[entry.payment_account]['bank_account_no']
return payroll_entries
|
Nowheresly/odoo
|
refs/heads/8.0
|
addons/website_forum/controllers/__init__.py
|
4497
|
# -*- coding: utf-8 -*-
import main
|
royorbs3/Leviathan-V1-Kernel-G925T
|
refs/heads/master
|
tools/perf/python/twatch.py
|
7370
|
#! /usr/bin/python
# -*- python -*-
# -*- coding: utf-8 -*-
# twatch - Experimental use of the perf python interface
# Copyright (C) 2011 Arnaldo Carvalho de Melo <acme@redhat.com>
#
# This application is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; version 2.
#
# This application is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
import perf
def main():
cpus = perf.cpu_map()
threads = perf.thread_map()
evsel = perf.evsel(task = 1, comm = 1, mmap = 0,
wakeup_events = 1, watermark = 1,
sample_id_all = 1,
			   sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_TID | perf.SAMPLE_CPU)
evsel.open(cpus = cpus, threads = threads);
evlist = perf.evlist(cpus, threads)
evlist.add(evsel)
evlist.mmap()
while True:
evlist.poll(timeout = -1)
for cpu in cpus:
event = evlist.read_on_cpu(cpu)
if not event:
continue
print "cpu: %2d, pid: %4d, tid: %4d" % (event.sample_cpu,
event.sample_pid,
event.sample_tid),
print event
if __name__ == '__main__':
main()
|
pygeek/django
|
refs/heads/master
|
tests/regressiontests/bash_completion/management/commands/test_command.py
|
183
|
from optparse import make_option
from django.core.management.base import BaseCommand
class Command(BaseCommand):
option_list = BaseCommand.option_list + (
make_option("--list", action="store_true", dest="list",
help="Print all options"),
)
def handle(self, *args, **options):
pass
|
ecotg/Misc
|
refs/heads/master
|
Visions/for_360pi/spiders/__init__.py
|
1
|
"""
Scrape: Visions.ca
To run this script, in this directory, run 'scrapy crawl visions' in the
terminal.
For each category listed on the lefthand banner on the
http://www.visions.ca/Default.aspx page, this script will extract the
product details for one product.
First, the spider will extract and follow the hrefs for all thirteen
categories, see rule 1. Then for all the categories, apart from 'Bundles',
the spider will find and follow href link for the first brand listed on the
category pages' left hand menu, see rule 2. Once on the brand page, this
spider will extract the details for the first product listed on that page.
For the 'Bundles' category, due to the category page's different layout,
the spider will extract the link for the first product listed on that page,
follow that link, and then extract the product's details, see Rule 3.
For each run of the spider, a JSON file and log file are created, containing
the thirteen scraped products, one per category, and a log for the spider,
respectively. Each of thirteen categories are listed as a field for their
respective product.
This script uses Scrapy's XPath selectors and its ItemLoader and
LinkExtractor classes to obtain the product data.
"""
from for_360pi.items import VisionsProduct
from scrapy.selector import Selector
from scrapy.contrib.loader import ItemLoader
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
class VisionSpyder(CrawlSpider):
name = 'visions'
allowed_domains = ['visions.ca']
start_urls = ['http://www.visions.ca/Default.aspx']
# Store long xpaths as a string to aid clarity.
xpath1 = ('//*[@id="mastermenu-container"]/ul[@id="mastermenu-startbtn"]'
'/li/ul[@id="mastermenu-dropdown"]/li/a[starts-with(@href,'
'"/Catalogue/")]')
xpath2 = ('//*[@id="subcatemenu-container"]'
'/div[@id="subcatemenu-brand-all"]/ul/li[1]')
xpath3 = ('//*[@id="ctl00_tdMainPanel"]/div[contains(@class,'
'"bundleItem")][1]//a[contains(@id, "BundleName")]')
rules = (Rule(LinkExtractor(allow = ('/Catalogue/.*'),
restrict_xpaths = (xpath1),
                                deny = ('ProductResults',)),),  # URL pattern, not a domain
Rule(LinkExtractor(restrict_xpaths = (xpath2)),
callback = 'parse_by_brand'),
Rule(LinkExtractor(restrict_xpaths = (xpath3)),
callback = 'parse_by_product'),)
def parse_by_brand(self, response):
"""
Except for the 'Bundles' page, for the first brand listed on each
category page, extract the product details for the first listed
product.
"""
selector = Selector(response)
# Use the 'Bread Crumbs' attribute to compose the category path.
self.crumb_xpath_1 = ('//*[@id="ctl00_pnlBreadCrumbs"]/a//text()')
self.crumb_1 = selector.xpath(self.crumb_xpath_1).extract()
self.crumb_xpath_2 = ('//*[@id="ctl00_pnlBreadCrumbs"]/span//text()')
self.crumb_2 = selector.xpath(self.crumb_xpath_2).extract()
self.category = ('/'.join(self.crumb_1).encode('utf-8') + '/' +
'/'.join(self.crumb_2).encode('utf-8'))
self.empty_page_xpath = ('//*[@id="ctl00_tdMainPanel"]'
'/div/div[@id="ctl00_Content'
'PlaceHolder1_pnlNoRecords"]')
if (selector.xpath(self.empty_page_xpath)):
loader = ItemLoader(item = VisionsProduct(), response = response)
self.empty_page = 'Empty Page/No Records'
# set product details for an empty page
loader.add_value('category', self.category)
loader.add_value('product', self.empty_page)
loader.add_value('price', self.empty_page)
loader.add_value('availability', self.empty_page)
else:
# If not-empty, extract and load product details
self.results_xpath = ('//*[contains(@class,'
'"productresult-itembox")]')
self.results = selector.xpath(self.results_xpath)
loader = ItemLoader(item = VisionsProduct(),
selector = self.results[0])
# Store long xpaths in dict to aid clarity
self.field_xpaths = {
'product': ('div[contains(@class,'
'"contentright")]/h2/a/font/text()'),
'price': ('div[contains(@class, "contentright")]'
'/div/div/span[@class="price"]'),
'price_gif': ('div[contains(@class,"contentright")'
']/div/div[contains(@id, "AddToCart")]/a/img'),
'clearance_gif': ('div[contains(@class,'
'"productresult-imagebox")]/div/div[contains'
'(@style,"position:absolute; bottom")]/img'
'[contains(@src, "final_clearance_'
'box.gif")]')
}
loader.add_value('category', self.category)
loader.add_xpath('product', self.field_xpaths['product'])
# Use the first listed price, which equals the sale price if
# product on sale, else is the regular price.
# For products without listed price, check for 'In Store' img
# attribute to indicate price detail.
try:
loader.add_xpath('price', self.field_xpaths['price'],
re = '\$[\d]*[,]*[\d]*\.[\d]*')
except IndexError:
if (self.results[0].xpath(self.field_xpaths['price_gif'])):
loader.add_value('price', 'Available only In-Store')
# Use the Final clearance gif to help determine product
# availability.
if (self.results[0].xpath(self.field_xpaths['clearance_gif'])):
loader.add_value('availability', 'Limited Quantity/Clearance')
else:
loader.add_value('availability', 'Not Limited/Clearance Item')
yield loader.load_item()
def parse_by_product(self, response):
"""
For the 'Bundles' category, grab the product details for the first
product listed.
"""
self.selector = Selector(response)
self.results = self.selector.xpath('//*[@id="ctl00_tdMainPanel"]')
loader = ItemLoader(item = VisionsProduct(),
selector = self.results[0])
self.field_xpaths = {
'product': ('div[contains(@class, "catalogueTitle")]'
'/h3/text()'),
'price': ('div[@id="ctl00_ContentPlaceHolder1_pnl'
'Bundle"]/div[@id="divProductDetails"]/div'
'[contains(@class, "priceAddToCart")]/div[1]/span'
'[contains(@id, "SalePrice")]/text()')
}
# Extract and load product details
loader.add_xpath('product', self.field_xpaths['product'])
loader.add_xpath('price', self.field_xpaths['price'],
re = '\$[\d]*[,]*[\d]*\.[\d]*')
loader.add_value('availability', 'Not Limited/Clearance Item')
# Because it's an individual product page, manually set the category
self.category = '/'.join(['Home', response.url.split('/')[4]])
loader.add_value('category', self.category)
yield loader.load_item()
def main():
    # Sketch (assumes the Scrapy >= 1.0 CrawlerProcess API): instantiating
    # the spider alone starts no crawl; `scrapy crawl visions` still works.
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    process = CrawlerProcess(get_project_settings())
    process.crawl(VisionSpyder)
    process.start()
if __name__ == '__main__':
    main()
|
the-zebulan/CodeWars
|
refs/heads/master
|
tests/beta_tests/test_separate_filename_from_extension.py
|
1
|
import unittest
from katas.beta.separate_filename_from_extension import \
separate_filename_from_extension
class SeparateFilenameFromExtensionTestCase(unittest.TestCase):
def test_equal_1(self):
self.assertEqual(separate_filename_from_extension(
'/some/path/to/file.txt'
), ('/some/path/to/file', '.txt'))
def test_equal_2(self):
self.assertEqual(separate_filename_from_extension(
'/some/path/.to/.file.txt'
), ('/some/path/.to/.file', '.txt'))
def test_equal_3(self):
self.assertEqual(separate_filename_from_extension(
'/some/path/.to/.file.tar.gz'
), ('/some/path/.to/.file', '.tar.gz'))
|
Hao-Liu/avocado-vt
|
refs/heads/master
|
virttest/libvirt_xml/capability_xml.py
|
18
|
"""
Module simplifying manipulation of XML described at
http://libvirt.org/formatcaps.html
"""
from virttest import xml_utils
from virttest.libvirt_xml import base, accessors, xcepts
class CapabilityXML(base.LibvirtXMLBase):
"""
Handler of libvirt capabilities and nonspecific item operations.
Properties:
uuid:
string of host uuid
guest_capabilities:
dict, read-only
get:
dict map from os type names to dict map from arch names
"""
# TODO: Add more __slots__ and accessors to get some useful stats
# e.g. guest_count etc.
__slots__ = ('uuid', 'guest_capabilities', 'cpu_count', 'arch', 'model',
'vendor', 'feature_list', 'power_management_list',
'cpu_topology', 'cells_topology')
__schema_name__ = "capability"
def __init__(self, virsh_instance=base.virsh):
accessors.XMLElementText(property_name="uuid",
libvirtxml=self,
forbidden=['set', 'del'],
parent_xpath='/host',
tag_name='uuid')
# This will skip self.get_guest_capabilities() defined below
accessors.AllForbidden(property_name="guest_capabilities",
libvirtxml=self)
# This will skip self.get_cpu_count() defined below
accessors.AllForbidden(property_name="cpu_count",
libvirtxml=self)
# The set action is for test.
accessors.XMLElementText(property_name="arch",
libvirtxml=self,
forbidden=['del'],
parent_xpath='/host/cpu',
tag_name='arch')
accessors.XMLElementText(property_name="model",
libvirtxml=self,
forbidden=['del'],
parent_xpath='/host/cpu',
tag_name='model')
accessors.XMLElementText(property_name="vendor",
libvirtxml=self,
forbidden=['del'],
parent_xpath='/host/cpu',
tag_name='vendor')
accessors.XMLElementDict(property_name="cpu_topology",
libvirtxml=self,
forbidden=['del'],
parent_xpath='/host/cpu',
tag_name='topology')
accessors.XMLElementNest(property_name='cells_topology',
libvirtxml=self,
parent_xpath='/host',
tag_name='topology',
subclass=TopologyXML,
subclass_dargs={
'virsh_instance': virsh_instance})
# This will skip self.get_feature_list() defined below
accessors.AllForbidden(property_name="feature_list",
libvirtxml=self)
# This will skip self.get_power_management_list() defined below
accessors.AllForbidden(property_name="power_management_list",
libvirtxml=self)
super(CapabilityXML, self).__init__(virsh_instance)
# calls set_xml accessor method
self['xml'] = self.__dict_get__('virsh').capabilities()
def get_guest_capabilities(self):
"""
Accessor method for guest_capabilities property (in __slots__).
Return a guest capabilities dict in following schema:
{<os_type>: {<arch name>: {'wordsize': '', 'emulator': '',
        'machine': [<machine name>, ...], 'domain_<type>': {'emulator': ''}}}}
"""
guest_capa = {}
xmltreefile = self.__dict_get__('xml')
for guest in xmltreefile.findall('guest'):
os_type_name = guest.find('os_type').text
# Multiple guest definitions can share same os_type (e.g. hvm, pvm)
if os_type_name == 'xen':
os_type_name = 'pv'
guest_arch = guest_capa.get(os_type_name, {})
for arch in guest.findall('arch'):
arch_name = arch.get('name')
arch_prop = guest_arch.get(arch_name, {})
arch_prop['wordsize'] = arch.find('wordsize').text
arch_prop['emulator'] = arch.find('emulator').text
m_list = []
for machine in arch.findall('machine'):
machine_text = machine.text
# Don't add duplicate entries
if not m_list.count(machine_text):
m_list.append(machine_text)
arch_prop['machine'] = m_list
for domain in arch.findall('domain'):
domain_name = "domain_" + domain.get('type')
dom_prop = {}
if domain.find('emulator') is not None:
dom_prop['emulator'] = domain.find('emulator').text
arch_prop[domain_name] = dom_prop
guest_arch[arch_name] = arch_prop
guest_capa[os_type_name] = guest_arch
return guest_capa
def get_power_management_list(self):
"""
Accessor method for power_management_list property (in __slots__)
"""
xmltreefile = self.__dict_get__('xml')
pms = xmltreefile.find('host').find('power_management').getchildren()
return [pm.tag for pm in pms]
def get_feature_list(self):
"""
Accessor method for feature_list property (in __slots__)
"""
feature_list = [] # [<feature1>, <feature2>, ...]
xmltreefile = self.__dict_get__('xml')
for feature_node in xmltreefile.findall('/host/cpu/feature'):
feature_list.append(feature_node)
return feature_list
def get_feature(self, num):
"""
Get a feature element from feature list by number
:return: Feature element
"""
count = len(self.feature_list)
try:
num = int(num)
return self.feature_list[num]
except (ValueError, TypeError):
raise xcepts.LibvirtXMLError("Invalid feature number %s" % num)
except IndexError:
raise xcepts.LibvirtXMLError("Only %d feature(s)" % count)
def get_feature_name(self, num):
"""
Get assigned feature name
:param num: Assigned feature number
:return: Assigned feature name
"""
return self.get_feature(num).get('name')
def get_cpu_count(self):
"""
Accessor method for cpu_count property (in __slots__)
"""
cpu_count = 0
xmltreefile = self.__dict_get__('xml')
for cpus in xmltreefile.findall('/host/topology/cells/cell/cpus'):
cpu_num = cpus.get('num')
cpu_count += int(cpu_num)
return cpu_count
def remove_feature(self, num):
"""
        Remove an assigned feature from the XML
:param num: Assigned feature number
"""
xmltreefile = self.__dict_get__('xml')
feature_remove_node = self.get_feature(num)
cpu_node = xmltreefile.find('/host/cpu')
cpu_node.remove(feature_remove_node)
def check_feature_name(self, name):
"""
Check feature name valid or not.
:param name: The checked feature name
:return: True if check pass
"""
sys_feature = []
        # /proc/cpuinfo is plain text, not XML; read it with a context
        # manager so the handle is closed even if parsing fails.
        with open('/proc/cpuinfo', 'r') as cpu_info_file:
            for line in cpu_info_file:
                if line.find('flags') != -1:
                    feature_names = line.split(':')[1].strip()
                    sys_sub_feature = feature_names.split(' ')
                    sys_feature = list(set(sys_feature + sys_sub_feature))
return (name in sys_feature)
def set_feature(self, num, value):
"""
        Set an assigned feature's value in the XML
:param num: Assigned feature number
:param value: The feature name modified to
"""
feature_set_node = self.get_feature(num)
feature_set_node.set('name', value)
def add_feature(self, value):
"""
Add a feature Element to xml
:param value: The added feature name
"""
xmltreefile = self.__dict_get__('xml')
cpu_node = xmltreefile.find('/host/cpu')
xml_utils.ElementTree.SubElement(cpu_node, 'feature', {'name': value})
class TopologyXML(base.LibvirtXMLBase):
"""
Handler of cells topology element in libvirt capabilities.
Properties:
num:
string of node cell numbers
cell:
list of cpu dict
"""
__slots__ = ('num', 'cell')
def __init__(self, virsh_instance=base.virsh):
"""
Create new cells topology XML instance
"""
accessors.XMLAttribute(property_name="num",
libvirtxml=self,
parent_xpath='/',
tag_name='cells',
attribute='num')
accessors.AllForbidden(property_name="cell",
libvirtxml=self)
super(TopologyXML, self).__init__(virsh_instance)
self.xml = self.__dict_get__('virsh').capabilities()
self.xmltreefile.reroot("/host/topology")
self.xmltreefile.write()
def get_cell(self):
"""
Return CellXML instances list
"""
cell_list = []
for cell_node in self.xmltreefile.findall('/cells/cell'):
xml_str = xml_utils.ElementTree.tostring(
cell_node)
new_cell = CellXML()
new_cell.xml = xml_str
cell_list.append(new_cell)
return cell_list
class CellXML(base.LibvirtXMLBase):
"""
Handler of cell element in libvirt capabilities.
Properties:
cell_id:
string of node cell number id
memory:
int, memory size
mem_unit:
string of memory unit
pages:
list of pages dict
sibling:
list of sibling dict
cpus_num:
string of cpus number
cpu:
list of cpu dict
"""
__slots__ = ('cell_id', 'memory', 'mem_unit', 'pages', 'sibling',
'cpus_num', 'cpu')
def __init__(self, virsh_instance=base.virsh):
"""
Create new cpus XML instance
"""
accessors.XMLAttribute(property_name="cell_id",
libvirtxml=self,
parent_xpath='/',
tag_name='cell',
attribute='id')
accessors.XMLElementInt(property_name="memory",
libvirtxml=self,
parent_xpath='/',
tag_name='memory')
accessors.XMLAttribute(property_name="mem_unit",
libvirtxml=self,
parent_xpath='/',
tag_name='memory',
attribute='unit')
accessors.XMLElementList(property_name="pages",
libvirtxml=self,
parent_xpath='/',
marshal_from=self.marshal_from_pages,
marshal_to=self.marshal_to_pages)
accessors.XMLElementList(property_name="sibling",
libvirtxml=self,
parent_xpath='/distances',
marshal_from=self.marshal_from_sibling,
marshal_to=self.marshal_to_sibling)
accessors.XMLAttribute(property_name="cpus_num",
libvirtxml=self,
parent_xpath='/',
tag_name='cpus',
attribute='num')
accessors.XMLElementList(property_name="cpu",
libvirtxml=self,
parent_xpath='/cpus',
marshal_from=self.marshal_from_cpu,
marshal_to=self.marshal_to_cpu)
super(CellXML, self).__init__(virsh_instance)
self.xml = u"<cell></cell>"
@staticmethod
def marshal_from_pages(item, index, libvirtxml):
"""
Convert a dict to pages tag and attributes.
"""
del index
del libvirtxml
if not isinstance(item, dict):
raise xcepts.LibvirtXMLError("Expected a dictionary of pages "
"attributes, not a %s"
% str(item))
return ('pages', dict(item))
@staticmethod
def marshal_to_pages(tag, attr_dict, index, libvirtxml, text):
"""
Convert a pages tag and attributes to a dict.
"""
del index
del libvirtxml
if tag != 'pages':
return None
attr_dict['text'] = text
return dict(attr_dict)
@staticmethod
def marshal_from_sibling(item, index, libvirtxml):
"""
Convert a dict to sibling tag and attributes.
"""
del index
del libvirtxml
if not isinstance(item, dict):
raise xcepts.LibvirtXMLError("Expected a dictionary of sibling "
"attributes, not a %s"
% str(item))
return ('sibling', dict(item))
@staticmethod
def marshal_to_sibling(tag, attr_dict, index, libvirtxml):
"""
Convert a sibling tag and attributes to a dict.
"""
del index
del libvirtxml
if tag != 'sibling':
return None
return dict(attr_dict)
@staticmethod
def marshal_from_cpu(item, index, libvirtxml):
"""
Convert a dict to cpu tag and attributes.
"""
del index
del libvirtxml
if not isinstance(item, dict):
raise xcepts.LibvirtXMLError("Expected a dictionary of cpu "
"attributes, not a %s"
% str(item))
return ('cpu', dict(item))
@staticmethod
def marshal_to_cpu(tag, attr_dict, index, libvirtxml):
"""
Convert a cpu tag and attributes to a dict.
"""
del index
del libvirtxml
if tag != 'cpu':
return None
return dict(attr_dict)
|
nikhilprathapani/python-for-android
|
refs/heads/master
|
python3-alpha/python3-src/Doc/includes/sqlite3/simple_tableprinter.py
|
96
|
import sqlite3
FIELD_MAX_WIDTH = 20
TABLE_NAME = 'people'
SELECT = 'select * from %s order by age, name_last' % TABLE_NAME
con = sqlite3.connect("mydb")
cur = con.cursor()
cur.execute(SELECT)
# Print a header.
for fieldDesc in cur.description:
print(fieldDesc[0].ljust(FIELD_MAX_WIDTH), end=' ')
print() # Finish the header with a newline.
print('-' * 78)
# For each row, print the value of each field left-justified within
# the maximum possible width of that field.
fieldIndices = range(len(cur.description))
for row in cur:
for fieldIndex in fieldIndices:
fieldValue = str(row[fieldIndex])
print(fieldValue.ljust(FIELD_MAX_WIDTH), end=' ')
print() # Finish the row with a newline.
|
gautam1858/tensorflow
|
refs/heads/master
|
tensorflow/python/autograph/core/function_wrapping.py
|
21
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Support for wrapping converted functions bodies with auxiliary logic."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
from tensorflow.python.framework import ops
@contextlib.contextmanager
def function_scope(function_name):
"""Returns a context manager for the converted body of a function."""
with ops.name_scope(function_name):
yield
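# A minimal usage sketch (hypothetical; assumes graph-mode TF, matching the
# ops.name_scope call above): ops created inside function_scope are grouped
# under the given function name.
#
#   with function_scope('converted_fn'):
#     c = constant_op.constant(1, name='c')
#   # c.op.name == 'converted_fn/c'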
|
goloveychuk/compose
|
refs/heads/master
|
compose/project.py
|
1
|
from __future__ import absolute_import
from __future__ import unicode_literals
import logging
from functools import reduce
from docker.errors import APIError
from .config import ConfigurationError
from .config import get_service_name_from_net
from .const import DEFAULT_TIMEOUT
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .container import Container
from .legacy import check_for_legacy_containers
from .service import ContainerNet
from .service import ConvergenceStrategy
from .service import Net
from .service import Service
from .service import ServiceNet
from .utils import parallel_execute
log = logging.getLogger(__name__)
def sort_service_dicts(services):
# Topological sort (Cormen/Tarjan algorithm).
unmarked = services[:]
temporary_marked = set()
sorted_services = []
def get_service_names(links):
return [link.split(':')[0] for link in links]
def get_service_dependents(service_dict, services):
name = service_dict['name']
return [
service for service in services
if (name in get_service_names(service.get('links', [])) or
name in service.get('volumes_from', []) or
name == get_service_name_from_net(service.get('net')))
]
def visit(n):
if n['name'] in temporary_marked:
if n['name'] in get_service_names(n.get('links', [])):
raise DependencyError('A service can not link to itself: %s' % n['name'])
if n['name'] in n.get('volumes_from', []):
raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
else:
raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
if n in unmarked:
temporary_marked.add(n['name'])
for m in get_service_dependents(n, services):
visit(m)
temporary_marked.remove(n['name'])
unmarked.remove(n)
sorted_services.insert(0, n)
while unmarked:
visit(unmarked[-1])
return sorted_services
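# Minimal sketch of sort_service_dicts() with hypothetical service dicts
# (not from the compose test suite): 'web' links to 'db', so 'db' sorts first.
def _sort_example():  # pragma: no cover
    deps = [{'name': 'web', 'links': ['db:database']}, {'name': 'db'}]
    assert [s['name'] for s in sort_service_dicts(deps)] == ['db', 'web']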
class Project(object):
"""
A collection of services.
"""
def __init__(self, name, services, client):
self.name = name
self.services = services
self.client = client
def labels(self, one_off=False):
return [
'{0}={1}'.format(LABEL_PROJECT, self.name),
'{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
]
@classmethod
def from_dicts(cls, name, service_dicts, client):
"""
        Construct a Project from a list of dicts representing services.
"""
project = cls(name, [], client)
for service_dict in sort_service_dicts(service_dicts):
links = project.get_links(service_dict)
volumes_from = project.get_volumes_from(service_dict)
net = project.get_net(service_dict)
project.services.append(
Service(
client=client,
project=name,
links=links,
net=net,
volumes_from=volumes_from,
**service_dict))
return project
@property
def service_names(self):
return [service.name for service in self.services]
def get_service(self, name):
"""
Retrieve a service by name. Raises NoSuchService
if the named service does not exist.
"""
for service in self.services:
if service.name == name:
return service
raise NoSuchService(name)
def validate_service_names(self, service_names):
"""
Validate that the given list of service names only contains valid
services. Raises NoSuchService if one of the names is invalid.
"""
valid_names = self.service_names
for name in service_names:
if name not in valid_names:
raise NoSuchService(name)
def get_services(self, service_names=None, include_deps=False):
"""
Returns a list of this project's services filtered
by the provided list of names, or all services if service_names is None
or [].
If include_deps is specified, returns a list including the dependencies for
service_names, in order of dependency.
Preserves the original order of self.services where possible,
reordering as needed to resolve dependencies.
Raises NoSuchService if any of the named services do not exist.
"""
if service_names is None or len(service_names) == 0:
return self.get_services(
service_names=self.service_names,
include_deps=include_deps
)
else:
unsorted = [self.get_service(name) for name in service_names]
services = [s for s in self.services if s in unsorted]
if include_deps:
services = reduce(self._inject_deps, services, [])
                # De-duplicate while preserving order.
                uniques = []
                for s in services:
                    if s not in uniques:
                        uniques.append(s)
                return uniques
def get_links(self, service_dict):
links = []
if 'links' in service_dict:
for link in service_dict.get('links', []):
if ':' in link:
service_name, link_name = link.split(':', 1)
else:
service_name, link_name = link, None
try:
links.append((self.get_service(service_name), link_name))
except NoSuchService:
raise ConfigurationError(
'Service "%s" has a link to service "%s" which does not '
'exist.' % (service_dict['name'], service_name))
del service_dict['links']
return links
def get_volumes_from(self, service_dict):
volumes_from = []
if 'volumes_from' in service_dict:
for volume_name in service_dict.get('volumes_from', []):
try:
service = self.get_service(volume_name)
volumes_from.append(service)
except NoSuchService:
try:
container = Container.from_id(self.client, volume_name)
volumes_from.append(container)
except APIError:
raise ConfigurationError(
'Service "%s" mounts volumes from "%s", which is '
'not the name of a service or container.' % (
service_dict['name'],
volume_name))
del service_dict['volumes_from']
return volumes_from
def get_net(self, service_dict):
net = service_dict.pop('net', None)
if not net:
return Net(None)
net_name = get_service_name_from_net(net)
if not net_name:
return Net(net)
try:
return ServiceNet(self.get_service(net_name))
except NoSuchService:
pass
try:
return ContainerNet(Container.from_id(self.client, net_name))
except APIError:
raise ConfigurationError(
'Service "%s" is trying to use the network of "%s", '
'which is not the name of a service or container.' % (
service_dict['name'],
net_name))
def start(self, service_names=None, **options):
for service in self.get_services(service_names):
service.start(**options)
def stop(self, service_names=None, **options):
parallel_execute(
objects=self.containers(service_names),
obj_callable=lambda c: c.stop(**options),
msg_index=lambda c: c.name,
msg="Stopping"
)
def pause(self, service_names=None, **options):
for service in reversed(self.get_services(service_names)):
service.pause(**options)
def unpause(self, service_names=None, **options):
for service in self.get_services(service_names):
service.unpause(**options)
def kill(self, service_names=None, **options):
parallel_execute(
objects=self.containers(service_names),
obj_callable=lambda c: c.kill(**options),
msg_index=lambda c: c.name,
msg="Killing"
)
def remove_stopped(self, service_names=None, **options):
all_containers = self.containers(service_names, stopped=True)
stopped_containers = [c for c in all_containers if not c.is_running]
parallel_execute(
objects=stopped_containers,
obj_callable=lambda c: c.remove(**options),
msg_index=lambda c: c.name,
msg="Removing"
)
def restart(self, service_names=None, **options):
for service in self.get_services(service_names):
service.restart(**options)
def build(self, service_names=None, no_cache=False, pull=False, version=None):
for service in self.get_services(service_names):
if service.can_be_built():
if version is not None:
service.put_version(version)
service.build(no_cache, pull)
else:
log.info('%s uses an image, skipping' % service.name)
def up(self,
service_names=None,
start_deps=True,
strategy=ConvergenceStrategy.changed,
do_build=True,
timeout=DEFAULT_TIMEOUT,
version=None):
services = self.get_services(service_names, include_deps=start_deps)
if version is not None:
for service in services:
service.put_version(version)
for service in services:
service.remove_duplicate_containers()
plans = self._get_convergence_plans(services, strategy)
return [
container
for service in services
for container in service.execute_convergence_plan(
plans[service.name],
do_build=do_build,
timeout=timeout
)
]
def _get_convergence_plans(self, services, strategy):
plans = {}
for service in services:
updated_dependencies = [
name
for name in service.get_dependency_names()
if name in plans
and plans[name].action == 'recreate'
]
if updated_dependencies and strategy.allows_recreate:
log.debug('%s has upstream changes (%s)',
service.name,
", ".join(updated_dependencies))
plan = service.convergence_plan(ConvergenceStrategy.always)
else:
plan = service.convergence_plan(strategy)
plans[service.name] = plan
return plans
def pull(self, service_names=None, ignore_pull_failures=False):
for service in self.get_services(service_names, include_deps=True):
service.pull(ignore_pull_failures)
def containers(self, service_names=None, stopped=False, one_off=False):
if service_names:
self.validate_service_names(service_names)
else:
service_names = self.service_names
containers = list(filter(None, [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
filters={'label': self.labels(one_off=one_off)})]))
def matches_service_names(container):
return container.labels.get(LABEL_SERVICE) in service_names
if not containers:
check_for_legacy_containers(
self.client,
self.name,
self.service_names,
)
return [c for c in containers if matches_service_names(c)]
def _inject_deps(self, acc, service):
dep_names = service.get_dependency_names()
if len(dep_names) > 0:
dep_services = self.get_services(
service_names=list(set(dep_names)),
include_deps=True
)
else:
dep_services = []
dep_services.append(service)
return acc + dep_services
class NoSuchService(Exception):
def __init__(self, name):
self.name = name
self.msg = "No such service: %s" % self.name
def __str__(self):
return self.msg
class DependencyError(ConfigurationError):
pass
|
jffernandez/kivy
|
refs/heads/master
|
examples/widgets/textinput.py
|
81
|
'''
Textinput tests
===============
This test is used to demonstrate virtual keyboard according to current
configuration.
Run this test as::
# use dock virtual keyboard (one instance)
python textinput.py -c kivy:keyboard_mode:dock
# use multi users virtual keyboard (multiples instance)
python textinput.py -c kivy:keyboard_mode:multi
# use system keyboard (one instance)
python textinput.py -c kivy:keyboard_mode:system
# use automatic detection from current platform
python textinput.py -c kivy:keyboard_mode:
'''
import kivy
kivy.require('1.0.8')
from kivy.core.window import Window
from kivy.uix.textinput import TextInput
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.scatter import Scatter
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.config import Config
from kivy.base import runTouchApp
if __name__ == '__main__':
root = FloatLayout()
# create a button to release everything
def release_all_keyboard(*l):
Window.release_all_keyboards()
btn = Button(text='Release\nall\nkeyboards', size_hint=(None, None),
halign='center')
btn.bind(on_release=release_all_keyboard)
root.add_widget(btn)
# show current configuration
lbl = 'Configuration keyboard_mode is %r, keyboard_layout is %r' % (
Config.get('kivy', 'keyboard_mode'),
Config.get('kivy', 'keyboard_layout'))
label = Label(text=lbl, size_hint_y=None, height=50, pos_hint={'top': 1})
root.add_widget(label)
s = Scatter(size_hint=(None, None), pos=(300, 300))
s.add_widget(TextInput(size_hint=(None, None), size=(100, 50)))
root.add_widget(s)
s = Scatter(size_hint=(None, None), pos=(400, 300), rotation=45)
s.add_widget(TextInput(size_hint=(None, None), size=(100, 50)))
root.add_widget(s)
runTouchApp(root)
|
TeamExodus/external_chromium_org
|
refs/heads/EXODUS-5.1
|
build/android/pylib/junit/test_runner.py
|
26
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
from pylib import cmd_helper
from pylib import constants
class JavaTestRunner(object):
"""Runs java tests on the host."""
def __init__(self, options):
self._package_filter = options.package_filter
self._runner_filter = options.runner_filter
self._sdk_version = options.sdk_version
self._test_filter = options.test_filter
self._test_suite = options.test_suite
def SetUp(self):
pass
def RunTest(self, _test):
"""Runs junit tests from |self._test_suite|."""
command = ['java',
'-jar', os.path.join(constants.GetOutDirectory(), 'lib.java',
'%s.jar' % self._test_suite)]
if self._test_filter:
command.extend(['-gtest-filter', self._test_filter])
if self._package_filter:
command.extend(['-package-filter', self._package_filter])
if self._runner_filter:
command.extend(['-runner-filter', self._runner_filter])
if self._sdk_version:
command.extend(['-sdk-version', self._sdk_version])
return cmd_helper.RunCmd(command)
def TearDown(self):
pass
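
# A minimal usage sketch (not part of the original file). The options object
# is normally built by the test dispatcher's argument parser; an
# argparse.Namespace stands in for it here and every value is made up.
#
#   import argparse
#   options = argparse.Namespace(package_filter=None, runner_filter=None,
#                                sdk_version=None, test_filter=None,
#                                test_suite='junit_unit_tests')
#   runner = JavaTestRunner(options)
#   runner.SetUp()
#   exit_code = runner.RunTest(None)  # runs java -jar .../junit_unit_tests.jar
#   runner.TearDown()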
|
dreamsxin/kbengine
|
refs/heads/master
|
kbe/res/scripts/common/Lib/test/audiotests.py
|
72
|
from test.support import findfile, TESTFN, unlink
import unittest
import array
import io
import pickle
import sys
class UnseekableIO(io.FileIO):
def tell(self):
raise io.UnsupportedOperation
def seek(self, *args, **kwargs):
raise io.UnsupportedOperation
class AudioTests:
close_fd = False
def setUp(self):
self.f = self.fout = None
def tearDown(self):
if self.f is not None:
self.f.close()
if self.fout is not None:
self.fout.close()
unlink(TESTFN)
def check_params(self, f, nchannels, sampwidth, framerate, nframes,
comptype, compname):
self.assertEqual(f.getnchannels(), nchannels)
self.assertEqual(f.getsampwidth(), sampwidth)
self.assertEqual(f.getframerate(), framerate)
self.assertEqual(f.getnframes(), nframes)
self.assertEqual(f.getcomptype(), comptype)
self.assertEqual(f.getcompname(), compname)
params = f.getparams()
self.assertEqual(params,
(nchannels, sampwidth, framerate, nframes, comptype, compname))
self.assertEqual(params.nchannels, nchannels)
self.assertEqual(params.sampwidth, sampwidth)
self.assertEqual(params.framerate, framerate)
self.assertEqual(params.nframes, nframes)
self.assertEqual(params.comptype, comptype)
self.assertEqual(params.compname, compname)
dump = pickle.dumps(params)
self.assertEqual(pickle.loads(dump), params)
class AudioWriteTests(AudioTests):
def create_file(self, testfile):
f = self.fout = self.module.open(testfile, 'wb')
f.setnchannels(self.nchannels)
f.setsampwidth(self.sampwidth)
f.setframerate(self.framerate)
f.setcomptype(self.comptype, self.compname)
return f
def check_file(self, testfile, nframes, frames):
with self.module.open(testfile, 'rb') as f:
self.assertEqual(f.getnchannels(), self.nchannels)
self.assertEqual(f.getsampwidth(), self.sampwidth)
self.assertEqual(f.getframerate(), self.framerate)
self.assertEqual(f.getnframes(), nframes)
self.assertEqual(f.readframes(nframes), frames)
def test_write_params(self):
f = self.create_file(TESTFN)
f.setnframes(self.nframes)
f.writeframes(self.frames)
self.check_params(f, self.nchannels, self.sampwidth, self.framerate,
self.nframes, self.comptype, self.compname)
f.close()
def test_write_context_manager_calls_close(self):
# Close checks for a minimum header and will raise an error
# if it is not set, so this proves that close is called.
with self.assertRaises(self.module.Error):
with self.module.open(TESTFN, 'wb'):
pass
with self.assertRaises(self.module.Error):
with open(TESTFN, 'wb') as testfile:
with self.module.open(testfile):
pass
def test_context_manager_with_open_file(self):
with open(TESTFN, 'wb') as testfile:
with self.module.open(testfile) as f:
f.setnchannels(self.nchannels)
f.setsampwidth(self.sampwidth)
f.setframerate(self.framerate)
f.setcomptype(self.comptype, self.compname)
self.assertEqual(testfile.closed, self.close_fd)
with open(TESTFN, 'rb') as testfile:
with self.module.open(testfile) as f:
self.assertFalse(f.getfp().closed)
params = f.getparams()
self.assertEqual(params.nchannels, self.nchannels)
self.assertEqual(params.sampwidth, self.sampwidth)
self.assertEqual(params.framerate, self.framerate)
if not self.close_fd:
self.assertIsNone(f.getfp())
self.assertEqual(testfile.closed, self.close_fd)
def test_context_manager_with_filename(self):
# If the file doesn't get closed, this test won't fail, but it will
# produce a resource leak warning.
with self.module.open(TESTFN, 'wb') as f:
f.setnchannels(self.nchannels)
f.setsampwidth(self.sampwidth)
f.setframerate(self.framerate)
f.setcomptype(self.comptype, self.compname)
with self.module.open(TESTFN) as f:
self.assertFalse(f.getfp().closed)
params = f.getparams()
self.assertEqual(params.nchannels, self.nchannels)
self.assertEqual(params.sampwidth, self.sampwidth)
self.assertEqual(params.framerate, self.framerate)
if not self.close_fd:
self.assertIsNone(f.getfp())
def test_write(self):
f = self.create_file(TESTFN)
f.setnframes(self.nframes)
f.writeframes(self.frames)
f.close()
self.check_file(TESTFN, self.nframes, self.frames)
def test_write_bytearray(self):
f = self.create_file(TESTFN)
f.setnframes(self.nframes)
f.writeframes(bytearray(self.frames))
f.close()
self.check_file(TESTFN, self.nframes, self.frames)
def test_write_array(self):
f = self.create_file(TESTFN)
f.setnframes(self.nframes)
f.writeframes(array.array('h', self.frames))
f.close()
self.check_file(TESTFN, self.nframes, self.frames)
def test_write_memoryview(self):
f = self.create_file(TESTFN)
f.setnframes(self.nframes)
f.writeframes(memoryview(self.frames))
f.close()
self.check_file(TESTFN, self.nframes, self.frames)
def test_incompleted_write(self):
with open(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
f = self.create_file(testfile)
f.setnframes(self.nframes + 1)
f.writeframes(self.frames)
f.close()
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
self.check_file(testfile, self.nframes, self.frames)
def test_multiple_writes(self):
with open(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
f = self.create_file(testfile)
f.setnframes(self.nframes)
framesize = self.nchannels * self.sampwidth
f.writeframes(self.frames[:-framesize])
f.writeframes(self.frames[-framesize:])
f.close()
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
self.check_file(testfile, self.nframes, self.frames)
def test_overflowed_write(self):
with open(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
f = self.create_file(testfile)
f.setnframes(self.nframes - 1)
f.writeframes(self.frames)
f.close()
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
self.check_file(testfile, self.nframes, self.frames)
def test_unseekable_read(self):
with self.create_file(TESTFN) as f:
f.setnframes(self.nframes)
f.writeframes(self.frames)
with UnseekableIO(TESTFN, 'rb') as testfile:
self.check_file(testfile, self.nframes, self.frames)
def test_unseekable_write(self):
with UnseekableIO(TESTFN, 'wb') as testfile:
with self.create_file(testfile) as f:
f.setnframes(self.nframes)
f.writeframes(self.frames)
self.check_file(TESTFN, self.nframes, self.frames)
def test_unseekable_incompleted_write(self):
with UnseekableIO(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
f = self.create_file(testfile)
f.setnframes(self.nframes + 1)
try:
f.writeframes(self.frames)
except OSError:
pass
try:
f.close()
except OSError:
pass
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
self.check_file(testfile, self.nframes + 1, self.frames)
def test_unseekable_overflowed_write(self):
with UnseekableIO(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
f = self.create_file(testfile)
f.setnframes(self.nframes - 1)
try:
f.writeframes(self.frames)
except OSError:
pass
try:
f.close()
except OSError:
pass
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
framesize = self.nchannels * self.sampwidth
self.check_file(testfile, self.nframes - 1, self.frames[:-framesize])
class AudioTestsWithSourceFile(AudioTests):
@classmethod
def setUpClass(cls):
cls.sndfilepath = findfile(cls.sndfilename, subdir='audiodata')
def test_read_params(self):
f = self.f = self.module.open(self.sndfilepath)
#self.assertEqual(f.getfp().name, self.sndfilepath)
self.check_params(f, self.nchannels, self.sampwidth, self.framerate,
self.sndfilenframes, self.comptype, self.compname)
def test_close(self):
with open(self.sndfilepath, 'rb') as testfile:
f = self.f = self.module.open(testfile)
self.assertFalse(testfile.closed)
f.close()
self.assertEqual(testfile.closed, self.close_fd)
with open(TESTFN, 'wb') as testfile:
fout = self.fout = self.module.open(testfile, 'wb')
self.assertFalse(testfile.closed)
with self.assertRaises(self.module.Error):
fout.close()
self.assertEqual(testfile.closed, self.close_fd)
fout.close() # do nothing
def test_read(self):
framesize = self.nchannels * self.sampwidth
chunk1 = self.frames[:2 * framesize]
chunk2 = self.frames[2 * framesize: 4 * framesize]
f = self.f = self.module.open(self.sndfilepath)
self.assertEqual(f.readframes(0), b'')
self.assertEqual(f.tell(), 0)
self.assertEqual(f.readframes(2), chunk1)
f.rewind()
pos0 = f.tell()
self.assertEqual(pos0, 0)
self.assertEqual(f.readframes(2), chunk1)
pos2 = f.tell()
self.assertEqual(pos2, 2)
self.assertEqual(f.readframes(2), chunk2)
f.setpos(pos2)
self.assertEqual(f.readframes(2), chunk2)
f.setpos(pos0)
self.assertEqual(f.readframes(2), chunk1)
with self.assertRaises(self.module.Error):
f.setpos(-1)
with self.assertRaises(self.module.Error):
f.setpos(f.getnframes() + 1)
def test_copy(self):
f = self.f = self.module.open(self.sndfilepath)
fout = self.fout = self.module.open(TESTFN, 'wb')
fout.setparams(f.getparams())
i = 0
n = f.getnframes()
while n > 0:
i += 1
fout.writeframes(f.readframes(i))
n -= i
fout.close()
fout = self.fout = self.module.open(TESTFN, 'rb')
f.rewind()
self.assertEqual(f.getparams(), fout.getparams())
self.assertEqual(f.readframes(f.getnframes()),
fout.readframes(fout.getnframes()))
def test_read_not_from_start(self):
with open(TESTFN, 'wb') as testfile:
testfile.write(b'ababagalamaga')
with open(self.sndfilepath, 'rb') as f:
testfile.write(f.read())
with open(TESTFN, 'rb') as testfile:
self.assertEqual(testfile.read(13), b'ababagalamaga')
with self.module.open(testfile, 'rb') as f:
self.assertEqual(f.getnchannels(), self.nchannels)
self.assertEqual(f.getsampwidth(), self.sampwidth)
self.assertEqual(f.getframerate(), self.framerate)
self.assertEqual(f.getnframes(), self.sndfilenframes)
self.assertEqual(f.readframes(self.nframes), self.frames)
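
# A minimal sketch (not part of this module) of how a concrete test module
# combines these mixins, loosely modelled on the stdlib wave tests; the file
# name and frame data below are illustrative only.
#
#   import unittest
#   import wave
#   from test import audiotests
#
#   class WavePCM8Test(audiotests.AudioWriteTests,
#                      audiotests.AudioTestsWithSourceFile,
#                      unittest.TestCase):
#       module = wave
#       sndfilename = 'pluck-pcm8.wav'    # must exist in Lib/test/audiodata
#       sndfilenframes = 3307
#       nchannels = 2
#       sampwidth = 1
#       framerate = 11025
#       nframes = 48
#       comptype = 'NONE'
#       compname = 'not compressed'
#       frames = b'\x00\x7f' * 48         # nframes * nchannels * sampwidth bytes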
|
osungjin/livechat
|
refs/heads/master
|
livechat_app/hello_gae/handlers.py
|
1
|
# -*- coding: utf-8 -*-
from flask import Blueprint
from flask import render_template
from flask import redirect, request
import pdb
hello_gae = Blueprint('hello_gae', __name__)
livechat = Blueprint('livechat', __name__)
@hello_gae.route('/')
def login():
return render_template("login.html")
@livechat.route('/')
def chat():
name = request.args['id']
return render_template("liveChat.html",name=str(name))
|
chirilo/kuma
|
refs/heads/master
|
kuma/demos/models.py
|
5
|
from hashlib import md5
import operator
from os import makedirs
from os.path import basename, dirname, isdir, join
from shutil import rmtree, copyfileobj
import re
from time import time
import zipfile
import magic
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.files.storage import FileSystemStorage
from django.db import models
from django.db.models import Q
from django.db.models.fields.files import FieldFile, ImageFieldFile
from django.template.defaultfilters import filesizeformat
from django.utils.text import slugify
from django.utils.translation import ugettext_lazy as _
from constance import config
from constance.admin import FIELDS
from django.utils.functional import lazy
from kuma.actioncounters.fields import ActionCounterField
from kuma.core.managers import NamespacedTaggableManager
from kuma.core.urlresolvers import reverse
from kuma.core.utils import generate_filename_and_delete_previous
from . import challenge_utils, DEMO_LICENSES, scale_image
from .embed import VideoEmbedURLField
SCREENSHOT_MAXW = getattr(settings, 'DEMO_SCREENSHOT_MAX_WIDTH', 480)
SCREENSHOT_MAXH = getattr(settings, 'DEMO_SCREENSHOT_MAX_HEIGHT', 360)
THUMBNAIL_MAXW = getattr(settings, 'DEMO_THUMBNAIL_MAX_WIDTH', 200)
THUMBNAIL_MAXH = getattr(settings, 'DEMO_THUMBNAIL_MAX_HEIGHT', 150)
# Set up a file system for demo uploads that can be kept separate from the
# rest of /media if necessary. Lots of hackery here to ensure a set of
# sensible defaults is tried.
DEMO_UPLOADS_ROOT = getattr(
settings, 'DEMO_UPLOADS_ROOT',
'%s/uploads/demos' % getattr(settings, 'MEDIA_ROOT', 'media'))
DEMO_UPLOADS_URL = getattr(
settings, 'DEMO_UPLOADS_URL',
'%s/uploads/demos' % getattr(settings, 'MEDIA_URL', '/media'))
demo_uploads_fs = FileSystemStorage(location=DEMO_UPLOADS_ROOT, base_url=DEMO_UPLOADS_URL)
DEMO_MIMETYPE_BLACKLIST = getattr(settings, 'DEMO_FILETYPE_BLACKLIST', [
'application/msword',
'application/pdf',
'application/postscript',
'application/vnd.lotus-wordpro',
'application/vnd.ms-cab-compressed',
'application/vnd.ms-excel',
'application/vnd.ms-tnef',
'application/vnd.oasis.opendocument.text',
'application/vnd.symbian.install',
'application/x-123',
'application/x-arc',
'application/x-archive',
'application/x-arj',
'application/x-bittorrent',
'application/x-bzip2',
'application/x-compress',
'application/x-debian-package',
'application/x-dosexec',
'application/x-executable',
'application/x-gzip',
'application/x-iso9660-image',
'application/x-java-applet',
'application/x-java-jce-keystore',
'application/x-java-keystore',
'application/x-java-pack200',
'application/x-lha',
'application/x-lharc',
'application/x-lzip',
'application/x-msaccess',
'application/x-rar',
'application/x-rpm',
'application/x-sc',
'application/x-setupscript.',
'application/x-sharedlib',
'application/x-shockwave-flash',
'application/x-stuffit',
'application/x-tar',
'application/x-zip',
'application/x-xz',
'application/x-zoo',
'application/xml-sitemap',
'application/zip',
'model/vrml',
'model/x3d',
'text/x-msdos-batch',
'text/x-perl',
'text/x-php',
])
LAZY_CONSTANCE_TYPES = list(FIELDS.keys())
LAZY_CONSTANCE_TYPES.remove(unicode) # because we already have str in the list
def _config(name, default=None):
"""
    Just a silly wrapper around constance's config object.
"""
return getattr(config, name, default)
"""
A function to use constance's config object in an environment that
requires lazy values, such as model field parameters.
E.g. something that is a pretty stupid idea but shows the risk as well::
class Entry(models.Model):
title = models.CharField(max_length=config_lazy('ENTRY_MAX_LENGTH'))
.. where ``ENTRY_MAX_LENGTH`` is the name of the config value.
"""
config_lazy = lazy(_config, *LAZY_CONSTANCE_TYPES)
def get_root_for_submission(instance):
"""Build a root path for demo submission files"""
username = instance.creator.username
return join(username[0], username[1], username,
md5(instance.slug).hexdigest())
def screenshot_upload_to(instance, filename, field_filename):
base = get_root_for_submission(instance)
filename = '%s_%s' % (int(time()), field_filename)
return join(base, filename)
def upload_screenshot_1(instance, filename):
return screenshot_upload_to(instance, filename, 'screenshot_1.png')
def upload_screenshot_2(instance, filename):
return screenshot_upload_to(instance, filename, 'screenshot_2.png')
def upload_screenshot_3(instance, filename):
return screenshot_upload_to(instance, filename, 'screenshot_3.png')
def upload_screenshot_4(instance, filename):
return screenshot_upload_to(instance, filename, 'screenshot_4.png')
def upload_screenshot_5(instance, filename):
return screenshot_upload_to(instance, filename, 'screenshot_5.png')
def demo_package_upload_to(instance, filename):
base = get_root_for_submission(instance)
filename = '%s_%s_%s' % (instance.slug[:20], int(time()), 'demo_package.zip')
return join(base, filename)
class ReplacingFieldZipFile(FieldFile):
def delete(self, save=True):
# Delete any unpacked zip file, if found.
new_root_dir = self.path.replace('.zip', '')
if isdir(new_root_dir):
rmtree(new_root_dir)
return super(ReplacingFieldZipFile, self).delete(save)
def save(self, name, content, save=True):
new_filename = generate_filename_and_delete_previous(self, name)
super(ReplacingFieldZipFile, self).save(new_filename, content, save)
def _get_size(self):
"""Override FieldFile size property to return 0 in case of a missing file."""
try:
return super(ReplacingFieldZipFile, self)._get_size()
except OSError:
return 0
size = property(_get_size)
class ReplacingZipFileField(models.FileField):
# TODO:liberate
"""This field causes an uploaded file to replace an existing one on disk."""
attr_class = ReplacingFieldZipFile
def __init__(self, *args, **kwargs):
self.max_upload_size = kwargs.pop('max_upload_size', 5)
super(ReplacingZipFileField, self).__init__(*args, **kwargs)
def clean(self, *args, **kwargs):
data = super(ReplacingZipFileField, self).clean(*args, **kwargs)
file = data.file
try:
if file._size > self.max_upload_size:
raise ValidationError(
_('Please keep filesize under %s. Current filesize %s') %
(filesizeformat(self.max_upload_size), filesizeformat(file._size))
)
except AttributeError:
pass
return data
class ReplacingImageWithThumbFieldFile(ImageFieldFile):
def thumbnail_name(self):
# HACK: This works, but I'm not proud of it
if not self.name:
return ''
parts = self.name.rsplit('.', 1)
return ''.join((parts[0], '_thumb', '.', parts[1]))
def thumbnail_url(self):
if not self.url:
return ''
# HACK: Use legacy thumbnail URL, if new-style file missing.
DEV = getattr(settings, 'DEV', False)
if not DEV and not self.storage.exists(self.thumbnail_name()):
return self.url.replace('screenshot', 'screenshot_thumb')
# HACK: This works, but I'm not proud of it
parts = self.url.rsplit('.', 1)
return ''.join((parts[0], '_thumb', '.', parts[1]))
def delete(self, save=True):
# Delete any associated thumbnail image before deleting primary
t_name = self.thumbnail_name()
if t_name:
self.storage.delete(t_name)
return super(ImageFieldFile, self).delete(save)
def save(self, name, content, save=True):
new_filename = generate_filename_and_delete_previous(self, name)
super(ImageFieldFile, self).save(new_filename, content, save)
# Create associated scaled thumbnail image
t_name = self.thumbnail_name()
if t_name:
thumb_file = scale_image(
self.storage.open(new_filename),
(self.field.thumb_max_width, self.field.thumb_max_height))
self.storage.save(t_name, thumb_file)
class ReplacingImageWithThumbField(models.ImageField):
# TODO:liberate
"""This field causes an uploaded file to replace an existing one on disk."""
attr_class = ReplacingImageWithThumbFieldFile
def __init__(self, *args, **kwargs):
self.full_max_width = kwargs.pop("full_max_width", SCREENSHOT_MAXW)
self.full_max_height = kwargs.pop("full_max_height", SCREENSHOT_MAXH)
self.thumb_max_width = kwargs.pop("thumb_max_width", THUMBNAIL_MAXW)
self.thumb_max_height = kwargs.pop("thumb_max_height", THUMBNAIL_MAXH)
super(ReplacingImageWithThumbField, self).__init__(*args, **kwargs)
def clean(self, *args, **kwargs):
data = super(ReplacingImageWithThumbField, self).clean(*args, **kwargs)
# Scale the input image down to maximum full size.
scaled_file = scale_image(
data.file,
(self.full_max_width, self.full_max_height))
if not scaled_file:
raise ValidationError(_('Cannot process image'))
data.file = scaled_file
return data
class SubmissionManager(models.Manager):
"""Manager for Submission objects"""
def get_by_natural_key(self, slug):
return self.get(slug=slug)
# never show censored submissions
def get_queryset(self):
return super(SubmissionManager, self).get_queryset().exclude(censored=True)
# TODO: Make these search functions into a mixin?
# See: http://www.julienphalip.com/blog/2008/08/16/adding-search-django-site-snap/
def _normalize_query(self, query_string,
findterms=re.compile(r'"([^"]+)"|(\S+)').findall,
normspace=re.compile(r'\s{2,}').sub):
        ''' Splits the query string into individual keywords, getting rid of
        unnecessary spaces and grouping quoted words together.
Example:
>>> _normalize_query(' some random words "with quotes " and spaces')
['some', 'random', 'words', 'with quotes', 'and', 'spaces']
'''
return [normspace(' ', (t[0] or t[1]).strip()) for t in findterms(query_string)]
# See: http://www.julienphalip.com/blog/2008/08/16/adding-search-django-site-snap/
def _get_query(self, query_string, search_fields):
        ''' Returns a query that is a combination of Q objects. The
        combination aims to search for keywords within a model by testing
        the given search fields.
'''
query = None # Query to search for every search term
terms = self._normalize_query(query_string)
for term in terms:
or_query = None # Query to search for a given term in each field
for field_name in search_fields:
q = Q(**{"%s__icontains" % field_name: term})
if or_query is None:
or_query = q
else:
or_query = or_query | q
if query is None:
query = or_query
else:
query = query & or_query
return query
def search(self, query_string, sort):
"""Quick and dirty keyword search on submissions"""
# TODO: Someday, replace this with a real search engine
strip_qs = query_string.strip()
if not strip_qs:
return self.all_sorted(sort).order_by('-modified')
else:
query = self._get_query(strip_qs, ['title', 'summary', 'description'])
return self.all_sorted(sort).filter(query).order_by('-modified')
def all_sorted(self, sort=None, max=5):
"""Apply to .all() one of the sort orders supported for views"""
queryset = self.all()
if sort == 'launches':
return queryset.order_by('-launches_total')
elif sort == 'likes':
return queryset.order_by('-likes_total')
elif sort == 'upandcoming':
return queryset.order_by('-likes_recent', '-launches_recent')
elif sort == 'recentfeatured':
return (queryset.filter(featured=True)
.exclude(hidden=True)
.order_by('-modified')[:max])
else:
return queryset.order_by('-created')
class Submission(models.Model):
"""Representation of a demo submission"""
objects = SubmissionManager()
admin_manager = models.Manager()
title = models.CharField(
_("what is your demo's name?"),
max_length=255, blank=False, unique=True)
slug = models.SlugField(
_("slug"),
blank=False, unique=True, max_length=50)
summary = models.CharField(
_("describe your demo in one line"),
max_length=255, blank=False)
description = models.TextField(
_("describe your demo in more detail (optional)"),
blank=True)
featured = models.BooleanField(default=False)
hidden = models.BooleanField(
_("Hide this demo from others?"), default=False)
censored = models.BooleanField(default=False)
censored_url = models.URLField(
_("Redirect URL for censorship."),
blank=True, null=True)
navbar_optout = models.BooleanField(
_('control how your demo is launched'),
choices=(
(True, _('Disable navigation bar, launch demo in a new window')),
(False, _('Use navigation bar, display demo in <iframe>'))
), default=False)
# FIXME: remove since it's unneeded
comments_total = models.PositiveIntegerField(default=0)
launches = ActionCounterField()
likes = ActionCounterField()
taggit_tags = NamespacedTaggableManager(blank=True)
screenshot_1 = ReplacingImageWithThumbField(
_('Screenshot #1'),
max_length=255,
storage=demo_uploads_fs,
upload_to=upload_screenshot_1,
blank=False)
screenshot_2 = ReplacingImageWithThumbField(
_('Screenshot #2'),
max_length=255,
storage=demo_uploads_fs,
upload_to=upload_screenshot_2,
blank=True)
screenshot_3 = ReplacingImageWithThumbField(
_('Screenshot #3'),
max_length=255,
storage=demo_uploads_fs,
upload_to=upload_screenshot_3,
blank=True)
screenshot_4 = ReplacingImageWithThumbField(
_('Screenshot #4'),
max_length=255,
storage=demo_uploads_fs,
upload_to=upload_screenshot_4,
blank=True)
screenshot_5 = ReplacingImageWithThumbField(
_('Screenshot #5'),
max_length=255,
storage=demo_uploads_fs,
upload_to=upload_screenshot_5,
blank=True)
video_url = VideoEmbedURLField(
_("have a video of your demo in action? (optional)"),
blank=True, null=True)
demo_package = ReplacingZipFileField(
_('select a ZIP file containing your demo'),
max_length=255,
max_upload_size=config_lazy('DEMO_MAX_ZIP_FILESIZE',
60 * 1024 * 1024), # overridden by constance
storage=demo_uploads_fs,
upload_to=demo_package_upload_to,
blank=False)
source_code_url = models.URLField(
_("Is your source code also available somewhere else on the web (e.g., github)? Please share the link."),
blank=True, null=True)
license_name = models.CharField(
_("Select the license that applies to your source code."),
max_length=64, blank=False,
choices=[(license['name'], license['title'])
for license in DEMO_LICENSES.values()]
)
creator = models.ForeignKey(settings.AUTH_USER_MODEL, blank=False, null=True)
created = models.DateTimeField(
_('date created'),
auto_now_add=True, blank=False)
modified = models.DateTimeField(
_('date last modified'),
auto_now=True, blank=False)
def natural_key(self):
return (self.slug,)
def update(self, **kw):
"""
Shortcut for doing an UPDATE on this object.
If _signal=False is in ``kw`` the post_save signal won't be sent.
"""
signal = kw.pop('_signal', True)
cls = self.__class__
using = kw.pop('using', 'default')
for k, v in kw.items():
setattr(self, k, v)
if signal:
# Detect any attribute changes during pre_save and add those to the
# update kwargs.
attrs = dict(self.__dict__)
models.signals.pre_save.send(sender=cls, instance=self)
for k, v in self.__dict__.items():
if attrs[k] != v:
kw[k] = v
setattr(self, k, v)
cls.objects.using(using).filter(pk=self.pk).update(**kw)
if signal:
models.signals.post_save.send(sender=cls, instance=self,
created=False)
def censor(self, url=None):
"""Censor a demo, with optional link to explanation"""
self.censored = True
self.censored_url = url
self.save()
root = join(DEMO_UPLOADS_ROOT, get_root_for_submission(self))
if isdir(root):
rmtree(root)
def __unicode__(self):
return 'Submission "%(title)s"' % dict(title=self.title)
def get_absolute_url(self):
return reverse('kuma.demos.views.detail', kwargs={'slug': self.slug})
def _make_unique_slug(self, **kwargs):
"""
Try to generate a unique 50-character slug.
"""
if self.slug:
slug = self.slug[:50]
else:
slug = slugify(self.title)[:50]
using = kwargs['using'] if 'using' in kwargs else 'default'
existing = Submission.objects.using(using).filter(slug=slug)
if (not existing) or (self.id and self.id in [s.id for s in existing]):
return slug
# If the first 50 characters aren't unique, we chop off the
# last two and try sticking a two-digit number there.
#
# If for some reason we get to 100 demos which all have the
# same first fifty characters in their title, this will
# break. Hopefully that's unlikely enough that it won't be a
# problem, but we can always add a check at the end of the
# while loop or come up with some other method if we actually
# run into it.
base_slug = slug[:-2]
i = 0
while Submission.objects.filter(slug=slug).exists() and i < 100:
slug = "%s%02d" % (base_slug, i)
i += 1
return slug
def save(self, **kwargs):
"""Save the submission, updating slug and screenshot thumbnails"""
self.slug = self._make_unique_slug(**kwargs)
super(Submission, self).save(**kwargs)
def delete(self, using=None):
root = join(DEMO_UPLOADS_ROOT, get_root_for_submission(self))
if isdir(root):
rmtree(root)
super(Submission, self).delete(using)
def clean(self):
if self.demo_package:
Submission.validate_demo_zipfile(self.demo_package)
def next(self):
"""Find the next submission by created time, return None if not found."""
try:
obj = self.get_next_by_created(hidden=False)
return obj
except Submission.DoesNotExist:
return None
def previous(self):
"""Find the previous submission by created time, return None if not found."""
try:
obj = self.get_previous_by_created(hidden=False)
return obj
except Submission.DoesNotExist:
return None
def screenshot_url(self, index='1'):
"""Fetch the screenshot URL for a given index, swallowing errors"""
try:
return getattr(self, 'screenshot_%s' % index).url
        except Exception:
return ''
def thumbnail_url(self, index='1'):
"""Fetch the screenshot thumbnail URL for a given index, swallowing
errors"""
try:
return getattr(self, 'screenshot_%s' % index).thumbnail_url()
        except Exception:
return ''
def get_flags(self):
"""
Assemble status flags, based on featured status and a set of special
tags (eg. for Dev Derby). The flags are assembled in order of display
priority, so the first flag on the list (if any) is the most
important"""
flags = []
# Iterate through known flags based on tag naming convention. Tag flags
# are listed here in order of priority.
tag_flags = ('firstplace', 'secondplace', 'thirdplace', 'finalist')
or_queries = []
for tag_flag in tag_flags:
term = 'system:challenge:%s:' % tag_flag
or_queries.append(Q(**{'name__startswith': term}))
for tag in self.taggit_tags.filter(reduce(operator.or_, or_queries)):
split_tag_name = tag.name.split(':')
if len(split_tag_name) > 2: # the first two items are ['system', 'challenge']
flags.append(split_tag_name[2]) # the third item is the tag name
# Featured is an odd-man-out before we had tags
if self.featured:
flags.append('featured')
return flags
def is_derby_submission(self):
return bool(self.taggit_tags.all_ns('challenge:'))
def challenge_closed(self):
challenge_tags = self.taggit_tags.all_ns('challenge:')
if not challenge_tags or 'challenge:none' in map(str, challenge_tags):
return False
return challenge_utils.challenge_closed(challenge_tags)
@classmethod
def allows_listing_hidden_by(cls, user):
return user.is_staff or user.is_superuser
def allows_viewing_by(self, user):
if not self.censored:
return (user.is_staff or
user.is_superuser or
user.pk == self.creator.pk or
not self.hidden)
def allows_managing_by(self, user):
return user.is_staff or user.is_superuser or user.pk == self.creator.pk
@classmethod
def get_valid_demo_zipfile_entries(cls, zf):
"""Filter a ZIP file's entries for only accepted entries"""
# TODO: Move to zip file field?
return [x for x in zf.infolist() if
not (x.filename.startswith('/') or '/..' in x.filename) and
not (basename(x.filename).startswith('.')) and
x.file_size > 0]
@classmethod
def validate_demo_zipfile(cls, file):
"""Ensure a given file is a valid ZIP file without disallowed file
entries and with an HTML index."""
# TODO: Move to zip file field?
try:
zf = zipfile.ZipFile(file)
        except Exception:
raise ValidationError(_('ZIP file contains no acceptable files'))
if zf.testzip():
raise ValidationError(_('ZIP file corrupted'))
valid_entries = Submission.get_valid_demo_zipfile_entries(zf)
if len(valid_entries) == 0:
raise ValidationError(_('ZIP file contains no acceptable files'))
m_mime = magic.Magic(mime=True)
index_found = False
for zi in valid_entries:
name = zi.filename
# HACK: We're accepting {index,demo}.html as the root index and
# normalizing on unpack
if 'index.html' == name or 'demo.html' == name:
index_found = True
if zi.file_size > config.DEMO_MAX_FILESIZE_IN_ZIP:
raise ValidationError(
_('ZIP file contains a file that is too large: %(filename)s') %
{"filename": name}
)
file_data = zf.read(zi)
# HACK: Sometimes we get "type; charset", even if charset wasn't asked for
file_mime_type = m_mime.from_buffer(file_data).split(';')[0]
extensions = config.DEMO_BLACKLIST_OVERRIDE_EXTENSIONS.split()
override_file_extensions = ['.%s' % extension
for extension in extensions]
if (file_mime_type in DEMO_MIMETYPE_BLACKLIST and
not name.endswith(tuple(override_file_extensions))):
raise ValidationError(
_('ZIP file contains an unacceptable file: %(filename)s') %
{'filename': name})
if not index_found:
raise ValidationError(_('HTML index not found in ZIP'))
def process_demo_package(self):
"""Unpack the demo ZIP file into the appropriate directory, filtering
out any invalid file entries and normalizing demo.html to index.html if
present."""
# TODO: Move to zip file field?
# Derive a directory name from the zip filename, clean up any existing
# directory before unpacking.
new_root_dir = self.demo_package.path.replace('.zip', '')
if isdir(new_root_dir):
rmtree(new_root_dir)
# Load up the zip file and extract the valid entries
zf = zipfile.ZipFile(self.demo_package.file)
valid_entries = Submission.get_valid_demo_zipfile_entries(zf)
for zi in valid_entries:
if type(zi.filename) is unicode:
zi_filename = zi.filename
else:
zi_filename = zi.filename.decode('utf-8', 'ignore')
# HACK: Normalize demo.html to index.html
if zi_filename == u'demo.html':
zi_filename = u'index.html'
# Relocate all files from detected root dir to a directory named
# for the zip file in storage
out_fn = join(new_root_dir, zi_filename)
out_dir = dirname(out_fn)
# Create parent directories where necessary.
if not isdir(out_dir):
makedirs(out_dir.encode('utf-8'), 0775)
# Extract the file from the zip into the desired location.
            fout = open(out_fn.encode('utf-8'), 'wb')
            copyfileobj(zf.open(zi), fout)
            fout.close()
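
# A minimal sketch (not part of the original module) of the manager's keyword
# search; the query string and sort key below are made up:
#
#   demos = Submission.objects.search('canvas "web audio"', sort='likes')
#   for demo in demos[:5]:
#       print demo.title, demo.get_absolute_url()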
|
sadanandb/pmt
|
refs/heads/master
|
src/pyasm/search/sobject_default_config.py
|
6
|
###########################################################
#
# Copyright (c) 2005, Southpaw Technology
# All Rights Reserved
#
# PROPRIETARY INFORMATION. This software is proprietary to
# Southpaw Technology, and is not to be reproduced, transmitted,
# or disclosed in any way without written permission.
#
#
#
__all__ = ['SObjectDefaultConfig']
from pyasm.common import Base, Xml, Environment
from pyasm.search import DbContainer, SearchType, SqlException, Sql
class SObjectDefaultConfig(Base):
'''An artificial config file is made if none are found'''
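    # NOTE: this codebase passes the instance as "my" rather than the
    # conventional "self"; the behaviour is identical.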
def __init__(my, search_type, view, config_base=None, mode="columns"):
my.search_type = search_type
if view:
my.view = view
else:
my.view = config_base
if not my.view:
my.view = "table"
        # bit of protection: ":" has been known to show up in view names
my.view = my.view.replace(":", '_')
#mode = "basic"
my.xml = Xml()
if mode == 'columns':
my.handle_columns_mode()
else:
my.handle_basic_mode()
def get_columns(my, required_only=False):
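        '''Return the table's column names for this search type; with
        required_only, keep only NOT NULL columns (falling back to all
        columns when none are required).'''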
if my.search_type == 'sthpw/virtual':
return []
search_type_obj = SearchType.get(my.search_type)
table = search_type_obj.get_table()
from pyasm.biz import Project
db_resource = Project.get_db_resource_by_search_type(my.search_type)
database_name = db_resource.get_database()
db = DbContainer.get(db_resource)
# table may not exist
try:
all_columns = db.get_columns(table)
columns = []
if required_only:
nullables = db.get_column_nullables(table)
for column in all_columns:
null_ok = nullables.get(column)
if not null_ok:
columns.append(column)
# if there are no required columns
if not columns:
columns = all_columns
else:
columns = all_columns
except SqlException:
Environment.add_warning('missing table', 'Table [%s] does not exist in database [%s]' %(table, database_name))
return []
return columns
def handle_basic_mode(my):
doc = my.xml.create_doc("config")
root = my.xml.get_root_node()
db_columns = my.get_columns()
        if "code" in db_columns:
            columns = ["preview", "code"]
        elif "name" in db_columns:
            columns = ["preview", "name"]
        elif "id" in db_columns:
            columns = ["preview", "id"]
        else:
            # guard against tables with none of the expected columns
            columns = ["preview"]
table = my.xml.create_element("table")
Xml.append_child(root, table)
        for column in columns:
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", column)
Xml.append_child(table, element)
# create the edit
edit = my.xml.create_element("edit")
Xml.append_child(root, edit)
        for column in columns:
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", column)
Xml.append_child(edit, element)
# create the manual publish view
publish = my.xml.create_element("publish")
Xml.append_child(root, publish)
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", "image")
Xml.append_child(publish, element)
dis_element = my.xml.create_element("display")
Xml.set_attribute(dis_element, "class", "ThumbInputWdg")
act_element = my.xml.create_element("action")
Xml.set_attribute(act_element, "class", "NullAction")
Xml.append_child(element, dis_element)
Xml.append_child(element, act_element)
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", "publish_files")
Xml.append_child(publish, element)
dis_element = my.xml.create_element("display")
Xml.set_attribute(dis_element, "class", "UploadWdg")
# add options
option = my.xml.create_text_element('names','publish_icon|publish_main')
Xml.append_child(dis_element, option)
option = my.xml.create_text_element('required','false|true')
Xml.append_child(dis_element, option)
act_element = my.xml.create_element("action")
Xml.set_attribute(act_element, "class", "MultiUploadAction")
# add options
option = my.xml.create_text_element('names','publish_icon|publish_main')
Xml.append_child(act_element, option)
option = my.xml.create_text_element('types','icon_main|main')
Xml.append_child(act_element, option)
Xml.append_child(element, dis_element)
Xml.append_child(element, act_element)
value = my.xml.to_string()
my.xml = Xml()
my.xml.read_string(value)
def handle_columns_mode(my):
doc = my.xml.create_doc("config")
root = my.xml.get_root_node()
        columns = my.get_columns(required_only=True)
if len(columns) == 1 and columns[0] == "id":
columns = my.get_columns(required_only=False)
# create the table
# search is a special view for SearchWdg and it should not be created
if my.view not in ['search','publish']:
table = my.xml.create_element(my.view)
my.xml.append_child(root, table)
for column in columns:
if column in ["_id", "id", "oid", "s_status"]:
continue
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", column)
my.xml.append_child(table, element)
# add history, input and output for the load view (designed for app loading)
if my.view == 'load':
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", "checkin")
my.xml.append_child(table, element)
for column in ['input', 'output']:
element = my.xml.create_element("element")
Xml.set_attribute(element, "name", column)
Xml.set_attribute(element, "edit", "false")
display_element = my.xml.create_element("display")
Xml.set_attribute(display_element, "class", "tactic.ui.cgapp.LoaderElementWdg")
my.xml.append_child(element, display_element)
stype, key = SearchType.break_up_key(my.search_type)
op1 = my.xml.create_text_element("search_type", stype)
op2 = my.xml.create_text_element("mode", column)
my.xml.append_child(display_element, op1)
my.xml.append_child(display_element, op2)
my.xml.append_child(table, element)
value = my.xml.to_string()
my.xml = Xml()
my.xml.read_string(value)
def get_type(my, element_name):
xpath = "config/%s/element[@name='%s']/@type" % (my.view,element_name)
type = my.xml.get_value(xpath)
if not type:
xpath = "config/%s/element[@name='%s']/@type" % ("definition",element_name)
type = my.xml.get_value(xpath)
return type
def get_xml(my):
return my.xml
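
# A minimal usage sketch (not part of the original module); the search type
# below is made up:
#
#   config = SObjectDefaultConfig(search_type='prod/asset', view='table')
#   print config.get_xml().to_string()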
|
britcey/ansible
|
refs/heads/devel
|
contrib/inventory/vmware_inventory.py
|
28
|
#!/usr/bin/env python
# Requirements
# - pyvmomi >= 6.0.0.2016.4
# TODO:
# * more jq examples
# * optional folder hierarchy
"""
$ jq '._meta.hostvars[].config' data.json | head
{
"alternateguestname": "",
"instanceuuid": "5035a5cd-b8e8-d717-e133-2d383eb0d675",
"memoryhotaddenabled": false,
"guestfullname": "Red Hat Enterprise Linux 7 (64-bit)",
"changeversion": "2016-05-16T18:43:14.977925Z",
"uuid": "4235fc97-5ddb-7a17-193b-9a3ac97dc7b4",
"cpuhotremoveenabled": false,
"vpmcenabled": false,
"firmware": "bios",
"""
from __future__ import print_function
import argparse
import atexit
import datetime
import getpass
import os
import re
import six
import ssl
import sys
import uuid
from collections import defaultdict
from six.moves import configparser
from time import time
from jinja2 import Environment
HAS_PYVMOMI = False
try:
from pyVmomi import vim
from pyVim.connect import SmartConnect, Disconnect
HAS_PYVMOMI = True
except ImportError:
pass
try:
import json
except ImportError:
import simplejson as json
hasvcr = False
try:
import vcr
hasvcr = True
except ImportError:
pass
def regex_match(s, pattern):
'''Custom filter for regex matching'''
reg = re.compile(pattern)
if reg.match(s):
return True
else:
return False
class VMwareMissingHostException(Exception):
pass
class VMWareInventory(object):
__name__ = 'VMWareInventory'
guest_props = False
instances = []
debug = False
load_dumpfile = None
write_dumpfile = None
maxlevel = 1
lowerkeys = True
config = None
cache_max_age = None
cache_path_cache = None
cache_path_index = None
cache_dir = None
server = None
port = None
username = None
password = None
validate_certs = True
host_filters = []
skip_keys = []
groupby_patterns = []
if sys.version_info > (3, 0):
safe_types = [int, bool, str, float, None]
else:
safe_types = [int, long, bool, str, float, None]
iter_types = [dict, list]
bad_types = ['Array', 'disabledMethod', 'declaredAlarmState']
vimTableMaxDepth = {
"vim.HostSystem": 2,
"vim.VirtualMachine": 2,
}
custom_fields = {}
# use jinja environments to allow for custom filters
env = Environment()
env.filters['regex_match'] = regex_match
# translation table for attributes to fetch for known vim types
if not HAS_PYVMOMI:
vimTable = {}
else:
vimTable = {
vim.Datastore: ['_moId', 'name'],
vim.ResourcePool: ['_moId', 'name'],
vim.HostSystem: ['_moId', 'name'],
}
@staticmethod
def _empty_inventory():
return {"_meta": {"hostvars": {}}}
def __init__(self, load=True):
self.inventory = VMWareInventory._empty_inventory()
if load:
# Read settings and parse CLI arguments
self.parse_cli_args()
self.read_settings()
# Check the cache
cache_valid = self.is_cache_valid()
# Handle Cache
if self.args.refresh_cache or not cache_valid:
self.do_api_calls_update_cache()
else:
self.debugl('loading inventory from cache')
self.inventory = self.get_inventory_from_cache()
def debugl(self, text):
if self.args.debug:
try:
text = str(text)
except UnicodeEncodeError:
text = text.encode('ascii', 'ignore')
print('%s %s' % (datetime.datetime.now(), text))
def show(self):
# Data to print
self.debugl('dumping results')
data_to_print = None
if self.args.host:
data_to_print = self.get_host_info(self.args.host)
elif self.args.list:
# Display list of instances for inventory
data_to_print = self.inventory
return json.dumps(data_to_print, indent=2)
def is_cache_valid(self):
''' Determines if the cache files have expired, or if it is still valid '''
valid = False
if os.path.isfile(self.cache_path_cache):
mod_time = os.path.getmtime(self.cache_path_cache)
current_time = time()
if (mod_time + self.cache_max_age) > current_time:
valid = True
return valid
def do_api_calls_update_cache(self):
''' Get instances and cache the data '''
self.inventory = self.instances_to_inventory(self.get_instances())
self.write_to_cache(self.inventory)
def write_to_cache(self, data):
''' Dump inventory to json file '''
        with open(self.cache_path_cache, 'w') as f:
f.write(json.dumps(data))
def get_inventory_from_cache(self):
''' Read in jsonified inventory '''
jdata = None
with open(self.cache_path_cache, 'rb') as f:
jdata = f.read()
return json.loads(jdata)
def read_settings(self):
''' Reads the settings from the vmware_inventory.ini file '''
scriptbasename = __file__
scriptbasename = os.path.basename(scriptbasename)
scriptbasename = scriptbasename.replace('.py', '')
defaults = {'vmware': {
'server': '',
'port': 443,
'username': '',
'password': '',
'validate_certs': True,
'ini_path': os.path.join(os.path.dirname(__file__), '%s.ini' % scriptbasename),
'cache_name': 'ansible-vmware',
'cache_path': '~/.ansible/tmp',
'cache_max_age': 3600,
'max_object_level': 1,
'skip_keys': 'declaredalarmstate,'
'disabledmethod,'
'dynamicproperty,'
'dynamictype,'
'environmentbrowser,'
'managedby,'
'parent,'
'childtype,'
'resourceconfig',
'alias_pattern': '{{ config.name + "_" + config.uuid }}',
'host_pattern': '{{ guest.ipaddress }}',
'host_filters': '{{ guest.gueststate == "running" }}',
'groupby_patterns': '{{ guest.guestid }},{{ "templates" if config.template else "guests"}}',
'lower_var_keys': True,
'custom_field_group_prefix': 'vmware_tag_',
'groupby_custom_field': False}
}
if six.PY3:
config = configparser.ConfigParser()
else:
config = configparser.SafeConfigParser()
# where is the config?
vmware_ini_path = os.environ.get('VMWARE_INI_PATH', defaults['vmware']['ini_path'])
vmware_ini_path = os.path.expanduser(os.path.expandvars(vmware_ini_path))
config.read(vmware_ini_path)
# apply defaults
for k, v in defaults['vmware'].items():
if not config.has_option('vmware', k):
config.set('vmware', k, str(v))
# where is the cache?
self.cache_dir = os.path.expanduser(config.get('vmware', 'cache_path'))
if self.cache_dir and not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir)
# set the cache filename and max age
cache_name = config.get('vmware', 'cache_name')
self.cache_path_cache = self.cache_dir + "/%s.cache" % cache_name
self.debugl('cache path is %s' % self.cache_path_cache)
self.cache_max_age = int(config.getint('vmware', 'cache_max_age'))
# mark the connection info
self.server = os.environ.get('VMWARE_SERVER', config.get('vmware', 'server'))
self.debugl('server is %s' % self.server)
self.port = int(os.environ.get('VMWARE_PORT', config.get('vmware', 'port')))
self.username = os.environ.get('VMWARE_USERNAME', config.get('vmware', 'username'))
self.debugl('username is %s' % self.username)
self.password = os.environ.get('VMWARE_PASSWORD', config.get('vmware', 'password'))
self.validate_certs = os.environ.get('VMWARE_VALIDATE_CERTS', config.get('vmware', 'validate_certs'))
if self.validate_certs in ['no', 'false', 'False', False]:
self.validate_certs = False
self.debugl('cert validation is %s' % self.validate_certs)
# behavior control
self.maxlevel = int(config.get('vmware', 'max_object_level'))
self.debugl('max object level is %s' % self.maxlevel)
self.lowerkeys = config.get('vmware', 'lower_var_keys')
if type(self.lowerkeys) != bool:
if str(self.lowerkeys).lower() in ['yes', 'true', '1']:
self.lowerkeys = True
else:
self.lowerkeys = False
self.debugl('lower keys is %s' % self.lowerkeys)
self.skip_keys = list(config.get('vmware', 'skip_keys').split(','))
self.debugl('skip keys is %s' % self.skip_keys)
self.host_filters = list(config.get('vmware', 'host_filters').split(','))
self.debugl('host filters are %s' % self.host_filters)
self.groupby_patterns = list(config.get('vmware', 'groupby_patterns').split(','))
self.debugl('groupby patterns are %s' % self.groupby_patterns)
        # Special feature to disable the brute-force serialization of the
        # virtualmachine objects. The key name for these properties does not
        # matter because the values are just items for a larger list.
if config.has_section('properties'):
self.guest_props = []
for prop in config.items('properties'):
self.guest_props.append(prop[1])
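        # An illustrative (made-up) ini section opting in to a fixed property
        # list -- the key names are ignored, only the values matter:
        #
        #   [properties]
        #   prop01 = name
        #   prop02 = config.uuid
        #   prop03 = guest.ipAddress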
# save the config
self.config = config
def parse_cli_args(self):
''' Command line argument processing '''
parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on PyVmomi')
parser.add_argument('--debug', action='store_true', default=False,
help='show debug info')
parser.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
parser.add_argument('--host', action='store',
help='Get all the variables about a specific instance')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help='Force refresh of cache by making API requests to VSphere (default: False - use cache files)')
parser.add_argument('--max-instances', default=None, type=int,
help='maximum number of instances to retrieve')
self.args = parser.parse_args()
def get_instances(self):
''' Get a list of vm instances with pyvmomi '''
kwargs = {'host': self.server,
'user': self.username,
'pwd': self.password,
'port': int(self.port)}
if hasattr(ssl, 'SSLContext') and not self.validate_certs:
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.verify_mode = ssl.CERT_NONE
kwargs['sslContext'] = context
return self._get_instances(kwargs)
def _get_instances(self, inkwargs):
''' Make API calls '''
instances = []
si = SmartConnect(**inkwargs)
self.debugl('retrieving all instances')
if not si:
print("Could not connect to the specified host using specified "
"username and password")
return -1
atexit.register(Disconnect, si)
content = si.RetrieveContent()
# Create a search container for virtualmachines
self.debugl('creating containerview for virtualmachines')
container = content.rootFolder
viewType = [vim.VirtualMachine]
recursive = True
containerView = content.viewManager.CreateContainerView(container, viewType, recursive)
children = containerView.view
for child in children:
# If requested, limit the total number of instances
if self.args.max_instances:
if len(instances) >= self.args.max_instances:
break
instances.append(child)
self.debugl("%s total instances in container view" % len(instances))
if self.args.host:
instances = [x for x in instances if x.name == self.args.host]
instance_tuples = []
for instance in sorted(instances):
if self.guest_props:
ifacts = self.facts_from_proplist(instance)
else:
ifacts = self.facts_from_vobj(instance)
instance_tuples.append((instance, ifacts))
self.debugl('facts collected for all instances')
cfm = content.customFieldsManager
if cfm is not None and cfm.field:
for f in cfm.field:
if f.managedObjectType == vim.VirtualMachine:
self.custom_fields[f.key] = f.name
        self.debugl('%d custom fields collected' % len(self.custom_fields))
return instance_tuples
def instances_to_inventory(self, instances):
''' Convert a list of vm objects into a json compliant inventory '''
self.debugl('re-indexing instances based on ini settings')
inventory = VMWareInventory._empty_inventory()
inventory['all'] = {}
inventory['all']['hosts'] = []
for idx, instance in enumerate(instances):
            # make a unique id for this object to avoid vmware's
            # numerous uuids which aren't all unique.
thisid = str(uuid.uuid4())
idata = instance[1]
# Put it in the inventory
inventory['all']['hosts'].append(thisid)
inventory['_meta']['hostvars'][thisid] = idata.copy()
inventory['_meta']['hostvars'][thisid]['ansible_uuid'] = thisid
# Make a map of the uuid to the alias the user wants
name_mapping = self.create_template_mapping(
inventory,
self.config.get('vmware', 'alias_pattern')
)
# Make a map of the uuid to the ssh hostname the user wants
host_mapping = self.create_template_mapping(
inventory,
self.config.get('vmware', 'host_pattern')
)
# Reset the inventory keys
for k, v in name_mapping.items():
            if not host_mapping or k not in host_mapping:
continue
# set ansible_host (2.x)
try:
inventory['_meta']['hostvars'][k]['ansible_host'] = host_mapping[k]
# 1.9.x backwards compliance
inventory['_meta']['hostvars'][k]['ansible_ssh_host'] = host_mapping[k]
except Exception:
continue
if k == v:
continue
# add new key
inventory['all']['hosts'].append(v)
inventory['_meta']['hostvars'][v] = inventory['_meta']['hostvars'][k]
# cleanup old key
inventory['all']['hosts'].remove(k)
inventory['_meta']['hostvars'].pop(k, None)
self.debugl('pre-filtered hosts:')
for i in inventory['all']['hosts']:
self.debugl(' * %s' % i)
# Apply host filters
for hf in self.host_filters:
if not hf:
continue
self.debugl('filter: %s' % hf)
filter_map = self.create_template_mapping(inventory, hf, dtype='boolean')
for k, v in filter_map.items():
if not v:
# delete this host
inventory['all']['hosts'].remove(k)
inventory['_meta']['hostvars'].pop(k, None)
self.debugl('post-filter hosts:')
for i in inventory['all']['hosts']:
self.debugl(' * %s' % i)
# Create groups
for gbp in self.groupby_patterns:
groupby_map = self.create_template_mapping(inventory, gbp)
for k, v in groupby_map.items():
if v not in inventory:
inventory[v] = {}
inventory[v]['hosts'] = []
if k not in inventory[v]['hosts']:
inventory[v]['hosts'].append(k)
        if self.config.getboolean('vmware', 'groupby_custom_field'):
for k, v in inventory['_meta']['hostvars'].items():
if 'customvalue' in v:
for tv in v['customvalue']:
if not isinstance(tv['value'], str) and not isinstance(tv['value'], unicode):
continue
newkey = None
field_name = self.custom_fields[tv['key']] if tv['key'] in self.custom_fields else tv['key']
values = []
keylist = map(lambda x: x.strip(), tv['value'].split(','))
for kl in keylist:
try:
newkey = self.config.get('vmware', 'custom_field_group_prefix') + field_name + '_' + kl
newkey = newkey.strip()
except Exception as e:
self.debugl(e)
values.append(newkey)
for tag in values:
if not tag:
continue
if tag not in inventory:
inventory[tag] = {}
inventory[tag]['hosts'] = []
if k not in inventory[tag]['hosts']:
inventory[tag]['hosts'].append(k)
return inventory
def create_template_mapping(self, inventory, pattern, dtype='string'):
''' Return a hash of uuid to templated string from pattern '''
mapping = {}
for k, v in inventory['_meta']['hostvars'].items():
t = self.env.from_string(pattern)
newkey = None
try:
newkey = t.render(v)
newkey = newkey.strip()
except Exception as e:
self.debugl(e)
if not newkey:
continue
elif dtype == 'integer':
newkey = int(newkey)
elif dtype == 'boolean':
if newkey.lower() == 'false':
newkey = False
elif newkey.lower() == 'true':
newkey = True
elif dtype == 'string':
pass
mapping[k] = newkey
return mapping
def facts_from_proplist(self, vm):
'''Get specific properties instead of serializing everything'''
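        # e.g. a configured property path 'config.hardware.memoryMB' ends up
        # as rdata['config']['hardware']['memorymb'] when lower_var_keys is on.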
rdata = {}
for prop in self.guest_props:
self.debugl('getting %s property for %s' % (prop, vm.name))
key = prop
if self.lowerkeys:
key = key.lower()
if '.' not in prop:
# props without periods are direct attributes of the parent
rdata[key] = getattr(vm, prop)
else:
# props with periods are subkeys of parent attributes
parts = prop.split('.')
total = len(parts) - 1
# pointer to the current object
val = None
# pointer to the current result key
lastref = rdata
for idx, x in enumerate(parts):
# if the val wasn't set yet, get it from the parent
if not val:
try:
val = getattr(vm, x)
except AttributeError as e:
self.debugl(e)
else:
# in a subkey, get the subprop from the previous attrib
try:
val = getattr(val, x)
except AttributeError as e:
self.debugl(e)
# lowercase keys if requested
if self.lowerkeys:
x = x.lower()
# change the pointer or set the final value
if idx != total:
if x not in lastref:
lastref[x] = {}
lastref = lastref[x]
else:
lastref[x] = val
return rdata
def facts_from_vobj(self, vobj, level=0):
''' Traverse a VM object and return a json compliant data structure '''
# pyvmomi objects are not yet serializable, but may be one day ...
# https://github.com/vmware/pyvmomi/issues/21
# WARNING:
# Accessing an object attribute will trigger a SOAP call to the remote.
# Increasing the attributes collected or the depth of recursion greatly
# increases runtime duration and potentially memory+network utilization.
if level == 0:
try:
self.debugl("get facts for %s" % vobj.name)
except Exception as e:
self.debugl(e)
rdata = {}
methods = dir(vobj)
methods = [str(x) for x in methods if not x.startswith('_')]
methods = [x for x in methods if x not in self.bad_types]
methods = [x for x in methods if not x.lower() in self.skip_keys]
methods = sorted(methods)
for method in methods:
# Attempt to get the method, skip on fail
try:
methodToCall = getattr(vobj, method)
except Exception as e:
continue
# Skip callable methods
if callable(methodToCall):
continue
if self.lowerkeys:
method = method.lower()
rdata[method] = self._process_object_types(
methodToCall,
thisvm=vobj,
inkey=method,
)
return rdata
def _process_object_types(self, vobj, thisvm=None, inkey=None, level=0):
''' Serialize an object '''
rdata = {}
if type(vobj).__name__ in self.vimTableMaxDepth and level >= self.vimTableMaxDepth[type(vobj).__name__]:
return rdata
if vobj is None:
rdata = None
elif type(vobj) in self.vimTable:
rdata = {}
for key in self.vimTable[type(vobj)]:
try:
rdata[key] = getattr(vobj, key)
except Exception as e:
self.debugl(e)
elif issubclass(type(vobj), str) or isinstance(vobj, str):
if vobj.isalnum():
rdata = vobj
else:
rdata = vobj.decode('ascii', 'ignore')
elif issubclass(type(vobj), bool) or isinstance(vobj, bool):
rdata = vobj
elif issubclass(type(vobj), int) or isinstance(vobj, int):
rdata = vobj
elif issubclass(type(vobj), float) or isinstance(vobj, float):
rdata = vobj
elif issubclass(type(vobj), long) or isinstance(vobj, long):
rdata = vobj
elif issubclass(type(vobj), list) or issubclass(type(vobj), tuple):
rdata = []
try:
vobj = sorted(vobj)
except Exception:
pass
for idv, vii in enumerate(vobj):
if level + 1 <= self.maxlevel:
vid = self._process_object_types(
vii,
thisvm=thisvm,
inkey=inkey + '[' + str(idv) + ']',
level=(level + 1)
)
if vid:
rdata.append(vid)
elif issubclass(type(vobj), dict):
pass
elif issubclass(type(vobj), object):
methods = dir(vobj)
methods = [str(x) for x in methods if not x.startswith('_')]
methods = [x for x in methods if x not in self.bad_types]
methods = [x for x in methods if not inkey + '.' + x.lower() in self.skip_keys]
methods = sorted(methods)
for method in methods:
# Attempt to get the method, skip on fail
try:
methodToCall = getattr(vobj, method)
except Exception:
continue
if callable(methodToCall):
continue
if self.lowerkeys:
method = method.lower()
if level + 1 <= self.maxlevel:
try:
rdata[method] = self._process_object_types(
methodToCall,
thisvm=thisvm,
inkey=inkey + '.' + method,
level=(level + 1)
)
except vim.fault.NoPermission:
self.debugl("Skipping method %s (NoPermission)" % method)
return rdata
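# Depth-limit sketch (assuming maxlevel is a small integer set on the class):
# each recursive call passes level + 1, and recursion is skipped once
# level + 1 > self.maxlevel, so deeply nested attributes are silently dropped
# rather than triggering further SOAP round trips.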
def get_host_info(self, host):
''' Return hostvars for a single host '''
if host in self.inventory['_meta']['hostvars']:
return self.inventory['_meta']['hostvars'][host]
elif self.args.host and self.inventory['_meta']['hostvars']:
match = None
for k, v in self.inventory['_meta']['hostvars'].items():
if v['name'] == self.args.host:
match = k
break
if match:
return self.inventory['_meta']['hostvars'][match]
else:
raise VMwareMissingHostException('%s not found' % host)
else:
raise VMwareMissingHostException('%s not found' % host)
if __name__ == "__main__":
# Run the script
print(VMWareInventory().show())
|
cernops/nova
|
refs/heads/master
|
nova/virt/virtapi.py
|
23
|
# Copyright 2012 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
class VirtAPI(object):
@contextlib.contextmanager
def wait_for_instance_event(self, instance, event_names, deadline=300,
error_callback=None):
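# Sketch of the expected contract (inferred from the signature; concrete
# drivers such as the compute manager override this): yield control so the
# caller can trigger the operation, then block on exit until every event in
# event_names arrives or `deadline` seconds elapse, reporting failures via
# error_callback.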
raise NotImplementedError()
|
haidarvm/hadits
|
refs/heads/master
|
assets/post.py
|
1
|
from urllib.request import Request, urlopen
from urllib import parse, error
url = 'http://192.168.12.30/post.php'
value = {"test" : 'testing' , "data" : 'haidar123' }
parse_data = parse.urlencode(value)
try:
ropen = urlopen(url, parse_data.encode('utf-8'))
print(ropen.read())
except error.URLError as e:
# URLError has no read() method; report the failure reason instead
print("URL Error:", e.reason, url)
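# Illustrative alternative (sketch only): the Request class imported above is
# otherwise unused; it could carry explicit headers for the same POST, e.g.
#   req = Request(url, parse_data.encode('utf-8'),
#                 headers={'Content-Type': 'application/x-www-form-urlencoded'})
#   print(urlopen(req).read())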
|
HaebinShin/tensorflow
|
refs/heads/master
|
tensorflow/python/kernel_tests/sparse_ops_test.py
|
1
|
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Python ops defined in sparse_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.platform import googletest
# TODO(zongheng): it'd be great to factor out this function and various random
# SparseTensor gen funcs.
def _sparsify(x, thresh=0.5, index_dtype=np.int64):
x[x < thresh] = 0
non_zero = np.where(x)
x_indices = np.vstack(non_zero).astype(index_dtype).T
x_values = x[non_zero]
x_shape = x.shape
return ops.SparseTensor(
indices=x_indices, values=x_values, shape=x_shape), len(x_values)
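# Worked example (illustrative, not used by the tests): for
# x = np.array([[0.9, 0.1], [0.2, 0.7]]) and thresh=0.5, only 0.9 and 0.7
# survive the masking, so the indices become [[0, 0], [1, 1]], the values
# [0.9, 0.7], and the function returns the SparseTensor plus nnz == 2.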
class SparseToIndicatorTest(test_util.TensorFlowTestCase):
def _SparseTensor_5x6(self, dtype):
ind = np.array([
[0, 0],
[1, 0], [1, 3], [1, 4],
[3, 2], [3, 3]])
val = np.array([0, 10, 13, 14, 32, 33])
shape = np.array([5, 6])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtype),
constant_op.constant(shape, dtypes.int64))
def _SparseTensor_2x3x4(self, dtype):
# Includes two entries with the form [1, 1, x] : 150.
ind = np.array([
[0, 0, 1],
[0, 1, 0],
[0, 1, 2],
[1, 0, 3],
[1, 1, 0],
[1, 1, 1],
[1, 1, 2],
[1, 2, 2]])
val = np.array([1, 10, 12, 103, 150, 149, 150, 122])
shape = np.array([2, 3, 4])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtype),
constant_op.constant(shape, dtypes.int64))
def testInt32(self):
with self.test_session(use_gpu=False):
sp_input = self._SparseTensor_5x6(dtypes.int32)
output = sparse_ops.sparse_to_indicator(sp_input, 50).eval()
expected_output = np.zeros((5, 50), dtype=np.bool)
expected_trues = ((0, 0), (1, 10), (1, 13), (1, 14), (3, 32), (3, 33))
for expected_true in expected_trues:
expected_output[expected_true] = True
self.assertAllEqual(output, expected_output)
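# The mapping being asserted: sparse_to_indicator turns each value v in row r
# into a True at position [r, v] of the dense indicator, which is why the
# value 10 at row 1 above appears as the expected_trues entry (1, 10).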
def testInt64(self):
with self.test_session(use_gpu=False):
sp_input = self._SparseTensor_5x6(dtypes.int64)
output = sparse_ops.sparse_to_indicator(sp_input, 50).eval()
expected_output = np.zeros((5, 50), dtype=np.bool)
expected_trues = [(0, 0), (1, 10), (1, 13), (1, 14), (3, 32), (3, 33)]
for expected_true in expected_trues:
expected_output[expected_true] = True
self.assertAllEqual(output, expected_output)
def testHigherRank(self):
with self.test_session(use_gpu=False):
sp_input = self._SparseTensor_2x3x4(dtypes.int64)
output = sparse_ops.sparse_to_indicator(sp_input, 200).eval()
expected_output = np.zeros((2, 3, 200), dtype=np.bool)
expected_trues = [(0, 0, 1), (0, 1, 10), (0, 1, 12),
(1, 0, 103), (1, 1, 149), (1, 1, 150),
(1, 2, 122)]
for expected_true in expected_trues:
expected_output[expected_true] = True
self.assertAllEqual(output, expected_output)
class SparseMergeTest(test_util.TensorFlowTestCase):
def _SparseTensor_3x50(self, indices_dtype, values_dtype):
ind = np.array([
[0, 0],
[1, 0], [1, 1], [1, 2],
[2, 0], [2, 1]])
# NB: these are not sorted
indices = np.array([0, 13, 10, 14, 32, 33])
values = np.array([-3, 4, 1, 1, 5, 9])
shape = np.array([3, 3])
indices = ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(indices, indices_dtype),
constant_op.constant(shape, dtypes.int64))
values = ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(values, values_dtype),
constant_op.constant(shape, dtypes.int64))
return indices, values
def testInt32AndFloat32(self):
vocab_size = 50
with self.test_session(use_gpu=False) as sess:
indices, values = self._SparseTensor_3x50(dtypes.int32, dtypes.float32)
sp_output = sparse_ops.sparse_merge(indices, values, vocab_size)
output = sess.run(sp_output)
self.assertAllEqual(
output.indices,
[[0, 0], [1, 10], [1, 13], [1, 14], [2, 32], [2, 33]])
self.assertAllEqual(
output.values,
[-3, 1, 4, 1, 5, 9])
self.assertAllEqual(
output.shape,
[3, vocab_size])
def testInt64AndFloat32(self):
vocab_size = 50
with self.test_session(use_gpu=False) as sess:
indices, values = self._SparseTensor_3x50(dtypes.int64, dtypes.float32)
sp_output = sparse_ops.sparse_merge(indices, values, vocab_size)
output = sess.run(sp_output)
self.assertAllEqual(
output.indices,
[[0, 0], [1, 10], [1, 13], [1, 14], [2, 32], [2, 33]])
self.assertAllEqual(
output.values,
[-3, 1, 4, 1, 5, 9])
self.assertAllEqual(
output.shape,
[3, vocab_size])
def testInt64AndFloat64(self):
vocab_size = 50
with self.test_session(use_gpu=False) as sess:
indices, values = self._SparseTensor_3x50(dtypes.int64, dtypes.float64)
sp_output = sparse_ops.sparse_merge(indices, values, vocab_size)
output = sess.run(sp_output)
self.assertAllEqual(
output.indices,
[[0, 0], [1, 10], [1, 13], [1, 14], [2, 32], [2, 33]])
self.assertAllEqual(
output.values,
[-3, 1, 4, 1, 5, 9])
self.assertAllEqual(
output.shape,
[3, vocab_size])
class SparseRetainTest(test_util.TensorFlowTestCase):
def _SparseTensor_5x6(self):
ind = np.array([
[0, 0],
[1, 0], [1, 3], [1, 4],
[3, 2], [3, 3]])
val = np.array([0, 10, 13, 14, 32, 33])
shape = np.array([5, 6])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtypes.int32),
constant_op.constant(shape, dtypes.int64))
def testBasic(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_5x6()
to_retain = np.array([1, 0, 0, 1, 1, 0], dtype=np.bool)
sp_output = sparse_ops.sparse_retain(sp_input, to_retain)
output = sess.run(sp_output)
self.assertAllEqual(output.indices, [[0, 0], [1, 4], [3, 2]])
self.assertAllEqual(output.values, [0, 14, 32])
self.assertAllEqual(output.shape, [5, 6])
def testRetainNone(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_5x6()
to_retain = np.zeros((6,), dtype=np.bool)
sp_output = sparse_ops.sparse_retain(sp_input, to_retain)
output = sess.run(sp_output)
self.assertAllEqual(output.indices, np.array([]).reshape((0, 2)))
self.assertAllEqual(output.values, [])
self.assertAllEqual(output.shape, [5, 6])
def testMismatchedRetainShape(self):
with self.test_session(use_gpu=False):
sp_input = self._SparseTensor_5x6()
to_retain = np.array([1, 0, 0, 1, 0], dtype=np.bool)
with self.assertRaises(ValueError):
sparse_ops.sparse_retain(sp_input, to_retain)
class SparseResetShapeTest(test_util.TensorFlowTestCase):
_IND_2_5_6 = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 3], [1, 1, 4],
[1, 3, 2], [1, 3, 3]], dtype=np.int64)
_VAL_2_5_6 = np.array([0, 10, 13, 14, 32, 33], dtype=np.int32)
_SHP_2_5_6 = np.array([2, 5, 6], dtype=np.int64)
def _SparseTensor_2x5x6(self):
return ops.SparseTensor(
constant_op.constant(self._IND_2_5_6, dtypes.int64),
constant_op.constant(self._VAL_2_5_6, dtypes.int32),
constant_op.constant(self._SHP_2_5_6, dtypes.int64))
def _SparseTensorValue_2x5x6(self):
return ops.SparseTensorValue(self._IND_2_5_6, self._VAL_2_5_6,
self._SHP_2_5_6)
def testBasic(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_2x5x6()
new_shape = np.array([3, 6, 7], dtype=np.int64)
sp_output = sparse_ops.sparse_reset_shape(sp_input, new_shape)
output = sess.run(sp_output)
self.assertAllEqual(output.indices, [[0, 0, 0], [0, 1, 0],
[0, 1, 3], [1, 1, 4],
[1, 3, 2], [1, 3, 3]])
self.assertAllEqual(output.values, [0, 10, 13, 14, 32, 33])
self.assertAllEqual(output.shape, [3, 6, 7])
def testInputUnavailableInGraphConstructionOk(self):
with self.test_session(use_gpu=False) as sess:
sp_input = array_ops.sparse_placeholder(dtype=dtypes.int32)
new_shape = np.array([3, 6, 7], dtype=np.int64)
sp_output = sparse_ops.sparse_reset_shape(sp_input, new_shape)
output = sess.run(sp_output,
feed_dict={sp_input: self._SparseTensorValue_2x5x6()})
self.assertAllEqual(output.indices, [[0, 0, 0], [0, 1, 0],
[0, 1, 3], [1, 1, 4],
[1, 3, 2], [1, 3, 3]])
self.assertAllEqual(output.values, [0, 10, 13, 14, 32, 33])
self.assertAllEqual(output.shape, [3, 6, 7])
def testTightBoundingBox(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_2x5x6()
sp_output = sparse_ops.sparse_reset_shape(sp_input)
output = sess.run(sp_output)
self.assertAllEqual(output.indices, [[0, 0, 0], [0, 1, 0],
[0, 1, 3], [1, 1, 4],
[1, 3, 2], [1, 3, 3]])
self.assertAllEqual(output.values, [0, 10, 13, 14, 32, 33])
self.assertAllEqual(output.shape, [2, 4, 5])
def testInvalidRank(self):
with self.test_session(use_gpu=False):
sp_input = self._SparseTensor_2x5x6()
new_shape = np.array([3, 7], dtype=np.int64)
with self.assertRaises(ValueError):
sparse_ops.sparse_reset_shape(sp_input, new_shape)
def testInvalidRankNewShapeUnavailableInGraphConstruction(self):
with self.test_session(use_gpu=False) as sess:
new_shape = array_ops.placeholder(dtype=dtypes.int64)
sp_input = self._SparseTensor_2x5x6()
out = sparse_ops.sparse_reset_shape(sp_input, new_shape)
with self.assertRaisesOpError("x == y did not hold element-wise"):
sess.run(out, feed_dict={new_shape: np.array([3, 7], dtype=np.int64)})
def testInvalidDimensionSize(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_2x5x6()
new_shape = np.array([3, 7, 5], dtype=np.int64)
out = sparse_ops.sparse_reset_shape(sp_input, new_shape)
with self.assertRaisesOpError("x <= y did not hold element-wise"):
sess.run(out)
def testInvalidDimensionSizeInputUnavailableInGraphConstruction(self):
sp_input = array_ops.sparse_placeholder(dtype=dtypes.int32)
with self.test_session(use_gpu=False) as sess:
new_shape = np.array([3, 7, 5], dtype=np.int64)
out = sparse_ops.sparse_reset_shape(sp_input, new_shape)
with self.assertRaisesOpError("x <= y did not hold element-wise"):
sess.run(out, feed_dict={sp_input: self._SparseTensorValue_2x5x6()})
class SparseFillEmptyRowsTest(test_util.TensorFlowTestCase):
def _SparseTensor_5x6(self):
ind = np.array([
[0, 0],
[1, 0], [1, 3], [1, 4],
[3, 2], [3, 3]])
val = np.array([0, 10, 13, 14, 32, 33])
shape = np.array([5, 6])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtypes.int32),
constant_op.constant(shape, dtypes.int64))
def _SparseTensor_String5x6(self):
ind = np.array([
[0, 0],
[1, 0], [1, 3], [1, 4],
[3, 2], [3, 3]])
val = np.array(["a", "b", "c", "d", "e", "f"])
shape = np.array([5, 6])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtypes.string),
constant_op.constant(shape, dtypes.int64))
def _SparseTensor_2x6(self):
ind = np.array([[0, 0], [1, 0], [1, 3], [1, 4]])
val = np.array([0, 10, 13, 14])
shape = np.array([2, 6])
return ops.SparseTensor(
constant_op.constant(ind, dtypes.int64),
constant_op.constant(val, dtypes.int32),
constant_op.constant(shape, dtypes.int64))
def testFillNumber(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_5x6()
sp_output, empty_row_indicator = (
sparse_ops.sparse_fill_empty_rows(sp_input, -1))
output, empty_row_indicator_out = sess.run(
[sp_output, empty_row_indicator])
self.assertAllEqual(
output.indices,
[[0, 0], [1, 0], [1, 3], [1, 4], [2, 0], [3, 2], [3, 3], [4, 0]])
self.assertAllEqual(output.values, [0, 10, 13, 14, -1, 32, 33, -1])
self.assertAllEqual(output.shape, [5, 6])
self.assertAllEqual(empty_row_indicator_out,
np.array([0, 0, 1, 0, 1]).astype(np.bool))
def testFillString(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_String5x6()
sp_output, empty_row_indicator = (
sparse_ops.sparse_fill_empty_rows(sp_input, ""))
output, empty_row_indicator_out = sess.run(
[sp_output, empty_row_indicator])
self.assertAllEqual(
output.indices,
[[0, 0], [1, 0], [1, 3], [1, 4], [2, 0], [3, 2], [3, 3], [4, 0]])
self.assertAllEqual(output.values,
[b"a", b"b", b"c", b"d", b"", b"e", b"f", b""])
self.assertAllEqual(output.shape, [5, 6])
self.assertAllEqual(empty_row_indicator_out,
np.array([0, 0, 1, 0, 1]).astype(np.bool))
def testNoEmptyRows(self):
with self.test_session(use_gpu=False) as sess:
sp_input = self._SparseTensor_2x6()
sp_output, empty_row_indicator = (
sparse_ops.sparse_fill_empty_rows(sp_input, -1))
output, empty_row_indicator_out = sess.run(
[sp_output, empty_row_indicator])
self.assertAllEqual(output.indices, [[0, 0], [1, 0], [1, 3], [1, 4]])
self.assertAllEqual(output.values, [0, 10, 13, 14])
self.assertAllEqual(output.shape, [2, 6])
self.assertAllEqual(empty_row_indicator_out, np.zeros(2).astype(np.bool))
class SparseReduceSumTest(test_util.TensorFlowTestCase):
# [[1, ?, 1]
# [?, 1, ?]]
# where ? is implicitly zero.
ind = np.array([[0, 0], [0, 2], [1, 1]]).astype(np.int64)
vals = np.array([1, 1, 1]).astype(np.int32)
shape = np.array([2, 3]).astype(np.int64)
def _compare(self, sp_t, reduction_axes, ndims, keep_dims):
densified = sparse_ops.sparse_tensor_to_dense(sp_t).eval()
np_ans = densified
if reduction_axes is None:
np_ans = np.sum(np_ans, keepdims=keep_dims)
else:
if not isinstance(reduction_axes, list): # Single scalar.
reduction_axes = [reduction_axes]
reduction_axes = np.array(reduction_axes).astype(np.int32)
# Handles negative axes.
reduction_axes = (reduction_axes + ndims) % ndims
# The loop below depends on the axes being sorted.
reduction_axes.sort()
for ra in reduction_axes.ravel()[::-1]:
np_ans = np.sum(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session():
tf_ans = sparse_ops.sparse_reduce_sum(sp_t, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllClose(np_ans, out)
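# Worked example of the axis normalization above (illustrative): with ndims=2,
# reduction_axes=[-1] becomes ([-1] + 2) % 2 == [1], so a negative axis and
# its positive counterpart are reduced against the same NumPy reference.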
def _compare_all(self, sp_t, reduction_axes, ndims):
self._compare(sp_t, reduction_axes, ndims, False)
self._compare(sp_t, reduction_axes, ndims, True)
def testSimpleAndRandomInputs(self):
sp_t = ops.SparseTensor(self.ind, self.vals, self.shape)
with self.test_session(use_gpu=False):
self._compare_all(sp_t, None, ndims=2)
self._compare_all(sp_t, 0, ndims=2)
self._compare_all(sp_t, [1], ndims=2)
self._compare_all(sp_t, [0, 1], ndims=2)
self._compare_all(sp_t, [1, 0], ndims=2)
self._compare_all(sp_t, [-1], ndims=2)
self._compare_all(sp_t, [1, -2], ndims=2)
np.random.seed(1618)
test_dims = [(1618, 1, 11, 7, 1), (1,), (1, 1, 1)]
with self.test_session(use_gpu=False):
for dims in test_dims:
sp_t, unused_nnz = _sparsify(np.random.randn(*dims))
# reduce all using None
self._compare_all(sp_t, None, ndims=len(dims))
# reduce random axes from 1D to N-D
for d in range(1, len(dims) + 1):
axes = np.random.choice(len(dims), size=d, replace=False).tolist()
self._compare_all(sp_t, axes, ndims=len(dims))
def testInvalidAxes(self):
sp_t = ops.SparseTensor(self.ind, self.vals, self.shape)
with self.test_session(use_gpu=False):
with self.assertRaisesOpError("Invalid reduction dimension -3"):
sparse_ops.sparse_reduce_sum(sp_t, -3).eval()
with self.assertRaisesOpError("Invalid reduction dimension 2"):
sparse_ops.sparse_reduce_sum(sp_t, 2).eval()
def testGradient(self):
np.random.seed(8161)
test_dims = [(11, 1, 5, 7, 1), (2, 2)]
with self.test_session(use_gpu=False):
for dims in test_dims:
sp_t, nnz = _sparsify(np.random.randn(*dims))
# reduce random axes from 1D to N-D
for d in range(1, len(dims) + 1):
axes = np.random.choice(len(dims), size=d, replace=False).tolist()
reduced = sparse_ops.sparse_reduce_sum(sp_t, axes)
err = tf.test.compute_gradient_error(sp_t.values, (nnz,), reduced,
reduced.eval().shape)
self.assertLess(err, 1e-3)
# Tests for negative axes.
reduced = sparse_ops.sparse_reduce_sum(sp_t, -1)
err = tf.test.compute_gradient_error(sp_t.values, (nnz,), reduced,
reduced.eval().shape)
self.assertLess(err, 1e-3)
class SparseMathOpsTest(test_util.TensorFlowTestCase):
def _check(self, result_tensor, result_np, input_sp_t):
self.assertTrue(isinstance(result_tensor, ops.SparseTensor))
self.assertTrue(isinstance(input_sp_t, ops.SparseTensor))
self.assertAllEqual(input_sp_t.indices.eval(), result_tensor.indices.eval())
self.assertAllEqual(input_sp_t.shape.eval(), result_tensor.shape.eval())
res_densified = sparse_ops.sparse_to_dense(result_tensor.indices,
result_tensor.shape,
result_tensor.values).eval()
self.assertAllEqual(result_np, res_densified)
def testCwiseDivAndMul(self):
np.random.seed(1618)
sp_shapes = [(10, 10, 10), (5, 5), (1618,), (3, 3, 7)]
dense_shapes = [(10, 10, 1), (5, 5), (1,), (1, 7)]
with self.test_session(use_gpu=False):
for dtype in [np.float32, np.float64, np.int32, np.int64]:
for sp_shape, dense_shape in zip(sp_shapes, dense_shapes):
sp_vals_np = np.random.rand(*sp_shape).astype(dtype) + 1
dense_vals_np = np.random.rand(*dense_shape).astype(dtype) + 1
sp_t, unused_nnz = _sparsify(sp_vals_np, thresh=1.5)
sp_t_densified = sparse_ops.sparse_tensor_to_dense(sp_t).eval()
dense_t = tf.constant(dense_vals_np)
self._check(sp_t / dense_t, sp_t_densified / dense_vals_np, sp_t)
# Check commutative.
self._check(sp_t * dense_t, sp_t_densified * dense_vals_np, sp_t)
self._check(dense_t * sp_t, sp_t_densified * dense_vals_np, sp_t)
if dtype in [np.int32, np.int64]:
res = sp_t / dense_t # should invoke "__truediv__"
self.assertEqual(res.values.eval().dtype, np.float64)
def testCwiseAdd(self):
with self.test_session(use_gpu=False):
# Identity(2) + AllOnes(2,2). Should be equal to 2 * Identity(2).
indices = [[0, 0], [1, 1]]
vals = [1, 1]
shape = (2, 2)
sp_t = tf.SparseTensor(indices, vals, shape)
dense_t = tf.ones(shape, dtype=dtypes.int32)
self._check(sparse_ops.sparse_dense_cwise_add(sp_t, dense_t),
np.identity(2) * 2, sp_t)
# Variant of above, but broadcasts the dense side.
dense_t = tf.ones([1], dtype=dtypes.int32)
self._check(sparse_ops.sparse_dense_cwise_add(sp_t, dense_t),
np.identity(2) * 2, sp_t)
def testGradients(self):
np.random.seed(1618)
sp_shapes = [(10, 10, 10), (5, 5), (1618,), (3, 3, 7)]
dense_shapes = [(10, 10, 1), (5, 5), (1,), (1, 7)]
with self.test_session(use_gpu=False):
for dtype in [np.float32, np.float64]:
for sp_shape, dense_shape in zip(sp_shapes, dense_shapes):
sp_vals_np = np.random.rand(*sp_shape).astype(dtype) + 1
dense_vals_np = np.random.rand(*dense_shape).astype(dtype) + 1
sp_t, nnz = _sparsify(sp_vals_np, thresh=1.5)
dense_t = tf.constant(dense_vals_np)
cmul = sp_t * dense_t
err = tf.test.compute_gradient_error([sp_t.values, dense_t],
[(nnz,), dense_shape],
cmul.values, (nnz,))
self.assertLess(err, 1e-4)
cdiv = sp_t / dense_t
err = tf.test.compute_gradient_error(sp_t.values, (nnz,),
cdiv.values, (nnz,))
self.assertLess(err, 1e-4)
err = tf.test.compute_gradient_error(dense_t, dense_shape,
cdiv.values, (nnz,),
x_init_value=dense_vals_np)
self.assertLess(err, 2e-4)
class SparseSoftmaxTest(test_util.TensorFlowTestCase):
def testEquivalentToDensified(self):
np.random.seed(1618)
n, m = np.random.choice(20, size=2)
for dtype in [np.float32, np.float64]:
sp_vals_np = np.random.rand(n, m).astype(dtype)
batched_sp_t, unused_nnz1 = _sparsify(
sp_vals_np.reshape((1, n, m)), thresh=0.) # No masking.
with self.test_session(use_gpu=False):
densified = tf.constant(sp_vals_np)
sp_result = sparse_ops.sparse_softmax(
batched_sp_t).eval().values.reshape((n, m))
dense_result = tf.nn.softmax(densified)
self.assertAllClose(dense_result.eval(), sp_result)
def testHigherRanks(self):
# For the first shape:
# First batch:
# [? e.]
# [1. ? ]
# Second batch:
# [e ? ]
# [e e ]
#
# The softmax results should be:
# [? 1.] [1 ?]
# [1. ? ] and [.5 .5]
# where ? means implicitly zero.
#
# The second shape: same input data, but with a higher-rank shape.
shapes = [[2, 2, 2], [2, 1, 2, 2]]
for shape in shapes:
values = np.asarray(
[0., np.e, 1., 0., np.e, 0., np.e, np.e]).reshape(shape)
sp_t, unused_nnz = _sparsify(values, thresh=1e-2)
expected_values = [1., 1., 1., .5, .5]
with self.test_session(use_gpu=False):
result = sparse_ops.sparse_softmax(sp_t).eval()
self.assertAllEqual(expected_values, result.values)
self.assertAllEqual(sp_t.indices.eval(), result.indices)
self.assertAllEqual(shape, result.shape)
def testGradient(self):
x_shape = [2, 5, 10]
with self.test_session(use_gpu=False):
for dtype in [np.float32, np.float64]:
x_np = np.random.randn(*x_shape).astype(dtype)
x_tf, nnz = _sparsify(x_np)
y_tf = tf.sparse_softmax(x_tf)
err = tf.test.compute_gradient_error(x_tf.values, (nnz,), y_tf.values,
(nnz,))
self.assertLess(err, 1e-4)
class SparseMinimumMaximumTest(test_util.TensorFlowTestCase):
def _assertSparseTensorValueEqual(self, a, b):
self.assertAllEqual(a.indices, b.indices)
self.assertAllEqual(a.values, b.values)
self.assertAllEqual(a.shape, b.shape)
def testBasic(self):
with self.test_session(use_gpu=False):
# 1-D, values at index 0.
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_one = ops.SparseTensor([[0]], [1], [7])
max_tf = tf.sparse_maximum(sp_zero, sp_one).eval()
min_tf = tf.sparse_minimum(sp_zero, sp_one).eval()
self._assertSparseTensorValueEqual(sp_one.eval(), max_tf)
self._assertSparseTensorValueEqual(sp_zero.eval(), min_tf)
# Values at different indices.
sp_zero = ops.SparseTensor([[0]], [0], [7])
sp_zero_2 = ops.SparseTensor([[1]], [0], [7])
expected = ops.SparseTensor([[0], [1]], [0, 0], [7])
max_tf = tf.sparse_maximum(sp_zero, sp_zero_2).eval()
min_tf = tf.sparse_minimum(sp_zero, sp_zero_2).eval()
self._assertSparseTensorValueEqual(expected.eval(), max_tf)
self._assertSparseTensorValueEqual(expected.eval(), min_tf)
def testRandom(self):
np.random.seed(1618)
shapes = [(13,), (6, 8), (1, 7, 1)]
for shape in shapes:
for dtype in [np.int32, np.int64, np.float16, np.float32, np.float64]:
a_np = np.random.randn(*shape).astype(dtype)
b_np = np.random.randn(*shape).astype(dtype)
sp_a, unused_a_nnz = _sparsify(a_np, thresh=-.5)
sp_b, unused_b_nnz = _sparsify(b_np, thresh=-.5)
with self.test_session(use_gpu=False):
maximum_tf = tf.sparse_maximum(sp_a, sp_b)
maximum_tf_densified = tf.sparse_tensor_to_dense(maximum_tf).eval()
minimum_tf = tf.sparse_minimum(sp_a, sp_b)
minimum_tf_densified = tf.sparse_tensor_to_dense(minimum_tf).eval()
a_densified = tf.sparse_tensor_to_dense(sp_a).eval()
b_densified = tf.sparse_tensor_to_dense(sp_b).eval()
self.assertAllEqual(
np.maximum(a_densified, b_densified), maximum_tf_densified)
self.assertAllEqual(
np.minimum(a_densified, b_densified), minimum_tf_densified)
def testMismatchedShapes(self):
with self.test_session(use_gpu=False):
sp_zero = ops.SparseTensor([[0, 0]], [0], [1, 1])
sp_one = ops.SparseTensor([[0]], [1], [2])
with self.assertRaisesOpError("Operands do not have the same ranks"):
tf.sparse_maximum(sp_zero, sp_one).eval()
sp_zero = ops.SparseTensor([[0]], [0], [1])
sp_one = ops.SparseTensor([[0]], [1], [2])
with self.assertRaisesOpError("Operands' shapes do not match"):
tf.sparse_maximum(sp_zero, sp_one).eval()
if __name__ == "__main__":
googletest.main()
|
ville-k/tensorflow
|
refs/heads/master
|
tensorflow/contrib/keras/python/keras/models_test.py
|
7
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for training routines."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tempfile
import numpy as np
from tensorflow.contrib.keras.python import keras
from tensorflow.python.platform import test
try:
import h5py # pylint:disable=g-import-not-at-top
except ImportError:
h5py = None
class TestModelSaving(test.TestCase):
def test_sequential_model_saving(self):
if h5py is None:
return # Skip test if models cannot be saved.
with self.test_session():
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, input_shape=(3,)))
model.add(keras.layers.RepeatVector(3))
model.add(keras.layers.TimeDistributed(keras.layers.Dense(3)))
model.compile(loss=keras.losses.MSE,
optimizer=keras.optimizers.RMSprop(lr=0.0001),
metrics=[keras.metrics.categorical_accuracy],
sample_weight_mode='temporal')
x = np.random.random((1, 3))
y = np.random.random((1, 3, 3))
model.train_on_batch(x, y)
out = model.predict(x)
_, fname = tempfile.mkstemp('.h5')
keras.models.save_model(model, fname)
new_model = keras.models.load_model(fname)
os.remove(fname)
out2 = new_model.predict(x)
self.assertAllClose(out, out2, atol=1e-05)
# test that new updates are the same with both models
x = np.random.random((1, 3))
y = np.random.random((1, 3, 3))
model.train_on_batch(x, y)
new_model.train_on_batch(x, y)
out = model.predict(x)
out2 = new_model.predict(x)
self.assertAllClose(out, out2, atol=1e-05)
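# Note: the second train_on_batch comparison above verifies that optimizer
# state, not just the weights, survives the save/load round trip, since both
# models must move identically after one further update.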
def test_sequential_model_saving_2(self):
if h5py is None:
return # Skip test if models cannot be saved.
with self.test_session():
# test with custom optimizer, loss
class CustomOp(keras.optimizers.RMSprop):
pass
def custom_loss(y_true, y_pred):
return keras.losses.mse(y_true, y_pred)
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, input_shape=(3,)))
model.add(keras.layers.Dense(3))
model.compile(loss=custom_loss, optimizer=CustomOp(), metrics=['acc'])
x = np.random.random((1, 3))
y = np.random.random((1, 3))
model.train_on_batch(x, y)
out = model.predict(x)
_, fname = tempfile.mkstemp('.h5')
keras.models.save_model(model, fname)
model = keras.models.load_model(
fname,
custom_objects={'CustomOp': CustomOp,
'custom_loss': custom_loss})
os.remove(fname)
out2 = model.predict(x)
self.assertAllClose(out, out2, atol=1e-05)
def test_functional_model_saving(self):
if h5py is None:
return # Skip test if models cannot be saved.
with self.test_session():
inputs = keras.layers.Input(shape=(3,))
x = keras.layers.Dense(2)(inputs)
output = keras.layers.Dense(3)(x)
model = keras.models.Model(inputs, output)
model.compile(loss=keras.losses.MSE,
optimizer=keras.optimizers.RMSprop(lr=0.0001),
metrics=[keras.metrics.categorical_accuracy])
x = np.random.random((1, 3))
y = np.random.random((1, 3))
model.train_on_batch(x, y)
out = model.predict(x)
_, fname = tempfile.mkstemp('.h5')
keras.models.save_model(model, fname)
model = keras.models.load_model(fname)
os.remove(fname)
out2 = model.predict(x)
self.assertAllClose(out, out2, atol=1e-05)
def test_saving_without_compilation(self):
if h5py is None:
return # Skip test if models cannot be saved.
with self.test_session():
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, input_shape=(3,)))
model.add(keras.layers.Dense(3))
model.compile(loss='mse', optimizer='sgd', metrics=['acc'])
_, fname = tempfile.mkstemp('.h5')
keras.models.save_model(model, fname)
model = keras.models.load_model(fname)
os.remove(fname)
def test_saving_right_after_compilation(self):
if h5py is None:
return # Skip test if models cannot be saved.
with self.test_session():
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, input_shape=(3,)))
model.add(keras.layers.Dense(3))
model.compile(loss='mse', optimizer='sgd', metrics=['acc'])
model.model._make_train_function()
_, fname = tempfile.mkstemp('.h5')
keras.models.save_model(model, fname)
model = keras.models.load_model(fname)
os.remove(fname)
class TestSequential(test.TestCase):
"""Most Sequential model API tests are covered in `training_test.py`.
"""
def test_sequential_pop(self):
num_hidden = 5
input_dim = 3
batch_size = 5
num_classes = 2
with self.test_session():
model = keras.models.Sequential()
model.add(keras.layers.Dense(num_hidden, input_dim=input_dim))
model.add(keras.layers.Dense(num_classes))
model.compile(loss='mse', optimizer='sgd')
x = np.random.random((batch_size, input_dim))
y = np.random.random((batch_size, num_classes))
model.fit(x, y, epochs=1)
model.pop()
self.assertEqual(len(model.layers), 1)
self.assertEqual(model.output_shape, (None, num_hidden))
model.compile(loss='mse', optimizer='sgd')
y = np.random.random((batch_size, num_hidden))
model.fit(x, y, epochs=1)
if __name__ == '__main__':
test.main()
|
woggle/mesos-old
|
refs/heads/trunk
|
frameworks/hadoop-0.20.2/src/contrib/hod/hodlib/Common/descGenerator.py
|
182
|
#Licensed to the Apache Software Foundation (ASF) under one
#or more contributor license agreements. See the NOTICE file
#distributed with this work for additional information
#regarding copyright ownership. The ASF licenses this file
#to you under the Apache License, Version 2.0 (the
#"License"); you may not use this file except in compliance
#with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
"""manage hod configuration"""
# -*- python -*-
import sys, csv, os
from optparse import Option, OptionParser
from xml.dom import minidom
from sets import Set
from select import select, poll, POLLIN
from hodlib.Common.desc import *
class DescGenerator:
"""Contains the conversion to descriptors and other method calls
to config"""
def __init__(self, hodConfig):
"""parse all the descriptors"""
self.hodConfig = hodConfig
def initializeDesc(self):
self.hodConfig['nodepooldesc'] = self.createNodePoolDesc()
self.hodConfig['servicedesc'] = self.createServiceDescDict()
return self.hodConfig
def getServices(self):
"""get all the services from the config"""
sdd = {}
for keys in self.hodConfig:
if keys.startswith('gridservice-'):
parts = keys.split('-')
svc = self.hodConfig[keys]
if 'server-params' in svc: svc['attrs'] = svc['server-params']
if 'final-server-params' in svc: svc['final-attrs'] = svc['final-server-params']
svc['id'] = parts[1]
desc = ServiceDesc(svc)
sdd[desc.getName()] = desc
return sdd
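# Illustrative example (section name is hypothetical): a config section named
# 'gridservice-hdfs' would yield a ServiceDesc with id 'hdfs', with any
# 'server-params' copied to 'attrs' and 'final-server-params' to 'final-attrs'.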
def createNodePoolDesc(self):
""" create a node pool descriptor and store
it in hodconfig"""
desc = NodePoolDesc(self.hodConfig['resource_manager'])
return desc
def createServiceDescDict(self):
"""create a service descriptor for
all the services and store it in the
hodconfig"""
sdd = self.getServices()
return sdd
|
culturagovbr/sistema-nacional-cultura
|
refs/heads/master
|
planotrabalho/migrations/0011_auto_20181206_1127.py
|
1
|
# Generated by Django 2.0.8 on 2018-12-06 16:34
from django.db import migrations, models
from planotrabalho.models import Componente, Conselheiro
from adesao.models import SistemaCultura, Municipio
import django.db.models.deletion
def migra_conselheiros(apps, schema_editor):
conselheiros = apps.get_model('planotrabalho', 'Conselheiro')
for conselheiro in conselheiros.objects.all():
municipio = Municipio.objects.get(usuario__plano_trabalho__conselho_cultural__id=conselheiro.conselho_aux.id)
if municipio.cidade:
sistemas = SistemaCultura.sistema.filter(ente_federado__nome=municipio.cidade.nome_municipio)
codigo_ibge = str(municipio.cidade.codigo_ibge)
else:
sistemas = SistemaCultura.sistema.filter(ente_federado__nome=municipio.estado.nome_uf)
codigo_ibge = str(municipio.estado.codigo_ibge)
if sistemas.count() > 1:
for sistema in sistemas:
if codigo_ibge in str(sistema.ente_federado.cod_ibge):
sistema_conselho = sistema
break
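# NOTE: this branch assumes some sistema's cod_ibge contains codigo_ibge;
# if none matches, sistema_conselho keeps its value from a previous
# conselheiro (or is unbound on the first iteration), so the record below
# would be mislinked or raise NameError.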
elif sistemas.count() == 1:
sistema_conselho = sistemas[0]
else:
continue
conselheiro.conselho_id = sistema_conselho.conselho.id
conselheiro.save()
class Migration(migrations.Migration):
dependencies = [
('planotrabalho', '0010_auto_20181203_1847'),
]
operations = [
migrations.RenameField(
model_name='conselheiro',
old_name='conselho',
new_name='conselho_aux',
),
migrations.AddField(
model_name='conselheiro',
name='conselho',
field=models.ForeignKey(to='planotrabalho.Componente', on_delete=django.db.models.deletion.CASCADE, null=True),
),
migrations.RunPython(migra_conselheiros),
migrations.RemoveField(
model_name='conselheiro',
name='conselho_aux',
),
]
|
jcpowermac/ansible
|
refs/heads/devel
|
test/runner/lib/executor.py
|
5
|
"""Execute Ansible tests."""
from __future__ import absolute_import, print_function
import json
import os
import collections
import datetime
import re
import tempfile
import time
import textwrap
import functools
import pipes
import hashlib
import lib.pytar
import lib.thread
from lib.core_ci import (
AnsibleCoreCI,
SshKey,
)
from lib.manage_ci import (
ManageWindowsCI,
ManageNetworkCI,
)
from lib.cloud import (
cloud_filter,
cloud_init,
get_cloud_environment,
get_cloud_platforms,
)
from lib.util import (
ApplicationWarning,
ApplicationError,
SubprocessError,
display,
run_command,
intercept_command,
remove_tree,
make_dirs,
is_shippable,
is_binary_file,
find_pip,
find_executable,
raw_command,
get_coverage_path,
)
from lib.ansible_util import (
ansible_environment,
)
from lib.target import (
IntegrationTarget,
walk_external_targets,
walk_internal_targets,
walk_posix_integration_targets,
walk_network_integration_targets,
walk_windows_integration_targets,
walk_units_targets,
)
from lib.changes import (
ShippableChanges,
LocalChanges,
)
from lib.git import (
Git,
)
from lib.classification import (
categorize_changes,
)
from lib.config import (
TestConfig,
EnvironmentConfig,
IntegrationConfig,
NetworkIntegrationConfig,
PosixIntegrationConfig,
ShellConfig,
UnitsConfig,
WindowsIntegrationConfig,
)
SUPPORTED_PYTHON_VERSIONS = (
'2.6',
'2.7',
'3.5',
'3.6',
'3.7',
)
def check_startup():
"""Checks to perform at startup before running commands."""
check_legacy_modules()
def check_legacy_modules():
"""Detect conflicts with legacy core/extras module directories to avoid problems later."""
for directory in 'core', 'extras':
path = 'lib/ansible/modules/%s' % directory
for root, _, file_names in os.walk(path):
if file_names:
# the directory shouldn't exist, but if it does, it must contain no files
raise ApplicationError('Files prohibited in "%s". '
'These are most likely legacy modules from version 2.2 or earlier.' % root)
def create_shell_command(command):
"""
:type command: list[str]
:rtype: list[str]
"""
optional_vars = (
'TERM',
)
cmd = ['/usr/bin/env']
cmd += ['%s=%s' % (var, os.environ[var]) for var in optional_vars if var in os.environ]
cmd += command
return cmd
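# Illustrative result (assumes TERM is set in the caller's environment):
# create_shell_command(['bash', '-i']) would return something like
# ['/usr/bin/env', 'TERM=xterm-256color', 'bash', '-i'], so the spawned
# shell inherits only the whitelisted variables.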
def install_command_requirements(args):
"""
:type args: EnvironmentConfig
"""
generate_egg_info(args)
if not args.requirements:
return
packages = []
if isinstance(args, TestConfig):
if args.coverage:
packages.append('coverage')
if args.junit:
packages.append('junit-xml')
pip = find_pip(version=args.python_version)
commands = [generate_pip_install(pip, args.command, packages=packages)]
if isinstance(args, IntegrationConfig):
for cloud_platform in get_cloud_platforms(args):
commands.append(generate_pip_install(pip, '%s.cloud.%s' % (args.command, cloud_platform)))
commands = [cmd for cmd in commands if cmd]
# only look for changes when more than one requirements file is needed
detect_pip_changes = len(commands) > 1
# first pass to install requirements, changes expected unless environment is already set up
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if not changes:
return # no changes means we can stop early
# second pass to check for conflicts in requirements, changes are not expected here
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if not changes:
return # no changes means no conflicts
raise ApplicationError('Conflicts detected in requirements. The following commands reported changes during verification:\n%s' %
'\n'.join((' '.join(pipes.quote(c) for c in cmd) for cmd in changes)))
def run_pip_commands(args, pip, commands, detect_pip_changes=False):
"""
:type args: EnvironmentConfig
:type pip: str
:type commands: list[list[str]]
:type detect_pip_changes: bool
:rtype: list[list[str]]
"""
changes = []
after_list = pip_list(args, pip) if detect_pip_changes else None
for cmd in commands:
if not cmd:
continue
before_list = after_list
try:
run_command(args, cmd)
except SubprocessError as ex:
if ex.status != 2:
raise
# If pip is too old it won't understand the arguments we passed in, so we'll need to upgrade it.
# Installing "coverage" on ubuntu 16.04 fails with the error:
# AttributeError: 'Requirement' object has no attribute 'project_name'
# See: https://bugs.launchpad.net/ubuntu/xenial/+source/python-pip/+bug/1626258
# Upgrading pip works around the issue.
run_command(args, [pip, 'install', '--upgrade', 'pip'])
run_command(args, cmd)
after_list = pip_list(args, pip) if detect_pip_changes else None
if before_list != after_list:
changes.append(cmd)
return changes
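# Illustrative trace: with detect_pip_changes=True, `pip list` output is
# snapshotted before and after each install; any command whose snapshots
# differ is recorded in `changes`, which drives the two-pass conflict check
# in install_command_requirements.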
def pip_list(args, pip):
"""
:type args: EnvironmentConfig
:type pip: str
:rtype: str
"""
stdout, _ = run_command(args, [pip, 'list'], capture=True)
return stdout
def generate_egg_info(args):
"""
:type args: EnvironmentConfig
"""
if os.path.isdir('lib/ansible.egg-info'):
return
run_command(args, ['python%s' % args.python_version, 'setup.py', 'egg_info'], capture=args.verbosity < 3)
def generate_pip_install(pip, command, packages=None):
"""
:type pip: str
:type command: str
:type packages: list[str] | None
:rtype: list[str] | None
"""
constraints = 'test/runner/requirements/constraints.txt'
requirements = 'test/runner/requirements/%s.txt' % command
options = []
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
if packages:
options += packages
if not options:
return None
return [pip, 'install', '--disable-pip-version-check', '-c', constraints] + options
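# Illustrative example: for command='units' with packages=['coverage'] this
# returns [pip, 'install', '--disable-pip-version-check', '-c', constraints,
# '-r', 'test/runner/requirements/units.txt', 'coverage'] (assuming that
# requirements file exists and is non-empty); with nothing to install it
# returns None so the caller can skip the command.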
def command_shell(args):
"""
:type args: ShellConfig
"""
if args.delegate:
raise Delegate()
install_command_requirements(args)
cmd = create_shell_command(['bash', '-i'])
run_command(args, cmd)
def command_posix_integration(args):
"""
:type args: PosixIntegrationConfig
"""
all_targets = tuple(walk_posix_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets)
command_integration_filtered(args, internal_targets, all_targets)
def command_network_integration(args):
"""
:type args: NetworkIntegrationConfig
"""
default_filename = 'test/integration/inventory.networking'
if args.inventory:
filename = os.path.join('test/integration', args.inventory)
else:
filename = default_filename
if not args.explain and not args.platform and not os.path.exists(filename):
if args.inventory:
filename = os.path.abspath(filename)
raise ApplicationError(
'Inventory not found: %s\n'
'Use --inventory to specify the inventory path.\n'
'Use --platform to provision resources and generate an inventory file.\n'
'See also inventory template: %s.template' % (filename, default_filename)
)
all_targets = tuple(walk_network_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=network_init)
instances = []  # type: list[lib.thread.WrappedThread]
if args.platform:
get_coverage_path(args) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
config = configs.get(platform_version)
if not config:
continue
instance = lib.thread.WrappedThread(functools.partial(network_run, args, platform, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = network_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (filename, inventory.strip()), verbosity=3)
if not args.explain:
with open(filename, 'w') as inventory_fd:
inventory_fd.write(inventory)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets)
success = True
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
def network_init(args, internal_targets):
"""
:type args: NetworkIntegrationConfig
:type internal_targets: tuple[IntegrationTarget]
"""
if not args.platform:
return
if args.metadata.instance_config is not None:
return
platform_targets = set(a for t in internal_targets for a in t.aliases if a.startswith('network/'))
instances = []  # type: list[lib.thread.WrappedThread]
# generate an ssh key (if needed) up front once, instead of for each instance
SshKey(args)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
platform_target = 'network/%s/' % platform
if platform_target not in platform_targets:
display.warning('Skipping "%s" because selected tests do not target the "%s" platform.' % (
platform_version, platform))
continue
instance = lib.thread.WrappedThread(functools.partial(network_start, args, platform, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def network_start(args, platform, version):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def network_run(args, platform, version, config):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageNetworkCI(core_ci)
manage.wait()
return core_ci
def network_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
groups = dict([(remote.platform, []) for remote in remotes])
net = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_ssh_private_key_file=os.path.abspath(remote.ssh_key.key),
ansible_network_os=remote.platform,
ansible_connection='local'
)
groups[remote.platform].append(
'%s %s' % (
remote.name.replace('.', '-'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
net.append(remote.platform)
groups['net:children'] = net
template = ''
for group in groups:
hosts = '\n'.join(groups[group])
template += textwrap.dedent("""
[%s]
%s
""") % (group, hosts)
inventory = template
return inventory
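# Sketch of the generated inventory (group and host names are hypothetical):
#   [vyos]
#   vyos-1-2-3-4 ansible_connection="local" ansible_host="..." ...
#   [net:children]
#   vyos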
def command_windows_integration(args):
"""
:type args: WindowsIntegrationConfig
"""
filename = 'test/integration/inventory.winrm'
if not args.explain and not args.windows and not os.path.isfile(filename):
raise ApplicationError('Use the --windows option or provide an inventory file (see %s.template).' % filename)
all_targets = tuple(walk_windows_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=windows_init)
instances = []  # type: list[lib.thread.WrappedThread]
if args.windows:
get_coverage_path(args) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for version in args.windows:
config = configs['windows/%s' % version]
instance = lib.thread.WrappedThread(functools.partial(windows_run, args, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = windows_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (filename, inventory.strip()), verbosity=3)
if not args.explain:
with open(filename, 'w') as inventory_fd:
inventory_fd.write(inventory)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets)
success = True
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
def windows_init(args, internal_targets): # pylint: disable=locally-disabled, unused-argument
"""
:type args: WindowsIntegrationConfig
:type internal_targets: tuple[IntegrationTarget]
"""
if not args.windows:
return
if args.metadata.instance_config is not None:
return
instances = []  # type: list[lib.thread.WrappedThread]
for version in args.windows:
instance = lib.thread.WrappedThread(functools.partial(windows_start, args, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def windows_start(args, version):
"""
:type args: WindowsIntegrationConfig
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def windows_run(args, version, config):
"""
:type args: WindowsIntegrationConfig
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageWindowsCI(core_ci)
manage.wait()
return core_ci
def windows_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
hosts = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_password=remote.connection.password,
ansible_port=remote.connection.port,
)
hosts.append(
'%s %s' % (
remote.name.replace('/', '_'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
template = """
[windows]
%s
[windows:vars]
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
# support winrm connection tests (temporary solution, does not support testing enable/disable of pipelining)
[winrm:children]
windows
# support winrm binary module tests (temporary solution)
[testhost_binary_modules:children]
windows
"""
template = textwrap.dedent(template)
inventory = template % ('\n'.join(hosts))
return inventory
def command_integration_filter(args, targets, init_callback=None):
"""
:type args: IntegrationConfig
:type targets: collections.Iterable[IntegrationTarget]
:type init_callback: (IntegrationConfig, tuple[IntegrationTarget]) -> None
:rtype: tuple[IntegrationTarget]
"""
targets = tuple(target for target in targets if 'hidden/' not in target.aliases)
changes = get_changes_filter(args)
require = (args.require or []) + changes
exclude = (args.exclude or [])
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
environment_exclude = get_integration_filter(args, internal_targets)
environment_exclude += cloud_filter(args, internal_targets)
if environment_exclude:
exclude += environment_exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
if not internal_targets:
raise AllTargetsSkipped()
if args.start_at and not any(t.name == args.start_at for t in internal_targets):
raise ApplicationError('Start at target matches nothing: %s' % args.start_at)
if init_callback:
init_callback(args, internal_targets)
cloud_init(args, internal_targets)
if args.delegate:
raise Delegate(require=changes, exclude=exclude)
install_command_requirements(args)
return internal_targets
def command_integration_filtered(args, targets, all_targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:type all_targets: tuple[IntegrationTarget]
"""
found = False
passed = []
failed = []
targets_iter = iter(targets)
all_targets_dict = dict((target.name, target) for target in all_targets)
setup_errors = []
setup_targets_executed = set()
for target in all_targets:
for setup_target in target.setup_once + target.setup_always:
if setup_target not in all_targets_dict:
setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target))
if setup_errors:
raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors)))
test_dir = os.path.expanduser('~/ansible_testing')
if not args.explain and any('needs/ssh/' in target.aliases for target in targets):
max_tries = 20
display.info('SSH service required for tests. Checking to make sure we can connect.')
for i in range(1, max_tries + 1):
try:
run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True)
display.info('SSH service responded.')
break
except SubprocessError:
if i == max_tries:
raise
seconds = 3
display.warning('SSH service not responding. Waiting %d second(s) before checking again.' % seconds)
time.sleep(seconds)
start_at_task = args.start_at_task
results = {}
for target in targets_iter:
if args.start_at and not found:
found = target.name == args.start_at
if not found:
continue
if args.list_targets:
print(target.name)
continue
tries = 2 if args.retry_on_error else 1
verbosity = args.verbosity
cloud_environment = get_cloud_environment(args, target)
original_environment = EnvironmentDescription(args)
display.info('>>> Environment Description\n%s' % original_environment, verbosity=3)
try:
while tries:
tries -= 1
try:
run_setup_targets(args, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, False)
start_time = time.time()
run_setup_targets(args, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, True)
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if target.script_path:
command_integration_script(args, target)
else:
command_integration_role(args, target, start_at_task)
start_at_task = None
end_time = time.time()
results[target.name] = dict(
name=target.name,
type=target.type,
aliases=target.aliases,
modules=target.modules,
run_time_seconds=int(end_time - start_time),
setup_once=target.setup_once,
setup_always=target.setup_always,
coverage=args.coverage,
coverage_label=args.coverage_label,
python_version=args.python_version,
)
break
except SubprocessError:
if cloud_environment:
cloud_environment.on_failure(target, tries)
if not original_environment.validate(target.name, throw=False):
raise
if not tries:
raise
display.warning('Retrying test target "%s" with maximum verbosity.' % target.name)
display.verbosity = args.verbosity = 6
start_time = time.time()
original_environment.validate(target.name, throw=True)
end_time = time.time()
results[target.name]['validation_seconds'] = int(end_time - start_time)
passed.append(target)
except Exception as ex:
failed.append(target)
if args.continue_on_error:
display.error(ex)
continue
display.notice('To resume at this test target, use the option: --start-at %s' % target.name)
next_target = next(targets_iter, None)
if next_target:
display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name)
raise
finally:
display.verbosity = args.verbosity = verbosity
if not args.explain:
results_path = 'test/results/data/%s-%s.json' % (args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
targets=results,
)
with open(results_path, 'w') as results_fd:
results_fd.write(json.dumps(data, sort_keys=True, indent=4))
if failed:
raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. See error output above for details:\n%s' % (
len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed)))
def run_setup_targets(args, test_dir, target_names, targets_dict, targets_executed, always):
"""
:param args: IntegrationConfig
:param test_dir: str
:param target_names: list[str]
:param targets_dict: dict[str, IntegrationTarget]
:param targets_executed: set[str]
:param always: bool
"""
for target_name in target_names:
if not always and target_name in targets_executed:
continue
target = targets_dict[target_name]
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if target.script_path:
command_integration_script(args, target)
else:
command_integration_role(args, target, None)
targets_executed.add(target_name)
def integration_environment(args, target, cmd):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type cmd: list[str]
:rtype: dict[str, str]
"""
env = ansible_environment(args)
integration = dict(
JUNIT_OUTPUT_DIR=os.path.abspath('test/results/junit'),
ANSIBLE_CALLBACK_WHITELIST='junit',
ANSIBLE_TEST_CI=args.metadata.ci_provider,
)
if args.debug_strategy:
env.update(dict(ANSIBLE_STRATEGY='debug'))
if 'non_local/' in target.aliases:
if args.coverage:
display.warning('Skipping coverage reporting for non-local test: %s' % target.name)
env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER=''))
env.update(integration)
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
cloud_environment.configure_environment(env, cmd)
return env
def command_integration_script(args, target):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
"""
display.info('Running %s integration test script' % target.name)
cmd = ['./%s' % os.path.basename(target.script_path)]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, cmd)
cwd = target.path
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd)
def command_integration_role(args, target, start_at_task):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type start_at_task: str | None
"""
display.info('Running %s integration test role' % target.name)
vars_file = 'integration_config.yml'
if isinstance(args, WindowsIntegrationConfig):
inventory = 'inventory.winrm'
hosts = 'windows'
gather_facts = False
elif isinstance(args, NetworkIntegrationConfig):
inventory = args.inventory or 'inventory.networking'
hosts = target.name[:target.name.find('_')]
gather_facts = False
else:
inventory = 'inventory'
hosts = 'testhost'
gather_facts = True
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
hosts = cloud_environment.inventory_hosts or hosts
playbook = '''
- hosts: %s
gather_facts: %s
roles:
- { role: %s }
''' % (hosts, gather_facts, target.name)
with tempfile.NamedTemporaryFile(dir='test/integration', prefix='%s-' % target.name, suffix='.yml') as pb_fd:
pb_fd.write(playbook.encode('utf-8'))
pb_fd.flush()
filename = os.path.basename(pb_fd.name)
display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3)
cmd = ['ansible-playbook', filename, '-i', inventory, '-e', '@%s' % vars_file]
if start_at_task:
cmd += ['--start-at-task', start_at_task]
if args.tags:
cmd += ['--tags', args.tags]
if args.skip_tags:
cmd += ['--skip-tags', args.skip_tags]
if args.diff:
cmd += ['--diff']
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, cmd)
cwd = 'test/integration'
env['ANSIBLE_ROLES_PATH'] = os.path.abspath('test/integration/targets')
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd)
def command_units(args):
"""
:type args: UnitsConfig
"""
changes = get_changes_filter(args)
require = (args.require or []) + changes
include, exclude = walk_external_targets(walk_units_targets(), args.include, args.exclude, require)
if not include:
raise AllTargetsSkipped()
if args.delegate:
raise Delegate(require=changes)
install_command_requirements(args)
version_commands = []
for version in SUPPORTED_PYTHON_VERSIONS:
# run all versions unless version given, in which case run only that version
if args.python and version != args.python_version:
continue
env = ansible_environment(args)
cmd = [
'pytest',
'--boxed',
'-r', 'a',
'--color',
'yes' if args.color else 'no',
'--junit-xml',
'test/results/junit/python%s-units.xml' % version,
]
if args.collect_only:
cmd.append('--collect-only')
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
if exclude:
cmd += ['--ignore=%s' % target.path for target in exclude]
cmd += [target.path for target in include]
version_commands.append((version, cmd, env))
for version, command, env in version_commands:
display.info('Unit test with Python %s' % version)
try:
intercept_command(args, command, target_name='units', env=env, python_version=version)
except SubprocessError as ex:
# pytest exits with status code 5 when all tests are skipped, which isn't an error for our use case
if ex.status != 5:
raise
def get_changes_filter(args):
"""
:type args: TestConfig
:rtype: list[str]
"""
paths = detect_changes(args)
if paths is None:
return [] # change detection not enabled, do not filter targets
if not paths:
raise NoChangesDetected()
commands = categorize_changes(args, paths, args.command)
targets = commands.get(args.command)
if targets is None:
raise NoTestsForChanges()
if targets == ['all']:
return [] # changes require testing all targets, do not filter targets
return targets
def detect_changes(args):
"""
:type args: TestConfig
:rtype: list[str] | None
"""
if args.changed and is_shippable():
display.info('Shippable detected, collecting parameters from environment.')
paths = detect_changes_shippable(args)
elif args.changed_from or args.changed_path:
paths = args.changed_path or []
if args.changed_from:
with open(args.changed_from, 'r') as changes_fd:
paths += changes_fd.read().splitlines()
elif args.changed:
paths = detect_changes_local(args)
else:
return None # change detection not enabled
if paths is None:
return None # act as though change detection not enabled, do not filter targets
display.info('Detected changes in %d file(s).' % len(paths))
for path in paths:
display.info(path, verbosity=1)
return paths
def detect_changes_shippable(args):
"""Initialize change detection on Shippable.
:type args: TestConfig
:rtype: list[str] | None
"""
git = Git(args)
result = ShippableChanges(args, git)
if result.is_pr:
job_type = 'pull request'
elif result.is_tag:
job_type = 'tag'
else:
job_type = 'merge commit'
display.info('Processing %s for branch %s commit %s' % (job_type, result.branch, result.commit))
if not args.metadata.changes:
args.metadata.populate_changes(result.diff)
return result.paths
def detect_changes_local(args):
"""
:type args: TestConfig
:rtype: list[str]
"""
git = Git(args)
result = LocalChanges(args, git)
display.info('Detected branch %s forked from %s at commit %s' % (
result.current_branch, result.fork_branch, result.fork_point))
if result.untracked and not args.untracked:
display.warning('Ignored %s untracked file(s). Use --untracked to include them.' %
len(result.untracked))
if result.committed and not args.committed:
display.warning('Ignored %s committed change(s). Omit --ignore-committed to include them.' %
len(result.committed))
if result.staged and not args.staged:
display.warning('Ignored %s staged change(s). Omit --ignore-staged to include them.' %
len(result.staged))
if result.unstaged and not args.unstaged:
display.warning('Ignored %s unstaged change(s). Omit --ignore-unstaged to include them.' %
len(result.unstaged))
names = set()
if args.tracked:
names |= set(result.tracked)
if args.untracked:
names |= set(result.untracked)
if args.committed:
names |= set(result.committed)
if args.staged:
names |= set(result.staged)
if args.unstaged:
names |= set(result.unstaged)
if not args.metadata.changes:
args.metadata.populate_changes(result.diff)
for path in result.untracked:
if is_binary_file(path):
args.metadata.changes[path] = ((0, 0),)
continue
with open(path, 'r') as source_fd:
line_count = len(source_fd.read().splitlines())
args.metadata.changes[path] = ((1, line_count),)
return sorted(names)
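# Example (illustrative, not part of the original code): after the loop above,
# args.metadata.changes maps each path to a tuple of (first, last) line ranges.
# A 120-line untracked text file is recorded as {'new_module.py': ((1, 120),)},
# while a binary file gets the sentinel range ((0, 0),).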
def get_integration_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
if args.tox:
# tox has the same exclusions as the local environment
return get_integration_local_filter(args, targets)
if args.docker:
return get_integration_docker_filter(args, targets)
if args.remote:
return get_integration_remote_filter(args, targets)
return get_integration_local_filter(args, targets)
def get_integration_local_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
if os.getuid() != 0:
skip = 'needs/root/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require running as root: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
# consider explicit testing of destructive as though --allow-destructive was given
include_destructive = any(target.startswith('destructive/') for target in args.include)
if not args.allow_destructive and not include_destructive:
skip = 'destructive/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --allow-destructive to run locally: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
if args.python_version.startswith('3'):
python_version = 3
else:
python_version = 2
skip = 'skip/python%d/' % python_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %d: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
return exclude
def get_integration_docker_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
if not args.docker_privileged:
skip = 'needs/privileged/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --docker-privileged to run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
python_version = 2 # images are expected to default to python 2 unless otherwise specified
if args.docker.endswith('py3'):
python_version = 3 # docker images ending in 'py3' are expected to default to python 3
if args.docker.endswith(':default'):
python_version = 3 # docker images tagged 'default' are expected to default to python 3
if args.python: # specifying a numeric --python option overrides the default python
if args.python.startswith('3'):
python_version = 3
elif args.python.startswith('2'):
python_version = 2
skip = 'skip/python%d/' % python_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %d: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
return exclude
def get_integration_remote_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
parts = args.remote.split('/', 1)
platform = parts[0]
exclude = []
skip = 'skip/%s/' % platform
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on %s: %s'
% (skip.rstrip('/'), platform, ', '.join(skipped)))
python_version = 2 # remotes are expected to default to python 2
skip = 'skip/python%d/' % python_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %d: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
return exclude
class EnvironmentDescription(object):
"""Description of current running environment."""
def __init__(self, args):
"""Initialize snapshot of environment configuration.
:type args: IntegrationConfig
"""
self.args = args
if self.args.explain:
self.data = {}
return
versions = ['']
versions += SUPPORTED_PYTHON_VERSIONS
versions += list(set(v.split('.')[0] for v in SUPPORTED_PYTHON_VERSIONS))
python_paths = dict((v, find_executable('python%s' % v, required=False)) for v in sorted(versions))
python_versions = dict((v, self.get_version([python_paths[v], '-V'])) for v in sorted(python_paths) if python_paths[v])
pip_paths = dict((v, find_executable('pip%s' % v, required=False)) for v in sorted(versions))
pip_versions = dict((v, self.get_version([pip_paths[v], '--version'])) for v in sorted(pip_paths) if pip_paths[v])
pip_interpreters = dict((v, self.get_shebang(pip_paths[v])) for v in sorted(pip_paths) if pip_paths[v])
known_hosts_hash = self.get_hash(os.path.expanduser('~/.ssh/known_hosts'))
self.data = dict(
python_paths=python_paths,
python_versions=python_versions,
pip_paths=pip_paths,
pip_versions=pip_versions,
pip_interpreters=pip_interpreters,
known_hosts_hash=known_hosts_hash,
)
def __str__(self):
"""
:rtype: str
"""
return json.dumps(self.data, sort_keys=True, indent=4)
def validate(self, target_name, throw):
"""
:type target_name: str
:type throw: bool
:rtype: bool
"""
current = EnvironmentDescription(self.args)
original_json = str(self)
current_json = str(current)
if original_json == current_json:
return True
message = ('Test target "%s" has changed the test environment!\n'
'If these changes are necessary, they must be reverted before the test finishes.\n'
'>>> Original Environment\n'
'%s\n'
'>>> Current Environment\n'
'%s' % (target_name, original_json, current_json))
if throw:
raise ApplicationError(message)
display.error(message)
return False
@staticmethod
def get_version(command):
"""
:type command: list[str]
:rtype: str
"""
try:
stdout, stderr = raw_command(command, capture=True, cmd_verbosity=2)
except SubprocessError:
return None # all failures are equal, we don't care why it failed, only that it did
return (stdout or '').strip() + (stderr or '').strip()
@staticmethod
def get_shebang(path):
"""
:type path: str
:rtype: str
"""
with open(path) as script_fd:
return script_fd.readline()
@staticmethod
def get_hash(path):
"""
:type path: str
:rtype: str | None
"""
if not os.path.exists(path):
return None
file_hash = hashlib.md5()
with open(path, 'rb') as file_fd:
file_hash.update(file_fd.read())
return file_hash.hexdigest()
class NoChangesDetected(ApplicationWarning):
"""Exception when change detection was performed, but no changes were found."""
def __init__(self):
super(NoChangesDetected, self).__init__('No changes detected.')
class NoTestsForChanges(ApplicationWarning):
"""Exception when changes detected, but no tests trigger as a result."""
def __init__(self):
super(NoTestsForChanges, self).__init__('No tests found for detected changes.')
class Delegate(Exception):
"""Trigger command delegation."""
def __init__(self, exclude=None, require=None):
"""
:type exclude: list[str] | None
:type require: list[str] | None
"""
super(Delegate, self).__init__()
self.exclude = exclude or []
self.require = require or []
class AllTargetsSkipped(ApplicationWarning):
"""All targets skipped."""
def __init__(self):
super(AllTargetsSkipped, self).__init__('All targets skipped.')
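# Usage sketch (illustrative only; `run_test_target` is a hypothetical stand-in
# for the calling code): EnvironmentDescription snapshots the environment before
# a test target runs, and validate() compares a fresh snapshot afterwards.
#
#   original_environment = EnvironmentDescription(args)
#   run_test_target(args, target)
#   if not original_environment.validate(target.name, throw=False):
#       # the target leaked changes (python/pip paths, known_hosts, ...)
#       ...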
|
BTA-BATA/BATA-SOURCE
|
refs/heads/master
|
contrib/spendfrom/spendfrom.py
|
792
|
#!/usr/bin/env python
#
# Use the raw transactions API to spend bitcoins received on particular addresses,
# and send any change back to that same address.
#
# Example usage:
# spendfrom.py # Lists available funds
# spendfrom.py --from=ADDRESS --to=ADDRESS --amount=11.00
#
# Assumes it will talk to a bitcoind or Bitcoin-Qt running
# on localhost.
#
# Depends on jsonrpc
#
from decimal import *
import getpass
import math
import os
import os.path
import platform
import sys
import time
from jsonrpc import ServiceProxy, json
BASE_FEE=Decimal("0.001")
def check_json_precision():
"""Make sure json library being used does not lose precision converting BTC values"""
n = Decimal("20000000.00000003")
satoshis = int(json.loads(json.dumps(float(n)))*1.0e8)
if satoshis != 2000000000000003:
raise RuntimeError("JSON encode/decode loses precision")
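# Illustration of the failure mode being guarded against: a json library that
# round-trips floats through a lower-precision representation would turn
# 20000000.00000003 into a value off by a few satoshis, silently corrupting
# amounts, so we refuse to run rather than risk it.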
def determine_db_dir():
"""Return the default location of the bitcoin data directory"""
if platform.system() == "Darwin":
return os.path.expanduser("~/Library/Application Support/Bitcoin/")
elif platform.system() == "Windows":
return os.path.join(os.environ['APPDATA'], "Bitcoin")
return os.path.expanduser("~/.bitcoin")
def read_bitcoin_config(dbdir):
"""Read the bitcoin.conf file from dbdir, returns dictionary of settings"""
from ConfigParser import SafeConfigParser
class FakeSecHead(object):
def __init__(self, fp):
self.fp = fp
self.sechead = '[all]\n'
def readline(self):
if self.sechead:
try: return self.sechead
finally: self.sechead = None
else:
s = self.fp.readline()
if s.find('#') != -1:
s = s[0:s.find('#')].strip() +"\n"
return s
config_parser = SafeConfigParser()
config_parser.readfp(FakeSecHead(open(os.path.join(dbdir, "bitcoin.conf"))))
return dict(config_parser.items("all"))
def connect_JSON(config):
"""Connect to a bitcoin JSON-RPC server"""
testnet = config.get('testnet', '0')
testnet = (int(testnet) > 0) # 0/1 in config file, convert to True/False
if 'rpcport' not in config:
config['rpcport'] = 19332 if testnet else 9332
connect = "http://%s:%s@127.0.0.1:%s"%(config['rpcuser'], config['rpcpassword'], config['rpcport'])
try:
result = ServiceProxy(connect)
# ServiceProxy is lazy-connect, so send an RPC command mostly to catch connection errors,
# but also make sure the bitcoind we're talking to is/isn't testnet:
if result.getmininginfo()['testnet'] != testnet:
sys.stderr.write("RPC server at "+connect+" testnet setting mismatch\n")
sys.exit(1)
return result
except Exception:
    sys.stderr.write("Error connecting to RPC server at "+connect+"\n")
    sys.exit(1)
def unlock_wallet(bitcoind):
info = bitcoind.getinfo()
if 'unlocked_until' not in info:
return True # wallet is not encrypted
t = int(info['unlocked_until'])
if t <= time.time():
try:
passphrase = getpass.getpass("Wallet is locked; enter passphrase: ")
bitcoind.walletpassphrase(passphrase, 5)
except Exception:
    sys.stderr.write("Wrong passphrase\n")
info = bitcoind.getinfo()
return int(info['unlocked_until']) > time.time()
def list_available(bitcoind):
address_summary = dict()
address_to_account = dict()
for info in bitcoind.listreceivedbyaddress(0):
address_to_account[info["address"]] = info["account"]
unspent = bitcoind.listunspent(0)
for output in unspent:
# listunspent doesn't give addresses, so:
rawtx = bitcoind.getrawtransaction(output['txid'], 1)
vout = rawtx["vout"][output['vout']]
pk = vout["scriptPubKey"]
# This code only deals with ordinary pay-to-bitcoin-address
# or pay-to-script-hash outputs right now; anything exotic is ignored.
if pk["type"] != "pubkeyhash" and pk["type"] != "scripthash":
continue
address = pk["addresses"][0]
if address in address_summary:
address_summary[address]["total"] += vout["value"]
address_summary[address]["outputs"].append(output)
else:
address_summary[address] = {
"total" : vout["value"],
"outputs" : [output],
"account" : address_to_account.get(address, "")
}
return address_summary
def select_coins(needed, inputs):
# Feel free to improve this, this is good enough for my simple needs:
outputs = []
have = Decimal("0.0")
n = 0
while have < needed and n < len(inputs):
outputs.append({ "txid":inputs[n]["txid"], "vout":inputs[n]["vout"]})
have += inputs[n]["amount"]
n += 1
return (outputs, have-needed)
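# Example (illustrative): the greedy selection above walks the inputs in order
# until the needed amount is covered, returning (selected outputs, change):
#
#   ins = [{"txid": "aa", "vout": 0, "amount": Decimal("0.5")},
#          {"txid": "bb", "vout": 1, "amount": Decimal("0.8")}]
#   outputs, change = select_coins(Decimal("1.0"), ins)
#   # -> both inputs selected, change == Decimal("0.3")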
def create_tx(bitcoind, fromaddresses, toaddress, amount, fee):
all_coins = list_available(bitcoind)
total_available = Decimal("0.0")
needed = amount+fee
potential_inputs = []
for addr in fromaddresses:
if addr not in all_coins:
continue
potential_inputs.extend(all_coins[addr]["outputs"])
total_available += all_coins[addr]["total"]
if total_available < needed:
sys.stderr.write("Error, only %f BTC available, need %f\n"%(total_available, needed));
sys.exit(1)
#
# Note:
# Python's json/jsonrpc modules have inconsistent support for Decimal numbers.
# Instead of wrestling with getting json.dumps() (used by jsonrpc) to encode
# Decimals, I'm casting amounts to float before sending them to bitcoind.
#
outputs = { toaddress : float(amount) }
(inputs, change_amount) = select_coins(needed, potential_inputs)
if change_amount > BASE_FEE: # don't bother with zero or tiny change
change_address = fromaddresses[-1]
if change_address in outputs:
outputs[change_address] += float(change_amount)
else:
outputs[change_address] = float(change_amount)
rawtx = bitcoind.createrawtransaction(inputs, outputs)
signed_rawtx = bitcoind.signrawtransaction(rawtx)
if not signed_rawtx["complete"]:
sys.stderr.write("signrawtransaction failed\n")
sys.exit(1)
txdata = signed_rawtx["hex"]
return txdata
def compute_amount_in(bitcoind, txinfo):
result = Decimal("0.0")
for vin in txinfo['vin']:
in_info = bitcoind.getrawtransaction(vin['txid'], 1)
vout = in_info['vout'][vin['vout']]
result = result + vout['value']
return result
def compute_amount_out(txinfo):
result = Decimal("0.0")
for vout in txinfo['vout']:
result = result + vout['value']
return result
def sanity_test_fee(bitcoind, txdata_hex, max_fee):
class FeeError(RuntimeError):
pass
try:
txinfo = bitcoind.decoderawtransaction(txdata_hex)
total_in = compute_amount_in(bitcoind, txinfo)
total_out = compute_amount_out(txinfo)
fee = total_in - total_out  # the original referenced an undefined `fee` below
if fee > max_fee:
    raise FeeError("Rejecting transaction, unreasonable fee of "+str(fee))
tx_size = len(txdata_hex)/2
kb = tx_size/1000  # integer division rounds down
if kb > 1 and fee < BASE_FEE:
    raise FeeError("Rejecting no-fee transaction, larger than 1000 bytes")
if total_in < 0.01 and fee < BASE_FEE:
    raise FeeError("Rejecting no-fee, tiny-amount transaction")
# Exercise for the reader: compute transaction priority, and
# warn if this is a very-low-priority transaction
except FeeError as err:
sys.stderr.write((str(err)+"\n"))
sys.exit(1)
def main():
import optparse
parser = optparse.OptionParser(usage="%prog [options]")
parser.add_option("--from", dest="fromaddresses", default=None,
help="addresses to get bitcoins from")
parser.add_option("--to", dest="to", default=None,
help="address to get send bitcoins to")
parser.add_option("--amount", dest="amount", default=None,
help="amount to send")
parser.add_option("--fee", dest="fee", default="0.0",
help="fee to include")
parser.add_option("--datadir", dest="datadir", default=determine_db_dir(),
help="location of bitcoin.conf file with RPC username/password (default: %default)")
parser.add_option("--testnet", dest="testnet", default=False, action="store_true",
help="Use the test network")
parser.add_option("--dry_run", dest="dry_run", default=False, action="store_true",
help="Don't broadcast the transaction, just create and print the transaction data")
(options, args) = parser.parse_args()
check_json_precision()
config = read_bitcoin_config(options.datadir)
if options.testnet: config['testnet'] = True
bitcoind = connect_JSON(config)
if options.amount is None:
address_summary = list_available(bitcoind)
for address,info in address_summary.iteritems():
n_transactions = len(info['outputs'])
if n_transactions > 1:
print("%s %.8f %s (%d transactions)"%(address, info['total'], info['account'], n_transactions))
else:
print("%s %.8f %s"%(address, info['total'], info['account']))
else:
fee = Decimal(options.fee)
amount = Decimal(options.amount)
while not unlock_wallet(bitcoind):
pass # Keep asking for passphrase until they get it right
txdata = create_tx(bitcoind, options.fromaddresses.split(","), options.to, amount, fee)
sanity_test_fee(bitcoind, txdata, amount*Decimal("0.01"))
if options.dry_run:
print(txdata)
else:
txid = bitcoind.sendrawtransaction(txdata)
print(txid)
if __name__ == '__main__':
main()
|
adgaudio/mutable_lru_cache
|
refs/heads/master
|
lru_cache_pandas.py
|
1
|
import pandas as pd
# NOTE: lru_cache_factory is assumed to be provided by the sibling lru_cache
# module in this repository (it builds an LRU cache decorator around a custom
# argument-hashing function); the original file omitted the import.
from lru_cache import lru_cache_factory
def _pd_hash_on_indexes(arg):
    """
    Hash pandas objects to an immutable key based on their axes, without
    modifying the objects.
    A pandas.DataFrame is identified by (index, columns); a pandas.Series,
    which has no columns attribute, is identified by its index alone.
    This means that if values have been overwritten (due to a mutable or
    "inplace" operation), this cache function will ignore the changes and
    not update the cache.
    """
    if isinstance(arg, pd.DataFrame):
        return (tuple(arg.index), tuple(arg.columns))
    elif isinstance(arg, pd.Series):
        return tuple(arg.index)
    else:
        return arg
pd_lru_cache = lru_cache_factory(_pd_hash_on_indexes)
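# Usage sketch (assumes lru_cache_factory returns a decorator-producing cache,
# as in the sibling lru_cache module; `maxsize` is an assumed parameter):
#
#   @pd_lru_cache(maxsize=32)
#   def expensive(df):
#       return df.describe()
#
# Two DataFrames with identical (index, columns) hash to the same key, so an
# in-place change to the values will NOT invalidate a cached result.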
|
acshi/osf.io
|
refs/heads/develop
|
scripts/analytics/run_keen_snapshots.py
|
6
|
from framework.celery_tasks import app as celery_app
from scripts.analytics.base import BaseAnalyticsHarness
from scripts.analytics.addon_snapshot import AddonSnapshot
class SnapshotHarness(BaseAnalyticsHarness):
@property
def analytics_classes(self):
return [AddonSnapshot]
@celery_app.task(name='scripts.analytics.run_keen_snapshots')
def run_main():
SnapshotHarness().main(command_line=False)
if __name__ == '__main__':
SnapshotHarness().main()
|
joequery/django
|
refs/heads/master
|
django/contrib/sites/requests.py
|
695
|
from __future__ import unicode_literals
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class RequestSite(object):
"""
A class that shares the primary interface of Site (i.e., it has
``domain`` and ``name`` attributes) but gets its data from a Django
HttpRequest object rather than from a database.
The save() and delete() methods raise NotImplementedError.
"""
def __init__(self, request):
self.domain = self.name = request.get_host()
def __str__(self):
return self.domain
def save(self, force_insert=False, force_update=False):
raise NotImplementedError('RequestSite cannot be saved.')
def delete(self):
raise NotImplementedError('RequestSite cannot be deleted.')
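# Usage sketch (illustrative): RequestSite is a drop-in for Site when the
# sites framework is not installed or no Site row matches the request:
#
#   def my_view(request):
#       site = RequestSite(request)
#       return HttpResponse("Serving %s" % site.domain)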
|
tstirrat15/exercism-python-responses
|
refs/heads/master
|
series/series_test.py
|
20
|
"""Tests for the series exercise
Implementation note:
The slices function should raise a ValueError with a meaningful error
message if its length argument doesn't fit the series.
"""
import unittest
from series import slices
class SeriesTest(unittest.TestCase):
def test_slices_of_one(self):
self.assertEqual([[0], [1], [2], [3], [4]],
slices("01234", 1))
def test_slices_of_two(self):
self.assertEqual([[9, 7], [7, 8], [8, 6], [6, 7],
[7, 5], [5, 6], [6, 4]],
slices("97867564", 2))
def test_slices_of_three(self):
self.assertEqual([[9, 7, 8], [7, 8, 6], [8, 6, 7],
[6, 7, 5], [7, 5, 6], [5, 6, 4]],
slices("97867564", 3))
def test_slices_of_four(self):
self.assertEqual([[0, 1, 2, 3], [1, 2, 3, 4]],
slices("01234", 4))
def test_slices_of_five(self):
self.assertEqual([[0, 1, 2, 3, 4]],
slices("01234", 5))
def test_overly_long_slice(self):
with self.assertRaises(ValueError):
slices("012", 4)
def test_overly_short_slice(self):
with self.assertRaises(ValueError):
slices("01234", 0)
if __name__ == '__main__':
unittest.main()
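# Reference sketch of a series.py that would satisfy the tests above (one
# possible implementation, not the exercise's canonical solution):
def slices(series, length):
    """Return all contiguous slices of `length` digits, as lists of ints."""
    if not 0 < length <= len(series):
        raise ValueError("slice length must be between 1 and the series length")
    digits = [int(ch) for ch in series]
    return [digits[i:i + length] for i in range(len(digits) - length + 1)]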
|
dataxu/ansible
|
refs/heads/dx-stable-2.5
|
lib/ansible/modules/cloud/vultr/vr_user.py
|
23
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2017, René Moser <mail@renemoser.net>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vr_user
short_description: Manages users on Vultr.
description:
- Create, update and remove users.
version_added: "2.5"
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the user
required: true
email:
description:
- Email of the user.
- Required if C(state=present).
password:
description:
- Password of the user.
- Only considered while creating a user or when C(force=yes).
force:
description:
- Password will only be changed when C(force=yes) is set.
default: no
choices: [ yes, no ]
api_enabled:
description:
- Whether the API is enabled or not.
default: yes
choices: [ yes, no ]
acls:
description:
- List of ACLs this user should have, see U(https://www.vultr.com/api/#user_user_list).
- Required if C(state=present).
- One or more choices from the list; some depend on each other.
choices:
- manage_users
- subscriptions
- provisioning
- billing
- support
- abuse
- dns
- upgrade
state:
description:
- State of the user.
default: present
choices: [ present, absent ]
extends_documentation_fragment: vultr
'''
EXAMPLES = r'''
- name: Ensure a user exists
local_action:
module: vr_user
name: john
email: john.doe@example.com
password: s3cr3t
acls:
- upgrade
- dns
- manage_users
- subscriptions
- name: Remove a user
local_action:
module: vr_user
name: john
state: absent
'''
RETURN = r'''
---
vultr_api:
description: Response from Vultr API with a few additions/modifications
returned: success
type: complex
contains:
api_account:
description: Account used in the ini file to select the key
returned: success
type: string
sample: default
api_timeout:
description: Timeout used for the API requests
returned: success
type: int
sample: 60
api_retries:
description: Maximum number of retries for the API requests
returned: success
type: int
sample: 5
api_endpoint:
description: Endpoint used for the API requests
returned: success
type: string
sample: "https://api.vultr.com"
vultr_user:
description: Response from Vultr API
returned: success
type: complex
contains:
id:
description: ID of the user.
returned: success
type: string
sample: 5904bc6ed9234
api_key:
description: API key of the user.
returned: only after resource was created
type: string
sample: 567E6K567E6K567E6K567E6K567E6K
name:
description: Name of the user.
returned: success
type: string
sample: john
email:
description: Email of the user.
returned: success
type: string
sample: "john@exmaple.com"
api_enabled:
description: Whether the API is enabled or not.
returned: success
type: bool
sample: true
acls:
description: List of ACLs of the user.
returned: success
type: list
sample: [manage_users, support, upgrade]
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vultr import (
Vultr,
vultr_argument_spec,
)
ACLS = [
'manage_users',
'subscriptions',
'provisioning',
'billing',
'support',
'abuse',
'dns',
'upgrade',
]
class AnsibleVultrUser(Vultr):
def __init__(self, module):
super(AnsibleVultrUser, self).__init__(module, "vultr_user")
self.returns = {
'USERID': dict(key='id'),
'name': dict(),
'email': dict(),
'api_enabled': dict(convert_to='bool'),
'acls': dict(),
'api_key': dict()
}
def _common_args(self):
return {
'name': self.module.params.get('name'),
'email': self.module.params.get('email'),
'acls': self.module.params.get('acls'),
'password': self.module.params.get('password'),
'api_enabled': self.get_yes_or_no('api_enabled'),
}
def get_user(self):
users = self.api_query(path="/v1/user/list")
for user in users or []:
if user.get('name') == self.module.params.get('name'):
return user
return {}
def present_user(self):
# Choices with list
acls = self.module.params.get('acls')
for acl in acls or []:
if acl not in ACLS:
self.fail_json(msg='value of acls must be one or more of: %s, got: %s' % (ACLS, acls))
user = self.get_user()
if not user:
user = self._create_user(user)
else:
user = self._update_user(user)
return user
def _has_changed(self, user, data):
for k, v in data.items():
if k not in user:
continue
elif isinstance(v, list):
for i in v:
if i not in user[k]:
return True
elif data[k] != user[k]:
return True
return False
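# Example (illustrative): list parameters are compared by membership, so an
# existing user holding acls ['dns', 'support'] with requested acls ['dns']
# is considered unchanged, while a request adding 'upgrade' reports a change.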
def _create_user(self, user):
self.module.fail_on_missing_params(required_params=['password'])
self.result['changed'] = True
data = self._common_args()
self.result['diff']['before'] = {}
self.result['diff']['after'] = data
if not self.module.check_mode:
user = self.api_query(
path="/v1/user/create",
method="POST",
data=data
)
user.update(self.get_user())
return user
def _update_user(self, user):
data = self._common_args()
data.update({
'USERID': user['USERID'],
})
force = self.module.params.get('force')
if not force:
del data['password']
if force or self._has_changed(user=user, data=data):
self.result['changed'] = True
self.result['diff']['before'] = user
self.result['diff']['after'] = user.copy()
self.result['diff']['after'].update(data)
if not self.module.check_mode:
self.api_query(
path="/v1/user/update",
method="POST",
data=data
)
user = self.get_user()
return user
def absent_user(self):
user = self.get_user()
if user:
self.result['changed'] = True
data = {
'USERID': user['USERID'],
}
self.result['diff']['before'] = user
self.result['diff']['after'] = {}
if not self.module.check_mode:
self.api_query(
path="/v1/user/delete",
method="POST",
data=data
)
return user
def main():
argument_spec = vultr_argument_spec()
argument_spec.update(dict(
name=dict(required=True),
email=dict(),
password=dict(no_log=True),
force=dict(type='bool', default=False),
api_enabled=dict(type='bool', default=True),
acls=dict(type='list', aliases=['acl']),
state=dict(choices=['present', 'absent'], default='present'),
))
module = AnsibleModule(
argument_spec=argument_spec,
required_if=[
('state', 'present', ['email', 'acls']),
],
supports_check_mode=True,
)
vr_user = AnsibleVultrUser(module)
if module.params.get('state') == "absent":
user = vr_user.absent_user()
else:
user = vr_user.present_user()
result = vr_user.get_result(user)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
tumbl3w33d/ansible
|
refs/heads/devel
|
test/units/modules/network/fortios/test_fortios_user_setting.py
|
21
|
# Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
from mock import ANY
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
try:
from ansible.modules.network.fortios import fortios_user_setting
except ImportError:
pytest.skip("Could not load required modules for testing", allow_module_level=True)
@pytest.fixture(autouse=True)
def connection_mock(mocker):
connection_class_mock = mocker.patch('ansible.modules.network.fortios.fortios_user_setting.Connection')
return connection_class_mock
# NOTE: the fixture function object itself is passed here; it serves as an
# opaque stand-in connection for these unit tests.
fos_instance = FortiOSHandler(connection_mock)
def test_user_setting_creation(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'user_setting': {
'auth_blackout_time': '3',
'auth_ca_cert': 'test_value_4',
'auth_cert': 'test_value_5',
'auth_http_basic': 'enable',
'auth_invalid_max': '7',
'auth_lockout_duration': '8',
'auth_lockout_threshold': '9',
'auth_portal_timeout': '10',
'auth_secure_http': 'enable',
'auth_src_mac': 'enable',
'auth_ssl_allow_renegotiation': 'enable',
'auth_timeout': '14',
'auth_timeout_type': 'idle-timeout',
'auth_type': 'http',
'radius_ses_timeout_act': 'hard-timeout'
},
'vdom': 'root'}
is_error, changed, response = fortios_user_setting.fortios_user(input_data, fos_instance)
expected_data = {
'auth-blackout-time': '3',
'auth-ca-cert': 'test_value_4',
'auth-cert': 'test_value_5',
'auth-http-basic': 'enable',
'auth-invalid-max': '7',
'auth-lockout-duration': '8',
'auth-lockout-threshold': '9',
'auth-portal-timeout': '10',
'auth-secure-http': 'enable',
'auth-src-mac': 'enable',
'auth-ssl-allow-renegotiation': 'enable',
'auth-timeout': '14',
'auth-timeout-type': 'idle-timeout',
'auth-type': 'http',
'radius-ses-timeout-act': 'hard-timeout'
}
set_method_mock.assert_called_with('user', 'setting', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
def test_user_setting_creation_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'user_setting': {
'auth_blackout_time': '3',
'auth_ca_cert': 'test_value_4',
'auth_cert': 'test_value_5',
'auth_http_basic': 'enable',
'auth_invalid_max': '7',
'auth_lockout_duration': '8',
'auth_lockout_threshold': '9',
'auth_portal_timeout': '10',
'auth_secure_http': 'enable',
'auth_src_mac': 'enable',
'auth_ssl_allow_renegotiation': 'enable',
'auth_timeout': '14',
'auth_timeout_type': 'idle-timeout',
'auth_type': 'http',
'radius_ses_timeout_act': 'hard-timeout'
},
'vdom': 'root'}
is_error, changed, response = fortios_user_setting.fortios_user(input_data, fos_instance)
expected_data = {
'auth-blackout-time': '3',
'auth-ca-cert': 'test_value_4',
'auth-cert': 'test_value_5',
'auth-http-basic': 'enable',
'auth-invalid-max': '7',
'auth-lockout-duration': '8',
'auth-lockout-threshold': '9',
'auth-portal-timeout': '10',
'auth-secure-http': 'enable',
'auth-src-mac': 'enable',
'auth-ssl-allow-renegotiation': 'enable',
'auth-timeout': '14',
'auth-timeout-type': 'idle-timeout',
'auth-type': 'http',
'radius-ses-timeout-act': 'hard-timeout'
}
set_method_mock.assert_called_with('user', 'setting', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
def test_user_setting_idempotent(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'DELETE', 'http_status': 404}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'user_setting': {
'auth_blackout_time': '3',
'auth_ca_cert': 'test_value_4',
'auth_cert': 'test_value_5',
'auth_http_basic': 'enable',
'auth_invalid_max': '7',
'auth_lockout_duration': '8',
'auth_lockout_threshold': '9',
'auth_portal_timeout': '10',
'auth_secure_http': 'enable',
'auth_src_mac': 'enable',
'auth_ssl_allow_renegotiation': 'enable',
'auth_timeout': '14',
'auth_timeout_type': 'idle-timeout',
'auth_type': 'http',
'radius_ses_timeout_act': 'hard-timeout'
},
'vdom': 'root'}
is_error, changed, response = fortios_user_setting.fortios_user(input_data, fos_instance)
expected_data = {
'auth-blackout-time': '3',
'auth-ca-cert': 'test_value_4',
'auth-cert': 'test_value_5',
'auth-http-basic': 'enable',
'auth-invalid-max': '7',
'auth-lockout-duration': '8',
'auth-lockout-threshold': '9',
'auth-portal-timeout': '10',
'auth-secure-http': 'enable',
'auth-src-mac': 'enable',
'auth-ssl-allow-renegotiation': 'enable',
'auth-timeout': '14',
'auth-timeout-type': 'idle-timeout',
'auth-type': 'http',
'radius-ses-timeout-act': 'hard-timeout'
}
set_method_mock.assert_called_with('user', 'setting', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 404
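# Note (illustrative): the module treats a 404 from the underlying set call as
# "already in the desired state", which is why this idempotency test expects
# is_error False and changed False despite the raw 'error' response above.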
def test_user_setting_filter_foreign_attributes(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'user_setting': {
'random_attribute_not_valid': 'tag',
'auth_blackout_time': '3',
'auth_ca_cert': 'test_value_4',
'auth_cert': 'test_value_5',
'auth_http_basic': 'enable',
'auth_invalid_max': '7',
'auth_lockout_duration': '8',
'auth_lockout_threshold': '9',
'auth_portal_timeout': '10',
'auth_secure_http': 'enable',
'auth_src_mac': 'enable',
'auth_ssl_allow_renegotiation': 'enable',
'auth_timeout': '14',
'auth_timeout_type': 'idle-timeout',
'auth_type': 'http',
'radius_ses_timeout_act': 'hard-timeout'
},
'vdom': 'root'}
is_error, changed, response = fortios_user_setting.fortios_user(input_data, fos_instance)
expected_data = {
'auth-blackout-time': '3',
'auth-ca-cert': 'test_value_4',
'auth-cert': 'test_value_5',
'auth-http-basic': 'enable',
'auth-invalid-max': '7',
'auth-lockout-duration': '8',
'auth-lockout-threshold': '9',
'auth-portal-timeout': '10',
'auth-secure-http': 'enable',
'auth-src-mac': 'enable',
'auth-ssl-allow-renegotiation': 'enable',
'auth-timeout': '14',
'auth-timeout-type': 'idle-timeout',
'auth-type': 'http',
'radius-ses-timeout-act': 'hard-timeout'
}
set_method_mock.assert_called_with('user', 'setting', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
|
jandom/rdkit
|
refs/heads/master
|
rdkit/sping/Pyart/__init__.py
|
18
|
# sping:: pyart
from pidPyart import *
|
uzh/vm-mad
|
refs/heads/master
|
vmmad/simul.py
|
1
|
#! /usr/bin/env python
#
"""
Simulate an `Orchestrator` run given some parameters.
"""
# Copyright (C) 2011, 2012 ETH Zurich and University of Zurich. All rights reserved.
#
# Authors:
# Christian Panse <cp@fgcz.ethz.ch>
# Riccardo Murri <riccardo.murri@gmail.com>
# Tyanko Aleksiev <tyanko.alexiev@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import absolute_import
__docformat__ = 'reStructuredText'
__version__ = "1.0dev (SVN $Revision$)"
# stdlib imports
import argparse
from copy import copy
import csv
import os
import sys
import time
import types
# local imports
from vmmad import log
from vmmad.batchsys.replay import JobsFromFile
from vmmad.provider.libcloud import DummyCloud
from vmmad.orchestrator import Orchestrator, JobInfo, VmInfo
class OrchestratorSimulation(Orchestrator, DummyCloud):
def __init__(self, max_vms, max_delta, max_idle, startup_delay,
output_file, csv_file, start_time, time_interval, cluster_size):
# Convert starting time to UNIX time
if start_time is not None and isinstance(start_time, types.StringTypes):
start_time = time.mktime(time.strptime(start_time, "%Y-%m-%dT%H:%M:%S"))
# implement the `Cloud` interface to simulate a cloud provider
DummyCloud.__init__(self, '1', '1')
# init the Orchestrator part, using `self` as cloud provider and batch system interface
Orchestrator.__init__(
self,
cloud=self,
batchsys=JobsFromFile(csv_file, self.time, start_time),
max_vms=max_vms,
max_delta=max_delta,
vm_start_timeout=time_interval*max(startup_delay, 10))
# make cluster nodes already available at start
self.cluster_size = cluster_size
for n in xrange(cluster_size):
# NOTE: use `Orchestrator.new_vm` here, as `self.new_vm` creates a *real VM*
nodeid = ('clusternode-%d' % n)
node = Orchestrator.new_vm(self,
vmid=nodeid,
state=VmInfo.READY,
nodename=nodeid,
ever_running=True)
self.vms[nodeid] = node
# Set simulation settings
self.max_idle = max_idle
self.startup_delay = startup_delay
self.output_file = open(output_file, "wb")
self.writer = csv.writer(self.output_file, delimiter=',')
self.writer.writerow(
['#TimeStamp', 'Pending Jobs', 'Running Jobs', 'Started VMs', 'Idle VMS'])
self.time_interval = int(time_interval)
self._next_row = None
# info about running VMs
self._vmid = 0
# no running jobs at the onset
self._running = 0
# if `starting_time` has not been set, then use earliest job
# submission time as starting point
self.starting_time = self.batchsys.start_time - self.time_interval
log.info("Starting simulation at %s",
time.strftime('%Y-%m-%dT%H:%M:%S', time.gmtime(self.starting_time)))
def update_job_status(self):
# do regular work
Orchestrator.update_job_status(self)
# count running jobs
self._running = len([ job for job in self.jobs.itervalues()
if job.state == JobInfo.RUNNING ])
# simulate 'ready' notification from VMs
starting_vms = [ vm for vm in self.vms.values() if vm.state == VmInfo.STARTING ]
for vm in starting_vms:
# we use `vm.last_idle` as a countdown to the `READY` state for VMs:
# it is initialized to `-startup_delay` and incremented at every pass
if vm.last_idle >= 0:
nodename = ("vm-%s" % vm.vmid)
self.vm_is_ready(vm.auth, nodename)
else:
vm.last_idle += 1
# simulate SGE scheduler starting a new job
ready_vms = [ vm for vm in self.vms.values() if vm.state == VmInfo.READY ]
for vm in ready_vms:
if not vm.jobs:
if not self.candidates:
break
job = self.candidates.pop()
job.state = JobInfo.RUNNING
job.exec_node_name = vm.nodename
job.running_at = self.time()
self._running += 1
vm.jobs.add(job.jobid)
log.info("Job %s just started running on node %s (%s).",
job.jobid, vm.vmid, vm.nodename)
def before(self):
# XXX: this only works with `JobsFromFile`!
if len(self.jobs) == 0 and len(self.batchsys.future_jobs) == 0:
log.info("No more jobs, stopping here")
self.output_file.close()
sys.exit(0)
vms = [ vm for vm in self.vms.values() if not vm.ever_running ]
vm_count = len(vms)
starting_vm_count = len([ vm for vm in vms if vm.state == VmInfo.STARTING ])
ready_vms_count = len([ vm for vm in vms if vm.state == VmInfo.READY ])
stopping_vms_count = len([ vm for vm in vms if vm.state == VmInfo.STOPPING ])
idle_vm_count = len([ vm for vm in vms if vm.last_idle > 0 ])
self.writer.writerow(
# timestamp, pending jobs, running jobs, started VMs, idle VMs,
[self.time(), len(self.candidates), self._running, len(self.vms)-self.cluster_size, idle_vm_count])
log.info(
"At time %d: pending jobs %d, running jobs %d, total started VMs %d,"
" starting VMs %d, ready VMs %d, idle VMs %d, stopping VMs %d",
self.time(), len(self.candidates), self._running, len(self.vms),
starting_vm_count, ready_vms_count, idle_vm_count, stopping_vms_count)
def time(self):
"""
Return the current time in the simulation as UNIX epoch.
"""
return self.starting_time + self.cycle * self.time_interval
def new_vm(self, **attrs):
    # NOTE: extra attributes from the caller are ignored here; the simulation
    # always supplies its own defaults for new VMs.
    return Orchestrator.new_vm(self, ever_running=False, last_idle=-self.startup_delay)
##
## policy implementation interface
##
def is_cloud_candidate(self, job):
# every job is a candidate in this simulation
return True
def is_new_vm_needed(self):
    return len(self.candidates) > 2 * len(self.vms)
def can_vm_be_stopped(self, vm):
    return (not vm.ever_running
            and vm.last_idle > self.max_idle
            and len(vm.jobs) == 0)
##
## (fake) cloud provider interface
##
def start_vm(self, vm):
DummyCloud.start_vm(self, vm)
def update_vm_status(self, vms):
DummyCloud.update_vm_status(self, vms)
def stop_vm(self, vm):
assert not vm.ever_running, (
"Request to stop VM %s which is marked as 'ever running'"
% vm.vmid)
DummyCloud.stop_vm(self, vm)
return True
if "__main__" == __name__:
parser = argparse.ArgumentParser(description='Simulates a cloud orchestrator')
parser.add_argument('--max-vms', '-mv', metavar='N', dest="max_vms", default=10, type=int, help="Maximum number of VMs to be started, default is %(default)s")
parser.add_argument('--max-delta', '-md', metavar='N', dest="max_delta", default=1, type=int, help="Cap the number of VMs that can be started or stopped in a single orchestration cycle. Default is %(default)d.")
parser.add_argument('--max-idle', '-mi', metavar='NUM_SECS', dest="max_idle", default=7200, type=int, help="Maximum idle time (in seconds) before switching off a VM, default is %(default)s")
parser.add_argument('--startup-delay', '-s', metavar='NUM_SECS', dest="startup_delay", default=60, type=int, help="Time (in seconds) delay before a started VM is READY. Default is %(default)s")
parser.add_argument('--csv-file', '-csvf', metavar='String', dest="csv_file", default="accounting.csv", help="File containing the CSV job accounting data, default: %(default)s")
parser.add_argument('--output-file', '-o', metavar='String', dest="output_file", default="main_sim.txt", help="File name where the output of the simulation will be stored, default: %(default)s")
parser.add_argument('--cluster-size', '-cs', metavar='NUM_CPUS', dest="cluster_size", default=20, type=int, help="Number of nodes in the simulated starting cluster, default: %(default)s")
parser.add_argument('--start-time', '-stime', metavar='String', dest="start_time", default=-1, help="Start time for the simulation, default: %(default)s")
parser.add_argument('--time-interval', '-timei', metavar='NUM_SECS', type=int, dest="time_interval", default=3600, help="Interval in seconds used as the simulation time step when parsing jobs from the CSV file, default: %(default)s")
parser.add_argument('--version', '-V', action='version',
version=("%(prog)s version " + __version__))
args = parser.parse_args()
OrchestratorSimulation(args.max_vms, args.max_delta, args.max_idle, args.startup_delay, args.output_file, args.csv_file, args.start_time, args.time_interval, args.cluster_size).run(0)
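# Example invocation (illustrative):
#   python simul.py --csv-file accounting.csv --max-vms 20 --max-idle 3600 \
#       --startup-delay 300 --time-interval 3600 --output-file main_sim.txt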
|
apark263/tensorflow
|
refs/heads/master
|
tensorflow/contrib/tensorrt/python/trt_convert_test.py
|
1
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities to test TF-TensorRT integration."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# pylint: disable=unused-import
from tensorflow.compiler.tf2tensorrt.python.ops import trt_ops
# pylint: enable=unused-import
from tensorflow.contrib.tensorrt.python import trt_convert
from tensorflow.core.framework import graph_pb2
from tensorflow.core.protobuf import config_pb2
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import importer
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model import loader
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils
from tensorflow.python.tools import saved_model_utils
class TrtConvertTest(test_util.TensorFlowTestCase):
"""Class to test Tensorflow-TensorRT integration python API."""
def testGetTensorrtRewriterConfig(self):
"""Test case for trt_convert.get_tensorrt_rewriter_config()."""
rewriter_cfg = trt_convert.get_tensorrt_rewriter_config(
rewriter_config=None,
max_batch_size=128,
max_workspace_size_bytes=1234,
precision_mode="INT8",
minimum_segment_size=10,
is_dynamic_op=True,
maximum_cached_engines=2,
cached_engine_batches=[1, 128])
self.assertEqual(["constfold", "layout", "constfold"],
rewriter_cfg.optimizers)
self.assertEqual(rewriter_config_pb2.RewriterConfig.ONE,
rewriter_cfg.meta_optimizer_iterations)
trt_optimizer = None
for optimizer in rewriter_cfg.custom_optimizers:
if optimizer.name == "TensorRTOptimizer":
self.assertIsNone(trt_optimizer)
trt_optimizer = optimizer
self.assertIsNotNone(trt_optimizer)
for key in [
"minimum_segment_size", "max_batch_size", "is_dynamic_op",
"max_workspace_size_bytes", "precision_mode", "maximum_cached_engines",
"cached_engine_batches"
]:
self.assertIn(key, trt_optimizer.parameter_map)
self.assertEqual(10, trt_optimizer.parameter_map["minimum_segment_size"].i)
self.assertEqual(128, trt_optimizer.parameter_map["max_batch_size"].i)
self.assertEqual(True, trt_optimizer.parameter_map["is_dynamic_op"].b)
self.assertEqual(1234,
trt_optimizer.parameter_map["max_workspace_size_bytes"].i)
self.assertEqual(
trt_convert._to_bytes("INT8"),
trt_optimizer.parameter_map["precision_mode"].s)
self.assertEqual(2, trt_optimizer.parameter_map["maximum_cached_engines"].i)
self.assertEqual(
[1, 128], trt_optimizer.parameter_map["cached_engine_batches"].list.i)
def _GetConfigProto(self):
"""Get ConfigProto for session creation."""
config = config_pb2.ConfigProto(
gpu_options=config_pb2.GPUOptions(allow_growth=True))
return config
def _GetGraph(self):
"""Get the graph for testing."""
g = ops.Graph()
with g.as_default():
with g.device("/GPU:0"):
inp = array_ops.placeholder(
dtype=dtypes.float32, shape=[None, 1, 1], name="input")
var = variables.VariableV1([[[1.0]]], dtype=dtypes.float32, name="v1")
add = inp + var.value()
mul = inp * add
add = mul + add
out = array_ops.identity(add, name="output")
return g, var, inp, out
def _GetGraphDef(self):
"""Get the graph def for testing."""
g, var, _, _ = self._GetGraph()
with self.session(graph=g, config=self._GetConfigProto()) as sess:
sess.run(var.initializer)
graph_def = graph_util.convert_variables_to_constants(
sess, g.as_graph_def(add_shapes=True), ["output"])
node_name_to_op = {node.name: node.op for node in graph_def.node}
self.assertEqual({
"v1": "Const",
"v1/read": "Identity",
"input": "Placeholder",
"add": "Add",
"mul": "Mul",
"add_1": "Add",
"output": "Identity"
}, node_name_to_op)
return graph_def
def _WriteInputSavedModel(self, input_saved_model_dir):
"""Write the saved model as an input for testing."""
g, var, inp, out = self._GetGraph()
signature_def = signature_def_utils.build_signature_def(
inputs={"myinput": utils.build_tensor_info(inp)},
outputs={"myoutput": utils.build_tensor_info(out)},
method_name=signature_constants.PREDICT_METHOD_NAME)
saved_model_builder = builder.SavedModelBuilder(input_saved_model_dir)
with self.session(graph=g, config=self._GetConfigProto()) as sess:
sess.run(var.initializer)
saved_model_builder.add_meta_graph_and_variables(
sess, [tag_constants.SERVING],
signature_def_map={"mypredict": signature_def})
saved_model_builder.save()
def _TestCreateInferenceGraph(self,
input_saved_model_dir=None,
output_saved_model_dir=None):
"""General method to test trt_convert.create_inference_graph()."""
input_graph_def = None if input_saved_model_dir else self._GetGraphDef()
output_graph_def = trt_convert.create_inference_graph(
input_graph_def, ["output"],
input_saved_model_dir=input_saved_model_dir,
output_saved_model_dir=output_saved_model_dir,
session_config=self._GetConfigProto())
graph_defs_to_verify = [output_graph_def]
if output_saved_model_dir is not None:
saved_model_graph_def = saved_model_utils.get_meta_graph_def(
output_saved_model_dir, tag_constants.SERVING).graph_def
self.assertTrue(isinstance(saved_model_graph_def, graph_pb2.GraphDef))
graph_defs_to_verify.append(saved_model_graph_def)
for graph_def in graph_defs_to_verify:
node_name_to_op = {node.name: node.op for node in graph_def.node}
self.assertEqual({
"input": "Placeholder",
"TRTEngineOp_0": "TRTEngineOp",
"output": "Identity"
}, node_name_to_op)
def testCreateInferenceGraph_BasicConversion(self):
"""Test case for trt_convert.create_inference_graph()."""
if not trt_convert.is_tensorrt_enabled():
return
# Use GraphDef as input.
self._TestCreateInferenceGraph()
# Use SavedModel as input.
tmp_dir = self.get_temp_dir()
input_saved_model_dir = os.path.join(tmp_dir, "in_dir1")
output_saved_model_dir = os.path.join(tmp_dir, "out_dir1")
self._WriteInputSavedModel(input_saved_model_dir)
self._TestCreateInferenceGraph(input_saved_model_dir,
output_saved_model_dir)
def _TestRun(self, sess, batch_size, expect_engine_is_run):
trt_convert.clear_test_values("")
result = sess.run("output:0", feed_dict={"input:0": [[[1.0]]] * batch_size})
self.assertAllEqual([[[4.0]]] * batch_size, result)
execute_engine_test_value = ("done" if expect_engine_is_run else "")
execute_native_segment_test_value = ("" if expect_engine_is_run else "done")
self.assertEqual(
execute_engine_test_value,
trt_convert.get_test_value("TRTEngineOp_0:ExecuteTrtEngine"))
self.assertEqual(
execute_native_segment_test_value,
trt_convert.get_test_value("TRTEngineOp_0:ExecuteNativeSegment"))
def testCreateInferenceGraph_MinimumSegmentSize(self):
if not trt_convert.is_tensorrt_enabled():
return
output_graph_def = trt_convert.create_inference_graph(
self._GetGraphDef(), ["output"],
minimum_segment_size=5,
is_dynamic_op=False)
node_name_to_op = {node.name: node.op for node in output_graph_def.node}
self.assertEqual({
"v1/read": "Const",
"input": "Placeholder",
"add": "Add",
"mul": "Mul",
"add_1": "Add",
"output": "Identity"
}, node_name_to_op)
def testCreateInferenceGraph_DynamicOp(self):
if not trt_convert.is_tensorrt_enabled():
return
trt_convert.enable_test_value()
tmp_dir = self.get_temp_dir()
input_saved_model_dir = os.path.join(tmp_dir, "in_dir2")
output_saved_model_dir = os.path.join(tmp_dir, "out_dir2")
self._WriteInputSavedModel(input_saved_model_dir)
output_graph_def = trt_convert.create_inference_graph(
None,
None,
is_dynamic_op=True,
maximum_cached_engines=2,
input_saved_model_dir=input_saved_model_dir,
output_saved_model_dir=output_saved_model_dir,
session_config=self._GetConfigProto())
# Test the output GraphDef.
with ops.Graph().as_default():
importer.import_graph_def(output_graph_def, name="")
with self.test_session(config=self._GetConfigProto()) as sess:
# Run with batch size 1, a new engine is created and cached.
self._TestRun(sess, 1, True)
# Run with batch size 2, a new engine is created and cached.
self._TestRun(sess, 2, True)
# Run with batch size 3, since the number of cached engines has reached
# the max, it should evict an old engine and create a new one.
self._TestRun(sess, 3, True)
# Test the output SavedModel
with ops.Graph().as_default():
with self.test_session(config=self._GetConfigProto()) as sess:
loader.load(sess, [tag_constants.SERVING], output_saved_model_dir)
# Run with batch size 1, a new engine is created and cached.
self._TestRun(sess, 1, True)
# Run with batch size 2, a new engine is created and cached.
self._TestRun(sess, 2, True)
# Run with batch size 3, since the number of cached engines has reached
# the max, it should evict an old engine and create a new one.
self._TestRun(sess, 3, True)
def testCreateInferenceGraph_StaticOp(self):
if not trt_convert.is_tensorrt_enabled():
return
trt_convert.enable_test_value()
tmp_dir = self.get_temp_dir()
input_saved_model_dir = os.path.join(tmp_dir, "in_dir3")
output_saved_model_dir = os.path.join(tmp_dir, "out_dir3")
self._WriteInputSavedModel(input_saved_model_dir)
output_graph_def = trt_convert.create_inference_graph(
None,
None,
max_batch_size=1,
is_dynamic_op=False,
maximum_cached_engines=2,  # This is a no-op here, added just for testing.
input_saved_model_dir=input_saved_model_dir,
output_saved_model_dir=output_saved_model_dir,
session_config=self._GetConfigProto())
# Test the output GraphDef.
with ops.Graph().as_default():
importer.import_graph_def(output_graph_def, name="")
with self.test_session(config=self._GetConfigProto()) as sess:
# Run with batch size 1, the default engine embedded in the graphdef
# will be used.
self._TestRun(sess, 1, True)
# Run with batch size 2, which exceeds the max_batch_size; it should fall
# back to the native TF function.
self._TestRun(sess, 2, False)
# Test the output SavedModel
with ops.Graph().as_default():
with self.test_session(config=self._GetConfigProto()) as sess:
loader.load(sess, [tag_constants.SERVING], output_saved_model_dir)
# Run with batch size 1, the default engine embedded in the graphdef
# will be used.
self._TestRun(sess, 1, True)
# Run with batch size 2, which exceeds the max_batch_size; it should fall
# back to the native TF function.
self._TestRun(sess, 2, False)
if __name__ == "__main__":
test.main()
|
eloquence/unisubs
|
refs/heads/staging
|
apps/webdriver_testing/pages/site_pages/teams_dir_page.py
|
5
|
#!/usr/bin/env python
import time
from nose.tools import assert_true, assert_false
from webdriver_testing.pages.site_pages import UnisubsPage
class TeamsDirPage(UnisubsPage):
"""
Search page is the main search page stub. Watch Page and Search Results page
"""
_URL = "teams/"
_SEARCH = "form.search input[name='q']"
_SORT = "span.sort_label"
_SORT_OPTION = "div.sort_button ul li a[href*='%s']"
_TEAM = "ul.listing li"
_TEAM_LINK = 'h3 a[href*=%s]'
_TEAM_NAME = 'a'
_TEAM_MEMBERS = 'ul.actions h4'
_TEAM_VIDEOS = 'ul.actions li:nth-child(2)'
_TEAM_DESCRIPTION = 'p'
_TEAM_DESCRIPTOR = '.descriptor'
_NO_MATCHES = 'p.empty'
_NO_MATCH_TEXT = 'Sorry, no teams found.'
_TEAM_BY_POSITION = "ul.listing li:nth-child(%d)"
_INVITE_ERROR = 'div#error h1'
#YOUR TEAMS TAB
_YOUR_URL = "teams/my/"
_LEAVE_URL = "teams/leave_team/%s/"
_LEAVE = "a#leave"
def open_teams_page(self):
self.open_page(self._URL)
def _team_elem(self, team):
"""Given the team's text name, return the css locator string.
"""
self.wait_for_element_present(self._TEAM + " " + self._TEAM_NAME)
teams = self.browser.find_elements_by_css_selector(self._TEAM)
for el in teams:
team_el = el.find_element_by_css_selector(self._TEAM_NAME)
if team == team_el.text:
return self._TEAM_BY_POSITION % (teams.index(el) + 1)
def team_search(self, search_term):
self.clear_text(self._SEARCH)
self.submit_form_text_by_css(self._SEARCH, search_term)
def search_has_no_matches(self):
    return self.is_text_present(self._NO_MATCHES, self._NO_MATCH_TEXT)
def sort(self, order):
"""Sort the teams.
"""
sort_orders = ['date', 'name', 'members']
assert_true(order in sort_orders,
"unknown value for order, expected one of %s" % sort_orders)
self.open_page("teams/?o=%s" % order)
def first_team(self):
return self.teams_on_page()[0]
def last_team(self):
    return self.teams_on_page()[-1]
def all_team_elements(self):
team_elements = self.browser.find_elements_by_css_selector(self._TEAM)
return team_elements
def team_el(self, team):
team_elements = self.all_team_elements()
for el in team_elements:
if el.find_element_by_css_selector(self._TEAM_NAME).text == team:
return el
else:
self.fail("Did not find the team named %s on the page" % team)
def members(self, team):
"""Return the number of members displayed for a team.
"""
element = self.team_el(team)
members = element.find_element_by_css_selector(self._TEAM_MEMBERS).text
return int(members.split()[0])
def videos(self, team):
"""Return the number of videos displayed for a team.
"""
element = self.team_el(team)
videos = element.find_element_by_css_selector(self._TEAM_VIDEOS).text
return int(videos.split()[0])
def teams_on_page(self):
"""Return a list of teams displayed on the page.
"""
teams_list = []
team_name_els = self.browser.find_elements_by_css_selector(
" ".join([self._TEAM, self._TEAM_NAME]))
for el in team_name_els:
teams_list.append(el.text)
return teams_list
def team_displayed(self, team):
team_list = self.teams_on_page()
if team in team_list:
return True
else:
return "Team {0} not found in the list of teams {1}".format(team,
team_list)
def marked_private(self, team):
"""Return True if a team is marked private.
"""
self.team_search(team)
descriptor_text = self.get_text_by_css(self._TEAM_DESCRIPTOR)
if descriptor_text == "Private":
return True
def open_team_with_link(self, team_slug):
"""Open a specific team page, given the team slug.
"""
self.click_by_css(self._TEAM_LINK % team_slug)
# YOUR TEAMS TAB SPECIFIC
def open_my_teams_page(self):
"""Open the teams/my/ url.
"""
self.open_page(self._YOUR_URL)
def open_my_team(self, team=None):
if self._YOUR_URL not in self.browser.current_url:
self.open_my_teams_page()
if not team:
    # join the two selectors; a tuple is not a valid CSS locator
    first_team = " ".join([self._TEAM, self._TEAM_NAME])
    self.click_by_css(first_team)
else:
# _team_elem returns a CSS locator string, not a WebElement
team_el_css = self._team_elem(team)
self.click_by_css(" ".join([team_el_css, self._TEAM_NAME]))
def leave_team(self, team_stub):
self.open_page(self._LEAVE_URL % team_stub)
def leave_team_successful(self):
self.wait_for_element_present(self._SUCCESS_MESSAGE)
if self.is_text_present(self._SUCCESS_MESSAGE,
'You have left this team.'):
return True
def leave_team_failed(self):
if self.is_text_present(self._ERROR_MESSAGE,
'You are the last owner of this team.'):
return True
def _hover_team(self, team):
team_el = self._team_elem(team)
self.hover_by_css(team_el)
def leave_present(self, team):
leave_link = False
self._hover_team(team)
if self.is_element_visible(self._LEAVE):
leave_link = True
return leave_link
def click_leave_link(self, team):
self.click_item_after_hover(self._team_elem(team), self._LEAVE)
self.handle_js_alert('accept')
def invite_error(self):
return self.get_text_by_css(self._INVITE_ERROR)
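# A hedged usage sketch (the constructor signature is assumed from the
# page-object base class, which is not shown here):
#
#   page = TeamsDirPage(testcase)   # hypothetical instantiation
#   page.open_teams_page()
#   page.team_search('literacy')
#   assert page.team_displayed('Literacy Team') is True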
|
ogajduse/spacewalk
|
refs/heads/master
|
client/rhel/rhn-client-tools/src/up2date_client/debUtils.py
|
5
|
# Client code for Update Agent
# Copyright (c) 2011--2016 Red Hat, Inc. Distributed under GPLv2.
#
# Author: Simon Lukasik
# Lukas Durfina
#
import os
import apt
import gettext
t = gettext.translation('rhn-client-tools', fallback=True)
# Python 3 translations don't have a ugettext method
if not hasattr(t, 'ugettext'):
t.ugettext = t.gettext
_ = t.ugettext
# FIXME: After Debian bug 187019 is resolved
def verifyPackages(packages):
    cache = apt.Cache()
    missing_packages = []
    for package in packages:
        # Guard against packages apt does not know about at all.
        pkg = cache[package[0]] if package[0] in cache else None
        if pkg is None or not pkg.is_installed:
            missing_packages.append(package)
    return [], missing_packages
def parseVRE(version):
epoch = ''
release = 'X'
if version.find(':') != -1:
epoch, version = version.split(':')
if version.find('-') != -1:
tmp = version.split('-')
version = '-'.join(tmp[:-1])
release = tmp[-1]
return version, release, epoch
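# Worked examples, derived from the parsing rules above:
#   parseVRE('1:2.9.9-3') -> ('2.9.9', '3', '1')
#   parseVRE('0.5.1')     -> ('0.5.1', 'X', '')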
def installTime(pkg_name, pkg_arch):
info_dir = '/var/lib/dpkg/info'  # avoid shadowing the dir() builtin
files = ['%s.list' % pkg_name,
         '%s:%s.list' % (pkg_name, pkg_arch)]
for f in files:
    path = os.path.join(info_dir, f)
if os.path.isfile(path):
return os.path.getmtime(path)
return None
#FIXME: Using Apt cache might not be an ultimate solution.
# It could be better to parse /var/lib/dpkg/status manually.
# Apt cache might not contain all the packages.
def getInstalledPackageList(msgCallback = None, progressCallback = None,
getArch=None, getInfo = None):
""" Return list of packages. Package is dict with following keys:
name, epoch, version, release and optionaly arch.
"""
if msgCallback != None:
msgCallback(_("Getting list of packages installed on the system"))
cache = apt.Cache()
total = 0
for pkg in cache:
if pkg.installed is not None:
total += 1
count = 0
pkg_list = []
for pkg in cache:
if pkg.installed is None:
continue
version, release, epoch = parseVRE(pkg.installed.version)
package = {
'name': pkg.name,
'epoch': epoch,
'version': version,
'release': release,
'arch': pkg.installed.architecture + '-deb',
'installtime': installTime(pkg.name, pkg.installed.architecture)
}
pkg_list.append(package)
if progressCallback is not None:
progressCallback(count, total)
count = count + 1
pkg_list.sort()
return pkg_list
def setDebugVerbosity():
pass
|
blaswan/dbn-cifar-10
|
refs/heads/master
|
code/logistic_regression.py
|
1
|
"""
This tutorial introduces logistic regression using Theano.
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix :math:`W` and a bias vector :math:`b`. Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.
Mathematically, this can be written as:
.. math::
P(Y=i|x, W,b) &= softmax_i(W x + b) \\
&= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}
The output of the model or prediction is then done by taking the argmax of
the vector whose i'th element is P(Y=i|x).
.. math::
y_{pred} = argmax_i P(Y=i|x,W,b)
References:
- textbooks: "Pattern Recognition and Machine Learning" -
Christopher M. Bishop, section 4.3.2
"""
__docformat__ = 'restructuredtext en'
import numpy
import theano
import theano.tensor as T
class LogisticRegression(object):
"""Multi-class Logistic Regression Class
The logistic regression is fully described by a weight matrix :math:`W`
and bias vector :math:`b`. Classification is done by projecting data
points onto a set of hyperplanes, the distance to which is used to
determine a class membership probability.
"""
def __init__(self, input, n_in, n_out):
""" Initialize the parameters of the logistic regression
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
"""
# start-snippet-1
# initialize with 0 the weights W as a matrix of shape (n_in, n_out)
self.W = theano.shared(
value=numpy.zeros(
(n_in, n_out),
dtype=theano.config.floatX
),
name='W',
borrow=True
)
# initialize the biases b as a vector of n_out 0s
self.b = theano.shared(
value=numpy.zeros(
(n_out,),
dtype=theano.config.floatX
),
name='b',
borrow=True
)
# symbolic expression for computing the matrix of class-membership
# probabilities
# Where:
# W is a matrix where column-k represents the separating hyperplane for
# class-k
# x is a matrix where row-j represents input training sample-j
# b is a vector where element-k represents the free parameter of
# hyperplane-k
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
# symbolic description of how to compute prediction as class whose
# probability is maximal
self.y_pred = T.argmax(self.p_y_given_x, axis=1)
# end-snippet-1
# parameters of the model
self.params = [self.W, self.b]
def negative_log_likelihood(self, y):
"""Return the mean of the negative log-likelihood of the prediction
of this model under a given target distribution.
.. math::

    \ell(\theta=\{W,b\}, \mathcal{D}) =
        -\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
        \log P(Y=y^{(i)} | x^{(i)}, W, b)
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
Note: we use the mean instead of the sum so that
the learning rate is less dependent on the batch size
"""
# start-snippet-2
# y.shape[0] is (symbolically) the number of rows in y, i.e.,
# number of examples (call it n) in the minibatch
# T.arange(y.shape[0]) is a symbolic vector which will contain
# [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
# Log-Probabilities (call it LP) with one row per example and
# one column per class LP[T.arange(y.shape[0]),y] is a vector
# v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
# LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
# the mean (across minibatch examples) of the elements in v,
# i.e., the mean log-likelihood across the minibatch.
return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
# end-snippet-2
def errors(self, y):
"""Return a float representing the number of errors in the minibatch
over the total number of examples of the minibatch ; zero one
loss over the size of the minibatch
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
"""
# check if y has the same dimension as y_pred
if y.ndim != self.y_pred.ndim:
raise TypeError(
'y should have the same shape as self.y_pred',
('y', y.type, 'y_pred', self.y_pred.type)
)
# check if y is of the correct datatype
if y.dtype.startswith('int'):
# the T.neq operator returns a vector of 0s and 1s, where 1
# represents a mistake in prediction
return T.mean(T.neq(self.y_pred, y))
else:
raise NotImplementedError()
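# A minimal usage sketch (assumptions: flattened 28x28 inputs and 10
# classes, as in MNIST; x and y are the usual symbolic minibatch variables):
#
#   x = T.matrix('x')
#   y = T.ivector('y')
#   classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)
#   cost = classifier.negative_log_likelihood(y)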
|
learningequality/kolibri
|
refs/heads/develop
|
kolibri/core/device/test/test_api.py
|
2
|
import os
import platform
import sys
from collections import namedtuple
import mock
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.urlresolvers import reverse
from mock import patch
from morango.models import DatabaseIDModel
from morango.models import InstanceIDModel
from rest_framework import status
from rest_framework.test import APITestCase
import kolibri
from kolibri.core.auth.constants.role_kinds import ADMIN
from kolibri.core.auth.models import Facility
from kolibri.core.auth.models import FacilityDataset
from kolibri.core.auth.models import FacilityUser
from kolibri.core.auth.models import Role
from kolibri.core.auth.test.helpers import clear_process_cache
from kolibri.core.auth.test.helpers import create_superuser
from kolibri.core.auth.test.helpers import provision_device
from kolibri.core.auth.test.test_api import FacilityFactory
from kolibri.core.auth.test.test_api import FacilityUserFactory
from kolibri.core.device.models import DevicePermissions
from kolibri.core.device.models import DeviceSettings
DUMMY_PASSWORD = "password"
class DeviceProvisionTestCase(APITestCase):
def setUp(self):
clear_process_cache()
superuser_data = {"username": "superuser", "password": "password"}
facility_data = {"name": "Wilson Elementary"}
preset_data = "nonformal"
dataset_data = {
"learner_can_edit_username": True,
"learner_can_edit_name": True,
"learner_can_edit_password": True,
"learner_can_sign_up": True,
"learner_can_delete_account": True,
"learner_can_login_with_no_password": False,
}
settings = {}
allow_guest_access = True
language_id = "en"
def _default_provision_data(self):
return {
"device_name": None,
"superuser": self.superuser_data,
"facility": self.facility_data,
"preset": self.preset_data,
"settings": self.settings,
"language_id": self.language_id,
"allow_guest_access": self.allow_guest_access,
}
def _post_deviceprovision(self, data):
return self.client.post(
reverse("kolibri:core:deviceprovision"), data, format="json"
)
def test_personal_setup_defaults(self):
data = self._default_provision_data()
data["preset"] = "informal"
# The client should pass an empty dict for settings
data["settings"] = {}
self._post_deviceprovision(data)
settings = FacilityDataset.objects.get()
self.assertEqual(settings.learner_can_edit_username, True)
self.assertEqual(settings.learner_can_edit_name, True)
self.assertEqual(settings.learner_can_edit_password, True)
self.assertEqual(settings.learner_can_sign_up, True)
self.assertEqual(settings.learner_can_delete_account, True)
self.assertEqual(settings.learner_can_login_with_no_password, False)
self.assertEqual(settings.show_download_button_in_learn, True)
device_settings = DeviceSettings.objects.get()
self.assertEqual(device_settings.allow_guest_access, True)
def test_cannot_post_if_provisioned(self):
provision_device()
data = self._default_provision_data()
response = self._post_deviceprovision(data)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_superuser_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(
FacilityUser.objects.get().username, self.superuser_data["username"]
)
def test_superuser_password_set_correctly(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertTrue(
FacilityUser.objects.get().check_password(self.superuser_data["password"])
)
def test_superuser_device_permissions_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(
DevicePermissions.objects.get(),
FacilityUser.objects.get().devicepermissions,
)
def test_facility_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(Facility.objects.get().name, self.facility_data["name"])
def test_admin_role_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(Role.objects.get().kind, ADMIN)
def test_facility_role_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(Role.objects.get().collection.name, self.facility_data["name"])
def test_dataset_set_created(self):
data = self._default_provision_data()
self._post_deviceprovision(data)
self.assertEqual(
FacilityDataset.objects.get().learner_can_edit_username,
self.dataset_data["learner_can_edit_username"],
)
self.assertEqual(
FacilityDataset.objects.get().learner_can_edit_name,
self.dataset_data["learner_can_edit_name"],
)
self.assertEqual(
FacilityDataset.objects.get().learner_can_edit_password,
self.dataset_data["learner_can_edit_password"],
)
self.assertEqual(
FacilityDataset.objects.get().learner_can_sign_up,
self.dataset_data["learner_can_sign_up"],
)
self.assertEqual(
FacilityDataset.objects.get().learner_can_delete_account,
self.dataset_data["learner_can_delete_account"],
)
self.assertEqual(
FacilityDataset.objects.get().learner_can_login_with_no_password,
self.dataset_data["learner_can_login_with_no_password"],
)
def test_device_settings_created(self):
data = self._default_provision_data()
self.assertEqual(DeviceSettings.objects.count(), 0)
self._post_deviceprovision(data)
self.assertEqual(DeviceSettings.objects.count(), 1)
def test_device_settings_values(self):
data = self._default_provision_data()
data["allow_guest_access"] = False
self._post_deviceprovision(data)
device_settings = DeviceSettings.objects.get()
self.assertEqual(device_settings.default_facility, Facility.objects.get())
self.assertFalse(device_settings.allow_guest_access)
self.assertFalse(device_settings.allow_peer_unlisted_channel_import)
self.assertTrue(device_settings.allow_learner_unassigned_resource_access)
class DeviceSettingsTestCase(APITestCase):
@classmethod
def setUpTestData(cls):
cls.settings = {
"language_id": "en",
"allow_guest_access": False,
"allow_peer_unlisted_channel_import": True,
"allow_learner_unassigned_resource_access": False,
}
cls.facility = FacilityFactory.create()
provision_device(language_id="es", default_facility=cls.facility)
cls.superuser = create_superuser(cls.facility)
cls.user = FacilityUserFactory.create(facility=cls.facility)
def setUp(self):
super(DeviceSettingsTestCase, self).setUp()
self.client.login(
username=self.superuser.username,
password=DUMMY_PASSWORD,
facility=self.facility,
)
def test_requires_authentication(self):
self.client.logout()
response = self.client.post(
reverse("kolibri:core:devicesettings"), self.settings, format="json"
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_cannot_post(self):
response = self.client.post(
reverse("kolibri:core:devicesettings"), self.settings, format="json"
)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
def test_cannot_put(self):
response = self.client.put(
reverse("kolibri:core:devicesettings"), self.settings, format="json"
)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
def test_patch(self):
device_settings = DeviceSettings.objects.get()
self.assertEqual("es", device_settings.language_id)
self.assertTrue(device_settings.allow_guest_access)
self.assertFalse(device_settings.allow_peer_unlisted_channel_import)
self.assertTrue(device_settings.allow_learner_unassigned_resource_access)
self.client.patch(
reverse("kolibri:core:devicesettings"), self.settings, format="json"
)
device_settings.refresh_from_db()
self.assertEqual("en", device_settings.language_id)
self.assertFalse(device_settings.allow_guest_access)
self.assertTrue(device_settings.allow_peer_unlisted_channel_import)
self.assertFalse(device_settings.allow_learner_unassigned_resource_access)
class DevicePermissionsTestCase(APITestCase):
@classmethod
def setUpTestData(cls):
provision_device()
cls.facility = FacilityFactory.create()
cls.superuser = create_superuser(cls.facility)
cls.user = FacilityUserFactory.create(facility=cls.facility)
def setUp(self):
self.client.login(
username=self.superuser.username,
password=DUMMY_PASSWORD,
facility=self.facility,
)
def test_superuser_delete_own_permissions(self):
response = self.client.delete(
reverse(
"kolibri:core:devicepermissions-detail",
kwargs={"pk": self.superuser.devicepermissions.pk},
),
format="json",
)
self.assertEqual(response.status_code, 403)
def test_superuser_update_own_permissions(self):
response = self.client.patch(
reverse(
"kolibri:core:devicepermissions-detail",
kwargs={"pk": self.superuser.devicepermissions.pk},
),
{"is_superuser": False},
format="json",
)
self.assertEqual(response.status_code, 403)
class FreeSpaceTestCase(APITestCase):
def setUp(self):
provision_device()
self.facility = FacilityFactory.create()
self.superuser = create_superuser(self.facility)
self.user = FacilityUserFactory.create(facility=self.facility)
self.client.login(
username=self.superuser.username,
password=DUMMY_PASSWORD,
facility=self.facility,
)
def test_posix_freespace(self):
if not sys.platform.startswith("win"):
with mock.patch("kolibri.utils.system.os.statvfs") as os_statvfs_mock:
statvfs_result = namedtuple("statvfs_result", ["f_frsize", "f_bavail"])
os_statvfs_mock.return_value = statvfs_result(f_frsize=1, f_bavail=2)
response = self.client.get(
reverse("kolibri:core:freespace"), {"path": "test"}
)
os_statvfs_mock.assert_called_with(os.path.realpath("test"))
self.assertEqual(response.data, {"freespace": 2})
def test_win_freespace_fail(self):
if sys.platform.startswith("win"):
ctypes_mock = mock.MagicMock()
with mock.patch.dict("sys.modules", ctypes=ctypes_mock):
ctypes_mock.windll.kernel32.GetDiskFreeSpaceExW.return_value = 0
ctypes_mock.winError.side_effect = OSError
try:
self.client.get(reverse("kolibri:core:freespace"), {"path": "test"})
except OSError:
# check if ctypes.winError() has been called
ctypes_mock.winError.assert_called_with()
class DeviceInfoTestCase(APITestCase):
@classmethod
def setUpTestData(cls):
provision_device()
DatabaseIDModel.objects.create()
cls.facility = FacilityFactory.create()
cls.superuser = create_superuser(cls.facility)
def setUp(self):
self.client.login(
username=self.superuser.username,
password=DUMMY_PASSWORD,
facility=self.facility,
)
def test_has_version(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(response.data["version"], kolibri.__version__)
def test_urls(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertFalse(len(response.data["urls"]) == 0)
for url in response.data["urls"]:
# Make sure each url is a valid link
self.assertTrue(url.startswith("http://"))
@patch(
"kolibri.core.device.api.get_urls",
return_value=(1, ["http://127.0.0.1:8000", "http://kolibri.com"]),
)
def test_no_localhost_urls_when_others_available(self, get_urls_mock):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(len(response.data["urls"]), 1)
self.assertEqual(response.data["urls"][0], "http://kolibri.com")
@patch(
"kolibri.core.device.api.get_urls", return_value=(1, ["http://127.0.0.1:8000"])
)
def test_localhost_urls_when_no_others_available(self, get_urls_mock):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(len(response.data["urls"]), 1)
self.assertEqual(response.data["urls"][0], "http://127.0.0.1:8000")
def test_database_path(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
db_engine = settings.DATABASES["default"]["ENGINE"]
db_path = response.data["database_path"]
if db_engine.endswith("sqlite3"):
self.assertEqual(db_path, settings.DATABASES["default"]["NAME"])
elif db_engine.endswith("postgresql"):
self.assertEqual(db_path, "postgresql")
else:
self.assertEqual(db_path, "unknown")
def test_os(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(response.data["os"], platform.platform())
def test_device_id(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(
response.data["device_id"],
InstanceIDModel.get_or_create_current_instance()[0].id,
)
def test_time_zone(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertTrue(response.data["server_timezone"], settings.TIME_ZONE)
def test_free_space(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertIsInstance(response.data["content_storage_free_space"], int)
def test_superuser_permissions(self):
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(response.status_code, 200)
def test_user_permissions(self):
self.user = FacilityUserFactory.create(facility=self.facility)
self.client.logout()
self.client.login(
username=self.user.username, password=DUMMY_PASSWORD, facility=self.facility
)
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(response.status_code, 403)
def test_user_with_permissions(self):
self.user = FacilityUserFactory.create(facility=self.facility)
DevicePermissions.objects.create(user=self.user, can_manage_content=True)
self.client.logout()
self.client.login(
username=self.user.username, password=DUMMY_PASSWORD, facility=self.facility
)
response = self.client.get(reverse("kolibri:core:deviceinfo"), format="json")
self.assertEqual(response.status_code, 200)
class DeviceNameTestCase(APITestCase):
@classmethod
def setUpTestData(cls):
cls.device_name = {"name": "test device"}
cls.facility = FacilityFactory.create()
provision_device(language_id="es", default_facility=cls.facility)
cls.superuser = create_superuser(cls.facility)
cls.user = FacilityUserFactory.create(facility=cls.facility)
def setUp(self):
super(DeviceNameTestCase, self).setUp()
self.client.login(
username=self.superuser.username,
password=DUMMY_PASSWORD,
facility=self.facility,
)
def test_requires_authentication(self):
self.client.logout()
response = self.client.post(
reverse("kolibri:core:devicename"), self.device_name, format="json"
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_existing_device_name(self):
response = self.client.get(reverse("kolibri:core:devicename"))
self.assertEqual(
response.data["name"],
InstanceIDModel.get_or_create_current_instance()[0].hostname,
)
def test_patch(self):
device_settings = DeviceSettings.objects.get()
self.assertEqual(
device_settings.name,
InstanceIDModel.get_or_create_current_instance()[0].hostname,
)
response = self.client.patch(
reverse("kolibri:core:devicename"), self.device_name, format="json"
)
self.assertEqual(response.data, self.device_name)
device_settings.refresh_from_db()
self.assertEqual(device_settings.name, self.device_name["name"])
self.assertNotEqual(
device_settings.name,
InstanceIDModel.get_or_create_current_instance()[0].hostname,
)
def test_device_name_max_length(self):
with self.assertRaises(ValidationError):
exceeds_max_length_name = {"name": "a" * 60}
self.client.patch(
reverse("kolibri:core:devicename"),
exceeds_max_length_name,
format="json",
)
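# A hedged sketch of driving the provision endpoint with a custom device
# name (the name value is illustrative; the final assertion mirrors
# test_device_settings_created above):
#
#   data = self._default_provision_data()
#   data["device_name"] = "classroom-pi"
#   self._post_deviceprovision(data)
#   self.assertTrue(DeviceSettings.objects.exists())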
|
loli/sklearn-ensembletrees
|
refs/heads/master
|
sklearn/datasets/mlcomp.py
|
5
|
# Copyright (c) 2010 Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
"""Glue code to load http://mlcomp.org data as a scikit.learn dataset"""
import os
import numbers
from sklearn.datasets.base import load_files
def _load_document_classification(dataset_path, metadata, set_=None, **kwargs):
if set_ is not None:
dataset_path = os.path.join(dataset_path, set_)
return load_files(dataset_path, metadata.get('description'), **kwargs)
LOADERS = {
'DocumentClassification': _load_document_classification,
# TODO: implement the remaining domain formats
}
def load_mlcomp(name_or_id, set_="raw", mlcomp_root=None, **kwargs):
"""Load a datasets as downloaded from http://mlcomp.org
Parameters
----------
name_or_id : the integer id or the string name metadata of the MLComp
dataset to load
`set_` : select the portion to load: 'train', 'test' or 'raw'
mlcomp_root : the filesystem path to the root folder where MLComp datasets
are stored, if mlcomp_root is None, the MLCOMP_DATASETS_HOME
environment variable is looked up instead.
**kwargs : domain specific kwargs to be passed to the dataset loader.
Returns
-------
data : Bunch
Dictionary-like object, the interesting attributes are:
'filenames', the files holding the raw data to learn, 'target', the
classification labels (integer index), 'target_names',
the meaning of the labels, and 'DESCR', the full description of the
dataset.
Note on the lookup process: depending on the type of name_or_id,
will choose between integer id lookup or metadata name lookup by
looking at the unzipped archives and metadata file.
TODO: implement zip dataset loading too
"""
if mlcomp_root is None:
try:
mlcomp_root = os.environ['MLCOMP_DATASETS_HOME']
except KeyError:
raise ValueError("MLCOMP_DATASETS_HOME env variable is undefined")
mlcomp_root = os.path.expanduser(mlcomp_root)
mlcomp_root = os.path.abspath(mlcomp_root)
mlcomp_root = os.path.normpath(mlcomp_root)
if not os.path.exists(mlcomp_root):
raise ValueError("Could not find folder: " + mlcomp_root)
# dataset lookup
if isinstance(name_or_id, numbers.Integral):
# id lookup
dataset_path = os.path.join(mlcomp_root, str(name_or_id))
else:
# assume name based lookup
dataset_path = None
expected_name_line = "name: " + name_or_id
for dataset in os.listdir(mlcomp_root):
metadata_file = os.path.join(mlcomp_root, dataset, 'metadata')
if not os.path.exists(metadata_file):
continue
with open(metadata_file) as f:
for line in f:
if line.strip() == expected_name_line:
dataset_path = os.path.join(mlcomp_root, dataset)
break
if dataset_path is None:
raise ValueError("Could not find dataset with metadata line: " +
expected_name_line)
# loading the dataset metadata
metadata = dict()
metadata_file = os.path.join(dataset_path, 'metadata')
if not os.path.exists(metadata_file):
raise ValueError(dataset_path + ' is not a valid MLComp dataset')
with open(metadata_file) as f:
for line in f:
if ":" in line:
key, value = line.split(":", 1)
metadata[key.strip()] = value.strip()
format = metadata.get('format', 'unknown')
loader = LOADERS.get(format)
if loader is None:
raise ValueError("No loader implemented for format: " + format)
return loader(dataset_path, metadata, set_=set_, **kwargs)
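# A hedged usage sketch (assumes MLCOMP_DATASETS_HOME points at a folder of
# unzipped MLComp archives containing the '20news-18828' dataset):
#
#   news_train = load_mlcomp('20news-18828', 'train')
#   print(news_train.target_names)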
|
Innovahn/cybex
|
refs/heads/master
|
addons/web/controllers/main.py
|
20
|
# -*- coding: utf-8 -*-
import ast
import base64
import csv
import functools
import glob
import itertools
import jinja2
import logging
import operator
import datetime
import hashlib
import os
import re
import simplejson
import sys
import time
import urllib2
import zlib
from xml.etree import ElementTree
from cStringIO import StringIO
import babel.messages.pofile
import werkzeug.utils
import werkzeug.wrappers
try:
import xlwt
except ImportError:
xlwt = None
import openerp
import openerp.modules.registry
from openerp.addons.base.ir.ir_qweb import AssetsBundle, QWebTemplateNotFound
from openerp.modules import get_module_resource
from openerp.tools import topological_sort
from openerp.tools.translate import _
from openerp import http
from openerp.http import request, serialize_exception as _serialize_exception
_logger = logging.getLogger(__name__)
if hasattr(sys, 'frozen'):
# When running on compiled windows binary, we don't have access to package loader.
path = os.path.realpath(os.path.join(os.path.dirname(__file__), '..', 'views'))
loader = jinja2.FileSystemLoader(path)
else:
loader = jinja2.PackageLoader('openerp.addons.web', "views")
env = jinja2.Environment(loader=loader, autoescape=True)
env.filters["json"] = simplejson.dumps
# 1 week cache for asset bundles as advised by Google Page Speed
BUNDLE_MAXAGE = 60 * 60 * 24 * 7
#----------------------------------------------------------
# OpenERP Web helpers
#----------------------------------------------------------
db_list = http.db_list
db_monodb = http.db_monodb
def serialize_exception(f):
@functools.wraps(f)
def wrap(*args, **kwargs):
try:
return f(*args, **kwargs)
except Exception, e:
_logger.exception("An exception occured during an http request")
se = _serialize_exception(e)
error = {
'code': 200,
'message': "Odoo Server Error",
'data': se
}
return werkzeug.exceptions.InternalServerError(simplejson.dumps(error))
return wrap
def redirect_with_hash(*args, **kw):
"""
.. deprecated:: 8.0
Use the ``http.redirect_with_hash()`` function instead.
"""
return http.redirect_with_hash(*args, **kw)
def abort_and_redirect(url):
r = request.httprequest
response = werkzeug.utils.redirect(url, 302)
response = r.app.get_response(r, response, explicit_session=False)
werkzeug.exceptions.abort(response)
def ensure_db(redirect='/web/database/selector'):
# This helper should be used in web client auth="none" routes
# if those routes need a db to work with.
# If the heuristics do not find any database, the user is redirected
# to the db selector or to any url specified by the `redirect` argument.
# If the db is taken out of a query parameter, it is checked against
# `http.db_filter()` in order to ensure it's legit and thus avoid db
# forgery that could lead to XSS attacks.
db = request.params.get('db')
# Ensure db is legit
if db and db not in http.db_filter([db]):
db = None
if db and not request.session.db:
# The user asked for a specific database on a new session.
# That means the nodb router has been used to find the route.
# Depending on the modules installed in the database, the rendering of
# the page may depend on data injected by the database route dispatcher.
# Thus, we redirect the user to the same page but with the session cookie set.
# This will force using the database route dispatcher...
r = request.httprequest
url_redirect = r.base_url
if r.query_string:
# Can't use werkzeug.wrappers.BaseRequest.url with encoded hashes:
# https://github.com/amigrave/werkzeug/commit/b4a62433f2f7678c234cdcac6247a869f90a7eb7
url_redirect += '?' + r.query_string
response = werkzeug.utils.redirect(url_redirect, 302)
request.session.db = db
abort_and_redirect(url_redirect)
# if db not provided, use the session one
if not db and request.session.db and http.db_filter([request.session.db]):
db = request.session.db
# if no database provided and no database in session, use monodb
if not db:
db = db_monodb(request.httprequest)
# if no db can be found til here, send to the database selector
# the database selector will redirect to database manager if needed
if not db:
werkzeug.exceptions.abort(werkzeug.utils.redirect(redirect, 303))
# always switch the session to the computed db
if db != request.session.db:
request.session.logout()
abort_and_redirect(request.httprequest.url)
request.session.db = db
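# A hedged usage sketch, mirroring how the routes below call this helper:
#
#   @http.route('/web/some_page', type='http', auth="none")
#   def some_page(self, **kw):
#       ensure_db()    # redirects if no usable db is found
#       ...            # from here on, request.session.db is set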
def module_installed():
# Candidate modules: the current heuristic is the presence of a /static dir
loadable = http.addons_manifest.keys()
modules = {}
# Retrieve database installed modules
# TODO The following code should move to ir.module.module.list_installed_modules()
Modules = request.session.model('ir.module.module')
domain = [('state','=','installed'), ('name','in', loadable)]
for module in Modules.search_read(domain, ['name', 'dependencies_id']):
modules[module['name']] = []
deps = module.get('dependencies_id')
if deps:
deps_read = request.session.model('ir.module.module.dependency').read(deps, ['name'])
dependencies = [i['name'] for i in deps_read]
modules[module['name']] = dependencies
sorted_modules = topological_sort(modules)
return sorted_modules
def module_installed_bypass_session(dbname):
loadable = http.addons_manifest.keys()
modules = {}
try:
registry = openerp.modules.registry.RegistryManager.get(dbname)
with registry.cursor() as cr:
m = registry.get('ir.module.module')
# TODO The following code should move to ir.module.module.list_installed_modules()
domain = [('state', '=', 'installed'), ('name', 'in', loadable)]
ids = m.search(cr, 1, domain)
for module in m.read(cr, 1, ids, ['name', 'dependencies_id']):
modules[module['name']] = []
deps = module.get('dependencies_id')
if deps:
deps_read = registry.get('ir.module.module.dependency').read(cr, 1, deps, ['name'])
dependencies = [i['name'] for i in deps_read]
modules[module['name']] = dependencies
except Exception:
    # Ignore failures (e.g. a missing or unreachable database) and fall
    # through with whatever modules were collected.
    pass
sorted_modules = topological_sort(modules)
return sorted_modules
def module_boot(db=None):
server_wide_modules = openerp.conf.server_wide_modules or ['web']
serverside = []
dbside = []
for i in server_wide_modules:
if i in http.addons_manifest:
serverside.append(i)
monodb = db or db_monodb()
if monodb:
dbside = module_installed_bypass_session(monodb)
dbside = [i for i in dbside if i not in serverside]
addons = serverside + dbside
return addons
def concat_xml(file_list):
"""Concatenate xml files
:param list(str) file_list: list of files to check
:returns: (concatenation_result, checksum)
:rtype: (str, str)
"""
checksum = hashlib.new('sha1')
if not file_list:
return '', checksum.hexdigest()
root = None
for fname in file_list:
with open(fname, 'rb') as fp:
contents = fp.read()
checksum.update(contents)
fp.seek(0)
xml = ElementTree.parse(fp).getroot()
if root is None:
root = ElementTree.Element(xml.tag)
#elif root.tag != xml.tag:
#    raise ValueError("Root tags mismatch: %r != %r" % (root.tag, xml.tag))
for child in xml.getchildren():
root.append(child)
return ElementTree.tostring(root, 'utf-8'), checksum.hexdigest()
def fs2web(path):
"""convert FS path into web path"""
return '/'.join(path.split(os.path.sep))
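# e.g. on Windows: fs2web('web\\static\\src\\js\\boot.js') -> 'web/static/src/js/boot.js'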
def manifest_glob(extension, addons=None, db=None, include_remotes=False):
if addons is None:
addons = module_boot(db=db)
else:
addons = addons.split(',')
r = []
for addon in addons:
manifest = http.addons_manifest.get(addon, None)
if not manifest:
continue
# ensure the addons path does not end with a path separator
addons_path = os.path.join(manifest['addons_path'], '')[:-1]
globlist = manifest.get(extension, [])
for pattern in globlist:
if pattern.startswith(('http://', 'https://', '//')):
if include_remotes:
r.append((None, pattern))
else:
for path in glob.glob(os.path.normpath(os.path.join(addons_path, addon, pattern))):
r.append((path, fs2web(path[len(addons_path):])))
return r
def manifest_list(extension, mods=None, db=None, debug=None):
""" list ressources to load specifying either:
mods: a comma separated string listing modules
db: a database name (return all installed modules in that database)
"""
if debug is not None:
_logger.warning("openerp.addons.web.main.manifest_list(): debug parameter is deprecated")
files = manifest_glob(extension, addons=mods, db=db, include_remotes=True)
return [wp for _fp, wp in files]
def get_last_modified(files):
""" Returns the modification time of the most recently modified
file provided
:param list(str) files: names of files to check
:return: most recent modification time amongst the fileset
:rtype: datetime.datetime
"""
files = list(files)
if files:
return max(datetime.datetime.fromtimestamp(os.path.getmtime(f))
for f in files)
return datetime.datetime(1970, 1, 1)
def make_conditional(response, last_modified=None, etag=None, max_age=0):
""" Makes the provided response conditional based upon the request,
and mandates revalidation from clients
Uses Werkzeug's own :meth:`ETagResponseMixin.make_conditional`, after
setting ``last_modified`` and ``etag`` correctly on the response object
:param response: Werkzeug response
:type response: werkzeug.wrappers.Response
:param datetime.datetime last_modified: last modification date of the response content
:param str etag: some sort of checksum of the content (deep etag)
:return: the response object provided
:rtype: werkzeug.wrappers.Response
"""
response.cache_control.must_revalidate = True
response.cache_control.max_age = max_age
if last_modified:
response.last_modified = last_modified
if etag:
response.set_etag(etag)
return response.make_conditional(request.httprequest)
def login_and_redirect(db, login, key, redirect_url='/web'):
request.session.authenticate(db, login, key)
return set_cookie_and_redirect(redirect_url)
def set_cookie_and_redirect(redirect_url):
redirect = werkzeug.utils.redirect(redirect_url, 303)
redirect.autocorrect_location_header = False
return redirect
def login_redirect():
url = '/web/login?'
if request.debug:
url += 'debug&'
return """<html><head><script>
window.location = '%sredirect=' + encodeURIComponent(window.location);
</script></head></html>
""" % (url,)
def load_actions_from_ir_values(key, key2, models, meta):
Values = request.session.model('ir.values')
actions = Values.get(key, key2, models, meta, request.context)
return [(id, name, clean_action(action))
for id, name, action in actions]
def clean_action(action):
action.setdefault('flags', {})
action_type = action.setdefault('type', 'ir.actions.act_window_close')
if action_type == 'ir.actions.act_window':
return fix_view_modes(action)
return action
# I think generate_views,fix_view_modes should go into js ActionManager
def generate_views(action):
"""
While the server generates a sequence called "views" computing dependencies
between a bunch of stuff for views coming directly from the database
(the ``ir.actions.act_window`` model), it's also possible for e.g. buttons
to return custom view dictionaries generated on the fly.
In that case, there is no ``views`` key available on the action.
Since the web client relies on ``action['views']``, generate it here from
``view_mode`` and ``view_id``.
Currently handles two different cases:
* no view_id, multiple view_mode
* single view_id, single view_mode
:param dict action: action descriptor dictionary to generate a views key for
"""
view_id = action.get('view_id') or False
if isinstance(view_id, (list, tuple)):
view_id = view_id[0]
# providing at least one view mode is a requirement, not an option
view_modes = action['view_mode'].split(',')
if len(view_modes) > 1:
if view_id:
raise ValueError('Non-db action dictionaries should provide '
'either multiple view modes or a single view '
'mode and an optional view id.\n\n Got view '
'modes %r and view id %r for action %r' % (
view_modes, view_id, action))
action['views'] = [(False, mode) for mode in view_modes]
return
action['views'] = [(view_id, view_modes[0])]
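# e.g. an action with view_mode='tree,form' and no view_id comes out as
#   action['views'] == [(False, 'tree'), (False, 'form')]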
def fix_view_modes(action):
""" For historical reasons, OpenERP has weird dealings in relation to
view_mode and the view_type attribute (on window actions):
* one of the view modes is ``tree``, which stands for both list views
and tree views
* the choice is made by checking ``view_type``, which is either
``form`` for a list view or ``tree`` for an actual tree view
This method simply folds the view_type into view_mode by adding a
new view mode ``list`` which is the result of the ``tree`` view_mode
in conjunction with the ``form`` view_type.
TODO: this should go into the doc, some kind of "peculiarities" section
:param dict action: an action descriptor
:returns: nothing, the action is modified in place
"""
if not action.get('views'):
generate_views(action)
if action.pop('view_type', 'form') != 'form':
return action
if 'view_mode' in action:
action['view_mode'] = ','.join(
mode if mode != 'tree' else 'list'
for mode in action['view_mode'].split(','))
action['views'] = [
[id, mode if mode != 'tree' else 'list']
for id, mode in action['views']
]
return action
def _local_web_translations(trans_file):
messages = []
try:
with open(trans_file) as t_file:
po = babel.messages.pofile.read_po(t_file)
except Exception:
return
for x in po:
if x.id and x.string and "openerp-web" in x.auto_comments:
messages.append({'id': x.id, 'string': x.string})
return messages
def xml2json_from_elementtree(el, preserve_whitespaces=False):
""" xml2json-direct
Simple and straightforward XML-to-JSON converter in Python
New BSD Licensed
http://code.google.com/p/xml2json-direct/
"""
res = {}
if el.tag[0] == "{":
ns, name = el.tag.rsplit("}", 1)
res["tag"] = name
res["namespace"] = ns[1:]
else:
res["tag"] = el.tag
res["attrs"] = {}
for k, v in el.items():
res["attrs"][k] = v
kids = []
if el.text and (preserve_whitespaces or el.text.strip() != ''):
kids.append(el.text)
for kid in el:
kids.append(xml2json_from_elementtree(kid, preserve_whitespaces))
if kid.tail and (preserve_whitespaces or kid.tail.strip() != ''):
kids.append(kid.tail)
res["children"] = kids
return res
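# e.g. xml2json_from_elementtree(ElementTree.fromstring('<a x="1">hi</a>'))
#   -> {'tag': 'a', 'attrs': {'x': '1'}, 'children': ['hi']}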
def content_disposition(filename):
filename = filename.encode('utf8')
escaped = urllib2.quote(filename)
browser = request.httprequest.user_agent.browser
version = int((request.httprequest.user_agent.version or '0').split('.')[0])
if browser == 'msie' and version < 9:
return "attachment; filename=%s" % escaped
elif browser == 'safari':
return "attachment; filename=%s" % filename
else:
return "attachment; filename*=UTF-8''%s" % escaped
#----------------------------------------------------------
# OpenERP Web web Controllers
#----------------------------------------------------------
class Home(http.Controller):
@http.route('/', type='http', auth="none")
def index(self, s_action=None, db=None, **kw):
return http.local_redirect('/web', query=request.params, keep_hash=True)
@http.route('/web', type='http', auth="none")
def web_client(self, s_action=None, **kw):
ensure_db()
if request.session.uid:
if kw.get('redirect'):
return werkzeug.utils.redirect(kw.get('redirect'), 303)
if not request.uid:
request.uid = request.session.uid
menu_data = request.registry['ir.ui.menu'].load_menus(request.cr, request.uid, context=request.context)
return request.render('web.webclient_bootstrap', qcontext={'menu_data': menu_data})
else:
return login_redirect()
@http.route('/web/login', type='http', auth="none")
def web_login(self, redirect=None, **kw):
ensure_db()
if request.httprequest.method == 'GET' and redirect and request.session.uid:
return http.redirect_with_hash(redirect)
if not request.uid:
request.uid = openerp.SUPERUSER_ID
values = request.params.copy()
if not redirect:
redirect = '/web?' + request.httprequest.query_string
values['redirect'] = redirect
try:
values['databases'] = http.db_list()
except openerp.exceptions.AccessDenied:
values['databases'] = None
if request.httprequest.method == 'POST':
old_uid = request.uid
uid = request.session.authenticate(request.session.db, request.params['login'], request.params['password'])
if uid is not False:
return http.redirect_with_hash(redirect)
request.uid = old_uid
values['error'] = "Wrong login/password"
return request.render('web.login', values)
@http.route('/login', type='http', auth="none")
def login(self, db, login, key, redirect="/web", **kw):
if not http.db_filter([db]):
return werkzeug.utils.redirect('/', 303)
return login_and_redirect(db, login, key, redirect_url=redirect)
@http.route([
'/web/js/<xmlid>',
'/web/js/<xmlid>/<version>',
], type='http', auth='public')
def js_bundle(self, xmlid, version=None, **kw):
try:
bundle = AssetsBundle(xmlid)
except QWebTemplateNotFound:
return request.not_found()
response = request.make_response(bundle.js(), [('Content-Type', 'application/javascript')])
return make_conditional(response, bundle.last_modified, max_age=BUNDLE_MAXAGE)
@http.route([
'/web/css/<xmlid>',
'/web/css/<xmlid>/<version>',
], type='http', auth='public')
def css_bundle(self, xmlid, version=None, **kw):
try:
bundle = AssetsBundle(xmlid)
except QWebTemplateNotFound:
return request.not_found()
response = request.make_response(bundle.css(), [('Content-Type', 'text/css')])
return make_conditional(response, bundle.last_modified, max_age=BUNDLE_MAXAGE)
class WebClient(http.Controller):
@http.route('/web/webclient/csslist', type='json', auth="none")
def csslist(self, mods=None):
return manifest_list('css', mods=mods)
@http.route('/web/webclient/jslist', type='json', auth="none")
def jslist(self, mods=None):
return manifest_list('js', mods=mods)
@http.route('/web/webclient/qweb', type='http', auth="none")
def qweb(self, mods=None, db=None):
files = [f[0] for f in manifest_glob('qweb', addons=mods, db=db)]
last_modified = get_last_modified(files)
if request.httprequest.if_modified_since and request.httprequest.if_modified_since >= last_modified:
return werkzeug.wrappers.Response(status=304)
content, checksum = concat_xml(files)
return make_conditional(
request.make_response(content, [('Content-Type', 'text/xml')]),
last_modified, checksum)
@http.route('/web/webclient/bootstrap_translations', type='json', auth="none")
def bootstrap_translations(self, mods):
""" Load local translations from *.po files, as a temporary solution
until we have established a valid session. This is meant only
for translating the login page and db management chrome, using
the browser's language. """
# For performance reasons we only load a single translation, so for
# sub-languages (that should only be partially translated) we load the
# main language PO instead - that should be enough for the login screen.
lang = request.lang.split('_')[0]
translations_per_module = {}
for addon_name in mods:
if http.addons_manifest[addon_name].get('bootstrap'):
addons_path = http.addons_manifest[addon_name]['addons_path']
f_name = os.path.join(addons_path, addon_name, "i18n", lang + ".po")
if not os.path.exists(f_name):
continue
translations_per_module[addon_name] = {'messages': _local_web_translations(f_name)}
return {"modules": translations_per_module,
"lang_parameters": None}
@http.route('/web/webclient/translations', type='json', auth="none")
def translations(self, mods=None, lang=None):
request.disable_db = False
uid = openerp.SUPERUSER_ID
if mods is None:
m = request.registry.get('ir.module.module')
mods = [x['name'] for x in m.search_read(request.cr, uid,
[('state','=','installed')], ['name'])]
if lang is None:
lang = request.context["lang"]
res_lang = request.registry.get('res.lang')
ids = res_lang.search(request.cr, uid, [("code", "=", lang)])
lang_params = None
if ids:
lang_params = res_lang.read(request.cr, uid, ids[0], ["direction", "date_format", "time_format",
"grouping", "decimal_point", "thousands_sep"])
# Regional languages (ll_CC) must inherit/override their parent lang (ll), but this is
# done server-side when the language is loaded, so we only need to load the user's lang.
ir_translation = request.registry.get('ir.translation')
translations_per_module = {}
messages = ir_translation.search_read(request.cr, uid, [('module','in',mods),('lang','=',lang),
('comments','like','openerp-web'),('value','!=',False),
('value','!=','')],
['module','src','value','lang'], order='module')
for mod, msg_group in itertools.groupby(messages, key=operator.itemgetter('module')):
translations_per_module.setdefault(mod,{'messages':[]})
translations_per_module[mod]['messages'].extend({'id': m['src'],
'string': m['value']} \
for m in msg_group)
return {"modules": translations_per_module,
"lang_parameters": lang_params}
@http.route('/web/webclient/version_info', type='json', auth="none")
def version_info(self):
return openerp.service.common.exp_version()
@http.route('/web/tests', type='http', auth="none")
def index(self, mod=None, **kwargs):
return request.render('web.qunit_suite')
class Proxy(http.Controller):
@http.route('/web/proxy/load', type='json', auth="none")
def load(self, path):
""" Proxies an HTTP request through a JSON request.
It is strongly recommended to not request binary files through this,
as the result will be a binary data blob as well.
:param path: actual request path
:return: file content
"""
from werkzeug.test import Client
from werkzeug.wrappers import BaseResponse
base_url = request.httprequest.base_url
return Client(request.httprequest.app, BaseResponse).get(path, base_url=base_url).data
class Database(http.Controller):
@http.route('/web/database/selector', type='http', auth="none")
def selector(self, **kw):
try:
dbs = http.db_list()
if not dbs:
return http.local_redirect('/web/database/manager')
except openerp.exceptions.AccessDenied:
dbs = False
return env.get_template("database_selector.html").render({
'databases': dbs,
'debug': request.debug,
})
@http.route('/web/database/manager', type='http', auth="none")
def manager(self, **kw):
# TODO: migrate the webclient's database manager to server side views
request.session.logout()
return env.get_template("database_manager.html").render({
'modules': simplejson.dumps(module_boot()),
})
@http.route('/web/database/get_list', type='json', auth="none")
def get_list(self):
# TODO change js to avoid calling this method if in monodb mode
try:
return http.db_list()
except openerp.exceptions.AccessDenied:
monodb = db_monodb()
if monodb:
return [monodb]
raise
@http.route('/web/database/create', type='json', auth="none")
def create(self, fields):
params = dict(map(operator.itemgetter('name', 'value'), fields))
db_created = request.session.proxy("db").create_database(
params['super_admin_pwd'],
params['db_name'],
bool(params.get('demo_data')),
params['db_lang'],
params['create_admin_pwd'])
if db_created:
request.session.authenticate(params['db_name'], 'admin', params['create_admin_pwd'])
return db_created
@http.route('/web/database/duplicate', type='json', auth="none")
def duplicate(self, fields):
params = dict(map(operator.itemgetter('name', 'value'), fields))
duplicate_attrs = (
params['super_admin_pwd'],
params['db_original_name'],
params['db_name'],
)
return request.session.proxy("db").duplicate_database(*duplicate_attrs)
@http.route('/web/database/drop', type='json', auth="none")
def drop(self, fields):
password, db = operator.itemgetter(
'drop_pwd', 'drop_db')(
dict(map(operator.itemgetter('name', 'value'), fields)))
try:
if request.session.proxy("db").drop(password, db):
return True
else:
return False
except openerp.exceptions.AccessDenied:
return {'error': 'AccessDenied', 'title': 'Drop Database'}
except Exception:
return {'error': _('Could not drop database !'), 'title': _('Drop Database')}
@http.route('/web/database/backup', type='http', auth="none")
def backup(self, backup_db, backup_pwd, token):
try:
db_dump = base64.b64decode(
request.session.proxy("db").dump(backup_pwd, backup_db))
filename = "%(db)s_%(timestamp)s.dump" % {
'db': backup_db,
'timestamp': datetime.datetime.utcnow().strftime(
"%Y-%m-%d_%H-%M-%SZ")
}
return request.make_response(db_dump,
[('Content-Type', 'application/octet-stream; charset=binary'),
('Content-Disposition', content_disposition(filename))],
{'fileToken': token}
)
except Exception, e:
return simplejson.dumps([[],[{'error': openerp.tools.ustr(e), 'title': _('Backup Database')}]])
@http.route('/web/database/restore', type='http', auth="none")
def restore(self, db_file, restore_pwd, new_db, mode):
try:
copy = mode == 'copy'
data = base64.b64encode(db_file.read())
request.session.proxy("db").restore(restore_pwd, new_db, data, copy)
return ''
except openerp.exceptions.AccessDenied, e:
raise Exception("AccessDenied")
@http.route('/web/database/change_password', type='json', auth="none")
def change_password(self, fields):
old_password, new_password = operator.itemgetter(
'old_pwd', 'new_pwd')(
dict(map(operator.itemgetter('name', 'value'), fields)))
try:
return request.session.proxy("db").change_admin_password(old_password, new_password)
except openerp.exceptions.AccessDenied:
return {'error': 'AccessDenied', 'title': _('Change Password')}
except Exception:
return {'error': _('Error, password not changed!'), 'title': _('Change Password')}
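# All of the database-manager handlers above receive their form data as a
# ``fields`` list of {'name': ..., 'value': ...} dicts, flattened with
# operator.itemgetter. An illustrative (hypothetical) payload for
# /web/database/create would be:
#
#     fields = [{'name': 'super_admin_pwd',  'value': 'admin'},
#               {'name': 'db_name',          'value': 'mydb'},
#               {'name': 'demo_data',        'value': True},
#               {'name': 'db_lang',          'value': 'en_US'},
#               {'name': 'create_admin_pwd', 'value': 's3cret'}]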
class Session(http.Controller):
def session_info(self):
request.session.ensure_valid()
return {
"session_id": request.session_id,
"uid": request.session.uid,
"user_context": request.session.get_context() if request.session.uid else {},
"db": request.session.db,
"username": request.session.login,
}
@http.route('/web/session/get_session_info', type='json', auth="none")
def get_session_info(self):
request.uid = request.session.uid
request.disable_db = False
return self.session_info()
@http.route('/web/session/authenticate', type='json', auth="none")
def authenticate(self, db, login, password, base_location=None):
request.session.authenticate(db, login, password)
return self.session_info()
@http.route('/web/session/change_password', type='json', auth="user")
def change_password(self, fields):
old_password, new_password,confirm_password = operator.itemgetter('old_pwd', 'new_password','confirm_pwd')(
dict(map(operator.itemgetter('name', 'value'), fields)))
if not (old_password.strip() and new_password.strip() and confirm_password.strip()):
return {'error':_('You cannot leave any password empty.'),'title': _('Change Password')}
if new_password != confirm_password:
return {'error': _('The new password and its confirmation must be identical.'),'title': _('Change Password')}
try:
if request.session.model('res.users').change_password(
old_password, new_password):
return {'new_password':new_password}
except Exception:
return {'error': _('The old password you provided is incorrect, your password was not changed.'), 'title': _('Change Password')}
return {'error': _('Error, password not changed!'), 'title': _('Change Password')}
@http.route('/web/session/get_lang_list', type='json', auth="none")
def get_lang_list(self):
try:
return request.session.proxy("db").list_lang() or []
except Exception, e:
return {"error": e, "title": _("Languages")}
@http.route('/web/session/modules', type='json', auth="user")
def modules(self):
# return all installed modules. Web client is smart enough to not load a module twice
return module_installed()
@http.route('/web/session/save_session_action', type='json', auth="user")
def save_session_action(self, the_action):
"""
This method stores an action object in the session and returns an integer
identifying that action. The method get_session_action() can be used to
retrieve the action later.
:param the_action: The action to save in the session.
:type the_action: anything
:return: A key identifying the saved action.
:rtype: integer
"""
return request.httpsession.save_action(the_action)
@http.route('/web/session/get_session_action', type='json', auth="user")
def get_session_action(self, key):
"""
Gets back a previously saved action. This method may return None if the
action was saved too long ago (callers should handle that case gracefully).
:param key: The key given by save_session_action()
:type key: integer
:return: The saved action or None.
:rtype: anything
"""
return request.httpsession.get_action(key)
@http.route('/web/session/check', type='json', auth="user")
def check(self):
request.session.assert_valid()
return None
@http.route('/web/session/destroy', type='json', auth="user")
def destroy(self):
request.session.logout()
@http.route('/web/session/logout', type='http', auth="none")
def logout(self, redirect='/web'):
request.session.logout(keep_db=True)
return werkzeug.utils.redirect(redirect, 303)
class Menu(http.Controller):
@http.route('/web/menu/load_needaction', type='json', auth="user")
def load_needaction(self, menu_ids):
""" Loads needaction counters for specific menu ids.
:return: needaction data
:rtype: dict(menu_id: {'needaction_enabled': boolean, 'needaction_counter': int})
"""
return request.session.model('ir.ui.menu').get_needaction_data(menu_ids, request.context)
class DataSet(http.Controller):
@http.route('/web/dataset/search_read', type='json', auth="user")
def search_read(self, model, fields=False, offset=0, limit=False, domain=None, sort=None):
return self.do_search_read(model, fields, offset, limit, domain, sort)
def do_search_read(self, model, fields=False, offset=0, limit=False, domain=None
, sort=None):
""" Performs a search() followed by a read() (if needed) using the
provided search criteria
:param str model: the name of the model to search on
:param fields: a list of the fields to return in the result records
:type fields: [str]
:param int offset: from which index should the results start being returned
:param int limit: the maximum number of records to return
:param list domain: the search domain for the query
:param list sort: sorting directives
:returns: A structure (dict) with two keys: ``length`` (the total number
of records matching the domain) and ``records`` (the paginated
records matching the fields selection set)
:rtype: dict
"""
Model = request.session.model(model)
records = Model.search_read(domain, fields, offset or 0, limit or False, sort or False,
request.context)
if not records:
return {
'length': 0,
'records': []
}
if limit and len(records) == limit:
length = Model.search_count(domain, request.context)
else:
length = len(records) + (offset or 0)
return {
'length': length,
'records': records
}
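# An illustrative (hypothetical) do_search_read result with limit=2 on a
# model holding more than two matching records; 'length' then comes from
# search_count rather than from the page itself:
#
#     {'length': 54,
#      'records': [{'id': 1, 'name': 'A'}, {'id': 2, 'name': 'B'}]}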
@http.route('/web/dataset/load', type='json', auth="user")
def load(self, model, id, fields):
m = request.session.model(model)
value = {}
r = m.read([id], False, request.context)
if r:
value = r[0]
return {'value': value}
def call_common(self, model, method, args, domain_id=None, context_id=None):
return self._call_kw(model, method, args, {})
def _call_kw(self, model, method, args, kwargs):
# Temporarily implements the future display_name special field for model#read()
if method in ('read', 'search_read') and kwargs.get('context', {}).get('future_display_name'):
if 'display_name' in args[1]:
if method == 'read':
names = dict(request.session.model(model).name_get(args[0], **kwargs))
else:
names = dict(request.session.model(model).name_search('', args[0], **kwargs))
args[1].remove('display_name')
records = getattr(request.session.model(model), method)(*args, **kwargs)
for record in records:
record['display_name'] = \
names.get(record['id']) or "{0}#{1}".format(model, (record['id']))
return records
if method.startswith('_'):
raise Exception("Access Denied: Underscore prefixed methods cannot be remotely called")
return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs)
@http.route('/web/dataset/call', type='json', auth="user")
def call(self, model, method, args, domain_id=None, context_id=None):
return self._call_kw(model, method, args, {})
@http.route(['/web/dataset/call_kw', '/web/dataset/call_kw/<path:path>'], type='json', auth="user")
def call_kw(self, model, method, args, kwargs, path=None):
return self._call_kw(model, method, args, kwargs)
@http.route('/web/dataset/call_button', type='json', auth="user")
def call_button(self, model, method, args, domain_id=None, context_id=None):
action = self._call_kw(model, method, args, {})
if isinstance(action, dict) and action.get('type') != '':
return clean_action(action)
return False
@http.route('/web/dataset/exec_workflow', type='json', auth="user")
def exec_workflow(self, model, id, signal):
return request.session.exec_workflow(model, id, signal)
@http.route('/web/dataset/resequence', type='json', auth="user")
def resequence(self, model, ids, field='sequence', offset=0):
""" Re-sequences a number of records in the model, by their ids
The re-sequencing starts at the first record of ``ids``; the sequence
number is incremented by one after each record and starts at ``offset``
:param ids: identifiers of the records to resequence, in the new sequence order
:type ids: list(id)
:param str field: field used for sequence specification, defaults to
"sequence"
:param int offset: sequence number for first record in ``ids``, allows
starting the resequencing from an arbitrary number,
defaults to ``0``
"""
m = request.session.model(model)
if not m.fields_get([field]):
return False
# python 2.6 has no start parameter
for i, id in enumerate(ids):
m.write(id, { field: i + offset })
return True
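# Illustrative call (hypothetical ids): resequencing with ids=[42, 17, 30]
# and offset=10 writes sequence=10 on record 42, sequence=11 on record 17
# and sequence=12 on record 30, i.e. the new order follows the list order.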
class View(http.Controller):
@http.route('/web/view/add_custom', type='json', auth="user")
def add_custom(self, view_id, arch):
CustomView = request.session.model('ir.ui.view.custom')
CustomView.create({
'user_id': request.session.uid,
'ref_id': view_id,
'arch': arch
}, request.context)
return {'result': True}
@http.route('/web/view/undo_custom', type='json', auth="user")
def undo_custom(self, view_id, reset=False):
CustomView = request.session.model('ir.ui.view.custom')
vcustom = CustomView.search([('user_id', '=', request.session.uid), ('ref_id' ,'=', view_id)],
0, False, False, request.context)
if vcustom:
if reset:
CustomView.unlink(vcustom, request.context)
else:
CustomView.unlink([vcustom[0]], request.context)
return {'result': True}
return {'result': False}
class TreeView(View):
@http.route('/web/treeview/action', type='json', auth="user")
def action(self, model, id):
return load_actions_from_ir_values(
'action', 'tree_but_open',[(model, id)],
False)
class Binary(http.Controller):
@http.route('/web/binary/image', type='http', auth="public")
def image(self, model, id, field, **kw):
last_update = '__last_update'
Model = request.session.model(model)
headers = [('Content-Type', 'image/png')]
etag = request.httprequest.headers.get('If-None-Match')
hashed_session = hashlib.md5(request.session_id).hexdigest()
retag = hashed_session
id = None if not id else simplejson.loads(id)
if type(id) is list:
id = id[0] # m2o
try:
if etag:
if not id and hashed_session == etag:
return werkzeug.wrappers.Response(status=304)
else:
date = Model.read([id], [last_update], request.context)[0].get(last_update)
if hashlib.md5(date).hexdigest() == etag:
return werkzeug.wrappers.Response(status=304)
if not id:
res = Model.default_get([field], request.context).get(field)
image_base64 = res
else:
res = Model.read([id], [last_update, field], request.context)[0]
retag = hashlib.md5(res.get(last_update)).hexdigest()
image_base64 = res.get(field)
if kw.get('resize'):
resize = kw.get('resize').split(',')
if len(resize) == 2 and int(resize[0]) and int(resize[1]):
width = int(resize[0])
height = int(resize[1])
# resize maximum 500*500
if width > 500: width = 500
if height > 500: height = 500
image_base64 = openerp.tools.image_resize_image(base64_source=image_base64, size=(width, height), encoding='base64', filetype='PNG')
image_data = base64.b64decode(image_base64)
except Exception:
image_data = self.placeholder()
headers.append(('ETag', retag))
headers.append(('Content-Length', len(image_data)))
try:
ncache = int(kw.get('cache'))
headers.append(('Cache-Control', 'no-cache' if ncache == 0 else 'max-age=%s' % (ncache)))
except:
pass
return request.make_response(image_data, headers)
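# The ``resize`` keyword parsed above is a "width,height" string; an
# illustrative (hypothetical) request would be
#     GET /web/binary/image?model=res.partner&id=7&field=image&resize=64,64
# with both dimensions capped at 500 pixels.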
def placeholder(self, image='placeholder.png'):
addons_path = http.addons_manifest['web']['addons_path']
return open(os.path.join(addons_path, 'web', 'static', 'src', 'img', image), 'rb').read()
@http.route('/web/binary/saveas', type='http', auth="public")
@serialize_exception
def saveas(self, model, field, id=None, filename_field=None, **kw):
""" Download link for files stored as binary fields.
If the ``id`` parameter is omitted, fetches the default value for the
binary field (via ``default_get``), otherwise fetches the field for
that precise record.
:param str model: name of the model to fetch the binary from
:param str field: binary field
:param str id: id of the record from which to fetch the binary
:param str filename_field: field holding the file's name, if any
:returns: :class:`werkzeug.wrappers.Response`
"""
Model = request.session.model(model)
fields = [field]
if filename_field:
fields.append(filename_field)
if id:
res = Model.read([int(id)], fields, request.context)[0]
else:
res = Model.default_get(fields, request.context)
filecontent = base64.b64decode(res.get(field, ''))
if not filecontent:
return request.not_found()
else:
filename = '%s_%s' % (model.replace('.', '_'), id)
if filename_field:
filename = res.get(filename_field, '') or filename
return request.make_response(filecontent,
[('Content-Type', 'application/octet-stream'),
('Content-Disposition', content_disposition(filename))])
@http.route('/web/binary/saveas_ajax', type='http', auth="public")
@serialize_exception
def saveas_ajax(self, data, token):
jdata = simplejson.loads(data)
model = jdata['model']
field = jdata['field']
data = jdata['data']
id = jdata.get('id', None)
filename_field = jdata.get('filename_field', None)
context = jdata.get('context', {})
Model = request.session.model(model)
fields = [field]
if filename_field:
fields.append(filename_field)
if data:
res = { field: data }
elif id:
res = Model.read([int(id)], fields, context)[0]
else:
res = Model.default_get(fields, context)
filecontent = base64.b64decode(res.get(field, ''))
if not filecontent:
raise ValueError(_("No content found for field '%s' on '%s:%s'") %
(field, model, id))
else:
filename = '%s_%s' % (model.replace('.', '_'), id)
if filename_field:
filename = res.get(filename_field, '') or filename
return request.make_response(filecontent,
headers=[('Content-Type', 'application/octet-stream'),
('Content-Disposition', content_disposition(filename))],
cookies={'fileToken': token})
@http.route('/web/binary/upload', type='http', auth="user")
@serialize_exception
def upload(self, callback, ufile):
# TODO: might be useful to have a configuration flag for max-length file uploads
out = """<script language="javascript" type="text/javascript">
var win = window.top.window;
win.jQuery(win).trigger(%s, %s);
</script>"""
try:
data = ufile.read()
args = [len(data), ufile.filename,
ufile.content_type, base64.b64encode(data)]
except Exception, e:
args = [False, e.message]
return out % (simplejson.dumps(callback), simplejson.dumps(args))
@http.route('/web/binary/upload_attachment', type='http', auth="user")
@serialize_exception
def upload_attachment(self, callback, model, id, ufile):
Model = request.session.model('ir.attachment')
out = """<script language="javascript" type="text/javascript">
var win = window.top.window;
win.jQuery(win).trigger(%s, %s);
</script>"""
try:
attachment_id = Model.create({
'name': ufile.filename,
'datas': base64.encodestring(ufile.read()),
'datas_fname': ufile.filename,
'res_model': model,
'res_id': int(id)
}, request.context)
args = {
'filename': ufile.filename,
'id': attachment_id
}
except Exception:
args = {'error': "Something horrible happened"}
return out % (simplejson.dumps(callback), simplejson.dumps(args))
@http.route([
'/web/binary/company_logo',
'/logo',
'/logo.png',
], type='http', auth="none")
def company_logo(self, dbname=None, **kw):
imgname = 'logo.png'
placeholder = functools.partial(get_module_resource, 'web', 'static', 'src', 'img')
uid = None
if request.session.db:
dbname = request.session.db
uid = request.session.uid
elif dbname is None:
dbname = db_monodb()
if not uid:
uid = openerp.SUPERUSER_ID
if not dbname:
response = http.send_file(placeholder(imgname))
else:
try:
# create an empty registry
registry = openerp.modules.registry.Registry(dbname)
with registry.cursor() as cr:
cr.execute("""SELECT c.logo_web, c.write_date
FROM res_users u
LEFT JOIN res_company c
ON c.id = u.company_id
WHERE u.id = %s
""", (uid,))
row = cr.fetchone()
if row and row[0]:
image_data = StringIO(str(row[0]).decode('base64'))
response = http.send_file(image_data, filename=imgname, mtime=row[1])
else:
response = http.send_file(placeholder('nologo.png'))
except Exception:
response = http.send_file(placeholder(imgname))
return response
class Action(http.Controller):
@http.route('/web/action/load', type='json', auth="user")
def load(self, action_id, do_not_eval=False, additional_context=None):
Actions = request.session.model('ir.actions.actions')
value = False
try:
action_id = int(action_id)
except ValueError:
try:
module, xmlid = action_id.split('.', 1)
model, action_id = request.session.model('ir.model.data').get_object_reference(module, xmlid)
assert model.startswith('ir.actions.')
except Exception:
action_id = 0 # force failed read
base_action = Actions.read([action_id], ['type'], request.context)
if base_action:
ctx = request.context
action_type = base_action[0]['type']
if action_type == 'ir.actions.report.xml':
ctx.update({'bin_size': True})
if additional_context:
ctx.update(additional_context)
action = request.session.model(action_type).read([action_id], False, ctx)
if action:
value = clean_action(action[0])
return value
@http.route('/web/action/run', type='json', auth="user")
def run(self, action_id):
return_action = request.session.model('ir.actions.server').run(
[action_id], request.context)
if return_action:
return clean_action(return_action)
else:
return False
class Export(http.Controller):
@http.route('/web/export/formats', type='json', auth="user")
def formats(self):
""" Returns all valid export formats
:returns: for each export format, a dict with its identifier (``tag``),
printable name (``label``) and an optional ``error`` message when
the format is unavailable
:rtype: [dict]
"""
return [
{'tag': 'csv', 'label': 'CSV'},
{'tag': 'xls', 'label': 'Excel', 'error': None if xlwt else "XLWT required"},
]
def fields_get(self, model):
Model = request.session.model(model)
fields = Model.fields_get(False, request.context)
return fields
@http.route('/web/export/get_fields', type='json', auth="user")
def get_fields(self, model, prefix='', parent_name= '',
import_compat=True, parent_field_type=None,
exclude=None):
if import_compat and parent_field_type == "many2one":
fields = {}
else:
fields = self.fields_get(model)
if import_compat:
fields.pop('id', None)
else:
fields['.id'] = fields.pop('id', {'string': 'ID'})
fields_sequence = sorted(fields.iteritems(),
key=lambda field: openerp.tools.ustr(field[1].get('string', '')))
records = []
for field_name, field in fields_sequence:
if import_compat:
if exclude and field_name in exclude:
continue
if field.get('readonly'):
# If none of the field's states unsets readonly, skip the field
if all(dict(attrs).get('readonly', True)
for attrs in field.get('states', {}).values()):
continue
if not field.get('exportable', True):
continue
id = prefix + (prefix and '/' or '') + field_name
name = parent_name + (parent_name and '/' or '') + field['string']
record = {'id': id, 'string': name,
'value': id, 'children': False,
'field_type': field.get('type'),
'required': field.get('required'),
'relation_field': field.get('relation_field')}
records.append(record)
if len(name.split('/')) < 3 and 'relation' in field:
ref = field.pop('relation')
record['value'] += '/id'
record['params'] = {'model': ref, 'prefix': id, 'name': name}
if not import_compat or field['type'] == 'one2many':
# m2m field in import_compat is childless
record['children'] = True
return records
@http.route('/web/export/namelist', type='json', auth="user")
def namelist(self, model, export_id):
# TODO: namelist really has no reason to be in Python (although itertools.groupby helps)
export = request.session.model("ir.exports").read([export_id])[0]
export_fields_list = request.session.model("ir.exports.line").read(
export['export_fields'])
fields_data = self.fields_info(
model, map(operator.itemgetter('name'), export_fields_list))
return [
{'name': field['name'], 'label': fields_data[field['name']]}
for field in export_fields_list
]
def fields_info(self, model, export_fields):
info = {}
fields = self.fields_get(model)
if ".id" in export_fields:
fields['.id'] = fields.pop('id', {'string': 'ID'})
# To make fields retrieval more efficient, fetch all sub-fields of a
# given field at the same time. Because the order in the export list is
# arbitrary, this requires ordering all sub-fields of a given field
# together so they can be fetched at the same time
#
# Works the following way:
# * sort the list of fields to export, the default sorting order will
# put the field itself (if present, for xmlid) and all of its
# sub-fields right after it
# * then, group on: the first field of the path (which is the same for
# a field and for its subfields) and the length of splitting on the
# first '/', which basically means grouping the field on one side and
# all of the subfields on the other. This way, we have the field (for
# the xmlid) with length 1, and all of the subfields with the same
# base but a length "flag" of 2
# * if we have a normal field (length 1), just add it to the info
# mapping (with its string) as-is
# * otherwise, recursively call fields_info via graft_subfields.
# all graft_subfields does is take the result of fields_info (on the
# field's model) and prepend the current base (current field), which
# rebuilds the whole sub-tree for the field
#
# result: because we're not fetching the fields_get for half the
# database models, fetching a namelist with a dozen fields (including
# relational data) falls from ~6s to ~300ms (on the leads model).
# export lists with no sub-fields (e.g. import_compatible lists with
# no o2m) are even more efficient (from the same 6s to ~170ms, as
# there's a single fields_get to execute)
for (base, length), subfields in itertools.groupby(
sorted(export_fields),
lambda field: (field.split('/', 1)[0], len(field.split('/', 1)))):
subfields = list(subfields)
if length == 2:
# subfields is a seq of $base/*rest, and not loaded yet
info.update(self.graft_subfields(
fields[base]['relation'], base, fields[base]['string'],
subfields
))
elif base in fields:
info[base] = fields[base]['string']
return info
def graft_subfields(self, model, prefix, prefix_string, fields):
export_fields = [field.split('/', 1)[1] for field in fields]
return (
(prefix + '/' + k, prefix_string + '/' + v)
for k, v in self.fields_info(model, export_fields).iteritems())
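# Worked example of the grouping in fields_info (hypothetical export list):
# sorting ['partner_id/name', 'name', 'partner_id/id'] and grouping on
# (field.split('/', 1)[0], len(field.split('/', 1))) yields
#     ('name', 1)       -> ['name']                              # plain field
#     ('partner_id', 2) -> ['partner_id/id', 'partner_id/name']  # grafted
# so fields_get runs once on the base model and once per relation.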
class ExportFormat(object):
raw_data = False
@property
def content_type(self):
""" Provides the format's content type """
raise NotImplementedError()
def filename(self, base):
""" Creates a valid filename for the format (with extension) from the
provided base name (extension-less)
"""
raise NotImplementedError()
def from_data(self, fields, rows):
""" Conversion method from OpenERP's export data to whatever the
current export class outputs
:param list fields: a list of fields to export
:param list rows: a list of records to export
:returns:
:rtype: bytes
"""
raise NotImplementedError()
def base(self, data, token):
params = simplejson.loads(data)
model, fields, ids, domain, import_compat = \
operator.itemgetter('model', 'fields', 'ids', 'domain',
'import_compat')(
params)
Model = request.session.model(model)
context = dict(request.context or {}, **params.get('context', {}))
ids = ids or Model.search(domain, 0, False, False, context)
field_names = map(operator.itemgetter('name'), fields)
import_data = Model.export_data(ids, field_names, self.raw_data, context=context).get('datas',[])
if import_compat:
columns_headers = field_names
else:
columns_headers = [val['label'].strip() for val in fields]
return request.make_response(self.from_data(columns_headers, import_data),
headers=[('Content-Disposition',
content_disposition(self.filename(model))),
('Content-Type', self.content_type)],
cookies={'fileToken': token})
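# Illustrative (hypothetical) ``data`` payload for the export routes below,
# matching the keys unpacked in base():
#
#     {"model": "res.partner", "ids": [], "domain": [],
#      "import_compat": false,
#      "fields": [{"name": "name", "label": "Name"}]}
#
# An empty ``ids`` list triggers the Model.search() fallback above.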
class CSVExport(ExportFormat, http.Controller):
@http.route('/web/export/csv', type='http', auth="user")
@serialize_exception
def index(self, data, token):
return self.base(data, token)
@property
def content_type(self):
return 'text/csv;charset=utf8'
def filename(self, base):
return base + '.csv'
def from_data(self, fields, rows):
fp = StringIO()
writer = csv.writer(fp, quoting=csv.QUOTE_ALL)
writer.writerow([name.encode('utf-8') for name in fields])
for data in rows:
row = []
for d in data:
if isinstance(d, basestring):
d = d.replace('\n',' ').replace('\t',' ')
try:
d = d.encode('utf-8')
except UnicodeError:
pass
if d is False: d = None
row.append(d)
writer.writerow(row)
fp.seek(0)
data = fp.read()
fp.close()
return data
class ExcelExport(ExportFormat, http.Controller):
# Excel needs raw data to correctly handle numbers and date values
raw_data = True
@http.route('/web/export/xls', type='http', auth="user")
@serialize_exception
def index(self, data, token):
return self.base(data, token)
@property
def content_type(self):
return 'application/vnd.ms-excel'
def filename(self, base):
return base + '.xls'
def from_data(self, fields, rows):
workbook = xlwt.Workbook()
worksheet = workbook.add_sheet('Sheet 1')
for i, fieldname in enumerate(fields):
worksheet.write(0, i, fieldname)
worksheet.col(i).width = 8000 # around 220 pixels
base_style = xlwt.easyxf('align: wrap yes')
date_style = xlwt.easyxf('align: wrap yes', num_format_str='YYYY-MM-DD')
datetime_style = xlwt.easyxf('align: wrap yes', num_format_str='YYYY-MM-DD HH:mm:SS')
for row_index, row in enumerate(rows):
for cell_index, cell_value in enumerate(row):
cell_style = base_style
if isinstance(cell_value, basestring):
cell_value = re.sub("\r", " ", cell_value)
elif isinstance(cell_value, datetime.datetime):
cell_style = datetime_style
elif isinstance(cell_value, datetime.date):
cell_style = date_style
worksheet.write(row_index + 1, cell_index, cell_value, cell_style)
fp = StringIO()
workbook.save(fp)
fp.seek(0)
data = fp.read()
fp.close()
return data
class Reports(http.Controller):
POLLING_DELAY = 0.25
TYPES_MAPPING = {
'doc': 'application/vnd.ms-word',
'html': 'text/html',
'odt': 'application/vnd.oasis.opendocument.text',
'pdf': 'application/pdf',
'sxw': 'application/vnd.sun.xml.writer',
'xls': 'application/vnd.ms-excel',
}
@http.route('/web/report', type='http', auth="user")
@serialize_exception
def index(self, action, token):
action = simplejson.loads(action)
report_srv = request.session.proxy("report")
context = dict(request.context)
context.update(action["context"])
report_data = {}
report_ids = context.get("active_ids", None)
if 'report_type' in action:
report_data['report_type'] = action['report_type']
if 'datas' in action:
if 'ids' in action['datas']:
report_ids = action['datas'].pop('ids')
report_data.update(action['datas'])
report_id = report_srv.report(
request.session.db, request.session.uid, request.session.password,
action["report_name"], report_ids,
report_data, context)
report_struct = None
while True:
report_struct = report_srv.report_get(
request.session.db, request.session.uid, request.session.password, report_id)
if report_struct["state"]:
break
time.sleep(self.POLLING_DELAY)
report = base64.b64decode(report_struct['result'])
if report_struct.get('code') == 'zlib':
report = zlib.decompress(report)
report_mimetype = self.TYPES_MAPPING.get(
report_struct['format'], 'application/octet-stream')
file_name = action.get('name', 'report')
if 'name' not in action:
reports = request.session.model('ir.actions.report.xml')
res_id = reports.search([('report_name', '=', action['report_name']),],
0, False, False, context)
if len(res_id) > 0:
file_name = reports.read(res_id[0], ['name'], context)['name']
else:
file_name = action['report_name']
file_name = '%s.%s' % (file_name, report_struct['format'])
return request.make_response(report,
headers=[
('Content-Disposition', content_disposition(file_name)),
('Content-Type', report_mimetype),
('Content-Length', len(report))],
cookies={'fileToken': token})
class Apps(http.Controller):
@http.route('/apps/<app>', auth='user')
def get_app_url(self, req, app):
act_window_obj = request.session.model('ir.actions.act_window')
ir_model_data = request.session.model('ir.model.data')
try:
action_id = ir_model_data.get_object_reference('base', 'open_module_tree')[1]
action = act_window_obj.read(action_id, ['name', 'type', 'res_model', 'view_mode', 'view_type', 'context', 'views', 'domain'])
action['target'] = 'current'
except ValueError:
action = False
try:
app_id = ir_model_data.get_object_reference('base', 'module_%s' % app)[1]
except ValueError:
app_id = False
if action and app_id:
action['res_id'] = app_id
action['view_mode'] = 'form'
action['views'] = [(False, u'form')]
sakey = Session().save_session_action(action)
debug = '?debug' if req.debug else ''
return werkzeug.utils.redirect('/web{0}#sa={1}'.format(debug, sakey))
# vim:expandtab:tabstop=4:softtabstop=4:shiftwidth=4:
|
israel-lugo/netcalc
|
refs/heads/master
|
netcalc/version.py
|
1
|
# NetCalc - advanced network calculator and address planning helper
# Copyright (C) 2016, 2017 Israel G. Lugo
#
# This file is part of NetCalc.
#
# NetCalc is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# NetCalc is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with NetCalc. If not, see <http://www.gnu.org/licenses/>.
#
# For suggestions, feedback or bug reports: israel.lugo@lugosys.com
"""Package version number.
Public members:
__version__ -- package version number, for internal use
"""
__version__ = '0.6.2'
|
simonpatrick/bite-project
|
refs/heads/master
|
deps/mrtaskman/server/mapreduce/lib/files/testutil.py
|
44
|
#!/usr/bin/env python
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Testing utils for writing tests involving Files API."""
__all__ = ['TestFileServiceStub']
from google.appengine.api import apiproxy_stub
class TestFileServiceStub(apiproxy_stub.APIProxyStub):
"""A FileServiceStub to be used with tests.
Doesn't perform any kind of file validation and stores all
file content in memory.
Can be used to test low-level file calls only, because it doesn't
support all features (like blobstore files).
"""
def __init__(self):
super(TestFileServiceStub, self).__init__('file')
self._file_content = {}
def _Dynamic_Open(self, request, response):
pass
def _Dynamic_Close(self, request, response):
pass
def _Dynamic_Append(self, request, response):
self._file_content[request.filename()] = (
self.get_content(request.filename()) + request.data())
def _Dynamic_Read(self, request, response):
content = self._file_content[request.filename()]
pos = request.pos()
response.set_data(content[pos:pos + request.max_bytes()])
def get_content(self, filename):
"""Get current in-memory file content."""
return self._file_content.get(filename, '')
def set_content(self, filename, content):
"""Set current in-memory file content."""
self._file_content[filename] = content
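# Minimal usage sketch (hypothetical filename), exercising only the helpers
# defined above:
#
#     stub = TestFileServiceStub()
#     stub.set_content('/test/file', 'hello ')
#     stub.set_content('/test/file', stub.get_content('/test/file') + 'world')
#     assert stub.get_content('/test/file') == 'hello world'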
|
EricMuller/mywebmarks-backend
|
refs/heads/master
|
requirements/twisted/Twisted-17.1.0/docs/core/howto/tutorial/listings/finger/finger/finger.py
|
2
|
# finger.py module
from zope.interface import Interface, implementer
from twisted.application import internet, service, strports
from twisted.internet import protocol, reactor, defer, endpoints
from twisted.words.protocols import irc
from twisted.protocols import basic
from twisted.python import components, log
from twisted.web import resource, server, xmlrpc
from twisted.spread import pb
class IFingerService(Interface):
def getUser(user):
"""
Return a deferred returning a string.
"""
def getUsers():
"""
Return a deferred returning a list of strings.
"""
class IFingerSetterService(Interface):
def setUser(user, status):
"""
Set the user's status to something.
"""
def catchError(err):
return "Internal error in server"
class FingerProtocol(basic.LineReceiver):
def lineReceived(self, user):
d = self.factory.getUser(user)
d.addErrback(catchError)
def writeValue(value):
self.transport.write(value+'\n')
self.transport.loseConnection()
d.addCallback(writeValue)
class IFingerFactory(Interface):
def getUser(user):
"""
Return a deferred returning a string.
"""
def buildProtocol(addr):
"""
Return a protocol returning a string.
"""
@implementer(IFingerFactory)
class FingerFactoryFromService(protocol.ServerFactory):
protocol = FingerProtocol
def __init__(self, service):
self.service = service
def getUser(self, user):
return self.service.getUser(user)
components.registerAdapter(FingerFactoryFromService,
IFingerService,
IFingerFactory)
class FingerSetterProtocol(basic.LineReceiver):
def connectionMade(self):
self.lines = []
def lineReceived(self, line):
self.lines.append(line)
def connectionLost(self, reason):
if len(self.lines) == 2:
self.factory.setUser(*self.lines)
class IFingerSetterFactory(Interface):
def setUser(user, status):
"""
Return a deferred returning a string.
"""
def buildProtocol(addr):
"""
Return a protocol returning a string.
"""
@implementer(IFingerSetterFactory)
class FingerSetterFactoryFromService(protocol.ServerFactory):
protocol = FingerSetterProtocol
def __init__(self, service):
self.service = service
def setUser(self, user, status):
self.service.setUser(user, status)
components.registerAdapter(FingerSetterFactoryFromService,
IFingerSetterService,
IFingerSetterFactory)
class IRCReplyBot(irc.IRCClient):
def connectionMade(self):
self.nickname = self.factory.nickname
irc.IRCClient.connectionMade(self)
def privmsg(self, user, channel, msg):
user = user.split('!')[0]
if self.nickname.lower() == channel.lower():
d = self.factory.getUser(msg)
d.addErrback(catchError)
d.addCallback(lambda m: "Status of %s: %s" % (msg, m))
d.addCallback(lambda m: self.msg(user, m))
class IIRCClientFactory(Interface):
"""
@ivar nickname
"""
def getUser(user):
"""
Return a deferred returning a string.
"""
def buildProtocol(addr):
"""
Return a protocol.
"""
@implementer(IIRCClientFactory)
class IRCClientFactoryFromService(protocol.ClientFactory):
protocol = IRCReplyBot
nickname = None
def __init__(self, service):
self.service = service
def getUser(self, user):
return self.service.getUser(user)
components.registerAdapter(IRCClientFactoryFromService,
IFingerService,
IIRCClientFactory)
class UserStatusTree(resource.Resource):
template = """<html><head><title>Users</title></head><body>
<h1>Users</h1>
<ul>
%(users)s
</ul>
</body>
</html>"""
def __init__(self, service):
resource.Resource.__init__(self)
self.service = service
def getChild(self, path, request):
if path == '':
return self
elif path == 'RPC2':
return UserStatusXR(self.service)
else:
return UserStatus(path, self.service)
def render_GET(self, request):
users = self.service.getUsers()
def cbUsers(users):
request.write(self.template % {'users': ''.join([
# Name should be quoted properly for these uses.
'<li><a href="%s">%s</a></li>' % (name, name)
for name in users])})
request.finish()
users.addCallback(cbUsers)
def ebUsers(err):
log.err(err, "UserStatusTree failed")
request.finish()
users.addErrback(ebUsers)
return server.NOT_DONE_YET
components.registerAdapter(UserStatusTree, IFingerService, resource.IResource)
class UserStatus(resource.Resource):
template='''<html><head><title>%(title)s</title></head>
<body><h1>%(name)s</h1><p>%(status)s</p></body></html>'''
def __init__(self, user, service):
resource.Resource.__init__(self)
self.user = user
self.service = service
def render_GET(self, request):
status = self.service.getUser(self.user)
def cbStatus(status):
request.write(self.template % {
'title': self.user,
'name': self.user,
'status': status})
request.finish()
status.addCallback(cbStatus)
def ebStatus(err):
log.err(err, "UserStatus failed")
request.finish()
status.addErrback(ebStatus)
return server.NOT_DONE_YET
class UserStatusXR(xmlrpc.XMLRPC):
def __init__(self, service):
xmlrpc.XMLRPC.__init__(self)
self.service = service
def xmlrpc_getUser(self, user):
return self.service.getUser(user)
def xmlrpc_getUsers(self):
return self.service.getUsers()
class IPerspectiveFinger(Interface):
def remote_getUser(username):
"""
Return a user's status.
"""
def remote_getUsers():
"""
Return a user's status.
"""
@implementer(IPerspectiveFinger)
class PerspectiveFingerFromService(pb.Root):
def __init__(self, service):
self.service = service
def remote_getUser(self, username):
return self.service.getUser(username)
def remote_getUsers(self):
return self.service.getUsers()
components.registerAdapter(PerspectiveFingerFromService,
IFingerService,
IPerspectiveFinger)
@implementer(IFingerService)
class FingerService(service.Service):
def __init__(self, filename):
self.filename = filename
def _read(self):
self.users = {}
with open(self.filename) as f:
for line in f:
user, status = line.split(':', 1)
user = user.strip()
status = status.strip()
self.users[user] = status
self.call = reactor.callLater(30, self._read)
def getUser(self, user):
return defer.succeed(self.users.get(user, "No such user"))
def getUsers(self):
return defer.succeed(self.users.keys())
def startService(self):
self._read()
service.Service.startService(self)
def stopService(self):
service.Service.stopService(self)
self.call.cancel()
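# The status file read by FingerService._read holds plain "user: status"
# lines, split on the first colon; an illustrative (hypothetical) file:
#
#     moshez: happy and well
#     itamar: sad and ill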
# Easy configuration
def makeService(config):
# finger on port 1079
s = service.MultiService()
f = FingerService(config['file'])
h = strports.service("tcp:1079", IFingerFactory(f))
h.setServiceParent(s)
# website on port 8000
r = resource.IResource(f)
r.templateDirectory = config['templates']
site = server.Site(r)
j = strports.service("tcp:8000", site)
j.setServiceParent(s)
# ssl on port 443
# if config.get('ssl'):
# k = strports.service(
# "ssl:port=443:certKey=cert.pem:privateKey=key.pem", site
# )
# k.setServiceParent(s)
# irc fingerbot
if 'ircnick' in config:
i = IIRCClientFactory(f)
i.nickname = config['ircnick']
ircserver = config['ircserver']
b = internet.ClientService(
endpoints.HostnameEndpoint(reactor, ircserver, 6667), i
)
b.setServiceParent(s)
# Perspective Broker on port 8889
if 'pbport' in config:
m = internet.StreamServerEndpointService(
endpoints.TCP4ServerEndpoint(reactor, int(config['pbport'])),
pb.PBServerFactory(IPerspectiveFinger(f))
)
m.setServiceParent(s)
return s
|
yorvic/.vim
|
refs/heads/master
|
bundle/python-mode/pylibs/pylama/checkers/pylint/logilab/astng/utils.py
|
1
|
# copyright 2003-2013 LOGILAB S.A. (Paris, FRANCE), all rights reserved.
# contact http://www.logilab.fr/ -- mailto:contact@logilab.fr
#
# This file is part of logilab-astng.
#
# logilab-astng is free software: you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 2.1 of the License, or (at your
# option) any later version.
#
# logilab-astng is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
# for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with logilab-astng. If not, see <http://www.gnu.org/licenses/>.
"""this module contains some utilities to navigate in the tree or to
extract information from it
"""
__docformat__ = "restructuredtext en"
from .exceptions import ASTNGBuildingException
from .builder import parse
class ASTWalker:
"""a walker visiting a tree in preorder, calling on the handler:
* visit_<class name> on entering a node, where class name is the class of
the node in lower case
* leave_<class name> on leaving a node, where class name is the class of
the node in lower case
"""
def __init__(self, handler):
self.handler = handler
self._cache = {}
def walk(self, node, _done=None):
"""walk on the tree from <node>, getting callbacks from handler"""
if _done is None:
_done = set()
if node in _done:
raise AssertionError((id(node), node, node.parent))
_done.add(node)
self.visit(node)
for child_node in node.get_children():
self.handler.set_context(node, child_node)
assert child_node is not node
self.walk(child_node, _done)
self.leave(node)
assert node.parent is not node
def get_callbacks(self, node):
"""get callbacks from handler for the visited node"""
klass = node.__class__
methods = self._cache.get(klass)
if methods is None:
handler = self.handler
kid = klass.__name__.lower()
e_method = getattr(handler, 'visit_%s' % kid,
getattr(handler, 'visit_default', None))
l_method = getattr(handler, 'leave_%s' % kid,
getattr(handler, 'leave_default', None))
self._cache[klass] = (e_method, l_method)
else:
e_method, l_method = methods
return e_method, l_method
def visit(self, node):
"""walk on the tree from <node>, getting callbacks from handler"""
method = self.get_callbacks(node)[0]
if method is not None:
method(node)
def leave(self, node):
"""walk on the tree from <node>, getting callbacks from handler"""
method = self.get_callbacks(node)[1]
if method is not None:
method(node)
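# Minimal handler sketch (hypothetical class, for illustration only): the
# walker needs set_context() plus any visit_*/leave_* (or *_default) methods.
#
#     class DumpHandler:
#         def set_context(self, parent, child):
#             pass
#         def visit_default(self, node):
#             print node.__class__.__name__.lower()
#
#     ASTWalker(DumpHandler()).walk(some_astng_tree)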
class LocalsVisitor(ASTWalker):
"""visit a project by traversing the locals dictionary"""
def __init__(self):
ASTWalker.__init__(self, self)
self._visited = {}
def visit(self, node):
"""launch the visit starting from the given node"""
if node in self._visited:
return
self._visited[node] = 1 # FIXME: use set ?
methods = self.get_callbacks(node)
if methods[0] is not None:
methods[0](node)
if 'locals' in node.__dict__: # skip Instance and other proxy
for name, local_node in node.items():
self.visit(local_node)
if methods[1] is not None:
return methods[1](node)
def _check_children(node):
"""a helper function to check children - parent relations"""
for child in node.get_children():
ok = False
if child is None:
print "Hm, child of %s is None" % node
continue
if not hasattr(child, 'parent'):
print " ERROR: %s has child %s %x with no parent" % (node, child, id(child))
elif not child.parent:
print " ERROR: %s has child %s %x with parent %r" % (node, child, id(child), child.parent)
elif child.parent is not node:
print " ERROR: %s %x has child %s %x with wrong parent %s" % (node,
id(node), child, id(child), child.parent)
else:
ok = True
if not ok:
print "lines;", node.lineno, child.lineno
print "of module", node.root(), node.root().name
raise ASTNGBuildingException
_check_children(child)
class TreeTester(object):
'''A helper class to see _ast tree and compare with astng tree
indent: string for tree indent representation
lineno: bool to tell if we should print the line numbers
>>> tester = TreeTester('print')
>>> print tester.native_tree_repr()
<Module>
. body = [
. <Print>
. . nl = True
. ]
>>> print tester.astng_tree_repr()
Module()
body = [
Print()
dest =
values = [
]
]
'''
indent = '. '
lineno = False
def __init__(self, sourcecode):
self._string = ''
self.sourcecode = sourcecode
self._ast_node = None
self.build_ast()
def build_ast(self):
"""build the _ast tree from the source code"""
self._ast_node = parse(self.sourcecode)
def native_tree_repr(self, node=None, indent=''):
"""get a nice representation of the _ast tree"""
self._string = ''
if node is None:
node = self._ast_node
self._native_repr_tree(node, indent)
return self._string
def _native_repr_tree(self, node, indent, _done=None):
"""recursive method for the native tree representation"""
from _ast import Load as _Load, Store as _Store, Del as _Del
from _ast import AST as Node
if _done is None:
_done = set()
if node in _done:
self._string += '\nloop in tree: %r (%s)' % (node,
getattr(node, 'lineno', None))
return
_done.add(node)
self._string += '\n' + indent + '<%s>' % node.__class__.__name__
indent += self.indent
if not hasattr(node, '__dict__'):
self._string += '\n' + self.indent + " ** node has no __dict__ " + str(node)
return
node_dict = node.__dict__
if hasattr(node, '_attributes'):
for a in node._attributes:
attr = node_dict[a]
if attr is None:
continue
if a in ("lineno", "col_offset") and not self.lineno:
continue
self._string +='\n' + indent + a + " = " + repr(attr)
for field in node._fields or ():
attr = node_dict[field]
if attr is None:
continue
if isinstance(attr, list):
if not attr:
continue
self._string += '\n' + indent + field + ' = ['
for elt in attr:
self._native_repr_tree(elt, indent, _done)
self._string += '\n' + indent + ']'
continue
if isinstance(attr, (_Load, _Store, _Del)):
continue
if isinstance(attr, Node):
self._string += '\n' + indent + field + " = "
self._native_repr_tree(attr, indent, _done)
else:
self._string += '\n' + indent + field + " = " + repr(attr)
def build_astng_tree(self):
"""build astng tree from the _ast tree
"""
from logilab.astng.builder import ASTNGBuilder
tree = ASTNGBuilder().string_build(self.sourcecode)
return tree
def astng_tree_repr(self, ids=False):
"""build the astng tree and return a nice tree representation"""
mod = self.build_astng_tree()
return mod.repr_tree(ids)
__all__ = ('LocalsVisitor', 'ASTWalker',)
|
bolmez/Class-HARK
|
refs/heads/master
|
HARKutilities.py
|
2
|
'''
General purpose / miscellaneous functions. Includes functions to approximate
continuous distributions with discrete ones, utility functions (and their
derivatives), manipulation of discrete distributions, and basic plotting tools.
'''
from __future__ import division # Import Python 3.x division function
import functools
import re # Regular expression, for string cleaning
import warnings
import numpy as np # Python's numeric library, abbreviated "np"
import pylab as plt # Python's plotting library
import scipy.stats as stats # Python's statistics library
from scipy.interpolate import interp1d
from scipy.special import erf
def _warning(message,category = UserWarning,filename = '',lineno = -1):
'''
A "monkeypatch" to warnings, to print pretty-looking warnings. The
default behavior of the "warnings" module is to print some extra, unusual-
looking things when the user calls a warning. A common "fix" for this is
to "monkeypatch" the warnings module. See:
http://stackoverflow.com/questions/2187269/python-print-only-the-message-on-warnings
I implement this fix directly below, for all simulation and solution utilities.
'''
print(message)
warnings.showwarning = _warning
def memoize(obj):
'''
A decorator to (potentially) make functions more efficient.
With this decorator, functions will "remember" if they have been evaluated with given inputs
before. If they have, they will "remember" the outputs that have already been calculated
for those inputs, rather than calculating them again.
'''
cache = obj._cache = {}
@functools.wraps(obj)
def memoizer(*args, **kwargs):
key = str(args) + str(kwargs)
if key not in cache:
cache[key] = obj(*args, **kwargs)
return cache[key]
return memoizer
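# Usage sketch (hypothetical function): results are keyed on the string of
# the arguments, so repeated calls with identical inputs skip recomputation.
#
# >>> @memoize
# ... def slow_square(x):
# ...     return x*x
# >>> slow_square(12) # computed on the first call
# 144
# >>> slow_square(12) # served from the cache on the second
# 144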
# ==============================================================================
# ============== Some basic function tools ====================================
# ==============================================================================
def getArgNames(function):
'''
Returns a list of strings naming all of the arguments for the passed function.
Parameters
----------
function : function
A function whose argument names are wanted.
Returns
-------
argNames : [string]
The names of the arguments of function.
'''
argCount = function.__code__.co_argcount
argNames = function.__code__.co_varnames[:argCount]
return argNames
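# For instance, applied to the CRRA utility function defined later in this
# module:
#
# >>> getArgNames(CRRAutility)
# ('c', 'gam')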
class NullFunc():
'''
A trivial class that acts as a placeholder "do nothing" function.
'''
def __call__(self,*args):
'''
Returns meaningless output no matter what the input(s) is. If no input,
returns None. Otherwise, returns an array of NaNs (or a single NaN) of
the same size as the first input.
'''
if len(args) == 0:
return None
else:
arg = args[0]
if hasattr(arg,'shape'):
return np.zeros_like(arg) + np.nan
else:
return np.nan
def distance(self,other):
'''
Trivial distance metric that only cares whether the other object is also
an instance of NullFunc. Intentionally does not inherit from HARKobject
as this might create dependency problems.
Parameters
----------
other : any
Any object for comparison to this instance of NullFunc.
Returns
-------
(unnamed) : float
The distance between self and other. Returns 0 if other is also a
NullFunc; otherwise returns an arbitrary high number.
'''
try:
if other.__class__ is self.__class__:
return 0.0
else:
return 1000.0
except:
return 10000.0
# ==============================================================================
# ============== Define utility functions ===============================
# ==============================================================================
def CRRAutility(c, gam):
'''
Evaluates constant relative risk aversion (CRRA) utility of consumption c
given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Utility
Tests
-----
Test a value which should pass:
>>> c, gamma = 1.0, 2.0 # Set two values at once with Python syntax
>>> CRRAutility(c=c, gam=gamma)
-1.0
'''
if gam == 1:
return np.log(c)
else:
return( c**(1.0 - gam) / (1.0 - gam) )
def CRRAutilityP(c, gam):
'''
Evaluates constant relative risk aversion (CRRA) marginal utility of consumption
c given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal utility
'''
return( c**-gam )
def CRRAutilityPP(c, gam):
'''
Evaluates constant relative risk aversion (CRRA) marginal marginal utility of
consumption c given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal marginal utility
'''
return( -gam*c**(-gam-1.0) )
def CRRAutilityPPP(c, gam):
'''
Evaluates constant relative risk aversion (CRRA) marginal marginal marginal
utility of consumption c given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal marginal marginal utility
'''
return( (gam+1.0)*gam*c**(-gam-2.0) )
def CRRAutilityPPPP(c, gam):
'''
Evaluates constant relative risk aversion (CRRA) marginal marginal marginal
marginal utility of consumption c given risk aversion parameter gam.
Parameters
----------
c : float
Consumption value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal marginal marginal marginal utility
'''
return( -(gam+2.0)*(gam+1.0)*gam*c**(-gam-3.0) )
def CRRAutility_inv(u, gam):
'''
Evaluates the inverse of the CRRA utility function (with risk aversion para-
meter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Consumption corresponding to given utility value
'''
if gam == 1:
return np.exp(u)
else:
return( ((1.0-gam)*u)**(1/(1.0-gam)) )
def CRRAutilityP_inv(uP, gam):
'''
Evaluates the inverse of the CRRA marginal utility function (with risk aversion
parameter gam) at a given marginal utility level uP.
Parameters
----------
uP : float
Marginal utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Consumption corresponding to given marginal utility value.
'''
return( uP**(-1.0/gam) )
def CRRAutility_invP(u, gam):
'''
Evaluates the derivative of the inverse of the CRRA utility function (with
risk aversion parameter gam) at a given utility level u.
Parameters
----------
u : float
Utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal consumption corresponding to given utility value
'''
if gam == 1:
return np.exp(u)
else:
return( ((1.0-gam)*u)**(gam/(1.0-gam)) )
def CRRAutilityP_invP(u, gam):
'''
Evaluates the derivative of the inverse of the CRRA marginal utility function
(with risk aversion parameter gam) at a given marginal utility level uP.
Parameters
----------
uP : float
Marginal utility value
gam : float
Risk aversion
Returns
-------
(unnamed) : float
Marginal consumption corresponding to given marginal utility value
'''
return( (-1.0/gam)*u**(-1.0/gam-1.0) )
def CARAutility(c, alpha):
'''
Evaluates constant absolute risk aversion (CARA) utility of consumption c
given risk aversion parameter alpha.
Parameters
----------
c: float
Consumption value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Utility
'''
return( 1 - np.exp(-alpha*c)/alpha )
def CARAutilityP(c, alpha):
'''
Evaluates constant absolute risk aversion (CARA) marginal utility of
consumption c given risk aversion parameter alpha.
Parameters
----------
c: float
Consumption value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Marginal utility
'''
return( np.exp(-alpha*c) )
def CARAutilityPP(c, alpha):
'''
Evaluates constant absolute risk aversion (CARA) marginal marginal utility
of consumption c given risk aversion parameter alpha.
Parameters
----------
c: float
Consumption value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Marginal marginal utility
'''
return( -alpha*np.exp(-alpha*c) )
def CARAutilityPPP(c, alpha):
'''
Evaluates constant absolute risk aversion (CARA) marginal marginal marginal
utility of consumption c given risk aversion parameter alpha.
Parameters
----------
c: float
Consumption value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Marginal marginal marginal utility
'''
return( alpha**2.0*np.exp(-alpha*c) )
def CARAutility_inv(u, alpha):
'''
Evaluates inverse of constant absolute risk aversion (CARA) utility function
at utility level u given risk aversion parameter alpha.
Parameters
----------
u: float
Utility value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Consumption value corresponding to u
'''
return( -1.0/alpha * np.log(alpha*(1-u)) )
def CARAutilityP_inv(u, alpha):
'''
Evaluates the inverse of constant absolute risk aversion (CARA) marginal
utility function at marginal utility uP given risk aversion parameter alpha.
Parameters
----------
u: float
Utility value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Consumption value corresponding to uP
'''
return( -1.0/alpha*np.log(u) )
def CARAutility_invP(u, alpha):
'''
Evaluates the derivative of inverse of constant absolute risk aversion (CARA)
utility function at utility level u given risk aversion parameter alpha.
Parameters
----------
u: float
Utility value
alpha: float
Risk aversion
Returns
-------
(unnamed): float
Marginal consumption value corresponding to u
'''
return( 1.0/(alpha*(1.0-u)) )
def approxLognormal(N, mu=0.0, sigma=1.0, tail_N=0, tail_bound=[0.02,0.98], tail_order=np.e):
'''
Construct a discrete approximation to a lognormal distribution with underlying
normal distribution N(mu,sigma). Makes an equiprobable distribution by
default, but user can optionally request augmented tails with exponentially
sized point masses. This can improve solution accuracy in some models.
Parameters
----------
N: int
Number of discrete points in the "main part" of the approximation.
mu: float
Mean of underlying normal distribution.
sigma: float
Standard deviation of underlying normal distribution.
tail_N: int
Number of points in each "tail part" of the approximation; 0 = no tail.
tail_bound: [float]
CDF boundaries of the tails vs main portion; tail_bound[0] is the lower
tail bound, tail_bound[1] is the upper tail bound. Inoperative when
tail_N = 0. Can make "one tailed" approximations with 0.0 or 1.0.
tail_order: float
Factor by which consecutive point masses in a "tail part" differ in
probability. Should be >= 1 for sensible spacing.
Returns
-------
pmf: np.ndarray
Probabilities for discrete probability mass function.
X: np.ndarray
Discrete values in probability mass function.
Written by Luca Gerotto
Based on Matlab function "setup_workspace.m," from Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 21 April 2016 by Matthew N. White
'''
# Find the CDF boundaries of each segment
if sigma > 0.0:
if tail_N > 0:
lo_cut = tail_bound[0]
hi_cut = tail_bound[1]
else:
lo_cut = 0.0
hi_cut = 1.0
inner_size = hi_cut - lo_cut
inner_CDF_vals = [lo_cut + x*N**(-1.0)*inner_size for x in range(1, N)]
if inner_size < 1.0:
scale = 1.0/tail_order
mag = (1.0-scale**tail_N)/(1.0-scale)
lower_CDF_vals = [0.0]
if lo_cut > 0.0:
for x in range(tail_N-1,-1,-1):
lower_CDF_vals.append(lower_CDF_vals[-1] + lo_cut*scale**x/mag)
upper_CDF_vals = [hi_cut]
if hi_cut < 1.0:
for x in range(tail_N):
upper_CDF_vals.append(upper_CDF_vals[-1] + (1.0-hi_cut)*scale**x/mag)
CDF_vals = lower_CDF_vals + inner_CDF_vals + upper_CDF_vals
temp_cutoffs = list(stats.lognorm.ppf(CDF_vals[1:-1], s=sigma, loc=0,
scale=np.exp(mu)))
cutoffs = [0] + temp_cutoffs + [np.inf]
CDF_vals = np.array(CDF_vals)
# Construct the discrete approximation by finding the average value within each segment
K = CDF_vals.size-1 # number of points in approximation
pmf = CDF_vals[1:(K+1)] - CDF_vals[0:K]
X = np.zeros(K)
for i in range(K):
zBot = cutoffs[i]
zTop = cutoffs[i+1]
X[i] = (-0.5)*np.exp(mu+(sigma**2)*0.5)*(erf((mu+sigma**2-np.log(zTop))*(
(np.sqrt(2)*sigma)**(-1)))-erf((mu+sigma**2-np.log(zBot))*((np.sqrt(2)*sigma)
**(-1))))*(pmf[i]**(-1))
else:
pmf = np.ones(N)/N
X = np.exp(mu)*np.ones(N)
return [pmf, X]
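# A minimal usage sketch (illustrative values; assumes numpy is already
# imported as np, as it is elsewhere in this module):
#
#     pmf, X = approxLognormal(N=7, mu=0.0, sigma=0.1)
#     np.dot(pmf, X)   # approximates E[X] = exp(mu + sigma**2/2)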
@memoize
def approxMeanOneLognormal(N, sigma=1.0, **kwargs):
'''
Calculate a discrete approximation to a mean one lognormal distribution.
Based on function approxLognormal; see that function's documentation for
further notes.
Parameters
----------
N : int
Size of discrete space vector to be returned.
sigma : float
standard deviation associated with underlying normal probability distribution.
Returns
-------
pmf : np.array
Probability associated with each point in X.
X : np.array
Discrete points for discrete probability mass function.
Written by Nathan M. Palmer
Based on Matlab function "setup_shocks.m," from Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
'''
mu_adj = -0.5*sigma**2
pmf,X = approxLognormal(N=N, mu=mu_adj, sigma=sigma, **kwargs)
return [pmf,X]
def approxBeta(N,a=1.0,b=1.0):
'''
Calculate a discrete approximation to the beta distribution. May be quite
slow, as it uses a rudimentary numeric integration method to generate the
discrete approximation.
Parameters
----------
N : int
Size of discrete space vector to be returned.
a : float
First shape parameter (sometimes called alpha).
b : float
Second shape parameter (sometimes called beta).
Returns
-------
pmf : np.array
Probability associated with each point in X.
X : np.array
Discrete points for discrete probability mass function.
'''
P = 1000
vals = np.reshape(stats.beta.ppf(np.linspace(0.0,1.0,N*P),a,b),(N,P))
X = np.mean(vals,axis=1)
pmf = np.ones(N)/float(N)
return( [pmf, X] )
def approxUniform(N,bot=0.0,top=1.0):
'''
Makes a discrete approximation to a uniform distribution, given its bottom
and top limits and number of points.
Parameters
----------
N : int
The number of points in the discrete approximation
bot : float
The bottom of the uniform distribution
top : float
The top of the uniform distribution
Returns
-------
(unnamed) : np.array
An equiprobable discrete approximation to the uniform distribution.
'''
pmf = np.ones(N)/float(N)
center = (top+bot)/2.0
width = (top-bot)/2.0
X = center + width*np.linspace(-(N-1.0)/2.0,(N-1.0)/2.0,N)/(N/2.0)
return [pmf,X]
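# Sketch (illustrative): a 5-point approximation to U(0,1) puts equal weight
# on the midpoints of five equal subintervals:
#
#     pmf, X = approxUniform(N=5, bot=0.0, top=1.0)
#     # X = [0.1, 0.3, 0.5, 0.7, 0.9], pmf = [0.2, 0.2, 0.2, 0.2, 0.2]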
def makeMarkovApproxToNormal(x_grid,mu,sigma,K=351,bound=3.5):
'''
Creates an approximation to a normal distribution with mean mu and standard
deviation sigma, returning a stochastic vector called p_vec, corresponding
to values in x_grid. If an RV is distributed x~N(mu,sigma), then the expectation
of a continuous function f() is E[f(x)] = numpy.dot(p_vec,f(x_grid)).
Parameters
----------
x_grid: numpy.array
A sorted 1D array of floats representing discrete values that a normally
distributed RV could take on.
mu: float
Mean of the normal distribution to be approximated.
sigma: float
Standard deviation of the normal distribution to be approximated.
K: int
Number of points in the normal distribution to sample.
bound: float
Truncation bound of the normal distribution, as +/- bound*sigma.
Returns
-------
p_vec: numpy.array
A stochastic vector with probability weights for each x in x_grid.
'''
x_n = x_grid.size # Number of points in the outcome grid
lower_bound = -bound # Lower bound of normal draws to consider, in SD
upper_bound = bound # Upper bound of normal draws to consider, in SD
raw_sample = np.linspace(lower_bound,upper_bound,K) # Evenly spaced draws between bounds
f_weights = stats.norm.pdf(raw_sample) # Relative probability of each draw
sample = mu + sigma*raw_sample # Adjusted bounds, given mean and stdev
w_vec = np.zeros(x_n) # A vector of outcome weights
# Find the relative position of each of the draws
sample_pos = np.searchsorted(x_grid,sample)
sample_pos[sample_pos < 1] = 1
sample_pos[sample_pos > x_n-1] = x_n-1
# Make arrays of the x_grid point directly above and below each draw
bot = x_grid[sample_pos-1]
top = x_grid[sample_pos]
alpha = (sample-bot)/(top-bot)
# Loop through each x_grid point and add up the probability that each nearby
# draw contributes to it (accounting for distance)
for j in range(1,x_n):
c = sample_pos == j
w_vec[j-1] = w_vec[j-1] + np.dot(f_weights[c],1.0-alpha[c])
w_vec[j] = w_vec[j] + np.dot(f_weights[c],alpha[c])
# Reweight the probabilities so they sum to 1, and return
W = np.sum(w_vec)
p_vec = w_vec/W
return p_vec
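# Usage sketch (illustrative): as the docstring notes, expectations over the
# approximated normal reduce to a dot product against p_vec:
#
#     x_grid = np.linspace(-3.0, 3.0, 25)
#     p_vec = makeMarkovApproxToNormal(x_grid, mu=0.0, sigma=1.0)
#     np.dot(p_vec, x_grid**2)   # roughly 1.0 for a standard normal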
# ================================================================================
# ==================== Functions for manipulating discrete distributions =========
# ================================================================================
def addDiscreteOutcomeConstantMean(distribution, x, p, sort = False):
'''
Adds a discrete outcome of x with probability p to an existing distribution,
holding constant the relative probabilities of other outcomes and overall mean.
Parameters
----------
distribution : [np.array]
Two element list containing a list of probabilities and a list of outcomes.
x : float
The new value to be added to the distribution.
p : float
The probability of the discrete outcome x occurring.
sort: bool
Whether or not to sort X before returning it
Returns
-------
pmf : np.array
Probability associated with each point in X.
X : np.array
Discrete points for discrete probability mass function.
Written by Matthew N. White
Latest update: 08 December 2015 by David Low
'''
X = np.append(x,distribution[1]*(1-p*x)/(1-p))
pmf = np.append(p,distribution[0]*(1-p))
if sort:
indices = np.argsort(X)
X = X[indices]
pmf = pmf[indices]
return([pmf,X])
def addDiscreteOutcome(distribution, x, p, sort = False):
'''
Adds a discrete outcome of x with probability p to an existing distribution,
holding constant the relative probabilities of other outcomes.
Parameters
----------
distribution : [np.array]
Two element list containing a list of probabilities and a list of outcomes.
x : float
The new value to be added to the distribution.
p : float
The probability of the discrete outcome x occurring.
Returns
-------
pmf : np.array
Probability associated with each point in X.
X : np.array
Discrete points for discrete probability mass function.
Written by Matthew N. White
Latest update: 11 December 2015
'''
X = np.append(x,distribution[1])
pmf = np.append(p,distribution[0]*(1-p))
if sort:
indices = np.argsort(X)
X = X[indices]
pmf = pmf[indices]
return([pmf,X])
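# Minimal sketch (illustrative numbers): add a 1% zero-outcome event to an
# equiprobable two-point distribution; the relative weights of the original
# outcomes are preserved and the pmf still sums to 1:
#
#     base = [np.array([0.5, 0.5]), np.array([0.9, 1.1])]
#     pmf, X = addDiscreteOutcome(base, x=0.0, p=0.01, sort=True)
#     # pmf = [0.01, 0.495, 0.495], X = [0.0, 0.9, 1.1]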
def combineIndepDstns(*distributions):
'''
Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points.
Parameters
----------
distributions : [np.array]
Arbitrary number of distributions (pmfs). Each pmf is a list or tuple.
For each pmf, the first vector is probabilities and the second is values.
For each pmf, this should be true: len(X_pmf[0]) = len(X_pmf[1])
Returns
-------
List of arrays, consisting of:
P_out: np.array
Probability associated with each point in X_out.
X_out: np.array (as many as in *distributions)
Discrete points for the joint discrete probability mass function.
Written by Nathan Palmer
Latest update: 31 August 2015 by David Low
'''
# Very quick and incomplete parameter check:
for dist in distributions:
assert len(dist[0]) == len(dist[-1]), "len(dist[0]) != len(dist[-1])"
# Get information on the distributions
dist_lengths = ()
for dist in distributions:
dist_lengths += (len(dist[0]),)
number_of_distributions = len(distributions)
# Initialize lists we will use
X_out = []
P_temp = []
# Now loop through the distributions, tiling and flattening as necessary.
for dd,dist in enumerate(distributions):
# The shape we want before we tile
dist_newshape = (1,) * dd + (len(dist[0]),) + \
(1,) * (number_of_distributions - dd)
# The tiling we want to do
dist_tiles = dist_lengths[:dd] + (1,) + dist_lengths[dd+1:]
# Now we are ready to tile.
# We don't use the np.meshgrid commands, because they do not
# easily support non-symmetric grids.
Xmesh = np.tile(dist[1].reshape(dist_newshape),dist_tiles)
Pmesh = np.tile(dist[0].reshape(dist_newshape),dist_tiles)
# Now flatten the tiled arrays.
flatX = Xmesh.ravel()
flatP = Pmesh.ravel()
# Add the flattened arrays to the output lists.
X_out += [flatX,]
P_temp += [flatP,]
# We're done getting the flattened X_out arrays we wanted.
# However, we have a bunch of flattened P_temp arrays, and just want one
# probability array. So get the probability array, P_out, here.
P_out = np.ones_like(X_out[0])
for pp in P_temp:
P_out *= pp
assert np.isclose(np.sum(P_out),1),'Probabilities do not sum to 1!'
return [P_out,] + X_out
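# Usage sketch (illustrative): the joint pmf of two independent two-point
# distributions has four outcome pairs:
#
#     dstn1 = [np.array([0.5, 0.5]), np.array([0.9, 1.1])]
#     dstn2 = [np.array([0.25, 0.75]), np.array([0.0, 1.0])]
#     P, X1, X2 = combineIndepDstns(dstn1, dstn2)
#     # P has 4 entries summing to 1; X1[i], X2[i] give each outcome pair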
# ==============================================================================
# ============== Functions for generating state space grids ===================
# ==============================================================================
def makeGridExpMult(ming, maxg, ng, timestonest=20):
'''
Make a multi-exponentially spaced grid.
Parameters
----------
ming : float
Minimum value of the grid
maxg : float
Maximum value of the grid
ng : int
The number of grid points
timestonest : int
the number of times to nest the exponentiation
Returns
-------
points : np.array
A multi-exponentially spaced grid
Original Matlab code can be found in Chris Carroll's
[Solution Methods for Microeconomic Dynamic Optimization Problems]
(http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit.
Latest update: 01 May 2015
'''
if timestonest > 0:
Lming = ming
Lmaxg = maxg
for j in range(timestonest):
Lming = np.log(Lming + 1)
Lmaxg = np.log(Lmaxg + 1)
Lgrid = np.linspace(Lming,Lmaxg,ng)
grid = Lgrid
for j in range(timestonest):
grid = np.exp(grid) - 1
else:
Lming = np.log(ming)
Lmaxg = np.log(maxg)
Lstep = (Lmaxg - Lming)/(ng - 1)
Lgrid = np.arange(Lming,Lmaxg+0.000001,Lstep)
grid = np.exp(Lgrid)
return(grid)
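# Sketch (illustrative parameters): a 20-point grid on [0.001, 20] that is
# much denser near the bottom, as is typical for asset grids:
#
#     grid = makeGridExpMult(ming=0.001, maxg=20.0, ng=20, timestonest=3)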
# ==============================================================================
# ============== Uncategorized general functions ===================
# ==============================================================================
def calcWeightedAvg(data,weights):
'''
Generates a weighted average of simulated data. The Nth row of data is averaged
and then weighted by the Nth element of weights in an aggregate average.
Parameters
----------
data : numpy.array
An array of data with N rows of J floats
weights : numpy.array
A length N array of weights for the N rows of data.
Returns
-------
weighted_sum : float
The weighted sum of the data.
'''
data_avg = np.mean(data,axis=1)
weighted_sum = np.dot(data_avg,weights)
return weighted_sum
def getPercentiles(data,weights=None,percentiles=[0.5],presorted=False):
'''
Calculates the requested percentiles of (weighted) data. Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : np.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
pctl_out : numpy.array
The requested percentiles of the data.
'''
if weights is None: # Set equiprobable weights if none were passed
weights = np.ones(data.size)/float(data.size)
if presorted: # Sort the data if it is not already
data_sorted = data
weights_sorted = weights
else:
order = np.argsort(data)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted) # cumulative probability distribution
# Calculate the requested percentiles by interpolating the data over the
# cumulative distribution, then evaluating at the percentile values
inv_CDF = interp1d(cum_dist,data_sorted,bounds_error=False,assume_sorted=True)
pctl_out = inv_CDF(percentiles)
return pctl_out
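# Usage sketch (illustrative data): quartiles of an unweighted sample.
#
#     data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
#     getPercentiles(data, percentiles=[0.25, 0.5, 0.75])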
def getLorenzShares(data,weights=None,percentiles=[0.5],presorted=False):
'''
Calculates the Lorenz curve at the requested percentiles of (weighted) data.
Median by default.
Parameters
----------
data : numpy.array
A 1D array of float data.
weights : numpy.array
A weighting vector for the data.
percentiles : [float]
A list of percentiles to calculate for the data. Each element should
be in (0,1).
presorted : boolean
Indicator for whether data has already been sorted.
Returns
-------
lorenz_out : numpy.array
The requested Lorenz curve points of the data.
'''
if weights is None: # Set equiprobable weights if none were given
weights = np.ones(data.size)
if presorted: # Sort the data if it is not already
data_sorted = data
weights_sorted = weights
else:
order = np.argsort(data)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted) # cumulative probability distribution
temp = data_sorted*weights_sorted
cum_data = np.cumsum(temp)/sum(temp) # cumulative ownership shares
# Calculate the requested Lorenz shares by interpolating the cumulative ownership
# shares over the cumulative distribution, then evaluating at requested points
lorenzFunc = interp1d(cum_dist,cum_data,bounds_error=False,assume_sorted=True)
lorenz_out = lorenzFunc(percentiles)
return lorenz_out
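# Sketch (illustrative): share of total wealth held by the bottom half of an
# equally weighted sample:
#
#     wealth = np.array([1.0, 2.0, 3.0, 10.0])
#     getLorenzShares(wealth, percentiles=[0.5])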
def calcSubpopAvg(data,reference,cutoffs,weights=None):
'''
Calculates the average of (weighted) data between cutoff percentiles of a
reference variable.
Parameters
----------
data : numpy.array
A 1D array of float data.
reference : numpy.array
A 1D array of float data of the same length as data.
cutoffs : [(float,float)]
A list of doubles with the lower and upper percentile bounds (should be
in [0,1]).
weights : numpy.array
A weighting vector for the data.
Returns
-------
slice_avg
The (weighted) average of data that falls within the cutoff percentiles
of reference.
'''
if weights is None: # Set equiprobable weights if none were given
weights = np.ones(data.size)
# Sort the data and generate a cumulative distribution
order = np.argsort(reference)
data_sorted = data[order]
weights_sorted = weights[order]
cum_dist = np.cumsum(weights_sorted)/np.sum(weights_sorted)
# For each set of cutoffs, calculate the average of data that falls within
# the cutoff percentiles of reference
slice_avg = []
for j in range(len(cutoffs)):
bot = np.searchsorted(cum_dist,cutoffs[j][0])
top = np.searchsorted(cum_dist,cutoffs[j][1])
slice_avg.append(np.sum(data_sorted[bot:top]*weights_sorted[bot:top])/
np.sum(weights_sorted[bot:top]))
return slice_avg
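# Sketch (hypothetical arrays cons_array and income_array, equally long and
# 1D): average consumption of households between the 40th and 60th
# percentiles of the income distribution:
#
#     calcSubpopAvg(data=cons_array, reference=income_array,
#                   cutoffs=[(0.4, 0.6)])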
def kernelRegression(x,y,bot=None,top=None,N=500,h=None):
'''
Performs a non-parametric Nadaraya-Watson 1D kernel regression on given data
with optionally specified range, number of points, and kernel bandwidth.
Parameters
----------
x : np.array
The independent variable in the kernel regression.
y : np.array
The dependent variable in the kernel regression.
bot : float
Minimum value of interest in the regression; defaults to min(x).
top : float
Maximum value of interest in the regression; defaults to max(x).
N : int
Number of points to compute.
h : float
The bandwidth of the (Epanechnikov) kernel. To-do: GENERALIZE.
Returns
-------
regression : scipy.interpolate.interp1d
A piecewise locally linear kernel regression: y = f(x).
'''
# Fix omitted inputs
if bot is None:
bot = np.min(x)
if top is None:
top = np.max(x)
if h is None:
h = 2.0*(top - bot)/float(N) # This is an arbitrary default
# Construct a local linear approximation
x_vec = np.linspace(bot,top,num=N)
y_vec = np.zeros_like(x_vec) + np.nan
for j in range(N):
x_here = x_vec[j]
weights = epanechnikovKernel(x,x_here,h)
y_vec[j] = np.dot(weights,y)/np.sum(weights)
regression = interp1d(x_vec,y_vec,bounds_error=False,assume_sorted=True)
return regression
def epanechnikovKernel(x,ref_x,h=1.0):
'''
The Epanechnikov kernel.
Parameters
----------
x : np.array
Values at which to evaluate the kernel
ref_x : float
The reference point
h : float
Kernel bandwidth
Returns
-------
out : np.array
Kernel values at each value of x
'''
u = (x-ref_x)/h # Normalize distance by bandwidth
these = np.abs(u) <= 1.0 # Kernel = 0 outside [-1,1]
out = np.zeros_like(x) # Initialize kernel output
out[these] = 0.75*(1.0-u[these]**2.0) # Evaluate kernel
return out
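# Usage sketch tying the two functions together (illustrative noisy data):
#
#     x = np.random.uniform(0.0, 10.0, 500)
#     y = np.sin(x) + 0.1*np.random.randn(500)
#     f_hat = kernelRegression(x, y, N=200)   # callable: f_hat(x_new)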
# ==============================================================================
# ============== Some basic plotting tools ====================================
# ==============================================================================
def plotFuncs(functions,bottom,top,N=1000,legend_kwds = None):
'''
Plots 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A single function, or a list of functions, to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
'''
if type(functions)==list:
function_list = functions
else:
function_list = [functions]
step = (top-bottom)/N
for function in function_list:
x = np.arange(bottom,top,step)
y = function(x)
plt.plot(x,y)
plt.xlim([bottom, top])
if legend_kwds is not None:
plt.legend(**legend_kwds)
plt.show()
def plotFuncsDer(functions,bottom,top,N=1000,legend_kwds = None):
'''
Plots the first derivative of 1D function(s) over a given range.
Parameters
----------
functions : [function] or function
A function or list of functions, the derivatives of which are to be plotted.
bottom : float
The lower limit of the domain to be plotted.
top : float
The upper limit of the domain to be plotted.
N : int
Number of points in the domain to evaluate.
legend_kwds: None, or dictionary
If not None, the keyword dictionary to pass to plt.legend
Returns
-------
none
'''
if type(functions)==list:
function_list = functions
else:
function_list = [functions]
step = (top-bottom)/N
for function in function_list:
x = np.arange(bottom,top,step)
y = function.derivative(x)
plt.plot(x,y)
plt.xlim([bottom, top])
if legend_kwds is not None:
plt.legend(**legend_kwds)
plt.show()
if __name__ == '__main__':
print("Sorry, HARKutilities doesn't actually do anything on its own.")
print("To see some examples of its functions in action, look at any")
print("of the model modules in /ConsumptionSavingModel. As these functions")
print("are the basic building blocks of HARK, you'll find them used")
print("everywhere! In the future, this module will show examples of each")
print("function in the module.")
|
zofuthan/edx-platform
|
refs/heads/master
|
common/djangoapps/enrollment/data.py
|
39
|
"""
Data Aggregation Layer of the Enrollment API. Collects all enrollment specific data into a single
source to be used throughout the API.
"""
import logging
from django.contrib.auth.models import User
from opaque_keys.edx.keys import CourseKey
from enrollment.errors import (
CourseNotFoundError, CourseEnrollmentClosedError, CourseEnrollmentFullError,
CourseEnrollmentExistsError, UserNotFoundError, InvalidEnrollmentAttribute
)
from enrollment.serializers import CourseEnrollmentSerializer, CourseField
from openedx.core.djangoapps.content.course_overviews.models import CourseOverview
from student.models import (
CourseEnrollment, NonExistentCourseError, EnrollmentClosedError,
CourseFullError, AlreadyEnrolledError, CourseEnrollmentAttribute
)
log = logging.getLogger(__name__)
def get_course_enrollments(user_id):
"""Retrieve a list representing all aggregated data for a user's course enrollments.
Construct a representation of all course enrollment data for a specific user.
Args:
user_id (str): The name of the user to retrieve course enrollment information for.
Returns:
A serializable list of dictionaries of all aggregated enrollment data for a user.
"""
qset = CourseEnrollment.objects.filter(
user__username=user_id, is_active=True
).order_by('created')
return CourseEnrollmentSerializer(qset).data
def get_course_enrollment(username, course_id):
"""Retrieve an object representing all aggregated data for a user's course enrollment.
Get the course enrollment information for a specific user and course.
Args:
username (str): The name of the user to retrieve course enrollment information for.
course_id (str): The course to retrieve course enrollment information for.
Returns:
A serializable dictionary representing the course enrollment.
"""
course_key = CourseKey.from_string(course_id)
try:
enrollment = CourseEnrollment.objects.get(
user__username=username, course_id=course_key
)
return CourseEnrollmentSerializer(enrollment).data
except CourseEnrollment.DoesNotExist:
return None
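# Illustrative call sketch (hypothetical username and course key; requires a
# configured edX/Django environment):
#
#     get_course_enrollment("staff", "course-v1:edX+DemoX+Demo_2015")
#     # -> serialized enrollment dict, or None if the user is not enrolled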
def create_course_enrollment(username, course_id, mode, is_active):
"""Create a new course enrollment for the given user.
Creates a new course enrollment for the specified user username.
Args:
username (str): The name of the user to create a new course enrollment for.
course_id (str): The course to create the course enrollment for.
mode (str): (Optional) The mode for the new enrollment.
is_active (boolean): (Optional) Determines if the enrollment is active.
Returns:
A serializable dictionary representing the new course enrollment.
Raises:
CourseNotFoundError
CourseEnrollmentFullError
EnrollmentClosedError
CourseEnrollmentExistsError
"""
course_key = CourseKey.from_string(course_id)
try:
user = User.objects.get(username=username)
except User.DoesNotExist:
msg = u"Not user with username '{username}' found.".format(username=username)
log.warn(msg)
raise UserNotFoundError(msg)
try:
enrollment = CourseEnrollment.enroll(user, course_key, check_access=True)
return _update_enrollment(enrollment, is_active=is_active, mode=mode)
except NonExistentCourseError as err:
raise CourseNotFoundError(err.message)
except EnrollmentClosedError as err:
raise CourseEnrollmentClosedError(err.message)
except CourseFullError as err:
raise CourseEnrollmentFullError(err.message)
except AlreadyEnrolledError as err:
enrollment = get_course_enrollment(username, course_id)
raise CourseEnrollmentExistsError(err.message, enrollment)
def update_course_enrollment(username, course_id, mode=None, is_active=None):
"""Modify a course enrollment for a user.
Allows updates to a specific course enrollment.
Args:
username (str): The name of the user to retrieve course enrollment information for.
course_id (str): The course to retrieve course enrollment information for.
mode (str): (Optional) If specified, modify the mode for this enrollment.
is_active (boolean): (Optional) Determines if the enrollment is active.
Returns:
A serializable dictionary representing the modified course enrollment.
"""
course_key = CourseKey.from_string(course_id)
try:
user = User.objects.get(username=username)
except User.DoesNotExist:
msg = u"Not user with username '{username}' found.".format(username=username)
log.warn(msg)
raise UserNotFoundError(msg)
try:
enrollment = CourseEnrollment.objects.get(user=user, course_id=course_key)
return _update_enrollment(enrollment, is_active=is_active, mode=mode)
except CourseEnrollment.DoesNotExist:
return None
def add_or_update_enrollment_attr(user_id, course_id, attributes):
"""Set enrollment attributes for the enrollment of given user in the
course provided.
Args:
course_id (str): The Course to set enrollment attributes for.
user_id (str): The User to set enrollment attributes for.
attributes (list): Attributes to be set.
Example:
>>> add_or_update_enrollment_attr(
"Bob",
"course-v1-edX-DemoX-1T2015",
[
{
"namespace": "credit",
"name": "provider_id",
"value": "hogwarts",
},
]
)
"""
course_key = CourseKey.from_string(course_id)
user = _get_user(user_id)
enrollment = CourseEnrollment.get_enrollment(user, course_key)
if not _invalid_attribute(attributes) and enrollment is not None:
CourseEnrollmentAttribute.add_enrollment_attr(enrollment, attributes)
def get_enrollment_attributes(user_id, course_id):
"""Retrieve enrollment attributes for given user for provided course.
Args:
user_id: The User to get enrollment attributes for
course_id (str): The Course to get enrollment attributes for.
Example:
>>> get_enrollment_attributes("Bob", "course-v1-edX-DemoX-1T2015")
[
{
"namespace": "credit",
"name": "provider_id",
"value": "hogwarts",
},
]
Returns: list
"""
course_key = CourseKey.from_string(course_id)
user = _get_user(user_id)
enrollment = CourseEnrollment.get_enrollment(user, course_key)
return CourseEnrollmentAttribute.get_enrollment_attributes(enrollment)
def _get_user(user_id):
"""Retrieve user with provided user_id
Args:
user_id (str): username of the user whose User object is to be retrieved
Returns: obj
"""
try:
return User.objects.get(username=user_id)
except User.DoesNotExist:
msg = u"Not user with username '{username}' found.".format(username=user_id)
log.warn(msg)
raise UserNotFoundError(msg)
def _update_enrollment(enrollment, is_active=None, mode=None):
enrollment.update_enrollment(is_active=is_active, mode=mode)
enrollment.save()
return CourseEnrollmentSerializer(enrollment).data
def _invalid_attribute(attributes):
"""Validate enrollment attribute
Args:
attributes(dict): dict of attribute
Return:
list of invalid attributes
"""
invalid_attributes = []
for attribute in attributes:
if "namespace" not in attribute:
msg = u"'namespace' not in enrollment attribute"
log.warn(msg)
invalid_attributes.append("namespace")
raise InvalidEnrollmentAttribute(msg)
if "name" not in attribute:
msg = u"'name' not in enrollment attribute"
log.warn(msg)
invalid_attributes.append("name")
raise InvalidEnrollmentAttribute(msg)
if "value" not in attribute:
msg = u"'value' not in enrollment attribute"
log.warn(msg)
invalid_attributes.append("value")
raise InvalidEnrollmentAttribute(msg)
return invalid_attributes
def get_course_enrollment_info(course_id, include_expired=False):
"""Returns all course enrollment information for the given course.
Based on the course id, return all related course information.
Args:
course_id (str): The course to retrieve enrollment information for.
include_expired (bool): Boolean denoting whether expired course modes
should be included in the returned JSON data.
Returns:
A serializable dictionary representing the course's enrollment information.
Raises:
CourseNotFoundError
"""
course_key = CourseKey.from_string(course_id)
try:
course = CourseOverview.get_from_id(course_key)
except CourseOverview.DoesNotExist:
msg = u"Requested enrollment information for unknown course {course}".format(course=course_id)
log.warning(msg)
raise CourseNotFoundError(msg)
else:
return CourseField().to_native(course, include_expired=include_expired)
|
levenlabs/ansible
|
refs/heads/stable-2.1
|
lib/ansible/utils/module_docs_fragments/azure_tags.py
|
16
|
#!/usr/bin/python
#
# Copyright (c) 2016 Matt Davis, <mdavis@ansible.com>
# Chris Houseknecht, <house@redhat.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
class ModuleDocFragment(object):
# Azure doc fragment
DOCUMENTATION = '''
options:
tags:
description:
- "Dictionary of string:string pairs to assign as metadata to the object. Metadata tags on the object will be updated with any provided values. To remove tags set append_tags option to false."
required: false
default: null
append_tags:
description:
- Use to control if the tags field is canonical or just appends to existing tags. When canonical, any tags not found in the tags parameter will be removed from the object's metadata.
default: True
required: false
'''
|
kronicz/ecommerce-2
|
refs/heads/master
|
lib/python2.7/site-packages/django/contrib/gis/db/models/lookups.py
|
84
|
from __future__ import unicode_literals
import re
from django.core.exceptions import FieldDoesNotExist
from django.db.models.constants import LOOKUP_SEP
from django.db.models.expressions import Col, Expression
from django.db.models.lookups import Lookup
from django.utils import six
gis_lookups = {}
class GISLookup(Lookup):
sql_template = None
transform_func = None
distance = False
@classmethod
def _check_geo_field(cls, opts, lookup):
"""
Utility for checking the given lookup with the given model options.
The lookup is a string either specifying the geographic field, e.g.
'point' or 'the_geom', or a related lookup on a geographic field like
'address__point'.
If a GeometryField exists according to the given lookup on the model
options, it will be returned. Otherwise returns None.
"""
from django.contrib.gis.db.models.fields import GeometryField
# This takes into account the situation where the lookup is a
# lookup to a related geographic field, e.g., 'address__point'.
field_list = lookup.split(LOOKUP_SEP)
# Reversing so list operates like a queue of related lookups,
# and popping the top lookup.
field_list.reverse()
fld_name = field_list.pop()
try:
geo_fld = opts.get_field(fld_name)
# If the field list is still around, then it means that the
# lookup was for a geometry field across a relationship --
# thus we keep on getting the related model options and the
# model field associated with the next field in the list
# until there's no more left.
while len(field_list):
opts = geo_fld.rel.to._meta
geo_fld = opts.get_field(field_list.pop())
except (FieldDoesNotExist, AttributeError):
return False
# Finally, make sure we got a Geographic field and return.
if isinstance(geo_fld, GeometryField):
return geo_fld
else:
return False
def get_db_prep_lookup(self, value, connection):
# get_db_prep_lookup is called by process_rhs from super class
if isinstance(value, (tuple, list)):
# First param is assumed to be the geometric object
params = [connection.ops.Adapter(value[0])] + list(value)[1:]
else:
params = [connection.ops.Adapter(value)]
return ('%s', params)
def process_rhs(self, compiler, connection):
rhs, rhs_params = super(GISLookup, self).process_rhs(compiler, connection)
if hasattr(self.rhs, '_as_sql'):
# If rhs is some QuerySet, don't touch it
return rhs, rhs_params
geom = self.rhs
if isinstance(self.rhs, Col):
# Make sure the F Expression destination field exists, and
# set an `srid` attribute with the same as that of the
# destination.
geo_fld = self.rhs.output_field
if not hasattr(geo_fld, 'srid'):
raise ValueError('No geographic field found in expression.')
self.rhs.srid = geo_fld.srid
elif isinstance(self.rhs, Expression):
raise ValueError('Complex expressions not supported for GeometryField')
elif isinstance(self.rhs, (list, tuple)):
geom = self.rhs[0]
rhs = connection.ops.get_geom_placeholder(self.lhs.output_field, geom, compiler)
return rhs, rhs_params
def as_sql(self, compiler, connection):
lhs_sql, sql_params = self.process_lhs(compiler, connection)
rhs_sql, rhs_params = self.process_rhs(compiler, connection)
sql_params.extend(rhs_params)
template_params = {'lhs': lhs_sql, 'rhs': rhs_sql}
backend_op = connection.ops.gis_operators[self.lookup_name]
return backend_op.as_sql(connection, self, template_params, sql_params)
# ------------------
# Geometry operators
# ------------------
class OverlapsLeftLookup(GISLookup):
"""
The overlaps_left operator returns true if A's bounding box overlaps or is to the
left of B's bounding box.
"""
lookup_name = 'overlaps_left'
gis_lookups['overlaps_left'] = OverlapsLeftLookup
class OverlapsRightLookup(GISLookup):
"""
The 'overlaps_right' operator returns true if A's bounding box overlaps or is to the
right of B's bounding box.
"""
lookup_name = 'overlaps_right'
gis_lookups['overlaps_right'] = OverlapsRightLookup
class OverlapsBelowLookup(GISLookup):
"""
The 'overlaps_below' operator returns true if A's bounding box overlaps or is below
B's bounding box.
"""
lookup_name = 'overlaps_below'
gis_lookups['overlaps_below'] = OverlapsBelowLookup
class OverlapsAboveLookup(GISLookup):
"""
The 'overlaps_above' operator returns true if A's bounding box overlaps or is above
B's bounding box.
"""
lookup_name = 'overlaps_above'
gis_lookups['overlaps_above'] = OverlapsAboveLookup
class LeftLookup(GISLookup):
"""
The 'left' operator returns true if A's bounding box is strictly to the left
of B's bounding box.
"""
lookup_name = 'left'
gis_lookups['left'] = LeftLookup
class RightLookup(GISLookup):
"""
The 'right' operator returns true if A's bounding box is strictly to the right
of B's bounding box.
"""
lookup_name = 'right'
gis_lookups['right'] = RightLookup
class StrictlyBelowLookup(GISLookup):
"""
The 'strictly_below' operator returns true if A's bounding box is strictly below B's
bounding box.
"""
lookup_name = 'strictly_below'
gis_lookups['strictly_below'] = StrictlyBelowLookup
class StrictlyAboveLookup(GISLookup):
"""
The 'strictly_above' operator returns true if A's bounding box is strictly above B's
bounding box.
"""
lookup_name = 'strictly_above'
gis_lookups['strictly_above'] = StrictlyAboveLookup
class SameAsLookup(GISLookup):
"""
The "~=" operator is the "same as" operator. It tests actual geometric
equality of two features. So if A and B are the same feature,
vertex-by-vertex, the operator returns true.
"""
lookup_name = 'same_as'
gis_lookups['same_as'] = SameAsLookup
class ExactLookup(SameAsLookup):
# Alias of same_as
lookup_name = 'exact'
gis_lookups['exact'] = ExactLookup
class BBContainsLookup(GISLookup):
"""
The 'bbcontains' operator returns true if A's bounding box completely contains
B's bounding box.
"""
lookup_name = 'bbcontains'
gis_lookups['bbcontains'] = BBContainsLookup
class BBOverlapsLookup(GISLookup):
"""
The 'bboverlaps' operator returns true if A's bounding box overlaps B's bounding box.
"""
lookup_name = 'bboverlaps'
gis_lookups['bboverlaps'] = BBOverlapsLookup
class ContainedLookup(GISLookup):
"""
The 'contained' operator returns true if A's bounding box is completely contained
by B's bounding box.
"""
lookup_name = 'contained'
gis_lookups['contained'] = ContainedLookup
# ------------------
# Geometry functions
# ------------------
class ContainsLookup(GISLookup):
lookup_name = 'contains'
gis_lookups['contains'] = ContainsLookup
class ContainsProperlyLookup(GISLookup):
lookup_name = 'contains_properly'
gis_lookups['contains_properly'] = ContainsProperlyLookup
class CoveredByLookup(GISLookup):
lookup_name = 'coveredby'
gis_lookups['coveredby'] = CoveredByLookup
class CoversLookup(GISLookup):
lookup_name = 'covers'
gis_lookups['covers'] = CoversLookup
class CrossesLookup(GISLookup):
lookup_name = 'crosses'
gis_lookups['crosses'] = CrossesLookup
class DisjointLookup(GISLookup):
lookup_name = 'disjoint'
gis_lookups['disjoint'] = DisjointLookup
class EqualsLookup(GISLookup):
lookup_name = 'equals'
gis_lookups['equals'] = EqualsLookup
class IntersectsLookup(GISLookup):
lookup_name = 'intersects'
gis_lookups['intersects'] = IntersectsLookup
class OverlapsLookup(GISLookup):
lookup_name = 'overlaps'
gis_lookups['overlaps'] = OverlapsLookup
class RelateLookup(GISLookup):
lookup_name = 'relate'
sql_template = '%(func)s(%(lhs)s, %(rhs)s, %%s)'
pattern_regex = re.compile(r'^[012TF\*]{9}$')
def get_db_prep_lookup(self, value, connection):
if len(value) != 2:
raise ValueError('relate must be passed a two-tuple')
# Check the pattern argument
backend_op = connection.ops.gis_operators[self.lookup_name]
if hasattr(backend_op, 'check_relate_argument'):
backend_op.check_relate_argument(value[1])
else:
pattern = value[1]
if not isinstance(pattern, six.string_types) or not self.pattern_regex.match(pattern):
raise ValueError('Invalid intersection matrix pattern "%s".' % pattern)
return super(RelateLookup, self).get_db_prep_lookup(value, connection)
gis_lookups['relate'] = RelateLookup
class TouchesLookup(GISLookup):
lookup_name = 'touches'
gis_lookups['touches'] = TouchesLookup
class WithinLookup(GISLookup):
lookup_name = 'within'
gis_lookups['within'] = WithinLookup
class DistanceLookupBase(GISLookup):
distance = True
sql_template = '%(func)s(%(lhs)s, %(rhs)s) %(op)s %%s'
def get_db_prep_lookup(self, value, connection):
if isinstance(value, (tuple, list)):
if not 2 <= len(value) <= 3:
raise ValueError("2 or 3-element tuple required for '%s' lookup." % self.lookup_name)
params = [connection.ops.Adapter(value[0])]
# Getting the distance parameter in the units of the field.
params += connection.ops.get_distance(self.lhs.output_field, value[1:], self.lookup_name)
return ('%s', params)
else:
return super(DistanceLookupBase, self).get_db_prep_lookup(value, connection)
class DWithinLookup(DistanceLookupBase):
lookup_name = 'dwithin'
sql_template = '%(func)s(%(lhs)s, %(rhs)s, %%s)'
gis_lookups['dwithin'] = DWithinLookup
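# Illustrative ORM sketch (hypothetical model `City` with a `point` geometry
# field; D is GeoDjango's distance helper). A dwithin filter compiles through
# DWithinLookup's sql_template above:
#
#     from django.contrib.gis.measure import D
#     City.objects.filter(point__dwithin=(some_geom, D(km=5)))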
class DistanceGTLookup(DistanceLookupBase):
lookup_name = 'distance_gt'
gis_lookups['distance_gt'] = DistanceGTLookup
class DistanceGTELookup(DistanceLookupBase):
lookup_name = 'distance_gte'
gis_lookups['distance_gte'] = DistanceGTELookup
class DistanceLTLookup(DistanceLookupBase):
lookup_name = 'distance_lt'
gis_lookups['distance_lt'] = DistanceLTLookup
class DistanceLTELookup(DistanceLookupBase):
lookup_name = 'distance_lte'
gis_lookups['distance_lte'] = DistanceLTELookup
|
indictranstech/reciphergroup-erpnext
|
refs/heads/master
|
erpnext/accounts/doctype/journal_entry/journal_entry.py
|
4
|
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import frappe
from frappe.utils import cstr, flt, fmt_money, formatdate, getdate, date_diff
from frappe import msgprint, _, scrub
from erpnext.setup.utils import get_company_currency
from erpnext.controllers.accounts_controller import AccountsController
from erpnext.accounts.utils import get_balance_on
class JournalEntry(AccountsController):
def __init__(self, arg1, arg2=None):
super(JournalEntry, self).__init__(arg1, arg2)
def get_feed(self):
return self.voucher_type
def validate(self):
if not self.is_opening:
self.is_opening='No'
self.clearance_date = None
super(JournalEntry, self).validate_date_with_fiscal_year()
self.validate_party()
self.validate_cheque_info()
self.validate_entries_for_advance()
self.validate_debit_and_credit()
self.validate_against_jv()
self.validate_reference_doc()
self.set_against_account()
self.create_remarks()
self.set_print_format_fields()
self.validate_expense_claim()
self.validate_credit_debit_note()
self.validate_empty_accounts_table()
self.set_title()
def on_submit(self):
self.check_credit_limit()
self.make_gl_entries()
self.update_advance_paid()
self.update_expense_claim()
def set_title(self):
self.title = self.pay_to_recd_from or self.accounts[0].account
def update_advance_paid(self):
advance_paid = frappe._dict()
for d in self.get("accounts"):
if d.is_advance:
if d.reference_type in ("Sales Order", "Purchase Order"):
advance_paid.setdefault(d.reference_type, []).append(d.reference_name)
for voucher_type, order_list in advance_paid.items():
for voucher_no in list(set(order_list)):
frappe.get_doc(voucher_type, voucher_no).set_total_advance_paid()
def on_cancel(self):
from erpnext.accounts.utils import remove_against_link_from_jv
remove_against_link_from_jv(self.doctype, self.name)
self.make_gl_entries(1)
self.update_advance_paid()
self.update_expense_claim()
def validate_party(self):
for d in self.get("accounts"):
account_type = frappe.db.get_value("Account", d.account, "account_type")
if account_type in ["Receivable", "Payable"]:
if not (d.party_type and d.party):
frappe.throw(_("Row {0}: Party Type and Party is required for Receivable / Payable account {1}").format(d.idx, d.account))
elif d.party_type and d.party:
frappe.throw(_("Row {0}: Party Type and Party is only applicable against Receivable / Payable account").format(d.idx))
def check_credit_limit(self):
customers = list(set([d.party for d in self.get("accounts")
if d.party_type=="Customer" and d.party and flt(d.debit) > 0]))
if customers:
from erpnext.selling.doctype.customer.customer import check_credit_limit
for customer in customers:
check_credit_limit(customer, self.company)
def validate_cheque_info(self):
if self.voucher_type in ['Bank Entry']:
if not self.cheque_no or not self.cheque_date:
msgprint(_("Reference No & Reference Date is required for {0}").format(self.voucher_type),
raise_exception=1)
if self.cheque_date and not self.cheque_no:
msgprint(_("Reference No is mandatory if you entered Reference Date"), raise_exception=1)
def validate_entries_for_advance(self):
for d in self.get('accounts'):
if d.reference_type not in ("Sales Invoice", "Purchase Invoice", "Journal Entry"):
if (d.party_type == 'Customer' and flt(d.credit) > 0) or \
(d.party_type == 'Supplier' and flt(d.debit) > 0):
if d.is_advance=="No":
msgprint(_("Row {0}: Please check 'Is Advance' against Account {1} if this is an advance entry.").format(d.idx, d.account))
elif d.reference_type in ("Sales Order", "Purchase Order") and d.is_advance != "Yes":
frappe.throw(_("Row {0}: Payment against Sales/Purchase Order should always be marked as advance").format(d.idx))
def validate_against_jv(self):
for d in self.get('accounts'):
if d.reference_type=="Journal Entry":
account_root_type = frappe.db.get_value("Account", d.account, "root_type")
if account_root_type == "Asset" and flt(d.debit) > 0:
frappe.throw(_("For {0}, only credit accounts can be linked against another debit entry")
.format(d.account))
elif account_root_type == "Liability" and flt(d.credit) > 0:
frappe.throw(_("For {0}, only debit accounts can be linked against another credit entry")
.format(d.account))
if d.reference_name == self.name:
frappe.throw(_("You can not enter current voucher in 'Against Journal Entry' column"))
against_entries = frappe.db.sql("""select * from `tabJournal Entry Account`
where account = %s and docstatus = 1 and parent = %s
and ifnull(reference_type, '') in ("", "Sales Order", "Purchase Order")
""", (d.account, d.reference_name), as_dict=True)
if not against_entries:
frappe.throw(_("Journal Entry {0} does not have account {1} or already matched against other voucher")
.format(d.reference_name, d.account))
else:
dr_or_cr = "debit" if d.credit > 0 else "credit"
valid = False
for jvd in against_entries:
if flt(jvd[dr_or_cr]) > 0:
valid = True
if not valid:
frappe.throw(_("Against Journal Entry {0} does not have any unmatched {1} entry")
.format(d.reference_name, dr_or_cr))
def validate_reference_doc(self):
"""Validates reference document"""
field_dict = {
'Sales Invoice': ["Customer", "Debit To"],
'Purchase Invoice': ["Supplier", "Credit To"],
'Sales Order': ["Customer"],
'Purchase Order': ["Supplier"]
}
self.reference_totals = {}
self.reference_types = {}
for d in self.get("accounts"):
if not d.reference_type:
d.reference_name = None
if not d.reference_name:
d.reference_type = None
if d.reference_type and d.reference_name and (d.reference_type in field_dict.keys()):
dr_or_cr = "credit" if d.reference_type in ("Sales Order", "Sales Invoice") \
else "debit"
# check debit or credit type Sales / Purchase Order
if d.reference_type=="Sales Order" and flt(d.debit) > 0:
frappe.throw(_("Row {0}: Debit entry can not be linked with a {1}").format(d.idx, d.reference_type))
if d.reference_type == "Purchase Order" and flt(d.credit) > 0:
frappe.throw(_("Row {0}: Credit entry can not be linked with a {1}").format(d.idx, d.reference_type))
# set totals
if not d.reference_name in self.reference_totals:
self.reference_totals[d.reference_name] = 0.0
self.reference_totals[d.reference_name] += flt(d.get(dr_or_cr))
self.reference_types[d.reference_name] = d.reference_type
against_voucher = frappe.db.get_value(d.reference_type, d.reference_name,
[scrub(dt) for dt in field_dict.get(d.reference_type)])
# check if party and account match
if d.reference_type in ("Sales Invoice", "Purchase Invoice"):
if (against_voucher[0] != d.party or against_voucher[1] != d.account):
frappe.throw(_("Row {0}: Party / Account does not match with {1} / {2} in {3} {4}")
.format(d.idx, field_dict.get(d.reference_type)[0], field_dict.get(d.reference_type)[1],
d.reference_type, d.reference_name))
# check if party matches for Sales / Purchase Order
if d.reference_type in ("Sales Order", "Purchase Order"):
# set totals
if against_voucher != d.party:
frappe.throw(_("Row {0}: {1} {2} does not match with {3}") \
.format(d.idx, d.party_type, d.party, d.reference_type))
self.validate_orders()
self.validate_invoices()
def validate_orders(self):
"""Validate totals, stopped and docstatus for orders"""
for reference_name, total in self.reference_totals.iteritems():
reference_type = self.reference_types[reference_name]
if reference_type in ("Sales Order", "Purchase Order"):
voucher_properties = frappe.db.get_value(reference_type, reference_name,
["docstatus", "per_billed", "status", "advance_paid", "base_grand_total"])
if voucher_properties[0] != 1:
frappe.throw(_("{0} {1} is not submitted").format(reference_type, reference_name))
if flt(voucher_properties[1]) >= 100:
frappe.throw(_("{0} {1} is fully billed").format(reference_type, reference_name))
if cstr(voucher_properties[2]) == "Stopped":
frappe.throw(_("{0} {1} is stopped").format(reference_type, reference_name))
if flt(voucher_properties[4]) < (flt(voucher_properties[3]) + total):
frappe.throw(_("Advance paid against {0} {1} cannot be greater \
than Grand Total {2}").format(reference_type, reference_name, voucher_properties[4]))
def validate_invoices(self):
"""Validate totals and docstatus for invoices"""
for reference_name, total in self.reference_totals.iteritems():
reference_type = self.reference_types[reference_name]
if reference_type in ("Sales Invoice", "Purchase Invoice"):
voucher_properties = frappe.db.get_value(reference_type, reference_name,
["docstatus", "outstanding_amount"])
if voucher_properties[0] != 1:
frappe.throw(_("{0} {1} is not submitted").format(reference_type, reference_name))
if total and flt(voucher_properties[1]) < total:
frappe.throw(_("Payment against {0} {1} cannot be greater \
than Outstanding Amount {2}").format(reference_type, reference_name, voucher_properties[1]))
def set_against_account(self):
accounts_debited, accounts_credited = [], []
for d in self.get("accounts"):
if flt(d.debit) > 0: accounts_debited.append(d.party or d.account)
if flt(d.credit) > 0: accounts_credited.append(d.party or d.account)
for d in self.get("accounts"):
if flt(d.debit) > 0: d.against_account = ", ".join(list(set(accounts_credited)))
if flt(d.credit) > 0: d.against_account = ", ".join(list(set(accounts_debited)))
def validate_debit_and_credit(self):
self.total_debit, self.total_credit, self.difference = 0, 0, 0
for d in self.get("accounts"):
if d.debit and d.credit:
frappe.throw(_("You cannot credit and debit same account at the same time"))
self.total_debit = flt(self.total_debit) + flt(d.debit, self.precision("debit", "accounts"))
self.total_credit = flt(self.total_credit) + flt(d.credit, self.precision("credit", "accounts"))
self.difference = flt(self.total_debit, self.precision("total_debit")) - \
flt(self.total_credit, self.precision("total_credit"))
if self.difference:
frappe.throw(_("Total Debit must be equal to Total Credit. The difference is {0}")
.format(self.difference))
def create_remarks(self):
r = []
if self.cheque_no:
if self.cheque_date:
r.append(_('Reference #{0} dated {1}').format(self.cheque_no, formatdate(self.cheque_date)))
else:
msgprint(_("Please enter Reference date"), raise_exception=frappe.MandatoryError)
company_currency = get_company_currency(self.company)
for d in self.get('accounts'):
if d.reference_type=="Sales Invoice" and d.credit:
r.append(_("{0} against Sales Invoice {1}").format(fmt_money(flt(d.credit), currency = company_currency), \
d.reference_name))
if d.reference_type=="Sales Order" and d.credit:
r.append(_("{0} against Sales Order {1}").format(fmt_money(flt(d.credit), currency = company_currency), \
d.reference_name))
if d.reference_type == "Purchase Invoice" and d.debit:
bill_no = frappe.db.sql("""select bill_no, bill_date
from `tabPurchase Invoice` where name=%s""", d.reference_name)
if bill_no and bill_no[0][0] and bill_no[0][0].lower().strip() \
not in ['na', 'not applicable', 'none']:
r.append(_('{0} against Bill {1} dated {2}').format(fmt_money(flt(d.debit), currency=company_currency), bill_no[0][0],
bill_no[0][1] and formatdate(bill_no[0][1].strftime('%Y-%m-%d'))))
if d.reference_type == "Purchase Order" and d.debit:
r.append(_("{0} against Purchase Order {1}").format(fmt_money(flt(d.credit), currency = company_currency), \
d.reference_name))
if self.user_remark:
r.append(_("Note: {0}").format(self.user_remark))
if r:
self.remark = ("\n").join(r) #User Remarks is not mandatory
def set_print_format_fields(self):
for d in self.get('accounts'):
if d.party_type and d.party:
if not self.pay_to_recd_from:
self.pay_to_recd_from = frappe.db.get_value(d.party_type, d.party,
"customer_name" if d.party_type=="Customer" else "supplier_name")
self.set_total_amount(d.debit or d.credit)
elif frappe.db.get_value("Account", d.account, "account_type") in ["Bank", "Cash"]:
self.set_total_amount(d.debit or d.credit)
def set_total_amount(self, amt):
company_currency = get_company_currency(self.company)
self.total_amount = amt
from frappe.utils import money_in_words
self.total_amount_in_words = money_in_words(amt, company_currency)
def make_gl_entries(self, cancel=0, adv_adj=0):
from erpnext.accounts.general_ledger import make_gl_entries
gl_map = []
for d in self.get("accounts"):
if d.debit or d.credit:
gl_map.append(
self.get_gl_dict({
"account": d.account,
"party_type": d.party_type,
"party": d.party,
"against": d.against_account,
"debit": flt(d.debit, self.precision("debit", "accounts")),
"credit": flt(d.credit, self.precision("credit", "accounts")),
"against_voucher_type": d.reference_type,
"against_voucher": d.reference_name,
"remarks": self.remark,
"cost_center": d.cost_center
})
)
if gl_map:
make_gl_entries(gl_map, cancel=cancel, adv_adj=adv_adj)
def get_balance(self):
if not self.get('accounts'):
msgprint(_("'Entries' cannot be empty"), raise_exception=True)
else:
flag, self.total_debit, self.total_credit = 0, 0, 0
diff = flt(self.difference, self.precision("difference"))
# If any row without amount, set the diff on that row
for d in self.get('accounts'):
if not d.credit and not d.debit and diff != 0:
if diff>0:
d.credit = diff
elif diff<0:
d.debit = diff
flag = 1
# Set the diff in a new row
if flag == 0 and diff != 0:
jd = self.append('accounts', {})
if diff>0:
jd.credit = abs(diff)
elif diff<0:
jd.debit = abs(diff)
self.validate_debit_and_credit()
def get_outstanding_invoices(self):
self.set('accounts', [])
total = 0
for d in self.get_values():
total += flt(d.outstanding_amount, self.precision("credit", "accounts"))
jd1 = self.append('accounts', {})
jd1.account = d.account
jd1.party = d.party
if self.write_off_based_on == 'Accounts Receivable':
jd1.party_type = "Customer"
jd1.credit = flt(d.outstanding_amount, self.precision("credit", "accounts"))
jd1.reference_type = "Sales Invoice"
jd1.reference_name = cstr(d.name)
elif self.write_off_based_on == 'Accounts Payable':
jd1.party_type = "Supplier"
jd1.debit = flt(d.outstanding_amount, self.precision("debit", "accounts"))
jd1.reference_type = "Purchase Invoice"
jd1.reference_name = cstr(d.name)
jd2 = self.append('accounts', {})
if self.write_off_based_on == 'Accounts Receivable':
jd2.debit = total
elif self.write_off_based_on == 'Accounts Payable':
jd2.credit = total
self.validate_debit_and_credit()
def get_values(self):
cond = " and outstanding_amount <= {0}".format(self.write_off_amount) \
if flt(self.write_off_amount) > 0 else ""
if self.write_off_based_on == 'Accounts Receivable':
return frappe.db.sql("""select name, debit_to as account, customer as party, outstanding_amount
from `tabSales Invoice` where docstatus = 1 and company = %s
and outstanding_amount > 0 %s""" % ('%s', cond), self.company, as_dict=True)
elif self.write_off_based_on == 'Accounts Payable':
return frappe.db.sql("""select name, credit_to as account, supplier as party, outstanding_amount
from `tabPurchase Invoice` where docstatus = 1 and company = %s
and outstanding_amount > 0 %s""" % ('%s', cond), self.company, as_dict=True)
def update_expense_claim(self):
for d in self.accounts:
if d.reference_type=="Expense Claim":
amt = frappe.db.sql("""select sum(debit) as amt from `tabJournal Entry Account`
where reference_type = "Expense Claim" and
reference_name = %s and docstatus = 1""", d.reference_name ,as_dict=1)[0].amt
frappe.db.set_value("Expense Claim", d.reference_name , "total_amount_reimbursed", amt)
def validate_expense_claim(self):
for d in self.accounts:
if d.reference_type=="Expense Claim":
sanctioned_amount, reimbursed_amount = frappe.db.get_value("Expense Claim",
d.reference_name, ("total_sanctioned_amount", "total_amount_reimbursed"))
pending_amount = flt(sanctioned_amount) - flt(reimbursed_amount)
if d.debit > pending_amount:
frappe.throw(_("Row No {0}: Amount cannot be greater than Pending Amount against Expense Claim {1}. Pending Amount is {2}".format(d.idx, d.reference_name, pending_amount)))
def validate_credit_debit_note(self):
if self.stock_entry:
if frappe.db.get_value("Stock Entry", self.stock_entry, "docstatus") != 1:
frappe.throw(_("Stock Entry {0} is not submitted").format(self.stock_entry))
if frappe.db.exists({"doctype": "Journal Entry", "stock_entry": self.stock_entry, "docstatus":1}):
frappe.msgprint(_("Warning: Another {0} # {1} exists against stock entry {2}".format(self.voucher_type, self.name, self.stock_entry)))
def validate_empty_accounts_table(self):
if not self.get('accounts'):
frappe.throw("Accounts table cannot be blank.")
@frappe.whitelist()
def get_default_bank_cash_account(company, voucher_type, mode_of_payment=None):
from erpnext.accounts.doctype.sales_invoice.sales_invoice import get_bank_cash_account
if mode_of_payment:
account = get_bank_cash_account(mode_of_payment, company)
if account.get("account"):
account.update({"balance": get_balance_on(account.get("account"))})
return account
if voucher_type=="Bank Entry":
account = frappe.db.get_value("Company", company, "default_bank_account")
if not account:
account = frappe.db.get_value("Account", {"company": company, "account_type": "Bank", "is_group": 0})
elif voucher_type=="Cash Entry":
account = frappe.db.get_value("Company", company, "default_cash_account")
if not account:
account = frappe.db.get_value("Account", {"company": company, "account_type": "Cash", "is_group": 0})
if account:
return {
"account": account,
"balance": get_balance_on(account)
}
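# Illustrative call sketch (hypothetical company name; requires a running
# Frappe/ERPNext site context):
#
#     get_default_bank_cash_account("Wind Power LLC", "Bank Entry")
#     # -> {"account": ..., "balance": ...}, or None if no account is found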
@frappe.whitelist()
def get_payment_entry_from_sales_invoice(sales_invoice):
"""Returns new Journal Entry document as dict for given Sales Invoice"""
from erpnext.accounts.utils import get_balance_on
si = frappe.get_doc("Sales Invoice", sales_invoice)
jv = get_payment_entry(si)
jv.remark = 'Payment received against Sales Invoice {0}. {1}'.format(si.name, si.remarks)
# credit customer
jv.get("accounts")[0].account = si.debit_to
jv.get("accounts")[0].party_type = "Customer"
jv.get("accounts")[0].party = si.customer
jv.get("accounts")[0].balance = get_balance_on(si.debit_to)
jv.get("accounts")[0].party_balance = get_balance_on(party=si.customer, party_type="Customer")
jv.get("accounts")[0].credit = si.outstanding_amount
jv.get("accounts")[0].reference_type = si.doctype
jv.get("accounts")[0].reference_name = si.name
# debit bank
jv.get("accounts")[1].debit = si.outstanding_amount
return jv.as_dict()
@frappe.whitelist()
def get_payment_entry_from_purchase_invoice(purchase_invoice):
"""Returns new Journal Entry document as dict for given Purchase Invoice"""
pi = frappe.get_doc("Purchase Invoice", purchase_invoice)
jv = get_payment_entry(pi)
jv.remark = 'Payment against Purchase Invoice {0}. {1}'.format(pi.name, pi.remarks)
# debit supplier
jv.get("accounts")[0].account = pi.credit_to
jv.get("accounts")[0].party_type = "Supplier"
jv.get("accounts")[0].party = pi.supplier
jv.get("accounts")[0].balance = get_balance_on(pi.credit_to)
jv.get("accounts")[0].party_balance = get_balance_on(party=pi.supplier, party_type="Supplier")
jv.get("accounts")[0].debit = pi.outstanding_amount
jv.get("accounts")[0].reference_type = pi.doctype
jv.get("accounts")[0].reference_name = pi.name
# credit bank
jv.get("accounts")[1].credit = pi.outstanding_amount
return jv.as_dict()
@frappe.whitelist()
def get_payment_entry_from_sales_order(sales_order):
"""Returns new Journal Entry document as dict for given Sales Order"""
from erpnext.accounts.utils import get_balance_on
from erpnext.accounts.party import get_party_account
so = frappe.get_doc("Sales Order", sales_order)
if flt(so.per_billed, 2) != 0.0:
frappe.throw(_("Can only make payment against unbilled Sales Order"))
jv = get_payment_entry(so)
jv.remark = 'Advance payment received against Sales Order {0}.'.format(so.name)
party_account = get_party_account(so.company, so.customer, "Customer")
amount = flt(so.base_grand_total) - flt(so.advance_paid)
# credit customer
jv.get("accounts")[0].account = party_account
jv.get("accounts")[0].party_type = "Customer"
jv.get("accounts")[0].party = so.customer
jv.get("accounts")[0].balance = get_balance_on(party_account)
jv.get("accounts")[0].party_balance = get_balance_on(party=so.customer, party_type="Customer")
jv.get("accounts")[0].credit = amount
jv.get("accounts")[0].reference_type = so.doctype
jv.get("accounts")[0].reference_name = so.name
jv.get("accounts")[0].is_advance = "Yes"
# debit bank
jv.get("accounts")[1].debit = amount
return jv.as_dict()
@frappe.whitelist()
def get_payment_entry_from_purchase_order(purchase_order):
"""Returns new Journal Entry document as dict for given Sales Order"""
from erpnext.accounts.utils import get_balance_on
from erpnext.accounts.party import get_party_account
po = frappe.get_doc("Purchase Order", purchase_order)
if flt(po.per_billed, 2) != 0.0:
frappe.throw(_("Can only make payment against unbilled Sales Order"))
jv = get_payment_entry(po)
jv.remark = 'Advance payment made against Purchase Order {0}.'.format(po.name)
party_account = get_party_account(po.company, po.supplier, "Supplier")
amount = flt(po.base_grand_total) - flt(po.advance_paid)
# debit supplier
jv.get("accounts")[0].account = party_account
jv.get("accounts")[0].party_type = "Supplier"
jv.get("accounts")[0].party = po.supplier
jv.get("accounts")[0].balance = get_balance_on(party_account)
jv.get("accounts")[0].party_balance = get_balance_on(party=po.supplier, party_type="Supplier")
jv.get("accounts")[0].debit = amount
jv.get("accounts")[0].reference_type = po.doctype
jv.get("accounts")[0].reference_name = po.name
jv.get("accounts")[0].is_advance = "Yes"
# credit bank
jv.get("accounts")[1].credit = amount
return jv.as_dict()
def get_payment_entry(doc):
bank_account = get_default_bank_cash_account(doc.company, "Bank Entry")
jv = frappe.new_doc('Journal Entry')
jv.voucher_type = 'Bank Entry'
jv.company = doc.company
jv.fiscal_year = doc.fiscal_year
jv.append("accounts")
d2 = jv.append("accounts")
if bank_account:
d2.account = bank_account["account"]
d2.balance = bank_account["balance"]
return jv
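# Note on the skeleton built above: get_payment_entry always appends two
# account rows -- row 0 is left empty for the party leg (filled in by the
# callers above), row 1 is pre-filled with the default bank account. A
# minimal sketch of what callers receive:
#
#     jv = get_payment_entry(doc)
#     party_row, bank_row = jv.get("accounts")
#     # party_row.account is still unset; bank_row.account is the bank account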
@frappe.whitelist()
def get_opening_accounts(company):
"""get all balance sheet accounts for opening entry"""
accounts = frappe.db.sql_list("""select name from tabAccount
where is_group=0 and report_type='Balance Sheet' and company=%s""", company)
return [{"account": a, "balance": get_balance_on(a)} for a in accounts]
def get_against_jv(doctype, txt, searchfield, start, page_len, filters):
return frappe.db.sql("""select jv.name, jv.posting_date, jv.user_remark
from `tabJournal Entry` jv, `tabJournal Entry Account` jv_detail
where jv_detail.parent = jv.name and jv_detail.account = %s and ifnull(jv_detail.party, '') = %s
and ifnull(jv_detail.reference_type, '') = ''
and jv.docstatus = 1 and jv.{0} like %s order by jv.name desc limit %s, %s""".format(searchfield),
(filters.get("account"), cstr(filters.get("party")), "%{0}%".format(txt), start, page_len))
@frappe.whitelist()
def get_outstanding(args):
if not frappe.has_permission("Account"):
frappe.msgprint(_("No Permission"), raise_exception=1)
import json
# parse the client-supplied JSON instead of eval()ing raw input
args = json.loads(args)
if args.get("doctype") == "Journal Entry":
condition = " and party=%(party)s" if args.get("party") else ""
against_jv_amount = frappe.db.sql("""
select sum(ifnull(debit, 0)) - sum(ifnull(credit, 0))
from `tabJournal Entry Account` where parent=%(docname)s and account=%(account)s {0}
and ifnull(reference_type, '')=''""".format(condition), args)
against_jv_amount = flt(against_jv_amount[0][0]) if against_jv_amount else 0
return {
("credit" if against_jv_amount > 0 else "debit"): abs(against_jv_amount)
}
elif args.get("doctype") == "Sales Invoice":
outstanding_amount = flt(frappe.db.get_value("Sales Invoice", args["docname"], "outstanding_amount"))
return {
("credit" if outstanding_amount > 0 else "debit"): abs(outstanding_amount)
}
elif args.get("doctype") == "Purchase Invoice":
outstanding_amount = flt(frappe.db.get_value("Purchase Invoice", args["docname"], "outstanding_amount"))
return {
("debit" if outstanding_amount > 0 else "credit"): abs(outstanding_amount)
}
@frappe.whitelist()
def get_party_account_and_balance(company, party_type, party):
if not frappe.has_permission("Account"):
frappe.msgprint(_("No Permission"), raise_exception=1)
from erpnext.accounts.party import get_party_account
account = get_party_account(company, party, party_type)
account_balance = get_balance_on(account=account)
party_balance = get_balance_on(party_type=party_type, party=party)
return {
"account": account,
"balance": account_balance,
"party_balance": party_balance
}
@frappe.whitelist()
def get_account_balance_and_party_type(account, date):
"""Returns dict of account balance and party type to be set in Journal Entry on selection of account."""
if not frappe.has_permission("Account"):
frappe.msgprint(_("No Permission"), raise_exception=1)
account_type = frappe.db.get_value("Account", account, "account_type")
return {
"balance": get_balance_on(account, date),
"party_type": {"Receivable":"Customer", "Payable":"Supplier"}.get(account_type, "")
}
|
huongttlan/bokeh
|
refs/heads/master
|
bokeh/charts/builder/bar_builder.py
|
43
|
"""This is the Bokeh charts interface. It gives you a high level API to build
complex plot is a simple way.
This is the Bar class which lets you build your Bar charts just passing
the arguments to the Chart class and calling the proper functions.
It also add a new chained stacked method.
"""
#-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2014, Continuum Analytics, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from __future__ import absolute_import, print_function, division
try:
import numpy as np
except ImportError:
raise RuntimeError("bokeh.charts Bar chart requires NumPy.")
from ..utils import chunk, cycle_colors
from .._builder import Builder, create_and_build
from ...models import ColumnDataSource, FactorRange, GlyphRenderer, Range1d
from ...models.glyphs import Rect
from ...properties import Any, Bool, Either, List
#-----------------------------------------------------------------------------
# Classes and functions
#-----------------------------------------------------------------------------
def Bar(values, cat=None, stacked=False, xscale="categorical", yscale="linear",
xgrid=False, ygrid=True, continuous_range=None, **kw):
""" Create a Bar chart using :class:`BarBuilder <bokeh.charts.builder.bar_builder.BarBuilder>`
render the geometry from values, cat and stacked.
Args:
values (iterable): iterable 2d representing the data series
values matrix.
cat (list or bool, optional): list of strings representing the categories.
(Defaults to None)
stacked (bool, optional): to see the bars stacked or grouped.
(Defaults to False, so grouping is assumed)
continuous_range(Range1d, optional): Custom continuous_range to be
used. (Defaults to None)
In addition to the parameters specific to this chart,
:ref:`userguide_charts_generic_arguments` are also accepted as keyword parameters.
Returns:
a new :class:`Chart <bokeh.charts.Chart>`
Examples:
.. bokeh-plot::
:source-position: above
from collections import OrderedDict
from bokeh.charts import Bar, output_file, show
# (dict, OrderedDict, lists, arrays and DataFrames are valid inputs)
xyvalues = OrderedDict()
xyvalues['python']=[-2, 5]
xyvalues['pypy']=[12, 40]
xyvalues['jython']=[22, 30]
cat = ['1st', '2nd']
bar = Bar(xyvalues, cat, title="Stacked bars",
xlabel="category", ylabel="language")
output_file("stacked_bar.html")
show(bar)
"""
if continuous_range and not isinstance(continuous_range, Range1d):
raise ValueError(
"continuous_range must be an instance of bokeh.models.ranges.Range1d"
)
# The continuous_range is the y_range (until we implement HBar charts)
y_range = continuous_range
return create_and_build(
BarBuilder, values, cat=cat, stacked=stacked,
xscale=xscale, yscale=yscale,
xgrid=xgrid, ygrid=ygrid, y_range=y_range, **kw
)
class BarBuilder(Builder):
"""This is the Bar class and it is in charge of plotting
Bar chart (grouped and stacked) in an easy and intuitive way.
Essentially, it provides a way to ingest the data, make the proper
calculations and push the references into a source object.
We additionally make calculations for the ranges.
And finally add the needed glyphs (rects) taking the references
from the source.
The x_range is categorical, and is made either from the cat argument
or from the indexes of the passed values if no cat is supplied. The
y_range can be supplied as the parameter continuous_range,
or will be calculated as a linear range (Range1d) based on the supplied
values using the following rules:
* with all positive data: start = 0, end = 1.1 * max
* with all negative data: start = 1.1 * min, end = 0
* with mixed sign data: start = 1.1 * min, end = 1.1 * max
"""
cat = Either(Bool, List(Any), help="""
List of strings representing the categories. (Defaults to None.)
""")
stacked = Bool(False, help="""
Whether to stack the bars. (Defaults to False)
If True, bars are drawn as a stack, to show the relationship of
parts to a whole. Otherwise, bars are grouped on the same chart.
""")
def _process_data(self):
"""Take the Bar data from the input **value.
It calculates the chart properties accordingly. Then build a dict
containing references to all the calculated points to be used by
the rect glyph inside the ``_yield_renderers`` method.
"""
if not self.cat:
self.cat = [str(x) for x in self._values.index]
width = [0.8] * len(self.cat)
# width should decrease proportionally to the value length.
# 1./len(value) doesn't work well as the width needs to decrease a
# little bit faster
width_cat = [min(0.2, (1. / len(self._values)) ** 1.1)] * len(self.cat)
zero = np.zeros(len(self.cat))
self._data = dict(
cat=self.cat, width=width, width_cat=width_cat, zero=zero
)
# list to save all the groups available in the incoming input
step = np.linspace(0, 1.0, len(self._values.keys()) + 1, endpoint=False)
self._groups.extend(self._values.keys())
for i, (val, values) in enumerate(self._values.items()):
self.set_and_get("", val, list(values))
mid = np.array(values) / 2
self.set_and_get("mid", val, mid)
self.set_and_get("stacked", val, zero + mid)
# Grouped
grouped = [c + ":" + str(step[i + 1]) for c in self.cat]
self.set_and_get("cat", val, grouped)
# Stacked
zero += values
def _set_sources(self):
"""Push the Bar data into the ColumnDataSource and calculate
the proper ranges.
"""
self._source = ColumnDataSource(self._data)
self.x_range = FactorRange(factors=self._source.data["cat"])
if not self.y_range:
if self.stacked:
data = np.array(self._data['zero'])
else:
cats = [i for i in self._attr if not i.startswith(("mid", "stacked", "cat"))]
data = np.array([self._data[cat] for cat in cats])
all_positive = bool(np.all(data > 0))
all_negative = bool(np.all(data < 0))
# Set the start value
if all_positive:
start = 0
else:
start = 1.1 * data.min() # Will always be negative
# Set the end value
if all_negative:
end = 0
else:
end = 1.1 * data.max()
self.y_range = Range1d(start=start, end=end)
def _yield_renderers(self):
"""Use the rect glyphs to display the bars.
Takes reference points from data loaded at the ColumnDataSource.
"""
quartets = list(chunk(self._attr, 4))
colors = cycle_colors(quartets, self.palette)
# quartet elements are: [data, mid, stacked, cat]
for i, quartet in enumerate(quartets):
if self.stacked:
glyph = Rect(
x="cat", y=quartet[2],
width="width", height=quartet[0],
fill_color=colors[i], fill_alpha=0.7,
line_color="white"
)
else: # Grouped
glyph = Rect(
x=quartet[3], y=quartet[1],
width="width_cat", height=quartet[0],
fill_color=colors[i], fill_alpha=0.7,
line_color="white"
)
renderer = GlyphRenderer(data_source=self._source, glyph=glyph)
self._legends.append((self._groups[i], [renderer]))
yield renderer
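# A minimal grouped-bar sketch (data values are assumed for illustration; it
# mirrors the stacked example in the Bar() docstring but passes stacked=False
# so the bars are drawn side by side). Guarded so that importing this module
# stays side-effect free.
if __name__ == '__main__':
    from collections import OrderedDict
    from bokeh.charts import output_file, show
    xyvalues = OrderedDict()
    xyvalues['python'] = [2, 5]
    xyvalues['pypy'] = [12, 40]
    xyvalues['jython'] = [22, 30]
    grouped = Bar(xyvalues, ['1st', '2nd'], stacked=False,
                  title="Grouped bars", xlabel="category", ylabel="language")
    output_file("grouped_bar.html")
    show(grouped)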
|
xombiemp/CouchPotatoServer
|
refs/heads/master
|
libs/pyutil/find_exe.py
|
106
|
import warnings
import os, sys
from twisted.python.procutils import which
def find_exe(exename):
"""
Look for something named exename or exename + ".py".
This is a kludge.
@return: a list containing one element which is the path to the exename
(if it is thought to be executable), or a two-element list of
sys.executable and the path to exename + ".py", or False if
neither can be found
"""
warnings.warn("deprecated", DeprecationWarning)
exes = which(exename)
exe = exes and exes[0]
if not exe:
exe = os.path.join(sys.prefix, 'scripts', exename + '.py')
if os.path.exists(exe):
path, ext = os.path.splitext(exe)
if ext.lower() in [".exe", ".bat",]:
cmd = [exe,]
else:
cmd = [sys.executable, exe,]
return cmd
else:
return False
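# Hypothetical usage sketch ("trial" is an assumed executable name): the
# returned list is ready to be prepended to an argv for subprocess, e.g.
#
#     cmd = find_exe("trial")
#     if cmd:
#         subprocess.call(cmd + ["--help"])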
|
mosaic-cloud/mosaic-distribution-dependencies
|
refs/heads/development
|
dependencies/mozilla-js/1.8.5/js/src/jit-test/progressbar.py
|
9
|
# Text progress bar library, like curl or scp.
import sys, time, datetime
class ProgressBar:
def __init__(self, label, limit, label_width=12):
self.label = label
self.limit = limit
self.label_width = label_width
self.cur = 0
self.t0 = datetime.datetime.now()
self.barlen = 64 - self.label_width
self.fmt = '\r%-' + str(label_width) + 's %3d%% %-' + str(self.barlen) + 's| %6.1fs'
def update(self, value):
self.cur = value
pct = int(100.0 * self.cur / self.limit)
barlen = int(1.0 * self.barlen * self.cur / self.limit) - 1
bar = '='*barlen + '>'
dt = datetime.datetime.now() - self.t0
dt = dt.seconds + dt.microseconds * 1e-6
sys.stdout.write(self.fmt%(self.label[:self.label_width], pct, bar, dt))
sys.stdout.flush()
def finish(self):
self.update(self.limit)
sys.stdout.write('\n')
if __name__ == '__main__':
pb = ProgressBar('test', 12)
for i in range(12):
pb.update(i)
time.sleep(0.5)
pb.finish()
|
rdbhost/Rdbhdb
|
refs/heads/master
|
unittests-py2/test_rdbhdb_compressed.py
|
1
|
#!/usr/bin/env python
''' unit test suite for compressed features of rdbhdb'''
import unittest
import time
import sys, os
import accounts
sys.path.insert(0, os.path.join('..', 'lib'))
from rdbhdb import rdbhdb
need_version = '0.11.0'
class test_Rdbhdb_compressedRequest(unittest.TestCase):
driver = rdbhdb
# get choice of server from environment
HOST = os.environ.get('RDBHOST_TEST', "dev.rdbhost.com").strip("'")
connect_args = ()
connect_kw_args = {
'role': accounts.demo['role'],
'authcode': accounts.demo['authcode'],
'host': HOST}
table_prefix = 'compressed_' # If you need to specify a prefix for tables
ddl1 = '''CREATE TABLE %sbig (descr TEXT, idx INTEGER);''' % table_prefix
xddl1 = 'DROP TABLE %sbig;' % table_prefix
# Some drivers may need to override these helpers, for example adding
# a 'commit' after the execute.
def executeDDL1(self, cursor):
cursor.execute(self.ddl1)
def setUp(self):
# Call superclass setUp in case this does something in the
# future
try:
con = self._connect()
con.close()
except Exception as e:
print 'connection not made. %s db must be created online.' % e[0]
sys.exit(2)
def tearDown(self):
''' self.drivers should override this method to perform required cleanup
if any is necessary, such as deleting the test database.
The default drops the tables that may be created.
'''
con = self._connect()
try:
cur = con.cursor()
for ddl in (self.xddl1, ):
try:
cur.execute(ddl)
con.commit()
except self.driver.Error:
# Assume table didn't exist. Other tests will check if
# execute is busted.
pass
finally:
con.close()
def _connect(self):
try:
return self.driver.connect(*self.connect_args, **self.connect_kw_args)
except AttributeError:
self.fail("No connect method found in self.driver module")
def test0_host(self):
print >> sys.stderr, 'using server', self.HOST
def test1_version(self):
"""Verify correct API module version in use."""
lVersion = rdbhdb.__version__.split('.')
nVersion = need_version.split('.')
self.assert_(lVersion >= nVersion, rdbhdb.__version__)
def test_Compressed(self):
con = self._connect()
try:
cur = con.cursor()
# cursor.fetchone should raise an Error if called after
# executing a query that cannot return rows
self.executeDDL1(cur)
cur.execute('select idx from %sbig' % self.table_prefix)
self.assertEqual(cur.fetchone(), None, 'cursor.fetchone should return None if a query retrieves no rows')
self.failUnless(cur.rowcount in (-1, 0))
stuff = "asjdfasdl;kfjasdkl;fj asdklfh asdlkfj asdklfj asdklfj asdklf adddsdklfasdfas f" + \
"asjkdf askldfjasdlkfjasdlkfj asdklfjasdlkfj asdklfjasdlkfj asdlfj asdlfjasdlkf" + \
"asldkjfasdl;kfjaskldfjasdklfjasdklfjasdklfjasdfasdfkljasdklfjasdklfjasdlkfjsdl"
qs = []
args = []
for i in range(15):
qs.append('INSERT INTO %sbig (idx, descr) VALUES(%%s, %%s);' % self.table_prefix)
assert 235 > len(stuff) > 230, len(stuff)
args.extend([i+1, stuff])
q = '\n'.join(qs)
self.assert_(len(q)>150, len(q))
self.assert_(sum([len(a) for a in args if type(a) == type('')]) > 2000)
self.assert_(len(args) == 30)
self.assert_(not any([a for a in args if type(a) == type('') and len(a)>300]))
cur.execute(q, args)
self.assertRaises(self.driver.Error, cur.fetchone)
cur.execute('select count(*) from %sbig' % self.table_prefix)
r = cur.fetchone()
self.assertEqual(len(r), 1, 'cursor.fetchone should have retrieved a single row')
self.assertEqual(r[0], 15, 'cursor.fetchone retrieved incorrect data %s' % r[0])
self.failUnless(cur.rowcount in (-1, 1))
finally:
con.close()
class test_Rdbhdb_compressedRequest_ws(test_Rdbhdb_compressedRequest):
connect_kw_args = {
'role': accounts.demo['role'],
'authcode': accounts.demo['authcode'],
'host': test_Rdbhdb_compressedRequest.HOST,
'useWebsocket': True
}
if __name__ == '__main__':
unittest.main()
|
maropu/spark
|
refs/heads/master
|
python/pyspark/pandas/tests/data_type_ops/test_datetime_ops.py
|
13
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import datetime
import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
from pyspark import pandas as ps
from pyspark.pandas.config import option_context
from pyspark.pandas.tests.data_type_ops.testing_utils import TestCasesUtils
from pyspark.testing.pandasutils import PandasOnSparkTestCase
class DatetimeOpsTest(PandasOnSparkTestCase, TestCasesUtils):
@property
def pser(self):
return pd.Series(pd.date_range("1994-1-31 10:30:15", periods=3, freq="M"))
@property
def psser(self):
return ps.from_pandas(self.pser)
@property
def some_datetime(self):
return datetime.datetime(1994, 1, 31, 10, 30, 00)
def test_add(self):
self.assertRaises(TypeError, lambda: self.psser + "x")
self.assertRaises(TypeError, lambda: self.psser + 1)
self.assertRaises(TypeError, lambda: self.psser + self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser + psser)
def test_sub(self):
self.assertRaises(TypeError, lambda: self.psser - "x")
self.assertRaises(TypeError, lambda: self.psser - 1)
self.assert_eq(
(self.pser - self.some_datetime).dt.total_seconds().astype("int"),
self.psser - self.some_datetime,
)
with option_context("compute.ops_on_diff_frames", True):
for pser, psser in self.pser_psser_pairs:
if pser.dtype == np.dtype("<M8[ns]"):
self.assert_eq(
(self.pser - pser).dt.total_seconds().astype("int"),
(self.psser - psser).sort_index(),
)
else:
self.assertRaises(TypeError, lambda: self.psser - psser)
def test_mul(self):
self.assertRaises(TypeError, lambda: self.psser * "x")
self.assertRaises(TypeError, lambda: self.psser * 1)
self.assertRaises(TypeError, lambda: self.psser * self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser * psser)
def test_truediv(self):
self.assertRaises(TypeError, lambda: self.psser / "x")
self.assertRaises(TypeError, lambda: self.psser / 1)
self.assertRaises(TypeError, lambda: self.psser / self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser / psser)
def test_floordiv(self):
self.assertRaises(TypeError, lambda: self.psser // "x")
self.assertRaises(TypeError, lambda: self.psser // 1)
self.assertRaises(TypeError, lambda: self.psser // self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser // psser)
def test_mod(self):
self.assertRaises(TypeError, lambda: self.psser % "x")
self.assertRaises(TypeError, lambda: self.psser % 1)
self.assertRaises(TypeError, lambda: self.psser % self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser % psser)
def test_pow(self):
self.assertRaises(TypeError, lambda: self.psser ** "x")
self.assertRaises(TypeError, lambda: self.psser ** 1)
self.assertRaises(TypeError, lambda: self.psser ** self.some_datetime)
with option_context("compute.ops_on_diff_frames", True):
for psser in self.pssers:
self.assertRaises(TypeError, lambda: self.psser ** psser)
def test_radd(self):
self.assertRaises(TypeError, lambda: "x" + self.psser)
self.assertRaises(TypeError, lambda: 1 + self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime + self.psser)
def test_rsub(self):
self.assertRaises(TypeError, lambda: "x" - self.psser)
self.assertRaises(TypeError, lambda: 1 - self.psser)
self.assert_eq(
(self.some_datetime - self.pser).dt.total_seconds().astype("int"),
self.some_datetime - self.psser,
)
def test_rmul(self):
self.assertRaises(TypeError, lambda: "x" * self.psser)
self.assertRaises(TypeError, lambda: 1 * self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime * self.psser)
def test_rtruediv(self):
self.assertRaises(TypeError, lambda: "x" / self.psser)
self.assertRaises(TypeError, lambda: 1 / self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime / self.psser)
def test_rfloordiv(self):
self.assertRaises(TypeError, lambda: "x" // self.psser)
self.assertRaises(TypeError, lambda: 1 // self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime // self.psser)
def test_rmod(self):
self.assertRaises(TypeError, lambda: 1 % self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime % self.psser)
def test_rpow(self):
self.assertRaises(TypeError, lambda: "x" ** self.psser)
self.assertRaises(TypeError, lambda: 1 ** self.psser)
self.assertRaises(TypeError, lambda: self.some_datetime ** self.psser)
def test_and(self):
self.assertRaises(TypeError, lambda: self.psser & True)
self.assertRaises(TypeError, lambda: self.psser & False)
self.assertRaises(TypeError, lambda: self.psser & self.psser)
def test_rand(self):
self.assertRaises(TypeError, lambda: True & self.psser)
self.assertRaises(TypeError, lambda: False & self.psser)
def test_or(self):
self.assertRaises(TypeError, lambda: self.psser | True)
self.assertRaises(TypeError, lambda: self.psser | False)
self.assertRaises(TypeError, lambda: self.psser | self.psser)
def test_ror(self):
self.assertRaises(TypeError, lambda: True | self.psser)
self.assertRaises(TypeError, lambda: False | self.psser)
def test_from_to_pandas(self):
data = pd.date_range("1994-1-31 10:30:15", periods=3, freq="M")
pser = pd.Series(data)
psser = ps.Series(data)
self.assert_eq(pser, psser.to_pandas())
self.assert_eq(ps.from_pandas(pser), psser)
def test_isnull(self):
self.assert_eq(self.pser.isnull(), self.psser.isnull())
def test_astype(self):
pser = self.pser
psser = self.psser
self.assert_eq(pser.astype(str), psser.astype(str))
self.assert_eq(pser.astype("category"), psser.astype("category"))
cat_type = CategoricalDtype(categories=["a", "b", "c"])
self.assert_eq(pser.astype(cat_type), psser.astype(cat_type))
if __name__ == "__main__":
import unittest
from pyspark.pandas.tests.data_type_ops.test_datetime_ops import * # noqa: F401
try:
import xmlrunner # type: ignore[import]
testRunner = xmlrunner.XMLTestRunner(output="target/test-reports", verbosity=2)
except ImportError:
testRunner = None
unittest.main(testRunner=testRunner, verbosity=2)
|
leoliujie/odoo
|
refs/heads/8.0
|
addons/gamification/tests/__init__.py
|
268
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Business Applications
# Copyright (c) 2013 OpenERP S.A. <http://openerp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from . import test_challenge
|
nicememory/pie
|
refs/heads/master
|
pyglet/tests/extlibs/future/py2_3/future/moves/urllib/response.py
|
70
|
from future import standard_library
from future.utils import PY3
if PY3:
from urllib.response import *
else:
__future_module__ = True
with standard_library.suspend_hooks():
from urllib import (addbase,
addclosehook,
addinfo,
addinfourl)
|
chaluemwut/fbserver
|
refs/heads/master
|
venv/lib/python2.7/site-packages/sklearn/ensemble/base.py
|
2
|
"""
Base class for ensemble-based estimators.
"""
# Authors: Gilles Louppe
# License: BSD 3 clause
import numpy as np
from ..base import clone
from ..base import BaseEstimator
from ..base import MetaEstimatorMixin
from ..externals.joblib import cpu_count
class BaseEnsemble(BaseEstimator, MetaEstimatorMixin):
"""Base class for all ensemble classes.
Warning: This class should not be used directly. Use derived classes
instead.
Parameters
----------
base_estimator : object, optional (default=None)
The base estimator from which the ensemble is built.
n_estimators : integer
The number of estimators in the ensemble.
estimator_params : list of strings
The list of attributes to use as parameters when instantiating a
new base estimator. If none are given, default parameters are used.
Attributes
----------
`base_estimator_`: estimator
The base estimator from which the ensemble is grown.
`estimators_`: list of estimators
The collection of fitted base estimators.
"""
def __init__(self, base_estimator, n_estimators=10,
estimator_params=tuple()):
# Set parameters
self.base_estimator = base_estimator
self.n_estimators = n_estimators
self.estimator_params = estimator_params
# Don't instantiate estimators now! Parameters of base_estimator might
# still change. Eg., when grid-searching with the nested object syntax.
# This needs to be filled by the derived classes.
self.estimators_ = []
def _validate_estimator(self, default=None):
"""Check the estimator and the n_estimator attribute, set the
`base_estimator_` attribute."""
if self.n_estimators <= 0:
raise ValueError("n_estimators must be greater than zero, "
"got {0}.".format(self.n_estimators))
if self.base_estimator is not None:
self.base_estimator_ = self.base_estimator
else:
self.base_estimator_ = default
if self.base_estimator_ is None:
raise ValueError("base_estimator cannot be None")
def _make_estimator(self, append=True):
"""Make and configure a copy of the `base_estimator_` attribute.
Warning: This method should be used to properly instantiate new
sub-estimators.
"""
estimator = clone(self.base_estimator_)
estimator.set_params(**dict((p, getattr(self, p))
for p in self.estimator_params))
if append:
self.estimators_.append(estimator)
return estimator
def __len__(self):
"""Returns the number of estimators in the ensemble."""
return len(self.estimators_)
def __getitem__(self, index):
"""Returns the index'th estimator in the ensemble."""
return self.estimators_[index]
def __iter__(self):
"""Returns iterator over estimators in the ensemble."""
return iter(self.estimators_)
def _partition_estimators(ensemble):
"""Private function used to partition estimators between jobs."""
# Compute the number of jobs
if ensemble.n_jobs == -1:
n_jobs = min(cpu_count(), ensemble.n_estimators)
else:
n_jobs = min(ensemble.n_jobs, ensemble.n_estimators)
# Partition estimators between jobs
n_estimators = (ensemble.n_estimators // n_jobs) * np.ones(n_jobs,
dtype=int)
n_estimators[:ensemble.n_estimators % n_jobs] += 1
starts = np.cumsum(n_estimators)
return n_jobs, n_estimators.tolist(), [0] + starts.tolist()
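# A minimal subclassing sketch (names here are assumptions, not part of
# scikit-learn): derived classes call _validate_estimator() inside fit() and
# then build clones via _make_estimator(), which copies the attributes named
# in estimator_params onto each clone.
#
#     from sklearn.tree import DecisionTreeClassifier
#
#     class ToyEnsemble(BaseEnsemble):
#         def fit(self, X, y):
#             self._validate_estimator(default=DecisionTreeClassifier())
#             for _ in range(self.n_estimators):
#                 self._make_estimator().fit(X, y)
#             return self
#
#     # usage: ToyEnsemble(base_estimator=None).fit(X, y)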
|
ChinaQuants/zipline
|
refs/heads/master
|
zipline/protocol.py
|
3
|
#
# Copyright 2013 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from copy import copy
from six import iteritems, iterkeys
import pandas as pd
import numpy as np
from . utils.protocol_utils import Enum
from . utils.math_utils import nanstd, nanmean, nansum
from zipline.utils.algo_instance import get_algo_instance
from zipline.utils.serialization_utils import (
VERSION_LABEL
)
# Datasource type should completely determine the other fields of a
# message with its type.
DATASOURCE_TYPE = Enum(
'AS_TRADED_EQUITY',
'MERGER',
'SPLIT',
'DIVIDEND',
'TRADE',
'TRANSACTION',
'ORDER',
'EMPTY',
'DONE',
'CUSTOM',
'BENCHMARK',
'COMMISSION',
'CLOSE_POSITION'
)
# Expected fields/index values for a dividend Series.
DIVIDEND_FIELDS = [
'declared_date',
'ex_date',
'gross_amount',
'net_amount',
'pay_date',
'payment_sid',
'ratio',
'sid',
]
# Expected fields/index values for a dividend payment Series.
DIVIDEND_PAYMENT_FIELDS = [
'id',
'payment_sid',
'cash_amount',
'share_count',
]
def dividend_payment(data=None):
"""
Take a dictionary whose keys are in DIVIDEND_PAYMENT_FIELDS and return a
series representing the payment of a dividend.
Ids are assigned to each historical dividend in
PerformanceTracker.update_dividends. They are guaranteed to be unique
integers within the context of a single simulation. If @data is non-empty, an
id is required to identify the historical dividend associated with this
payment.
Additionally, if @data is non-empty, either data['cash_amount'] should be
nonzero or data['payment_sid'] should be an asset identifier and
data['share_count'] should be nonzero.
The returned Series is given its id value as a name so that concatenating
payments results in a DataFrame indexed by id. (Note, however, that the
name value is not used to construct an index when this series is returned
by function passed to `DataFrame.apply`. In such a case, pandas preserves
the index of the DataFrame on which `apply` is being called.)
"""
return pd.Series(
data=data,
name=data['id'] if data is not None else None,
index=DIVIDEND_PAYMENT_FIELDS,
dtype=object,
)
class Event(object):
def __init__(self, initial_values=None):
if initial_values:
self.__dict__ = initial_values
def __getitem__(self, name):
return getattr(self, name)
def __setitem__(self, name, value):
setattr(self, name, value)
def __delitem__(self, name):
delattr(self, name)
def keys(self):
return self.__dict__.keys()
def __eq__(self, other):
return hasattr(other, '__dict__') and self.__dict__ == other.__dict__
def __contains__(self, name):
return name in self.__dict__
def __repr__(self):
return "Event({0})".format(self.__dict__)
def to_series(self, index=None):
return pd.Series(self.__dict__, index=index)
class Order(Event):
pass
class Portfolio(object):
def __init__(self):
self.capital_used = 0.0
self.starting_cash = 0.0
self.portfolio_value = 0.0
self.pnl = 0.0
self.returns = 0.0
self.cash = 0.0
self.positions = Positions()
self.start_date = None
self.positions_value = 0.0
def __getitem__(self, key):
return self.__dict__[key]
def __repr__(self):
return "Portfolio({0})".format(self.__dict__)
def __getstate__(self):
state_dict = copy(self.__dict__)
# Have to convert to primitive dict
state_dict['positions'] = dict(self.positions)
STATE_VERSION = 1
state_dict[VERSION_LABEL] = STATE_VERSION
return state_dict
def __setstate__(self, state):
OLDEST_SUPPORTED_STATE = 1
version = state.pop(VERSION_LABEL)
if version < OLDEST_SUPPORTED_STATE:
raise BaseException("Portfolio saved state is too old.")
self.positions = Positions()
self.positions.update(state.pop('positions'))
self.__dict__.update(state)
class Account(object):
'''
The account object tracks information about the trading account. The
values are updated as the algorithm runs and its keys remain unchanged.
If connected to a broker, one can update these values with the trading
account values as reported by the broker.
'''
def __init__(self):
self.settled_cash = 0.0
self.accrued_interest = 0.0
self.buying_power = float('inf')
self.equity_with_loan = 0.0
self.total_positions_value = 0.0
self.regt_equity = 0.0
self.regt_margin = float('inf')
self.initial_margin_requirement = 0.0
self.maintenance_margin_requirement = 0.0
self.available_funds = 0.0
self.excess_liquidity = 0.0
self.cushion = 0.0
self.day_trades_remaining = float('inf')
self.leverage = 0.0
self.net_leverage = 0.0
self.net_liquidation = 0.0
def __getitem__(self, key):
return self.__dict__[key]
def __repr__(self):
return "Account({0})".format(self.__dict__)
def __getstate__(self):
state_dict = copy(self.__dict__)
STATE_VERSION = 1
state_dict[VERSION_LABEL] = STATE_VERSION
return state_dict
def __setstate__(self, state):
OLDEST_SUPPORTED_STATE = 1
version = state.pop(VERSION_LABEL)
if version < OLDEST_SUPPORTED_STATE:
raise BaseException("Account saved state is too old.")
self.__dict__.update(state)
class Position(object):
def __init__(self, sid):
self.sid = sid
self.amount = 0
self.cost_basis = 0.0 # per share
self.last_sale_price = 0.0
def __getitem__(self, key):
return self.__dict__[key]
def __repr__(self):
return "Position({0})".format(self.__dict__)
def __getstate__(self):
state_dict = copy(self.__dict__)
STATE_VERSION = 1
state_dict[VERSION_LABEL] = STATE_VERSION
return state_dict
def __setstate__(self, state):
OLDEST_SUPPORTED_STATE = 1
version = state.pop(VERSION_LABEL)
if version < OLDEST_SUPPORTED_STATE:
raise BaseException("Protocol Position saved state is too old.")
self.__dict__.update(state)
class Positions(dict):
def __missing__(self, key):
pos = Position(key)
self[key] = pos
return pos
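# Positions is a dict that lazily creates an empty Position the first time a
# sid is looked up, so portfolio bookkeeping never needs an existence check.
# A small sketch (sid 24 is an arbitrary example value):
#
#     positions = Positions()
#     positions[24].amount += 100   # auto-creates Position(24) on first access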
class SIDData(object):
# Cache some data on the class so that this is shared for all instances of
# SIDData.
# The dt where we cached the history.
_history_cache_dt = None
# _history_cache is a dict mapping fields to pd.DataFrames. This is the
# most data we have for a given field for the _history_cache_dt.
_history_cache = {}
# This is the cache that is used for returns. This will have a different
# structure than the other history cache as this is always daily.
_returns_cache_dt = None
_returns_cache = None
# The last dt that we needed to cache the number of minutes.
_minute_bar_cache_dt = None
# If we are in minute mode, there is some cost associated with computing
# the number of minutes that we need to pass to the bar count of history.
# This will remain constant for a given bar and day count.
# This maps days to number of minutes.
_minute_bar_cache = {}
def __init__(self, sid, initial_values=None):
self._sid = sid
self._freqstr = None
# To check if we have data, we use the __len__ which depends on the
# __dict__. Because we are forward defining the attributes needed, we
# need to account for their entries in the __dict__.
# We will add 1 because we need to account for the _initial_len entry
# itself.
self._initial_len = len(self.__dict__) + 1
if initial_values:
self.__dict__.update(initial_values)
@property
def datetime(self):
"""
Provides an alias from data['foo'].datetime -> data['foo'].dt
`datetime` was previously provided by adding a separate `datetime`
member of the SIDData object via a generator that wrapped the incoming
data feed and added the field to each equity event.
This alias is intended to be temporary, to provide backwards
compatibility with existing algorithms, but should be considered
deprecated, and may be removed in the future.
"""
return self.dt
def get(self, name, default=None):
return self.__dict__.get(name, default)
def __getitem__(self, name):
return self.__dict__[name]
def __setitem__(self, name, value):
self.__dict__[name] = value
def __len__(self):
return len(self.__dict__) - self._initial_len
def __contains__(self, name):
return name in self.__dict__
def __repr__(self):
return "SIDData({0})".format(self.__dict__)
def _get_buffer(self, bars, field='price', raw=False):
"""
Gets the result of history for the given number of bars and field.
This will cache the results internally.
"""
cls = self.__class__
algo = get_algo_instance()
now = algo.datetime
if now != cls._history_cache_dt:
# For a given dt, the history call for this field will not change.
# We have a new dt, so we should reset the cache.
cls._history_cache_dt = now
cls._history_cache = {}
if field not in self._history_cache \
or bars > len(cls._history_cache[field][0].index):
# If we have never cached this field OR the amount of bars that we
# need for this field is greater than the amount we have cached,
# then we need to get more history.
hst = algo.history(
bars, self._freqstr, field, ffill=True,
)
# Assert that the column holds ints, not security objects.
if not isinstance(self._sid, str):
hst.columns = hst.columns.astype(int)
self._history_cache[field] = (hst, hst.values, hst.columns)
# Slice off only the bars needed. This is because we store the LARGEST
# amount of history for the field, and we might request less than the
# largest from the cache.
buffer_, values, columns = cls._history_cache[field]
if raw:
sid_index = columns.get_loc(self._sid)
return values[-bars:, sid_index]
else:
return buffer_[self._sid][-bars:]
def _get_bars(self, days):
"""
Gets the number of bars needed for the current number of days.
Figures this out based on the algo datafrequency and caches the result.
This caches the result by replacing this function on the object.
This means that after the first call to _get_bars, this method will
point to a new function object.
"""
def daily_get_max_bars(days):
return days
def minute_get_max_bars(days):
# max number of minutes, regardless of current day or short
# sessions
return days * 390
def daily_get_bars(days):
return days
def minute_get_bars(days):
cls = self.__class__
now = get_algo_instance().datetime
if now != cls._minute_bar_cache_dt:
cls._minute_bar_cache_dt = now
cls._minute_bar_cache = {}
if days not in cls._minute_bar_cache:
# Cache this calculation to happen once per bar, even if we
# use another transform with the same number of days.
env = get_algo_instance().trading_environment
prev = env.previous_trading_day(now)
ds = env.days_in_range(
env.add_trading_days(-days + 2, prev),
prev,
)
# compute the number of minutes in the (days - 1) days before
# today.
# 210 minutes in an early close and 390 in a full day.
ms = sum(210 if d in env.early_closes else 390 for d in ds)
# Add the number of minutes for today.
ms += int(
(now - env.get_open_and_close(now)[0]).total_seconds() / 60
)
cls._minute_bar_cache[days] = ms + 1 # Account for this minute
return cls._minute_bar_cache[days]
if get_algo_instance().sim_params.data_frequency == 'daily':
self._freqstr = '1d'
# update this method to point to the daily variant.
self._get_bars = daily_get_bars
self._get_max_bars = daily_get_max_bars
else:
self._freqstr = '1m'
# update this method to point to the minute variant.
self._get_bars = minute_get_bars
self._get_max_bars = minute_get_max_bars
# Not actually recursive because we have already cached the new method.
return self._get_bars(days)
def mavg(self, days):
bars = self._get_bars(days)
max_bars = self._get_max_bars(days)
prices = self._get_buffer(max_bars, raw=True)[-bars:]
return nanmean(prices)
def stddev(self, days):
bars = self._get_bars(days)
max_bars = self._get_max_bars(days)
prices = self._get_buffer(max_bars, raw=True)[-bars:]
return nanstd(prices, ddof=1)
def vwap(self, days):
bars = self._get_bars(days)
max_bars = self._get_max_bars(days)
prices = self._get_buffer(max_bars, raw=True)[-bars:]
vols = self._get_buffer(max_bars, field='volume', raw=True)[-bars:]
vol_sum = nansum(vols)
try:
ret = nansum(prices * vols) / vol_sum
except ZeroDivisionError:
ret = np.nan
return ret
def returns(self):
algo = get_algo_instance()
now = algo.datetime
if now != self._returns_cache_dt:
self._returns_cache_dt = now
self._returns_cache = algo.history(2, '1d', 'price', ffill=True)
hst = self._returns_cache[self._sid]
return (hst.iloc[-1] - hst.iloc[0]) / hst.iloc[0]
class BarData(object):
"""
Holds the event data for all sids for a given dt.
This is what is passed as `data` to the `handle_data` function.
Note: Many methods are dictionary analogues because the object this
class replaced was historically a dictionary subclass.
"""
def __init__(self, data=None):
self._data = data or {}
self._contains_override = None
def __contains__(self, name):
if self._contains_override:
if self._contains_override(name):
return name in self._data
else:
return False
else:
return name in self._data
def has_key(self, name):
"""
DEPRECATED: __contains__ is preferred, but this method is for
compatibility with existing algorithms.
"""
return name in self
def __setitem__(self, name, value):
self._data[name] = value
def __getitem__(self, name):
return self._data[name]
def __delitem__(self, name):
del self._data[name]
def __iter__(self):
for sid, data in iteritems(self._data):
# Allow contains override to filter out sids.
if sid in self:
if len(data):
yield sid
def iterkeys(self):
# Allow contains override to filter out sids.
return (sid for sid in iterkeys(self._data) if sid in self)
def keys(self):
# Allow contains override to filter out sids.
return list(self.iterkeys())
def itervalues(self):
return (value for _sid, value in self.iteritems())
def values(self):
return list(self.itervalues())
def iteritems(self):
return ((sid, value) for sid, value
in iteritems(self._data)
if sid in self)
def items(self):
return list(self.iteritems())
def __len__(self):
return len(self.keys())
def __repr__(self):
return '{0}({1})'.format(self.__class__.__name__, self._data)
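# BarData usage sketch (sid 24 and the price value are assumed for
# illustration). BarData is what handle_data receives as `data`, so
# algorithms index it like a dict of per-sid SIDData objects:
#
#     bar_data = BarData({24: SIDData(24, {'price': 10.0})})
#     for sid in bar_data:
#         px = bar_data[sid]['price']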
|
vanyh/handkeinzungen-app
|
refs/heads/master
|
stats/apps.py
|
14
|
from django.apps import AppConfig
class StatsConfig(AppConfig):
name = 'stats'
|
HyunsukBaek/etherpad-lite
|
refs/heads/master
|
src/node_modules/npm/node_modules/node-gyp/gyp/test/mac/gyptest-rebuild.py
|
299
|
#!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Verifies that app bundles are rebuilt correctly.
"""
import TestGyp
import sys
if sys.platform == 'darwin':
test = TestGyp.TestGyp(formats=['ninja', 'make', 'xcode'])
CHDIR = 'rebuild'
test.run_gyp('test.gyp', chdir=CHDIR)
test.build('test.gyp', 'test_app', chdir=CHDIR)
# Touch a source file, rebuild, and check that the app target is up-to-date.
test.touch('rebuild/main.c')
test.build('test.gyp', 'test_app', chdir=CHDIR)
test.up_to_date('test.gyp', 'test_app', chdir=CHDIR)
# Xcode runs postbuilds on every build, so targets with postbuilds are
# never marked as up_to_date.
if test.format != 'xcode':
# Same for a framework bundle.
test.build('test.gyp', 'test_framework_postbuilds', chdir=CHDIR)
test.up_to_date('test.gyp', 'test_framework_postbuilds', chdir=CHDIR)
# Test that an app bundle with a postbuild that touches the app binary needs
# to be built only once.
test.build('test.gyp', 'test_app_postbuilds', chdir=CHDIR)
test.up_to_date('test.gyp', 'test_app_postbuilds', chdir=CHDIR)
test.pass_test()
|
foo123/sikuli-framework
|
refs/heads/master
|
src/log/logger.py
|
2
|
"""
Copyright (c) 2013, SMART Technologies ULC
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the Copyright holder (SMART Technologies ULC) nor
the names of its contributors (Joshua Henn) may be used to endorse or
promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER (SMART Technologies
ULC) "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import logging
import string
from log.robotframework import Handler, Formatter
class Logger(logging.Logger):
"""
Main logger class
"""
formatter = None
def __init__(self):
logging.Logger.__init__(self, "Logger")
# RobotFramework log handler
fh = Handler()
fh.setFormatter(Formatter())
self.addHandler(fh)
# Textfile log handler
fh = logging.FileHandler('results/log.txt', mode='w')
fh.setFormatter(Formatter())
self.addHandler(fh)
def log(self, level, msg, *args, **kwargs):
logging.Logger.log(self, level, msg)
|
coinkeeper/2015-06-22_19-07_digitalcoin
|
refs/heads/master
|
contrib/testgen/gen_base58_test_vectors.py
|
1000
|
#!/usr/bin/env python
'''
Generate valid and invalid base58 address and private key test vectors.
Usage:
gen_base58_test_vectors.py valid 50 > ../../src/test/data/base58_keys_valid.json
gen_base58_test_vectors.py invalid 50 > ../../src/test/data/base58_keys_invalid.json
'''
# 2012 Wladimir J. van der Laan
# Released under MIT License
import os
from itertools import islice
from base58 import b58encode, b58decode, b58encode_chk, b58decode_chk, b58chars
import random
from binascii import b2a_hex
# key types
PUBKEY_ADDRESS = 0
SCRIPT_ADDRESS = 5
PUBKEY_ADDRESS_TEST = 111
SCRIPT_ADDRESS_TEST = 196
PRIVKEY = 128
PRIVKEY_TEST = 239
metadata_keys = ['isPrivkey', 'isTestnet', 'addrType', 'isCompressed']
# templates for valid sequences
templates = [
# prefix, payload_size, suffix, metadata
# None = N/A
((PUBKEY_ADDRESS,), 20, (), (False, False, 'pubkey', None)),
((SCRIPT_ADDRESS,), 20, (), (False, False, 'script', None)),
((PUBKEY_ADDRESS_TEST,), 20, (), (False, True, 'pubkey', None)),
((SCRIPT_ADDRESS_TEST,), 20, (), (False, True, 'script', None)),
((PRIVKEY,), 32, (), (True, False, None, False)),
((PRIVKEY,), 32, (1,), (True, False, None, True)),
((PRIVKEY_TEST,), 32, (), (True, True, None, False)),
((PRIVKEY_TEST,), 32, (1,), (True, True, None, True))
]
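# Worked example of how a template maps to a vector (payload bytes assumed):
# the first template ((PUBKEY_ADDRESS,), 20, (), ...) means one 0x00 version
# byte, a random 20-byte payload and no suffix, so b58encode_chk() yields a
# classic '1'-prefixed pubkey address as the valid vector.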
def is_valid(v):
'''Check vector v for validity'''
result = b58decode_chk(v)
if result is None:
return False
for template in templates:
prefix = str(bytearray(template[0]))
suffix = str(bytearray(template[2]))
if result.startswith(prefix) and result.endswith(suffix):
if (len(result) - len(prefix) - len(suffix)) == template[1]:
return True
return False
def gen_valid_vectors():
'''Generate valid test vectors'''
while True:
for template in templates:
prefix = str(bytearray(template[0]))
payload = os.urandom(template[1])
suffix = str(bytearray(template[2]))
rv = b58encode_chk(prefix + payload + suffix)
assert is_valid(rv)
metadata = dict([(x,y) for (x,y) in zip(metadata_keys,template[3]) if y is not None])
yield (rv, b2a_hex(payload), metadata)
def gen_invalid_vector(template, corrupt_prefix, randomize_payload_size, corrupt_suffix):
'''Generate possibly invalid vector'''
if corrupt_prefix:
prefix = os.urandom(1)
else:
prefix = str(bytearray(template[0]))
if randomize_payload_size:
payload = os.urandom(max(int(random.expovariate(0.5)), 50))
else:
payload = os.urandom(template[1])
if corrupt_suffix:
suffix = os.urandom(len(template[2]))
else:
suffix = str(bytearray(template[2]))
return b58encode_chk(prefix + payload + suffix)
def randbool(p = 0.5):
'''Return True with P(p)'''
return random.random() < p
def gen_invalid_vectors():
'''Generate invalid test vectors'''
# start with some manual edge-cases
yield "",
yield "x",
while True:
# kinds of invalid vectors:
# invalid prefix
# invalid payload length
# invalid (randomized) suffix (add random data)
# corrupt checksum
for template in templates:
val = gen_invalid_vector(template, randbool(0.2), randbool(0.2), randbool(0.2))
if random.randint(0,10)<1: # line corruption
if randbool(): # add random character to end
val += random.choice(b58chars)
else: # replace random character in the middle
n = random.randint(0, len(val))
val = val[0:n] + random.choice(b58chars) + val[n+1:]
if not is_valid(val):
yield val,
if __name__ == '__main__':
import sys, json
iters = {'valid':gen_valid_vectors, 'invalid':gen_invalid_vectors}
try:
uiter = iters[sys.argv[1]]
except (IndexError, KeyError):
uiter = gen_valid_vectors
try:
count = int(sys.argv[2])
except IndexError:
count = 0
data = list(islice(uiter(), count))
json.dump(data, sys.stdout, sort_keys=True, indent=4)
sys.stdout.write('\n')
|
neilhan/tensorflow
|
refs/heads/master
|
tensorflow/python/lib/io/python_io.py
|
18
|
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""## Data IO (Python Functions)
A TFRecords file represents a sequence of (binary) strings. The format is not
random access, so it is suitable for streaming large amounts of data but not
suitable if fast sharding or other non-sequential access is desired.
@@TFRecordWriter
@@tf_record_iterator
- - -
### TFRecords Format Details
A TFRecords file contains a sequence of strings with CRC hashes. Each record
has the format
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
and the records are concatenated together to produce the file. The CRC32s
are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check),
and the mask of a CRC is
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# go/tf-wildcard-import
# pylint: disable=wildcard-import
from tensorflow.python.lib.io.tf_record import *
# pylint: enable=wildcard-import
from tensorflow.python.util.all_util import make_all
__all__ = make_all(__name__)
|
wearpants/osf.io
|
refs/heads/develop
|
website/prereg/views.py
|
11
|
"""Back-end code to support the Prereg Challenge initiative
Keeping the code together in this file should make it easier to remove the
features added to the OSF specifically to support this initiative in the future.
Other resources that are a part of the Prereg Challenge:
* website/static/js/pages/prereg-landing-page.js
* website/static/css/prereg.css
"""
from flask import request
from framework.auth import decorators
from framework.utils import iso8601format
from website.prereg import utils
@decorators.must_be_logged_in
def prereg_landing_page(auth, **kwargs):
"""Landing page for the prereg challenge"""
campaign = request.path.strip('/') or 'prereg'
registerable_nodes = [
node for node
in auth.user.contributor_to
if node.has_permission(user=auth.user, permission='admin')
]
has_projects = bool(registerable_nodes)
has_draft_registrations = bool(utils.drafts_for_user(auth.user, campaign).count())
return {
'has_draft_registrations': has_draft_registrations,
'has_projects': has_projects,
'campaign_long': utils.PREREG_CAMPAIGNS[campaign],
'campaign_short': campaign
}
@decorators.must_be_logged_in
def prereg_draft_registrations(auth, **kwargs):
"""API endpoint; returns prereg draft registrations the user can resume"""
campaign = kwargs.get('campaign', 'prereg')
drafts = utils.drafts_for_user(auth.user, campaign)
return {
'draftRegistrations': [
{
'dateUpdated': iso8601format(draft.datetime_updated),
'dateInitiated': iso8601format(draft.datetime_initiated),
'node': {
'title': draft.branched_from.title,
},
'initiator': {
'name': draft.initiator.fullname,
},
'url': draft.branched_from.web_url_for(
'edit_draft_registration_page',
draft_id=draft._id,
),
}
for draft in drafts
],
}
|
linea-it/qlf
|
refs/heads/master
|
backend/framework/qlf/dashboard/bokeh/globalfiber/main.py
|
2
|
from bokeh.plotting import figure
from bokeh.layouts import row, column
from bokeh.models import HoverTool, ColumnDataSource, Range1d
from qlf_models import QLFModels
from dashboard.bokeh.plots.descriptors.title import Title
import numpy as np
import logging
from bokeh.resources import CDN
from bokeh.embed import file_html
import os
from dashboard.models import Job, Process, Fibermap
spectro_data = os.environ.get('DESI_SPECTRO_DATA')
logger = logging.getLogger(__name__)
class GlobalFiber:
def __init__(self, process_id, arm):
self.selected_process_id = process_id
self.selected_arm = arm
def data_source(self, fmap):
""" Creating data source for plots
"""
data_model = {
'goodfiber': [],
'status': [],
'color': [],
'cam': [],
'OBJ_TYPE': [],
'ra': [],
'dec': [],
}
process_id = self.selected_process_id
joblist = [entry.camera.camera for entry in Job.objects.filter(
process_id=process_id)]
ra_tile = fmap.fiber_ra
dec_tile = fmap.fiber_dec
otype_tile = fmap.objtype
y = []
color = []
status = []
cam_inst = []
for spec in list(range(10)):
cam = self.selected_arm+str(spec)
if cam in joblist:
mergedqa = QLFModels().get_output(
self.selected_process_id, cam)
countbins = mergedqa['TASKS']['CHECK_FIBERS']['METRICS']['GOOD_FIBERS']
y = y + countbins
color = color + ['green' if idx ==
1 else 'red' for idx in countbins]
status = status + ['GOOD' if idx ==
1 else 'BAD' for idx in countbins]
else:
y = y + 500*['']
color = color + ['lightgray']*500
status = status + ['']*500
cam_inst = cam_inst + [cam]*500
data_model['goodfiber'] = y
data_model['color'] = color
data_model['status'] = status
data_model['cam'] = cam_inst
data_model['OBJ_TYPE'] = otype_tile
data_model['ra'] = ra_tile
data_model['dec'] = dec_tile
source = ColumnDataSource(data=data_model)
return source
def wedge_plot(self, wedge_arm, fmap, common_source=None):
ra_center = fmap.exposure.telra
dec_center = fmap.exposure.teldec
fiber_tooltip = """
<div>
<div>
<span style="font-size: 12px; font-weight: bold; color: #303030;">FIBER STATUS: </span>
<span style="font-size: 13px; color: #515151">@status</span>
</div>
<div>
<span style="font-size: 12px; font-weight: bold; color: #303030;">RA: </span>
<span style="font-size: 13px; color: #515151;">@ra</span>
</div>
<div>
<span style="font-size: 12px; font-weight: bold; color: #303030;">DEC: </span>
<span style="font-size: 13px; color: #515151;">@dec</span>
</div>
<div>
<span style="font-size: 12px; font-weight: bold; color: #303030;">Obj Type: </span>
<span style="font-size: 13px; color: #515151;">@OBJ_TYPE</span>
</div>
<div>
<span style="font-size: 12px; font-weight: bold; color: #303030;">CAM: </span>
<span style="font-size: 13px; color: #515151;">@cam_</span>
</div>
""".replace('@status', '@status').replace('@cam_', '@cam')
hover = HoverTool(tooltips=fiber_tooltip)
source = common_source
radius = 0.017
radius_hover = 0.018
plot_space = -0.1
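# Ranges run from max to min, so both axes are displayed inverted with ~0.1 deg padding; presumably intentional, to mimic sky-chart orientation.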
xrange = Range1d(start=max(source.data['ra'])-plot_space, end=min(source.data['ra'])+plot_space)
yrange = Range1d(start=max(source.data['dec'])-plot_space, end=min(source.data['dec'])+plot_space)
p = figure(title='FIBERS - ARM %s' % (wedge_arm),
x_axis_label='RA',
y_axis_label='DEC',
tools=[hover,"box_zoom,pan,wheel_zoom,reset,lasso_select,crosshair"],
active_drag="box_zoom",
x_range=xrange,
y_range=yrange,
sizing_mode='scale_width')
p.title.align = 'center'
p.circle('ra', 'dec', source=source, name="data", radius=radius,
fill_color={'field': 'color'},
line_color='black', line_width=0.4,
hover_line_color='red')
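# A second, slightly larger circle layer stays invisible until hovered, providing the red highlight around the fiber under the cursor.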
p.circle('ra', 'dec', source=source, name="data", radius=radius_hover,
hover_fill_color={'field': 'color'}, fill_color=None,
line_color=None, line_width=3, hover_line_color='red')
return p
def load_qa(self):
process_id = self.selected_process_id
process = Process.objects.get(pk=process_id)
exposure = process.exposure
fmap = Fibermap.objects.filter(exposure=exposure)[0]
src = self.data_source(fmap)
p = self.wedge_plot(self.selected_arm, fmap, common_source=src)
info_col = Title().write_description('globalfiber')
layout = column([info_col, p], sizing_mode='scale_width')
return file_html(layout, CDN, "Global Fiber")
if __name__ == '__main__':
print('debugging instance')
|
xwolf12/django
|
refs/heads/master
|
tests/utils_tests/test_os_utils.py
|
482
|
import os
import unittest
from django.core.exceptions import SuspiciousFileOperation
from django.utils._os import safe_join
class SafeJoinTests(unittest.TestCase):
def test_base_path_ends_with_sep(self):
drive, path = os.path.splitdrive(safe_join("/abc/", "abc"))
self.assertEqual(
path,
"{0}abc{0}abc".format(os.path.sep)
)
def test_root_path(self):
drive, path = os.path.splitdrive(safe_join("/", "path"))
self.assertEqual(
path,
"{}path".format(os.path.sep),
)
drive, path = os.path.splitdrive(safe_join("/", ""))
self.assertEqual(
path,
os.path.sep,
)
def test_parent_path(self):
with self.assertRaises(SuspiciousFileOperation):
safe_join("/abc/", "../def")
|
jswanljung/iris
|
refs/heads/master
|
lib/iris/tests/unit/quickplot/test_pcolor.py
|
11
|
# (C) British Crown Copyright 2014 - 2016, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
"""Unit tests for the `iris.quickplot.pcolor` function."""
from __future__ import (absolute_import, division, print_function)
from six.moves import (filter, input, map, range, zip) # noqa
# Import iris.tests first so that some things can be initialised before
# importing anything else.
import iris.tests as tests
import numpy as np
from iris.tests.stock import simple_2d
from iris.tests.unit.plot import TestGraphicStringCoord, MixinCoords
if tests.MPL_AVAILABLE:
import iris.quickplot as qplt
@tests.skip_plot
class TestStringCoordPlot(TestGraphicStringCoord):
def test_yaxis_labels(self):
qplt.pcolor(self.cube, coords=('bar', 'str_coord'))
self.assertBoundsTickLabels('yaxis')
def test_xaxis_labels(self):
qplt.pcolor(self.cube, coords=('str_coord', 'bar'))
self.assertBoundsTickLabels('xaxis')
@tests.skip_plot
class TestCoords(tests.IrisTest, MixinCoords):
def setUp(self):
# We have a 2d cube with dimensionality (bar: 3; foo: 4)
self.cube = simple_2d(with_bounds=True)
coord = self.cube.coord('foo')
self.foo = coord.contiguous_bounds()
self.foo_index = np.arange(coord.points.size + 1)
coord = self.cube.coord('bar')
self.bar = coord.contiguous_bounds()
self.bar_index = np.arange(coord.points.size + 1)
self.data = self.cube.data
self.dataT = self.data.T
self.mpl_patch = self.patch('matplotlib.pyplot.pcolor',
return_value=None)
self.draw_func = qplt.pcolor
if __name__ == "__main__":
tests.main()
|
chriskmanx/qmole
|
refs/heads/master
|
QMOLEDEV/vte-0.24.1/python/vte-demo.py
|
7
|
#!/usr/bin/python
import sys
import string
import getopt
import gtk
import vte
def selected_cb(terminal, column, row, cb_data):
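# Selection predicate for get_text(): include only the first 40 columns of row 15.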
if (row == 15):
if (column < 40):
return 1
return 0
def restore_cb(terminal):
(text, attrs) = terminal.get_text(selected_cb, 1)
print "A portion of the text at restore-time is:"
print text
def child_exited_cb(terminal):
gtk.main_quit()
if __name__ == '__main__':
child_pid = -1
# Defaults.
audible = 0
background = None
blink = 0
command = None
emulation = "xterm"
font = "fixed 12"
scrollback = 100
transparent = 0
visible = 0
# Let the user override them.
(shorts, longs) = getopt.getopt(sys.argv[1:], "B:Tabc:f:n:t:v", ["background=", "transparent", "audible", "blink", "command=", "font=", "scrollback=", "terminal=", "visible"])
for argpair in (shorts + longs):
if ((argpair[0] == '-B') or (argpair[0] == '--background')):
print "Setting background image to `" + argpair[1] + "'."
background = argpair[1]
if ((argpair[0] == '-T') or (argpair[0] == '--transparent')):
print "Setting transparency."
transparent = not transparent
if ((argpair[0] == '-a') or (argpair[0] == '--audible')):
print "Setting audible bell."
audible = not audible
if ((argpair[0] == '-b') or (argpair[0] == '--blink')):
print "Setting blinking cursor."
blink = not blink
if ((argpair[0] == '-c') or (argpair[0] == '--command')):
print "Running command `" + argpair[1] + "'."
command = argpair[1]
if ((argpair[0] == '-f') or (argpair[0] == '--font')):
print "Setting font to `" + argpair[1] + "'."
font = argpair[1]
if ((argpair[0] == '-n') or (argpair[0] == '--scrollback')):
scrollback = string.atoi(argpair[1])
if (scrollback == 0):
scrollback = 100
else:
print "Setting scrollback size to `" + str(scrollback) + "'."
if ((argpair[0] == '-t') or (argpair[0] == '--terminal')):
print "Setting terminal type to `" + argpair[1] + "'."
emulation = argpair[1]
if ((argpair[0] == '-v') or (argpair[0] == '--visible')):
print "Setting visible bell."
visible = not visible
window = gtk.Window()
terminal = vte.Terminal()
if (background):
terminal.set_background_image(background)
if (transparent):
terminal.set_background_transparent(gtk.TRUE)
terminal.set_cursor_blinks(blink)
terminal.set_emulation(emulation)
terminal.set_font_from_string(font)
terminal.set_scrollback_lines(scrollback)
terminal.set_audible_bell(audible)
terminal.set_visible_bell(visible)
terminal.connect("child-exited", child_exited_cb)
terminal.connect("restore-window", restore_cb)
if (command):
# Start up the specified command.
child_pid = terminal.fork_command(command)
else:
# Start up the default command, the user's shell.
child_pid = terminal.fork_command()
terminal.show()
scrollbar = gtk.VScrollbar()
scrollbar.set_adjustment(terminal.get_adjustment())
box = gtk.HBox()
box.pack_start(terminal)
box.pack_start(scrollbar)
window.add(box)
window.show_all()
gtk.main()
|
renatopp/liac
|
refs/heads/master
|
samples/models/samples_kernelpca.py
|
1
|
# =============================================================================
# Federal University of Rio Grande do Sul (UFRGS)
# Connectionist Artificial Intelligence Laboratory (LIAC)
# Renato de Pontes Pereira - rppereira@inf.ufrgs.br
# =============================================================================
# Copyright (c) 2011 Renato de Pontes Pereira, renato.ppontes at gmail dot com
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# =============================================================================
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..'))
import liac
from liac import plot
data = liac.dataset.load('iris')
targets = data.iloc[:, -1]
classes = targets.unique()
data = data.iloc[:, 0:-1]
kpca = liac.models.KernelPCA(2, 'linear')
kpca.fit(data)
transformed_data = kpca.transform(data)
for i, label in enumerate(classes):
idx = targets == label
d = transformed_data[idx]
plot.scatter(d[:, 0], d[:, 1], color=liac.random.make_color(i))
plot.show()
|
jlegendary/nupic
|
refs/heads/master
|
tests/swarming/nupic/swarming/experiments/dummyV2/description.py
|
8
|
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""
Template file used by the OPF Experiment Generator to generate the actual
description.py file by replacing $XXXXXXXX tokens with desired values.
This description.py file was generated by:
'/Users/ronmarianetti/nupic/eng/lib/python2.6/site-packages/nupic/frameworks/opf/expGenerator/ExpGenerator.py'
"""
from nupic.frameworks.opf.expdescriptionapi import ExperimentDescriptionAPI
from nupic.frameworks.opf.expdescriptionhelpers import (
updateConfigFromSubConfig,
applyValueGettersToContainer,
DeferredDictLookup)
from nupic.frameworks.opf.clamodelcallbacks import *
from nupic.frameworks.opf.metrics import MetricSpec
from nupic.frameworks.opf.opfutils import (InferenceType,
InferenceElement)
from nupic.support import aggregationDivide
from nupic.frameworks.opf.opftaskdriver import (
IterationPhaseSpecLearnOnly,
IterationPhaseSpecInferOnly,
IterationPhaseSpecLearnAndInfer)
# Model Configuration Dictionary:
#
# Define the model parameters and adjust for any modifications if imported
# from a sub-experiment.
#
# These fields might be modified by a sub-experiment; this dict is passed
# between the sub-experiment and base experiment
#
#
# NOTE: Use of DEFERRED VALUE-GETTERs: dictionary fields and list elements
# within the config dictionary may be assigned futures derived from the
# ValueGetterBase class, such as DeferredDictLookup.
# This facility is particularly handy for enabling substitution of values in
# the config dictionary from other values in the config dictionary, which is
# needed by permutation.py-based experiments. These values will be resolved
# during the call to applyValueGettersToContainer(),
# which we call after the base experiment's config dictionary is updated from
# the sub-experiment. See ValueGetterBase and
# DeferredDictLookup for more details about value-getters.
#
# For each custom encoder parameter to be exposed to the sub-experiment/
# permutation overrides, define a variable in this section, using key names
# beginning with a single underscore character to avoid collisions with
# pre-defined keys (e.g., _dsEncoderFieldName2_N).
#
# Example:
# config = dict(
# _dsEncoderFieldName2_N = 70,
# _dsEncoderFieldName2_W = 5,
# dsEncoderSchema = [
# base=dict(
# fieldname='Name2', type='ScalarEncoder',
# name='Name2', minval=0, maxval=270, clipInput=True,
# n=DeferredDictLookup('_dsEncoderFieldName2_N'),
# w=DeferredDictLookup('_dsEncoderFieldName2_W')),
# ],
# )
# updateConfigFromSubConfig(config)
# applyValueGettersToContainer(config)
config = {
# Type of model that the rest of these parameters apply to.
'model': "CLA",
# Version that specifies the format of the config.
'version': 1,
# Intermediate variables used to compute fields in modelParams and also
# referenced from the control section.
'aggregationInfo': { 'days': 0,
'fields': [ (u'timestamp', 'first'),
(u'gym', 'first'),
(u'consumption', 'mean'),
(u'address', 'first')],
'hours': 0,
'microseconds': 0,
'milliseconds': 0,
'minutes': 0,
'months': 0,
'seconds': 0,
'weeks': 0,
'years': 0},
'predictAheadTime': None,
# Model parameter dictionary.
'modelParams': {
# The type of inference that this model will perform
'inferenceType': 'TemporalNextStep',
'sensorParams': {
# Sensor diagnostic output verbosity control;
# if > 0: sensor region will print out on screen what it's sensing
# at each step 0: silent; >=1: some info; >=2: more info;
# >=3: even more info (see compute() in py/regions/RecordSensor.py)
'verbosity' : 0,
# Example:
# dsEncoderSchema = [
# DeferredDictLookup('__field_name_encoder'),
# ],
#
# (value generated from DS_ENCODER_SCHEMA)
'encoders': {
'address': { 'fieldname': u'address',
'n': 300,
'name': u'address',
'type': 'SDRCategoryEncoder',
'w': 21},
'consumption': { 'clipInput': True,
'fieldname': u'consumption',
'maxval': 200,
'minval': 0,
'n': 1500,
'name': u'consumption',
'type': 'ScalarEncoder',
'w': 21},
'gym': { 'fieldname': u'gym',
'n': 600,
'name': u'gym',
'type': 'SDRCategoryEncoder',
'w': 21},
'timestamp_dayOfWeek': { 'dayOfWeek': (7, 3),
'fieldname': u'timestamp',
'name': u'timestamp_dayOfWeek',
'type': 'DateEncoder'},
'timestamp_timeOfDay': { 'fieldname': u'timestamp',
'name': u'timestamp_timeOfDay',
'timeOfDay': (7, 8),
'type': 'DateEncoder'}},
# A dictionary specifying the period for automatically-generated
# resets from a RecordSensor;
#
# None = disable automatically-generated resets (also disabled if
# all of the specified values evaluate to 0).
# Valid keys are any combination of the following:
# days, hours, minutes, seconds, milliseconds, microseconds, weeks
#
# Example for 1.5 days: sensorAutoReset = dict(days=1,hours=12),
#
# (value generated from SENSOR_AUTO_RESET)
'sensorAutoReset' : None,
},
'spEnable': True,
'spParams': {
# SP diagnostic output verbosity control;
# 0: silent; >=1: some info; >=2: more info;
'spVerbosity' : 0,
'globalInhibition': 1,
# Number of cell columns in the cortical region (same number for
# SP and TP)
# (see also tpNCellsPerCol)
'columnCount': 2048,
'inputWidth': 0,
# SP inhibition control (absolute value);
# Maximum number of active columns in the SP region's output (when
# there are more, the weaker ones are suppressed)
'numActiveColumnsPerInhArea': 40,
'seed': 1956,
# potentialPct
# What percent of the column's receptive field is available
# for potential synapses. At initialization time, we will
# choose potentialPct * (2*potentialRadius+1)^2
'potentialPct': 0.5,
# The default connected threshold. Any synapse whose
# permanence value is above the connected threshold is
# a "connected synapse", meaning it can contribute to the
# cell's firing. Typical value is 0.10. Cells whose activity
# level before inhibition falls below minDutyCycleBeforeInh
# will have their own internal synPermConnectedCell
# threshold set below this default value.
# (This concept applies to both SP and TP and so 'cells'
# is correct here as opposed to 'columns')
'synPermConnected': 0.1,
'synPermActiveInc': 0.1,
'synPermInactiveDec': 0.01,
},
# Controls whether TP is enabled or disabled;
# TP is necessary for making temporal predictions, such as predicting
# the next inputs. Without TP, the model is only capable of
# reconstructing missing sensor inputs (via SP).
'tpEnable' : True,
'tpParams': {
# TP diagnostic output verbosity control;
# 0: silent; [1..6]: increasing levels of verbosity
# (see verbosity in nupic/trunk/py/nupic/research/TP.py and TP10X*.py)
'verbosity': 0,
# Number of cell columns in the cortical region (same number for
# SP and TP)
# (see also tpNCellsPerCol)
'columnCount': 2048,
# The number of cells (i.e., states), allocated per column.
'cellsPerColumn': 32,
'inputWidth': 2048,
'seed': 1960,
# Temporal Pooler implementation selector (see _getTPClass in
# CLARegion.py).
'temporalImp': 'cpp',
# New Synapse formation count
# NOTE: If None, use spNumActivePerInhArea
#
# TODO: need better explanation
'newSynapseCount': 15,
# Maximum number of synapses per segment
# > 0 for fixed-size CLA
# -1 for non-fixed-size CLA
#
# TODO: for Ron: once the appropriate value is placed in TP
# constructor, see if we should eliminate this parameter from
# description.py.
'maxSynapsesPerSegment': 32,
# Maximum number of segments per cell
# > 0 for fixed-size CLA
# -1 for non-fixed-size CLA
#
# TODO: for Ron: once the appropriate value is placed in TP
# constructor, see if we should eliminate this parameter from
# description.py.
'maxSegmentsPerCell': 128,
# Initial Permanence
# TODO: need better explanation
'initialPerm': 0.21,
# Permanence Increment
'permanenceInc': 0.1,
# Permanence Decrement
# If set to None, will automatically default to tpPermanenceInc
# value.
'permanenceDec' : 0.1,
'globalDecay': 0.0,
'maxAge': 0,
# Minimum number of active synapses for a segment to be considered
# during search for the best-matching segments.
# None=use default
# Replaces: tpMinThreshold
'minThreshold': 12,
# Segment activation threshold.
# A segment is active if it has >= tpSegmentActivationThreshold
# connected synapses that are active due to infActiveState
# None=use default
# Replaces: tpActivationThreshold
'activationThreshold': 16,
'outputType': 'normal',
# "Pay Attention Mode" length. This tells the TP how many new
# elements to append to the end of a learned sequence at a time.
# Smaller values are better for datasets with short sequences,
# higher values are better for datasets with long sequences.
'pamLength': 1,
},
'clParams': {
'regionName' : 'CLAClassifierRegion',
# Classifier diagnostic output verbosity control;
# 0: silent; [1..6]: increasing levels of verbosity
'clVerbosity' : 0,
# This controls how fast the classifier learns/forgets. Higher values
# make it adapt faster and forget older patterns faster.
'alpha': 0.001,
# This is set after the call to updateConfigFromSubConfig and is
# computed from the aggregationInfo and predictAheadTime.
'steps': '1',
},
'trainSPNetOnlyIfRequested': False,
},
}
# end of config dictionary
# Adjust base config dictionary for any modifications if imported from a
# sub-experiment
updateConfigFromSubConfig(config)
# Compute predictionSteps based on the predictAheadTime and the aggregation
# period, which may be permuted over.
if config['predictAheadTime'] is not None:
predictionSteps = int(round(aggregationDivide(
config['predictAheadTime'], config['aggregationInfo'])))
assert (predictionSteps >= 1)
config['modelParams']['clParams']['steps'] = str(predictionSteps)
# Adjust config by applying ValueGetterBase-derived
# futures. NOTE: this MUST be called after updateConfigFromSubConfig() in order
# to support value-getter-based substitutions from the sub-experiment (if any)
applyValueGettersToContainer(config)
control = {
# The environment that the current model is being run in
"environment": 'nupic',
# Input stream specification per py/nupicengine/cluster/database/StreamDef.json.
#
'dataset' : {u'info': u'test_NoProviders',
u'streams': [ { u'columns': [u'*'],
u'info': "test data",
u'source': "file://swarming/test_data.csv"}],
u'version': 1},
# Iteration count: maximum number of iterations. Each iteration corresponds
# to one record from the (possibly aggregated) dataset. The task is
# terminated when either number of iterations reaches iterationCount or
# all records in the (possibly aggregated) database have been processed,
# whichever occurs first.
#
# iterationCount of -1 = iterate over the entire dataset
#'iterationCount' : ITERATION_COUNT,
# Metrics: A list of MetricSpecs that instantiate the metrics that are
# computed for this experiment
'metrics':[
MetricSpec(field=u'consumption',inferenceElement=InferenceElement.prediction,
metric='rmse'),
],
# Logged Metrics: A sequence of regular expressions that specify which of
# the metrics from the Inference Specifications section MUST be logged for
# every prediction. The regexes correspond to the automatically generated
# metric labels. This is similar to the way the optimization metric is
# specified in permutations.py.
}
descriptionInterface = ExperimentDescriptionAPI(modelConfig=config,
control=control)
|
aperigault/ansible
|
refs/heads/devel
|
lib/ansible/modules/cloud/ovirt/ovirt_template.py
|
15
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_template
short_description: Module to manage virtual machine templates in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage virtual machine templates in oVirt/RHV."
options:
name:
description:
- "Name of the template to manage."
id:
description:
- "ID of the template to be registered."
version_added: "2.4"
state:
description:
- "Should the template be present/absent/exported/imported/registered.
When C(state) is I(registered) and the unregistered template's name
belongs to an already registered in engine template in the same DC
then we fail to register the unregistered template."
choices: ['present', 'absent', 'exported', 'imported', 'registered']
default: present
vm:
description:
- "Name of the VM, which will be used to create template."
description:
description:
- "Description of the template."
cpu_profile:
description:
- "CPU profile to be set to template."
cluster:
description:
- "Name of the cluster, where template should be created/imported."
allow_partial_import:
description:
- "Boolean indication whether to allow partial registration of a template when C(state) is registered."
type: bool
version_added: "2.4"
vnic_profile_mappings:
description:
- "Mapper which maps an external virtual NIC profile to one that exists in the engine when C(state) is registered.
vnic_profile is described by the following dictionary:"
suboptions:
source_network_name:
description:
- The network name of the source network.
source_profile_name:
description:
- The profile name related to the source network.
target_profile_id:
description:
- The id of the target profile id to be mapped to in the engine.
version_added: "2.5"
cluster_mappings:
description:
- "Mapper which maps cluster name between Template's OVF and the destination cluster this Template should be registered to,
relevant when C(state) is registered.
Cluster mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source cluster.
dest_name:
description:
- The name of the destination cluster.
version_added: "2.5"
role_mappings:
description:
- "Mapper which maps role name between Template's OVF and the destination role this Template should be registered to,
relevant when C(state) is registered.
Role mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source role.
dest_name:
description:
- The name of the destination role.
version_added: "2.5"
domain_mappings:
description:
- "Mapper which maps aaa domain name between Template's OVF and the destination aaa domain this Template should be registered to,
relevant when C(state) is registered.
The aaa domain mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source aaa domain.
dest_name:
description:
- The name of the destination aaa domain.
version_added: "2.5"
exclusive:
description:
- "When C(state) is I(exported) this parameter indicates if the existing templates with the
same name should be overwritten."
type: bool
export_domain:
description:
- "When C(state) is I(exported) or I(imported) this parameter specifies the name of the
export storage domain."
image_provider:
description:
- "When C(state) is I(imported) this parameter specifies the name of the image provider to be used."
image_disk:
description:
- "When C(state) is I(imported) and C(image_provider) is used this parameter specifies the name of disk
to be imported as template."
aliases: ['glance_image_disk_name']
io_threads:
description:
- "Number of IO threads used by virtual machine. I(0) means IO threading disabled."
version_added: "2.7"
template_image_disk_name:
description:
- "When C(state) is I(imported) and C(image_provider) is used this parameter specifies the new name for imported disk,
if omitted then I(image_disk) name is used by default.
This parameter is used only in case of importing disk image from Glance domain."
version_added: "2.4"
storage_domain:
description:
- "When C(state) is I(imported) this parameter specifies the name of the destination data storage domain.
When C(state) is I(registered) this parameter specifies the name of the data storage domain of the unregistered template."
clone_permissions:
description:
- "If I(True) then the permissions of the VM (only the direct ones, not the inherited ones)
will be copied to the created template."
- "This parameter is used only when C(state) I(present)."
type: bool
default: False
seal:
description:
- "'Sealing' is an operation that erases all machine-specific configurations from a filesystem:
This includes SSH keys, UDEV rules, MAC addresses, system ID, hostname, etc.
If I(true) subsequent virtual machines made from this template will avoid configuration inheritance."
- "This parameter is used only when C(state) I(present)."
default: False
type: bool
version_added: "2.5"
operating_system:
description:
- Operating system of the template.
- Default value is set by oVirt/RHV engine.
- "Possible values are: debian_7, freebsd, freebsdx64, other, other_linux,
other_linux_ppc64, other_ppc64, rhel_3, rhel_4, rhel_4x64, rhel_5, rhel_5x64,
rhel_6, rhel_6x64, rhel_6_ppc64, rhel_7x64, rhel_7_ppc64, sles_11,
sles_11_ppc64, ubuntu_12_04, ubuntu_12_10, ubuntu_13_04, ubuntu_13_10,
ubuntu_14_04, ubuntu_14_04_ppc64, windows_10, windows_10x64, windows_2003,
windows_2003x64, windows_2008, windows_2008x64, windows_2008r2x64,
windows_2008R2x64, windows_2012x64, windows_2012R2x64,
windows_7, windows_7x64, windows_8, windows_8x64, windows_xp"
version_added: "2.6"
memory:
description:
- Amount of memory of the template. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
version_added: "2.6"
memory_guaranteed:
description:
- Amount of minimal guaranteed memory of the template.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- C(memory_guaranteed) parameter can't be lower than C(memory) parameter.
version_added: "2.6"
memory_max:
description:
- Upper bound of template memory up to which memory hot-plug can be performed.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
version_added: "2.6"
version:
description:
- "C(name) - The name of this version."
- "C(number) - The index of this version in the versions hierarchy of the template. Used for editing of sub template."
version_added: "2.8"
clone_name:
description:
- Name for importing Template from storage domain.
- If not defined, C(name) will be used.
version_added: "2.8"
usb_support:
description:
- "I(True) enable USB support, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.9"
timezone:
description:
- Sets time zone offset of the guest hardware clock.
- For example C(Etc/GMT)
version_added: "2.9"
sso:
description:
- "I(True) enable Single Sign On by Guest Agent, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.9"
soundcard_enabled:
description:
- "If I(true), the sound card is added to the virtual machine."
type: bool
version_added: "2.9"
smartcard_enabled:
description:
- "If I(true), use smart card authentication."
type: bool
version_added: "2.9"
cloud_init:
description:
- Dictionary with values for Unix-like Virtual Machine initialization using cloud init.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
timezone:
description:
- Timezone to be set to Virtual Machine when deployed.
user_name:
description:
- Username to be used to set the password on the Virtual Machine when deployed.
root_password:
description:
- Password to be set for user specified by C(user_name) parameter.
authorized_ssh_keys:
description:
- Use these SSH keys to log in to the Virtual Machine.
regenerate_ssh_keys:
description:
- If I(True) SSH keys will be regenerated on Virtual Machine.
type: bool
custom_script:
description:
- Cloud-init script which will be executed on Virtual Machine when deployed.
- This is appended to the end of the cloud-init script generated by any other options.
dns_servers:
description:
- DNS servers to be configured on Virtual Machine.
dns_search:
description:
- DNS search domains to be configured on Virtual Machine.
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.9"
cloud_init_nics:
description:
- List of dictionaries representing network interfaces to be setup by cloud init.
- This option is used when the user needs to set up more network interfaces via cloud init.
- If one network interface is enough, the user should use C(cloud_init) I(nic_*) parameters. C(cloud_init) I(nic_*) parameters
are merged with C(cloud_init_nics) parameters.
suboptions:
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.9"
ballooning_enabled:
description:
- "If I(true), use memory ballooning."
- "Memory balloon is a guest device, which may be used to re-distribute / reclaim the host memory
based on VM needs in a dynamic way. In this way it's possible to create memory over commitment states."
type: bool
version_added: "2.9"
nics:
description:
- List of NICs, which should be attached to Virtual Machine. NIC is described by following dictionary.
suboptions:
name:
description:
- Name of the NIC.
profile_name:
description:
- Profile name where NIC should be attached.
interface:
description:
- Type of the network interface.
choices: ['virtio', 'e1000', 'rtl8139']
default: 'virtio'
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
version_added: "2.9"
sysprep:
description:
- Dictionary with values for Windows Virtual Machine initialization using sysprep.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
active_directory_ou:
description:
- Active Directory Organizational Unit, to be used for login of user.
org_name:
description:
- Organization name to be set to Windows Virtual Machine.
domain:
description:
- Domain to be set to Windows Virtual Machine.
timezone:
description:
- Timezone to be set to Windows Virtual Machine.
ui_language:
description:
- UI language of the Windows Virtual Machine.
system_locale:
description:
- System localization of the Windows Virtual Machine.
input_locale:
description:
- Input localization of the Windows Virtual Machine.
windows_license_key:
description:
- License key to be set to Windows Virtual Machine.
user_name:
description:
- Username to be used to set the password on the Windows Virtual Machine.
root_password:
description:
- Password to be set for username to Windows Virtual Machine.
version_added: "2.9"
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Create template from vm
- ovirt_template:
cluster: Default
name: mytemplate
vm: rhel7
cpu_profile: Default
description: Test
# Import template
- ovirt_template:
state: imported
name: mytemplate
export_domain: myexport
storage_domain: mystorage
cluster: mycluster
# Remove template
- ovirt_template:
state: absent
name: mytemplate
# Change Template Name
- ovirt_template:
id: 00000000-0000-0000-0000-000000000000
name: "new_template_name"
# Register template
- ovirt_template:
state: registered
storage_domain: mystorage
cluster: mycluster
name: mytemplate
# Register template using id
- ovirt_template:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
# Register template, allowing partial import
- ovirt_template:
state: registered
storage_domain: mystorage
allow_partial_import: "True"
cluster: mycluster
id: 1111-1111-1111-1111
# Register template with vnic profile mappings
- ovirt_template:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
vnic_profile_mappings:
- source_network_name: mynetwork
source_profile_name: mynetwork
target_profile_id: 3333-3333-3333-3333
- source_network_name: mynetwork2
source_profile_name: mynetwork2
target_profile_id: 4444-4444-4444-4444
# Register template with mapping
- ovirt_template:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
role_mappings:
- source_name: Role_A
dest_name: Role_B
domain_mappings:
- source_name: Domain_A
dest_name: Domain_B
cluster_mappings:
- source_name: cluster_A
dest_name: cluster_B
# Import image from Glance as a template
- ovirt_template:
state: imported
name: mytemplate
image_disk: "centos7"
template_image_disk_name: centos7_from_glance
image_provider: "glance_domain"
storage_domain: mystorage
cluster: mycluster
# Edit template subversion
- ovirt_template:
cluster: mycluster
name: mytemplate
vm: rhel7
version:
number: 2
name: subversion
# Create new template subversion
- ovirt_template:
cluster: mycluster
name: mytemplate
vm: rhel7
version:
name: subversion
- name: Template with cloud init
ovirt_template:
name: mytemplate
cluster: Default
memory: 1GiB
cloud_init:
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_name: eth1
nic_on_boot: true
host_name: example.com
custom_script: |
write_files:
- content: |
Hello, world!
path: /tmp/greeting.txt
permissions: '0644'
user_name: root
root_password: super_password
- name: Template with cloud init, with multiple network interfaces
ovirt_template:
name: mytemplate
cluster: mycluster
cloud_init_nics:
- nic_name: eth0
nic_boot_protocol: dhcp
nic_on_boot: true
- nic_name: eth1
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_on_boot: true
- name: Template with timezone and nic
ovirt_template:
cluster: MyCluster
name: mytemplate
timezone: America/Godthab
memory_max: 2GiB
nics:
- name: nic1
- name: Template with sysprep
ovirt_vm:
name: windows2012R2_AD
cluster: Default
memory: 3GiB
sysprep:
host_name: windowsad.example.com
user_name: Administrator
root_password: SuperPassword123
'''
RETURN = '''
id:
description: ID of the template which is managed
returned: On success if template is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
template:
description: "Dictionary of all the template attributes. Template attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/template."
returned: On success if template is found.
type: dict
'''
import time
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
convert_to_bytes,
create_connection,
equal,
get_dict_of_struct,
get_link_name,
get_id_by_name,
ovirt_full_argument_spec,
search_by_attributes,
search_by_name,
)
class TemplatesModule(BaseModule):
def __init__(self, *args, **kwargs):
super(TemplatesModule, self).__init__(*args, **kwargs)
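# Cache for the lazily built otypes.Initialization (see get_initialization below).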
self._initialization = None
def build_entity(self):
return otypes.Template(
id=self._module.params['id'],
name=self._module.params['name'],
cluster=otypes.Cluster(
name=self._module.params['cluster']
) if self._module.params['cluster'] else None,
vm=otypes.Vm(
name=self._module.params['vm']
) if self._module.params['vm'] else None,
description=self._module.params['description'],
cpu_profile=otypes.CpuProfile(
id=search_by_name(
self._connection.system_service().cpu_profiles_service(),
self._module.params['cpu_profile'],
).id
) if self._module.params['cpu_profile'] else None,
display=otypes.Display(
smartcard_enabled=self.param('smartcard_enabled')
) if self.param('smartcard_enabled') is not None else None,
os=otypes.OperatingSystem(
type=self.param('operating_system'),
) if self.param('operating_system') else None,
memory=convert_to_bytes(
self.param('memory')
) if self.param('memory') else None,
soundcard_enabled=self.param('soundcard_enabled'),
usb=(
otypes.Usb(enabled=self.param('usb_support'))
) if self.param('usb_support') is not None else None,
sso=(
otypes.Sso(
methods=[otypes.Method(id=otypes.SsoMethod.GUEST_AGENT)] if self.param('sso') else []
)
) if self.param('sso') is not None else None,
time_zone=otypes.TimeZone(
name=self.param('timezone'),
) if self.param('timezone') else None,
version=otypes.TemplateVersion(
base_template=self._get_base_template(),
version_name=self.param('version').get('name'),
) if self.param('version') else None,
memory_policy=otypes.MemoryPolicy(
guaranteed=convert_to_bytes(self.param('memory_guaranteed')),
ballooning=self.param('ballooning_enabled'),
max=convert_to_bytes(self.param('memory_max')),
) if any((
self.param('memory_guaranteed'),
self.param('ballooning_enabled'),
self.param('memory_max')
)) else None,
io=otypes.Io(
threads=self.param('io_threads'),
) if self.param('io_threads') is not None else None,
initialization=self.get_initialization(),
)
def _get_base_template(self):
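# The base template is identified as version 1 of the template with the requested name.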
templates = self._connection.system_service().templates_service().list()
for template in templates:
if template.version.version_number == 1 and template.name == self.param('name'):
return otypes.Template(
id=template.id
)
def post_update(self, entity):
self.post_present(entity.id)
def post_present(self, entity_id):
# After the template is created, attach its NICs:
entity = self._service.service(entity_id).get()
self.__attach_nics(entity)
def __get_vnic_profile_id(self, nic):
"""
Return the VNIC profile ID looked up by its name. Since multiple
VNIC profiles may share a name, the cluster is used as an additional filter.
"""
vnics_service = self._connection.system_service().vnic_profiles_service()
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
profiles = [
profile for profile in vnics_service.list()
if profile.name == nic.get('profile_name')
]
cluster_networks = [
net.id for net in self._connection.follow_link(cluster.networks)
]
try:
return next(
profile.id for profile in profiles
if profile.network.id in cluster_networks
)
except StopIteration:
raise Exception(
"Profile '%s' was not found in cluster '%s'" % (
nic.get('profile_name'),
self.param('cluster')
)
)
def __attach_nics(self, entity):
# Attach NICs to VM, if specified:
nics_service = self._service.service(entity.id).nics_service()
for nic in self.param('nics'):
if search_by_name(nics_service, nic.get('name')) is None:
if not self._module.check_mode:
nics_service.add(
otypes.Nic(
name=nic.get('name'),
interface=otypes.NicInterface(
nic.get('interface', 'virtio')
),
vnic_profile=otypes.VnicProfile(
id=self.__get_vnic_profile_id(nic),
) if nic.get('profile_name') else None,
mac=otypes.Mac(
address=nic.get('mac_address')
) if nic.get('mac_address') else None,
)
)
self.changed = True
def get_initialization(self):
if self._initialization is not None:
return self._initialization
sysprep = self.param('sysprep')
cloud_init = self.param('cloud_init')
cloud_init_nics = self.param('cloud_init_nics') or []
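# If cloud_init is given, its nic_* keys are folded into cloud_init_nics below so all NIC configurations are built uniformly; the remaining keys are passed through to otypes.Initialization.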
if cloud_init is not None:
cloud_init_nics.append(cloud_init)
if cloud_init or cloud_init_nics:
self._initialization = otypes.Initialization(
nic_configurations=[
otypes.NicConfiguration(
boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol').lower()
) if nic.get('nic_boot_protocol') else None,
name=nic.pop('nic_name', None),
on_boot=nic.pop('nic_on_boot', None),
ip=otypes.Ip(
address=nic.pop('nic_ip_address', None),
netmask=nic.pop('nic_netmask', None),
gateway=nic.pop('nic_gateway', None),
) if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None
) else None,
)
for nic in cloud_init_nics
if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None or
nic.get('nic_boot_protocol') is not None or
nic.get('nic_on_boot') is not None
)
] if cloud_init_nics else None,
**cloud_init
)
elif sysprep:
self._initialization = otypes.Initialization(
**sysprep
)
return self._initialization
def update_check(self, entity):
template_display = entity.display
return (
equal(self._module.params.get('cluster'), get_link_name(self._connection, entity.cluster)) and
equal(self._module.params.get('description'), entity.description) and
equal(self.param('operating_system'), str(entity.os.type)) and
equal(self.param('name'), str(entity.name)) and
equal(self.param('smartcard_enabled'), getattr(template_display, 'smartcard_enabled', False)) and
equal(self.param('soundcard_enabled'), entity.soundcard_enabled) and
equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
equal(self.param('sso'), True if entity.sso.methods else False) and
equal(self.param('timezone'), getattr(entity.time_zone, 'name', None)) and
equal(self.param('usb_support'), entity.usb.enabled) and
equal(convert_to_bytes(self.param('memory_guaranteed')), entity.memory_policy.guaranteed) and
equal(convert_to_bytes(self.param('memory_max')), entity.memory_policy.max) and
equal(convert_to_bytes(self.param('memory')), entity.memory) and
equal(self._module.params.get('cpu_profile'), get_link_name(self._connection, entity.cpu_profile)) and
equal(self.param('io_threads'), entity.io.threads)
)
def _get_export_domain_service(self):
provider_name = self._module.params['export_domain'] or self._module.params['image_provider']
export_sds_service = self._connection.system_service().storage_domains_service()
export_sd = search_by_name(export_sds_service, provider_name)
if export_sd is None:
raise ValueError(
"Export storage domain/Image Provider '%s' wasn't found." % provider_name
)
return export_sds_service.service(export_sd.id)
def post_export_action(self, entity):
self._service = self._get_export_domain_service().templates_service()
def post_import_action(self, entity):
self._service = self._connection.system_service().templates_service()
def _get_role_mappings(module):
roleMappings = list()
for roleMapping in module.params['role_mappings']:
roleMappings.append(
otypes.RegistrationRoleMapping(
from_=otypes.Role(
name=roleMapping['source_name'],
) if roleMapping['source_name'] else None,
to=otypes.Role(
name=roleMapping['dest_name'],
) if roleMapping['dest_name'] else None,
)
)
return roleMappings
def _get_domain_mappings(module):
domainMappings = list()
for domainMapping in module.params['domain_mappings']:
domainMappings.append(
otypes.RegistrationDomainMapping(
from_=otypes.Domain(
name=domainMapping['source_name'],
) if domainMapping['source_name'] else None,
to=otypes.Domain(
name=domainMapping['dest_name'],
) if domainMapping['dest_name'] else None,
)
)
return domainMappings
def _get_cluster_mappings(module):
clusterMappings = list()
for clusterMapping in module.params['cluster_mappings']:
clusterMappings.append(
otypes.RegistrationClusterMapping(
from_=otypes.Cluster(
name=clusterMapping['source_name'],
),
to=otypes.Cluster(
name=clusterMapping['dest_name'],
),
)
)
return clusterMappings
def _get_vnic_profile_mappings(module):
vnicProfileMappings = list()
for vnicProfileMapping in module.params['vnic_profile_mappings']:
vnicProfileMappings.append(
otypes.VnicProfileMapping(
source_network_name=vnicProfileMapping['source_network_name'],
source_network_profile_name=vnicProfileMapping['source_profile_name'],
target_vnic_profile=otypes.VnicProfile(
id=vnicProfileMapping['target_profile_id'],
) if vnicProfileMapping['target_profile_id'] else None,
)
)
return vnicProfileMappings
def find_subversion_template(module, templates_service):
version = module.params.get('version')
templates = templates_service.list()
for template in templates:
if version.get('number') == template.version.version_number and module.params.get('name') == template.name:
return template
# The user specified a version number that does not exist.
raise ValueError(
"Template with name '%s' and version '%s' in cluster '%s' was not found'" % (
module.params['name'],
module.params['version']['number'],
module.params['cluster'],
)
)
def searchable_attributes(module):
"""
Return all searchable template attributes passed to module.
"""
attributes = {
'name': module.params.get('name'),
'cluster': module.params.get('cluster'),
}
return dict((k, v) for k, v in attributes.items() if v is not None)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['present', 'absent', 'exported', 'imported', 'registered'],
default='present',
),
id=dict(default=None),
name=dict(default=None),
vm=dict(default=None),
timezone=dict(type='str'),
description=dict(default=None),
sso=dict(type='bool'),
ballooning_enabled=dict(type='bool', default=None),
cluster=dict(default=None),
usb_support=dict(type='bool'),
allow_partial_import=dict(default=None, type='bool'),
cpu_profile=dict(default=None),
clone_permissions=dict(type='bool'),
export_domain=dict(default=None),
storage_domain=dict(default=None),
exclusive=dict(type='bool'),
clone_name=dict(default=None),
image_provider=dict(default=None),
soundcard_enabled=dict(type='bool', default=None),
smartcard_enabled=dict(type='bool', default=None),
image_disk=dict(default=None, aliases=['glance_image_disk_name']),
io_threads=dict(type='int', default=None),
template_image_disk_name=dict(default=None),
version=dict(default=None, type='dict'),
seal=dict(type='bool'),
vnic_profile_mappings=dict(default=[], type='list'),
cluster_mappings=dict(default=[], type='list'),
role_mappings=dict(default=[], type='list'),
domain_mappings=dict(default=[], type='list'),
operating_system=dict(type='str'),
memory=dict(type='str'),
memory_guaranteed=dict(type='str'),
memory_max=dict(type='str'),
nics=dict(type='list', default=[]),
cloud_init=dict(type='dict'),
cloud_init_nics=dict(type='list', default=[]),
sysprep=dict(type='dict'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['id', 'name']],
)
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
templates_service = connection.system_service().templates_service()
templates_module = TemplatesModule(
connection=connection,
module=module,
service=templates_service,
)
entity = None
if module.params['version'] is not None and module.params['version'].get('number') is not None:
entity = find_subversion_template(module, templates_service)
state = module.params['state']
if state == 'present':
force_create = False
if entity is None and module.params['version'] is not None:
force_create = True
ret = templates_module.create(
entity=entity,
# When the user wants to create a new template subversion, the template
# must be force-created: it already exists, but a new version of it should be created.
force_create=force_create,
result_state=otypes.TemplateStatus.OK,
search_params=searchable_attributes(module),
clone_permissions=module.params['clone_permissions'],
seal=module.params['seal'],
)
elif state == 'absent':
ret = templates_module.remove(entity=entity)
elif state == 'exported':
template = templates_module.search_entity()
if entity is not None:
template = entity
export_service = templates_module._get_export_domain_service()
export_template = search_by_attributes(export_service.templates_service(), id=template.id)
ret = templates_module.action(
entity=template,
action='export',
action_condition=lambda t: export_template is None or module.params['exclusive'],
wait_condition=lambda t: t is not None,
post_action=templates_module.post_export_action,
storage_domain=otypes.StorageDomain(id=export_service.get().id),
exclusive=module.params['exclusive'],
)
elif state == 'imported':
template = templates_module.search_entity()
if entity is not None:
template = entity
if template and module.params['clone_name'] is None:
ret = templates_module.create(
result_state=otypes.TemplateStatus.OK,
)
else:
kwargs = {}
if module.params['image_provider']:
kwargs.update(
disk=otypes.Disk(
name=module.params['template_image_disk_name'] or module.params['image_disk']
),
template=otypes.Template(
name=module.params['name'] if module.params['clone_name'] is None else module.params['clone_name'],
),
clone=True if module.params['clone_name'] is not None else False,
import_as_template=True,
)
if module.params['image_disk']:
# We need to refresh storage domain to get list of images:
templates_module._get_export_domain_service().images_service().list()
glance_service = connection.system_service().openstack_image_providers_service()
image_provider = search_by_name(glance_service, module.params['image_provider'])
images_service = glance_service.service(image_provider.id).images_service()
else:
images_service = templates_module._get_export_domain_service().templates_service()
template_name = module.params['image_disk'] or module.params['name']
entity = search_by_name(images_service, template_name)
if entity is None:
raise Exception("Image/template '%s' was not found." % template_name)
images_service.service(entity.id).import_(
storage_domain=otypes.StorageDomain(
name=module.params['storage_domain']
) if module.params['storage_domain'] else None,
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
**kwargs
)
# Wait for template to appear in system:
template = templates_module.wait_for_import(
condition=lambda t: t.status == otypes.TemplateStatus.OK
)
ret = templates_module.create(result_state=otypes.TemplateStatus.OK)
ret = {
'changed': True,
'id': template.id,
'template': get_dict_of_struct(template),
}
elif state == 'registered':
storage_domains_service = connection.system_service().storage_domains_service()
# Find the storage domain with unregistered template:
sd_id = get_id_by_name(storage_domains_service, module.params['storage_domain'])
storage_domain_service = storage_domains_service.storage_domain_service(sd_id)
templates_service = storage_domain_service.templates_service()
# Find the unregistered Template we want to register:
templates = templates_service.list(unregistered=True)
template = next(
(t for t in templates if (t.id == module.params['id'] or t.name == module.params['name'])),
None
)
changed = False
if template is None:
template = templates_module.search_entity()
if template is None:
raise ValueError(
"Template '%s(%s)' wasn't found." % (module.params['name'], module.params['id'])
)
else:
# Register the template into the system:
changed = True
template_service = templates_service.template_service(template.id)
template_service.register(
allow_partial_import=module.params['allow_partial_import'],
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
vnic_profile_mappings=_get_vnic_profile_mappings(module)
if module.params['vnic_profile_mappings'] else None,
registration_configuration=otypes.RegistrationConfiguration(
cluster_mappings=_get_cluster_mappings(module),
role_mappings=_get_role_mappings(module),
domain_mappings=_get_domain_mappings(module),
) if (module.params['cluster_mappings']
or module.params['role_mappings']
or module.params['domain_mappings']) else None
)
if module.params['wait']:
template = templates_module.wait_for_import()
else:
# Fetch template to initialize return.
template = template_service.get()
ret = templates_module.create(result_state=otypes.TemplateStatus.OK)
ret = {
'changed': changed,
'id': template.id,
'template': get_dict_of_struct(template)
}
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
2014c2g1/c2g1
|
refs/heads/master
|
w2/static/Brython2.0.0-20140209-164925/Lib/antigravity.py
|
917
|
import webbrowser
import hashlib
webbrowser.open("http://xkcd.com/353/")
def geohash(latitude, longitude, datedow):
'''Compute geohash() using the Munroe algorithm.
>>> geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
37.857713 -122.544543
'''
# http://xkcd.com/426/
h = hashlib.md5(datedow).hexdigest()
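# Split the 32-character hex digest into two 16-character halves and interpret each as a hexadecimal fraction in [0, 1).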
p, q = [('%f' % float.fromhex('0.' + x)) for x in (h[:16], h[16:32])]
print('%d%s %d%s' % (latitude, p[1:], longitude, q[1:]))
|
pyrocko/pyrocko
|
refs/heads/master
|
test/base/test_obspy_compat.py
|
1
|
from __future__ import division, print_function, absolute_import
import unittest
from .. import common
import pyrocko.trace
from pyrocko import util, io, model, pile
if common.have_obspy():
import obspy
from pyrocko import obspy_compat
obspy_compat.plant()
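# Helper passed as launch_hook in the tests below so snuffler/fiddle windows close immediately, keeping the GUI tests non-interactive.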
def close_win(win):
win.close()
@common.require_obspy
class ObsPyCompatTestCase(unittest.TestCase):
@common.require_gui
def test_obspy_snuffle(self):
fn = common.test_data_file('test1.mseed')
stream = obspy.read(fn)
stream.snuffle(launch_hook=close_win, instant_close=True)
trace = stream[0]
trace.snuffle(launch_hook=close_win, instant_close=True)
@common.require_gui
def test_obspy_fiddle(self):
fn = common.test_data_file('test1.mseed')
stream = obspy.read(fn)
stream2 = stream.fiddle(launch_hook=close_win, instant_close=True) # noqa
trace = stream[0]
trace2 = trace.fiddle(launch_hook=close_win, instant_close=True) # noqa
def test_to_obspy_trace(self):
traces = io.load(common.test_data_file('test1.mseed'))
for tr in traces:
obs_tr = tr.to_obspy_trace()
assert isinstance(obs_tr, obspy.Trace)
assert obs_tr.data.size == tr.data_len()
obs_stats = obs_tr.stats
for attr in ('network', 'station', 'location', 'channel'):
                assert getattr(obs_stats, attr) == getattr(tr, attr)
def test_to_obspy_stream(self):
pl = pile.Pile()
pl.load_files([common.test_data_file('test1.mseed')],
show_progress=False)
st = pl.to_obspy_stream()
assert isinstance(st, obspy.Stream)
assert len(st) == len([tr for tr in pl.iter_all()])
for tr in st:
assert isinstance(tr, obspy.Trace)
def test_to_pyrocko_traces(self):
st = obspy.read(common.test_data_file('test1.mseed'))
traces = st.to_pyrocko_traces()
assert isinstance(traces, list)
for tr in traces:
assert isinstance(tr, pyrocko.trace.Trace)
for tr in st:
assert isinstance(tr.to_pyrocko_trace(), pyrocko.trace.Trace)
def test_to_pyrocko_stations(self):
fn = common.test_data_file('geeil.geofon.xml')
inventory = obspy.read_inventory(fn)
for sta in inventory.to_pyrocko_stations():
assert isinstance(sta, model.Station)
def test_to_pyrocko_events(self):
from obspy.clients.fdsn.client import Client
client = Client('IRIS')
cat = client.get_events(eventid=609301)
events = cat.to_pyrocko_events()
self.assertEqual(len(events), len(cat))
if __name__ == "__main__":
util.setup_logging('test_obspy_compat', 'warning')
unittest.main()
|
sysadmin75/ansible
|
refs/heads/devel
|
test/units/module_utils/urls/test_RedirectHandlerFactory.py
|
70
|
# -*- coding: utf-8 -*-
# (c) 2018 Matt Martz <matt@sivel.net>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.urls import HAS_SSLCONTEXT, RedirectHandlerFactory, urllib_request, urllib_error
from ansible.module_utils.six import StringIO
import pytest
@pytest.fixture
def urllib_req():
req = urllib_request.Request(
'https://ansible.com/'
)
return req
@pytest.fixture
def request_body():
return StringIO('TESTS')
def test_no_redirs(urllib_req, request_body):
handler = RedirectHandlerFactory('none', False)
inst = handler()
with pytest.raises(urllib_error.HTTPError):
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
def test_urllib2_redir(urllib_req, request_body, mocker):
redir_request_mock = mocker.patch('ansible.module_utils.urls.urllib_request.HTTPRedirectHandler.redirect_request')
handler = RedirectHandlerFactory('urllib2', False)
inst = handler()
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
redir_request_mock.assert_called_once_with(inst, urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
def test_all_redir(urllib_req, request_body, mocker):
req_mock = mocker.patch('ansible.module_utils.urls.RequestWithMethod')
handler = RedirectHandlerFactory('all', False)
inst = handler()
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
req_mock.assert_called_once_with('https://docs.ansible.com/', data=None, headers={}, method='GET', origin_req_host='ansible.com', unverifiable=True)
def test_all_redir_post(request_body, mocker):
handler = RedirectHandlerFactory('all', False)
inst = handler()
    # Passing a body ('POST' here) as the data argument makes urllib report
    # the request method as POST.
    req = urllib_request.Request(
        'https://ansible.com/',
        'POST'
    )
req_mock = mocker.patch('ansible.module_utils.urls.RequestWithMethod')
inst.redirect_request(req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
req_mock.assert_called_once_with('https://docs.ansible.com/', data=None, headers={}, method='GET', origin_req_host='ansible.com', unverifiable=True)
def test_redir_headers_removal(urllib_req, request_body, mocker):
req_mock = mocker.patch('ansible.module_utils.urls.RequestWithMethod')
handler = RedirectHandlerFactory('all', False)
inst = handler()
urllib_req.headers = {
'Content-Type': 'application/json',
'Content-Length': 100,
'Foo': 'bar',
}
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
req_mock.assert_called_once_with('https://docs.ansible.com/', data=None, headers={'Foo': 'bar'}, method='GET', origin_req_host='ansible.com',
unverifiable=True)
def test_redir_url_spaces(urllib_req, request_body, mocker):
req_mock = mocker.patch('ansible.module_utils.urls.RequestWithMethod')
handler = RedirectHandlerFactory('all', False)
inst = handler()
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/foo bar')
req_mock.assert_called_once_with('https://docs.ansible.com/foo%20bar', data=None, headers={}, method='GET', origin_req_host='ansible.com',
unverifiable=True)
def test_redir_safe(urllib_req, request_body, mocker):
req_mock = mocker.patch('ansible.module_utils.urls.RequestWithMethod')
handler = RedirectHandlerFactory('safe', False)
inst = handler()
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
req_mock.assert_called_once_with('https://docs.ansible.com/', data=None, headers={}, method='GET', origin_req_host='ansible.com', unverifiable=True)
def test_redir_safe_not_safe(request_body):
handler = RedirectHandlerFactory('safe', False)
inst = handler()
    # Passing a body ('POST' here) as the data argument makes urllib report
    # the request method as POST.
    req = urllib_request.Request(
        'https://ansible.com/',
        'POST'
    )
with pytest.raises(urllib_error.HTTPError):
inst.redirect_request(req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
def test_redir_no_error_on_invalid(urllib_req, request_body):
    # An unrecognized follow_redirects value behaves like 'none': the redirect
    # is refused with an HTTPError rather than raising a configuration error.
    handler = RedirectHandlerFactory('invalid', False)
inst = handler()
with pytest.raises(urllib_error.HTTPError):
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
def test_redir_validate_certs(urllib_req, request_body, mocker):
opener_mock = mocker.patch('ansible.module_utils.urls.urllib_request._opener')
handler = RedirectHandlerFactory('all', True)
inst = handler()
inst.redirect_request(urllib_req, request_body, 301, '301 Moved Permanently', {}, 'https://docs.ansible.com/')
assert opener_mock.add_handler.call_count == int(not HAS_SSLCONTEXT)
def test_redir_http_error_308_urllib2(urllib_req, request_body):
handler = RedirectHandlerFactory('urllib2', False)
inst = handler()
with pytest.raises(urllib_error.HTTPError):
inst.redirect_request(urllib_req, request_body, 308, '308 Permanent Redirect', {}, 'https://docs.ansible.com/')
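# --- Editor's note: a usage sketch (illustrative, inferred only from how the
# tests above exercise the API). RedirectHandlerFactory returns a handler
# *class*; instantiating it yields a handler that can join a urllib opener
# chain. The positional arguments are (follow_redirects, validate_certs).
def _example_opener():
    handler_cls = RedirectHandlerFactory('safe', False)
    opener = urllib_request.build_opener(handler_cls())
    # opener.open(...) would now refuse non-safe (e.g. POST) redirects with
    # an HTTPError, as test_redir_safe_not_safe demonstrates.
    return opener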
|
souravbadami/oppia
|
refs/heads/develop
|
core/controllers/features_test.py
|
1
|
# Copyright 2019 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for fetching the features Oppia provides to its users."""
from core.domain import config_domain
from core.domain import rights_manager
from core.domain import user_services
from core.tests import test_utils
import feconf
def exploration_features_url(exp_id):
"""Returns URL for getting which features the given exploration supports."""
return '%s/%s' % (feconf.EXPLORATION_FEATURES_PREFIX, exp_id)
class ExplorationFeaturesTestBase(test_utils.GenericTestBase):
"""Does common exploration set up for testing feature handlers."""
EXP_ID = 'expId'
def setUp(self):
super(ExplorationFeaturesTestBase, self).setUp()
self.signup(self.EDITOR_EMAIL, self.EDITOR_USERNAME)
editor_id = self.get_user_id_from_email(self.EDITOR_EMAIL)
self.save_new_valid_exploration(
self.EXP_ID, editor_id, title='Explore!', end_state_name='END')
editor_actions_info = user_services.UserActionsInfo(editor_id)
rights_manager.publish_exploration(editor_actions_info, self.EXP_ID)
class ExplorationPlaythroughRecordingFeatureTest(ExplorationFeaturesTestBase):
"""Tests for fetching whether playthrough recording is enabled."""
def test_can_record_playthroughs_in_whitelisted_explorations(self):
self.set_config_property(
config_domain.WHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS,
new_config_value=[self.EXP_ID])
json_response = self.get_json(exploration_features_url(self.EXP_ID))
self.assertTrue(json_response['is_exploration_whitelisted'])
def test_can_not_record_playthroughs_with_empty_whitelist(self):
self.set_config_property(
config_domain.WHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS,
new_config_value=[])
json_response = self.get_json(exploration_features_url(self.EXP_ID))
self.assertFalse(json_response['is_exploration_whitelisted'])
def test_can_not_record_playthroughs_for_exploration_not_in_whitelist(self):
self.set_config_property(
config_domain.WHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS,
new_config_value=[self.EXP_ID + '-differentiate'])
json_response = self.get_json(exploration_features_url(self.EXP_ID))
self.assertFalse(json_response['is_exploration_whitelisted'])
class ExplorationImprovementsTabFeatureTest(ExplorationFeaturesTestBase):
"""Tests for fetching whether the improvements tab is enabled."""
def test_improvements_tab_is_enabled(self):
self.set_config_property(
config_domain.IS_IMPROVEMENTS_TAB_ENABLED, new_config_value=True)
json_response = self.get_json(exploration_features_url(self.EXP_ID))
self.assertTrue(json_response['is_improvements_tab_enabled'])
def test_improvements_tab_is_disabled(self):
self.set_config_property(
config_domain.IS_IMPROVEMENTS_TAB_ENABLED, new_config_value=False)
json_response = self.get_json(exploration_features_url(self.EXP_ID))
self.assertFalse(json_response['is_improvements_tab_enabled'])
|
MrSurly/micropython
|
refs/heads/master
|
tests/basics/builtin_round.py
|
103
|
# test round() with integral values
tests = [
False, True,
0, 1, -1, 10
]
for t in tests:
print(round(t))
|
Tejal011089/trufil-erpnext
|
refs/heads/master
|
erpnext/setup/doctype/sales_person/test_sales_person.py
|
104
|
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
test_dependencies = ["Employee"]
import frappe
test_records = frappe.get_test_records('Sales Person')
|
flh/odoo
|
refs/heads/master
|
addons/product_email_template/__openerp__.py
|
65
|
# -*- coding: utf-8 -*-
{
'name': 'Product Email Template',
'depends': ['account'],
'author': 'OpenERP SA',
'category': 'Accounting & Finance',
'description': """
Add email templates to products to be sent on invoice confirmation
==================================================================
With this module, link your products to an email template to send complete information and tools to your customer.
For instance, when invoicing a training, the training agenda and materials will automatically be sent to your customers.
""",
'website': 'http://www.openerp.com',
'demo': [
'data/product_demo.xml',
],
'data': [
'views/product_view.xml',
'views/email_template_view.xml',
],
'installable': True,
'auto_install': False,
}
|
MalloyPower/parsing-python
|
refs/heads/master
|
front-end/testsuite-python-lib/Python-2.1/Lib/py_compile.py
|
2
|
"""Routine to "compile" a .py file to a .pyc (or .pyo) file.
This module has intimate knowledge of the format of .pyc files.
"""
import imp
MAGIC = imp.get_magic()
__all__ = ["compile"]
def wr_long(f, x):
"""Internal; write a 32-bit int to a file in little-endian order."""
f.write(chr( x & 0xff))
f.write(chr((x >> 8) & 0xff))
f.write(chr((x >> 16) & 0xff))
f.write(chr((x >> 24) & 0xff))
def compile(file, cfile=None, dfile=None):
"""Byte-compile one Python source file to Python bytecode.
Arguments:
file: source filename
cfile: target filename; defaults to source with 'c' or 'o' appended
('c' normally, 'o' in optimizing mode, giving .pyc or .pyo)
dfile: purported filename; defaults to source (this is the filename
that will show up in error messages)
Note that it isn't necessary to byte-compile Python modules for
execution efficiency -- Python itself byte-compiles a module when
it is loaded, and if it can, writes out the bytecode to the
corresponding .pyc (or .pyo) file.
However, if a Python installation is shared between users, it is a
good idea to byte-compile all modules upon installation, since
other users may not be able to write in the source directories,
and thus they won't be able to write the .pyc/.pyo file, and then
they would be byte-compiling every module each time it is loaded.
This can slow down program start-up considerably.
See compileall.py for a script/module that uses this module to
byte-compile all installed files (or all files in selected
directories).
"""
import os, marshal, __builtin__
f = open(file)
try:
timestamp = long(os.fstat(f.fileno())[8])
except AttributeError:
timestamp = long(os.stat(file)[8])
codestring = f.read()
# If parsing from a string, line breaks are \n (see parsetok.c:tok_nextc)
# Replace will return original string if pattern is not found, so
# we don't need to check whether it is found first.
codestring = codestring.replace("\r\n","\n")
codestring = codestring.replace("\r","\n")
f.close()
if codestring and codestring[-1] != '\n':
codestring = codestring + '\n'
try:
codeobject = __builtin__.compile(codestring, dfile or file, 'exec')
except SyntaxError, detail:
import traceback, sys
lines = traceback.format_exception_only(SyntaxError, detail)
for line in lines:
sys.stderr.write(line.replace('File "<string>"',
'File "%s"' % (dfile or file)))
return
if not cfile:
cfile = file + (__debug__ and 'c' or 'o')
fc = open(cfile, 'wb')
fc.write('\0\0\0\0')
wr_long(fc, timestamp)
marshal.dump(codeobject, fc)
fc.flush()
fc.seek(0, 0)
fc.write(MAGIC)
fc.close()
if os.name == 'mac':
import macfs
macfs.FSSpec(cfile).SetCreatorType('Pyth', 'PYC ')
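# --- Editor's illustration (not part of py_compile.py) ---
# A minimal sketch that reads back the header compile() writes above: 4 magic
# bytes, then the 4-byte little-endian source timestamp written by wr_long(),
# then the marshalled code object. This layout is specific to the Python 2.x
# .pyc format; the sketch assumes Python 2.5+ for the 'with' statement.
import marshal
import struct

def read_pyc(path):
    """Return (magic, timestamp, code) from a Python 2 .pyc file."""
    with open(path, 'rb') as f:
        magic = f.read(4)                              # imp.get_magic() value
        timestamp = struct.unpack('<I', f.read(4))[0]  # little-endian mtime
        code = marshal.load(f)                         # the code object
    return magic, timestamp, code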
|
poljeff/odoo
|
refs/heads/8.0
|
addons/l10n_fr/report/compute_resultant_report.py
|
374
|
# -*- coding: utf-8 -*-
#
#
# Copyright (c) 2008 JAILLET Simon - CrysaLEAD - www.crysalead.fr
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
#
import base_report
from openerp.osv import osv
class cdr(base_report.base_report):
def __init__(self, cr, uid, name, context):
super(cdr, self).__init__(cr, uid, name, context)
def set_context(self, objects, data, ids):
super(cdr, self).set_context(objects, data, ids)
self._load('cdr', self.localcontext['data']['form'])
        # Expense subtotals (charges): summed from the keys cdrc1..cdrc25.
        self._set_variable(
            'ct1',
            sum(self.localcontext['cdrc%d' % i] for i in range(1, 16))
        )
        self._set_variable(
            'ct3',
            sum(self.localcontext['cdrc%d' % i] for i in range(17, 21))
        )
        self._set_variable(
            'ct4',
            sum(self.localcontext['cdrc%d' % i] for i in range(21, 24))
        )
        self._set_variable(
            'charges',
            self.localcontext['ct1'] + self.localcontext['cdrc16'] +
            self.localcontext['ct3'] + self.localcontext['ct4'] +
            self.localcontext['cdrc24'] + self.localcontext['cdrc25']
        )
        # Income subtotals (produits): summed from the keys cdrp1..cdrp17.
        self._set_variable(
            'pta',
            self.localcontext['cdrp1'] + self.localcontext['cdrp2']
        )
        self._set_variable(
            'ptb',
            sum(self.localcontext['cdrp%d' % i] for i in range(3, 8))
        )
        self._set_variable(
            'pt1',
            self.localcontext['pta'] + self.localcontext['ptb']
        )
        self._set_variable(
            'pt3',
            sum(self.localcontext['cdrp%d' % i] for i in range(9, 15))
        )
        self._set_variable(
            'pt4',
            sum(self.localcontext['cdrp%d' % i] for i in range(15, 18))
        )
        self._set_variable(
            'produits',
            self.localcontext['pt1'] + self.localcontext['cdrp8'] +
            self.localcontext['pt3'] + self.localcontext['pt4']
        )
class wrapped_report_resultat(osv.AbstractModel):
_name = 'report.l10n_fr.report_l10nfrresultat'
_inherit = 'report.abstract_report'
_template = 'l10n_fr.report_l10nfrresultat'
_wrapped_report_class = cdr
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
gdub/django
|
refs/heads/master
|
django/utils/inspect.py
|
146
|
from __future__ import absolute_import
import inspect
from django.utils import six
def getargspec(func):
if six.PY2:
return inspect.getargspec(func)
sig = inspect.signature(func)
args = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
]
varargs = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.VAR_POSITIONAL
]
varargs = varargs[0] if varargs else None
varkw = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.VAR_KEYWORD
]
varkw = varkw[0] if varkw else None
defaults = [
p.default for p in sig.parameters.values()
if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD and p.default is not p.empty
] or None
return args, varargs, varkw, defaults
def get_func_args(func):
if six.PY2:
argspec = inspect.getargspec(func)
return argspec.args[1:] # ignore 'self'
sig = inspect.signature(func)
return [
arg_name for arg_name, param in sig.parameters.items()
if param.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
]
def func_accepts_kwargs(func):
if six.PY2:
# Not all callables are inspectable with getargspec, so we'll
# try a couple different ways but in the end fall back on assuming
# it is -- we don't want to prevent registration of valid but weird
# callables.
try:
argspec = inspect.getargspec(func)
except TypeError:
try:
argspec = inspect.getargspec(func.__call__)
except (TypeError, AttributeError):
argspec = None
return not argspec or argspec[2] is not None
return any(
p for p in inspect.signature(func).parameters.values()
if p.kind == p.VAR_KEYWORD
)
def func_has_no_args(func):
    # "No args" means exactly one positional parameter: a bound method's 'self'.
    args = inspect.getargspec(func)[0] if six.PY2 else [
p for p in inspect.signature(func).parameters.values()
if p.kind == p.POSITIONAL_OR_KEYWORD and p.default is p.empty
]
return len(args) == 1
def func_supports_parameter(func, parameter):
if six.PY3:
return parameter in inspect.signature(func).parameters
else:
args, varargs, varkw, defaults = inspect.getargspec(func)
return parameter in args
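# --- Editor's illustration (not part of django.utils.inspect) ---
# Usage sketch under Python 3 for the shims above:
if __name__ == '__main__':
    def example(a, b=1, *args, **kwargs):
        pass

    assert getargspec(example) == (['a', 'b'], 'args', 'kwargs', [1])
    assert func_accepts_kwargs(example)
    # Note: the Python 2 branch of get_func_args() slices off the first
    # argument (an unbound method's 'self'); the Python 3 branch need not,
    # since inspect.signature() already omits 'self' for bound methods.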
|
gunicorn/gunicorn
|
refs/heads/master
|
examples/frameworks/django/testing/testing/apps/someapp/views.py
|
7
|
# Create your views here.
import csv
import io
import os
from django import forms
from django.http import HttpResponse
from django.shortcuts import render_to_response
from django.template import RequestContext
import tempfile
class MsgForm(forms.Form):
subject = forms.CharField(max_length=100)
message = forms.CharField()
f = forms.FileField()
def home(request):
from django.conf import settings
print(settings.SOME_VALUE)
subject = None
message = None
size = 0
print(request.META)
if request.POST:
form = MsgForm(request.POST, request.FILES)
print(request.FILES)
if form.is_valid():
subject = form.cleaned_data['subject']
message = form.cleaned_data['message']
f = request.FILES['f']
if not hasattr(f, "fileno"):
size = len(f.read())
else:
try:
size = int(os.fstat(f.fileno())[6])
except io.UnsupportedOperation:
size = len(f.read())
else:
form = MsgForm()
return render_to_response('home.html', {
'form': form,
'subject': subject,
'message': message,
'size': size
}, RequestContext(request))
def acsv(request):
rows = [
{'a': 1, 'b': 2},
{'a': 3, 'b': 3}
]
    # Django >= 1.7 renamed this argument to 'content_type'; see the
    # acsv_modern() sketch below.
    response = HttpResponse(mimetype='text/csv')
response['Content-Disposition'] = 'attachment; filename=report.csv'
writer = csv.writer(response)
writer.writerow(['a', 'b'])
for r in rows:
writer.writerow([r['a'], r['b']])
return response
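# --- Editor's note (illustrative, not part of the example app) ---
# The views above target pre-1.7 Django ('mimetype', render_to_response,
# RequestContext). A sketch of acsv() for Django >= 1.7, where the
# HttpResponse keyword was renamed to 'content_type':
def acsv_modern(request):
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=report.csv'
    writer = csv.writer(response)
    writer.writerow(['a', 'b'])
    return response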
|
Lekanich/intellij-community
|
refs/heads/master
|
python/testData/keywordCompletion/continue.py
|
83
|
for x in []:
conti<caret>
|
toshywoshy/ansible
|
refs/heads/devel
|
lib/ansible/modules/cloud/google/gcp_bigquery_table.py
|
6
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2017 Google
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# ----------------------------------------------------------------------------
#
# *** AUTO GENERATED CODE *** AUTO GENERATED CODE ***
#
# ----------------------------------------------------------------------------
#
# This file is automatically generated by Magic Modules and manual
# changes will be clobbered when the file is regenerated.
#
# Please read more about how to change this file at
# https://www.github.com/GoogleCloudPlatform/magic-modules
#
# ----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function
__metaclass__ = type
################################################################################
# Documentation
################################################################################
ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ["preview"], 'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gcp_bigquery_table
description:
- A Table that belongs to a Dataset.
short_description: Creates a GCP Table
version_added: '2.8'
author: Google Inc. (@googlecloudplatform)
requirements:
- python >= 2.6
- requests >= 2.18.4
- google-auth >= 1.3.0
options:
state:
description:
- Whether the given object should exist in GCP
choices:
- present
- absent
default: present
type: str
table_reference:
description:
- Reference describing the ID of this table.
required: false
type: dict
suboptions:
dataset_id:
description:
- The ID of the dataset containing this table.
required: false
type: str
project_id:
description:
- The ID of the project containing this table.
required: false
type: str
table_id:
description:
- The ID of the table.
required: false
type: str
clustering:
description:
- One or more fields on which data should be clustered. Only top-level, non-repeated,
simple-type fields are supported. When you cluster a table using multiple columns,
the order of columns you specify is important. The order of the specified columns
determines the sort order of the data.
required: false
type: list
version_added: '2.9'
description:
description:
    - A user-friendly description of this table.
required: false
type: str
friendly_name:
description:
- A descriptive name for this table.
required: false
type: str
labels:
description:
    - The labels associated with this table. You can use these to organize and
      group your tables.
required: false
type: dict
name:
description:
- Name of the table.
required: false
type: str
num_rows:
description:
- The number of rows of data in this table, excluding any data in the streaming
buffer.
required: false
type: int
version_added: '2.9'
view:
description:
- The view definition.
required: false
type: dict
suboptions:
use_legacy_sql:
description:
      - Specifies whether to use BigQuery's legacy SQL for this view.
required: false
type: bool
user_defined_function_resources:
description:
- Describes user-defined function resources used in the query.
required: false
type: list
suboptions:
inline_code:
description:
- An inline resource that contains code for a user-defined function (UDF).
              Providing an inline code resource is equivalent to providing a URI for
a file containing the same code.
required: false
type: str
resource_uri:
description:
- A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
required: false
type: str
time_partitioning:
description:
- If specified, configures time-based partitioning for this table.
required: false
type: dict
suboptions:
expiration_ms:
description:
- Number of milliseconds for which to keep the storage for a partition.
required: false
type: int
field:
description:
- If not set, the table is partitioned by pseudo column, referenced via either
'_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If
field is specified, the table is instead partitioned by this field. The
field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE
or REQUIRED.
required: false
type: str
version_added: '2.9'
type:
description:
- The only type supported is DAY, which will generate one partition per day.
- 'Some valid choices include: "DAY"'
required: false
type: str
schema:
description:
- Describes the schema of this table.
required: false
type: dict
suboptions:
fields:
description:
- Describes the fields in a table.
required: false
type: list
suboptions:
description:
description:
- The field description. The maximum length is 1,024 characters.
required: false
type: str
fields:
description:
- Describes the nested schema fields if the type property is set to RECORD.
required: false
type: list
mode:
description:
- The field mode.
- 'Some valid choices include: "NULLABLE", "REQUIRED", "REPEATED"'
required: false
type: str
name:
description:
- The field name.
required: false
type: str
type:
description:
- The field data type.
- 'Some valid choices include: "STRING", "BYTES", "INTEGER", "FLOAT",
"TIMESTAMP", "DATE", "TIME", "DATETIME", "RECORD"'
required: false
type: str
encryption_configuration:
description:
- Custom encryption configuration.
required: false
type: dict
suboptions:
kms_key_name:
description:
- Describes the Cloud KMS encryption key that will be used to protect destination
BigQuery table. The BigQuery Service Account associated with your project
requires access to this encryption key.
required: false
type: str
expiration_time:
description:
- The time when this table expires, in milliseconds since the epoch. If not present,
the table will persist indefinitely.
required: false
type: int
external_data_configuration:
description:
- Describes the data format, location, and other properties of a table stored
outside of BigQuery. By defining these properties, the data source can then
be queried as if it were a standard BigQuery table.
required: false
type: dict
suboptions:
autodetect:
description:
- Try to detect schema and format options automatically. Any option specified
explicitly will be honored.
required: false
type: bool
compression:
description:
- The compression type of the data source.
- 'Some valid choices include: "GZIP", "NONE"'
required: false
type: str
ignore_unknown_values:
description:
- Indicates if BigQuery should allow extra values that are not represented
        in the table schema.
required: false
type: bool
max_bad_records:
description:
- The maximum number of bad records that BigQuery can ignore when reading
        data.
required: false
default: '0'
type: int
source_format:
description:
- The data format.
- 'Some valid choices include: "CSV", "GOOGLE_SHEETS", "NEWLINE_DELIMITED_JSON",
"AVRO", "DATASTORE_BACKUP", "BIGTABLE"'
required: false
type: str
source_uris:
description:
- The fully-qualified URIs that point to your data in Google Cloud.
- 'For Google Cloud Storage URIs: Each URI can contain one ''*'' wildcard
character and it must come after the ''bucket'' name. Size limits related
to load jobs apply to external data sources. For Google Cloud Bigtable URIs:
            Exactly one URI can be specified and it has to be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore
backups, exactly one URI can be specified. Also, the ''*'' wildcard character
is not allowed.'
required: false
type: list
schema:
description:
- The schema for the data. Schema is required for CSV and JSON formats.
required: false
type: dict
suboptions:
fields:
description:
- Describes the fields in a table.
required: false
type: list
suboptions:
description:
description:
- The field description.
required: false
type: str
fields:
description:
- Describes the nested schema fields if the type property is set to
                RECORD.
required: false
type: list
mode:
description:
- Field mode.
- 'Some valid choices include: "NULLABLE", "REQUIRED", "REPEATED"'
required: false
type: str
name:
description:
- Field name.
required: false
type: str
type:
description:
- Field data type.
- 'Some valid choices include: "STRING", "BYTES", "INTEGER", "FLOAT",
"TIMESTAMP", "DATE", "TIME", "DATETIME", "RECORD"'
required: false
type: str
google_sheets_options:
description:
- Additional options if sourceFormat is set to GOOGLE_SHEETS.
required: false
type: dict
suboptions:
skip_leading_rows:
description:
- The number of rows at the top of a Google Sheet that BigQuery will skip
when reading the data.
required: false
default: '0'
type: int
csv_options:
description:
- Additional properties to set if sourceFormat is set to CSV.
required: false
type: dict
suboptions:
allow_jagged_rows:
description:
- Indicates if BigQuery should accept rows that are missing trailing optional
            columns.
required: false
type: bool
allow_quoted_newlines:
description:
- Indicates if BigQuery should allow quoted data sections that contain
            newline characters in a CSV file.
required: false
type: bool
encoding:
description:
- The character encoding of the data.
- 'Some valid choices include: "UTF-8", "ISO-8859-1"'
required: false
type: str
field_delimiter:
description:
- The separator for fields in a CSV file.
required: false
type: str
quote:
description:
- The value that is used to quote data sections in a CSV file.
required: false
type: str
skip_leading_rows:
description:
- The number of rows at the top of a CSV file that BigQuery will skip
when reading the data.
required: false
default: '0'
type: int
bigtable_options:
description:
- Additional options if sourceFormat is set to BIGTABLE.
required: false
type: dict
suboptions:
ignore_unspecified_column_families:
description:
          - If this field is true, then the column families that are not specified
            in the columnFamilies list are not exposed in the table schema.
required: false
type: bool
read_rowkey_as_string:
description:
          - If this field is true, then the rowkey column families will be read
            and converted to string.
required: false
type: bool
column_families:
description:
- List of column families to expose in the table schema along with their
types.
required: false
type: list
suboptions:
columns:
description:
- Lists of columns that should be exposed as individual fields as
opposed to a list of (column name, value) pairs.
required: false
type: list
suboptions:
encoding:
description:
- The encoding of the values when the type is not STRING.
- 'Some valid choices include: "TEXT", "BINARY"'
required: false
type: str
field_name:
description:
- If the qualifier is not a valid BigQuery field identifier, a
valid identifier must be provided as the column field name and
is used as field name in queries.
required: false
type: str
only_read_latest:
description:
            - If this is set, only the latest version of the value in this
              column is exposed.
required: false
type: bool
qualifier_string:
description:
- Qualifier of the column.
required: true
type: str
type:
description:
- The type to convert the value in cells of this column.
- 'Some valid choices include: "BYTES", "STRING", "INTEGER", "FLOAT",
"BOOLEAN"'
required: false
type: str
encoding:
description:
- The encoding of the values when the type is not STRING.
- 'Some valid choices include: "TEXT", "BINARY"'
required: false
type: str
family_id:
description:
- Identifier of the column family.
required: false
type: str
only_read_latest:
description:
          - If this is set, only the latest version of the value is exposed
            for all columns in this column family.
required: false
type: bool
type:
description:
- The type to convert the value in cells of this column family.
- 'Some valid choices include: "BYTES", "STRING", "INTEGER", "FLOAT",
"BOOLEAN"'
required: false
type: str
dataset:
description:
- Name of the dataset.
required: false
type: str
project:
description:
- The Google Cloud Platform project to use.
type: str
auth_kind:
description:
- The type of credential used.
type: str
required: true
choices:
- application
- machineaccount
- serviceaccount
service_account_contents:
description:
- The contents of a Service Account JSON file, either in a dictionary or as a
JSON string that represents it.
type: jsonarg
service_account_file:
description:
- The path of a Service Account JSON file if serviceaccount is selected as type.
type: path
service_account_email:
description:
- An optional service account email address if machineaccount is selected and
the user does not wish to use the default email.
type: str
scopes:
description:
- Array of scopes to be used
type: list
env_type:
description:
- Specifies which Ansible environment you're running this module within.
- This should not be set unless you know what you're doing.
- This only alters the User Agent string for any API requests.
type: str
'''
EXAMPLES = '''
- name: create a dataset
gcp_bigquery_dataset:
name: example_dataset
dataset_reference:
dataset_id: example_dataset
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
state: present
register: dataset
- name: create a table
gcp_bigquery_table:
name: example_table
dataset: example_dataset
table_reference:
dataset_id: example_dataset
project_id: test_project
table_id: example_table
project: test_project
auth_kind: serviceaccount
service_account_file: "/tmp/auth.pem"
state: present
'''
RETURN = '''
tableReference:
description:
- Reference describing the ID of this table.
returned: success
type: complex
contains:
datasetId:
description:
- The ID of the dataset containing this table.
returned: success
type: str
projectId:
description:
- The ID of the project containing this table.
returned: success
type: str
tableId:
description:
- The ID of the table.
returned: success
type: str
clustering:
description:
- One or more fields on which data should be clustered. Only top-level, non-repeated,
simple-type fields are supported. When you cluster a table using multiple columns,
the order of columns you specify is important. The order of the specified columns
determines the sort order of the data.
returned: success
type: list
creationTime:
description:
  - The time when this table was created, in milliseconds since the epoch.
returned: success
type: int
description:
description:
  - A user-friendly description of this table.
returned: success
type: str
friendlyName:
description:
- A descriptive name for this table.
returned: success
type: str
id:
description:
- An opaque ID uniquely identifying the table.
returned: success
type: str
labels:
description:
  - The labels associated with this table. You can use these to organize and
    group your tables.
returned: success
type: dict
lastModifiedTime:
description:
- The time when this table was last modified, in milliseconds since the epoch.
returned: success
type: int
location:
description:
- The geographic location where the table resides. This value is inherited from
the dataset.
returned: success
type: str
name:
description:
- Name of the table.
returned: success
type: str
numBytes:
description:
- The size of this table in bytes, excluding any data in the streaming buffer.
returned: success
type: int
numLongTermBytes:
description:
- The number of bytes in the table that are considered "long-term storage".
returned: success
type: int
numRows:
description:
- The number of rows of data in this table, excluding any data in the streaming
buffer.
returned: success
type: int
requirePartitionFilter:
description:
- If set to true, queries over this table require a partition filter that can be
used for partition elimination to be specified.
returned: success
type: bool
type:
description:
- Describes the table type.
returned: success
type: str
view:
description:
- The view definition.
returned: success
type: complex
contains:
useLegacySql:
description:
      - Specifies whether to use BigQuery's legacy SQL for this view.
returned: success
type: bool
userDefinedFunctionResources:
description:
- Describes user-defined function resources used in the query.
returned: success
type: complex
contains:
inlineCode:
description:
- An inline resource that contains code for a user-defined function (UDF).
        Providing an inline code resource is equivalent to providing a URI for
a file containing the same code.
returned: success
type: str
resourceUri:
description:
- A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
returned: success
type: str
timePartitioning:
description:
- If specified, configures time-based partitioning for this table.
returned: success
type: complex
contains:
expirationMs:
description:
- Number of milliseconds for which to keep the storage for a partition.
returned: success
type: int
field:
description:
- If not set, the table is partitioned by pseudo column, referenced via either
'_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field
is specified, the table is instead partitioned by this field. The field must
be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
returned: success
type: str
type:
description:
- The only type supported is DAY, which will generate one partition per day.
returned: success
type: str
streamingBuffer:
description:
- Contains information regarding this table's streaming buffer, if one is present.
This field will be absent if the table is not being streamed to or if there is
no data in the streaming buffer.
returned: success
type: complex
contains:
estimatedBytes:
description:
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
returned: success
type: int
estimatedRows:
description:
- A lower-bound estimate of the number of rows currently in the streaming buffer.
returned: success
type: int
oldestEntryTime:
description:
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds
since the epoch, if the streaming buffer is available.
returned: success
type: int
schema:
description:
- Describes the schema of this table.
returned: success
type: complex
contains:
fields:
description:
- Describes the fields in a table.
returned: success
type: complex
contains:
description:
description:
- The field description. The maximum length is 1,024 characters.
returned: success
type: str
fields:
description:
- Describes the nested schema fields if the type property is set to RECORD.
returned: success
type: list
mode:
description:
- The field mode.
returned: success
type: str
name:
description:
- The field name.
returned: success
type: str
type:
description:
- The field data type.
returned: success
type: str
encryptionConfiguration:
description:
- Custom encryption configuration.
returned: success
type: complex
contains:
kmsKeyName:
description:
- Describes the Cloud KMS encryption key that will be used to protect destination
BigQuery table. The BigQuery Service Account associated with your project
requires access to this encryption key.
returned: success
type: str
expirationTime:
description:
- The time when this table expires, in milliseconds since the epoch. If not present,
the table will persist indefinitely.
returned: success
type: int
externalDataConfiguration:
description:
- Describes the data format, location, and other properties of a table stored outside
of BigQuery. By defining these properties, the data source can then be queried
as if it were a standard BigQuery table.
returned: success
type: complex
contains:
autodetect:
description:
- Try to detect schema and format options automatically. Any option specified
explicitly will be honored.
returned: success
type: bool
compression:
description:
- The compression type of the data source.
returned: success
type: str
ignoreUnknownValues:
description:
- Indicates if BigQuery should allow extra values that are not represented in
        the table schema.
returned: success
type: bool
maxBadRecords:
description:
    - The maximum number of bad records that BigQuery can ignore when reading
      data.
returned: success
type: int
sourceFormat:
description:
- The data format.
returned: success
type: str
sourceUris:
description:
- The fully-qualified URIs that point to your data in Google Cloud.
- 'For Google Cloud Storage URIs: Each URI can contain one ''*'' wildcard character
and it must come after the ''bucket'' name. Size limits related to load jobs
apply to external data sources. For Google Cloud Bigtable URIs: Exactly one
      URI can be specified and it has to be a fully specified and valid HTTPS URL for
a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly
one URI can be specified. Also, the ''*'' wildcard character is not allowed.'
returned: success
type: list
schema:
description:
- The schema for the data. Schema is required for CSV and JSON formats.
returned: success
type: complex
contains:
fields:
description:
- Describes the fields in a table.
returned: success
type: complex
contains:
description:
description:
- The field description.
returned: success
type: str
fields:
description:
- Describes the nested schema fields if the type property is set to
            RECORD.
returned: success
type: list
mode:
description:
- Field mode.
returned: success
type: str
name:
description:
- Field name.
returned: success
type: str
type:
description:
- Field data type.
returned: success
type: str
googleSheetsOptions:
description:
- Additional options if sourceFormat is set to GOOGLE_SHEETS.
returned: success
type: complex
contains:
skipLeadingRows:
description:
- The number of rows at the top of a Google Sheet that BigQuery will skip
when reading the data.
returned: success
type: int
csvOptions:
description:
- Additional properties to set if sourceFormat is set to CSV.
returned: success
type: complex
contains:
allowJaggedRows:
description:
- Indicates if BigQuery should accept rows that are missing trailing optional
        columns.
returned: success
type: bool
allowQuotedNewlines:
description:
- Indicates if BigQuery should allow quoted data sections that contain newline
        characters in a CSV file.
returned: success
type: bool
encoding:
description:
- The character encoding of the data.
returned: success
type: str
fieldDelimiter:
description:
- The separator for fields in a CSV file.
returned: success
type: str
quote:
description:
- The value that is used to quote data sections in a CSV file.
returned: success
type: str
skipLeadingRows:
description:
- The number of rows at the top of a CSV file that BigQuery will skip when
reading the data.
returned: success
type: int
bigtableOptions:
description:
- Additional options if sourceFormat is set to BIGTABLE.
returned: success
type: complex
contains:
ignoreUnspecifiedColumnFamilies:
description:
      - If this field is true, then the column families that are not specified
        in the columnFamilies list are not exposed in the table schema.
returned: success
type: bool
readRowkeyAsString:
description:
      - If this field is true, then the rowkey column families will be read and
        converted to string.
returned: success
type: bool
columnFamilies:
description:
- List of column families to expose in the table schema along with their
types.
returned: success
type: complex
contains:
columns:
description:
- Lists of columns that should be exposed as individual fields as opposed
to a list of (column name, value) pairs.
returned: success
type: complex
contains:
encoding:
description:
- The encoding of the values when the type is not STRING.
returned: success
type: str
fieldName:
description:
- If the qualifier is not a valid BigQuery field identifier, a valid
identifier must be provided as the column field name and is used
as field name in queries.
returned: success
type: str
onlyReadLatest:
description:
            - If this is set, only the latest version of the value in this
              column is exposed.
returned: success
type: bool
qualifierString:
description:
- Qualifier of the column.
returned: success
type: str
type:
description:
- The type to convert the value in cells of this column.
returned: success
type: str
encoding:
description:
- The encoding of the values when the type is not STRING.
returned: success
type: str
familyId:
description:
- Identifier of the column family.
returned: success
type: str
onlyReadLatest:
description:
    - If this is set, only the latest version of the value is exposed for all
      columns in this column family.
returned: success
type: bool
type:
description:
- The type to convert the value in cells of this column family.
returned: success
type: str
dataset:
description:
- Name of the dataset.
returned: success
type: str
'''
################################################################################
# Imports
################################################################################
from ansible.module_utils.gcp_utils import navigate_hash, GcpSession, GcpModule, GcpRequest, remove_nones_from_dict, replace_resource_dict
import json
################################################################################
# Main
################################################################################
def main():
"""Main function"""
module = GcpModule(
argument_spec=dict(
state=dict(default='present', choices=['present', 'absent'], type='str'),
table_reference=dict(type='dict', options=dict(dataset_id=dict(type='str'), project_id=dict(type='str'), table_id=dict(type='str'))),
clustering=dict(type='list', elements='str'),
description=dict(type='str'),
friendly_name=dict(type='str'),
labels=dict(type='dict'),
name=dict(type='str'),
num_rows=dict(type='int'),
view=dict(
type='dict',
options=dict(
use_legacy_sql=dict(type='bool'),
user_defined_function_resources=dict(
type='list', elements='dict', options=dict(inline_code=dict(type='str'), resource_uri=dict(type='str'))
),
),
),
time_partitioning=dict(type='dict', options=dict(expiration_ms=dict(type='int'), field=dict(type='str'), type=dict(type='str'))),
schema=dict(
type='dict',
options=dict(
fields=dict(
type='list',
elements='dict',
options=dict(
description=dict(type='str'),
fields=dict(type='list', elements='str'),
mode=dict(type='str'),
name=dict(type='str'),
type=dict(type='str'),
),
)
),
),
encryption_configuration=dict(type='dict', options=dict(kms_key_name=dict(type='str'))),
expiration_time=dict(type='int'),
external_data_configuration=dict(
type='dict',
options=dict(
autodetect=dict(type='bool'),
compression=dict(type='str'),
ignore_unknown_values=dict(type='bool'),
max_bad_records=dict(default=0, type='int'),
source_format=dict(type='str'),
source_uris=dict(type='list', elements='str'),
schema=dict(
type='dict',
options=dict(
fields=dict(
type='list',
elements='dict',
options=dict(
description=dict(type='str'),
fields=dict(type='list', elements='str'),
mode=dict(type='str'),
name=dict(type='str'),
type=dict(type='str'),
),
)
),
),
google_sheets_options=dict(type='dict', options=dict(skip_leading_rows=dict(default=0, type='int'))),
csv_options=dict(
type='dict',
options=dict(
allow_jagged_rows=dict(type='bool'),
allow_quoted_newlines=dict(type='bool'),
encoding=dict(type='str'),
field_delimiter=dict(type='str'),
quote=dict(type='str'),
skip_leading_rows=dict(default=0, type='int'),
),
),
bigtable_options=dict(
type='dict',
options=dict(
ignore_unspecified_column_families=dict(type='bool'),
read_rowkey_as_string=dict(type='bool'),
column_families=dict(
type='list',
elements='dict',
options=dict(
columns=dict(
type='list',
elements='dict',
options=dict(
encoding=dict(type='str'),
field_name=dict(type='str'),
only_read_latest=dict(type='bool'),
qualifier_string=dict(required=True, type='str'),
type=dict(type='str'),
),
),
encoding=dict(type='str'),
family_id=dict(type='str'),
only_read_latest=dict(type='bool'),
type=dict(type='str'),
),
),
),
),
),
),
dataset=dict(type='str'),
)
)
if not module.params['scopes']:
module.params['scopes'] = ['https://www.googleapis.com/auth/bigquery']
state = module.params['state']
kind = 'bigquery#table'
fetch = fetch_resource(module, self_link(module), kind)
changed = False
if fetch:
if state == 'present':
if is_different(module, fetch):
update(module, self_link(module), kind)
fetch = fetch_resource(module, self_link(module), kind)
changed = True
else:
delete(module, self_link(module), kind)
fetch = {}
changed = True
else:
if state == 'present':
fetch = create(module, collection(module), kind)
changed = True
else:
fetch = {}
fetch.update({'changed': changed})
module.exit_json(**fetch)
def create(module, link, kind):
auth = GcpSession(module, 'bigquery')
return return_if_object(module, auth.post(link, resource_to_request(module)), kind)
def update(module, link, kind):
auth = GcpSession(module, 'bigquery')
return return_if_object(module, auth.put(link, resource_to_request(module)), kind)
def delete(module, link, kind):
auth = GcpSession(module, 'bigquery')
return return_if_object(module, auth.delete(link), kind)
def resource_to_request(module):
request = {
u'kind': 'bigquery#table',
u'tableReference': TableTablereference(module.params.get('table_reference', {}), module).to_request(),
u'clustering': module.params.get('clustering'),
u'description': module.params.get('description'),
u'friendlyName': module.params.get('friendly_name'),
u'labels': module.params.get('labels'),
u'name': module.params.get('name'),
u'numRows': module.params.get('num_rows'),
u'view': TableView(module.params.get('view', {}), module).to_request(),
u'timePartitioning': TableTimepartitioning(module.params.get('time_partitioning', {}), module).to_request(),
u'schema': TableSchema(module.params.get('schema', {}), module).to_request(),
u'encryptionConfiguration': TableEncryptionconfiguration(module.params.get('encryption_configuration', {}), module).to_request(),
u'expirationTime': module.params.get('expiration_time'),
u'externalDataConfiguration': TableExternaldataconfiguration(module.params.get('external_data_configuration', {}), module).to_request(),
}
return_vals = {}
for k, v in request.items():
if v or v is False:
return_vals[k] = v
return return_vals
def fetch_resource(module, link, kind, allow_not_found=True):
auth = GcpSession(module, 'bigquery')
return return_if_object(module, auth.get(link), kind, allow_not_found)
def self_link(module):
return "https://www.googleapis.com/bigquery/v2/projects/{project}/datasets/{dataset}/tables/{name}".format(**module.params)
def collection(module):
return "https://www.googleapis.com/bigquery/v2/projects/{project}/datasets/{dataset}/tables".format(**module.params)
def return_if_object(module, response, kind, allow_not_found=False):
# If not found, return nothing.
if allow_not_found and response.status_code == 404:
return None
# If no content, return nothing.
if response.status_code == 204:
return None
try:
module.raise_for_status(response)
result = response.json()
except getattr(json.decoder, 'JSONDecodeError', ValueError):
module.fail_json(msg="Invalid JSON response with error: %s" % response.text)
if navigate_hash(result, ['error', 'errors']):
module.fail_json(msg=navigate_hash(result, ['error', 'errors']))
return result
def is_different(module, response):
request = resource_to_request(module)
response = response_to_hash(module, response)
# Remove all output-only from response.
response_vals = {}
for k, v in response.items():
if k in request:
response_vals[k] = v
request_vals = {}
for k, v in request.items():
if k in response:
request_vals[k] = v
return GcpRequest(request_vals) != GcpRequest(response_vals)
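# --- Editor's illustration (not part of the generated module) ---
# A minimal, dependency-free analog (hypothetical helper) of the strategy
# is_different() uses above: compare only the keys present on both sides, so
# server-populated, output-only fields can never trigger a spurious update.
def _differs_on_shared_keys(request, response):
    shared = set(request) & set(response)
    return any(request[k] != response[k] for k in shared)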
# Remove unnecessary properties from the response.
# This is for doing comparisons with Ansible's current parameters.
def response_to_hash(module, response):
return {
u'tableReference': TableTablereference(response.get(u'tableReference', {}), module).from_response(),
u'clustering': response.get(u'clustering'),
u'creationTime': response.get(u'creationTime'),
u'description': response.get(u'description'),
u'friendlyName': response.get(u'friendlyName'),
u'id': response.get(u'id'),
u'labels': response.get(u'labels'),
u'lastModifiedTime': response.get(u'lastModifiedTime'),
u'location': response.get(u'location'),
u'name': response.get(u'name'),
u'numBytes': response.get(u'numBytes'),
u'numLongTermBytes': response.get(u'numLongTermBytes'),
u'numRows': response.get(u'numRows'),
u'requirePartitionFilter': response.get(u'requirePartitionFilter'),
u'type': response.get(u'type'),
u'view': TableView(response.get(u'view', {}), module).from_response(),
u'timePartitioning': TableTimepartitioning(response.get(u'timePartitioning', {}), module).from_response(),
u'streamingBuffer': TableStreamingbuffer(response.get(u'streamingBuffer', {}), module).from_response(),
u'schema': TableSchema(response.get(u'schema', {}), module).from_response(),
u'encryptionConfiguration': TableEncryptionconfiguration(response.get(u'encryptionConfiguration', {}), module).from_response(),
u'expirationTime': response.get(u'expirationTime'),
u'externalDataConfiguration': TableExternaldataconfiguration(response.get(u'externalDataConfiguration', {}), module).from_response(),
}
class TableTablereference(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{u'datasetId': self.request.get('dataset_id'), u'projectId': self.request.get('project_id'), u'tableId': self.request.get('table_id')}
)
def from_response(self):
return remove_nones_from_dict(
{u'datasetId': self.request.get(u'datasetId'), u'projectId': self.request.get(u'projectId'), u'tableId': self.request.get(u'tableId')}
)
class TableView(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{
u'useLegacySql': self.request.get('use_legacy_sql'),
u'userDefinedFunctionResources': TableUserdefinedfunctionresourcesArray(
self.request.get('user_defined_function_resources', []), self.module
).to_request(),
}
)
def from_response(self):
return remove_nones_from_dict(
{
u'useLegacySql': self.request.get(u'useLegacySql'),
u'userDefinedFunctionResources': TableUserdefinedfunctionresourcesArray(
self.request.get(u'userDefinedFunctionResources', []), self.module
).from_response(),
}
)
class TableUserdefinedfunctionresourcesArray(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = []
def to_request(self):
items = []
for item in self.request:
items.append(self._request_for_item(item))
return items
def from_response(self):
items = []
for item in self.request:
items.append(self._response_from_item(item))
return items
def _request_for_item(self, item):
return remove_nones_from_dict({u'inlineCode': item.get('inline_code'), u'resourceUri': item.get('resource_uri')})
def _response_from_item(self, item):
return remove_nones_from_dict({u'inlineCode': item.get(u'inlineCode'), u'resourceUri': item.get(u'resourceUri')})
class TableTimepartitioning(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{u'expirationMs': self.request.get('expiration_ms'), u'field': self.request.get('field'), u'type': self.request.get('type')}
)
def from_response(self):
return remove_nones_from_dict(
{u'expirationMs': self.request.get(u'expirationMs'), u'field': self.request.get(u'field'), u'type': self.request.get(u'type')}
)
class TableStreamingbuffer(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({})
def from_response(self):
return remove_nones_from_dict({})
class TableSchema(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({u'fields': TableFieldsArray(self.request.get('fields', []), self.module).to_request()})
def from_response(self):
return remove_nones_from_dict({u'fields': TableFieldsArray(self.request.get(u'fields', []), self.module).from_response()})
class TableFieldsArray(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = []
def to_request(self):
items = []
for item in self.request:
items.append(self._request_for_item(item))
return items
def from_response(self):
items = []
for item in self.request:
items.append(self._response_from_item(item))
return items
def _request_for_item(self, item):
return remove_nones_from_dict(
{
u'description': item.get('description'),
u'fields': item.get('fields'),
u'mode': item.get('mode'),
u'name': item.get('name'),
u'type': item.get('type'),
}
)
def _response_from_item(self, item):
return remove_nones_from_dict(
{
u'description': item.get(u'description'),
u'fields': item.get(u'fields'),
u'mode': item.get(u'mode'),
u'name': item.get(u'name'),
u'type': item.get(u'type'),
}
)
class TableEncryptionconfiguration(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({u'kmsKeyName': self.request.get('kms_key_name')})
def from_response(self):
return remove_nones_from_dict({u'kmsKeyName': self.request.get(u'kmsKeyName')})
class TableExternaldataconfiguration(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{
u'autodetect': self.request.get('autodetect'),
u'compression': self.request.get('compression'),
u'ignoreUnknownValues': self.request.get('ignore_unknown_values'),
u'maxBadRecords': self.request.get('max_bad_records'),
u'sourceFormat': self.request.get('source_format'),
u'sourceUris': self.request.get('source_uris'),
u'schema': TableSchema(self.request.get('schema', {}), self.module).to_request(),
u'googleSheetsOptions': TableGooglesheetsoptions(self.request.get('google_sheets_options', {}), self.module).to_request(),
u'csvOptions': TableCsvoptions(self.request.get('csv_options', {}), self.module).to_request(),
u'bigtableOptions': TableBigtableoptions(self.request.get('bigtable_options', {}), self.module).to_request(),
}
)
def from_response(self):
return remove_nones_from_dict(
{
u'autodetect': self.request.get(u'autodetect'),
u'compression': self.request.get(u'compression'),
u'ignoreUnknownValues': self.request.get(u'ignoreUnknownValues'),
u'maxBadRecords': self.request.get(u'maxBadRecords'),
u'sourceFormat': self.request.get(u'sourceFormat'),
u'sourceUris': self.request.get(u'sourceUris'),
u'schema': TableSchema(self.request.get(u'schema', {}), self.module).from_response(),
u'googleSheetsOptions': TableGooglesheetsoptions(self.request.get(u'googleSheetsOptions', {}), self.module).from_response(),
u'csvOptions': TableCsvoptions(self.request.get(u'csvOptions', {}), self.module).from_response(),
u'bigtableOptions': TableBigtableoptions(self.request.get(u'bigtableOptions', {}), self.module).from_response(),
}
)
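# Note: the TableSchema and TableFieldsArray definitions below duplicate the
# identical classes above (likely an artifact of code generation); the later
# definitions shadow the earlier ones, which is harmless since they match exactly.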
class TableSchema(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({u'fields': TableFieldsArray(self.request.get('fields', []), self.module).to_request()})
def from_response(self):
return remove_nones_from_dict({u'fields': TableFieldsArray(self.request.get(u'fields', []), self.module).from_response()})
class TableFieldsArray(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = []
def to_request(self):
items = []
for item in self.request:
items.append(self._request_for_item(item))
return items
def from_response(self):
items = []
for item in self.request:
items.append(self._response_from_item(item))
return items
def _request_for_item(self, item):
return remove_nones_from_dict(
{
u'description': item.get('description'),
u'fields': item.get('fields'),
u'mode': item.get('mode'),
u'name': item.get('name'),
u'type': item.get('type'),
}
)
def _response_from_item(self, item):
return remove_nones_from_dict(
{
u'description': item.get(u'description'),
u'fields': item.get(u'fields'),
u'mode': item.get(u'mode'),
u'name': item.get(u'name'),
u'type': item.get(u'type'),
}
)
class TableGooglesheetsoptions(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({u'skipLeadingRows': self.request.get('skip_leading_rows')})
def from_response(self):
return remove_nones_from_dict({u'skipLeadingRows': self.request.get(u'skipLeadingRows')})
class TableCsvoptions(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{
u'allowJaggedRows': self.request.get('allow_jagged_rows'),
u'allowQuotedNewlines': self.request.get('allow_quoted_newlines'),
u'encoding': self.request.get('encoding'),
u'fieldDelimiter': self.request.get('field_delimiter'),
u'quote': self.request.get('quote'),
u'skipLeadingRows': self.request.get('skip_leading_rows'),
}
)
def from_response(self):
return remove_nones_from_dict(
{
u'allowJaggedRows': self.request.get(u'allowJaggedRows'),
u'allowQuotedNewlines': self.request.get(u'allowQuotedNewlines'),
u'encoding': self.request.get(u'encoding'),
u'fieldDelimiter': self.request.get(u'fieldDelimiter'),
u'quote': self.request.get(u'quote'),
u'skipLeadingRows': self.request.get(u'skipLeadingRows'),
}
)
class TableBigtableoptions(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict(
{
u'ignoreUnspecifiedColumnFamilies': self.request.get('ignore_unspecified_column_families'),
u'readRowkeyAsString': self.request.get('read_rowkey_as_string'),
u'columnFamilies': TableColumnfamiliesArray(self.request.get('column_families', []), self.module).to_request(),
}
)
def from_response(self):
return remove_nones_from_dict(
{
u'ignoreUnspecifiedColumnFamilies': self.request.get(u'ignoreUnspecifiedColumnFamilies'),
u'readRowkeyAsString': self.request.get(u'readRowkeyAsString'),
u'columnFamilies': TableColumnfamiliesArray(self.request.get(u'columnFamilies', []), self.module).from_response(),
}
)
class TableColumnfamiliesArray(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = []
def to_request(self):
items = []
for item in self.request:
items.append(self._request_for_item(item))
return items
def from_response(self):
items = []
for item in self.request:
items.append(self._response_from_item(item))
return items
def _request_for_item(self, item):
return remove_nones_from_dict(
{
u'columns': TableColumnsArray(item.get('columns', []), self.module).to_request(),
u'encoding': item.get('encoding'),
u'familyId': item.get('family_id'),
u'onlyReadLatest': item.get('only_read_latest'),
u'type': item.get('type'),
}
)
def _response_from_item(self, item):
return remove_nones_from_dict(
{
u'columns': TableColumnsArray(item.get(u'columns', []), self.module).from_response(),
u'encoding': item.get(u'encoding'),
u'familyId': item.get(u'familyId'),
u'onlyReadLatest': item.get(u'onlyReadLatest'),
u'type': item.get(u'type'),
}
)
class TableColumnsArray(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = []
def to_request(self):
items = []
for item in self.request:
items.append(self._request_for_item(item))
return items
def from_response(self):
items = []
for item in self.request:
items.append(self._response_from_item(item))
return items
def _request_for_item(self, item):
return remove_nones_from_dict(
{
u'encoding': item.get('encoding'),
u'fieldName': item.get('field_name'),
u'onlyReadLatest': item.get('only_read_latest'),
u'qualifierString': item.get('qualifier_string'),
u'type': item.get('type'),
}
)
def _response_from_item(self, item):
return remove_nones_from_dict(
{
u'encoding': item.get(u'encoding'),
u'fieldName': item.get(u'fieldName'),
u'onlyReadLatest': item.get(u'onlyReadLatest'),
u'qualifierString': item.get(u'qualifierString'),
u'type': item.get(u'type'),
}
)
if __name__ == '__main__':
main()
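# Example Ansible task for this module (a hedged sketch -- the module name
# gcp_bigquery_table and the option names are assumed from the Table* classes
# above, which map snake_case options to camelCase BigQuery API fields):
#
#   - gcp_bigquery_table:
#       name: example_table
#       dataset: example_dataset
#       table_reference:
#         dataset_id: example_dataset
#         project_id: "{{ gcp_project }}"
#         table_id: example_table
#       project: "{{ gcp_project }}"
#       auth_kind: serviceaccount
#       service_account_file: "{{ gcp_cred_file }}"
#       state: present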
|
jlombacher/pyqtgraph
|
refs/heads/develop
|
pyqtgraph/graphicsItems/VTickGroup.py
|
41
|
if __name__ == '__main__':
import os, sys
path = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, os.path.join(path, '..', '..'))
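    # puts the repository root on sys.path when this file is executed directly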
from ..Qt import QtGui, QtCore
from .. import functions as fn
from .UIGraphicsItem import UIGraphicsItem
__all__ = ['VTickGroup']
class VTickGroup(UIGraphicsItem):
"""
**Bases:** :class:`UIGraphicsItem <pyqtgraph.UIGraphicsItem>`
Draws a set of tick marks which always occupy the same vertical range of the view,
but have x coordinates relative to the data within the view.
"""
def __init__(self, xvals=None, yrange=None, pen=None):
"""
============== ===================================================================
**Arguments:**
xvals A list of x values (in data coordinates) at which to draw ticks.
yrange A list of [low, high] limits for the tick. 0 is the bottom of
the view, 1 is the top. [0.8, 1] would draw ticks in the top
fifth of the view.
pen The pen to use for drawing ticks. Default is grey. Can be specified
as any argument valid for :func:`mkPen<pyqtgraph.mkPen>`
============== ===================================================================
"""
if yrange is None:
yrange = [0, 1]
if xvals is None:
xvals = []
UIGraphicsItem.__init__(self)
if pen is None:
pen = (200, 200, 200)
        self.path = QtGui.QPainterPath()
self.ticks = []
self.xvals = []
self.yrange = [0,1]
self.setPen(pen)
self.setYRange(yrange)
self.setXVals(xvals)
def setPen(self, *args, **kwargs):
"""Set the pen to use for drawing ticks. Can be specified as any arguments valid
for :func:`mkPen<pyqtgraph.mkPen>`"""
self.pen = fn.mkPen(*args, **kwargs)
def setXVals(self, vals):
"""Set the x values for the ticks.
============== =====================================================================
**Arguments:**
vals A list of x values (in data/plot coordinates) at which to draw ticks.
============== =====================================================================
"""
        self.xvals = vals
        self.rebuildTicks()
def setYRange(self, vals):
"""Set the y range [low, high] that the ticks are drawn on. 0 is the bottom of
the view, 1 is the top."""
self.yrange = vals
self.rebuildTicks()
def dataBounds(self, *args, **kargs):
return None ## item should never affect view autoscaling
def yRange(self):
return self.yrange
def rebuildTicks(self):
self.path = QtGui.QPainterPath()
for x in self.xvals:
self.path.moveTo(x, 0.)
self.path.lineTo(x, 1.)
def paint(self, p, *args):
UIGraphicsItem.paint(self, p, *args)
br = self.boundingRect()
h = br.height()
br.setY(br.y() + self.yrange[0] * h)
br.setHeight(h - (1.0-self.yrange[1]) * h)
p.translate(0, br.y())
p.scale(1.0, br.height())
p.setPen(self.pen)
p.drawPath(self.path)
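# Example usage (a minimal sketch; assumes pyqtgraph is importable and a Qt
# event loop is running):
#
#   import pyqtgraph as pg
#   plt = pg.plot()
#   ticks = pg.VTickGroup(xvals=[1, 2, 4, 8], yrange=[0, 0.1], pen=(255, 100, 100))
#   plt.addItem(ticks)   # ticks occupy the bottom tenth of the view at x=1,2,4,8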
|
MaKToff/SPbSU_Homeworks
|
refs/heads/master
|
Semester 7/Natural language processing/lang_detect.py
|
1
|
import os

import keras
import numpy as np
from keras import Sequential
from keras.layers import Dense, Dropout, Conv1D, MaxPooling1D, Flatten
from keras.models import load_model
from sklearn.metrics import classification_report

# The star import supplies language_codes, create_sample_vectors and gen_train_test;
# os, numpy and keras are imported explicitly above instead of relying on it.
from preprocessing import *
data_directory = "./data/"
model_path = "./model.h5"
def train_model(train_test_path):
"""
Creates a model and performs training.
"""
# Load train/test data
train_test_data = np.load(train_test_path)
x_train = train_test_data['X_train']
y_train = train_test_data['y_train']
print("x_train:", x_train.shape)
print("y_train:", y_train.shape)
del train_test_data
x_train = np.expand_dims(x_train, axis=3)
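    # Adds a trailing singleton axis as the channel dimension. Conv1D expects
    # 3-D input of (samples, steps, channels); whether axis=3 is the right
    # position depends on the array shape produced by preprocessing (not shown
    # here) -- for 2-D samples, axis=2 would be the usual choice.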
# Create network
model = Sequential()
model.add(Conv1D(128, 5, input_shape=x_train.shape[1:], padding='same', activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(128, 5, padding='same', activation='relu'))
model.add(MaxPooling1D(5))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(1024, kernel_initializer='glorot_uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, kernel_initializer='glorot_uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, kernel_initializer='glorot_uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, kernel_initializer='glorot_uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(language_codes), kernel_initializer='glorot_uniform', activation='softmax'))
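    # Two Conv1D + MaxPooling1D stages extract local features from the input
    # vectors (encoding defined in preprocessing), followed by a four-layer dense
    # classifier with heavy dropout; the softmax width matches len(language_codes).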
model_optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=model_optimizer, metrics=['accuracy'])
# Train
model.fit(x_train, y_train,
epochs=10,
validation_split=0.10,
batch_size=64,
verbose=2,
shuffle=True)
model.save(model_path)
def test(train_test_path):
"""
    Loads the saved model and validates it on the given test data.
"""
train_test_data = np.load(train_test_path)
x_test = train_test_data['X_test']
y_test = train_test_data['y_test']
print("x_test: ", x_test.shape)
print("y_test: ", y_test.shape)
del train_test_data
x_test = np.expand_dims(x_test, axis=3)
model = load_model(model_path)
scores = model.evaluate(x_test, y_test, verbose=1)
print(f"{model.metrics_names[1]}: {(scores[1] * 100):4.4}%")
# Scikit-learn classification report
y_pred = model.predict_classes(x_test)
y_pred = keras.utils.to_categorical(y_pred, num_classes=len(language_codes))
print(classification_report(y_test, y_pred, target_names=language_codes))
def start(model_exists=False):
"""
Starts the process of training/testing.
"""
vectors_directory = os.path.join(data_directory, "samples")
vectors_path = os.path.join(vectors_directory, "sample_vectors.npz")
train_test_directory = os.path.join(data_directory, "train_test")
train_test_path = os.path.join(train_test_directory, "train_test_data.npz")
if not os.path.exists(vectors_directory):
os.makedirs(vectors_directory)
if not os.path.exists(train_test_directory):
os.makedirs(train_test_directory)
create_sample_vectors(data_directory, vectors_path)
gen_train_test(vectors_path, train_test_path)
if not model_exists:
train_model(train_test_path)
test(train_test_path)
start()
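# To reuse an already trained model instead of retraining, call
# start(model_exists=True); test() then loads the saved weights from model_path.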
|