"""
Tick locating and formatting
============================
This module contains classes to support completely configurable tick
locating and formatting. Although the locators know nothing about major
or minor ticks, they are used by the Axis class to support major and
minor tick locating and formatting. Generic tick locators and
formatters are provided, as well as domain specific custom ones.
Default Formatter
-----------------
The default formatter identifies when the x-data being plotted is a
small range on top of a large offset. To reduce the chances that the
ticklabels overlap, the ticks are labeled as deltas from a fixed offset.
For example::
ax.plot(np.arange(2000, 2010), range(10))
will have ticks of 0-9 with an offset of +2e3. If this is not desired,
turn off the use of the offset on the default formatter::
ax.get_xaxis().get_major_formatter().set_useOffset(False)
set the rcParam ``axes.formatter.useoffset=False`` to turn it off
globally, or set a different formatter.
Tick locating
-------------
The Locator class is the base class for all tick locators. The locators
handle autoscaling of the view limits based on the data limits, and the
choosing of tick locations. A useful semi-automatic tick locator is
`MultipleLocator`. It is initialized with a base, e.g., 10, and it picks
axis limits and ticks that are multiples of that base.
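For example, a minimal sketch (``ax`` is assumed to be an existing Axes)
placing a major tick at every multiple of 10 on the x-axis::

    import matplotlib.ticker as ticker
    ax.xaxis.set_major_locator(ticker.MultipleLocator(10))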
The Locator subclasses defined here are
:class:`AutoLocator`
`MaxNLocator` with simple defaults. This is the default tick locator for
most plotting.
:class:`MaxNLocator`
Finds up to a max number of intervals with ticks at nice locations.
:class:`LinearLocator`
Space ticks evenly from min to max.
:class:`LogLocator`
Space ticks logarithmically from min to max.
:class:`MultipleLocator`
Ticks and range are a multiple of base; either integer or float.
:class:`FixedLocator`
Tick locations are fixed.
:class:`IndexLocator`
Locator for index plots (e.g., where ``x = range(len(y))``).
:class:`NullLocator`
No ticks.
:class:`SymmetricalLogLocator`
Locator for use with the symlog norm; works like `LogLocator` for the
part outside of the threshold and adds 0 if inside the limits.
:class:`LogitLocator`
Locator for logit scaling.
:class:`OldAutoLocator`
Choose a `MultipleLocator` and dynamically reassign it for intelligent
ticking during navigation.
:class:`AutoMinorLocator`
Locator for minor ticks when the axis is linear and the
major ticks are uniformly spaced. Subdivides the major
tick interval into a specified number of minor intervals,
defaulting to 4 or 5 depending on the major interval.
There are a number of locators specialized for date locations - see
the `dates` module.
You can define your own locator by deriving from Locator. You must
override the ``__call__`` method, which returns a sequence of locations,
and you will probably want to override the autoscale method to set the
view limits from the data limits.
If you want to override the default locator, use one of the above or a custom
locator and pass it to the x or y axis instance. The relevant methods are::
ax.xaxis.set_major_locator(xmajor_locator)
ax.xaxis.set_minor_locator(xminor_locator)
ax.yaxis.set_major_locator(ymajor_locator)
ax.yaxis.set_minor_locator(yminor_locator)
The default minor locator is `NullLocator`, i.e., no minor ticks are shown by default.
Tick formatting
---------------
Tick formatting is controlled by classes derived from Formatter. The formatter
operates on a single tick value and returns a string to the axis.
:class:`NullFormatter`
No labels on the ticks.
:class:`IndexFormatter`
Set the strings from a list of labels.
:class:`FixedFormatter`
Set the strings manually for the labels.
:class:`FuncFormatter`
User defined function sets the labels.
:class:`StrMethodFormatter`
Use string `format` method.
:class:`FormatStrFormatter`
Use an old-style sprintf format string.
:class:`ScalarFormatter`
Default formatter for scalars: autopick the format string.
:class:`LogFormatter`
Formatter for log axes.
:class:`LogFormatterExponent`
Format values for log axis using ``exponent = log_base(value)``.
:class:`LogFormatterMathtext`
Format values for log axis using ``exponent = log_base(value)``
using Math text.
:class:`LogFormatterSciNotation`
Format values for log axis using scientific notation.
:class:`LogitFormatter`
Probability formatter.
:class:`EngFormatter`
Format labels in engineering notation.
:class:`PercentFormatter`
Format labels as a percentage.
You can derive your own formatter from the Formatter base class by
simply overriding the ``__call__`` method. The formatter class has
access to the axis view and data limits.
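A minimal sketch of such a formatter (the class name ``FixedPointFormatter``
is purely illustrative)::

    class FixedPointFormatter(Formatter):
        def __call__(self, x, pos=None):
            # Always format the tick value with two decimal places.
            return '%.2f' % x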
To control the major and minor tick label formats, use one of the
following methods::
ax.xaxis.set_major_formatter(xmajor_formatter)
ax.xaxis.set_minor_formatter(xminor_formatter)
ax.yaxis.set_major_formatter(ymajor_formatter)
ax.yaxis.set_minor_formatter(yminor_formatter)
See :ref:`sphx_glr_gallery_ticks_and_spines_major_minor_demo.py` for an
example of setting major and minor ticks. See the :mod:`matplotlib.dates`
module for more information and examples of using date locators and formatters.
"""
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
import itertools
import locale
import math
import numpy as np
from matplotlib import rcParams
from matplotlib import cbook
from matplotlib import transforms as mtransforms
from matplotlib.cbook import mplDeprecation
import warnings
__all__ = ('TickHelper', 'Formatter', 'FixedFormatter',
'NullFormatter', 'FuncFormatter', 'FormatStrFormatter',
'StrMethodFormatter', 'ScalarFormatter', 'LogFormatter',
'LogFormatterExponent', 'LogFormatterMathtext',
'IndexFormatter', 'LogFormatterSciNotation',
'LogitFormatter', 'EngFormatter', 'PercentFormatter',
'Locator', 'IndexLocator', 'FixedLocator', 'NullLocator',
'LinearLocator', 'LogLocator', 'AutoLocator',
'MultipleLocator', 'MaxNLocator', 'AutoMinorLocator',
'SymmetricalLogLocator', 'LogitLocator')
if six.PY3:
long = int
# Work around numpy/numpy#6127.
def _divmod(x, y):
if isinstance(x, np.generic):
x = x.item()
if isinstance(y, np.generic):
y = y.item()
return six.moves.builtins.divmod(x, y)
def _mathdefault(s):
return '\\mathdefault{%s}' % s
class _DummyAxis(object):
def __init__(self, minpos=0):
self.dataLim = mtransforms.Bbox.unit()
self.viewLim = mtransforms.Bbox.unit()
self._minpos = minpos
def get_view_interval(self):
return self.viewLim.intervalx
def set_view_interval(self, vmin, vmax):
self.viewLim.intervalx = vmin, vmax
def get_minpos(self):
return self._minpos
def get_data_interval(self):
return self.dataLim.intervalx
def set_data_interval(self, vmin, vmax):
self.dataLim.intervalx = vmin, vmax
def get_tick_space(self):
# Just use the long-standing default of nbins==9
return 9
class TickHelper(object):
axis = None
def set_axis(self, axis):
self.axis = axis
def create_dummy_axis(self, **kwargs):
if self.axis is None:
self.axis = _DummyAxis(**kwargs)
def set_view_interval(self, vmin, vmax):
self.axis.set_view_interval(vmin, vmax)
def set_data_interval(self, vmin, vmax):
self.axis.set_data_interval(vmin, vmax)
def set_bounds(self, vmin, vmax):
self.set_view_interval(vmin, vmax)
self.set_data_interval(vmin, vmax)
class Formatter(TickHelper):
"""
Create a string based on a tick value and location.
"""
# some classes want to see all the locs to help format
# individual ones
locs = []
def __call__(self, x, pos=None):
"""
Return the format for tick value `x` at position pos.
``pos=None`` indicates an unspecified location.
"""
raise NotImplementedError('Derived must override')
def format_data(self, value):
"""
Returns the full string representation of the value with the
position unspecified.
"""
return self.__call__(value)
def format_data_short(self, value):
"""
Return a short string version of the tick value.
Defaults to the position-independent long value.
"""
return self.format_data(value)
def get_offset(self):
return ''
def set_locs(self, locs):
self.locs = locs
def fix_minus(self, s):
"""
Some classes may want to replace a hyphen for minus with the
proper unicode symbol (U+2212) for typographical correctness.
The default is to not replace it.
Note: if you use this method, e.g., in :meth:`format_data` or
:meth:`__call__`, you probably don't want to use it for
:meth:`format_data_short`, since the toolbar uses the latter for
interactive coordinate reporting and GUIs across platforms cannot
be expected to handle the unicode correctly. So for now, classes
that override :meth:`fix_minus` should also provide an explicit
:meth:`format_data_short` method.
"""
return s
class IndexFormatter(Formatter):
"""
Format the position ``x`` to the nearest i-th label, where ``i = int(x + 0.5)``.
"""
def __init__(self, labels):
self.labels = labels
self.n = len(labels)
def __call__(self, x, pos=None):
"""
Return the format for tick value `x` at position pos.
The position is ignored and the value is rounded to the nearest
integer, which is used to look up the label.
"""
i = int(x + 0.5)
if i < 0 or i >= self.n:
return ''
else:
return self.labels[i]
class NullFormatter(Formatter):
"""
Always return the empty string.
"""
def __call__(self, x, pos=None):
"""
Returns an empty string for all inputs.
"""
return ''
class FixedFormatter(Formatter):
"""
Return fixed strings for tick labels based only on position, not
value.
"""
def __init__(self, seq):
"""
Set the sequence of strings that will be used for labels.
"""
self.seq = seq
self.offset_string = ''
def __call__(self, x, pos=None):
"""
Returns the label that matches the position regardless of the
value.
For positions ``pos < len(seq)``, return ``seq[pos]`` regardless of
`x`. Otherwise return the empty string. `seq` is the sequence of
strings that this object was initialized with.
"""
if pos is None or pos >= len(self.seq):
return ''
else:
return self.seq[pos]
def get_offset(self):
return self.offset_string
def set_offset_string(self, ofs):
self.offset_string = ofs
class FuncFormatter(Formatter):
"""
Use a user-defined function for formatting.
The function should take in two inputs (a tick value ``x`` and a
position ``pos``), and return a string containing the corresponding
tick label.
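For example, a minimal sketch that hides the label at zero and divides
the remaining tick values by 1000 (``ax`` is assumed to be an existing
Axes)::

    def thousands(x, pos):
        # Hide the label at zero; otherwise show the value divided by 1000.
        return '' if x == 0 else '%.1f' % (x / 1000.)
    ax.yaxis.set_major_formatter(FuncFormatter(thousands))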
"""
def __init__(self, func):
self.func = func
def __call__(self, x, pos=None):
"""
Return the value of the user defined function.
`x` and `pos` are passed through as-is.
"""
return self.func(x, pos)
class FormatStrFormatter(Formatter):
"""
Use an old-style ('%' operator) format string to format the tick.
The format string should have a single variable format (%) in it.
It will be applied to the value (not the position) of the tick.
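For example, a sketch rendering each tick with three decimal places
(``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_formatter(FormatStrFormatter('%.3f'))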
"""
def __init__(self, fmt):
self.fmt = fmt
def __call__(self, x, pos=None):
"""
Return the formatted label string.
Only the value `x` is formatted. The position is ignored.
"""
return self.fmt % x
class StrMethodFormatter(Formatter):
"""
Use a new-style format string (as used by `str.format()`)
to format the tick.
The field used for the value must be labeled `x` and the field used
for the position must be labeled `pos`.
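For example, a sketch rendering each tick value with two decimal places
(``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_formatter(StrMethodFormatter('{x:.2f}'))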
"""
def __init__(self, fmt):
self.fmt = fmt
def __call__(self, x, pos=None):
"""
Return the formatted label string.
`x` and `pos` are passed to `str.format` as keyword arguments
with those exact names.
"""
return self.fmt.format(x=x, pos=pos)
class OldScalarFormatter(Formatter):
"""
Tick location is a plain old number.
"""
def __call__(self, x, pos=None):
"""
Return the format for tick val `x` based on the width of the
axis.
The position `pos` is ignored.
"""
xmin, xmax = self.axis.get_view_interval()
d = abs(xmax - xmin)
return self.pprint_val(x, d)
def pprint_val(self, x, d):
"""
Formats the value `x` based on the size of the axis range `d`.
"""
# If the number is not too big and it's an int, format it as an int.
if abs(x) < 1e4 and x == int(x):
return '%d' % x
if d < 1e-2:
fmt = '%1.3e'
elif d < 1e-1:
fmt = '%1.3f'
elif d > 1e5:
fmt = '%1.1e'
elif d > 10:
fmt = '%1.1f'
elif d > 1:
fmt = '%1.2f'
else:
fmt = '%1.3f'
s = fmt % x
tup = s.split('e')
if len(tup) == 2:
mantissa = tup[0].rstrip('0').rstrip('.')
sign = tup[1][0].replace('+', '')
exponent = tup[1][1:].lstrip('0')
s = '%se%s%s' % (mantissa, sign, exponent)
else:
s = s.rstrip('0').rstrip('.')
return s
class ScalarFormatter(Formatter):
"""
Format tick values as a number.
Tick value is interpreted as a plain old number. If
``useOffset==True`` and the data range is much smaller than the data
average, then an offset will be determined such that the tick labels
are meaningful. Scientific notation is used for ``data < 10^-n`` or
``data >= 10^m``, where ``n`` and ``m`` are the power limits set
using ``set_powerlimits((n,m))``. The defaults for these are
controlled by the ``axes.formatter.limits`` rc parameter.
"""
def __init__(self, useOffset=None, useMathText=None, useLocale=None):
# useOffset allows plotting small data ranges with large offsets: for
# example, [1+1e-9, 1+2e-9, 1+3e-9].
# useMathText will render the offset and scientific notation in mathtext.
if useOffset is None:
useOffset = rcParams['axes.formatter.useoffset']
self._offset_threshold = rcParams['axes.formatter.offset_threshold']
self.set_useOffset(useOffset)
self._usetex = rcParams['text.usetex']
if useMathText is None:
useMathText = rcParams['axes.formatter.use_mathtext']
self.set_useMathText(useMathText)
self.orderOfMagnitude = 0
self.format = ''
self._scientific = True
self._powerlimits = rcParams['axes.formatter.limits']
if useLocale is None:
useLocale = rcParams['axes.formatter.use_locale']
self._useLocale = useLocale
def get_useOffset(self):
return self._useOffset
def set_useOffset(self, val):
if val in [True, False]:
self.offset = 0
self._useOffset = val
else:
self._useOffset = False
self.offset = val
useOffset = property(fget=get_useOffset, fset=set_useOffset)
def get_useLocale(self):
return self._useLocale
def set_useLocale(self, val):
if val is None:
self._useLocale = rcParams['axes.formatter.use_locale']
else:
self._useLocale = val
useLocale = property(fget=get_useLocale, fset=set_useLocale)
def get_useMathText(self):
return self._useMathText
def set_useMathText(self, val):
if val is None:
self._useMathText = rcParams['axes.formatter.use_mathtext']
else:
self._useMathText = val
useMathText = property(fget=get_useMathText, fset=set_useMathText)
def fix_minus(self, s):
"""
Replace hyphens with a unicode minus.
"""
if rcParams['text.usetex'] or not rcParams['axes.unicode_minus']:
return s
else:
return s.replace('-', '\N{MINUS SIGN}')
def __call__(self, x, pos=None):
"""
Return the format for tick value `x` at position `pos`.
"""
if len(self.locs) == 0:
return ''
else:
s = self.pprint_val(x)
return self.fix_minus(s)
def set_scientific(self, b):
"""
Turn scientific notation on or off.
.. seealso:: Method :meth:`set_powerlimits`
"""
self._scientific = bool(b)
def set_powerlimits(self, lims):
"""
Sets size thresholds for scientific notation.
``lims`` is a two-element sequence containing the powers of 10
that determine the switchover threshold. Numbers below
``10**lims[0]`` and above ``10**lims[1]`` will be displayed in
scientific notation.
For example, ``formatter.set_powerlimits((-3, 4))`` sets the
pre-2007 default in which scientific notation is used for
numbers less than 1e-3 or greater than 1e4.
.. seealso:: Method :meth:`set_scientific`
"""
if len(lims) != 2:
raise ValueError("'lims' must be a sequence of length 2")
self._powerlimits = lims
def format_data_short(self, value):
"""
Return a short formatted string representation of a number.
"""
if self._useLocale:
return locale.format_string('%-12g', (value,))
else:
return '%-12g' % value
def format_data(self, value):
"""
Return a formatted string representation of a number.
"""
if self._useLocale:
s = locale.format_string('%1.10e', (value,))
else:
s = '%1.10e' % value
s = self._formatSciNotation(s)
return self.fix_minus(s)
def get_offset(self):
"""
Return scientific notation, plus offset.
"""
if len(self.locs) == 0:
return ''
s = ''
if self.orderOfMagnitude or self.offset:
offsetStr = ''
sciNotStr = ''
if self.offset:
offsetStr = self.format_data(self.offset)
if self.offset > 0:
offsetStr = '+' + offsetStr
if self.orderOfMagnitude:
if self._usetex or self._useMathText:
sciNotStr = self.format_data(10 ** self.orderOfMagnitude)
else:
sciNotStr = '1e%d' % self.orderOfMagnitude
if self._useMathText:
if sciNotStr != '':
sciNotStr = r'\times%s' % _mathdefault(sciNotStr)
s = ''.join(('$', sciNotStr, _mathdefault(offsetStr), '$'))
elif self._usetex:
if sciNotStr != '':
sciNotStr = r'\times%s' % sciNotStr
s = ''.join(('$', sciNotStr, offsetStr, '$'))
else:
s = ''.join((sciNotStr, offsetStr))
return self.fix_minus(s)
def set_locs(self, locs):
"""
Set the locations of the ticks.
"""
self.locs = locs
if len(self.locs) > 0:
vmin, vmax = self.axis.get_view_interval()
d = abs(vmax - vmin)
if self._useOffset:
self._compute_offset()
self._set_orderOfMagnitude(d)
self._set_format(vmin, vmax)
def _compute_offset(self):
locs = self.locs
if locs is None or not len(locs):
self.offset = 0
return
# Restrict to visible ticks.
vmin, vmax = sorted(self.axis.get_view_interval())
locs = np.asarray(locs)
locs = locs[(vmin <= locs) & (locs <= vmax)]
if not len(locs):
self.offset = 0
return
lmin, lmax = locs.min(), locs.max()
# Only use offset if there are at least two ticks and every tick has
# the same sign.
if lmin == lmax or lmin <= 0 <= lmax:
self.offset = 0
return
# min, max comparing absolute values (we want division to round towards
# zero so we work on absolute values).
abs_min, abs_max = sorted([abs(float(lmin)), abs(float(lmax))])
sign = math.copysign(1, lmin)
# What is the smallest power of ten such that abs_min and abs_max are
# equal up to that precision?
# Note: Internally using oom instead of 10 ** oom avoids some numerical
# accuracy issues.
oom_max = np.ceil(math.log10(abs_max))
oom = 1 + next(oom for oom in itertools.count(oom_max, -1)
if abs_min // 10 ** oom != abs_max // 10 ** oom)
if (abs_max - abs_min) / 10 ** oom <= 1e-2:
# Handle the case of straddling a multiple of a large power of ten
# (relative to the span).
# What is the smallest power of ten such that abs_min and abs_max
# are no more than 1 apart at that precision?
oom = 1 + next(oom for oom in itertools.count(oom_max, -1)
if abs_max // 10 ** oom - abs_min // 10 ** oom > 1)
# Only use offset if it saves at least _offset_threshold digits.
n = self._offset_threshold - 1
self.offset = (sign * (abs_max // 10 ** oom) * 10 ** oom
if abs_max // 10 ** oom >= 10**n
else 0)
def _set_orderOfMagnitude(self, range):
# if scientific notation is to be used, find the appropriate exponent
# if using a numerical offset, find the exponent after applying the
# offset
if not self._scientific:
self.orderOfMagnitude = 0
return
locs = np.abs(self.locs)
if self.offset:
oom = math.floor(math.log10(range))
else:
if locs[0] > locs[-1]:
val = locs[0]
else:
val = locs[-1]
if val == 0:
oom = 0
else:
oom = math.floor(math.log10(val))
if oom <= self._powerlimits[0]:
self.orderOfMagnitude = oom
elif oom >= self._powerlimits[1]:
self.orderOfMagnitude = oom
else:
self.orderOfMagnitude = 0
def _set_format(self, vmin, vmax):
# set the format string to format all the ticklabels
if len(self.locs) < 2:
# Temporarily augment the locations with the axis end points.
_locs = list(self.locs) + [vmin, vmax]
else:
_locs = self.locs
locs = (np.asarray(_locs) - self.offset) / 10. ** self.orderOfMagnitude
loc_range = np.ptp(locs)
# Curvilinear coordinates can yield two identical points.
if loc_range == 0:
loc_range = np.max(np.abs(locs))
# Both points might be zero.
if loc_range == 0:
loc_range = 1
if len(self.locs) < 2:
# We needed the end points only for the loc_range calculation.
locs = locs[:-2]
loc_range_oom = int(math.floor(math.log10(loc_range)))
# first estimate:
sigfigs = max(0, 3 - loc_range_oom)
# refined estimate:
thresh = 1e-3 * 10 ** loc_range_oom
while sigfigs >= 0:
if np.abs(locs - np.round(locs, decimals=sigfigs)).max() < thresh:
sigfigs -= 1
else:
break
sigfigs += 1
self.format = '%1.' + str(sigfigs) + 'f'
if self._usetex:
self.format = '$%s$' % self.format
elif self._useMathText:
self.format = '$%s$' % _mathdefault(self.format)
def pprint_val(self, x):
xp = (x - self.offset) / (10. ** self.orderOfMagnitude)
if np.abs(xp) < 1e-8:
xp = 0
if self._useLocale:
return locale.format_string(self.format, (xp,))
else:
return self.format % xp
def _formatSciNotation(self, s):
# transform 1e+004 into 1e4, for example
if self._useLocale:
decimal_point = locale.localeconv()['decimal_point']
positive_sign = locale.localeconv()['positive_sign']
else:
decimal_point = '.'
positive_sign = '+'
tup = s.split('e')
try:
significand = tup[0].rstrip('0').rstrip(decimal_point)
sign = tup[1][0].replace(positive_sign, '')
exponent = tup[1][1:].lstrip('0')
if self._useMathText or self._usetex:
if significand == '1' and exponent != '':
# reformat 1x10^y as 10^y
significand = ''
if exponent:
exponent = '10^{%s%s}' % (sign, exponent)
if significand and exponent:
return r'%s{\times}%s' % (significand, exponent)
else:
return r'%s%s' % (significand, exponent)
else:
s = ('%se%s%s' % (significand, sign, exponent)).rstrip('e')
return s
except IndexError:
return s
class LogFormatter(Formatter):
"""
Base class for formatting ticks on a log or symlog scale.
It may be instantiated directly, or subclassed.
Parameters
----------
base : float, optional, default: 10.
Base of the logarithm used in all calculations.
labelOnlyBase : bool, optional, default: False
If True, label ticks only at integer powers of base.
This is normally True for major ticks and False for
minor ticks.
minor_thresholds : (subset, all), optional, default: (1, 0.4)
If labelOnlyBase is False, these two numbers control
the labeling of ticks that are not at integer powers of
base; normally these are the minor ticks. The controlling
parameter is the log of the axis data range. In the typical
case where base is 10 it is the number of decades spanned
by the axis, so we can call it 'numdec'. If ``numdec <= all``,
all minor ticks will be labeled. If ``all < numdec <= subset``,
then only a subset of minor ticks will be labeled, so as to
avoid crowding. If ``numdec > subset`` then no minor ticks will
be labeled.
linthresh : None or float, optional, default: None
If a symmetric log scale is in use, its ``linthresh``
parameter must be supplied here.
Notes
-----
The `set_locs` method must be called to enable the subsetting
logic controlled by the ``minor_thresholds`` parameter.
In some cases such as the colorbar, there is no distinction between
major and minor ticks; the tick locations might be set manually,
or by a locator that puts ticks at integer powers of base and
at intermediate locations. For this situation, disable the
minor_thresholds logic by using ``minor_thresholds=(np.inf, np.inf)``,
so that all ticks will be labeled.
To disable labeling of minor ticks when 'labelOnlyBase' is False,
use ``minor_thresholds=(0, 0)``. This is the default for the
"classic" style.
Examples
--------
To label a subset of minor ticks when the view limits span up
to 2 decades, and all of the ticks when zoomed in to 0.5 decades
or less, use ``minor_thresholds=(2, 0.5)``.
To label all minor ticks when the view limits span up to 1.5
decades, use ``minor_thresholds=(1.5, 1.5)``.
"""
def __init__(self, base=10.0, labelOnlyBase=False,
minor_thresholds=None,
linthresh=None):
self._base = float(base)
self.labelOnlyBase = labelOnlyBase
if minor_thresholds is None:
if rcParams['_internal.classic_mode']:
minor_thresholds = (0, 0)
else:
minor_thresholds = (1, 0.4)
self.minor_thresholds = minor_thresholds
self._sublabels = None
self._linthresh = linthresh
def base(self, base):
"""
Change the `base` for labeling.
.. warning::
Should always match the base used for :class:`LogLocator`
"""
self._base = base
def label_minor(self, labelOnlyBase):
"""
Switch minor tick labeling on or off.
Parameters
----------
labelOnlyBase : bool
If True, label ticks only at integer powers of base.
"""
self.labelOnlyBase = labelOnlyBase
def set_locs(self, locs=None):
"""
Use axis view limits to control which ticks are labeled.
The ``locs`` parameter is ignored in the present algorithm.
"""
if np.isinf(self.minor_thresholds[0]):
self._sublabels = None
return
# Handle symlog case:
linthresh = self._linthresh
if linthresh is None:
try:
linthresh = self.axis.get_transform().linthresh
except AttributeError:
pass
vmin, vmax = self.axis.get_view_interval()
if vmin > vmax:
vmin, vmax = vmax, vmin
if linthresh is None and vmin <= 0:
# It's probably a colorbar with
# a format kwarg setting a LogFormatter in the manner
# that worked with 1.5.x, but that doesn't work now.
self._sublabels = set((1,)) # label powers of base
return
b = self._base
if linthresh is not None: # symlog
# Only compute the number of decades in the logarithmic part of the
# axis
numdec = 0
if vmin < -linthresh:
rhs = min(vmax, -linthresh)
numdec += math.log(vmin / rhs) / math.log(b)
if vmax > linthresh:
lhs = max(vmin, linthresh)
numdec += math.log(vmax / lhs) / math.log(b)
else:
vmin = math.log(vmin) / math.log(b)
vmax = math.log(vmax) / math.log(b)
numdec = abs(vmax - vmin)
if numdec > self.minor_thresholds[0]:
# Label only bases
self._sublabels = {1}
elif numdec > self.minor_thresholds[1]:
# Add labels between bases at log-spaced coefficients;
# include base powers in case the locations include
# "major" and "minor" points, as in colorbar.
c = np.logspace(0, 1, int(b)//2 + 1, base=b)
self._sublabels = set(np.round(c))
# For base 10, this yields (1, 2, 3, 4, 6, 10).
else:
# Label all integer multiples of base**n.
self._sublabels = set(np.arange(1, b + 1))
def _num_to_string(self, x, vmin, vmax):
if x > 10000:
s = '%1.0e' % x
elif x < 1:
s = '%1.0e' % x
else:
s = self.pprint_val(x, vmax - vmin)
return s
def __call__(self, x, pos=None):
"""
Return the format for tick val `x`.
"""
if x == 0.0: # Symlog
return '0'
x = abs(x)
b = self._base
# only label the decades
fx = math.log(x) / math.log(b)
is_x_decade = is_close_to_int(fx)
exponent = np.round(fx) if is_x_decade else np.floor(fx)
coeff = np.round(x / b ** exponent)
if self.labelOnlyBase and not is_x_decade:
return ''
if self._sublabels is not None and coeff not in self._sublabels:
return ''
vmin, vmax = self.axis.get_view_interval()
vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05)
s = self._num_to_string(x, vmin, vmax)
return self.fix_minus(s)
def format_data(self, value):
b = self.labelOnlyBase
self.labelOnlyBase = False
value = cbook.strip_math(self.__call__(value))
self.labelOnlyBase = b
return value
def format_data_short(self, value):
"""
Return a short formatted string representation of a number.
"""
return '%-12g' % value
def pprint_val(self, x, d):
# If the number is not too big and it's an int, format it as an int.
if abs(x) < 1e4 and x == int(x):
return '%d' % x
if d < 1e-2:
fmt = '%1.3e'
elif d < 1e-1:
fmt = '%1.3f'
elif d > 1e5:
fmt = '%1.1e'
elif d > 10:
fmt = '%1.1f'
elif d > 1:
fmt = '%1.2f'
else:
fmt = '%1.3f'
s = fmt % x
tup = s.split('e')
if len(tup) == 2:
mantissa = tup[0].rstrip('0').rstrip('.')
exponent = int(tup[1])
if exponent:
s = '%se%d' % (mantissa, exponent)
else:
s = mantissa
else:
s = s.rstrip('0').rstrip('.')
return s
class LogFormatterExponent(LogFormatter):
"""
Format values for log axis using ``exponent = log_base(value)``.
"""
def _num_to_string(self, x, vmin, vmax):
fx = math.log(x) / math.log(self._base)
if abs(fx) > 10000:
s = '%1.0g' % fx
elif abs(fx) < 1:
s = '%1.0g' % fx
else:
fd = math.log(vmax - vmin) / math.log(self._base)
s = self.pprint_val(fx, fd)
return s
class LogFormatterMathtext(LogFormatter):
"""
Format values for log axis using ``exponent = log_base(value)``.
"""
def _non_decade_format(self, sign_string, base, fx, usetex):
'Return string for non-decade locations'
if usetex:
return (r'$%s%s^{%.2f}$') % (sign_string, base, fx)
else:
return ('$%s$' % _mathdefault('%s%s^{%.2f}' %
(sign_string, base, fx)))
def __call__(self, x, pos=None):
"""
Return the format for tick value `x`.
The position `pos` is ignored.
"""
usetex = rcParams['text.usetex']
min_exp = rcParams['axes.formatter.min_exponent']
if x == 0: # Symlog
if usetex:
return '$0$'
else:
return '$%s$' % _mathdefault('0')
sign_string = '-' if x < 0 else ''
x = abs(x)
b = self._base
# only label the decades
fx = math.log(x) / math.log(b)
is_x_decade = is_close_to_int(fx)
exponent = np.round(fx) if is_x_decade else np.floor(fx)
coeff = np.round(x / b ** exponent)
if is_x_decade:
fx = nearest_long(fx)
if self.labelOnlyBase and not is_x_decade:
return ''
if self._sublabels is not None and coeff not in self._sublabels:
return ''
# use string formatting of the base if it is not an integer
if b % 1 == 0.0:
base = '%d' % b
else:
base = '%s' % b
if np.abs(fx) < min_exp:
if usetex:
return r'${0}{1:g}$'.format(sign_string, x)
else:
return '${0}$'.format(_mathdefault(
'{0}{1:g}'.format(sign_string, x)))
elif not is_x_decade:
return self._non_decade_format(sign_string, base, fx, usetex)
else:
if usetex:
return (r'$%s%s^{%d}$') % (sign_string,
base,
nearest_long(fx))
else:
return ('$%s$' % _mathdefault(
'%s%s^{%d}' %
(sign_string, base, nearest_long(fx))))
class LogFormatterSciNotation(LogFormatterMathtext):
"""
Format values following scientific notation in a logarithmic axis
"""
def _non_decade_format(self, sign_string, base, fx, usetex):
'Return string for non-decade locations'
b = float(base)
exponent = math.floor(fx)
coeff = b ** fx / b ** exponent
if is_close_to_int(coeff):
coeff = nearest_long(coeff)
if usetex:
return (r'$%s%g\times%s^{%d}$') % \
(sign_string, coeff, base, exponent)
else:
return ('$%s$' % _mathdefault(r'%s%g\times%s^{%d}' %
(sign_string, coeff, base, exponent)))
class LogitFormatter(Formatter):
"""
Probability formatter (using Math text).
"""
def __call__(self, x, pos=None):
s = ''
if 0.01 <= x <= 0.99:
s = '{:.2f}'.format(x)
elif x < 0.01:
if is_decade(x):
s = '$10^{{{:.0f}}}$'.format(np.log10(x))
else:
s = '${:.5f}$'.format(x)
else: # x > 0.99
if is_decade(1-x):
s = '$1-10^{{{:.0f}}}$'.format(np.log10(1-x))
else:
s = '$1-{:.5f}$'.format(1-x)
return s
def format_data_short(self, value):
'return a short formatted string representation of a number'
return '%-12g' % value
class EngFormatter(Formatter):
"""
Formats axis values using engineering prefixes to represent powers
of 1000, plus a specified unit, e.g., 10 MHz instead of 1e7.
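For example, a sketch labeling a frequency axis in hertz with one digit
after the decimal point (``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_formatter(EngFormatter(unit='Hz', places=1))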
"""
# The SI engineering prefixes
ENG_PREFIXES = {
-24: "y",
-21: "z",
-18: "a",
-15: "f",
-12: "p",
-9: "n",
-6: "\N{GREEK SMALL LETTER MU}",
-3: "m",
0: "",
3: "k",
6: "M",
9: "G",
12: "T",
15: "P",
18: "E",
21: "Z",
24: "Y"
}
def __init__(self, unit="", places=None, sep=" "):
"""
Parameters
----------
unit : str (default: "")
Unit symbol to use, suitable for use with single-letter
representations of powers of 1000. For example, 'Hz' or 'm'.
places : int (default: None)
Precision with which to display the number, specified in
digits after the decimal point (there will be between one
and three digits before the decimal point). If it is None,
the formatting falls back to the floating point format '%g',
which displays up to 6 *significant* digits, i.e. the equivalent
value for *places* varies between 0 and 5 (inclusive).
sep : str (default: " ")
Separator used between the value and the prefix/unit. For
example, one gets '3.14 mV' if ``sep`` is " " (default) and
'3.14mV' if ``sep`` is "". Besides the default behavior, some
other useful options may be:
* ``sep=""`` to append directly the prefix/unit to the value;
* ``sep="\\N{THIN SPACE}"`` (``U+2009``);
* ``sep="\\N{NARROW NO-BREAK SPACE}"`` (``U+202F``);
* ``sep="\\N{NO-BREAK SPACE}"`` (``U+00A0``).
"""
self.unit = unit
self.places = places
self.sep = sep
def __call__(self, x, pos=None):
s = "%s%s" % (self.format_eng(x), self.unit)
# Remove the trailing separator when there is neither prefix nor unit
if len(self.sep) > 0 and s.endswith(self.sep):
s = s[:-len(self.sep)]
return self.fix_minus(s)
def format_eng(self, num):
"""
Formats a number in engineering notation, appending a letter
representing the power of 1000 of the original number.
Some examples:
>>> format_eng(0) # for self.places = 0
'0'
>>> format_eng(1000000) # for self.places = 1
'1.0 M'
>>> format_eng("-1e-6") # for self.places = 2
u'-1.00 \N{GREEK SMALL LETTER MU}'
`num` may be a numeric value or a string that can be converted
to a numeric value with ``float(num)``.
"""
if isinstance(num, six.string_types):
warnings.warn(
"Passing a string as *num* argument is deprecated since"
"Matplotlib 2.1, and is expected to be removed in 2.3.",
mplDeprecation)
dnum = float(num)
sign = 1
fmt = "g" if self.places is None else ".{:d}f".format(self.places)
if dnum < 0:
sign = -1
dnum = -dnum
if dnum != 0:
pow10 = int(math.floor(math.log10(dnum) / 3) * 3)
else:
pow10 = 0
# Force dnum to zero, to avoid inconsistencies like
# format_eng(-0) = "0" and format_eng(0.0) = "0"
# but format_eng(-0.0) = "-0.0"
dnum = 0.0
pow10 = np.clip(pow10, min(self.ENG_PREFIXES), max(self.ENG_PREFIXES))
mant = sign * dnum / (10.0 ** pow10)
# Taking care of the cases like 999.9..., which
# may be rounded to 1000 instead of 1 k. Beware
# of the corner case of values that are beyond
# the range of SI prefixes (i.e. > 'Y').
_fmant = float("{mant:{fmt}}".format(mant=mant, fmt=fmt))
if _fmant >= 1000 and pow10 != max(self.ENG_PREFIXES):
mant /= 1000
pow10 += 3
prefix = self.ENG_PREFIXES[int(pow10)]
formatted = "{mant:{fmt}}{sep}{prefix}".format(
mant=mant, sep=self.sep, prefix=prefix, fmt=fmt)
return formatted
class PercentFormatter(Formatter):
"""
Format numbers as a percentage.
How the number is converted into a percentage is determined by the
`xmax` parameter. `xmax` is the data value that corresponds to 100%.
Percentages are computed as ``x / xmax * 100``. So if the data is
already scaled to be percentages, `xmax` will be 100. Another common
situation is where `xmax` is 1.0.
`symbol` is a string which will be appended to the label. It may be
`None` or empty to indicate that no symbol should be used. LaTeX
special characters are escaped in `symbol` whenever latex mode is
enabled, unless `is_latex` is `True`.
`decimals` is the number of decimal places to place after the point.
If it is set to `None` (the default), the number will be computed
automatically.
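For example, a sketch formatting data that lies in [0, 1] as whole
percentages (``ax`` is assumed to be an existing Axes)::

    ax.yaxis.set_major_formatter(PercentFormatter(xmax=1.0, decimals=0))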
"""
def __init__(self, xmax=100, decimals=None, symbol='%', is_latex=False):
self.xmax = xmax + 0.0
self.decimals = decimals
self._symbol = symbol
self._is_latex = is_latex
def __call__(self, x, pos=None):
"""
Formats the tick as a percentage with the appropriate scaling.
"""
ax_min, ax_max = self.axis.get_view_interval()
display_range = abs(ax_max - ax_min)
return self.fix_minus(self.format_pct(x, display_range))
def format_pct(self, x, display_range):
"""
Formats the number as a percentage number with the correct
number of decimals and adds the percent symbol, if any.
If `self.decimals` is `None`, the number of digits after the
decimal point is set based on the `display_range` of the axis
as follows:
+---------------+----------+------------------------+
| display_range | decimals | sample |
+---------------+----------+------------------------+
| >50 | 0 | ``x = 34.5`` => 35% |
+---------------+----------+------------------------+
| >5 | 1 | ``x = 34.5`` => 34.5% |
+---------------+----------+------------------------+
| >0.5 | 2 | ``x = 34.5`` => 34.50% |
+---------------+----------+------------------------+
| ... | ... | ... |
+---------------+----------+------------------------+
This method will not be very good for tiny axis ranges or
extremely large ones. It assumes that the values on the chart
are percentages displayed on a reasonable scale.
"""
x = self.convert_to_pct(x)
if self.decimals is None:
# conversion works because display_range is a difference
scaled_range = self.convert_to_pct(display_range)
if scaled_range <= 0:
decimals = 0
else:
# Luckily Python's built-in ceil rounds to +inf, not away from
# zero. This is very important since the equation for decimals
# starts out as `scaled_range > 0.5 * 10**(2 - decimals)`
# and ends up with `decimals > 2 - log10(2 * scaled_range)`.
decimals = math.ceil(2.0 - math.log10(2.0 * scaled_range))
if decimals > 5:
decimals = 5
elif decimals < 0:
decimals = 0
else:
decimals = self.decimals
s = '{x:0.{decimals}f}'.format(x=x, decimals=int(decimals))
return s + self.symbol
def convert_to_pct(self, x):
return 100.0 * (x / self.xmax)
@property
def symbol(self):
"""
The configured percent symbol as a string.
If LaTeX is enabled via :rc:`text.usetex`, the special characters
``{'#', '$', '%', '&', '~', '_', '^', '\\', '{', '}'}`` are
automatically escaped in the string.
"""
symbol = self._symbol
if not symbol:
symbol = ''
elif rcParams['text.usetex'] and not self._is_latex:
# Source: http://www.personal.ceu.hu/tex/specchar.htm
# Backslash must be first for this to work correctly since
# it keeps getting added in
for spec in r'\#$%&~_^{}':
symbol = symbol.replace(spec, '\\' + spec)
return symbol
@symbol.setter
def symbol(self, symbol):
self._symbol = symbol
class Locator(TickHelper):
"""
Determine the tick locations.
Note: you should not use the same locator between different
:class:`~matplotlib.axis.Axis` instances, because the locator stores
references to the Axis data and view limits.
"""
# Some automatic tick locators can generate so many ticks they
# kill the machine when you try and render them.
# This parameter is set to cause locators to raise an error if too
# many ticks are generated.
MAXTICKS = 1000
def tick_values(self, vmin, vmax):
"""
Return the values of the located ticks given **vmin** and **vmax**.
.. note::
To get tick locations with the vmin and vmax values defined
automatically for the associated :attr:`axis` simply call
the Locator instance::
>>> print(type(loc))
<type 'Locator'>
>>> print(loc())
[1, 2, 3, 4]
"""
raise NotImplementedError('Derived must override')
def set_params(self, **kwargs):
"""
Do nothing, and raise a warning. Any locator class not supporting the
set_params() function will call this.
"""
warnings.warn("'set_params()' not defined for locator of type " +
str(type(self)))
def __call__(self):
"""Return the locations of the ticks"""
# note: some locators return data limits, others return view limits,
# hence there is no *one* interface to call self.tick_values.
raise NotImplementedError('Derived must override')
def raise_if_exceeds(self, locs):
"""raise a RuntimeError if Locator attempts to create more than
MAXTICKS locs"""
if len(locs) >= self.MAXTICKS:
raise RuntimeError("Locator attempting to generate {} ticks from "
"{} to {}: exceeds Locator.MAXTICKS".format(
len(locs), locs[0], locs[-1]))
return locs
def view_limits(self, vmin, vmax):
"""
select a scale for the range from vmin to vmax
Normally this method is overridden by subclasses to
change locator behaviour.
"""
return mtransforms.nonsingular(vmin, vmax)
def autoscale(self):
"""autoscale the view limits"""
return self.view_limits(*self.axis.get_view_interval())
def pan(self, numsteps):
"""Pan numticks (can be positive or negative)"""
ticks = self()
numticks = len(ticks)
vmin, vmax = self.axis.get_view_interval()
vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05)
if numticks > 2:
step = numsteps * abs(ticks[0] - ticks[1])
else:
d = abs(vmax - vmin)
step = numsteps * d / 6.
vmin += step
vmax += step
self.axis.set_view_interval(vmin, vmax, ignore=True)
def zoom(self, direction):
"Zoom in/out on axis; if direction is >0 zoom in, else zoom out"
vmin, vmax = self.axis.get_view_interval()
vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05)
interval = abs(vmax - vmin)
step = 0.1 * interval * direction
self.axis.set_view_interval(vmin + step, vmax - step, ignore=True)
def refresh(self):
"""refresh internal information based on current lim"""
pass
class IndexLocator(Locator):
"""
Place a tick on every multiple of some base number of points
plotted, e.g., on every 5th point. It is assumed that you are doing
index plotting; i.e., the axis runs from 0 to len(data). This is mainly
useful for x ticks.
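For example, a sketch placing a tick on every 5th data point, starting
at the first one (``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_locator(IndexLocator(base=5, offset=0))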
"""
def __init__(self, base, offset):
'place ticks on the i-th data points where (i-offset)%base==0'
self._base = base
self.offset = offset
def set_params(self, base=None, offset=None):
"""Set parameters within this locator"""
if base is not None:
self._base = base
if offset is not None:
self.offset = offset
def __call__(self):
"""Return the locations of the ticks"""
dmin, dmax = self.axis.get_data_interval()
return self.tick_values(dmin, dmax)
def tick_values(self, vmin, vmax):
return self.raise_if_exceeds(
np.arange(vmin + self.offset, vmax + 1, self._base))
class FixedLocator(Locator):
"""
Tick locations are fixed. If nbins is not None,
the array of possible positions will be subsampled to
keep the number of ticks <= nbins + 1.
The subsampling will be done so as to include the smallest
absolute value; for example, if zero is included in the
array of possibilities, then it is guaranteed to be one of
the chosen ticks.
"""
def __init__(self, locs, nbins=None):
self.locs = np.asarray(locs)
self.nbins = nbins
if self.nbins is not None:
self.nbins = max(self.nbins, 2)
def set_params(self, nbins=None):
"""Set parameters within this locator."""
if nbins is not None:
self.nbins = nbins
def __call__(self):
return self.tick_values(None, None)
def tick_values(self, vmin, vmax):
""""
Return the locations of the ticks.
.. note::
Because the values are fixed, vmin and vmax are not used in this
method.
"""
if self.nbins is None:
return self.locs
step = max(int(np.ceil(len(self.locs) / self.nbins)), 1)
ticks = self.locs[::step]
for i in range(1, step):
ticks1 = self.locs[i::step]
if np.abs(ticks1).min() < np.abs(ticks).min():
ticks = ticks1
return self.raise_if_exceeds(ticks)
class NullLocator(Locator):
"""
No ticks
"""
def __call__(self):
return self.tick_values(None, None)
def tick_values(self, vmin, vmax):
""""
Return the locations of the ticks.
.. note::
Because the values are Null, vmin and vmax are not used in this
method.
"""
return []
class LinearLocator(Locator):
"""
Determine the tick locations.
The first time this function is called, it will try to set the
number of ticks to make a nice tick partitioning. Thereafter, the
number of ticks will be fixed so that interactive navigation will
be nice.
"""
def __init__(self, numticks=None, presets=None):
"""
Use presets to set locs based on the view limits; a dict mapping (vmin, vmax) -> locs.
"""
self.numticks = numticks
if presets is None:
self.presets = {}
else:
self.presets = presets
def set_params(self, numticks=None, presets=None):
"""Set parameters within this locator."""
if presets is not None:
self.presets = presets
if numticks is not None:
self.numticks = numticks
def __call__(self):
'Return the locations of the ticks'
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05)
if vmax < vmin:
vmin, vmax = vmax, vmin
if (vmin, vmax) in self.presets:
return self.presets[(vmin, vmax)]
if self.numticks is None:
self._set_numticks()
if self.numticks == 0:
return []
ticklocs = np.linspace(vmin, vmax, self.numticks)
return self.raise_if_exceeds(ticklocs)
def _set_numticks(self):
self.numticks = 11 # todo; be smart here; this is just for dev
def view_limits(self, vmin, vmax):
'Try to choose the view limits intelligently'
if vmax < vmin:
vmin, vmax = vmax, vmin
if vmin == vmax:
vmin -= 1
vmax += 1
if rcParams['axes.autolimit_mode'] == 'round_numbers':
exponent, remainder = _divmod(
math.log10(vmax - vmin), math.log10(max(self.numticks - 1, 1)))
exponent -= (remainder < .5)
scale = max(self.numticks - 1, 1) ** (-exponent)
vmin = math.floor(scale * vmin) / scale
vmax = math.ceil(scale * vmax) / scale
return mtransforms.nonsingular(vmin, vmax)
def closeto(x, y):
if abs(x - y) < 1e-10:
return True
else:
return False
class Base(object):
'this solution has some hacks to deal with floating point inaccuracies'
def __init__(self, base):
if base <= 0:
raise ValueError("'base' must be positive")
self._base = base
def lt(self, x):
'return the largest multiple of base < x'
d, m = _divmod(x, self._base)
if closeto(m, 0) and not closeto(m / self._base, 1):
return (d - 1) * self._base
return d * self._base
def le(self, x):
'return the largest multiple of base <= x'
d, m = _divmod(x, self._base)
if closeto(m / self._base, 1): # was closeto(m, self._base)
#looks like floating point error
return (d + 1) * self._base
return d * self._base
def gt(self, x):
'return the smallest multiple of base > x'
d, m = _divmod(x, self._base)
if closeto(m / self._base, 1):
#looks like floating point error
return (d + 2) * self._base
return (d + 1) * self._base
def ge(self, x):
'return the smallest multiple of base >= x'
d, m = _divmod(x, self._base)
if closeto(m, 0) and not closeto(m / self._base, 1):
return d * self._base
return (d + 1) * self._base
def get_base(self):
return self._base
class MultipleLocator(Locator):
"""
Set a tick on every integer that is a multiple of the base in the
view interval.
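For example, a sketch placing a tick at every multiple of 20
(``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_locator(MultipleLocator(20))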
"""
def __init__(self, base=1.0):
self._base = Base(base)
def set_params(self, base):
"""Set parameters within this locator."""
if base is not None:
self._base = base
def __call__(self):
'Return the locations of the ticks'
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
if vmax < vmin:
vmin, vmax = vmax, vmin
vmin = self._base.ge(vmin)
base = self._base.get_base()
n = (vmax - vmin + 0.001 * base) // base
locs = vmin - base + np.arange(n + 3) * base
return self.raise_if_exceeds(locs)
def view_limits(self, dmin, dmax):
"""
Set the view limits to the nearest multiples of base that
contain the data
"""
if rcParams['axes.autolimit_mode'] == 'round_numbers':
vmin = self._base.le(dmin)
vmax = self._base.ge(dmax)
if vmin == vmax:
vmin -= 1
vmax += 1
else:
vmin = dmin
vmax = dmax
return mtransforms.nonsingular(vmin, vmax)
def scale_range(vmin, vmax, n=1, threshold=100):
dv = abs(vmax - vmin) # > 0 as nonsingular is called before.
meanv = (vmax + vmin) / 2
if abs(meanv) / dv < threshold:
offset = 0
else:
offset = math.copysign(10 ** (math.log10(abs(meanv)) // 1), meanv)
scale = 10 ** (math.log10(dv / n) // 1)
return scale, offset
class MaxNLocator(Locator):
"""
Select no more than N intervals at nice locations.
"""
default_params = dict(nbins=10,
steps=None,
integer=False,
symmetric=False,
prune=None,
min_n_ticks=2)
def __init__(self, *args, **kwargs):
"""
Keyword args:
*nbins*
Maximum number of intervals; one less than max number of
ticks. If the string `'auto'`, the number of bins will be
automatically determined based on the length of the axis.
*steps*
Sequence of nice numbers starting with 1 and ending with 10;
e.g., [1, 2, 4, 5, 10], where the values are acceptable
tick multiples. For the example, 20, 40, 60 would be
an acceptable set of ticks, as would 0.4, 0.6, 0.8, because
they are multiples of 2. However, 30, 60, 90 would not
be allowed because 3 does not appear in the list of steps.
*integer*
If True, ticks will take only integer values, provided
at least `min_n_ticks` integers are found within the
view limits.
*symmetric*
If True, autoscaling will result in a range symmetric
about zero.
*prune*
['lower' | 'upper' | 'both' | None]
Remove edge ticks -- useful for stacked or ganged plots where
the upper tick of one axes overlaps with the lower tick of the
axes above it, primarily when :rc:`axes.autolimit_mode` is
``'round_numbers'``. If ``prune=='lower'``, the smallest tick will
be removed. If ``prune == 'upper'``, the largest tick will be
removed. If ``prune == 'both'``, the largest and smallest ticks
will be removed. If ``prune == None``, no ticks will be removed.
*min_n_ticks*
Relax `nbins` and `integer` constraints if necessary to
obtain this minimum number of ticks.
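For example, a sketch requesting at most four intervals at integer
locations (``ax`` is assumed to be an existing Axes)::

    ax.xaxis.set_major_locator(MaxNLocator(nbins=4, integer=True))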
"""
if args:
kwargs['nbins'] = args[0]
if len(args) > 1:
raise ValueError(
"Keywords are required for all arguments except 'nbins'")
self.set_params(**self.default_params)
self.set_params(**kwargs)
@staticmethod
def _validate_steps(steps):
if not np.iterable(steps):
raise ValueError('steps argument must be a sequence of numbers '
'from 1 to 10')
steps = np.asarray(steps)
if np.any(np.diff(steps) <= 0):
raise ValueError('steps argument must be uniformly increasing')
if steps[-1] > 10 or steps[0] < 1:
warnings.warn('Steps argument should be a sequence of numbers\n'
'increasing from 1 to 10, inclusive. Behavior with\n'
'values outside this range is undefined, and will\n'
'raise a ValueError in future versions of mpl.')
if steps[0] != 1:
steps = np.hstack((1, steps))
if steps[-1] != 10:
steps = np.hstack((steps, 10))
return steps
@staticmethod
def _staircase(steps):
# Make an extended staircase within which the needed
# step will be found. This is probably much larger
# than necessary.
flights = (0.1 * steps[:-1], steps, 10 * steps[1])
return np.hstack(flights)
def set_params(self, **kwargs):
"""Set parameters within this locator."""
if 'nbins' in kwargs:
self._nbins = kwargs['nbins']
if self._nbins != 'auto':
self._nbins = int(self._nbins)
if 'symmetric' in kwargs:
self._symmetric = kwargs['symmetric']
if 'prune' in kwargs:
prune = kwargs['prune']
if prune is not None and prune not in ['upper', 'lower', 'both']:
raise ValueError(
"prune must be 'upper', 'lower', 'both', or None")
self._prune = prune
if 'min_n_ticks' in kwargs:
self._min_n_ticks = max(1, kwargs['min_n_ticks'])
if 'steps' in kwargs:
steps = kwargs['steps']
if steps is None:
self._steps = np.array([1, 1.5, 2, 2.5, 3, 4, 5, 6, 8, 10])
else:
self._steps = self._validate_steps(steps)
self._extended_steps = self._staircase(self._steps)
if 'integer' in kwargs:
self._integer = kwargs['integer']
def _raw_ticks(self, vmin, vmax):
if self._nbins == 'auto':
if self.axis is not None:
nbins = np.clip(self.axis.get_tick_space(),
max(1, self._min_n_ticks - 1), 9)
else:
nbins = 9
else:
nbins = self._nbins
scale, offset = scale_range(vmin, vmax, nbins)
_vmin = vmin - offset
_vmax = vmax - offset
raw_step = (vmax - vmin) / nbins
steps = self._extended_steps * scale
if self._integer:
# For steps > 1, keep only integer values.
igood = (steps < 1) | (np.abs(steps - np.round(steps)) < 0.001)
steps = steps[igood]
istep = np.nonzero(steps >= raw_step)[0][0]
# Classic round_numbers mode may require a larger step.
if rcParams['axes.autolimit_mode'] == 'round_numbers':
for istep in range(istep, len(steps)):
step = steps[istep]
best_vmin = (_vmin // step) * step
best_vmax = best_vmin + step * nbins
if (best_vmax >= _vmax):
break
# This is an upper limit; move to smaller steps if necessary.
for i in range(istep):
step = steps[istep - i]
if (self._integer and
np.floor(_vmax) - np.ceil(_vmin) >= self._min_n_ticks - 1):
step = max(1, step)
best_vmin = (_vmin // step) * step
low = np.round(Base(step).le(_vmin - best_vmin) / step)
high = np.round(Base(step).ge(_vmax - best_vmin) / step)
ticks = np.arange(low, high + 1) * step + best_vmin + offset
nticks = ((ticks <= vmax) & (ticks >= vmin)).sum()
if nticks >= self._min_n_ticks:
break
return ticks
def __call__(self):
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
if self._symmetric:
vmax = max(abs(vmin), abs(vmax))
vmin = -vmax
vmin, vmax = mtransforms.nonsingular(
vmin, vmax, expander=1e-13, tiny=1e-14)
locs = self._raw_ticks(vmin, vmax)
prune = self._prune
if prune == 'lower':
locs = locs[1:]
elif prune == 'upper':
locs = locs[:-1]
elif prune == 'both':
locs = locs[1:-1]
return self.raise_if_exceeds(locs)
def view_limits(self, dmin, dmax):
if self._symmetric:
dmax = max(abs(dmin), abs(dmax))
dmin = -dmax
dmin, dmax = mtransforms.nonsingular(
dmin, dmax, expander=1e-12, tiny=1e-13)
if rcParams['axes.autolimit_mode'] == 'round_numbers':
return self._raw_ticks(dmin, dmax)[[0, -1]]
else:
return dmin, dmax
def decade_down(x, base=10):
'floor x to the nearest lower decade'
if x == 0.0:
return -base
lx = np.floor(np.log(x) / np.log(base))
return base ** lx
def decade_up(x, base=10):
'ceil x to the nearest higher decade'
if x == 0.0:
return base
lx = np.ceil(np.log(x) / np.log(base))
return base ** lx
def nearest_long(x):
if x == 0:
return long(0)
elif x > 0:
return long(x + 0.5)
else:
return long(x - 0.5)
def is_decade(x, base=10):
if not np.isfinite(x):
return False
if x == 0.0:
return True
lx = np.log(np.abs(x)) / np.log(base)
return is_close_to_int(lx)
def is_close_to_int(x):
if not np.isfinite(x):
return False
return abs(x - nearest_long(x)) < 1e-10
class LogLocator(Locator):
"""
Determine the tick locations for log axes
"""
def __init__(self, base=10.0, subs=(1.0,), numdecs=4, numticks=None):
"""
Place ticks on the locations ``subs[j] * base**i``.
Parameters
----------
subs : None, string, or sequence of float, optional, default (1.0,)
Gives the multiples of integer powers of the base at which
to place ticks. The default places ticks only at
integer powers of the base.
The permitted string values are ``'auto'`` and ``'all'``,
both of which use an algorithm based on the axis view
limits to determine whether and how to put ticks between
integer powers of the base. With ``'auto'``, ticks are
placed only between integer powers; with ``'all'``, the
integer powers are included. A value of None is
equivalent to ``'auto'``.
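For example, a sketch placing minor ticks at 2x and 5x each power of
ten (``ax`` is assumed to be an existing Axes)::

    ax.yaxis.set_minor_locator(LogLocator(base=10.0, subs=(2.0, 5.0)))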
"""
if numticks is None:
if rcParams['_internal.classic_mode']:
numticks = 15
else:
numticks = 'auto'
self.base(base)
self.subs(subs)
self.numdecs = numdecs
self.numticks = numticks
def set_params(self, base=None, subs=None, numdecs=None, numticks=None):
"""Set parameters within this locator."""
if base is not None:
self.base(base)
if subs is not None:
self.subs(subs)
if numdecs is not None:
self.numdecs = numdecs
if numticks is not None:
self.numticks = numticks
# FIXME: these base and subs functions are contrary to our
# usual and desired API.
def base(self, base):
"""
set the base of the log scaling (major tick every base**i, i integer)
"""
self._base = float(base)
def subs(self, subs):
"""
set the minor ticks for the log scaling every base**i*subs[j]
"""
if subs is None: # consistency with previous bad API
self._subs = 'auto'
elif isinstance(subs, six.string_types):
if subs not in ('all', 'auto'):
raise ValueError("A subs string must be 'all' or 'auto'; "
"found '%s'." % subs)
self._subs = subs
else:
self._subs = np.asarray(subs, dtype=float)
def __call__(self):
'Return the locations of the ticks'
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
if self.numticks == 'auto':
if self.axis is not None:
numticks = np.clip(self.axis.get_tick_space(), 2, 9)
else:
numticks = 9
else:
numticks = self.numticks
b = self._base
# dummy axis has no axes attribute
if hasattr(self.axis, 'axes') and self.axis.axes.name == 'polar':
vmax = math.ceil(math.log(vmax) / math.log(b))
decades = np.arange(vmax - self.numdecs, vmax)
ticklocs = b ** decades
return ticklocs
if vmin <= 0.0:
if self.axis is not None:
vmin = self.axis.get_minpos()
if vmin <= 0.0 or not np.isfinite(vmin):
raise ValueError(
"Data has no positive values, and therefore can not be "
"log-scaled.")
vmin = math.log(vmin) / math.log(b)
vmax = math.log(vmax) / math.log(b)
if vmax < vmin:
vmin, vmax = vmax, vmin
numdec = math.floor(vmax) - math.ceil(vmin)
if isinstance(self._subs, six.string_types):
_first = 2.0 if self._subs == 'auto' else 1.0
if numdec > 10 or b < 3:
if self._subs == 'auto':
return np.array([]) # no minor or major ticks
else:
subs = np.array([1.0]) # major ticks
else:
subs = np.arange(_first, b)
else:
subs = self._subs
stride = 1
if rcParams['_internal.classic_mode']:
# Leave the bug left over from the PY2-PY3 transition.
while numdec / stride + 1 > numticks:
stride += 1
else:
while numdec // stride + 1 > numticks:
stride += 1
# Does subs include anything other than 1?
        have_subs = len(subs) > 1 or (len(subs) == 1 and subs[0] != 1.0)
decades = np.arange(math.floor(vmin) - stride,
math.ceil(vmax) + 2 * stride, stride)
if hasattr(self, '_transform'):
ticklocs = self._transform.inverted().transform(decades)
if have_subs:
if stride == 1:
ticklocs = np.ravel(np.outer(subs, ticklocs))
else:
ticklocs = []
else:
if have_subs:
ticklocs = []
if stride == 1:
for decadeStart in b ** decades:
ticklocs.extend(subs * decadeStart)
else:
ticklocs = b ** decades
return self.raise_if_exceeds(np.asarray(ticklocs))
def view_limits(self, vmin, vmax):
'Try to choose the view limits intelligently'
b = self._base
vmin, vmax = self.nonsingular(vmin, vmax)
if self.axis.axes.name == 'polar':
vmax = math.ceil(math.log(vmax) / math.log(b))
vmin = b ** (vmax - self.numdecs)
if rcParams['axes.autolimit_mode'] == 'round_numbers':
if not is_decade(vmin, self._base):
vmin = decade_down(vmin, self._base)
if not is_decade(vmax, self._base):
vmax = decade_up(vmax, self._base)
return vmin, vmax
def nonsingular(self, vmin, vmax):
if not np.isfinite(vmin) or not np.isfinite(vmax):
return 1, 10 # initial range, no data plotted yet
if vmin > vmax:
vmin, vmax = vmax, vmin
if vmax <= 0:
warnings.warn(
"Data has no positive values, and therefore cannot be "
"log-scaled.")
return 1, 10
minpos = self.axis.get_minpos()
if not np.isfinite(minpos):
minpos = 1e-300 # This should never take effect.
if vmin <= 0:
vmin = minpos
if vmin == vmax:
vmin = decade_down(vmin, self._base)
vmax = decade_up(vmax, self._base)
return vmin, vmax
class SymmetricalLogLocator(Locator):
"""
Determine the tick locations for symmetric log axes
"""
def __init__(self, transform=None, subs=None, linthresh=None, base=None):
"""
        place ticks on the locations: base**i * subs[j]
"""
if transform is not None:
self._base = transform.base
self._linthresh = transform.linthresh
elif linthresh is not None and base is not None:
self._base = base
self._linthresh = linthresh
else:
raise ValueError("Either transform, or both linthresh "
"and base, must be provided.")
if subs is None:
self._subs = [1.0]
else:
self._subs = subs
self.numticks = 15
def set_params(self, subs=None, numticks=None):
"""Set parameters within this locator."""
if numticks is not None:
self.numticks = numticks
if subs is not None:
self._subs = subs
def __call__(self):
'Return the locations of the ticks'
# Note, these are untransformed coordinates
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
b = self._base
t = self._linthresh
if vmax < vmin:
vmin, vmax = vmax, vmin
# The domain is divided into three sections, only some of
# which may actually be present.
#
# <======== -t ==0== t ========>
# aaaaaaaaa bbbbb ccccccccc
#
# a) and c) will have ticks at integral log positions. The
# number of ticks needs to be reduced if there are more
# than self.numticks of them.
#
# b) has a tick at 0 and only 0 (we assume t is a small
# number, and the linear segment is just an implementation
# detail and not interesting.)
#
# We could also add ticks at t, but that seems to usually be
# uninteresting.
#
# "simple" mode is when the range falls entirely within (-t,
# t) -- it should just display (vmin, 0, vmax)
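        #
        # Illustrative walk-through (hypothetical values, not from the
        # original comments): with base=10, linthresh=1, vmin=-50 and
        # vmax=50, all three regions are present, and with the default
        # subs=[1.0] the logic below yields ticks at [-10, -1, 0, 1, 10].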
has_a = has_b = has_c = False
if vmin < -t:
has_a = True
if vmax > -t:
has_b = True
if vmax > t:
has_c = True
elif vmin < 0:
if vmax > 0:
has_b = True
if vmax > t:
has_c = True
else:
return [vmin, vmax]
elif vmin < t:
if vmax > t:
has_b = True
has_c = True
else:
return [vmin, vmax]
else:
has_c = True
def get_log_range(lo, hi):
lo = np.floor(np.log(lo) / np.log(b))
hi = np.ceil(np.log(hi) / np.log(b))
return lo, hi
# First, calculate all the ranges, so we can determine striding
if has_a:
if has_b:
a_range = get_log_range(t, -vmin + 1)
else:
a_range = get_log_range(-vmax, -vmin + 1)
else:
a_range = (0, 0)
if has_c:
if has_b:
c_range = get_log_range(t, vmax + 1)
else:
c_range = get_log_range(vmin, vmax + 1)
else:
c_range = (0, 0)
total_ticks = (a_range[1] - a_range[0]) + (c_range[1] - c_range[0])
if has_b:
total_ticks += 1
stride = max(total_ticks // (self.numticks - 1), 1)
decades = []
if has_a:
decades.extend(-1 * (b ** (np.arange(a_range[0], a_range[1],
stride)[::-1])))
if has_b:
decades.append(0.0)
if has_c:
decades.extend(b ** (np.arange(c_range[0], c_range[1], stride)))
# Add the subticks if requested
if self._subs is None:
subs = np.arange(2.0, b)
else:
subs = np.asarray(self._subs)
if len(subs) > 1 or subs[0] != 1.0:
ticklocs = []
for decade in decades:
if decade == 0:
ticklocs.append(decade)
else:
ticklocs.extend(subs * decade)
else:
ticklocs = decades
return self.raise_if_exceeds(np.array(ticklocs))
def view_limits(self, vmin, vmax):
'Try to choose the view limits intelligently'
b = self._base
if vmax < vmin:
vmin, vmax = vmax, vmin
if rcParams['axes.autolimit_mode'] == 'round_numbers':
if not is_decade(abs(vmin), b):
if vmin < 0:
vmin = -decade_up(-vmin, b)
else:
vmin = decade_down(vmin, b)
if not is_decade(abs(vmax), b):
if vmax < 0:
vmax = -decade_down(-vmax, b)
else:
vmax = decade_up(vmax, b)
if vmin == vmax:
if vmin < 0:
vmin = -decade_up(-vmin, b)
vmax = -decade_down(-vmax, b)
else:
vmin = decade_down(vmin, b)
vmax = decade_up(vmax, b)
result = mtransforms.nonsingular(vmin, vmax)
return result
class LogitLocator(Locator):
"""
Determine the tick locations for logit axes
"""
def __init__(self, minor=False):
"""
place ticks on the logit locations
"""
self.minor = minor
def set_params(self, minor=None):
"""Set parameters within this locator."""
if minor is not None:
self.minor = minor
def __call__(self):
'Return the locations of the ticks'
vmin, vmax = self.axis.get_view_interval()
return self.tick_values(vmin, vmax)
def tick_values(self, vmin, vmax):
# dummy axis has no axes attribute
if hasattr(self.axis, 'axes') and self.axis.axes.name == 'polar':
raise NotImplementedError('Polar axis cannot be logit scaled yet')
vmin, vmax = self.nonsingular(vmin, vmax)
vmin = np.log10(vmin / (1 - vmin))
vmax = np.log10(vmax / (1 - vmax))
decade_min = np.floor(vmin)
decade_max = np.ceil(vmax)
# major ticks
if not self.minor:
ticklocs = []
if (decade_min <= -1):
expo = np.arange(decade_min, min(0, decade_max + 1))
ticklocs.extend(list(10**expo))
if (decade_min <= 0) and (decade_max >= 0):
ticklocs.append(0.5)
if (decade_max >= 1):
expo = -np.arange(max(1, decade_min), decade_max + 1)
ticklocs.extend(list(1 - 10**expo))
# minor ticks
else:
ticklocs = []
if (decade_min <= -2):
expo = np.arange(decade_min, min(-1, decade_max))
newticks = np.outer(np.arange(2, 10), 10**expo).ravel()
ticklocs.extend(list(newticks))
if (decade_min <= 0) and (decade_max >= 0):
ticklocs.extend([0.2, 0.3, 0.4, 0.6, 0.7, 0.8])
if (decade_max >= 2):
expo = -np.arange(max(2, decade_min), decade_max + 1)
newticks = 1 - np.outer(np.arange(2, 10), 10**expo).ravel()
ticklocs.extend(list(newticks))
return self.raise_if_exceeds(np.array(ticklocs))
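    # Illustrative sketch (hypothetical view limits, not from the original
    # code): for a view interval of (0.001, 0.999) the major ticks come out
    # as [0.001, 0.01, 0.1, 0.5, 0.9, 0.99, 0.999], i.e. powers of ten,
    # one half, and one minus powers of ten.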
def nonsingular(self, vmin, vmax):
initial_range = (1e-7, 1 - 1e-7)
if not np.isfinite(vmin) or not np.isfinite(vmax):
return initial_range # no data plotted yet
if vmin > vmax:
vmin, vmax = vmax, vmin
# what to do if a window beyond ]0, 1[ is chosen
if self.axis is not None:
minpos = self.axis.get_minpos()
if not np.isfinite(minpos):
return initial_range # again, no data plotted
else:
minpos = 1e-7 # should not occur in normal use
# NOTE: for vmax, we should query a property similar to get_minpos, but
# related to the maximal, less-than-one data point. Unfortunately,
# Bbox._minpos is defined very deep in the BBox and updated with data,
# so for now we use 1 - minpos as a substitute.
if vmin <= 0:
vmin = minpos
if vmax >= 1:
vmax = 1 - minpos
if vmin == vmax:
return 0.1 * vmin, 1 - 0.1 * vmin
return vmin, vmax
class AutoLocator(MaxNLocator):
"""
Dynamically find major tick positions. This is actually a subclass
of `~matplotlib.ticker.MaxNLocator`, with parameters *nbins = 'auto'*
and *steps = [1, 2, 2.5, 5, 10]*.
"""
def __init__(self):
"""
        To know the values of the non-public parameters, please have a
        look at the defaults of `~matplotlib.ticker.MaxNLocator`.
"""
if rcParams['_internal.classic_mode']:
nbins = 9
steps = [1, 2, 5, 10]
else:
nbins = 'auto'
steps = [1, 2, 2.5, 5, 10]
MaxNLocator.__init__(self, nbins=nbins, steps=steps)
class AutoMinorLocator(Locator):
"""
Dynamically find minor tick positions based on the positions of
major ticks. The scale must be linear with major ticks evenly spaced.
"""
def __init__(self, n=None):
"""
*n* is the number of subdivisions of the interval between
major ticks; e.g., n=2 will place a single minor tick midway
between major ticks.
If *n* is omitted or None, it will be set to 5 or 4.
"""
self.ndivs = n
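    # Illustrative sketch (hypothetical spacing, not from the original code):
    # with major ticks every 0.5 and n left as None, __call__ below computes
    # int(round(10 ** (log10(0.5) % 1))) == 5, so ndivs becomes 5 and minor
    # ticks are placed every 0.1.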
def __call__(self):
'Return the locations of the ticks'
if self.axis.get_scale() == 'log':
warnings.warn('AutoMinorLocator does not work with logarithmic '
'scale')
return []
majorlocs = self.axis.get_majorticklocs()
try:
majorstep = majorlocs[1] - majorlocs[0]
except IndexError:
# Need at least two major ticks to find minor tick locations
# TODO: Figure out a way to still be able to display minor
# ticks without two major ticks visible. For now, just display
# no ticks at all.
return []
if self.ndivs is None:
x = int(np.round(10 ** (np.log10(majorstep) % 1)))
if x in [1, 5, 10]:
ndivs = 5
else:
ndivs = 4
else:
ndivs = self.ndivs
minorstep = majorstep / ndivs
vmin, vmax = self.axis.get_view_interval()
if vmin > vmax:
vmin, vmax = vmax, vmin
t0 = majorlocs[0]
tmin = ((vmin - t0) // minorstep + 1) * minorstep
tmax = ((vmax - t0) // minorstep + 1) * minorstep
locs = np.arange(tmin, tmax, minorstep) + t0
cond = np.abs((locs - t0) % majorstep) > minorstep / 10.0
locs = locs.compress(cond)
return self.raise_if_exceeds(np.array(locs))
def tick_values(self, vmin, vmax):
raise NotImplementedError('Cannot get tick locations for a '
'%s type.' % type(self))
class OldAutoLocator(Locator):
"""
On autoscale this class picks the best MultipleLocator to set the
view limits and the tick locs.
"""
def __init__(self):
self._locator = LinearLocator()
def __call__(self):
'Return the locations of the ticks'
self.refresh()
return self.raise_if_exceeds(self._locator())
def tick_values(self, vmin, vmax):
raise NotImplementedError('Cannot get tick locations for a '
'%s type.' % type(self))
def refresh(self):
'refresh internal information based on current lim'
vmin, vmax = self.axis.get_view_interval()
vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05)
d = abs(vmax - vmin)
self._locator = self.get_locator(d)
def view_limits(self, vmin, vmax):
'Try to choose the view limits intelligently'
d = abs(vmax - vmin)
self._locator = self.get_locator(d)
return self._locator.view_limits(vmin, vmax)
def get_locator(self, d):
'pick the best locator based on a distance'
d = abs(d)
if d <= 0:
locator = MultipleLocator(0.2)
else:
try:
ld = math.log10(d)
except OverflowError:
raise RuntimeError('AutoLocator illegal data interval range')
fld = math.floor(ld)
base = 10 ** fld
#if ld==fld: base = 10**(fld-1)
#else: base = 10**fld
if d >= 5 * base:
ticksize = base
elif d >= 2 * base:
ticksize = base / 2.0
else:
ticksize = base / 5.0
locator = MultipleLocator(ticksize)
return locator
| 32.943468 | 79 | 0.556745 |
17111432226dd73d653390de29656f4d3d8a9a11 | 2,882 | py | Python | tools/optimization/driver/ask_tell_parallel_driver.py | MRossol/HOPP | c8bcf610fdd2cbb27a807ddaf444684ef1aab7e8 | [
"BSD-3-Clause"
] | 3 | 2021-03-10T20:03:42.000Z | 2022-03-18T17:10:04.000Z | tools/optimization/driver/ask_tell_parallel_driver.py | MRossol/HOPP | c8bcf610fdd2cbb27a807ddaf444684ef1aab7e8 | [
"BSD-3-Clause"
] | 14 | 2020-12-28T22:32:07.000Z | 2022-03-17T15:33:04.000Z | tools/optimization/driver/ask_tell_parallel_driver.py | MRossol/HOPP | c8bcf610fdd2cbb27a807ddaf444684ef1aab7e8 | [
"BSD-3-Clause"
] | 8 | 2021-01-19T02:39:01.000Z | 2022-01-31T18:04:39.000Z | import multiprocessing
from typing import (
Callable,
Tuple,
)
from ..data_logging.data_recorder import DataRecorder
from ..driver.ask_tell_driver import AskTellDriver
from ..optimizer.ask_tell_optimizer import AskTellOptimizer
from .ask_tell_parallel_driver_fns import *
class AskTellParallelDriver(AskTellDriver):
def __init__(self,
nprocs: int = multiprocessing.cpu_count()):
self._num_evaluations: int = 0
self._num_iterations: int = 0
self._nprocs = nprocs
self._pool = None
# self.evaluations = []
def __getstate__(self):
"""
This prevents the pool from being pickled when using the pool...
"""
self_dict = self.__dict__.copy()
        if '_pool' in self_dict:
            del self_dict['_pool']
return self_dict
def __setstate__(self, state):
"""
        Restores the driver state after unpickling; the worker pool is not restored.
"""
self.__dict__.update(state)
def __del__(self):
"""
        Closes the worker pool when the driver is garbage-collected.
        """
        if hasattr(self, '_pool') and self._pool is not None:
self._pool.close()
def setup(
self,
objective: Callable[[any], Tuple[float, float, any]],
recorder: DataRecorder,
) -> None:
"""
Must be called before calling step() or run().
Sets the objective function for this driver and the data recorder.
:param objective: objective function for evaluating candidate solutions
:param recorder: data recorder
:return:
"""
self._pool = multiprocessing.Pool(
initializer=make_initializer(objective),
processes=self._nprocs)
def step(self,
optimizer: AskTellOptimizer,
) -> bool:
"""
Steps the optimizer through one iteration of generating candidates, evaluating them, and updating with their
evaluations.
:param optimizer: the optimizer to use
:return: True if the optimizer reached a stopping point (via calling optimizer.stop())
"""
# print('step()')
num_candidates = optimizer.get_num_candidates()
candidates = optimizer.ask(num_candidates)
evaluations = self._pool.map(evaluate, candidates)
num_candidates = len(evaluations)
# print('telling')
# self.evaluations = list(evaluations)
optimizer.tell(evaluations)
self._num_evaluations += num_candidates
self._num_iterations += 1
# print('done')
return optimizer.stop()
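    # Illustrative usage sketch (the objective, recorder and optimizer names
    # below are hypothetical stand-ins, not part of this module):
    #   driver = AskTellParallelDriver(nprocs=4)
    #   driver.setup(objective, recorder)
    #   while not driver.step(optimizer):
    #       pass  # iterate until the optimizer signals convergence via stop()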
def get_num_evaluations(self) -> int:
return self._num_evaluations
def get_num_iterations(self) -> int:
return self._num_iterations
| 32.022222 | 116 | 0.611381 |
43a6ecf674359595b5a197b8dc4d0046f0f82491 | 404 | py | Python | betterbay/migrations/0006_auto_20210215_1817.py | acj3rd/bba | d9e2aa9cb1242d8686f9aa3b63b166e47b6e709d | [
"MIT"
] | null | null | null | betterbay/migrations/0006_auto_20210215_1817.py | acj3rd/bba | d9e2aa9cb1242d8686f9aa3b63b166e47b6e709d | [
"MIT"
] | null | null | null | betterbay/migrations/0006_auto_20210215_1817.py | acj3rd/bba | d9e2aa9cb1242d8686f9aa3b63b166e47b6e709d | [
"MIT"
] | null | null | null | # Generated by Django 3.0.12 on 2021-02-15 18:17
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('betterbay', '0005_auto_20210215_1630'),
]
operations = [
migrations.RemoveField(
model_name='event',
name='calendar',
),
migrations.DeleteModel(
name='Calendar',
),
]
| 19.238095 | 49 | 0.571782 |
4c883ee65dd8e9d8bc9812fa7d5b31dcb3ad3365 | 700 | py | Python | user_manager/oauth/api/__init__.py | voegtlel/auth-manager-backend | 20d40de0abc9deeb3fcddd892ffe2e635301917a | [
"MIT"
] | null | null | null | user_manager/oauth/api/__init__.py | voegtlel/auth-manager-backend | 20d40de0abc9deeb3fcddd892ffe2e635301917a | [
"MIT"
] | null | null | null | user_manager/oauth/api/__init__.py | voegtlel/auth-manager-backend | 20d40de0abc9deeb3fcddd892ffe2e635301917a | [
"MIT"
] | null | null | null | from fastapi import APIRouter
from . import (
authorize,
end_session,
login_status,
picture,
token,
userinfo,
well_known,
ext_card_auth,
ext_profiles,
ext_mail,
ext_mailing_list,
)
router = APIRouter()
router.include_router(authorize.router)
router.include_router(end_session.router)
router.include_router(login_status.router)
router.include_router(picture.router)
router.include_router(token.router)
router.include_router(userinfo.router)
router.include_router(well_known.router)
router.include_router(ext_card_auth.router)
router.include_router(ext_profiles.router)
router.include_router(ext_mail.router)
router.include_router(ext_mailing_list.router)
| 24.137931 | 46 | 0.798571 |
aa3b28645ffd0cb0d066a3f23b1e5e783a621220 | 3,528 | py | Python | youbot_flexbe_states/src/youbot_flexbe_states/execute_arm_trajectory_state.py | FlexBE/youbot_behaviors | 6f7a0330d6a9e883fc0a3dff22f44422e2379274 | [
"BSD-3-Clause"
] | 6 | 2015-11-17T15:59:38.000Z | 2019-12-04T02:24:30.000Z | youbot_flexbe_states/src/youbot_flexbe_states/execute_arm_trajectory_state.py | FlexBE/youbot_behaviors | 6f7a0330d6a9e883fc0a3dff22f44422e2379274 | [
"BSD-3-Clause"
] | null | null | null | youbot_flexbe_states/src/youbot_flexbe_states/execute_arm_trajectory_state.py | FlexBE/youbot_behaviors | 6f7a0330d6a9e883fc0a3dff22f44422e2379274 | [
"BSD-3-Clause"
] | 2 | 2018-05-09T13:01:30.000Z | 2022-03-30T10:16:15.000Z | #!/usr/bin/env python
import rospy
import math
import actionlib
from flexbe_core import EventState, Logger
from flexbe_core.proxy import ProxyActionClient
from control_msgs.msg import *
from trajectory_msgs.msg import *
"""
Created on 11/17/2015
@author: Spyros Maniatopoulos
"""
class ExecuteTrajectoryState(EventState):
"""
Executes a custom trajectory.
-- target_pose float[][] Trajectory to be executed, given as a
list of time steps where each step
contains a list of target joint values.
-- time float[] Relative time in seconds from starting
the execution when the corresponding
time step should be reached.
<= done Trajectory was successfully executed.
<= failed Failed to send or execute trajectory.
"""
def __init__(self, target_pose, time):
"""Constructor"""
super(ExecuteTrajectoryState, self).__init__(outcomes = ['done', 'failed'])
self._joint_names = ['arm_joint_1', 'arm_joint_2',
'arm_joint_3', 'arm_joint_4',
'arm_joint_5']
self._target_pose = target_pose
self._time = time
self._action_topic = "/arm_1/arm_controller/follow_joint_trajectory"
self._client = ProxyActionClient({self._action_topic: FollowJointTrajectoryAction})
self._done = False
self._failed = False
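    # Illustrative usage sketch (hypothetical joint values and timings, not
    # taken from the original behaviors):
    #   ExecuteTrajectoryState(target_pose=[[0.0, 0.0, 0.0, 0.0, 0.0],
    #                                       [0.5, 1.0, -1.0, 0.5, 0.0]],
    #                          time=[2.0, 4.0])
    # sends a two-point trajectory that reaches the first pose 2 s after the
    # goal is sent and the second pose after 4 s.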
def execute(self, userdata):
"""Wait for action result and return outcome accordingly"""
if self._done:
return 'done'
if self._failed:
return 'failed'
if self._client.has_result(self._action_topic):
result = self._client.get_result(self._action_topic)
if result.error_code == FollowJointTrajectoryResult.SUCCESSFUL:
self._done = True
return 'done'
elif result.error_code == FollowJointTrajectoryResult.GOAL_TOLERANCE_VIOLATED:
Logger.logwarn('Probably done, but goal tolerances violated (%d)' % result.error_code)
self._done = True
return 'done'
else:
Logger.logwarn('Joint trajectory failed to execute: (%d) %s' % (result.error_code, result.error_string))
self._failed = True
return 'failed'
def on_enter(self, userdata):
"""Create and send action goal"""
self._done = False
self._failed = False
# Create and populate action goal
goal = FollowJointTrajectoryGoal()
goal.trajectory.joint_names = self._joint_names
for i in range(len(self._target_pose)):
point = JointTrajectoryPoint()
point.positions = self._target_pose[i]
point.time_from_start = rospy.Duration.from_sec(self._time[i])
goal.trajectory.points.append(point)
# Send the action goal for execution
try:
self._client.send_goal(self._action_topic, goal)
except Exception as e:
Logger.logwarn("Unable to send follow joint trajectory action goal:\n%s" % str(e))
self._failed = True
def on_exit(self, userdata):
if not self._client.has_result(self._action_topic):
self._client.cancel(self._action_topic)
Logger.loginfo('Cancelled active action goal.')
| 33.923077 | 120 | 0.602891 |
d58d996085acd3997ac5116c247cbe26dea3cb97 | 511 | py | Python | q2_sourmash/_types.py | dib-lab/q2-sourmash | 144585574928594e273760a5421fabeac618c59b | [
"BSD-3-Clause"
] | 3 | 2018-09-27T21:36:07.000Z | 2018-10-02T02:17:33.000Z | q2_sourmash/_types.py | gregcaporaso/q2-sourmash | c6e6a9abe505da3cefc83c075bcbd4166a989f40 | [
"BSD-3-Clause"
] | 4 | 2018-09-27T14:20:52.000Z | 2018-10-19T05:35:24.000Z | q2_sourmash/_types.py | gregcaporaso/q2-sourmash | c6e6a9abe505da3cefc83c075bcbd4166a989f40 | [
"BSD-3-Clause"
] | 2 | 2018-09-27T21:15:31.000Z | 2020-09-22T19:31:34.000Z | # ----------------------------------------------------------------------------
# Copyright (c) 2016-2018, QIIME 2 development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------
from qiime2.plugin import SemanticType
from q2_types.sample_data import SampleData
MinHashSig = SemanticType('MinHashSig', variant_of=SampleData.field['type'])
| 39.307692 | 78 | 0.545988 |
9480ac1542d3456c55db06a7e6d7b3b6f1a308b7 | 600 | py | Python | tools/add_table_frames_to_h5py.py | KI-cker/Ki-cker | b48ae75bfeea970940ad657c73d71438531259c6 | [
"Apache-2.0"
] | null | null | null | tools/add_table_frames_to_h5py.py | KI-cker/Ki-cker | b48ae75bfeea970940ad657c73d71438531259c6 | [
"Apache-2.0"
] | 14 | 2018-02-21T17:58:33.000Z | 2022-03-11T23:16:09.000Z | tools/add_table_frames_to_h5py.py | KI-cker/Ki-cker | b48ae75bfeea970940ad657c73d71438531259c6 | [
"Apache-2.0"
] | 1 | 2018-02-22T09:28:26.000Z | 2018-02-22T09:28:26.000Z | import argparse
from kicker.train import Parser, Converter
def add_table_frames(filename):
p = Parser(filename)
for g in p.file:
print("Processing {}".format(g))
if 'table_frames_encoded' not in p.file[g]:
c = Converter(p, g)
p.file[g]['table_frames_encoded'] = c.get_table_frames_encoded()
p.file.flush()
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input_file')
args = parser.parse_args()
if args.input_file:
add_table_frames(args.input_file)
if __name__ == '__main__':
main()
| 20.689655 | 76 | 0.638333 |
c50579f573700381ad255eb3c0367c98c3893ce2 | 13,044 | py | Python | examples/wmt/input_pipeline.py | navjotts/flax | 5ffd0006701e4b162ae906c4f089553600d3114c | [
"Apache-2.0"
] | 2,249 | 2020-03-08T12:13:08.000Z | 2022-03-31T10:25:13.000Z | examples/wmt/input_pipeline.py | navjotts/flax | 5ffd0006701e4b162ae906c4f089553600d3114c | [
"Apache-2.0"
] | 1,338 | 2020-03-06T16:56:34.000Z | 2022-03-31T13:46:49.000Z | examples/wmt/input_pipeline.py | navjotts/flax | 5ffd0006701e4b162ae906c4f089553600d3114c | [
"Apache-2.0"
] | 343 | 2020-03-06T16:35:39.000Z | 2022-03-27T17:31:45.000Z | # Copyright 2021 The Flax Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Input pipeline for a WMT dataset."""
import os
from typing import Dict, Optional, List, Union
from clu import deterministic_data
import tokenizer
import ml_collections
import tensorflow as tf
import tensorflow_datasets as tfds
AUTOTUNE = tf.data.AUTOTUNE
Features = Dict[str, tf.Tensor]
class NormalizeFeatureNamesOp:
"""Normalizes feature names to 'inputs' and 'targets'."""
def __init__(self, ds_info: tfds.core.DatasetInfo, reverse_translation: bool):
self.input_lang, self.target_lang = ds_info.supervised_keys
if reverse_translation:
self.input_lang, self.target_lang = self.target_lang, self.input_lang
def __call__(self, features: Features) -> Features:
features['inputs'] = features.pop(self.input_lang)
features['targets'] = features.pop(self.target_lang)
return features
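# Illustrative sketch (hypothetical feature dict, not from the original code):
# for a builder whose supervised_keys are ('de', 'en') and with
# reverse_translation=True, a raw example {'de': ..., 'en': ...} is remapped
# to {'inputs': <the English text>, 'targets': <the German text>}.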
def get_raw_dataset(dataset_builder: tfds.core.DatasetBuilder,
split: str,
*,
reverse_translation: bool = False) -> tf.data.Dataset:
"""Loads a raw WMT dataset and normalizes feature keys.
Args:
dataset_builder: TFDS dataset builder that can build `slit`.
split: Split to use. This must be the full split. We shard the split across
multiple hosts and currently don't support sharding subsplits.
reverse_translation: bool: whether to reverse the translation direction.
e.g. for 'de-en' this translates from english to german.
Returns:
Dataset with source and target language features mapped to 'inputs' and
'targets'.
"""
num_examples = dataset_builder.info.splits[split].num_examples
per_host_split = deterministic_data.get_read_instruction_for_host(
split, num_examples, drop_remainder=False)
ds = dataset_builder.as_dataset(split=per_host_split, shuffle_files=False)
ds = ds.map(
NormalizeFeatureNamesOp(
dataset_builder.info, reverse_translation=reverse_translation),
num_parallel_calls=AUTOTUNE)
return ds
def pack_dataset(dataset: tf.data.Dataset,
key2length: Union[int, Dict[str, int]],
keys: Optional[List[str]] = None) -> tf.data.Dataset:
"""Creates a 'packed' version of a dataset on-the-fly.
Adapted from the mesh-tf implementation.
This is meant to replace the irritation of having to create a separate
"packed" version of a dataset to train efficiently on TPU.
Each example in the output dataset represents several examples in the
input dataset.
For each key in the input dataset, two additional keys are created:
<key>_segmentation: an int32 tensor identifying the parts
representing the original example.
<key>_position: an int32 tensor identifying the position within the original
example.
Example:
Two input examples get combined to form an output example.
The input examples are:
{"inputs": [8, 7, 1, 0], "targets":[4, 1, 0]}
{"inputs": [2, 3, 4, 1], "targets":[5, 6, 1]}
The output example is:
{
"inputs": [8, 7, 1, 2, 3, 4, 1, 0, 0, 0]
"inputs_segmentation": [1, 1, 1, 2, 2, 2, 2, 0, 0, 0]
"inputs_position": [0, 1, 2, 0, 1, 2, 3, 0, 0, 0]
"targets": [4, 1, 5, 6, 1, 0, 0, 0, 0, 0]
"targets_segmentation": [1, 1, 2, 2, 2, 0, 0, 0, 0, 0]
"targets_position": [0, 1, 0, 1, 2, 0, 0, 0, 0, 0]
}
0 represents padding in both the inputs and the outputs.
Sequences in the incoming examples are truncated to length "length", and the
sequences in the output examples all have fixed (padded) length "length".
Args:
dataset: a tf.data.Dataset
key2length: an integer, or a dict from feature-key to integer
keys: a list of strings (e.g. ["inputs", "targets"])
Returns:
a tf.data.Dataset
"""
shapes = tf.nest.map_structure(lambda spec: spec.shape, dataset.element_spec)
if keys is None:
keys = list(shapes.keys())
for k in keys:
if k not in shapes:
raise ValueError('Key %s not found in dataset. Available keys are %s' %
(k, shapes.keys()))
if not shapes[k].is_compatible_with(tf.TensorShape([None])):
raise ValueError('Tensors to be packed must be one-dimensional.')
# make sure that the length dictionary contains all keys as well as the
# keys suffixed by "_segmentation" and "_position"
if isinstance(key2length, int):
key2length = {k: key2length for k in keys}
for k in keys:
for suffix in ['_segmentation', '_position']:
key2length[k + suffix] = key2length[k]
# trim to length
dataset = dataset.map(
lambda x: {k: x[k][:key2length[k]] for k in keys},
num_parallel_calls=AUTOTUNE)
# Setting batch_size=length ensures that the concatenated sequences (if they
# have length >=1) are sufficient to fill at least one packed example.
batch_size = max(key2length.values())
dataset = dataset.padded_batch(
batch_size, padded_shapes={k: [-1] for k in keys})
dataset = _pack_with_tf_ops(dataset, keys, key2length)
# Set the Tensor shapes correctly since they get lost in the process.
def my_fn(x):
return {k: tf.reshape(v, [key2length[k]]) for k, v in x.items()}
return dataset.map(my_fn, num_parallel_calls=AUTOTUNE)
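# Illustrative usage sketch for pack_dataset (a hypothetical toy dataset, not
# part of the training pipeline):
#   def gen():
#     yield {'inputs': [8, 7, 1], 'targets': [4, 1]}
#     yield {'inputs': [2, 3, 4, 1], 'targets': [5, 6, 1]}
#   ds = tf.data.Dataset.from_generator(
#       gen,
#       output_signature={'inputs': tf.TensorSpec([None], tf.int32),
#                         'targets': tf.TensorSpec([None], tf.int32)})
#   packed = pack_dataset(ds, key2length=10)
# which should produce the packed example shown in the docstring above,
# including the *_segmentation and *_position keys.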
def _pack_with_tf_ops(dataset: tf.data.Dataset, keys: List[str],
key2length: Dict[str, int]) -> tf.data.Dataset:
"""Helper-function for packing a dataset which has already been batched.
Helper for pack_dataset() Uses tf.while_loop.
Args:
dataset: a dataset containing padded batches of examples.
keys: a list of strings
    key2length: a dict from feature-key to integer
Returns:
a dataset.
"""
empty_example = {}
for k in keys:
empty_example[k] = tf.zeros([0], dtype=tf.int32)
empty_example[k + '_position'] = tf.zeros([0], dtype=tf.int32)
keys_etc = empty_example.keys()
def write_packed_example(partial, outputs):
new_partial = empty_example.copy()
new_outputs = {}
for k in keys_etc:
new_outputs[k] = outputs[k].write(
outputs[k].size(),
tf.pad(partial[k], [[0, key2length[k] - tf.size(partial[k])]]))
return new_partial, new_outputs
def map_fn(x):
"""Internal function to flat_map over.
Consumes a batch of input examples and produces a variable number of output
examples.
Args:
x: a single example
Returns:
a tf.data.Dataset
"""
partial = empty_example.copy()
i = tf.zeros([], dtype=tf.int32)
dynamic_batch_size = tf.shape(x[keys[0]])[0]
outputs = {}
for k in keys:
outputs[k] = tf.TensorArray(
tf.int32, size=0, dynamic_size=True, element_shape=[key2length[k]])
outputs[k + '_position'] = tf.TensorArray(
tf.int32, size=0, dynamic_size=True, element_shape=[key2length[k]])
def body_fn(i, partial, outputs):
"""Body function for while_loop.
Args:
i: integer scalar
partial: dictionary of Tensor (partially-constructed example)
outputs: dictionary of TensorArray
Returns:
A triple containing the new values of the inputs.
"""
can_append = True
one_example = {}
for k in keys:
val = tf.cast(x[k][i], tf.int32)
val = val[:tf.reduce_sum(tf.cast(tf.not_equal(val, 0), tf.int32))]
one_example[k] = val
for k in keys:
can_append = tf.logical_and(
can_append,
tf.less_equal(
tf.size(partial[k]) + tf.size(one_example[k]), key2length[k]))
def false_fn():
return write_packed_example(partial, outputs)
def true_fn():
return partial, outputs
partial, outputs = tf.cond(can_append, true_fn, false_fn)
new_partial = {}
for k in keys:
new_seq = one_example[k][:key2length[k]]
new_seq_len = tf.size(new_seq)
new_partial[k] = tf.concat([partial[k], new_seq], 0)
new_partial[k + '_position'] = tf.concat(
[partial[k + '_position'],
tf.range(new_seq_len)], 0)
partial = new_partial
return i + 1, partial, outputs
# For loop over all examples in the batch.
i, partial, outputs = tf.while_loop(
cond=lambda *_: True,
body=body_fn,
loop_vars=(i, partial, outputs),
shape_invariants=(
tf.TensorShape([]),
{k: tf.TensorShape([None]) for k in keys_etc},
{k: tf.TensorShape(None) for k in keys_etc},
),
maximum_iterations=dynamic_batch_size)
_, outputs = write_packed_example(partial, outputs)
packed = {k: outputs[k].stack() for k in keys_etc}
for k in keys:
packed[k + '_segmentation'] = (
tf.cumsum(
tf.cast(tf.equal(packed[k + '_position'], 0), tf.int32), axis=1) *
tf.cast(tf.not_equal(packed[k], 0), tf.int32))
return packed
dataset = dataset.map(map_fn, num_parallel_calls=AUTOTUNE)
return dataset.unbatch()
# -----------------------------------------------------------------------------
# Main dataset prep routines.
# -----------------------------------------------------------------------------
def preprocess_wmt_data(dataset,
shuffle: bool,
num_epochs: Optional[int] = 1,
pack_examples: bool = True,
shuffle_buffer_size: int = 1024,
max_length: int = 512,
batch_size: int = 256,
drop_remainder: bool = True,
prefetch_size: int = AUTOTUNE):
"""Shuffle and batch/pack the given dataset."""
def length_filter(max_len):
def filter_fn(x):
source, target = x['inputs'], x['targets']
l = tf.maximum(tf.shape(source)[0], tf.shape(target)[0])
return tf.less(l, max_len + 1)
return filter_fn
if max_length > 0:
dataset = dataset.filter(length_filter(max_length))
if shuffle:
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.repeat(num_epochs)
if pack_examples:
dataset = pack_dataset(dataset, max_length)
dataset = dataset.batch(batch_size, drop_remainder=drop_remainder)
else: # simple (static-shape) padded batching
dataset = dataset.padded_batch(
batch_size,
padded_shapes={
'inputs': max_length,
'targets': max_length
},
padding_values={
'inputs': 0,
'targets': 0
},
drop_remainder=drop_remainder)
if prefetch_size:
dataset = dataset.prefetch(prefetch_size)
return dataset
def get_wmt_datasets(config: ml_collections.ConfigDict,
*,
n_devices: int,
reverse_translation: bool = True,
vocab_path: Optional[str] = None):
"""Load and return dataset of batched examples for use during training."""
if vocab_path is None:
vocab_path = os.path.expanduser('~/wmt_sentencepiece_model')
train_ds_builder = tfds.builder(config.dataset_name)
train_data = get_raw_dataset(
train_ds_builder, 'train', reverse_translation=reverse_translation)
if config.eval_dataset_name:
eval_ds_builder = tfds.builder(config.eval_dataset_name)
else:
eval_ds_builder = train_ds_builder
eval_data = get_raw_dataset(
eval_ds_builder,
config.eval_split,
reverse_translation=reverse_translation)
# Tokenize data.
sp_tokenizer = tokenizer.load_or_train_tokenizer(
train_data,
vocab_path=vocab_path,
vocab_size=config.vocab_size,
max_corpus_chars=config.max_corpus_chars)
train_data = train_data.map(
tokenizer.TokenizeOp(sp_tokenizer), num_parallel_calls=AUTOTUNE)
eval_data = eval_data.map(
tokenizer.TokenizeOp(sp_tokenizer), num_parallel_calls=AUTOTUNE)
batch_size = config.per_device_batch_size * n_devices
train_ds = preprocess_wmt_data(
train_data,
shuffle=True,
num_epochs=None,
pack_examples=True,
batch_size=batch_size,
max_length=config.max_target_length)
eval_ds = preprocess_wmt_data(
eval_data,
shuffle=False,
pack_examples=False,
batch_size=batch_size,
max_length=config.max_eval_target_length)
predict_ds = preprocess_wmt_data(
eval_data,
shuffle=False,
pack_examples=False,
batch_size=batch_size,
max_length=config.max_predict_length,
drop_remainder=False)
return train_ds, eval_ds, predict_ds, sp_tokenizer
| 34.784 | 80 | 0.652407 |
c261a5cd629fe82dd8b49d86b9eb5f0de091f332 | 18,406 | py | Python | models/siso_regression_tut.py | kyeongsoo/dnn-based_indoor_localization | 39e6a60fbd5095b714f6e158f1b933acc435a982 | [
"MIT"
] | 13 | 2018-02-08T13:32:20.000Z | 2022-02-06T14:27:34.000Z | models/siso_regression_tut.py | kyeongsoo/dnn-based_indoor_localization | 39e6a60fbd5095b714f6e158f1b933acc435a982 | [
"MIT"
] | null | null | null | models/siso_regression_tut.py | kyeongsoo/dnn-based_indoor_localization | 39e6a60fbd5095b714f6e158f1b933acc435a982 | [
"MIT"
] | 6 | 2018-09-24T21:52:56.000Z | 2021-05-11T03:13:43.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
##
# @file siso_regression_tut.py
# @author Kyeong Soo (Joseph) Kim <kyeongsoo.kim@gmail.com>
# @date 2018-08-23
#
# @brief A scalable indoor localization system based on Wi-Fi fingerprinting
# using three-dimensional regression of location coordiates with a
# single-input and single-output (SISO) deep neural network (DNN) model
# and TUT datasets.
#
# @remarks The results are published in the proceedings of <a
# href="https://is-candar.org/GCA18/">The 3rd International Workshop on
# GPU Computing and AI (GCA'18)</a>.
### import basic modules and a model to test
import os
# os.environ['PYTHONHASHSEED'] = '0' # for reproducibility
import sys
sys.path.insert(0, '../models')
sys.path.insert(0, '../utils')
from deep_autoencoder import deep_autoencoder
from sdae import sdae
from mean_ci import mean_ci
### import other modules; keras and its backend will be loaded later
import argparse
import datetime
import math
import multiprocessing
import numpy as np
import pandas as pd
import pathlib
import random as rn
from collections import namedtuple
from num2words import num2words
from numpy.linalg import norm
from time import time
from timeit import default_timer as timer
### import keras and tensorflow backend
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # supress warning messages
import tensorflow as tf
num_cpus = multiprocessing.cpu_count()
session_conf = tf.ConfigProto(
intra_op_parallelism_threads=num_cpus,
inter_op_parallelism_threads=num_cpus
)
from keras import backend as K
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Activation, Dense, Dropout, Input
from keras.layers.normalization import BatchNormalization
from keras.metrics import categorical_accuracy
from keras.models import Model
def siso_regression_tut(
gpu_id: int,
dataset: str,
frac: float,
validation_split: float,
preprocessor: str,
batch_size: int,
epochs: int,
optimizer: str,
dropout: float,
corruption_level: float,
dae_hidden_layers: list,
sdae_hidden_layers: list,
cache: bool,
regression_hidden_layers: list,
verbose: int
):
"""Multi-floor indoor localization based on three-dimensional regression of
location coordinates using a single-input and single-output (SISO) deep
neural network (DNN) model and TUT datasets.
Keyword arguments:
"""
### initialize numpy, random, TensorFlow, and keras
np.random.seed() # based on current time or OS-specific randomness source
rn.seed() # "
tf.set_random_seed(rn.randint(0, 1000000))
if gpu_id >= 0:
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
else:
os.environ["CUDA_VISIBLE_DEVICES"] = ''
sess = tf.Session(
graph=tf.get_default_graph(),
config=session_conf)
K.set_session(sess)
### load datasets after scaling
print("Loading data ...")
if dataset == 'tut':
from tut import TUT
tut = TUT(
cache=cache,
frac=frac,
preprocessor=preprocessor,
classification_mode='hierarchical',
grid_size=0)
elif dataset == 'tut2':
from tut import TUT2
tut = TUT2(
cache=cache,
frac=frac,
preprocessor=preprocessor,
classification_mode='hierarchical',
grid_size=0,
testing_split=0.2)
elif dataset == 'tut3':
from tut import TUT3
tut = TUT3(
cache=cache,
frac=frac,
preprocessor=preprocessor,
classification_mode='hierarchical',
grid_size=0)
else:
print("'{0}' is not a supported data set.".format(dataset))
sys.exit(0)
flr_height = tut.floor_height
training_df = tut.training_df
training_data = tut.training_data
testing_df = tut.testing_df
testing_data = tut.testing_data
    ### build and train a SISO model
print(
"Building and training a SISO model for three-dimensional regression ..."
)
rss = training_data.rss_scaled
coord = training_data.coord_3d_scaled
coord_scaler = training_data.coord_3d_scaler # for inverse transform
labels = training_data.labels
input = Input(shape=(rss.shape[1], ), name='input') # common input
# (optional) build deep autoencoder or stacked denoising autoencoder
if dae_hidden_layers != '':
print("- Building a DAE model ...")
model = deep_autoencoder(
dataset=dataset,
input_data=rss,
preprocessor=preprocessor,
hidden_layers=dae_hidden_layers,
cache=cache,
model_fname=None,
optimizer=optimizer,
batch_size=batch_size,
epochs=epochs,
validation_split=validation_split)
x = model(input)
elif sdae_hidden_layers != '':
print("- Building an SDAE model ...")
model = sdae(
dataset=dataset,
input_data=rss,
preprocessor=preprocessor,
hidden_layers=sdae_hidden_layers,
cache=cache,
model_fname=None,
optimizer=optimizer,
corruption_level=corruption_level,
batch_size=batch_size,
epochs=epochs,
validation_split=validation_split)
x = model(input)
else:
x = input
# regression hidden layers
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(dropout)(x)
if regression_hidden_layers != '':
for units in regression_hidden_layers:
x = Dense(units)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(dropout)(x)
# coordinates regression output
x = Dense(coord.shape[1], kernel_initializer='normal')(x)
x = BatchNormalization()(x)
coordinates_output = Activation(
'linear', name='coordinates_output')(x) # 'linear' activation
model = Model(inputs=input, outputs=coordinates_output)
model.compile(optimizer=optimizer, loss='mean_squared_error',
metrics=['mean_squared_error'])
weights_file = os.path.expanduser("~/tmp/best_weights.h5")
checkpoint = ModelCheckpoint(weights_file, monitor='val_loss', save_best_only=True, verbose=0)
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=0)
print("- Training a coordinates regressor ...", end='')
startTime = timer()
history = model.fit(
x={'input': rss},
y={'coordinates_output': coord},
batch_size=batch_size,
epochs=epochs,
verbose=verbose,
callbacks=[checkpoint, early_stop],
validation_split=validation_split,
shuffle=True)
elapsedTime = timer() - startTime
print(" completed in {0:.4e} s".format(elapsedTime))
model.load_weights(weights_file) # load weights from the best model
### evaluate the model
print("Evaluating the model ...")
rss = testing_data.rss_scaled
labels = testing_data.labels
flrs = labels.floor
coord = testing_data.coord_3d # original coordinates
# calculate the classification accuracies and localization errors
coords_scaled_pred = model.predict(rss, batch_size=batch_size)
coord_est = coord_scaler.inverse_transform(coords_scaled_pred) # inverse-scaling
tmp = np.maximum(np.minimum(coord_est[:,2], 4*tut.floor_height), 0) # clamping to [0, 4*tut.floor_height]
    flrs_pred = np.floor(tmp/tut.floor_height+0.5) # floor number (0..4); N.B. round() behavior in Python 3 has been changed, so we cannot use it.
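    # Illustrative sketch of the rounding above (hypothetical numbers, not
    # from the TUT datasets): with a floor height of 3.7 m, an estimated z
    # of 7.5 m gives 7.5/3.7 + 0.5 = 2.53, which np.floor() maps to floor 2.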
flr_results = (np.equal(np.argmax(flrs, axis=1), flrs_pred)).astype(int)
flr_acc = flr_results.mean()
# calculate 2D localization errors
dist_2d = norm(coord - coord_est, axis=1)
mean_error_2d = dist_2d.mean()
median_error_2d = np.median(dist_2d)
# calculate 3D localization errors
flr_diff = np.absolute(np.argmax(flrs, axis=1) - flrs_pred)
z_diff_squared = (flr_height**2)*np.square(flr_diff)
dist_3d = np.sqrt(np.sum(np.square(coord - coord_est), axis=1) + z_diff_squared)
mean_error_3d = dist_3d.mean()
median_error_3d = np.median(dist_3d)
LocalizationResults = namedtuple('LocalizationResults', ['flr_acc',
'mean_error_2d',
'median_error_2d',
'mean_error_3d',
'median_error_3d',
'elapsedTime'])
return LocalizationResults(flr_acc=flr_acc, mean_error_2d=mean_error_2d,
median_error_2d=median_error_2d,
mean_error_3d=mean_error_3d,
median_error_3d=median_error_3d,
elapsedTime=elapsedTime)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-N",
"--num_runs",
help=
"number of runs; default is 20",
default=20,
type=int)
parser.add_argument(
"-G",
"--gpu_id",
help=
"ID of GPU device to run this script; default is 0; set it to a negative number for CPU (i.e., no GPU)",
default=0,
type=int)
parser.add_argument(
"--dataset",
help="a data set for training, validation, and testing; choices are 'tut' (default), 'tut2', and 'tut3'",
default='tut',
type=str)
parser.add_argument(
"-F",
"--frac",
help=
"fraction of input data to load for training and validation; default is 1.0",
default=1.0,
type=float)
parser.add_argument(
"--validation_split",
help=
"fraction of training data to be used as validation data: default is 0.2",
default=0.2,
type=float)
parser.add_argument(
"-P",
"--preprocessor",
help=
"preprocessor to scale/normalize input data before training and validation; default is 'standard_scaler'",
default='standard_scaler',
type=str)
parser.add_argument(
"-B",
"--batch_size",
help="batch size; default is 64",
default=64,
type=int)
parser.add_argument(
"-E",
"--epochs",
help="number of epochs; default is 100",
default=100,
type=int)
parser.add_argument(
"-O",
"--optimizer",
help="optimizer; default is 'nadam'",
default='nadam',
type=str)
parser.add_argument(
"-D",
"--dropout",
help="dropout rate before and after hidden layers; default is 0.25",
default=0.25,
type=float)
parser.add_argument(
"-C",
"--corruption_level",
help=
"corruption level of masking noise for stacked denoising autoencoder; default is 0.1",
default=0.1,
type=float)
parser.add_argument(
"--dae_hidden_layers",
help=
"comma-separated numbers of units in hidden layers for deep autoencoder; default is ''",
default='',
type=str)
parser.add_argument(
"--sdae_hidden_layers",
help=
"comma-separated numbers of units in hidden layers for stacked denoising autoencoder; default is '1024,1024,1024'",
default='1024,1024,1024',
type=str)
parser.add_argument(
"--no_cache",
help=
"disable loading a trained model from/saving it to a cache",
action='store_true')
parser.add_argument(
"--regression_hidden_layers",
help=
"comma-separated numbers of units in regression hidden layers; default is '1024'",
default='1024',
type=str)
parser.add_argument(
"-V",
"--verbose",
help=
"verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch; default is 0",
default=0,
type=int)
args = parser.parse_args()
# set variables using command-line arguments
num_runs = args.num_runs
gpu_id = args.gpu_id
dataset = args.dataset
frac = args.frac
validation_split = args.validation_split
preprocessor = args.preprocessor
batch_size = args.batch_size
epochs = args.epochs
optimizer = args.optimizer
dropout = args.dropout
corruption_level = args.corruption_level
if args.dae_hidden_layers == '':
dae_hidden_layers = ''
else:
dae_hidden_layers = [
int(i) for i in (args.dae_hidden_layers).split(',')
]
if args.sdae_hidden_layers == '':
sdae_hidden_layers = ''
else:
sdae_hidden_layers = [
int(i) for i in (args.sdae_hidden_layers).split(',')
]
cache = not args.no_cache
if args.regression_hidden_layers == '':
regression_hidden_layers = ''
else:
regression_hidden_layers = [
int(i) for i in (args.regression_hidden_layers).split(',')
]
verbose = args.verbose
    ### run siso_regression_tut() num_runs times
flr_accs = np.empty(num_runs)
mean_error_2ds = np.empty(num_runs)
median_error_2ds = np.empty(num_runs)
mean_error_3ds = np.empty(num_runs)
median_error_3ds = np.empty(num_runs)
elapsedTimes = np.empty(num_runs)
for i in range(num_runs):
print("\n########## {0:s} run ##########".format(num2words(i+1, to='ordinal_num')))
rst = siso_regression_tut(gpu_id, dataset, frac, validation_split,
preprocessor, batch_size, epochs, optimizer,
dropout, corruption_level, dae_hidden_layers,
sdae_hidden_layers, cache,
regression_hidden_layers, verbose)
flr_accs[i] = rst.flr_acc
mean_error_2ds[i] = rst.mean_error_2d
median_error_2ds[i] = rst.median_error_2d
mean_error_3ds[i] = rst.mean_error_3d
median_error_3ds[i] = rst.median_error_3d
elapsedTimes[i] = rst.elapsedTime
### print out final results
base_dir = '../results/test/' + (os.path.splitext(
os.path.basename(__file__))[0]).replace('test_', '') + '/' + dataset
pathlib.Path(base_dir).mkdir(parents=True, exist_ok=True)
base_file_name = base_dir + "/E{0:d}_B{1:d}_D{2:.2f}".format(
epochs, batch_size, dropout)
now = datetime.datetime.now()
output_file_base = base_file_name + '_' + now.strftime("%Y%m%d-%H%M%S")
with open(output_file_base + '.org', 'w') as output_file:
output_file.write(
"#+STARTUP: showall\n") # unfold everything when opening
output_file.write("* System parameters\n")
output_file.write(" - Command line: %s\n" % ' '.join(sys.argv))
output_file.write(" - Number of runs: %d\n" % num_runs)
output_file.write(" - GPU ID: %d\n" % gpu_id)
output_file.write(
" - Fraction of data loaded for training and validation: %.2f\n" %
frac)
output_file.write(" - Validation split: %.2f\n" % validation_split)
output_file.write(
" - Preprocessor for scaling/normalizing input data: %s\n" %
preprocessor)
output_file.write(" - Batch size: %d\n" % batch_size)
output_file.write(" - Epochs: %d\n" % epochs)
output_file.write(" - Optimizer: %s\n" % optimizer)
output_file.write(" - Dropout rate: %.2f\n" % dropout)
output_file.write(" - Deep autoencoder hidden layers: ")
if dae_hidden_layers == '':
output_file.write("N/A\n")
else:
output_file.write("%d" % dae_hidden_layers[0])
for units in dae_hidden_layers[1:]:
output_file.write("-%d" % units)
output_file.write("\n")
output_file.write(" - Stacked denoising autoencoder hidden layers: ")
if sdae_hidden_layers == '':
output_file.write("N/A\n")
else:
output_file.write("%d" % sdae_hidden_layers[0])
for units in sdae_hidden_layers[1:]:
output_file.write("-%d" % units)
output_file.write("\n")
output_file.write(" - Regression hidden layers: ")
if regression_hidden_layers == '':
output_file.write("N/A\n")
else:
output_file.write("%d" % regression_hidden_layers[0])
for units in regression_hidden_layers[1:]:
output_file.write("-%d" % units)
output_file.write("\n")
# output_file.write("* Model Summary\n")
# model.summary(print_fn=lambda x: output_file.write(x + '\n'))
# output_file.write("\n")
output_file.write("* Performance\n")
output_file.write(" - Floor hit rate [%]: Mean (w/ 95% CI)={0:.4f}+-{1:{ci_fs}}, Max={2:.4f}, Min={3:.4f}\n".format(*[i*100 for i in mean_ci(flr_accs)], 100*flr_accs.max(), 100*flr_accs.min(), ci_fs=('.4f' if num_runs > 1 else '')))
output_file.write(" - Mean 2D error [m]: Mean (w/ 95% CI)={0:.4f}+-{1:.4f}, Max={2:.4f}, Min={3:.4f}\n".format(*mean_ci(mean_error_2ds), mean_error_2ds.max(), mean_error_2ds.min()))
        output_file.write(" - Median 2D error [m]: Mean (w/ 95% CI)={0:.4f}+-{1:.4f}, Max={2:.4f}, Min={3:.4f}\n".format(*mean_ci(median_error_2ds), median_error_2ds.max(), median_error_2ds.min()))
output_file.write(" - Mean 3D error [m]: Mean (w/ 95% CI)={0:.4f}+-{1:.4f}, Max={2:.4f}, Min={3:.4f}\n".format(*mean_ci(mean_error_3ds), mean_error_3ds.max(), mean_error_3ds.min()))
output_file.write(" - Median 3D error [m]: Mean (w/ 95% CI)={0:.4f}+-{1:.4f}, Max={2:.4f}, Min={3:.4f}\n".format(*mean_ci(median_error_3ds), median_error_3ds.max(), median_error_3ds.min()))
output_file.write(" - Training time [s]: Mean (w/ 95% CI)={0:.4f}+-{1:.4f}, Max={2:.4f}, Min={3:.4f}\n".format(*mean_ci(elapsedTimes), elapsedTimes.max(), elapsedTimes.min()))
| 38.995763 | 241 | 0.609801 |
3a4b3ac0c4f1113a5cc0cc1defb10e460e3def7c | 1,973 | py | Python | python/GafferImageUI/ImageTransformUI.py | cwmartin/gaffer | 1f8a0f75522105c9d5efefac6d55cb61c1038909 | [
"BSD-3-Clause"
] | null | null | null | python/GafferImageUI/ImageTransformUI.py | cwmartin/gaffer | 1f8a0f75522105c9d5efefac6d55cb61c1038909 | [
"BSD-3-Clause"
] | null | null | null | python/GafferImageUI/ImageTransformUI.py | cwmartin/gaffer | 1f8a0f75522105c9d5efefac6d55cb61c1038909 | [
"BSD-3-Clause"
] | null | null | null | ##########################################################################
#
# Copyright (c) 2014, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import GafferUI
import GafferImage
GafferUI.PlugValueWidget.registerCreator( GafferImage.ImageTransform, "transform", GafferUI.CompoundPlugValueWidget, collapsed=None )
| 48.121951 | 133 | 0.696908 |
dd2f9f1106b7fa4ea914e7b1c127be8e7c189b12 | 15,556 | py | Python | tests/tacacs/test_authorization.py | amulyan7/sonic-mgmt | b673fe4d830f064ae6f937c514215a7a7d0c7f33 | [
"Apache-2.0"
] | null | null | null | tests/tacacs/test_authorization.py | amulyan7/sonic-mgmt | b673fe4d830f064ae6f937c514215a7a7d0c7f33 | [
"Apache-2.0"
] | null | null | null | tests/tacacs/test_authorization.py | amulyan7/sonic-mgmt | b673fe4d830f064ae6f937c514215a7a7d0c7f33 | [
"Apache-2.0"
] | null | null | null | import logging
import paramiko
import pytest
from .utils import stop_tacacs_server, start_tacacs_server, per_command_check_skip_versions, remove_all_tacacs_server
from tests.common.helpers.assertions import pytest_assert
from tests.common.utilities import skip_release
pytestmark = [
pytest.mark.disable_loganalyzer,
pytest.mark.topology('any'),
pytest.mark.device_type('vs')
]
logger = logging.getLogger(__name__)
TIMEOUT_LIMIT = 120
def ssh_connect_remote(remote_ip, remote_username, remote_password):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(remote_ip, username=remote_username, password=remote_password, allow_agent=False, look_for_keys=False, auth_timeout=TIMEOUT_LIMIT)
return ssh
def check_ssh_connect_remote_failed(remote_ip, remote_username, remote_password):
login_failed = False
try:
ssh_connect_remote(remote_ip, remote_username, remote_password)
except paramiko.ssh_exception.AuthenticationException as e:
login_failed = True
pytest_assert(login_failed == True)
def ssh_run_command(ssh_client, command):
stdin, stdout, stderr = ssh_client.exec_command(command, timeout=TIMEOUT_LIMIT)
exit_code = stdout.channel.recv_exit_status()
stdout_lines = stdout.readlines()
stderr_lines = stderr.readlines()
return exit_code, stdout_lines, stderr_lines
def check_ssh_output(res, exp_val):
content_exist = False
for l in res:
if exp_val in l:
content_exist = True
break
pytest_assert(content_exist)
@pytest.fixture
def remote_user_client(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
dutip = duthost.mgmt_ip
with ssh_connect_remote(dutip, tacacs_creds['tacacs_authorization_user'], tacacs_creds['tacacs_authorization_user_passwd']) as ssh_client:
yield ssh_client
@pytest.fixture
def local_user_client():
with paramiko.SSHClient() as ssh_client:
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
yield ssh_client
@pytest.fixture(scope="module", autouse=True)
def check_image_version(duthost):
"""Skips this test if the SONiC image installed on DUT is older than 202112
Args:
duthost: Hostname of DUT.
Returns:
None.
"""
skip_release(duthost, per_command_check_skip_versions)
def check_authorization_tacacs_only(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization tacacs+")
"""
Verify TACACS+ user run command in server side whitelist:
If command have local permission, user can run command.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
"""
Verify TACACS+ user run command in server side whitelist:
If command not have local permission, user can't run command.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "config aaa")
pytest_assert(exit_code == 1)
check_ssh_output(stderr, 'Root privileges are required for this operation')
# Verify TACACS+ user can't run command not in server side whitelist.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "cat /etc/passwd")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, '/usr/bin/cat authorize failed by TACACS+ with given arguments, not executing')
# Verify Local user can't login.
dutip = duthost.mgmt_ip
check_ssh_connect_remote_failed(dutip, tacacs_creds['local_user'],
tacacs_creds['local_user_passwd'])
def test_authorization_tacacs_only(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client):
check_authorization_tacacs_only(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client)
def test_authorization_tacacs_only_some_server_down(localhost, duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, ptfhost, check_tacacs, remote_user_client):
"""
Setup multiple tacacs server for this UT.
Tacacs server 127.0.0.1 not accessible.
"""
invalid_tacacs_server_ip = "127.0.0.1"
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
tacacs_server_ip = ptfhost.mgmt_ip
duthost.shell("sudo config tacacs timeout 1")
    # Clean up all TACACS+ servers; if a previous test broke, stale servers may still be configured on the DUT and would break the next test.
remove_all_tacacs_server(duthost)
duthost.shell("sudo config tacacs add %s" % invalid_tacacs_server_ip)
duthost.shell("sudo config tacacs add %s" % tacacs_server_ip)
"""
    Verify TACACS+ user can run a command that is in the server-side whitelist:
    if the command has local permission, the user can run it;
    if the command does not have local permission, the user can't run it.
    Verify TACACS+ user can't run a command that is not in the server-side whitelist.
    Verify the local user can't log in.
"""
check_authorization_tacacs_only(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client)
# Cleanup
duthost.shell("sudo config tacacs delete %s" % invalid_tacacs_server_ip)
def test_authorization_tacacs_only_then_server_down_after_login(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, ptfhost, check_tacacs, remote_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization tacacs+")
    # Verify that while the server is accessible, the TACACS+ user can run commands in the server-side whitelist.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
# Shutdown tacacs server
stop_tacacs_server(ptfhost)
    # Verify that while the server is not accessible, the TACACS+ user can't run any command.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, '/usr/local/bin/show not authorized by TACACS+ with given arguments, not executing')
# Cleanup UT.
start_tacacs_server(ptfhost)
def test_authorization_tacacs_and_local(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization \"tacacs+ local\"")
"""
    Verify TACACS+ user can run a command that is in the server-side whitelist:
    if the command has local permission, the user can run it.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
"""
    Verify TACACS+ user can run a command that is in the server-side whitelist:
    if the command does not have local permission, the user can't run it.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "config aaa")
pytest_assert(exit_code == 1)
check_ssh_output(stderr, 'Root privileges are required for this operation')
    # Verify TACACS+ user can run a command that is not in the server-side whitelist but has local permission.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "cat /etc/passwd")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'root:x:0:0:root:/root:/bin/bash')
    # Verify the local user can't log in.
dutip = duthost.mgmt_ip
check_ssh_connect_remote_failed(dutip, tacacs_creds['local_user'],
tacacs_creds['local_user_passwd'])
def test_authorization_tacacs_and_local_then_server_down_after_login(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, ptfhost, check_tacacs, remote_user_client, local_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization \"tacacs+ local\"")
# Shutdown tacacs server
stop_tacacs_server(ptfhost)
    # Verify TACACS+ user can run a command that is not in the server-side whitelist but has local permission.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "cat /etc/passwd")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'root:x:0:0:root:/root:/bin/bash')
    # Verify TACACS+ user can't run a command that is in the server-side whitelist but does not have local permission.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "config tacacs")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, '/usr/local/bin/config not authorized by TACACS+ with given arguments, not executing')
check_ssh_output(stderr, 'Root privileges are required for this operation')
    # Verify the local user can log in while the TACACS+ server is down, and can run commands with local permission.
dutip = duthost.mgmt_ip
local_user_client.connect(dutip, username=tacacs_creds['local_user'],
password=tacacs_creds['local_user_passwd'],
allow_agent=False, look_for_keys=False, auth_timeout=TIMEOUT_LIMIT)
exit_code, stdout, stderr = ssh_run_command(local_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
# Start tacacs server
start_tacacs_server(ptfhost)
    # Verify that after the local user logs in and the server becomes accessible again, the local user can still run commands with local permission.
exit_code, stdout, stderr = ssh_run_command(local_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
def test_authorization_local(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, ptfhost, check_tacacs, remote_user_client, local_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization local")
"""
TACACS server up:
    Verify TACACS+ user can run a command if it has local permission.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
"""
TACACS server up:
    Verify TACACS+ user can't run a command if it does not have local permission.
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "config aaa")
pytest_assert(exit_code == 1)
check_ssh_output(stderr, 'Root privileges are required for this operation')
# Shutdown tacacs server.
stop_tacacs_server(ptfhost)
"""
TACACS server down:
    Verify the local user can log in and run commands with local permission.
"""
dutip = duthost.mgmt_ip
local_user_client.connect(dutip, username=tacacs_creds['local_user'],
password=tacacs_creds['local_user_passwd'],
allow_agent=False, look_for_keys=False, auth_timeout=TIMEOUT_LIMIT)
exit_code, stdout, stderr = ssh_run_command(local_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
# Cleanup
start_tacacs_server(ptfhost)
def test_bypass_authorization(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, check_tacacs, remote_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization tacacs+")
"""
    Verify the user can't run a script via sh/python using the following command:
python ./testscript.py
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, 'echo "" >> ./testscript.py')
pytest_assert(exit_code == 0)
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "python ./testscript.py")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
# Verify user can't run 'find' command with '-exec' parameter.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "find . -type f -exec /bin/sh ;")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
# Verify user can run 'find' command without '-exec' parameter.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "find . /bin/sh")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, '/bin/sh')
"""
Verify user can't run command with loader:
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 sh
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 sh")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
"""
Verify user can't run command with prefix/quoting:
\sh
"sh"
echo $(sh -c ls)
"""
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "\\sh")
pytest_assert(exit_code == 1)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
exit_code, stdout, stderr = ssh_run_command(remote_user_client, '"sh"')
pytest_assert(exit_code == 1)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "echo $(sh -c ls)")
    # The echo command will succeed and return 0, but the sh command will be blocked.
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'authorize failed by TACACS+ with given arguments, not executing')
def test_backward_compatibility_disable_authorization(duthosts, enum_rand_one_per_hwsku_hostname, tacacs_creds, ptfhost, check_tacacs, remote_user_client, local_user_client):
duthost = duthosts[enum_rand_one_per_hwsku_hostname]
duthost.shell("sudo config aaa authorization local")
    # Verify the domain account can run a command if it has local permission.
exit_code, stdout, stderr = ssh_run_command(remote_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
# Shutdown tacacs server
stop_tacacs_server(ptfhost)
    # Verify the domain account can't log in to the device.
dutip = duthost.mgmt_ip
check_ssh_connect_remote_failed(dutip, tacacs_creds['tacacs_authorization_user'],
tacacs_creds['tacacs_authorization_user_passwd'])
    # Verify the local admin account can run a command if it has local permission.
dutip = duthost.mgmt_ip
local_user_client.connect(dutip, username=tacacs_creds['local_user'],
password=tacacs_creds['local_user_passwd'],
allow_agent=False, look_for_keys=False, auth_timeout=TIMEOUT_LIMIT)
exit_code, stdout, stderr = ssh_run_command(local_user_client, "show aaa")
pytest_assert(exit_code == 0)
check_ssh_output(stdout, 'AAA authentication')
    # Verify the local admin account can't run a command if it does not have local permission.
exit_code, stdout, stderr = ssh_run_command(local_user_client, "config aaa")
pytest_assert(exit_code == 1)
check_ssh_output(stderr, 'Root privileges are required for this operation')
# cleanup
start_tacacs_server(ptfhost)
| 44.701149 | 189 | 0.732708 |
4978dcd52b85f27088f2e0dc1ab1cee199d7bf2e | 2,220 | py | Python | barcode/itf.py | timgates42/python-barcode | 5fc5608ad78d72f12bb8310124211030ba1af368 | [
"MIT"
] | null | null | null | barcode/itf.py | timgates42/python-barcode | 5fc5608ad78d72f12bb8310124211030ba1af368 | [
"MIT"
] | null | null | null | barcode/itf.py | timgates42/python-barcode | 5fc5608ad78d72f12bb8310124211030ba1af368 | [
"MIT"
] | null | null | null | """Module: barcode.itf
:Provided barcodes: Interleaved 2 of 5
"""
__docformat__ = "restructuredtext en"
from barcode.base import Barcode
from barcode.charsets import itf
from barcode.errors import IllegalCharacterError
MIN_SIZE = 0.2
MIN_QUIET_ZONE = 6.4
class ITF(Barcode):
"""Initializes a new ITF instance.
:parameters:
code : String
ITF (Interleaved 2 of 5) numeric string
writer : barcode.writer Instance
The writer to render the barcode (default: SVGWriter).
narrow: Integer
Width of the narrow elements (default: 2)
wide: Integer
Width of the wide elements (default: 5)
wide/narrow must be in the range 2..3
"""
name = "ITF"
def __init__(self, code, writer=None, narrow=2, wide=5):
if not code.isdigit():
raise IllegalCharacterError("ITF code can only contain numbers.")
# Length must be even, prepend 0 if necessary
if len(code) % 2 != 0:
code = "0" + code
self.code = code
self.writer = writer or Barcode.default_writer()
self.narrow = narrow
self.wide = wide
def __str__(self):
return self.code
def get_fullcode(self):
return self.code
def build(self):
data = itf.START
for i in range(0, len(self.code), 2):
bars_digit = int(self.code[i])
spaces_digit = int(self.code[i + 1])
for j in range(5):
data += itf.CODES[bars_digit][j].upper()
data += itf.CODES[spaces_digit][j].lower()
data += itf.STOP
raw = ""
for e in data:
if e == "W":
raw += "1" * self.wide
if e == "w":
raw += "0" * self.wide
if e == "N":
raw += "1" * self.narrow
if e == "n":
raw += "0" * self.narrow
return [raw]
def render(self, writer_options, text=None):
options = {
"module_width": MIN_SIZE / self.narrow,
"quiet_zone": MIN_QUIET_ZONE,
}
options.update(writer_options or {})
return Barcode.render(self, options, text)
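# Minimal usage sketch, assuming this module is importable as barcode.itf with
# its charset and writer dependencies available; it only illustrates how the
# narrow/wide widths described in the class docstring drive the '1'/'0' module
# pattern returned by build().
if __name__ == "__main__":
    _demo = ITF("0123456789", narrow=2, wide=5)  # wide/narrow ratio of 2.5
    _pattern = _demo.build()[0]  # one string of '1'/'0' modules between START/STOP
    print("ITF pattern length:", len(_pattern))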
| 28.831169 | 77 | 0.548649 |
a113e73caa08e33cdf4368bc8dc29abe2d521cea | 303 | py | Python | python/5_kyu/not_very_secure.py | CommonLouis/CodeWars_Solutions | f325c12effbd361905027864848e06ce07ec941e | [
"MIT"
] | null | null | null | python/5_kyu/not_very_secure.py | CommonLouis/CodeWars_Solutions | f325c12effbd361905027864848e06ce07ec941e | [
"MIT"
] | null | null | null | python/5_kyu/not_very_secure.py | CommonLouis/CodeWars_Solutions | f325c12effbd361905027864848e06ce07ec941e | [
"MIT"
] | null | null | null | """
Michael Persico
Oct.17, 2021
Not very secure
https://www.codewars.com/kata/526dbd6c8c0eb53254000110
"""
def alphanumeric(password):
return all(char.isalnum() for char in password) and len(password) > 0
if __name__ == "__main__":
print(alphanumeric("iW4l0fXC95JHZDurmMBnCKai8pIQ")) # True
| 23.307692 | 73 | 0.742574 |
3ce50354072877b26ad4d0f6b79f71b9b46ec973 | 240 | py | Python | Code/generics/functions.py | Den1k22/python-lessons | cc898284e4d9b233dc023fbdae6ac41cf184ab02 | [
"MIT"
] | null | null | null | Code/generics/functions.py | Den1k22/python-lessons | cc898284e4d9b233dc023fbdae6ac41cf184ab02 | [
"MIT"
] | null | null | null | Code/generics/functions.py | Den1k22/python-lessons | cc898284e4d9b233dc023fbdae6ac41cf184ab02 | [
"MIT"
] | null | null | null |
def add_numbers(number1, number2):
return number1 + number2
def subtract_numbers(number1, number2):
return number1 - number2
print(add_numbers(7,2))
print(add_numbers(9,9))
print(subtract_numbers(7,2))
print(subtract_numbers(9,9))
| 17.142857 | 39 | 0.7625 |
c491560ac267167a9f5b169bd4dbf7242df1ea3b | 2,864 | py | Python | functionaltests/api/v2/clients/zone_client.py | cneill/designate-testing | 7bf320062d85a12bff2aee8d26c133941a289fc4 | [
"Apache-2.0"
] | null | null | null | functionaltests/api/v2/clients/zone_client.py | cneill/designate-testing | 7bf320062d85a12bff2aee8d26c133941a289fc4 | [
"Apache-2.0"
] | null | null | null | functionaltests/api/v2/clients/zone_client.py | cneill/designate-testing | 7bf320062d85a12bff2aee8d26c133941a289fc4 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2015 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from tempest_lib.exceptions import NotFound
from functionaltests.api.v2.models.zone_model import ZoneModel
from functionaltests.api.v2.models.zone_model import ZoneListModel
from functionaltests.common.client import ClientMixin
from functionaltests.common import utils
class ZoneClient(ClientMixin):
@classmethod
def zones_uri(cls, filters=None):
url = "/v2/zones"
if filters:
url = cls.add_filters(url, filters)
return url
@classmethod
def zone_uri(cls, id):
return "{0}/{1}".format(cls.zones_uri(), id)
def list_zones(self, filters=None, **kwargs):
resp, body = self.client.get(self.zones_uri(filters), **kwargs)
return self.deserialize(resp, body, ZoneListModel)
def get_zone(self, id, **kwargs):
resp, body = self.client.get(self.zone_uri(id))
return self.deserialize(resp, body, ZoneModel)
def post_zone(self, zone_model, **kwargs):
resp, body = self.client.post(self.zones_uri(),
body=zone_model.to_json(), **kwargs)
return self.deserialize(resp, body, ZoneModel)
def patch_zone(self, id, zone_model, **kwargs):
resp, body = self.client.patch(self.zone_uri(id),
body=zone_model.to_json(), **kwargs)
return self.deserialize(resp, body, ZoneModel)
def delete_zone(self, id, **kwargs):
resp, body = self.client.delete(self.zone_uri(id), **kwargs)
return self.deserialize(resp, body, ZoneModel)
def wait_for_zone(self, zone_id):
utils.wait_for_condition(lambda: self.is_zone_active(zone_id))
def wait_for_zone_404(self, zone_id):
utils.wait_for_condition(lambda: self.is_zone_404(zone_id))
def is_zone_active(self, zone_id):
resp, model = self.get_zone(zone_id)
# don't have assertEqual but still want to fail fast
assert resp.status == 200
if model.status == 'ACTIVE':
return True
elif model.status == 'ERROR':
raise Exception("Saw ERROR status")
return False
def is_zone_404(self, zone_id):
try:
# tempest_lib rest client raises exceptions on bad status codes
resp, model = self.get_zone(zone_id)
except NotFound:
return True
return False
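# Minimal polling sketch in the spirit of utils.wait_for_condition used by the
# wait_for_zone* methods above; the real helper lives in
# functionaltests.common.utils and its signature/defaults may differ.
def _wait_for_condition_sketch(condition, interval=1, timeout=60):
    import time
    end_time = time.time() + timeout
    while time.time() < end_time:
        if condition():
            return
        time.sleep(interval)
    raise RuntimeError("Timed out waiting for condition to become true")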
| 34.506024 | 75 | 0.682612 |
91ef6268771cbc8cc507c440985a83acdead2b4b | 4,965 | py | Python | main.py | jackarailo/chatbot | 729bfb0a9051d8681f6458c21f8aa923e5138ff0 | [
"Apache-2.0"
] | null | null | null | main.py | jackarailo/chatbot | 729bfb0a9051d8681f6458c21f8aa923e5138ff0 | [
"Apache-2.0"
] | null | null | null | main.py | jackarailo/chatbot | 729bfb0a9051d8681f6458c21f8aa923e5138ff0 | [
"Apache-2.0"
] | null | null | null | import os
import argparse
import torch
import model
import data
import train
import query
parser = argparse.ArgumentParser(
description=r"""Chatbot influenced by the tutorial
on https://pytorch.org/tutorials/beginner/chatbot_tutorial.html
    and pytorch official examples on github""")
parser.add_argument('--mode', type=str, default='train',
help='mode to run main (train, query)')
parser.add_argument('--device', type=str, default='best',
help='device to use (cpu, cuda, best)')
parser.add_argument('--datadir', type=str,
default='./data/cornell_movie-dialogs_corpus/',
help='directory of the data corpus')
parser.add_argument('--epochs', type=int, default=100,
help=r"Number of epochs to train the model")
parser.add_argument('--learning_rate', type=float, default=1e-3,
help=r"Learning rate")
parser.add_argument('--batch_size', type=int, default=64,
help=r"Batch size during training")
parser.add_argument('--max_word_length', type=int, default=20,
help=r"Max word length (required if batch size > 1)")
parser.add_argument('--min_count', type=int, default=1000,
help=r"Min number of times word is repeated to be added")
parser.add_argument('--print_every', type=int, default=10,
help=r"Generate words every print_every steps during train")
parser.add_argument('--pretrained_word_vectors', type=str, default="",
help="optional directory to pretrained word vectors")
parser.add_argument('--checkpoint', type=str, default='./model_state_dict.pt',
help='directory to state_dict checkpoint to use')
parser.add_argument('--query', type=str, default=None,
help='pass your query to the bot')
parser.add_argument('--use_existing_checkpoint', type=int, default=1,
help='use existing checkpoint if it exists')
args = parser.parse_args()
# Globals
DATA_DIR = args.datadir
if args.pretrained_word_vectors != "":
WORD_VECTORS_FILE = args.pretrained_word_vectors
else:
WORD_VECTORS_FILE = None
VALID_MODES = ['train', 'query']
MODE = args.mode
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
PRINT_EVERY = args.print_every
LEARNING_RATE = args.learning_rate
MAX_WORD_LEN = args.max_word_length
MIN_COUNT = args.min_count
if args.device == 'best':
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
DEVICE = args.device
CHECKPOINT_FILE = args.checkpoint
USE_EXISTING_CHECKPOINT = False if args.use_existing_checkpoint == 0 else True
QUERY = args.query
RAW_DATA = False
def main():
# Get the word tokens
wvectors = None
if MODE == 'train':
sinputs, stargets = data.get_sentences(DATA_DIR, MAX_WORD_LEN,
MIN_COUNT)
if WORD_VECTORS_FILE is not None:
tokens, wvectors = data.get_tokens_from_vectors_file(WORD_VECTORS_FILE)
sinputs, stargets = data.clear_not_found(tokens, sinputs, stargets)
tokens = tokens[4:]
elif os.path.isfile(CHECKPOINT_FILE) and USE_EXISTING_CHECKPOINT:
tokens = data.get_tokens_from_checkpoint_file(CHECKPOINT_FILE)
else:
# get list of lists for each word, sentence in inputs and targets
tokens = data.get_tokens_from_data(sinputs, stargets)
SDATA = True
# Populate the vocab based on the word tokens
vocab = data.Vocab(tokens)
# Create the model and move it to device
net = model.get_model(vocab, wvectors=wvectors,
checkpoint_file=CHECKPOINT_FILE,
use_existing_checkpoint=USE_EXISTING_CHECKPOINT,
device=DEVICE, outwlen=MAX_WORD_LEN)
if MODE == 'train':
inputs, mask_inputs, targets, mask_targets = data.get_tensors(
vocab, [sinputs, stargets],
MAX_WORD_LEN)
print("-----------Starting training--------------")
train.step(net, inputs, mask_inputs, targets, mask_targets, vocab, DEVICE,
EPOCHS, BATCH_SIZE, CHECKPOINT_FILE, LEARNING_RATE,
PRINT_EVERY)
elif MODE == 'query':
print("Pass a query or type exit to terminate")
q = input("> ")
while q != 'exit':
query.respond(q, net, vocab, DEVICE)
q = input("> ")
def test():
sinputs, stargets = data.get_sentences(DATA_DIR, MAX_WORD_LEN, MIN_COUNT)
tokens = data.get_tokens_from_data(sinputs, stargets)
vocab = data.Vocab(tokens)
inputs, mask_inputs, targets, mask_targets = data.get_tensors(
vocab, [sinputs, stargets],
MAX_WORD_LEN)
return sinputs, stargets, vocab
if __name__ == "__main__":
main()
| 42.076271 | 82 | 0.635247 |
772524f960459b6bffacd94f88ae40a6879eaf13 | 5,358 | py | Python | menu_module.py | helliio/Reddit-image-extractor | 8dad611267b4efef3094a5f8bbf6ff9601031c65 | [
"MIT"
] | 1 | 2017-12-14T19:21:05.000Z | 2017-12-14T19:21:05.000Z | menu_module.py | helliio/Reddit-image-extractor | 8dad611267b4efef3094a5f8bbf6ff9601031c65 | [
"MIT"
] | null | null | null | menu_module.py | helliio/Reddit-image-extractor | 8dad611267b4efef3094a5f8bbf6ff9601031c65 | [
"MIT"
] | null | null | null | import config
def run_menu():
print("--------------------------------------------------")
print("Welcome to Reddit image extractor version 1.2.4")
print("--------------------------------------------------" + "\n")
prompt_subreddits()
prompt_sort_type()
prompt_down_limit()
def prompt_subreddits():
while True:
if not config.subreddit:
edit_subreddits()
break
subreddits = get_subreddits()
print("You have set: " + subreddits + "As your default, do you wish to change it? [y/N]" + "\n")
change_subreddit_input = input().lower()
print("")
if change_subreddit_input == "" or change_subreddit_input == "n":
break
elif change_subreddit_input == "y":
edit_subreddits()
break
def get_subreddits():
subreddits = ""
for subreddit in config.subreddit:
subreddits = subreddits + subreddit + " "
return subreddits
def edit_subreddits():
while True:
if config.subreddit:
print("You have set: " + get_subreddits() + "what do you wish to do?:" + "\n")
else:
print("You have not set any subreddits")
print("1 Add subreddit")
print("2 Clear")
print("3 Done" + "\n")
change_subreddit_choice = input()
print("")
if change_subreddit_choice == "1":
new_subreddit_input = input("Enter subreddit: ")
print("")
config.subreddit.append(new_subreddit_input)
elif change_subreddit_choice == "2":
config.subreddit = []
elif change_subreddit_choice == "3":
if config.subreddit:
break
def prompt_sort_type():
while True:
sort_type = config.sort_type
if sort_type == "":
sort_type = "Hot"
print("You have set: " + sort_type + " as your default, do you wish to change it? [y/N]" + "\n")
change_subreddit_input = input().lower()
print("")
if change_subreddit_input == "" or change_subreddit_input == "n":
break
elif change_subreddit_input == "y":
edit_sort_type()
break
def edit_sort_type():
while True:
print("What do you wish to choose?" + "\n")
print("1 Hot")
print("2 New")
print("3 Rising")
print("4 Controversial")
print("5 Top")
print("6 Gilded" + "\n")
type_choice = input()
print("")
if type_choice == "1":
break
elif type_choice == "2":
config.sort_type = "new"
break
elif type_choice == "3":
config.sort_type = "rising"
break
elif type_choice == "4":
config.sort_type = "controversial"
prompt_sort_arg("controversial")
break
elif type_choice == "5":
config.sort_type = "top"
prompt_sort_arg("top")
break
elif type_choice == "6":
config.sort_type = "gilded"
break
def prompt_sort_arg(arg_type):
while True:
sort_arg = config.sort_arg
if sort_arg == "":
sort_arg = "Past 24 hours"
print("You have set: " + sort_arg + " as your default, do you wish to change it? [y/N]" + "\n")
change_subreddit_input = input().lower()
print("")
if change_subreddit_input == "" or change_subreddit_input == "n":
break
elif change_subreddit_input == "y":
edit_sort_arg(arg_type)
break
def edit_sort_arg(arg_type):
while True:
print("What time frame do you wish to choose?" + "\n")
print("1 Past hour")
print("2 Past 24 hours")
print("3 Past week")
print("4 Past month")
print("5 Past year")
print("6 All time" + "\n")
type_choice = input()
print("")
if type_choice == "1":
config.sort_arg = "?sort=" + arg_type + "&t=hour"
break
elif type_choice == "2":
break
elif type_choice == "3":
config.sort_arg = "?sort=" + arg_type + "&t=week"
break
elif type_choice == "4":
config.sort_arg = "?sort=" + arg_type + "&t=month"
break
elif type_choice == "5":
config.sort_arg = "?sort=" + arg_type + "&t=year"
break
elif type_choice == "6":
config.sort_arg = "?sort=" + arg_type + "&t=all"
break
def prompt_down_limit():
while True:
if config.down_limit <= 0:
edit_down_limit()
break
print("You have set: " + str(config.down_limit) + " As your default, do you wish to change it? [y/N]" + "\n")
change_down_limit_input = input().lower()
print("")
if change_down_limit_input == "" or change_down_limit_input == "n":
break
elif change_down_limit_input == "y":
edit_down_limit()
break
def edit_down_limit():
while True:
user_input = input("Enter number of pictures you wish to download: ")
print("")
try:
val = int(user_input)
config.down_limit = val
break
except ValueError:
print("That's not an int!" + "\n")
| 32.472727 | 117 | 0.522583 |
64323f8778839fe16fe92bdd24ef5ef34a9b7fe1 | 11,764 | py | Python | houdini/handlers/redemption/code.py | EmperorBale/houdini | e501bfc9aa9493919d8c581e467f378109045db1 | [
"MIT"
] | 1 | 2020-09-30T06:42:15.000Z | 2020-09-30T06:42:15.000Z | houdini/handlers/redemption/code.py | EmperorBale/houdini | e501bfc9aa9493919d8c581e467f378109045db1 | [
"MIT"
] | null | null | null | houdini/handlers/redemption/code.py | EmperorBale/houdini | e501bfc9aa9493919d8c581e467f378109045db1 | [
"MIT"
] | 3 | 2020-09-30T12:15:02.000Z | 2021-11-11T09:20:36.000Z | import random
from datetime import datetime
from houdini import handlers
from houdini.constants import ClientType
from houdini.data import db
from houdini.data.igloo import Furniture, Igloo
from houdini.data.item import Item
from houdini.data.redemption import PenguinRedemptionCode, RedemptionAwardCard, \
RedemptionAwardFlooring, RedemptionAwardFurniture, RedemptionAwardIgloo, RedemptionAwardItem, \
RedemptionAwardLocation, RedemptionAwardPuffle, RedemptionAwardPuffleItem, RedemptionCode
from houdini.handlers import XTPacket
from houdini.handlers.games.ninja.card import ninja_rank_up
from houdini.handlers.games.ninja.fire import fire_ninja_rank_up
TreasureUnlockCount = 3
NinjaRankUpChoice = 1
FireNinjaRankUpChoice = 3
WaterNinjaRankUpChoice = 4
SnowNinjaRankUpChoice = 5
@handlers.handler(XTPacket('rsc', ext='red'), pre_login=True, client=ClientType.Legacy)
@handlers.depends_on_packet(XTPacket('rjs', ext='red'))
async def handle_code_legacy(p, redemption_code: str):
query = RedemptionCode.distinct(RedemptionCode.id)\
.load(cards=RedemptionAwardCard.distinct(RedemptionAwardCard.card_id),
items=RedemptionAwardItem.distinct(RedemptionAwardItem.item_id))\
.query.where(RedemptionCode.code == redemption_code)
codes = await query.gino.all()
if not codes:
return await p.send_error(720)
code = codes[0]
awards = []
if code.uses is not None:
redeemed_count = await db.select([db.func.count(PenguinRedemptionCode.code_id)]).where(
PenguinRedemptionCode.code_id == code.id).gino.scalar()
if redeemed_count >= code.uses:
return await p.send_error(721)
penguin_redeemed = await PenguinRedemptionCode.query.\
where((PenguinRedemptionCode.code_id == code.id) &
(PenguinRedemptionCode.penguin_id == p.id)).gino.scalar()
if penguin_redeemed:
return await p.send_error(721)
if code.expires is not None and code.expires < datetime.now():
return await p.send_error(726)
if code.type == 'GOLDEN':
p.server.cache.set(f'{p.id}.{code.code}.golden_code', code)
return await p.send_xt('rsc', 'GOLDEN', p.ninja_rank, p.fire_ninja_rank, p.water_ninja_rank,
int(p.fire_ninja_rank > 0), int(p.water_ninja_rank > 0))
if code.type == 'CARD':
for award in code.cards:
awards.append(str(award.card_id))
await p.add_card(p.server.cards[award.card_id])
else:
if code.items:
for award in code.items:
awards.append(str(award.item_id))
await p.add_inventory(p.server.items[award.item_id], notify=False)
await PenguinRedemptionCode.create(penguin_id=p.id, code_id=code.id)
await p.update(coins=p.coins + code.coins).apply()
return await p.send_xt('rsc', code.type, ','.join(map(str, awards)), code.coins)
@handlers.handler(XTPacket('rsc', ext='red'), pre_login=True, client=ClientType.Vanilla)
@handlers.depends_on_packet(XTPacket('rjs', ext='red'))
async def handle_code_vanilla(p, redemption_code: str):
query = RedemptionCode.distinct(RedemptionCode.id) \
.load(cards=RedemptionAwardCard.distinct(RedemptionAwardCard.card_id),
items=RedemptionAwardItem.distinct(RedemptionAwardItem.item_id),
flooring=RedemptionAwardFlooring.distinct(RedemptionAwardFlooring.flooring_id),
furniture=RedemptionAwardFurniture.distinct(RedemptionAwardFurniture.furniture_id),
igloos=RedemptionAwardIgloo.distinct(RedemptionAwardIgloo.igloo_id),
locations=RedemptionAwardLocation.distinct(RedemptionAwardLocation.location_id),
puffles=RedemptionAwardPuffle.distinct(RedemptionAwardPuffle.puffle_id),
puffle_items=RedemptionAwardPuffleItem.distinct(RedemptionAwardPuffleItem.puffle_item_id))\
.query.where(RedemptionCode.code == redemption_code)
codes = await query.gino.all()
if not codes:
return await p.send_error(720)
code = codes[0]
awards = []
if code.uses is not None:
redeemed_count = await db.select([db.func.count(PenguinRedemptionCode.code_id)]).where(
PenguinRedemptionCode.code_id == code.id).gino.scalar()
if redeemed_count >= code.uses:
return await p.send_error(721)
penguin_redeemed = await PenguinRedemptionCode.query.where((PenguinRedemptionCode.code_id == code.id) &
(PenguinRedemptionCode.penguin_id == p.id)).gino.scalar()
if penguin_redeemed:
return await p.send_error(721)
if code.expires is not None and code.expires < datetime.now():
return await p.send_error(726)
if code.type == 'CATALOG':
num_redeemed_codes = await PenguinRedemptionCode.join(RedemptionCode).count().where(
(PenguinRedemptionCode.penguin_id == p.id) & (RedemptionCode.type == 'CATALOG')).gino.scalar()
owned_ids = ','.join((str(item.id) for item in p.server.items.treasure if item.id in p.inventory))
p.server.cache.set(f'{p.id}.{code.code}.treasure_code', code)
return await p.send_xt('rsc', 'treasurebook', TreasureUnlockCount, owned_ids, num_redeemed_codes)
if code.type == 'GOLDEN':
p.server.cache.set(f'{p.id}.{code.code}.golden_code', code)
return await p.send_xt('rsc', 'GOLDEN', p.ninja_rank, p.fire_ninja_rank, p.water_ninja_rank, 0,
int(p.fire_ninja_rank > 0), int(p.water_ninja_rank > 0), 0)
if code.type == 'INNOCENT':
innocent_redeemed_items = {item for item in p.server.items.innocent if item.id in p.inventory}
innocent_redeemed_furniture = {item for item in p.server.furniture.innocent if item.id in p.furniture}
innocent_redeemed = innocent_redeemed_items.union(innocent_redeemed_furniture)
innocent_items = set(p.server.items.innocent + p.server.furniture.innocent)
innocent_remaining = innocent_items - innocent_redeemed
choices = random.sample(innocent_remaining, min(len(innocent_remaining), 3))
if len(innocent_redeemed) + 3 == len(innocent_items):
choices.append(p.server.igloos[53])
for item in choices:
if type(item) is Item:
awards.append(str(item.id))
await p.add_inventory(item, notify=False)
elif type(item) is Igloo:
awards.append(f'g{item.id}')
await p.add_igloo(item, notify=False)
elif type(item) is Furniture:
awards.append(f'f{item.id}')
await p.add_furniture(item, notify=False)
await PenguinRedemptionCode.create(penguin_id=p.id, code_id=code[0].id)
return await p.send_xt('rsc', 'INNOCENT', ','.join(map(str, awards)),
len(innocent_redeemed) + len(choices),
len(innocent_items))
if code.type == 'CARD':
for award in code.cards:
awards.append(str(award.card_id))
await p.add_card(p.server.cards[award.card_id])
else:
if code.items:
for award in code.items:
awards.append(str(award.item_id))
await p.add_inventory(p.server.items[award.item_id], notify=False)
if code.furniture:
for award in code.furniture:
awards.append(f'f{award.furniture_id}')
await p.add_furniture(p.server.furniture[award.furniture_id], notify=False)
if code.igloos:
for award in code.igloos:
awards.append(f'g{award.igloo_id}')
await p.add_igloo(p.server.igloos[award.igloo_id], notify=False)
if code.locations:
for award in code.locations:
awards.append(f'loc{award.location_id}')
await p.add_location(p.server.locations[award.location_id], notify=False)
if code.flooring:
for award in code.flooring:
awards.append(f'flr{award.flooring_id}')
await p.add_flooring(p.server.flooring[award.flooring_id], notify=False)
if code.puffles:
for award in code.puffles:
awards.append(f'p{award.puffle_id}')
if code.puffle_items:
for award in code.puffle_items:
awards.append(f'pi{award.puffle_item_id}')
await p.add_puffle_item(p.server.puffle_items[award.puffle_item_id], notify=False)
await PenguinRedemptionCode.create(penguin_id=p.id, code_id=code.id)
await p.update(coins=p.coins + code.coins).apply()
return await p.send_xt('rsc', code.type, ','.join(map(str, awards)), code.coins or '')
@handlers.handler(XTPacket('rsgc', ext='red'), pre_login=True)
@handlers.depends_on_packet(XTPacket('rsc', ext='red'))
async def handle_golden_choice(p, redemption_code: str, choice: int):
code_key = f'{p.id}.{redemption_code}.golden_code'
code = p.server.cache.get(code_key)
p.server.cache.delete(code_key)
if not code:
return await p.close()
if len(code.cards) < 6:
return await p.close()
cards = list(code.cards)
card_ids = [str(card.card_id) for card in cards]
if choice == NinjaRankUpChoice:
await ninja_rank_up(p)
cards = cards[:4]
await p.send_xt('rsgc', ','.join(card_ids[:4]) + '|' + str(p.ninja_rank))
elif choice == FireNinjaRankUpChoice:
await fire_ninja_rank_up(p)
cards = cards[:4]
await p.send_xt('rsgc', ','.join(card_ids[:4]) + '|' + str(p.fire_ninja_rank))
else:
cards = cards[:4] + cards[-2:]
await p.send_xt('rsgc', ','.join(card_ids[:4]) + '|' + ','.join(card_ids[-2:]))
for card in cards:
await p.add_card(p.server.cards[card.card_id])
await PenguinRedemptionCode.create(penguin_id=p.id, code_id=code.id)
@handlers.handler(XTPacket('rscrt', ext='red'), pre_login=True)
@handlers.depends_on_packet(XTPacket('rsc', ext='red'))
async def handle_send_cart(p, redemption_code: str, choice: str):
code_key = f'{p.id}.{redemption_code}.treasure_code'
code = p.server.cache.get(code_key)
p.server.cache.delete(code_key)
if code is None:
return await p.close()
coins = 0
awards = []
choices = choice.split(',')
if len(choices) > TreasureUnlockCount:
return await p.close()
for choice in choices:
if choice.startswith('c'):
coins += 500
elif choice.startswith('p'):
awards.append(choice)
elif choice.isdigit():
awards.append(choice)
await p.add_inventory(p.server.items[int(choice)], notify=False)
if code.uses is not None:
await PenguinRedemptionCode.create(penguin_id=p.id, code_id=code.id)
await p.update(coins=p.coins + coins).apply()
await p.send_xt('rscrt', ','.join(awards), coins or '')
@handlers.handler(XTPacket('rsp', ext='red'), pre_login=True)
@handlers.depends_on_packet(XTPacket('rsc', ext='red'))
async def handle_redeem_puffle(p, name: str, puffle_type: int):
if puffle_type not in p.server.puffles:
return await p.close()
if not 16 > len(name) >= 3:
await p.send_xt('rsp', 0)
if len(p.puffles) >= 75:
return await p.send_error(440)
puffle = await p.puffles.insert(puffle_id=puffle_type, name=name)
await p.add_puffle_item(p.server.puffle_items[3], quantity=5, cost=0)
await p.add_puffle_item(p.server.puffle_items[79], cost=0)
await p.add_puffle_item(p.server.puffle_items[p.server.puffles[puffle.puffle_id].favourite_toy])
await p.add_inbox(p.server.postcards[111], details=puffle.name)
await p.send_xt('rsp', 1)
| 43.25 | 120 | 0.663295 |
5931a24e15b2402e07e774c1e78b03374cddfdce | 18,750 | py | Python | research/cognitive_planning/train_supervised_active_vision.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 82,518 | 2016-02-05T12:07:23.000Z | 2022-03-31T23:09:47.000Z | research/cognitive_planning/train_supervised_active_vision.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 9,021 | 2016-03-08T01:02:05.000Z | 2022-03-31T08:06:35.000Z | research/cognitive_planning/train_supervised_active_vision.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 54,341 | 2016-02-06T17:19:55.000Z | 2022-03-31T10:27:44.000Z | # Copyright 2018 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=line-too-long
# pyformat: disable
"""Train and eval for supervised navigation training.
For training:
python train_supervised_active_vision.py \
--mode='train' \
--logdir=$logdir/checkin_log_det/ \
--modality_types='det' \
--batch_size=8 \
--train_iters=200000 \
--lstm_cell_size=2048 \
--policy_fc_size=2048 \
--sequence_length=20 \
--max_eval_episode_length=100 \
--test_iters=194 \
--gin_config=envs/configs/active_vision_config.gin \
--gin_params='ActiveVisionDatasetEnv.dataset_root="$datadir"' \
--logtostderr
For testing:
python train_supervised_active_vision.py
--mode='eval' \
--logdir=$logdir/checkin_log_det/ \
--modality_types='det' \
--batch_size=8 \
--train_iters=200000 \
--lstm_cell_size=2048 \
--policy_fc_size=2048 \
--sequence_length=20 \
--max_eval_episode_length=100 \
--test_iters=194 \
--gin_config=envs/configs/active_vision_config.gin \
--gin_params='ActiveVisionDatasetEnv.dataset_root="$datadir"' \
--logtostderr
"""
import collections
import os
import time
from absl import app
from absl import flags
from absl import logging
import networkx as nx
import numpy as np
import tensorflow as tf
import gin
import embedders
import policies
import tasks
from envs import active_vision_dataset_env
from envs import task_env
slim = tf.contrib.slim
flags.DEFINE_string('logdir', '',
'Path to a directory to write summaries and checkpoints')
# Parameters controlling the training setup. In general one would not need to
# modify them.
flags.DEFINE_string('master', 'local',
'BNS name of the TensorFlow master, or local.')
flags.DEFINE_integer('task_id', 0,
'Task id of the replica running the training.')
flags.DEFINE_integer('ps_tasks', 0,
'Number of tasks in the ps job. If 0 no ps job is used.')
flags.DEFINE_integer('decay_steps', 1000,
'Number of steps for exponential decay.')
flags.DEFINE_float('learning_rate', 0.0001, 'Learning rate.')
flags.DEFINE_integer('batch_size', 8, 'Batch size.')
flags.DEFINE_integer('sequence_length', 20, 'sequence length')
flags.DEFINE_integer('train_iters', 200000, 'number of training iterations.')
flags.DEFINE_integer('save_summaries_secs', 300,
'number of seconds between saving summaries')
flags.DEFINE_integer('save_interval_secs', 300,
'numer of seconds between saving variables')
flags.DEFINE_integer('log_every_n_steps', 20, 'number of steps between logging')
flags.DEFINE_string('modality_types', '',
'modality names in _ separated format')
flags.DEFINE_string('conv_window_sizes', '8_4_3',
'conv window size in separated by _')
flags.DEFINE_string('conv_strides', '4_2_1', '')
flags.DEFINE_string('conv_channels', '8_16_16', '')
flags.DEFINE_integer('embedding_fc_size', 128,
'size of embedding for each modality')
flags.DEFINE_integer('obs_resolution', 64,
'resolution of the input observations')
flags.DEFINE_integer('lstm_cell_size', 2048, 'size of lstm cell size')
flags.DEFINE_integer('policy_fc_size', 2048,
'size of fully connected layers for policy part')
flags.DEFINE_float('weight_decay', 0.0002, 'weight decay')
flags.DEFINE_integer('goal_category_count', 5, 'number of goal categories')
flags.DEFINE_integer('action_size', 7, 'number of possible actions')
flags.DEFINE_integer('max_eval_episode_length', 100,
'maximum sequence length for evaluation.')
flags.DEFINE_enum('mode', 'train', ['train', 'eval'],
'indicates whether it is in training or evaluation')
flags.DEFINE_integer('test_iters', 194,
'number of iterations that the eval needs to be run')
flags.DEFINE_multi_string('gin_config', [],
'List of paths to a gin config files for the env.')
flags.DEFINE_multi_string('gin_params', [],
'Newline separated list of Gin parameter bindings.')
flags.DEFINE_string(
'resnet50_path', './resnet_v2_50_checkpoint/resnet_v2_50.ckpt', 'path to resnet50'
'checkpoint')
flags.DEFINE_bool('freeze_resnet_weights', True, '')
flags.DEFINE_string(
'eval_init_points_file_name', '',
'Name of the file that containts the initial locations and'
'worlds for each evalution point')
FLAGS = flags.FLAGS
TRAIN_WORLDS = [
'Home_001_1', 'Home_001_2', 'Home_002_1', 'Home_003_1', 'Home_003_2',
'Home_004_1', 'Home_004_2', 'Home_005_1', 'Home_005_2', 'Home_006_1',
'Home_010_1'
]
TEST_WORLDS = ['Home_011_1', 'Home_013_1', 'Home_016_1']
def create_modality_types():
"""Parses the modality_types and returns a list of task_env.ModalityType."""
if not FLAGS.modality_types:
raise ValueError('there needs to be at least one modality type')
modality_types = FLAGS.modality_types.split('_')
for x in modality_types:
if x not in ['image', 'sseg', 'det', 'depth']:
raise ValueError('invalid modality type: {}'.format(x))
conversion_dict = {
'image': task_env.ModalityTypes.IMAGE,
'sseg': task_env.ModalityTypes.SEMANTIC_SEGMENTATION,
'depth': task_env.ModalityTypes.DEPTH,
'det': task_env.ModalityTypes.OBJECT_DETECTION,
}
return [conversion_dict[k] for k in modality_types]
def create_task_io_config(
modality_types,
goal_category_count,
action_size,
sequence_length,
):
"""Generates task io config."""
shape_prefix = [sequence_length, FLAGS.obs_resolution, FLAGS.obs_resolution]
shapes = {
task_env.ModalityTypes.IMAGE: [sequence_length, 224, 224, 3],
task_env.ModalityTypes.DEPTH: shape_prefix + [
2,
],
task_env.ModalityTypes.SEMANTIC_SEGMENTATION: shape_prefix + [
1,
],
task_env.ModalityTypes.OBJECT_DETECTION: shape_prefix + [
90,
]
}
types = {k: tf.float32 for k in shapes}
types[task_env.ModalityTypes.IMAGE] = tf.uint8
inputs = collections.OrderedDict(
[[mtype, (types[mtype], shapes[mtype])] for mtype in modality_types])
inputs[task_env.ModalityTypes.GOAL] = (tf.float32,
[sequence_length, goal_category_count])
inputs[task_env.ModalityTypes.PREV_ACTION] = (tf.float32, [
sequence_length, action_size + 1
])
print inputs
return tasks.UnrolledTaskIOConfig(
inputs=inputs,
output=(tf.float32, [sequence_length, action_size]),
query=None)
def map_to_embedder(modality_type):
"""Maps modality_type to its corresponding embedder."""
if modality_type == task_env.ModalityTypes.PREV_ACTION:
return None
if modality_type == task_env.ModalityTypes.GOAL:
return embedders.IdentityEmbedder()
if modality_type == task_env.ModalityTypes.IMAGE:
return embedders.ResNet50Embedder()
conv_window_sizes = [int(x) for x in FLAGS.conv_window_sizes.split('_')]
conv_channels = [int(x) for x in FLAGS.conv_channels.split('_')]
conv_strides = [int(x) for x in FLAGS.conv_strides.split('_')]
params = tf.contrib.training.HParams(
to_one_hot=modality_type == task_env.ModalityTypes.SEMANTIC_SEGMENTATION,
one_hot_length=10,
conv_sizes=conv_window_sizes,
conv_strides=conv_strides,
conv_channels=conv_channels,
embedding_size=FLAGS.embedding_fc_size,
weight_decay_rate=FLAGS.weight_decay,
)
return embedders.SmallNetworkEmbedder(params)
def create_train_and_init_ops(policy, task):
"""Creates training ops given the arguments.
Args:
policy: the policy for the task.
task: the task instance.
Returns:
    train_op: the op that needs to be run at each step.
init_fn: the op that initializes the variables if there is no previous
checkpoint. If Resnet50 is not used in the model it is None, otherwise
it reads the weights from FLAGS.resnet50_path and sets the init_fn
to the op that initializes the ResNet50 with the pre-trained weights.
"""
assert isinstance(task, tasks.GotoStaticXNoExplorationTask)
assert isinstance(policy, policies.Policy)
inputs, _, gt_outputs, masks = task.tf_episode_batch(FLAGS.batch_size)
outputs, _ = policy.build(inputs, None)
loss = task.target_loss(gt_outputs, outputs, masks)
init_fn = None
# If resnet is added to the graph, init_fn should initialize resnet weights
# if there is no previous checkpoint.
variables_assign_dict = {}
vars_list = []
for v in slim.get_model_variables():
if v.name.find('resnet') >= 0:
if not FLAGS.freeze_resnet_weights:
vars_list.append(v)
variables_assign_dict[v.name[v.name.find('resnet'):-2]] = v
else:
vars_list.append(v)
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
FLAGS.learning_rate,
global_step,
decay_steps=FLAGS.decay_steps,
decay_rate=0.98,
staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = slim.learning.create_train_op(
loss,
optimizer,
global_step=global_step,
variables_to_train=vars_list,
)
if variables_assign_dict:
init_fn = slim.assign_from_checkpoint_fn(
FLAGS.resnet50_path,
variables_assign_dict,
ignore_missing_vars=False)
scalar_summaries = {}
scalar_summaries['LR'] = learning_rate
scalar_summaries['loss'] = loss
for name, summary in scalar_summaries.iteritems():
tf.summary.scalar(name, summary)
return train_op, init_fn
def create_eval_ops(policy, config, possible_targets):
"""Creates the necessary ops for evaluation."""
inputs_feed = collections.OrderedDict([[
mtype,
tf.placeholder(config.inputs[mtype].type,
[1] + config.inputs[mtype].shape)
] for mtype in config.inputs])
inputs_feed[task_env.ModalityTypes.PREV_ACTION] = tf.placeholder(
tf.float32, [1, 1] + [
config.output.shape[-1] + 1,
])
prev_state_feed = [
tf.placeholder(
tf.float32, [1, FLAGS.lstm_cell_size], name='prev_state_{}'.format(i))
for i in range(2)
]
policy_outputs = policy.build(inputs_feed, prev_state_feed)
summary_feed = {}
for c in possible_targets + ['mean']:
summary_feed[c] = tf.placeholder(
tf.float32, [], name='eval_in_range_{}_input'.format(c))
tf.summary.scalar('eval_in_range_{}'.format(c), summary_feed[c])
return inputs_feed, prev_state_feed, policy_outputs, (tf.summary.merge_all(),
summary_feed)
def unroll_policy_for_eval(
sess,
env,
inputs_feed,
prev_state_feed,
policy_outputs,
number_of_steps,
output_folder,
):
"""unrolls the policy for testing.
Args:
sess: tf.Session
env: The environment.
inputs_feed: dictionary of placeholder for the input modalities.
prev_state_feed: placeholder for the input to the prev_state of the model.
policy_outputs: tensor that contains outputs of the policy.
number_of_steps: maximum number of unrolling steps.
output_folder: output_folder where the function writes a dictionary of
detailed information about the path. The dictionary keys are 'states' and
      'distance'. The value for 'states' is the list of states that the agent
      visits along the path. The value for 'distance' contains the length of the
      shortest path to the goal at each step.
Returns:
states: list of states along the path.
distance: list of distances along the path.
"""
prev_state = [
np.zeros((1, FLAGS.lstm_cell_size), dtype=np.float32) for _ in range(2)
]
prev_action = np.zeros((1, 1, FLAGS.action_size + 1), dtype=np.float32)
obs = env.reset()
distances_to_goal = []
states = []
unique_id = '{}_{}'.format(env.cur_image_id(), env.goal_string)
for _ in range(number_of_steps):
distances_to_goal.append(
np.min([
len(
nx.shortest_path(env.graph, env.pose_to_vertex(env.state()),
env.pose_to_vertex(target_view)))
for target_view in env.targets()
]))
states.append(env.state())
feed_dict = {inputs_feed[mtype]: [[obs[mtype]]] for mtype in inputs_feed}
feed_dict[prev_state_feed[0]] = prev_state[0]
feed_dict[prev_state_feed[1]] = prev_state[1]
action_values, prev_state = sess.run(policy_outputs, feed_dict=feed_dict)
chosen_action = np.argmax(action_values[0])
obs, _, done, info = env.step(np.int32(chosen_action))
prev_action[0][0][chosen_action] = 1.
prev_action[0][0][-1] = float(info['success'])
    # If the agent chooses the stop action or the number of steps exceeds
# env._episode_length.
if done:
break
# logging.info('distance = %d, id = %s, #steps = %d', distances_to_goal[-1],
output_path = os.path.join(output_folder, unique_id + '.npy')
with tf.gfile.Open(output_path, 'w') as f:
print 'saving path information to {}'.format(output_path)
np.save(f, {'states': states, 'distance': distances_to_goal})
return states, distances_to_goal
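# Minimal helper sketch for reading back the per-episode .npy files written
# above; the dict layout ('states', 'distance') comes from the np.save call in
# unroll_policy_for_eval, and allow_pickle assumes a numpy version exposing it.
def load_eval_path(npy_path):
  """Returns (states, distances) recorded for one evaluation episode."""
  record = np.load(npy_path, allow_pickle=True).item()
  return record['states'], record['distance']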
def init(sequence_length, eval_init_points_file_name, worlds):
"""Initializes the common operations between train and test."""
modality_types = create_modality_types()
logging.info('modality types: %r', modality_types)
# negative reward_goal_range prevents the env from terminating early when the
# agent is close to the goal. The policy should keep the agent until the end
  # of the 100 steps either through choosing the stop action or oscillating around
# the target.
env = active_vision_dataset_env.ActiveVisionDatasetEnv(
modality_types=modality_types +
[task_env.ModalityTypes.GOAL, task_env.ModalityTypes.PREV_ACTION],
reward_goal_range=-1,
eval_init_points_file_name=eval_init_points_file_name,
worlds=worlds,
output_size=FLAGS.obs_resolution,
)
config = create_task_io_config(
modality_types=modality_types,
goal_category_count=FLAGS.goal_category_count,
action_size=FLAGS.action_size,
sequence_length=sequence_length,
)
task = tasks.GotoStaticXNoExplorationTask(env=env, config=config)
embedders_dict = {mtype: map_to_embedder(mtype) for mtype in config.inputs}
policy_params = tf.contrib.training.HParams(
lstm_state_size=FLAGS.lstm_cell_size,
fc_channels=FLAGS.policy_fc_size,
weight_decay=FLAGS.weight_decay,
target_embedding_size=FLAGS.embedding_fc_size,
)
policy = policies.LSTMPolicy(
modality_names=config.inputs.keys(),
embedders_dict=embedders_dict,
action_size=FLAGS.action_size,
params=policy_params,
max_episode_length=sequence_length)
return env, config, task, policy
def test():
"""Contains all the operations for testing policies."""
env, config, _, policy = init(1, 'all_init_configs', TEST_WORLDS)
inputs_feed, prev_state_feed, policy_outputs, summary_op = create_eval_ops(
policy, config, env.possible_targets)
sv = tf.train.Supervisor(logdir=FLAGS.logdir)
prev_checkpoint = None
with sv.managed_session(
start_standard_services=False,
config=tf.ConfigProto(allow_soft_placement=True)) as sess:
while not sv.should_stop():
while True:
new_checkpoint = tf.train.latest_checkpoint(FLAGS.logdir)
print 'new_checkpoint ', new_checkpoint
if not new_checkpoint:
time.sleep(1)
continue
if prev_checkpoint is None:
prev_checkpoint = new_checkpoint
break
if prev_checkpoint != new_checkpoint:
prev_checkpoint = new_checkpoint
break
else: # if prev_checkpoint == new_checkpoint, we have to wait more.
time.sleep(1)
checkpoint_step = int(new_checkpoint[new_checkpoint.rfind('-') + 1:])
sv.saver.restore(sess, new_checkpoint)
print '--------------------'
print 'evaluating checkpoint {}'.format(new_checkpoint)
folder_path = os.path.join(FLAGS.logdir, 'evals', str(checkpoint_step))
if not tf.gfile.Exists(folder_path):
tf.gfile.MakeDirs(folder_path)
eval_stats = {c: [] for c in env.possible_targets}
for test_iter in range(FLAGS.test_iters):
print 'evaluating {} of {}'.format(test_iter, FLAGS.test_iters)
_, distance_to_goal = unroll_policy_for_eval(
sess,
env,
inputs_feed,
prev_state_feed,
policy_outputs,
FLAGS.max_eval_episode_length,
folder_path,
)
print 'goal = {}'.format(env.goal_string)
eval_stats[env.goal_string].append(float(distance_to_goal[-1] <= 7))
eval_stats = {k: np.mean(v) for k, v in eval_stats.iteritems()}
eval_stats['mean'] = np.mean(eval_stats.values())
print eval_stats
feed_dict = {summary_op[1][c]: eval_stats[c] for c in eval_stats}
summary_str = sess.run(summary_op[0], feed_dict=feed_dict)
writer = sv.summary_writer
writer.add_summary(summary_str, checkpoint_step)
writer.flush()
def train():
_, _, task, policy = init(FLAGS.sequence_length, None, TRAIN_WORLDS)
print(FLAGS.save_summaries_secs)
print(FLAGS.save_interval_secs)
print(FLAGS.logdir)
with tf.device(
tf.train.replica_device_setter(ps_tasks=FLAGS.ps_tasks, merge_devices=True)):
train_op, init_fn = create_train_and_init_ops(policy=policy, task=task)
print(FLAGS.logdir)
slim.learning.train(
train_op=train_op,
init_fn=init_fn,
logdir=FLAGS.logdir,
is_chief=FLAGS.task_id == 0,
number_of_steps=FLAGS.train_iters,
save_summaries_secs=FLAGS.save_summaries_secs,
save_interval_secs=FLAGS.save_interval_secs,
session_config=tf.ConfigProto(allow_soft_placement=True),
)
def main(_):
gin.parse_config_files_and_bindings(FLAGS.gin_config, FLAGS.gin_params)
if FLAGS.mode == 'train':
train()
else:
test()
if __name__ == '__main__':
app.run(main)
| 37.202381 | 86 | 0.69472 |
dc5bf0d0d67b8a2609ff9a657aee5448be9b74ff | 4,452 | py | Python | ansible/venv/lib/python2.7/site-packages/ansible/modules/notification/jabber.py | gvashchenkolineate/gvashchenkolineate_infra_trytravis | 0fb18850afe0d8609693ba4b23f29c7cda17d97f | [
"MIT"
] | 17 | 2017-06-07T23:15:01.000Z | 2021-08-30T14:32:36.000Z | ansible/ansible/modules/notification/jabber.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 9 | 2017-06-25T03:31:52.000Z | 2021-05-17T23:43:12.000Z | ansible/ansible/modules/notification/jabber.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 3 | 2018-05-26T21:31:22.000Z | 2019-09-28T17:00:45.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, Brian Coca <bcoca@ansible.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
version_added: "1.2"
module: jabber
short_description: Send a message to jabber user or chat room
description:
- Send a message to jabber
options:
user:
description:
- User as which to connect
required: true
password:
description:
- password for user to connect
required: true
to:
description:
      - user ID or name of the room; when using a room, use a slash to indicate your nick.
required: true
msg:
description:
- The message body.
required: true
host:
description:
- host to connect, overrides user info
port:
description:
- port to connect to, overrides default
default: 5222
encoding:
description:
- message encoding
# informational: requirements for nodes
requirements:
- python xmpp (xmpppy)
author: "Brian Coca (@bcoca)"
'''
EXAMPLES = '''
# send a message to a user
- jabber:
user: mybot@example.net
password: secret
to: friend@example.net
msg: Ansible task finished
# send a message to a room
- jabber:
user: mybot@example.net
password: secret
to: mychaps@conference.example.net/ansiblebot
msg: Ansible task finished
# send a message, specifying the host and port
- jabber:
user: mybot@example.net
host: talk.example.net
port: 5223
password: secret
to: mychaps@example.net
msg: Ansible task finished
'''
import time
import traceback
HAS_XMPP = True
XMPP_IMP_ERR = None
try:
import xmpp
except ImportError:
XMPP_IMP_ERR = traceback.format_exc()
HAS_XMPP = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
def main():
module = AnsibleModule(
argument_spec=dict(
user=dict(required=True),
password=dict(required=True, no_log=True),
to=dict(required=True),
msg=dict(required=True),
host=dict(required=False),
port=dict(required=False, default=5222, type='int'),
encoding=dict(required=False),
),
supports_check_mode=True
)
if not HAS_XMPP:
module.fail_json(msg=missing_required_lib('xmpppy'), exception=XMPP_IMP_ERR)
jid = xmpp.JID(module.params['user'])
user = jid.getNode()
server = jid.getDomain()
port = module.params['port']
password = module.params['password']
try:
to, nick = module.params['to'].split('/', 1)
except ValueError:
to, nick = module.params['to'], None
if module.params['host']:
host = module.params['host']
else:
host = server
if module.params['encoding']:
xmpp.simplexml.ENCODING = module.params['encoding']
msg = xmpp.protocol.Message(body=module.params['msg'])
try:
conn = xmpp.Client(server, debug=[])
if not conn.connect(server=(host, port)):
module.fail_json(rc=1, msg='Failed to connect to server: %s' % (server))
if not conn.auth(user, password, 'Ansible'):
module.fail_json(rc=1, msg='Failed to authorize %s on: %s' % (user, server))
# some old servers require this, also the sleep following send
conn.sendInitPresence(requestRoster=0)
if nick: # sending to room instead of user, need to join
msg.setType('groupchat')
msg.setTag('x', namespace='http://jabber.org/protocol/muc#user')
join = xmpp.Presence(to=module.params['to'])
join.setTag('x', namespace='http://jabber.org/protocol/muc')
conn.send(join)
time.sleep(1)
else:
msg.setType('chat')
msg.setTo(to)
if not module.check_mode:
conn.send(msg)
time.sleep(1)
conn.disconnect()
except Exception as e:
module.fail_json(msg="unable to send msg: %s" % to_native(e), exception=traceback.format_exc())
module.exit_json(changed=False, to=to, user=user, msg=msg.getBody())
if __name__ == '__main__':
main()
| 26.819277 | 103 | 0.635445 |
214b560b2adf814b9c77eca9f83ab4f69820b1c4 | 465 | py | Python | output/models/ms_data/datatypes/facets/non_negative_integer/non_negative_integer_enumeration002_xsd/non_negative_integer_enumeration002.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/datatypes/facets/non_negative_integer/non_negative_integer_enumeration002_xsd/non_negative_integer_enumeration002.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/datatypes/facets/non_negative_integer/non_negative_integer_enumeration002_xsd/non_negative_integer_enumeration002.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
class FooTypeFoo(Enum):
VALUE_456 = 456
@dataclass
class FooType:
class Meta:
name = "fooType"
foo: Optional[FooTypeFoo] = field(
default=None,
metadata={
"type": "Element",
"namespace": "",
"required": True,
}
)
@dataclass
class Test(FooType):
class Meta:
name = "test"
| 16.034483 | 40 | 0.576344 |
de65350b35c7d70502c7f3847464dbfd2f2cc51c | 2,343 | py | Python | Aula60/ForTwo/r2/viagem.py | PabloSchumacher/TrabalhosPython | 828edd35eb40442629211bc9f1477f75fb025d74 | [
"bzip2-1.0.6",
"MIT"
] | null | null | null | Aula60/ForTwo/r2/viagem.py | PabloSchumacher/TrabalhosPython | 828edd35eb40442629211bc9f1477f75fb025d74 | [
"bzip2-1.0.6",
"MIT"
] | null | null | null | Aula60/ForTwo/r2/viagem.py | PabloSchumacher/TrabalhosPython | 828edd35eb40442629211bc9f1477f75fb025d74 | [
"bzip2-1.0.6",
"MIT"
] | null | null | null | from Aula60.ForTwo.r2.embarque import embarque
from Aula60.ForTwo.r2.desembarque import desembarque
from Aula60.ForTwo.r2.terminal import Terminal
from Aula60.ForTwo.r2.aviao import Aviao
from Aula60.ForTwo.r2.local import Local
from Aula60.ForTwo.r2.fortwo import Fortwo
terminal = {'descricao':'terminal', 'pessoas': ['piloto','oficial1','oficial2','chefe de serviço','comissário1','comissário2','policial','presidiario']}
aviao = { 'descricao':'aviao', 'pessoas': [] }
def viagem(motorista:str, passageiro:str, saida:dict, chegada:dict):
fortwo = embarque(motorista, passageiro, saida)
print(f"Saindo do {saida['descricao']}")
print('Iniciando a viagem...')
print(f"Chegando no {chegada['descricao']}")
print('Finalizando a viagem ...')
    # high coupling
desembarque(fortwo, chegada)
print(saida)
print(chegada)
def viagem2(pessoa1, pessoa2, origem:Local, destino:Local):
fortwo = Fortwo()
if origem.saida(pessoa2):
if origem.saida(pessoa1):
if fortwo.set_motorista(pessoa1):
if fortwo.set_passageiro(pessoa2):
fortwo.viagem(origem, destino)
if destino.entrada(pessoa2):
if not destino.entrada(pessoa1):
print('Não permitido6')
else:
print('Não permitido5')
else:
print('Não permitido4')
else:
print('Não permitido3')
else:
print('Não permitido2')
else:
print('Não permitido1')
print(f'origem: {origem.get_pessoas()}')
print(f'destino: {destino.get_pessoas()}')
terminal = Terminal()
aviao = Aviao()
viagem2('policial','presidiário', terminal, aviao)
viagem2('policial','', aviao, terminal)
viagem2('piloto','policial', terminal, aviao)
viagem2('piloto','', aviao, terminal)
viagem2('piloto','oficial1', terminal, aviao)
viagem2('piloto','', aviao, terminal)
viagem2('piloto','oficial2', terminal, aviao)
viagem2('piloto','', aviao, terminal)
viagem2('chefe de serviço','piloto', terminal, aviao)
viagem2('chefe de serviço','', aviao, terminal)
viagem2('chefe de serviço','comissário1', terminal, aviao)
viagem2('chefe de serviço','', aviao, terminal)
viagem2('chefe de serviço','comissário2', terminal, aviao)
| 36.609375 | 152 | 0.6449 |
66bed836a247ec18237483abf4d32a49d5842fec | 16,519 | py | Python | model/encModel.py | gyeongmoon/CNN-DM | 5b71307fa41096bc439d480283c0c4c0200164be | [
"MIT"
] | 2 | 2021-05-31T04:43:12.000Z | 2021-10-06T07:48:21.000Z | model/encModel.py | gyeongmoon/CNN-DM | 5b71307fa41096bc439d480283c0c4c0200164be | [
"MIT"
] | null | null | null | model/encModel.py | gyeongmoon/CNN-DM | 5b71307fa41096bc439d480283c0c4c0200164be | [
"MIT"
] | null | null | null | import time
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
from torch.autograd import Variable
from model import utils
from model import LwFLoss
from torchvision import models
#####################################################
# Defining the Encoder-based Lifelong Learning model.
# ---------------------------------------------------
class Model(nn.Module):
def __init__(self, model_name, dataset, num_classes, is_fine_tuning=True, pretrained=True,
network_name='encModel'):
super(Model, self).__init__()
prev_model = eval(model_name)(pretrained=True)
if not is_fine_tuning: # Feature-extraction.
for param in prev_model.parameters():
param.requires_grad = False
# Total number of classifiers.
self.num_classifiers = len(num_classes)
# Define the base model.
self.features = prev_model.features
self.fc6 = nn.Sequential(*list(prev_model.classifier.children())[:3])
self.fc7 = nn.Sequential(*list(prev_model.classifier.children())[3:6])
# self.classifier = nn.Linear(prev_model.classifier._modules['6'].in_features, num_classes).
for i, num_class in enumerate(num_classes):
classifier_name = 'classifier' + str(i)
setattr(self, classifier_name, nn.Linear(prev_model.classifier._modules['6'].in_features, num_class))
# If continual_learning & pretrained & before a new classifier, load the saved model.
if (self.num_classifiers > 1) and pretrained and (i == self.num_classifiers - 2):
self.load_model(dataset[0:-1], network_name)
# Load the saved model.
def load_model(self, dataset, network_name):
saved_model_name = network_name + '_'
for data_name in dataset:
saved_model_name = saved_model_name + data_name + '_'
saved_model_name = saved_model_name + 'model'
checkpoint = torch.load(saved_model_name)
self.load_state_dict(checkpoint['state_dict']) # Containing ['bias', 'weight'].
# Define parameters to be trained.
def params(self, lr, is_fine_tuning=True):
if is_fine_tuning:
if self.num_classifiers > 1:
params = [{'params': self.features.parameters(), 'lr': 0.02 * lr},
{'params': self.fc6.parameters(), 'lr': 0.02 * lr},
{'params': self.fc7.parameters(), 'lr': 0.02 * lr}]
for i in range(self.num_classifiers):
classifier_name = 'classifier' + str(i)
if i != self.num_classifiers - 1:
params = params + [{'params': getattr(self, classifier_name).parameters(), 'lr': 0.02 * lr}]
else:
params = params + [{'params': getattr(self, classifier_name).parameters()}]
else:
params = self.parameters()
else: # Feature-Extraction.
classifier_name = 'classifier' + str(self.num_classifiers - 1)
params = [{'params': getattr(self, classifier_name).parameters()}]
return params
def forward(self, x):
features = self.features(x)
features = features.view(features.size(0), -1)
fc6 = self.fc6(features)
fc7 = self.fc7(fc6)
outputs = []
for i in range(self.num_classifiers):
classifier_name = 'classifier' + str(i)
output = getattr(self, classifier_name)(fc7)
outputs = outputs + [output]
return outputs, features
#############################
# Defining the encoder model.
# ---------------------------
class encoderModel(nn.Module):
def __init__(self, hidden_size=100):
super(encoderModel, self).__init__()
self.encoder = nn.Linear(256 * 6 * 6, hidden_size) # AlexNet in_features size: (256 * 6 * 6).
self.sigmoid = nn.Sigmoid()
self.decoder = nn.Linear(hidden_size, 256 * 6 * 6)
self.reset_parameters()
def reset_parameters(self):
init.xavier_uniform_(self.encoder.weight.data)
init.xavier_uniform_(self.decoder.weight.data)
self.encoder.bias.data.fill_(0)
self.decoder.bias.data.fill_(0)
def forward(self, features):
codes = self.sigmoid(self.encoder(features))
r_features = self.decoder(codes)
return codes, r_features
###########################
# Training the auto-encoder
# -------------------------
def train_encoder(feature_model, model, ae_model, scheduler, criterion, enc_criterion, optimizer,
dataloader, dataset_size, num_epochs=80, lamb=1e-6):
since = time.time()
best_model_wts = ae_model.state_dict()
torch.save({'model': best_model_wts}, 'curr_best_encoder_wts')
best_loss = 0.0
best_acc = 0.0
for params in feature_model.parameters():
params.requires_grad = False
for params in model.parameters():
params.requires_grad = False
feature_model.train(False)
model.train(False)
print('\nTraining auto encoder..')
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'test']:
if phase == 'train':
scheduler.step()
ae_model.train(True) # Set model to training mode
else:
ae_model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for i, data in enumerate(dataloader[phase]):
# get the inputs
inputs, labels = data
# wrap them in Variable
if torch.cuda.is_available():
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
features = feature_model(inputs)
features = features.view(features.size(0), -1)
codes, r_features = ae_model(features)
outputs = model(r_features)
_, preds = torch.max(outputs.data, 1) # You can use "topk" function.
# loss = lambda * reconstruct_loss + task_loss
loss = lamb * enc_criterion(r_features, features) + criterion(outputs, labels)
# loss = enc_criterion(r_features, features)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
                running_loss += loss.item()
                running_corrects += torch.sum(preds == labels.data).item()
epoch_loss = running_loss / dataset_size[phase]
epoch_acc = running_corrects / dataset_size[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'test' and epoch_acc > best_acc:
best_loss = epoch_loss
best_acc = epoch_acc
best_model_wts = ae_model.state_dict()
torch.save({'model': best_model_wts}, 'curr_best_encoder_wts')
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Best test Loss: {:4f} Acc: {:4f}'.format(best_loss, best_acc))
# load the best model.
checkpoint = torch.load('curr_best_encoder_wts')
ae_model.load_state_dict(checkpoint['model'])
return ae_model
#####################
# Training the model.
def train_model(model, optimizer, scheduler, start_epoch, num_epochs, dataloaders, dataset_sizes,
dataset, ld=0.02, alpha=1e-2, enc_lr=1e-1, enc_num_epoch=80, weight_decay=0.0005, is_training_encoder=True):
# Define dataloader & dataset_size
dataloader, dataset_size = dataloaders[model.num_classifiers-1], dataset_sizes[model.num_classifiers-1]
# Define Criterion for loss.
criterion = nn.CrossEntropyLoss()
LwF_criterion = LwFLoss.LwFLoss() # LwF_Loss.
enc_criterion = nn.MSELoss() # Encoder_Loss.
# Gen_output for LwFLoss.
prev_labels = {}
prev_labels = utils.gen_output(model, dataloader, prev_labels)
# Define the prev_encoder model & gen_output.
feature_model = model.features
prev_encoders, prev_codes = {}, {}
for i in range(model.num_classifiers - 1):
prev_encoders[i] = encoderModel()
if torch.cuda.is_available():
prev_encoders[i] = prev_encoders[i].cuda()
# Load the pre-trained encoder.
prev_encoders[i], _ = utils.save_model(prev_encoders[i], 0, 0, reuse='encoder_' + dataset[i], save_mode=False)
for parameters in prev_encoders[i].parameters():
parameters.requires_grad = False
prev_encoders[i].train(False)
# Gen_output for encLoss.
if not (i in prev_codes):
prev_codes[i] = []
prev_codes[i] = utils.gen_output(prev_encoders[i], dataloader, prev_codes[i], feature_model=feature_model)
best_model_wts = model.state_dict()
torch.save({'model': best_model_wts}, 'curr_best_model_wts')
best_loss = 0.0
best_acc = 0.0
since = time.time()
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(start_epoch + epoch, start_epoch + num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'test']:
if phase == 'train':
scheduler.step()
model.train(True) # Set model to training mode
else:
model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for i, data in enumerate(dataloader[phase]):
# get the inputs
inputs, labels, _ = data
# wrap them in Variable
if torch.cuda.is_available():
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs, features = model(inputs)
_, preds = torch.max(outputs[-1].data, 1) # You can use "topk" function.
if phase == 'train':
LwF_Loss, enc_Loss = 0, 0
for k in range(model.num_classifiers - 1):
# wrap prev_labels in Variable for out of memory.
if torch.cuda.is_available():
prev_labels_i = Variable(prev_labels[k][i].cuda())
prev_codes_i = Variable(prev_codes[k][i].cuda())
else:
prev_labels_i = prev_labels[k][i]
prev_codes_i = prev_codes[k][i]
# It should be checked.
# feature_model = model.pretrained_model.features
# for params in feature_model.parameters():
# params.requires_grad = False
#
# feature_model.train(False)
# forward
LwF_Loss = LwF_Loss + LwF_criterion(outputs[k], prev_labels_i)
codes, _ = prev_encoders[k](features)
enc_Loss = enc_Loss + enc_criterion(codes, prev_codes_i) ** 2
# CrossEntropyLoss + Knowledge Distillation Loss + alpha/2 * enc_loss.
loss = criterion(outputs[-1], labels) + ld * LwF_Loss + alpha / 2 * enc_Loss
else:
loss = criterion(outputs[-1], labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data).item()
epoch_loss = running_loss / dataset_size[phase]
epoch_acc = running_corrects / dataset_size[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'test' and epoch_acc > best_acc:
best_loss = epoch_loss
best_acc = epoch_acc
best_model_wts = model.state_dict()
torch.save({'model': best_model_wts}, 'curr_best_model_wts')
# if model.num_classifiers > 1: # Continual Learning.
# if (epoch % 2 == 0 and epoch < 10) or (epoch % 10 == 0) or (epoch == num_epochs-1):
# test_model(model, dataloaders, dataset_sizes, num_task=0) # Test the model.
# print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Best test Loss: {:4f} Acc: {:4f}'.format(best_loss, best_acc)) # mems
# load the best model.
checkpoint = torch.load('curr_best_model_wts')
model.load_state_dict(checkpoint['model'])
# Please check the old task performance separately, because of out of memory error.
if model.num_classifiers > 1: # Continual Learning.
print()
for i in range(model.num_classifiers-1):
test_model(model, dataloaders, dataset_sizes, num_task=i) # Test the model.
##################################################
# Training Auto-Encoder after learning a new task.
# ------------------------------------------------
if is_training_encoder:
# Define a new encoder model.
new_encoder_model = encoderModel()
if torch.cuda.is_available():
new_encoder_model = new_encoder_model.cuda()
enc_optimizer = optim.Adadelta(new_encoder_model.parameters(), lr=enc_lr, weight_decay=weight_decay)
# Decay LR by a factor of gamma every step_size.
enc_lr_scheduler = optim.lr_scheduler.StepLR(enc_optimizer, step_size=enc_num_epoch, gamma=0.1)
# Load model.
ae_name = 'encoder_' + dataset
start_epoch = 0
new_encoder_model, start_epoch = utils.save_model(new_encoder_model, enc_num_epoch, start_epoch,
save_mode=False, reuse=ae_name)
# Training the auto-encoder.
new_encoder_model = train_encoder(feature_model, model, new_encoder_model, enc_lr_scheduler, criterion,
enc_criterion, enc_optimizer, dataloader, dataset_size, num_epochs=enc_num_epoch)
# Save model.
utils.save_model(new_encoder_model, enc_num_epoch, start_epoch, save_mode=True, reuse=ae_name)
return model
#################
# Test the model.
def test_model(model, dataloaders, dataset_sizes, num_task):
# Define dataloader & dataset_size
dataloader, dataset_size = dataloaders[num_task], dataset_sizes[num_task]
# Define Criterion for loss.
criterion = nn.CrossEntropyLoss()
model.train(False)
running_loss = 0.0
running_corrects = 0
for i, data in enumerate(dataloader['test']):
inputs, labels, _ = data
if torch.cuda.is_available():
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# forward
outputs, _ = model(inputs)
_, preds = torch.max(outputs[num_task].data, 1) # To check Ac (Accuracy of total model).
loss = criterion(outputs[num_task], labels)
# statistics
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data).item()
epoch_loss = running_loss / dataset_size['test']
epoch_acc = running_corrects / dataset_size['test']
print('Test Loss: {:.4f} Acc: {:.4f}'.format(epoch_loss, epoch_acc))
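if __name__ == '__main__':
    # Minimal wiring sketch (illustrative only, not part of the original module).
    # The task names, class counts and learning rate below are placeholders; a
    # real run also needs the dataloaders/dataset_sizes dictionaries expected by
    # train_model and test_model above.
    example_model = Model('models.alexnet', ['task_a', 'task_b'], [100, 50],
                          pretrained=False)
    if torch.cuda.is_available():
        example_model = example_model.cuda()
    example_optimizer = optim.SGD(example_model.params(lr=1e-3), lr=1e-3,
                                  momentum=0.9, weight_decay=0.0005)
    print(example_model)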
| 38.595794 | 124 | 0.578364 |
8ce9989276e53e1f6eb922bf24976ad3a1fc40ef | 6,693 | py | Python | intersight/models/hyperflex_hx_link_dt_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 21 | 2018-03-29T14:20:35.000Z | 2021-10-13T05:11:41.000Z | intersight/models/hyperflex_hx_link_dt_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 14 | 2018-01-30T15:45:46.000Z | 2022-02-23T14:23:21.000Z | intersight/models/hyperflex_hx_link_dt_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 18 | 2018-01-03T15:09:56.000Z | 2021-07-16T02:21:54.000Z | # coding: utf-8
"""
Cisco Intersight
Cisco Intersight is a management platform delivered as a service with embedded analytics for your Cisco and 3rd party IT infrastructure. This platform offers an intelligent level of management that enables IT organizations to analyze, simplify, and automate their environments in more advanced ways than the prior generations of tools. Cisco Intersight provides an integrated and intuitive management experience for resources in the traditional data center as well as at the edge. With flexible deployment options to address complex security needs, getting started with Intersight is quick and easy. Cisco Intersight has deep integration with Cisco UCS and HyperFlex systems allowing for remote deployment, configuration, and ongoing maintenance. The model-based deployment works for a single system in a remote location or hundreds of systems in a data center and enables rapid, standardized configuration and deployment. It also streamlines maintaining those systems whether you are working with small or very large configurations. # noqa: E501
The version of the OpenAPI document: 1.0.9-1295
Contact: intersight@cisco.com
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from intersight.configuration import Configuration
class HyperflexHxLinkDtAllOf(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'comments': 'str',
'href': 'str',
'method': 'str',
'rel': 'str'
}
attribute_map = {
'comments': 'Comments',
'href': 'Href',
'method': 'Method',
'rel': 'Rel'
}
def __init__(self,
comments=None,
href=None,
method='POST',
rel=None,
local_vars_configuration=None): # noqa: E501
"""HyperflexHxLinkDtAllOf - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._comments = None
self._href = None
self._method = None
self._rel = None
self.discriminator = None
if comments is not None:
self.comments = comments
if href is not None:
self.href = href
if method is not None:
self.method = method
if rel is not None:
self.rel = rel
@property
def comments(self):
"""Gets the comments of this HyperflexHxLinkDtAllOf. # noqa: E501
:return: The comments of this HyperflexHxLinkDtAllOf. # noqa: E501
:rtype: str
"""
return self._comments
@comments.setter
def comments(self, comments):
"""Sets the comments of this HyperflexHxLinkDtAllOf.
:param comments: The comments of this HyperflexHxLinkDtAllOf. # noqa: E501
:type: str
"""
self._comments = comments
@property
def href(self):
"""Gets the href of this HyperflexHxLinkDtAllOf. # noqa: E501
:return: The href of this HyperflexHxLinkDtAllOf. # noqa: E501
:rtype: str
"""
return self._href
@href.setter
def href(self, href):
"""Sets the href of this HyperflexHxLinkDtAllOf.
:param href: The href of this HyperflexHxLinkDtAllOf. # noqa: E501
:type: str
"""
self._href = href
@property
def method(self):
"""Gets the method of this HyperflexHxLinkDtAllOf. # noqa: E501
:return: The method of this HyperflexHxLinkDtAllOf. # noqa: E501
:rtype: str
"""
return self._method
@method.setter
def method(self, method):
"""Sets the method of this HyperflexHxLinkDtAllOf.
:param method: The method of this HyperflexHxLinkDtAllOf. # noqa: E501
:type: str
"""
allowed_values = ["POST", "GET", "PUT", "DELETE"] # noqa: E501
if self.local_vars_configuration.client_side_validation and method not in allowed_values: # noqa: E501
raise ValueError(
"Invalid value for `method` ({0}), must be one of {1}" # noqa: E501
.format(method, allowed_values))
self._method = method
@property
def rel(self):
"""Gets the rel of this HyperflexHxLinkDtAllOf. # noqa: E501
:return: The rel of this HyperflexHxLinkDtAllOf. # noqa: E501
:rtype: str
"""
return self._rel
@rel.setter
def rel(self, rel):
"""Sets the rel of this HyperflexHxLinkDtAllOf.
:param rel: The rel of this HyperflexHxLinkDtAllOf. # noqa: E501
:type: str
"""
self._rel = rel
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(
map(lambda x: x.to_dict()
if hasattr(x, "to_dict") else x, value))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(
map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, HyperflexHxLinkDtAllOf):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, HyperflexHxLinkDtAllOf):
return True
return self.to_dict() != other.to_dict()
| 32.490291 | 1,052 | 0.606156 |
03fdbd91531e502712981b4aa7051af11c5e07b2 | 3,773 | py | Python | tools/OneToOne.py | klocey/SADModels | 330bf1591c66bcb097bd2ca91c497be0394b734d | [
"MIT"
] | null | null | null | tools/OneToOne.py | klocey/SADModels | 330bf1591c66bcb097bd2ca91c497be0394b734d | [
"MIT"
] | null | null | null | tools/OneToOne.py | klocey/SADModels | 330bf1591c66bcb097bd2ca91c497be0394b734d | [
"MIT"
] | null | null | null | from __future__ import division
import sys
import os
mydir = os.path.expanduser("~/GitHub/SADModels")
sys.path.append(mydir)
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid.inset_locator import inset_axes
sys.path.append(mydir + "/tools/macroecotools")
import macroecotools
""" Functions to examine observed vs. predicted abundance relationship around
the one-to-one line.
These functions were taken from the MIT-licensed public GitHub repository:
github.com/weecology/white-etal-2012-ecology/blob/master/mete_sads.py
"""
def import_obs_pred_data(input_filename):
# TAKEN FROM THE mete_sads.py script used for White et al. (2012)
data = np.genfromtxt(input_filename, dtype = "S15, S15, S15, f8, f8",
names = ['date','site','species','obs','pred'], delimiter = " ")
# ensure the delimiter is correct
return data
def hist_mete_r2(sites, obs, pred): # TAKEN FROM Macroecotools or the mete_sads.py script used for White et al. (2012)
"""Generate a kernel density estimate of the r^2 values for obs-pred plots"""
r2s = []
for site in np.unique(sites):
obs_site = obs[sites==site]
pred_site = pred[sites==site]
r2 = macroecotools.obs_pred_rsquare(obs_site, pred_site)
r2s.append(r2)
hist_r2 = np.histogram(r2s, range=(0, 1))
xvals = hist_r2[1] + (hist_r2[1][1] - hist_r2[1][0])
xvals = xvals[0:len(xvals)-1]
yvals = hist_r2[0]
plt.plot(xvals, yvals, 'k-', linewidth=2)
plt.axis([0, 1, 0, 1.1 * max(yvals)])
def obs_pred_r2_multi(methods, data_dir = mydir + '/results/'):
# TAKEN FROM THE mete_sads.py script
print 'generating 1:1 line R-square values for dataset(s)'
for j, method in enumerate(methods):
        obs_pred_data = import_obs_pred_data(data_dir + method + '/' + method + '_obs_pred.txt')
obs = ((obs_pred_data["obs"]))
pred = ((obs_pred_data["pred"]))
print method,' ', macroecotools.obs_pred_rsquare(np.log10(obs), np.log10(pred))
def plot_obs_pred_sad(SADModels, data_dir, radius=2):
# TAKEN FROM THE mete_sads.py script used for White et al. (2012)
    # Used for Figure 3 in Locey and White (2013)
"""Multiple obs-predicted plotter"""
fig = plt.figure()
for i, model in enumerate(SADModels):
fig.add_subplot(2, 2, i+1)
obs_pred_data = import_obs_pred_data(data_dir + model + '.txt')
site = ((obs_pred_data["site"]))
obs = ((obs_pred_data["obs"]))
pred = ((obs_pred_data["pred"]))
axis_min = 0.5 * min(obs)
axis_max = 2 * max(obs)
macroecotools.plot_color_by_pt_dens(pred, obs, radius, loglog=1,
plot_obj=plt.subplot(2, 2, i+1))
plt.plot([axis_min, axis_max],[axis_min, axis_max], 'k-')
plt.xlim(axis_min, axis_max)
plt.ylim(axis_min, axis_max)
plt.tick_params(axis='both', which='major', labelsize=8)
plt.subplots_adjust(wspace=0.5, hspace=0.3)
r2 = macroecotools.obs_pred_rsquare(np.log10(obs), np.log10(pred))
print model, r2
# Create inset for histogram of site level r^2 values
#axins = inset_axes(ax, width="30%", height="30%", loc=4)
#hist_mete_r2(site, np.log10(obs), np.log10(pred))
#plt.setp(axins, xticks=[], yticks=[])
plt.title(model)
#plt.text(1, 2000, r'$R^2$' + '='+ str(round(r2,3)))
plt.ylabel('Observed abundance',rotation='90',fontsize=12)
plt.xlabel('Predicted abundance',fontsize=12)
plt.savefig(mydir+'/Results/obs_pred_plots.png', dpi=600)#, bbox_inches = 'tight')#, pad_inches=0)
plt.show()
| 33.990991 | 142 | 0.631593 |
096ade25f0b2facba785f7f34bd396987e916a41 | 845 | py | Python | strukdat_4.py | rpurnama/pycodes | bc507d9b6eab30e39d69946ee07eebcd538546af | [
"Unlicense"
] | 1 | 2020-07-25T16:57:57.000Z | 2020-07-25T16:57:57.000Z | strukdat_4.py | itbj/PyCodes | 6a3f3a6d4e70882e00991493839a6af0fabeaab4 | [
"Unlicense"
] | null | null | null | strukdat_4.py | itbj/PyCodes | 6a3f3a6d4e70882e00991493839a6af0fabeaab4 | [
"Unlicense"
] | 1 | 2020-07-25T16:58:27.000Z | 2020-07-25T16:58:27.000Z | #Contoh List
contoh_list=['Windows',
'Ubuntu',
'FreeBSD',
'Solaris',
'DOS']
# Example tuple
contoh_tuple=(0,1,2,3,4,5,6,7,8,9)
# Example dictionary
contoh_dict={'nama':'John Doe',
'position':'CEO',
'DoB':'19-Nov-1976',
'phone':'+6212345678',
'email':'jdoe@abc.def'}
# How to add items to a list
print "Isi list sebelum: ", contoh_list
list_baru = contoh_list + ['LinuxMint','OpenSuse','Slackware']
print "Isi list setelah: ", list_baru
print "\n"
# How to add items to a tuple
print "Isi tuple sebelum: ", contoh_tuple
tuple_baru = contoh_tuple + (10,11,12,13)
print "Isi tuple setelah: ", tuple_baru
print "\n"
# How to add items to a dictionary
print "Isi dictionary sebelum", contoh_dict
dict_update = {'education':'B. Eng',
'address':'Sunny Vale, CA'}
contoh_dict.update(dict_update)
print "Isi dictionary setelah: ", contoh_dict
| 22.837838 | 62 | 0.697041 |
637189b7b842c7d42123cb2c1585d559a27222d0 | 2,351 | py | Python | src/bag/layout/routing/__init__.py | Partmedia/bag | f4f871df66a75152980568967ff1d798ec446843 | [
"Apache-2.0",
"BSD-3-Clause"
] | 32 | 2019-05-16T19:25:00.000Z | 2021-12-07T20:12:13.000Z | src/bag/layout/routing/__init__.py | Partmedia/bag | f4f871df66a75152980568967ff1d798ec446843 | [
"Apache-2.0",
"BSD-3-Clause"
] | 1 | 2021-01-07T03:08:33.000Z | 2021-01-07T03:08:33.000Z | src/bag/layout/routing/__init__.py | Partmedia/bag | f4f871df66a75152980568967ff1d798ec446843 | [
"Apache-2.0",
"BSD-3-Clause"
] | 11 | 2019-07-23T17:37:48.000Z | 2021-10-19T15:24:33.000Z | # SPDX-License-Identifier: BSD-3-Clause AND Apache-2.0
# Copyright 2018 Regents of the University of California
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Copyright 2019 Blue Cheetah Analog Design Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provide routing grid related classes and methods.
"""
from .base import WireArray, TrackID
from .grid import RoutingGrid
| 47.979592 | 80 | 0.780944 |
72b70031452cc7c80c2251b43fbc800daae9709a | 2,928 | py | Python | data/evaluation/cran/loader.py | SSK-14/Covid19-Search-Engine | 2a9e0066e766d8a356a2c4a1ebd51c0aeb3cd4b6 | [
"Apache-2.0"
] | 1 | 2020-06-14T16:52:55.000Z | 2020-06-14T16:52:55.000Z | data/evaluation/cran/loader.py | SSK-14/Covid19-Search-Engine | 2a9e0066e766d8a356a2c4a1ebd51c0aeb3cd4b6 | [
"Apache-2.0"
] | 1 | 2020-05-06T14:28:10.000Z | 2020-05-06T14:28:10.000Z | data/evaluation/cran/loader.py | SSK-14/Covid19-Search-Engine | 2a9e0066e766d8a356a2c4a1ebd51c0aeb3cd4b6 | [
"Apache-2.0"
] | null | null | null | # loader.py for cran
import os
import re
from collections import defaultdict
from nltk.tokenize import word_tokenize
from data.template import Dataset, Document, Query, Text
class CranDataset(Dataset):
def __init__(self, base_path):
self.base_path = base_path
self.documents = None
self.queries = None
self.relevant_docs = None
super().__init__()
def read_raw(self, filename):
docs = [defaultdict(list)] # empty 0 index
category = ''
with open(os.path.join(self.base_path, filename)) as f:
i = 0
for line in f:
line = line.strip()
if line.startswith('.I'):
i = int(line[3:])
# print(i)
docs.append(defaultdict(list))
elif re.match(r'\.\w', line):
category = line[1]
elif line != '':
docs[i][category].append(Text(line, [word.lower()
for word in word_tokenize(line)]))
return docs
def load_docs(self, filename):
raw_docs = self.read_raw(filename)
documents = list()
for doc_id, _ in enumerate(raw_docs[1:]):
title, content = None, None
raw, tokenized = "", list()
for entry in raw_docs[doc_id+1]["T"]:
raw += " " + entry.raw
tokenized.extend(entry.tokenized)
title = Text(raw, tokenized)
raw, tokenized = "", list()
for entry in raw_docs[doc_id+1]["W"]:
raw += " " + entry.raw
tokenized.extend(entry.tokenized)
content = Text(raw, tokenized)
documents.append(Document(doc_id+1, title, content))
self.documents = documents
def load_queries(self, filename):
raw_docs = self.read_raw(filename)
queries = list()
for query_id, _ in enumerate(raw_docs[1:]):
text = None
raw, tokenized = "", list()
for entry in raw_docs[query_id+1]["W"]:
raw += " " + entry.raw
tokenized.extend(entry.tokenized)
text = Text(raw, tokenized)
queries.append(Query(query_id+1, text))
self.queries = queries
def load_relevant_docs(self, filename):
rels = {}
        with open(os.path.join(self.base_path, filename)) as f:
for line in f:
qid, rel = line.strip().split()
qid = int(qid)
rel = int(rel)
if qid not in rels:
rels[qid] = []
rels[qid].append(rel)
self.relevant_docs = rels
base_path = "./data/evaluation/cran"
cran_data = CranDataset(base_path)
cran_data.load_docs("cran.all")
cran_data.load_queries("cran.qry")
cran_data.load_relevant_docs("cran.rel")
| 31.148936 | 91 | 0.527322 |
88fd63ef7dfb1607e3a59b21e5bfb8ed8c3cf806 | 491 | py | Python | raiden/tests/benchmark/merkle_tree_speed.py | jurajpetrik/raiden | e1a9201f5e09da804e589137d5a415a3870fa508 | [
"MIT"
] | 1 | 2018-07-04T05:42:19.000Z | 2018-07-04T05:42:19.000Z | raiden/tests/benchmark/merkle_tree_speed.py | jurajpetrik/raiden | e1a9201f5e09da804e589137d5a415a3870fa508 | [
"MIT"
] | null | null | null | raiden/tests/benchmark/merkle_tree_speed.py | jurajpetrik/raiden | e1a9201f5e09da804e589137d5a415a3870fa508 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import time
from raiden.mtree import Merkletree
from raiden.utils import keccak
def do_test_speed(rounds=100, num_hashes=1000):
values = [
keccak(str(i))
for i in range(num_hashes)
]
start_time = time.time()
for __ in range(rounds):
Merkletree(values).merkleroot
elapsed = time.time() - start_time
print '%d additions per second' % (num_hashes * rounds / elapsed)
if __name__ == '__main__':
do_test_speed()
| 19.64 | 69 | 0.649695 |
1adc4f0ad06469e32be80a8f0ea75fe593d8670f | 1,589 | py | Python | src/common/node.py | gruyaume/my-blockchain | 283f5ef0c8c09eff0478dfead3950c720cda2882 | [
"Apache-2.0"
] | 4 | 2021-11-14T17:16:03.000Z | 2022-03-17T21:01:42.000Z | src/common/node.py | gruyaume/my-blockchain | 283f5ef0c8c09eff0478dfead3950c720cda2882 | [
"Apache-2.0"
] | null | null | null | src/common/node.py | gruyaume/my-blockchain | 283f5ef0c8c09eff0478dfead3950c720cda2882 | [
"Apache-2.0"
] | 5 | 2021-07-30T14:27:37.000Z | 2021-12-15T12:08:46.000Z | import requests
class Node:
def __init__(self, hostname: str):
self.hostname = hostname
self.base_url = f"http://{hostname}/"
def __eq__(self, other):
return self.hostname == other.hostname
@property
def dict(self):
return {
"hostname": self.hostname
}
def post(self, endpoint: str, data: dict = None) -> requests.Response:
url = f"{self.base_url}{endpoint}"
if data:
req_return = requests.post(url, json=data)
else:
req_return = requests.post(url)
req_return.raise_for_status()
return req_return
def get(self, endpoint: str, data: dict = None) -> list:
url = f"{self.base_url}{endpoint}"
if data:
req_return = requests.get(url, json=data)
else:
req_return = requests.get(url)
req_return.raise_for_status()
return req_return.json()
def advertise(self, hostname: str):
data = {"hostname": hostname}
return self.post(endpoint="new_node_advertisement", data=data)
def known_node_request(self):
return self.get(endpoint="known_node_request")
def send_new_block(self, block: dict) -> requests.Response:
return self.post(endpoint="block", data=block)
def send_transaction(self, transaction_data: dict) -> requests.Response:
return self.post("transactions", transaction_data)
def get_blockchain(self) -> list:
return self.get(endpoint="block")
def restart(self):
return self.post(endpoint="restart")
| 29.425926 | 76 | 0.619887 |
20511b38961cd484a66c90a49cb2e94444a6a68c | 1,375 | py | Python | rgb_to_hsi_hsi_to_rgb/rgb_to_hsi.py | servercalap/img_processing | 8a24547135f417143b24ce4292ca59452ce89c82 | [
"MIT"
] | null | null | null | rgb_to_hsi_hsi_to_rgb/rgb_to_hsi.py | servercalap/img_processing | 8a24547135f417143b24ce4292ca59452ce89c82 | [
"MIT"
] | null | null | null | rgb_to_hsi_hsi_to_rgb/rgb_to_hsi.py | servercalap/img_processing | 8a24547135f417143b24ce4292ca59452ce89c82 | [
"MIT"
] | null | null | null | import cv2
import numpy as np
import math
def RGB_TO_HSI(img):
    # load the image as 32-bit float
with np.errstate(divide='ignore', invalid='ignore'):
bgr = np.float32(img)/255
blue = bgr[:,:,0]
green = bgr[:,:,1]
red = bgr[:,:,2]
#Calculate Intensity
        def calc_intensity(red,blue,green):
return np.divide(blue+green+red,3)
#Calculate Saturation
def calc_saturation(red,blue,green):
minimum = np.minimum(np.minimum(red,green),blue)
saturation = 1 - (3/(red + green+blue+0.001)*minimum)
return saturation
#Calculate Hue
def calc_hue(red,blue,green):
hue = np.copy(red)
for i in range(0,blue.shape[0]):
for j in range(0,blue.shape[1]):
hue[i][j] = 0.5* ((red[i][j] - green[i][j]) + (red[i][j] - blue[i][j])) / \
np.sqrt((red[i][j] - green[i][j])**2 + ((red[i][j] - blue[i][j])*(green[i][j] - blue[i][j])))
hue[i][j] = math.acos(hue[i][j])
if blue[i][j] <= green[i][j]:
hue[i][j] = hue[i][j]
else:
hue[i][j] = ((360 * math.pi)/ 180.0) - hue[i][j]
return hue
    # merge channel values
hsi = cv2.merge((calc_hue(red,blue,green),calc_saturation(red,blue,green),calc_intensity(red,blue,green)))
return hsi
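if __name__ == '__main__':
    # Illustrative usage (the file name is a placeholder): convert a BGR image
    # loaded with OpenCV and inspect the shape and value range of the result.
    img = cv2.imread('input.jpg')
    hsi = RGB_TO_HSI(img)
    print(hsi.shape, hsi.min(), hsi.max())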
| 23.706897 | 121 | 0.517091 |
4e1310d422be84b8f3cfd9059149bae33615794a | 952 | py | Python | pir-logger.py | jspan/youthere | 6419e0b524ff01c94b9f0b1f497d651ae8d33be9 | [
"MIT"
] | 1 | 2016-04-13T05:05:45.000Z | 2016-04-13T05:05:45.000Z | pir-logger.py | jspan/youthere | 6419e0b524ff01c94b9f0b1f497d651ae8d33be9 | [
"MIT"
] | null | null | null | pir-logger.py | jspan/youthere | 6419e0b524ff01c94b9f0b1f497d651ae8d33be9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
'''
Read arduino serial port and log to file
'''
import datetime
import logging
import serial
import sys
import time
SERIAL_DEV = '/dev/ttyUSB0'
CHECK_PERIOD_SEC = 1
def get_timestamp():
return datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S')
def main():
if len(sys.argv) != 2:
print >> sys.stderr, 'Usage: %s LOG_FILE' % sys.argv[0]
sys.exit(1)
log_file = sys.argv[1]
logging.basicConfig(filename=log_file, level=logging.INFO, format='%(message)s')
logging.info('-> start %s' % (get_timestamp()))
ser = serial.Serial(SERIAL_DEV, 9600)
time.sleep(2)
try:
while True:
char = ser.read(ser.inWaiting())
if char in ['0', '1']:
logging.info('%s %s' % (char, get_timestamp()))
time.sleep(CHECK_PERIOD_SEC)
    except (serial.serialutil.SerialException, IOError):
pass
if __name__ == '__main__':
main()
| 21.636364 | 84 | 0.611345 |
8b948f9090f7c55d4817ad5966defb62912b33f7 | 7,148 | py | Python | deephar/data/human36m.py | steuwe/deephar | 85efc43b7c166339f9655cf322a40dde242bad27 | [
"MIT"
] | 343 | 2018-07-18T10:39:30.000Z | 2022-03-30T02:32:06.000Z | deephar/data/human36m.py | steuwe/deephar | 85efc43b7c166339f9655cf322a40dde242bad27 | [
"MIT"
] | 47 | 2018-09-03T03:35:13.000Z | 2021-11-15T02:09:15.000Z | deephar/data/human36m.py | thejumboroar/deep-human-action-recognition | c0fa2245d4e716837efead5cc304a5b6e2c2df55 | [
"MIT"
] | 83 | 2018-10-15T08:36:12.000Z | 2022-03-05T05:51:16.000Z | import os
import numpy as np
import scipy.io as sio
from PIL import Image
from deephar.data.datasets import get_clip_frame_index
from deephar.utils import *
ACTION_LABELS = None
def load_h36m_mat_annotation(filename):
mat = sio.loadmat(filename, struct_as_record=False, squeeze_me=True)
# Respect the order of TEST (0), TRAIN (1), and VALID (2)
sequences = [mat['sequences_te'], mat['sequences_tr'], mat['sequences_val']]
action_labels = mat['action_labels']
joint_labels = mat['joint_labels']
return sequences, action_labels, joint_labels
def serialize_index_sequences(seq):
frames_idx = []
for s in range(len(seq)):
for f in range(len(seq[s].frames)):
frames_idx.append((s, f))
return frames_idx
class Human36M(object):
"""Implementation of the Human3.6M dataset for 3D pose estimation and
action recognition.
"""
def __init__(self, dataset_path, dataconf, poselayout=pa17j3d,
topology='sequences', clip_size=16):
assert topology in ['sequences', 'frames'], \
'Invalid topology ({})'.format(topology)
self.dataset_path = dataset_path
self.dataconf = dataconf
self.poselayout = poselayout
self.topology = topology
self.clip_size = clip_size
self.load_annotations(os.path.join(dataset_path, 'annotations.mat'))
def load_annotations(self, filename):
try:
self.sequences, self.action_labels, self.joint_labels = \
load_h36m_mat_annotation(filename)
self.frame_idx = [serialize_index_sequences(self.sequences[0]),
serialize_index_sequences(self.sequences[1]),
serialize_index_sequences(self.sequences[2])]
global ACTION_LABELS
ACTION_LABELS = self.action_labels
except:
warning('Error loading Human3.6M dataset!')
raise
def get_data(self, key, mode, frame_list=None, fast_crop=False):
output = {}
if mode == TRAIN_MODE:
dconf = self.dataconf.random_data_generator()
random_clip = True
else:
dconf = self.dataconf.get_fixed_config()
random_clip = False
if self.topology == 'sequences':
seq = self.sequences[mode][key]
            if frame_list is None:
frame_list = get_clip_frame_index(len(seq.frames),
dconf['subspl'], self.clip_size,
random_clip=random_clip)
objframes = seq.frames[frame_list]
else:
seq_idx, frame_idx = self.frame_idx[mode][key]
seq = self.sequences[mode][seq_idx]
objframes = seq.frames[[frame_idx]]
"""Build a Camera object"""
cpar = seq.camera_parameters
cam = Camera(cpar.R, cpar.T, cpar.f, cpar.c, cpar.p, cpar.k)
"""Load and project the poses"""
pose_w = self.load_pose_annot(objframes)
pose_uvd = cam.project(np.reshape(pose_w, (-1, 3)))
pose_uvd = np.reshape(pose_uvd,
(len(objframes), self.poselayout.num_joints, 3))
"""Compute GT bouding box."""
imgsize = (objframes[0].w, objframes[0].h)
objpos, winsize, zrange = get_crop_params(pose_uvd[:, 0, :],
imgsize, cam.f, dconf['scale'])
objpos += dconf['scale'] * np.array([dconf['transx'], dconf['transy']])
frames = np.empty((len(objframes),) + self.dataconf.input_shape)
pose = np.empty((len(objframes), self.poselayout.num_joints,
self.poselayout.dim))
for i in range(len(objframes)):
image = 'images/%s/%05d.jpg' % (seq.name, objframes[i].f)
imgt = T(Image.open(os.path.join(self.dataset_path, image)))
imgt.rotate_crop(dconf['angle'], objpos, winsize)
if dconf['hflip'] == 1:
imgt.horizontal_flip()
imgt.resize(self.dataconf.crop_resolution)
imgt.normalize_affinemap()
frames[i, :, :, :] = normalize_channels(imgt.asarray(),
channel_power=dconf['chpower'])
pose[i, :, 0:2] = transform_2d_points(imgt.afmat,
pose_uvd[i, :,0:2], transpose=True)
pose[i, :, 2] = \
(pose_uvd[i, :, 2] - zrange[0]) / (zrange[1] - zrange[0])
if imgt.hflip:
pose[i, :, :] = pose[i, self.poselayout.map_hflip, :]
"""Set outsider body joints to invalid (-1e9)."""
pose = np.reshape(pose, (-1, self.poselayout.dim))
pose[np.isnan(pose)] = -1e9
v = np.expand_dims(get_visible_joints(pose[:,0:2]), axis=-1)
pose[(v==0)[:,0],:] = -1e9
pose = np.reshape(pose, (len(objframes), self.poselayout.num_joints,
self.poselayout.dim))
v = np.reshape(v, (len(objframes), self.poselayout.num_joints, 1))
pose = np.concatenate((pose, v), axis=-1)
if self.topology != 'sequences':
pose_w = np.squeeze(pose_w, axis=0)
pose_uvd = np.squeeze(pose_uvd, axis=0)
pose = np.squeeze(pose, axis=0)
frames = np.squeeze(frames, axis=0)
output['camera'] = cam.serialize()
output['action'] = int(seq.name[1:3]) - 1
output['pose_w'] = pose_w
output['pose_uvd'] = pose_uvd
output['pose'] = pose
output['frame'] = frames
"""Take the last transformation matrix, it should not change"""
output['afmat'] = imgt.afmat.copy()
return output
def load_pose_annot(self, frames):
p = np.empty((len(frames), self.poselayout.num_joints,
self.poselayout.dim))
for i in range(len(frames)):
p[i,:] = frames[i].pose3d.T[self.poselayout.map_from_h36m,
0:self.poselayout.dim].copy()
return p
def clip_length(self):
if self.topology == 'sequences':
return self.clip_size
else:
return None
def clip_shape(self):
if self.topology == 'sequences':
return (self.clip_size,)
else:
return ()
def get_shape(self, dictkey):
if dictkey == 'frame':
return self.clip_shape() + self.dataconf.input_shape
if dictkey == 'pose':
return self.clip_shape() \
+ (self.poselayout.num_joints, self.poselayout.dim+1)
if dictkey == 'pose_w':
return self.clip_shape() \
+ (self.poselayout.num_joints, self.poselayout.dim)
if dictkey == 'pose_uvd':
return self.clip_shape() \
+ (self.poselayout.num_joints, self.poselayout.dim)
if dictkey == 'action':
return (1,)
if dictkey == 'camera':
return (21,)
if dictkey == 'afmat':
return (3, 3)
raise Exception('Invalid dictkey on get_shape!')
def get_length(self, mode):
if self.topology == 'sequences':
return len(self.sequences[mode])
else:
return len(self.frame_idx[mode])
| 35.039216 | 80 | 0.576245 |
a2a2f078eeff0e22917edb8e868ca538906eb6d1 | 423 | py | Python | celebclassifier/celebclassifier/wsgi.py | EyeSlash1998/Hackathon-2k19 | 96bbe7b2fe71a19b195c0a43d43f3b587418e5f9 | [
"MIT"
] | 1 | 2020-03-14T15:24:27.000Z | 2020-03-14T15:24:27.000Z | celebclassifier/celebclassifier/wsgi.py | EyeSlash1998/Hackathon-2k19-Celeb-Classifier- | 96bbe7b2fe71a19b195c0a43d43f3b587418e5f9 | [
"MIT"
] | null | null | null | celebclassifier/celebclassifier/wsgi.py | EyeSlash1998/Hackathon-2k19-Celeb-Classifier- | 96bbe7b2fe71a19b195c0a43d43f3b587418e5f9 | [
"MIT"
] | null | null | null | """
WSGI config for celebclassifier project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'celebclassifier.settings')
application = get_wsgi_application()
| 24.882353 | 79 | 0.763593 |
2e9f2095748a7605db4fe41ee18b102312fe672b | 11,579 | py | Python | keystone/tests/ksfixtures/hacking.py | BMDan/keystone | 39de8b0a0a34c1645b607449fc1247d5cc11d89d | [
"Apache-2.0"
] | null | null | null | keystone/tests/ksfixtures/hacking.py | BMDan/keystone | 39de8b0a0a34c1645b607449fc1247d5cc11d89d | [
"Apache-2.0"
] | null | null | null | keystone/tests/ksfixtures/hacking.py | BMDan/keystone | 39de8b0a0a34c1645b607449fc1247d5cc11d89d | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE(morganfainberg) This file shouldn't have flake8 run on it as it has
# code examples that will fail normal CI pep8/flake8 tests. This is expected.
# The code has been moved here to ensure that proper tests occur on the
# test_hacking_checks test cases.
# flake8: noqa
import fixtures
class HackingCode(fixtures.Fixture):
"""A fixture to house the various code examples for the keystone hacking
style checks.
"""
mutable_default_args = {
'code': """
def f():
pass
def f(a, b='', c=None):
pass
def f(bad=[]):
pass
def f(foo, bad=[], more_bad=[x for x in range(3)]):
pass
def f(foo, bad={}):
pass
def f(foo, bad={}, another_bad=[], fine=None):
pass
def f(bad=[]): # noqa
pass
def funcs(bad=dict(), more_bad=list(), even_more_bad=set()):
"creating mutables through builtins"
def funcs(bad=something(), more_bad=some_object.something()):
"defaults from any functions"
def f(bad=set(), more_bad={x for x in range(3)},
even_more_bad={1, 2, 3}):
"set and set comprehession"
def f(bad={x: x for x in range(3)}):
"dict comprehension"
""",
'expected_errors': [
(7, 10, 'K001'),
(10, 15, 'K001'),
(10, 29, 'K001'),
(13, 15, 'K001'),
(16, 15, 'K001'),
(16, 31, 'K001'),
(22, 14, 'K001'),
(22, 31, 'K001'),
(22, 53, 'K001'),
(25, 14, 'K001'),
(25, 36, 'K001'),
(28, 10, 'K001'),
(28, 27, 'K001'),
(29, 21, 'K001'),
(32, 11, 'K001'),
]}
comments_begin_with_space = {
'code': """
# This is a good comment
#This is a bad one
# This is alright and can
# be continued with extra indentation
# if that's what the developer wants.
""",
'expected_errors': [
(3, 0, 'K002'),
]}
asserting_none_equality = {
'code': """
class Test(object):
def test(self):
self.assertEqual('', '')
self.assertEqual('', None)
self.assertEqual(None, '')
self.assertNotEqual('', None)
self.assertNotEqual(None, '')
self.assertNotEqual('', None) # noqa
self.assertNotEqual(None, '') # noqa
""",
'expected_errors': [
(5, 8, 'K003'),
(6, 8, 'K003'),
(7, 8, 'K004'),
(8, 8, 'K004'),
]}
assert_no_translations_for_debug_logging = {
'code': """
import logging
import logging as stlib_logging
from keystone.i18n import _
from keystone.i18n import _ as oslo_i18n
from keystone.openstack.common import log
from keystone.openstack.common import log as oslo_logging
# stdlib logging
L0 = logging.getLogger()
L0.debug(_('text'))
class C:
def __init__(self):
L0.debug(oslo_i18n('text', {}))
# stdlib logging w/ alias and specifying a logger
class C:
def __init__(self):
self.L1 = logging.getLogger(__name__)
def m(self):
self.L1.debug(
_('text'), {}
)
# oslo logging and specifying a logger
L2 = log.getLogger(__name__)
L2.debug(oslo_i18n('text'))
# oslo logging w/ alias
class C:
def __init__(self):
self.L3 = oslo_logging.getLogger()
self.L3.debug(_('text'))
# translation on a separate line
msg = _('text')
L2.debug(msg)
# this should not fail
if True:
msg = _('message %s') % X
L2.error(msg)
raise TypeError(msg)
if True:
msg = 'message'
L2.debug(msg)
# this should not fail
if True:
if True:
msg = _('message')
else:
msg = _('message')
L2.debug(msg)
raise Exception(msg)
""",
'expected_errors': [
(10, 9, 'K005'),
(13, 17, 'K005'),
(21, 12, 'K005'),
(26, 9, 'K005'),
(32, 22, 'K005'),
(36, 9, 'K005'),
]
}
class HackingLogging(fixtures.Fixture):
shared_imports = """
import logging
import logging as stlib_logging
from keystone.i18n import _
from keystone.i18n import _ as oslo_i18n
from keystone.i18n import _LC
from keystone.i18n import _LE
from keystone.i18n import _LE as error_hint
from keystone.i18n import _LI
from keystone.i18n import _LW
from keystone.openstack.common import log
from keystone.openstack.common import log as oslo_logging
"""
examples = [
{
'code': """
# stdlib logging
LOG = logging.getLogger()
LOG.info(_('text'))
class C:
def __init__(self):
LOG.warn(oslo_i18n('text', {}))
LOG.warn(_LW('text', {}))
""",
'expected_errors': [
(3, 9, 'K006'),
(6, 17, 'K006'),
],
},
{
'code': """
# stdlib logging w/ alias and specifying a logger
class C:
def __init__(self):
self.L = logging.getLogger(__name__)
def m(self):
self.L.warning(
_('text'), {}
)
self.L.warning(
_LW('text'), {}
)
""",
'expected_errors': [
(7, 12, 'K006'),
],
},
{
'code': """
# oslo logging and specifying a logger
L = log.getLogger(__name__)
L.error(oslo_i18n('text'))
L.error(error_hint('text'))
""",
'expected_errors': [
(3, 8, 'K006'),
],
},
{
'code': """
# oslo logging w/ alias
class C:
def __init__(self):
self.LOG = oslo_logging.getLogger()
self.LOG.critical(_('text'))
self.LOG.critical(_LC('text'))
""",
'expected_errors': [
(5, 26, 'K006'),
],
},
{
'code': """
LOG = log.getLogger(__name__)
# translation on a separate line
msg = _('text')
LOG.exception(msg)
msg = _LE('text')
LOG.exception(msg)
""",
'expected_errors': [
(4, 14, 'K006'),
],
},
{
'code': """
LOG = logging.getLogger()
# ensure the correct helper is being used
LOG.warn(_LI('this should cause an error'))
# debug should not allow any helpers either
LOG.debug(_LI('this should cause an error'))
""",
'expected_errors': [
(4, 9, 'K006'),
(7, 10, 'K005'),
],
},
{
'code': """
# this should not be an error
L = log.getLogger(__name__)
msg = _('text')
L.warn(msg)
raise Exception(msg)
""",
'expected_errors': [],
},
{
'code': """
L = log.getLogger(__name__)
def f():
msg = _('text')
L2.warn(msg)
something = True # add an extra statement here
raise Exception(msg)
""",
'expected_errors': [],
},
{
'code': """
LOG = log.getLogger(__name__)
def func():
msg = _('text')
LOG.warn(msg)
raise Exception('some other message')
""",
'expected_errors': [
(4, 13, 'K006'),
],
},
{
'code': """
LOG = log.getLogger(__name__)
if True:
msg = _('text')
else:
msg = _('text')
LOG.warn(msg)
raise Exception(msg)
""",
'expected_errors': [
],
},
{
'code': """
LOG = log.getLogger(__name__)
if True:
msg = _('text')
else:
msg = _('text')
LOG.warn(msg)
""",
'expected_errors': [
(6, 9, 'K006'),
],
},
{
'code': """
LOG = log.getLogger(__name__)
msg = _LW('text')
LOG.warn(msg)
raise Exception(msg)
""",
'expected_errors': [
(3, 9, 'K007'),
],
},
{
'code': """
LOG = log.getLogger(__name__)
msg = _LW('text')
LOG.warn(msg)
msg = _('something else')
raise Exception(msg)
""",
'expected_errors': [],
},
{
'code': """
LOG = log.getLogger(__name__)
msg = _LW('hello %s') % 'world'
LOG.warn(msg)
raise Exception(msg)
""",
'expected_errors': [
(3, 9, 'K007'),
],
},
{
'code': """
LOG = log.getLogger(__name__)
msg = _LW('hello %s') % 'world'
LOG.warn(msg)
""",
'expected_errors': [],
},
]
| 30.075325 | 77 | 0.397098 |
0b2935cd79e5f5ef0bb6b74f87cbb95d7f12c665 | 1,320 | py | Python | src/ripVT/transforms/from_behavorial_file_downloaed.py | nkrios/ripVT | db083e6cd6afd541329e3efc856330f93b11c287 | [
"MIT"
] | 20 | 2019-01-14T09:40:37.000Z | 2020-06-01T22:19:04.000Z | src/ripVT/transforms/from_behavorial_file_downloaed.py | nkrios/ripVT | db083e6cd6afd541329e3efc856330f93b11c287 | [
"MIT"
] | null | null | null | src/ripVT/transforms/from_behavorial_file_downloaed.py | nkrios/ripVT | db083e6cd6afd541329e3efc856330f93b11c287 | [
"MIT"
] | 10 | 2019-01-14T09:56:50.000Z | 2020-06-01T22:19:05.000Z | #!/usr/bin/env python
from canari.maltego.entities import Person
from canari.maltego.utils import debug, progress
from canari.framework import configure #, superuser
from common.entities import Filename,vtfilereport
from common.ripVT import *
import ast
__author__ = '@matonis'
__copyright__ = 'Copyright 2015, Ripvt Project'
__credits__ = []
__license__ = 'GPL'
__version__ = '0.1'
__maintainer__ = '@matonis'
__email__ = 'dfir.matonis@gmail.com'
__status__ = 'Development'
__all__ = [
'dotransform',
'onterminate'
]
@configure(
label='[ripVT] - Behavioral to Downloaded Files (VT)',
description='Extracts Downloaded files from sandbox.',
uuids=[ 'ripVT.v2.b2dlf'],
inputs=[ ( 'ripVT', vtfilereport )],
remote=False,
debug=True
)
def dotransform(request, response):
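    # Reads the VT behavioural report carried in the entity's 'behavior_data'
    # field (a Python-literal dict parsed with ast.literal_eval below) and
    # emits a Filename entity for every file the sample downloaded in the sandbox.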
if request.fields['behavioral']!= "false":
behavior=ast.literal_eval(request.fields['behavior_data'])
if behavior.has_key("filesystem"):
if behavior['filesystem'].has_key("downloaded"):
for t_file in behavior['filesystem']['downloaded']:
r=Filename(t_file['path'])
r.linklabel="vt_behave->downloaded"
response+=r
else:
debug("ripVT: No behavioral for %s" % request.value)
return response | 29.333333 | 67 | 0.665909 |
036afb7002d847e9ca98905d8e17710b294e3eed | 1,304 | py | Python | stickybeak/vendored/pip/_internal/utils/encoding.py | reloadware/stickybeak | 8ac52a80849a3098fb6b2f47115970a734a73c14 | [
"Apache-2.0"
] | null | null | null | stickybeak/vendored/pip/_internal/utils/encoding.py | reloadware/stickybeak | 8ac52a80849a3098fb6b2f47115970a734a73c14 | [
"Apache-2.0"
] | null | null | null | stickybeak/vendored/pip/_internal/utils/encoding.py | reloadware/stickybeak | 8ac52a80849a3098fb6b2f47115970a734a73c14 | [
"Apache-2.0"
] | 1 | 2022-01-01T15:14:42.000Z | 2022-01-01T15:14:42.000Z | import codecs
import locale
import re
import sys
from stickybeak.vendored.pip._internal.utils.typing import MYPY_CHECK_RUNNING
if MYPY_CHECK_RUNNING:
from typing import List, Tuple, Text
BOMS = [
(codecs.BOM_UTF8, 'utf-8'),
(codecs.BOM_UTF16, 'utf-16'),
(codecs.BOM_UTF16_BE, 'utf-16-be'),
(codecs.BOM_UTF16_LE, 'utf-16-le'),
(codecs.BOM_UTF32, 'utf-32'),
(codecs.BOM_UTF32_BE, 'utf-32-be'),
(codecs.BOM_UTF32_LE, 'utf-32-le'),
] # type: List[Tuple[bytes, Text]]
ENCODING_RE = re.compile(br'coding[:=]\s*([-\w.]+)')
def auto_decode(data):
# type: (bytes) -> Text
"""Check a bytes string for a BOM to correctly detect the encoding
Fallback to locale.getpreferredencoding(False) like open() on Python3"""
for bom, encoding in BOMS:
if data.startswith(bom):
return data[len(bom):].decode(encoding)
    # Let's check the first two lines as in PEP 263
for line in data.split(b'\n')[:2]:
if line[0:1] == b'#' and ENCODING_RE.search(line):
result = ENCODING_RE.search(line)
assert result is not None
encoding = result.groups()[0].decode('ascii')
return data.decode(encoding)
return data.decode(
locale.getpreferredencoding(False) or sys.getdefaultencoding(),
)
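# A small usage sketch (illustrative byte strings, not part of the vendored
# module): a BOM takes priority, otherwise a PEP 263 coding cookie in the
# first two lines decides, otherwise the locale/default encoding is used.
#
#     auto_decode(codecs.BOM_UTF8 + b"x = 1\n")                   # utf-8
#     auto_decode(b"# -*- coding: latin-1 -*-\nx = 'caf\xe9'\n")   # latin-1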
| 31.047619 | 77 | 0.648773 |
22388c00e44e64ae0046e34f7a617775fd8a8c45 | 1,639 | py | Python | run_describe_cloudwatch.py | HardBoiledSmith/johanna | 0443a9040f0248f0a800c9d4b062e375f997bb6f | [
"MIT"
] | 64 | 2016-11-03T11:20:25.000Z | 2021-05-24T03:08:57.000Z | run_describe_cloudwatch.py | HardBoiledSmith/johanna | 0443a9040f0248f0a800c9d4b062e375f997bb6f | [
"MIT"
] | 69 | 2016-11-03T14:09:35.000Z | 2022-02-07T12:52:05.000Z | run_describe_cloudwatch.py | HardBoiledSmith/johanna | 0443a9040f0248f0a800c9d4b062e375f997bb6f | [
"MIT"
] | 19 | 2016-11-03T11:04:51.000Z | 2020-06-12T10:40:57.000Z | #!/usr/bin/env python3
from env import env
from run_common import AWSCli
aws_cli = AWSCli()
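# Prints an O/X summary of whether the CloudWatch dashboards and alarms
# declared in the env config are actually present in the account.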
def describe_cloudwatch_dashboard():
if not env.get('cloudwatch'):
return False
if not env['cloudwatch'].get('DASHBOARDS'):
return False
d_set = set()
dashboards_list = env['cloudwatch']['DASHBOARDS']
for dl in dashboards_list:
d_name = f"{dl['NAME']}_{dl['AWS_REGION']}"
d_set.add(d_name)
cmd = ['cloudwatch', 'list-dashboards']
result = aws_cli.run(cmd)
for de in result['DashboardEntries']:
if de['DashboardName'] in d_set:
return True
return False
def describe_cloudwatch_alarm():
if not env.get('cloudwatch'):
return False
if not env['cloudwatch'].get('ALARMS'):
return False
a_set = set()
alarms_list = env['cloudwatch']['ALARMS']
for al in alarms_list:
a_name = f"{al['NAME']}_{al['AWS_REGION']}_{al.get('METRIC_NAME', '')}"
a_set.add(f'"{a_name}"')
cmd = ['cloudwatch', 'describe-alarms']
cmd += ['--alarm-names']
cmd += a_set
result = aws_cli.run(cmd)
for ma in result['MetricAlarms']:
if ma['AlarmName'] in a_set:
return True
return False
results = list()
if describe_cloudwatch_dashboard():
results.append('CloudWatch Dashboard -------------- O')
else:
results.append('CloudWatch Dashboard -------------- X')
if describe_cloudwatch_alarm():
results.append('CloudWatch Alarm -------------- O')
else:
results.append('CloudWatch Alarm -------------- X')
print('#' * 80)
for r in results:
print(r)
print('#' * 80)
| 22.148649 | 79 | 0.600366 |
f800904aa9501a9db5c84018ea2ccbdd381cc5c7 | 6,370 | py | Python | examples/process_script.py | DynamicGravitySystems/DGP | 5c0b566b846eb25f1e5ede64b2caaaa6a3352a29 | [
"Apache-2.0"
] | 7 | 2017-08-15T21:51:40.000Z | 2020-10-28T00:40:23.000Z | examples/process_script.py | DynamicGravitySystems/DGP | 5c0b566b846eb25f1e5ede64b2caaaa6a3352a29 | [
"Apache-2.0"
] | 63 | 2017-08-11T15:12:03.000Z | 2020-05-23T19:03:46.000Z | examples/process_script.py | DynamicGravitySystems/DGP | 5c0b566b846eb25f1e5ede64b2caaaa6a3352a29 | [
"Apache-2.0"
] | 4 | 2018-03-29T21:30:26.000Z | 2020-10-27T20:15:23.000Z | import os
from datetime import datetime
from dgp.lib.gravity_ingestor import read_at1a
from dgp.lib.trajectory_ingestor import import_trajectory
from dgp.lib.etc import align_frames
from dgp.lib.transform.transform_graphs import AirbornePost
from dgp.lib.transform.filters import detrend
from dgp.lib.plots import timeseries_gravity_diagnostic, mapplot_line, read_meterconfig
# Runtime Option
campaign = 'OIB' # 'ROSETTA'
# Set paths
if campaign == 'ROSETTA':
print('ROSETTA')
basedir = '/Users/dporter/Documents/Research/Projects/DGP_test/'
gravity_directory = 'DGP_data'
gravity_file = 'AN04_F1001_20171103_2127.dat'
trajectory_directory = gravity_directory
trajectory_file = 'AN04_F1001_20171103_DGS-INS_FINAL_DGS.txt'
# L650
begin_line = datetime(2017, 11, 4, 0, 27)
end_line = datetime(2017, 11, 4, 1, 45)
gps_fields = ['mdy', 'hms', 'lat', 'long', 'ortho_ht', 'ell_ht', 'num_stats', 'pdop']
elif campaign == 'OIB':
print('OIB')
basedir = '/Users/dporter/Documents/Research/Projects/OIB-grav/data/P3_2017'
gravity_directory = 'gravity/dgs/raw/F2004'
gravity_file = 'OIB-P3_20170327_F2004_DGS_0938.dat'
trajectory_directory = 'pnt/dgs-ins/F2004/txt'
trajectory_file = 'OIB-P3_20170327_F2004_DGS-INS_RAPID_DGS.txt'
# NW Coast Parallel
begin_line = datetime(2017, 3, 27, 15, 35)
end_line = datetime(2017, 3, 27, 16, 50)
gps_fields = ['mdy', 'hms', 'lat', 'long', 'ortho_ht', 'ell_ht', 'num_stats', 'pdop']
else:
print('Scotia?')
# Load Data Files
print('\nImporting gravity')
gravity = read_at1a(os.path.join(basedir, gravity_directory, gravity_file), interp=True)
print('\nImporting trajectory')
trajectory = import_trajectory(os.path.join(basedir, trajectory_directory, trajectory_file),
columns=gps_fields, skiprows=1, timeformat='hms')
# Read MeterProcessing file in Data Directory
config_file = os.path.join(basedir, gravity_directory, "MeterProcessing.ini")
k_factor = read_meterconfig(config_file, 'kfactor')
tie_gravity = read_meterconfig(config_file, 'TieGravity')
print('{0} {1}'.format(k_factor, tie_gravity))
flight = gravity_file[4:11]
# statics
# TODO: Semi-automate or create GUI to get statics
first_static = read_meterconfig(config_file, 'PreStill')
second_static = read_meterconfig(config_file, 'PostStill')
# def compute_static(begin, end):
# return gravity[(begin < gravity.index) & (gravity.index < end)]['gravity'].mean()
#
# begin_first_static = datetime(2016, 8, 10, 19, 57)
# end_first_static = datetime(2016, 8, 10, 20, 8)
# first_static = compute_static(begin_first_static, end_first_static)
#
# begin_second_static = datetime(2016, 8, 10, 21, 7)
# end_second_static = datetime(2016, 8, 10, 21, 17)
# second_static = compute_static(begin_second_static, end_second_static)
# pre-processing prep
trajectory_full = trajectory[['long', 'lat']]
gravity = gravity[(begin_line <= gravity.index) & (gravity.index <= end_line)]
trajectory = trajectory[(begin_line <= trajectory.index) & (trajectory.index <= end_line)]
# align gravity and trajectory frames
gravity, trajectory = align_frames(gravity, trajectory)
# adjust for crossing the prime meridian
trajectory['long'] = trajectory['long'].where(trajectory['long'] > 0, trajectory['long'] + 360)
# de-drift
gravity['gravity'] = detrend(gravity['gravity'], first_static, second_static)
# adjust to absolute
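# (the constant below ties the de-drifted relative profile to absolute gravity:
# the pre-flight still reading scaled by the meter k_factor is differenced
# against the known tie value, and that offset is added to every sample)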
offset = tie_gravity - k_factor * first_static
gravity['gravity'] += offset
# print('\nProcessing')
# g = AirbornePost(trajectory, gravity, begin_static=first_static, end_static=second_static)
# results = g.execute()
###########
# Real plots
print('\nPlotting')
if 'results' in locals():
# Time-series Plot
variables = ['ell_ht', 'lat', 'long']
variable_units = ['m', 'degrees', 'degrees']
plot_title = campaign + ' ' + flight + ': PNT'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_TS_pnt.png')
timeseries_gravity_diagnostic(results['shifted_trajectory'], variables, variable_units, begin_line, end_line,
plot_title, plot_name)
# Time-series Plot
variables = ['eotvos', 'lat_corr', 'fac', 'total_corr']
variable_units = ['mGal', 'mGal', 'mGal', 'mGal']
plot_title = campaign + ' ' + flight + ': Corrections'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_TS_corrections.png')
timeseries_gravity_diagnostic(results, variables, variable_units, begin_line, end_line,
plot_title, plot_name)
# Time-series Plot
variables = ['filtered_grav', 'corrected_grav', 'abs_grav']
variable_units = ['mGal', 'mGal', 'mGal', 'mGal']
plot_title = campaign + ' ' + flight + ': Gravity'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_TS_gravity.png')
timeseries_gravity_diagnostic(results, variables, variable_units, begin_line, end_line,
plot_title, plot_name)
# Map Plot
plot_title = campaign + ' ' + flight + ': Gravity'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_mapplot_gravity.png')
mapplot_line(trajectory_full, trajectory, results, 'filtered_grav', 'mGal', plot_title, plot_name)
else:
# Temporary plots for when graph is commented out (currently OIB_P3)
variables = ['gravity', 'cross_accel', 'beam', 'temp']
variable_units = ['mGal', 'mGal', 'mGal', 'C']
plot_title = campaign + ' ' + flight + ': QC'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_TS_QC.png')
timeseries_gravity_diagnostic(gravity, variables, variable_units, begin_line, end_line,
plot_title, plot_name)
variables = ['ell_ht', 'ortho_ht', 'lat', 'long']
variable_units = ['m', 'm', 'degrees', 'degrees']
plot_title = campaign + ' ' + flight + ': PNT'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_TS_pnt.png')
timeseries_gravity_diagnostic(trajectory, variables, variable_units, begin_line, end_line,
plot_title, plot_name)
plot_title = campaign + ' ' + flight + ': Gravity'
plot_name = os.path.join(basedir, campaign + '_' + flight + '_DGP_mapplot_gravity.png')
mapplot_line(trajectory_full, trajectory, gravity, 'gravity', 'mGal', plot_title, plot_name)
| 44.545455 | 113 | 0.694035 |
eae240dad8858e7d5be2c527adb06ad56aad7607 | 13,214 | py | Python | inference-engine/ie_bridges/python/tests/test_IECore.py | akhakimova/openvino | 3a588476cd7a34bdc8ad02b85c14dc939747a282 | [
"Apache-2.0"
] | 1 | 2020-09-28T08:56:20.000Z | 2020-09-28T08:56:20.000Z | inference-engine/ie_bridges/python/tests/test_IECore.py | akhakimova/openvino | 3a588476cd7a34bdc8ad02b85c14dc939747a282 | [
"Apache-2.0"
] | 34 | 2020-11-20T15:19:18.000Z | 2022-02-21T13:13:48.000Z | inference-engine/ie_bridges/python/tests/test_IECore.py | sbalandi/openvino | 519951a4a9f979c1b04529dda821111c56113716 | [
"Apache-2.0"
] | 1 | 2019-09-03T08:35:20.000Z | 2019-09-03T08:35:20.000Z | # Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
import pytest
from sys import platform
from pathlib import Path
from threading import Event, Thread
from time import sleep, time
from queue import Queue
from openvino.inference_engine import IENetwork, IECore, ExecutableNetwork
from conftest import model_path, plugins_path, model_onnx_path
import ngraph as ng
test_net_xml, test_net_bin = model_path()
test_net_onnx = model_onnx_path()
plugins_xml, plugins_win_xml, plugins_osx_xml = plugins_path()
def test_init_ie_core_no_cfg():
ie = IECore()
assert isinstance(ie, IECore)
def test_init_ie_core_with_cfg():
ie = IECore(plugins_xml)
assert isinstance(ie, IECore)
def test_get_version(device):
ie = IECore()
version = ie.get_versions(device)
assert isinstance(version, dict), "Returned version must be a dictionary"
assert device in version, "{} plugin version wasn't found in versions"
assert hasattr(version[device], "major"), "Returned version has no field 'major'"
assert hasattr(version[device], "minor"), "Returned version has no field 'minor'"
assert hasattr(version[device], "description"), "Returned version has no field 'description'"
assert hasattr(version[device], "build_number"), "Returned version has no field 'build_number'"
def test_load_network(device):
ie = IECore()
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
exec_net = ie.load_network(net, device)
assert isinstance(exec_net, ExecutableNetwork)
def test_load_network_from_file(device):
ie = IECore()
exec_net = ie.load_network(test_net_xml, device)
assert isinstance(exec_net, ExecutableNetwork)
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU", reason="Device independent test")
def test_load_network_wrong_device():
ie = IECore()
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
with pytest.raises(RuntimeError) as e:
ie.load_network(net, "BLA")
assert 'Device with "BLA" name is not registered in the InferenceEngine' in str(e.value)
def test_query_network(device):
ie = IECore()
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
query_res = ie.query_network(net, device)
func_net = ng.function_from_cnn(net)
ops_net = func_net.get_ordered_ops()
ops_net_names = [op.friendly_name for op in ops_net]
assert [key for key in query_res.keys() if key not in ops_net_names] == [], \
"Not all network layers present in query_network results"
assert next(iter(set(query_res.values()))) == device, "Wrong device for some layers"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU", reason="Device dependent test")
def test_register_plugin():
ie = IECore()
if ie.get_metric("CPU", "FULL_DEVICE_NAME") == "arm_compute::NEON":
pytest.skip("Can't run on ARM plugin due-to MKLDNNPlugin specific test")
ie.register_plugin("MKLDNNPlugin", "BLA")
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
exec_net = ie.load_network(net, "BLA")
assert isinstance(exec_net, ExecutableNetwork), "Cannot load the network to the registered plugin with name 'BLA'"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU", reason="Device dependent test")
def test_register_plugins():
ie = IECore()
if ie.get_metric("CPU", "FULL_DEVICE_NAME") == "arm_compute::NEON":
pytest.skip("Can't run on ARM plugin due-to MKLDNNPlugin specific test")
if platform == "linux" or platform == "linux2":
ie.register_plugins(plugins_xml)
elif platform == "darwin":
ie.register_plugins(plugins_osx_xml)
elif platform == "win32":
ie.register_plugins(plugins_win_xml)
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
exec_net = ie.load_network(net, "CUSTOM")
assert isinstance(exec_net,
ExecutableNetwork), "Cannot load the network to the registered plugin with name 'CUSTOM' " \
"registred in the XML file"
@pytest.mark.skip(reason="Need to figure out if it's expected behaviour (fails with C++ API as well")
def test_unregister_plugin(device):
ie = IECore()
ie.unregister_plugin(device)
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
with pytest.raises(RuntimeError) as e:
ie.load_network(net, device)
assert f"Device with '{device}' name is not registered in the InferenceEngine" in str(e.value)
def test_available_devices(device):
ie = IECore()
devices = ie.available_devices
assert device in devices, f"Current device '{device}' is not listed in available devices '{', '.join(devices)}'"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU",
reason=f"Cannot run test on device {os.environ.get('TEST_DEVICE')}, Plugin specific test")
def test_get_metric_list_of_str():
ie = IECore()
param = ie.get_metric("CPU", "OPTIMIZATION_CAPABILITIES")
assert isinstance(param, list), "Parameter value for 'OPTIMIZATION_CAPABILITIES' " \
f"metric must be a list but {type(param)} is returned"
assert all(isinstance(v, str) for v in param), "Not all of the parameter values for 'OPTIMIZATION_CAPABILITIES' " \
"metric are strings!"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU",
reason=f"Cannot run test on device {os.environ.get('TEST_DEVICE')}, Plugin specific test")
def test_get_metric_tuple_of_two_ints():
ie = IECore()
if ie.get_metric("CPU", "FULL_DEVICE_NAME") == "arm_compute::NEON":
pytest.skip("Can't run on ARM plugin due-to unsupported device metric")
param = ie.get_metric("CPU", "RANGE_FOR_STREAMS")
assert isinstance(param, tuple), "Parameter value for 'RANGE_FOR_STREAMS' " \
f"metric must be tuple but {type(param)} is returned"
assert all(isinstance(v, int) for v in param), "Not all of the parameter values for 'RANGE_FOR_STREAMS' " \
"metric are integers!"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU",
reason=f"Cannot run test on device {os.environ.get('TEST_DEVICE')}, Plugin specific test")
def test_get_metric_tuple_of_three_ints():
ie = IECore()
if ie.get_metric("CPU", "FULL_DEVICE_NAME") == "arm_compute::NEON":
pytest.skip("Can't run on ARM plugin due-to unsupported device metric")
param = ie.get_metric("CPU", "RANGE_FOR_ASYNC_INFER_REQUESTS")
assert isinstance(param, tuple), "Parameter value for 'RANGE_FOR_ASYNC_INFER_REQUESTS' " \
f"metric must be tuple but {type(param)} is returned"
assert all(isinstance(v, int) for v in param), "Not all of the parameter values for " \
"'RANGE_FOR_ASYNC_INFER_REQUESTS' metric are integers!"
@pytest.mark.skipif(os.environ.get("TEST_DEVICE", "CPU") != "CPU",
reason=f"Cannot run test on device {os.environ.get('TEST_DEVICE')}, Plugin specific test")
def test_get_metric_str():
ie = IECore()
param = ie.get_metric("CPU", "FULL_DEVICE_NAME")
assert isinstance(param, str), "Parameter value for 'FULL_DEVICE_NAME' " \
f"metric must be string but {type(param)} is returned"
def test_read_network_from_xml():
ie = IECore()
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
assert isinstance(net, IENetwork)
net = ie.read_network(model=test_net_xml)
assert isinstance(net, IENetwork)
def test_read_network_as_path():
ie = IECore()
net = ie.read_network(model=Path(test_net_xml), weights=test_net_bin)
assert isinstance(net, IENetwork)
net = ie.read_network(model=test_net_xml, weights=Path(test_net_bin))
assert isinstance(net, IENetwork)
net = ie.read_network(model=Path(test_net_xml))
assert isinstance(net, IENetwork)
def test_read_network_from_onnx():
ie = IECore()
net = ie.read_network(model=test_net_onnx)
assert isinstance(net, IENetwork)
def test_read_network_from_onnx_as_path():
ie = IECore()
net = ie.read_network(model=Path(test_net_onnx))
assert isinstance(net, IENetwork)
def test_incorrect_xml():
ie = IECore()
with pytest.raises(Exception) as e:
ie.read_network(model="./model.xml", weights=Path(test_net_bin))
assert "Path to the model ./model.xml doesn't exist or it's a directory" in str(e.value)
def test_incorrect_bin():
ie = IECore()
with pytest.raises(Exception) as e:
ie.read_network(model=test_net_xml, weights="./model.bin")
assert "Path to the weights ./model.bin doesn't exist or it's a directory" in str(e.value)
def test_read_net_from_buffer():
ie = IECore()
with open(test_net_bin, 'rb') as f:
bin = f.read()
with open(model_path()[0], 'rb') as f:
xml = f.read()
net = ie.read_network(model=xml, weights=bin, init_from_buffer=True)
assert isinstance(net, IENetwork)
def test_net_from_buffer_valid():
ie = IECore()
with open(test_net_bin, 'rb') as f:
bin = f.read()
with open(model_path()[0], 'rb') as f:
xml = f.read()
net = ie.read_network(model=xml, weights=bin, init_from_buffer=True)
ref_net = ie.read_network(model=test_net_xml, weights=test_net_bin)
assert net.name == ref_net.name
assert net.batch_size == ref_net.batch_size
ii_net = net.input_info
ii_net2 = ref_net.input_info
o_net = net.outputs
o_net2 = ref_net.outputs
assert ii_net.keys() == ii_net2.keys()
assert o_net.keys() == o_net2.keys()
@pytest.mark.skipif(os.environ.get("TEST_DEVICE","CPU") != "GPU", reason=f"Device dependent test")
def test_load_network_release_gil(device):
running = True
message_queue = Queue()
def detect_long_gil_holds():
sleep_time = 0.01
latency_alert_threshold = 0.1
# Send a message to indicate the thread is running and ready to detect GIL locks
message_queue.put("ready to detect")
while running:
start_sleep = time()
sleep(sleep_time)
elapsed = time() - start_sleep
if elapsed > latency_alert_threshold:
# Send a message to the testing thread that a long GIL lock occurred
message_queue.put(latency_alert_threshold)
ie = IECore()
net = ie.read_network(model=test_net_xml, weights=test_net_bin)
# Wait for the GIL lock detector to be up and running
gil_hold_detection_thread = Thread(daemon=True, target=detect_long_gil_holds)
gil_hold_detection_thread.start()
# Wait to make sure the thread is started and checking for GIL holds
sleep(0.1)
assert message_queue.get(timeout=5) == "ready to detect"
# Run the function that should unlock the GIL
exec_net = ie.load_network(net, device)
# Ensure resources are closed
running = False
gil_hold_detection_thread.join(timeout=5)
# Assert there were never any long gil locks
assert message_queue.qsize() == 0, \
f"More than 0 GIL locks occured! Latency: {message_queue.get()})"
def test_nogil_safe(device):
call_thread_func = Event()
core = IECore()
net = core.read_network(model=test_net_xml, weights=test_net_bin)
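    # Event-based handshake to verify the wrapped call releases the GIL: the
    # main thread sets the event and immediately enters the nogil function;
    # the worker can only wake up and clear the event if that call actually
    # drops the GIL while it is running.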
def thread_target(thread_func, thread_args):
call_thread_func.wait()
call_thread_func.clear()
thread_func(*thread_args)
def main_thread_target(gil_release_func, args):
call_thread_func.set()
gil_release_func(*args)
assert not call_thread_func.is_set()
def test_run_parallel(gil_release_func, args, thread_func, thread_args):
thread = Thread(target=thread_target, args=[thread_func, thread_args])
thread.start()
main_thread_target(gil_release_func, args)
thread.join()
main_targets = [{
core.read_network: [test_net_xml, test_net_bin],
core.load_network: [net, device],
},
{
core.load_network: [net, device],
}]
    # Use lists of (func, args) pairs here: a dict would silently keep only
    # one of the repeated ``getattr`` entries below.
    thread_targets = [[
        (core.get_versions, [device]),
        (core.read_network, [test_net_xml, test_net_bin]),
        (core.load_network, [net, device]),
        (core.query_network, [net, device]),
        (getattr, [core, "available_devices"]),
    ],
    [
        (getattr, [net, "name"]),
        (getattr, [net, "input_info"]),
        (getattr, [net, "outputs"]),
        (getattr, [net, "batch_size"]),
    ]]
    for main_target, custom_target in zip(main_targets, thread_targets):
        for nogil_func, args in main_target.items():
            for thread_func, thread_args in custom_target:
                test_run_parallel(nogil_func, args, thread_func, thread_args)
| 40.533742 | 119 | 0.665582 |
9a4ddebf6c17b941a96e4f15869203cfc646dfb6 | 1,374 | py | Python | dizoo/box2d/lunarlander/config/lunarlander_sql_config.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | null | null | null | dizoo/box2d/lunarlander/config/lunarlander_sql_config.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | null | null | null | dizoo/box2d/lunarlander/config/lunarlander_sql_config.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | null | null | null | from easydict import EasyDict
lunarlander_sql_config = dict(
exp_name='lunarlander_sql',
env=dict(
collector_env_num=8,
evaluator_env_num=5,
n_evaluator_episode=5,
stop_value=200,
),
policy=dict(
cuda=False,
model=dict(
obs_shape=8,
action_shape=4,
encoder_hidden_size_list=[128, 128, 64],
dueling=True,
),
nstep=1,
discount_factor=0.97,
learn=dict(batch_size=64, learning_rate=0.001, alpha=0.08),
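        # alpha above is the entropy temperature of the soft Q-learning ('sql')
        # policy; larger values put more weight on policy entropy (a reading of
        # this config, not a value dictated by the environment).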
collect=dict(n_sample=64),
        eval=dict(evaluator=dict(eval_freq=50, )),  # note: evaluation is run once every `eval_freq` training iterations
other=dict(
eps=dict(
type='exp',
start=0.95,
end=0.1,
decay=10000,
),
replay_buffer=dict(replay_buffer_size=20000, ),
),
),
)
lunarlander_sql_config = EasyDict(lunarlander_sql_config)
main_config = lunarlander_sql_config
lunarlander_sql_create_config = dict(
env=dict(
type='lunarlander',
import_names=['dizoo.box2d.lunarlander.envs.lunarlander_env'],
),
env_manager=dict(type='base'),
policy=dict(type='sql'),
)
lunarlander_sql_create_config = EasyDict(lunarlander_sql_create_config)
create_config = lunarlander_sql_create_config | 29.869565 | 112 | 0.612082 |
0a526faa16bfc9ef0284ddec4e63448f0e6934e1 | 1,550 | py | Python | python/aghast/aghast_generated/UnweightedCounts.py | HDembinski/aghast | f3d45a6960033f48fb8f6b7e906cb36b9d9d8e95 | [
"BSD-3-Clause"
] | 18 | 2019-04-15T14:39:35.000Z | 2021-12-21T15:01:02.000Z | python/aghast/aghast_generated/UnweightedCounts.py | HDembinski/aghast | f3d45a6960033f48fb8f6b7e906cb36b9d9d8e95 | [
"BSD-3-Clause"
] | 27 | 2019-04-12T20:24:00.000Z | 2021-12-03T08:51:56.000Z | python/aghast/aghast_generated/UnweightedCounts.py | diana-hep/stagg | ed97e9abc870e729d300622253aa7e9c870f77ec | [
"BSD-3-Clause"
] | 11 | 2019-04-15T14:41:00.000Z | 2021-11-16T13:28:10.000Z | # automatically generated by the FlatBuffers compiler, do not modify
# namespace: aghast_generated
import flatbuffers
# ///////////////////////////////////////////////// distributions
class UnweightedCounts(object):
__slots__ = ["_tab"]
@classmethod
def GetRootAsUnweightedCounts(cls, buf, offset):
n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)
x = UnweightedCounts()
x.Init(buf, n + offset)
return x
# UnweightedCounts
def Init(self, buf, pos):
self._tab = flatbuffers.table.Table(buf, pos)
# UnweightedCounts
def CountsType(self):
o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))
if o != 0:
return self._tab.Get(flatbuffers.number_types.Uint8Flags, o + self._tab.Pos)
return 0
# UnweightedCounts
def Counts(self):
o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))
if o != 0:
from flatbuffers.table import Table
obj = Table(bytearray(), 0)
self._tab.Union(obj, o)
return obj
return None
def UnweightedCountsStart(builder):
builder.StartObject(2)
def UnweightedCountsAddCountsType(builder, countsType):
builder.PrependUint8Slot(0, countsType, 0)
def UnweightedCountsAddCounts(builder, counts):
builder.PrependUOffsetTRelativeSlot(
1, flatbuffers.number_types.UOffsetTFlags.py_type(counts), 0
)
def UnweightedCountsEnd(builder):
return builder.EndObject()
| 27.192982 | 88 | 0.656129 |
746cf7e166b9381af2d77e3081bcd8235a0943c5 | 2,351 | py | Python | Oscleton.py | ArthurVimond/oscleton-midi-remote-script | c475de8f2f13cb309bbb957584611037e97cd621 | [
"Apache-2.0"
] | null | null | null | Oscleton.py | ArthurVimond/oscleton-midi-remote-script | c475de8f2f13cb309bbb957584611037e97cd621 | [
"Apache-2.0"
] | 1 | 2020-08-17T23:25:30.000Z | 2020-08-17T23:25:30.000Z | Oscleton.py | ArthurVimond/oscleton-midi-remote-script | c475de8f2f13cb309bbb957584611037e97cd621 | [
"Apache-2.0"
] | null | null | null | from __future__ import with_statement
from _Framework.ControlSurface import ControlSurface
from OscletonApplicationComponent import OscletonApplicationComponent
from OscletonSessionComponent import OscletonSessionComponent
from OscletonMixerComponent import OscletonMixerComponent
from OscletonTransportComponent import OscletonTransportComponent
from OscletonPreferences import OscletonPreferences
from OscletonUpdater import OscletonUpdater
from OscletonMixin import OscletonMixin
from OscletonOSC import OscletonOSC
class Oscleton(ControlSurface):
# MIDI Remote Script version
midi_remote_script_version = '0.5.0'
def __init__(self, c_instance):
super(Oscleton, self).__init__(c_instance)
with self.component_guard():
OscletonOSC.set_log(self.log_message)
OscletonOSC.set_message(self.show_message)
OscletonMixin.set_log(self.log_message)
self.osc_handler = OscletonOSC(self)
OscletonMixin.set_osc_handler(self.osc_handler)
self._app = OscletonApplicationComponent(1, 1)
self._app.setMidiRemoteScriptVersion(self.midi_remote_script_version)
self._mixer = OscletonMixerComponent(1)
self._session = OscletonSessionComponent(1,1)
self._session.set_mixer(self._mixer)
self._transport = OscletonTransportComponent()
self._prefs = OscletonPreferences()
self._updater = OscletonUpdater(self._prefs, self.midi_remote_script_version)
self.parse()
if not self.osc_handler.error():
# Set remote host from preferences
linked_device_ip = self._prefs.get_linked_device_ip()
                if linked_device_ip is not None and linked_device_ip != '':
self.osc_handler.set_peer(linked_device_ip)
self.show_message('Ready')
self.osc_handler.send('/live/start', True)
self._updater.check_for_update()
def disconnect(self):
self.osc_handler.send('/live/quit', True)
self.osc_handler.shutdown()
def parse(self):
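        # Drain any pending OSC messages, then re-schedule this method so it
        # keeps polling on Live's timer ticks.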
self.osc_handler.process()
self.schedule_message(1, self.parse)
def set_linked_device_ip(self, ip):
self._prefs.set_linked_device_ip(ip) | 35.621212 | 89 | 0.695023 |
0ff167a2d80ac9d5dfae89ae8a21525c9d0d418b | 2,992 | py | Python | backdrop/core/log_handler.py | alphagov/backdrop | 1256e5075d7e5a0e41afb0f0913a5f2c4bdb9ad8 | [
"MIT"
] | 9 | 2015-10-20T04:36:48.000Z | 2020-09-08T18:47:01.000Z | backdrop/core/log_handler.py | alphagov/backdrop | 1256e5075d7e5a0e41afb0f0913a5f2c4bdb9ad8 | [
"MIT"
] | 31 | 2015-01-11T11:57:05.000Z | 2021-03-24T10:52:33.000Z | backdrop/core/log_handler.py | alphagov/backdrop | 1256e5075d7e5a0e41afb0f0913a5f2c4bdb9ad8 | [
"MIT"
] | 4 | 2015-01-25T09:06:45.000Z | 2021-04-10T20:27:36.000Z | from logging import FileHandler
from logging.handlers import RotatingFileHandler
from logstash_formatter import LogstashFormatter
import logging
from flask import request
class RequestIdFilter(logging.Filter):
def filter(self, record):
try:
record.govuk_request_id = request.headers.get('Govuk-Request-Id')
except RuntimeError:
# flask will throw a runtime error if we are attempting to get the
# header outside of the application context. In this case we can't
# infer the request_id, so we can just pass
pass
return True
def get_log_file_handler(path, log_level=logging.DEBUG):
handler = RotatingFileHandler(
path, maxBytes=1024 * 1024 * 10, backupCount=5)
handler.setFormatter(logging.Formatter(
"%(asctime)s [%(levelname)s] -> %(message)s"))
handler.setLevel(log_level)
return handler
def get_json_log_handler(path, app_name):
handler = RotatingFileHandler(
path, maxBytes=1024 * 1024 * 10, backupCount=5)
formatter = LogstashFormatter()
formatter.defaults['@tags'] = ['application', app_name]
handler.setFormatter(formatter)
return handler
def set_up_logging(app, env):
log_level = app.config['LOG_LEVEL']
numeric_log_level = logging._levelNames[log_level]
logger = logging.getLogger()
if log_level == "DEBUG":
logger.addHandler(logging.StreamHandler())
logger.addHandler(
get_log_file_handler("log/%s.log" % env, numeric_log_level)
)
logger.addHandler(
get_json_log_handler("log/%s.json.log" % env, app.name)
)
logger.setLevel(numeric_log_level)
request_id_filter = RequestIdFilter()
app.logger.addFilter(request_id_filter)
app.logger.info("{} logging started".format(app.name))
app.logger.info("{} logging started".format(numeric_log_level))
app.before_request(create_request_logger(app))
app.after_request(create_response_logger(app))
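# A minimal wiring sketch (hypothetical app name and env, not taken from this
# module): set_up_logging() expects a writable log/ directory and a LOG_LEVEL
# entry in the Flask config.
#
#     from flask import Flask
#     app = Flask("backdrop.read")
#     app.config['LOG_LEVEL'] = "DEBUG"
#     set_up_logging(app, "development")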
def set_up_audit_logging(app, env):
logger = logging.getLogger('backdrop.write.audit')
logger.setLevel(logging._levelNames['INFO'])
logger.addHandler(
get_json_log_handler("log/audit/%s.log.json" % env, app.name))
app.audit_logger = logger
def create_request_logger(app):
def log_request():
if request.method != "HEAD":
app.logger.info("request: %s - %s" % (request.method, request.url),
extra=create_logging_extra_dict())
return log_request
def create_response_logger(app):
def log_response(response):
if request.method != "HEAD":
app.logger.info(
"response: %s - %s - %s" % (
request.method, request.url, response.status
),
extra=create_logging_extra_dict()
)
return response
return log_response
def create_logging_extra_dict():
return {'govuk_request_id': request.headers.get('Govuk-Request-Id')}
| 32.879121 | 79 | 0.671457 |
03ff6e412ab2880612fc746f501fcd457a0edcd5 | 1,016 | py | Python | base/run_cherrypy.py | daavelino/vulnerability-catalog | 61e0db9cc4656a16847ec635a4cac3e9a6c67dd4 | [
"MIT"
] | 12 | 2018-01-09T18:03:41.000Z | 2021-02-04T08:21:43.000Z | base/run_cherrypy.py | daavelino/vulnerability-catalog | 61e0db9cc4656a16847ec635a4cac3e9a6c67dd4 | [
"MIT"
] | 21 | 2018-01-13T21:23:22.000Z | 2021-04-08T18:28:05.000Z | base/run_cherrypy.py | daavelino/vulnerability-catalog | 61e0db9cc4656a16847ec635a4cac3e9a6c67dd4 | [
"MIT"
] | 7 | 2017-08-29T10:27:19.000Z | 2021-11-09T00:37:03.000Z |
from base.wsgi import application
import cherrypy
import sys
if __name__ == '__main__':
hostname = "0.0.0.0"
port = 8000
if len(sys.argv) == 2:
tmp = sys.argv[1].split(":")
if len(tmp) == 2:
hostname = tmp[0]
port = int(tmp[1])
# Mount the application
cherrypy.tree.graft(application, "/")
# Unsubscribe the default server
cherrypy.server.unsubscribe()
# Instantiate a new server object
server = cherrypy._cpserver.Server()
# Configure the server object
server.socket_host = hostname
server.socket_port = port
server.thread_pool = 30
# For SSL Support
# server.ssl_module = 'builtin'
# server.ssl_certificate = 'ssl/certificate.cer'
# server.ssl_private_key = 'ssl/private.key'
# server.ssl_certificate_chain = 'ssl/bundle.pem'
# Subscribe this server
server.subscribe()
# Start the server engine
cherrypy.engine.start()
cherrypy.engine.block()
| 21.166667 | 58 | 0.624016 |
8367602ea21c7ddda93f6d7b069a0adc21d713e8 | 4,177 | py | Python | src/python/consts.py | yotamfr/prot2vec | eaee36f9e3929054b1c324acd053a52d0e7be2bd | [
"MIT"
] | 8 | 2017-10-01T14:34:25.000Z | 2021-04-27T13:18:00.000Z | src/python/consts.py | yotamfr/prot2vec | eaee36f9e3929054b1c324acd053a52d0e7be2bd | [
"MIT"
] | 1 | 2020-01-23T17:17:18.000Z | 2020-01-23T17:17:18.000Z | src/python/consts.py | yotamfr/prot2vec | eaee36f9e3929054b1c324acd053a52d0e7be2bd | [
"MIT"
] | 1 | 2018-05-04T04:54:32.000Z | 2018-05-04T04:54:32.000Z | from datetime import datetime
from Bio.SubsMat import MatrixInfo
exp_codes = ["EXP", "IDA", "IPI", "IMP", "IGI", "IEP"] + ["TAS", "IC"]
t0 = datetime(2014, 1, 1, 0, 0)
t1 = datetime(2014, 9, 1, 0, 0)
cafa2_cutoff = datetime(2014, 1, 1, 0, 0)
cafa3_cutoff = datetime(2017, 2, 2, 0, 0)
TODAY = today_cutoff = datetime.now()
NOW = datetime.utcnow()
PAD = 25
class AminoAcids(object):
def __len__(self):
return 20
def __init__(self):
self.aa2index = \
{
"A": 0,
"R": 1,
"N": 2,
"D": 3,
"C": 4,
"Q": 5,
"E": 6,
"G": 7,
"H": 8,
"I": 9,
"L": 10,
"K": 11,
"M": 12,
"F": 13,
"P": 14,
"S": 15,
"T": 16,
"W": 17,
"Y": 18,
"V": 19,
"X": 20,
"B": 21,
"Z": 22,
"O": 23,
"U": 24
}
self.index2aa = {v: k for k, v in self.aa2index.items()}
self.aa2onehot = {
"A": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"R": [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"N": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"D": [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"C": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"Q": [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"E": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"G": [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"H": [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"I": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"L": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"K": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
"M": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
"F": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
"P": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
"S": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
"T": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
"W": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
"Y": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
"V": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
"X": [.05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05, .05],
"B": [0, 0, .5, .5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], # D, N
"Z": [0, 0, 0, 0, 0, .5, .5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], # E, Q
"O": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], # O -> K
"U": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], # U -> C
# "PAD": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
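        # Build per-residue BLOSUM62 rows over the 25-letter index above; the
        # nonstandard residues are looked up through their closest standard
        # ones (O -> K, U -> C), matching the one-hot handling above.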
self.blosum62 = {k: [0] * 25 for k in self.aa2index.keys()}
for k1, v1 in self.aa2index.items():
if k1 == "O" or k1 == "U":
continue
for k2, v2 in self.aa2index.items():
if k2 == "O": k2 = "K"
elif k2 == "U": k2 = "C"
self.blosum62[k1][v2] = MatrixInfo.blosum62[(k1, k2)] \
if (k1, k2) in MatrixInfo.blosum62 else MatrixInfo.blosum62[(k2, k1)]
self.blosum62['O'][:] = self.blosum62['K'][:]
self.blosum62['U'][:] = self.blosum62['C'][:]
AA = AminoAcids()
amino_acids = ['A', 'R', 'N', 'D', 'C', 'Q', 'E', 'G', 'H', 'I', 'L', 'K', 'M', 'F', 'P', 'S', 'T', 'W', 'Y', 'V']
assert len(set(amino_acids)) == 20
if __name__ == "__main__":
print(AA.blosum62)
| 39.037383 | 118 | 0.325832 |
bca8dfc706fb8d2d8606f241fcafd1792bd72636 | 5,354 | py | Python | ask-sdk-model/ask_sdk_model/interfaces/alexa/presentation/apl/list_runtime_error.py | Signal-Kinetics/alexa-apis-for-python | abb8d3dce18a5510c48b215406ed36c024f01495 | [
"Apache-2.0"
] | null | null | null | ask-sdk-model/ask_sdk_model/interfaces/alexa/presentation/apl/list_runtime_error.py | Signal-Kinetics/alexa-apis-for-python | abb8d3dce18a5510c48b215406ed36c024f01495 | [
"Apache-2.0"
] | null | null | null | ask-sdk-model/ask_sdk_model/interfaces/alexa/presentation/apl/list_runtime_error.py | Signal-Kinetics/alexa-apis-for-python | abb8d3dce18a5510c48b215406ed36c024f01495 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
# except in compliance with the License. A copy of the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for
# the specific language governing permissions and limitations under the License.
#
import pprint
import re # noqa: F401
import six
import typing
from enum import Enum
from ask_sdk_model.interfaces.alexa.presentation.apl.runtime_error import RuntimeError
if typing.TYPE_CHECKING:
from typing import Dict, List, Optional, Union
from datetime import datetime
from ask_sdk_model.interfaces.alexa.presentation.apl.list_runtime_error_reason import ListRuntimeErrorReason
class ListRuntimeError(RuntimeError):
"""
Reports an error with list functionality.
:param message: A human-readable description of the error.
:type message: (optional) str
:param reason:
:type reason: (optional) ask_sdk_model.interfaces.alexa.presentation.apl.list_runtime_error_reason.ListRuntimeErrorReason
:param list_id: The identifier of the list in which the error occurred.
:type list_id: (optional) str
:param list_version: The listVersion in which the error occurred.
:type list_version: (optional) int
:param operation_index: The index of the operation which caused the error (if known)
:type operation_index: (optional) int
"""
deserialized_types = {
'object_type': 'str',
'message': 'str',
'reason': 'ask_sdk_model.interfaces.alexa.presentation.apl.list_runtime_error_reason.ListRuntimeErrorReason',
'list_id': 'str',
'list_version': 'int',
'operation_index': 'int'
} # type: Dict
attribute_map = {
'object_type': 'type',
'message': 'message',
'reason': 'reason',
'list_id': 'listId',
'list_version': 'listVersion',
'operation_index': 'operationIndex'
} # type: Dict
supports_multiple_types = False
def __init__(self, message=None, reason=None, list_id=None, list_version=None, operation_index=None):
# type: (Optional[str], Optional[ListRuntimeErrorReason], Optional[str], Optional[int], Optional[int]) -> None
"""Reports an error with list functionality.
:param message: A human-readable description of the error.
:type message: (optional) str
:param reason:
:type reason: (optional) ask_sdk_model.interfaces.alexa.presentation.apl.list_runtime_error_reason.ListRuntimeErrorReason
:param list_id: The identifier of the list in which the error occurred.
:type list_id: (optional) str
:param list_version: The listVersion in which the error occurred.
:type list_version: (optional) int
:param operation_index: The index of the operation which caused the error (if known)
:type operation_index: (optional) int
"""
self.__discriminator_value = "LIST_ERROR" # type: str
self.object_type = self.__discriminator_value
super(ListRuntimeError, self).__init__(object_type=self.__discriminator_value, message=message)
self.reason = reason
self.list_id = list_id
self.list_version = list_version
self.operation_index = operation_index
def to_dict(self):
# type: () -> Dict[str, object]
"""Returns the model properties as a dict"""
result = {} # type: Dict
for attr, _ in six.iteritems(self.deserialized_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else
x.value if isinstance(x, Enum) else x,
value
))
elif isinstance(value, Enum):
result[attr] = value.value
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else
(item[0], item[1].value)
if isinstance(item[1], Enum) else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
# type: () -> str
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
# type: () -> str
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
# type: (object) -> bool
"""Returns true if both objects are equal"""
if not isinstance(other, ListRuntimeError):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
# type: (object) -> bool
"""Returns true if both objects are not equal"""
return not self == other
| 37.704225 | 129 | 0.637467 |
5fe0f928786c7ace660177eb7dc03026b45ae957 | 459 | py | Python | data/scripts/templates/object/tangible/component/vehicle/shared_fuel_a.py | obi-two/GameServer | 7d37024e2291a97d49522610cd8f1dbe5666afc2 | [
"MIT"
] | 20 | 2015-02-23T15:11:56.000Z | 2022-03-18T20:56:48.000Z | data/scripts/templates/object/tangible/component/vehicle/shared_fuel_a.py | apathyboy/swganh | 665128efe9154611dec4cb5efc61d246dd095984 | [
"MIT"
] | null | null | null | data/scripts/templates/object/tangible/component/vehicle/shared_fuel_a.py | apathyboy/swganh | 665128efe9154611dec4cb5efc61d246dd095984 | [
"MIT"
] | 20 | 2015-04-04T16:35:59.000Z | 2022-03-24T14:54:37.000Z | #### NOTICE: THIS FILE IS AUTOGENERATED
#### MODIFICATIONS MAY BE LOST IF DONE IMPROPERLY
#### PLEASE SEE THE ONLINE DOCUMENTATION FOR EXAMPLES
from swgpy.object import *
def create(kernel):
result = Tangible()
result.template = "object/tangible/component/vehicle/shared_fuel_a.iff"
result.attribute_template_id = -1
result.stfName("craft_item_ingredients_n","fuel_a")
#### BEGIN MODIFICATIONS ####
#### END MODIFICATIONS ####
return result | 27 | 72 | 0.732026 |
961eaf63f2d03e4af09e62ace49966d3ab113229 | 6,216 | py | Python | Max2SAT_quantum/adiabatic/calculate_time_to_99_doubling.py | puyamirkarimi/quantum-walks | eb41146cc22e32b2f4d5a6119cc892f45062764c | [
"MIT"
] | 4 | 2020-02-11T16:55:39.000Z | 2021-05-01T07:50:43.000Z | Max2SAT_quantum/adiabatic/calculate_time_to_99_doubling.py | puyamirkarimi/quantum-walks | eb41146cc22e32b2f4d5a6119cc892f45062764c | [
"MIT"
] | 1 | 2020-02-11T17:03:54.000Z | 2020-02-16T09:47:47.000Z | Max2SAT_quantum/adiabatic/calculate_time_to_99_doubling.py | puyamirkarimi/quantum-walks | eb41146cc22e32b2f4d5a6119cc892f45062764c | [
"MIT"
] | 1 | 2020-03-11T12:00:12.000Z | 2020-03-11T12:00:12.000Z | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from scipy.linalg import expm
from scipy import sparse
from scipy.sparse.linalg import expm_multiply
from scipy.sparse.linalg import eigsh
from scipy.special import comb
import time
def get_2sat_formula(instance_name):
out = np.loadtxt("../../../instances_original/" + instance_name + ".m2s")
return out.astype(int)
def get_instances():
"""returns array of instance names, array of corresponding n"""
instance_data = np.genfromtxt('../m2s_nqubits.csv', delimiter=',', skip_header=1, dtype=str)
return instance_data[:, 0], instance_data[:, 1].astype(int)
def hypercube(n_dim):
sigma_x = np.array([[0, 1],
[1, 0]])
    A = sigma_i(sigma_x, 0, n_dim)
for i in range(1, n_dim):
A += sigma_i(sigma_x, i, n_dim)
return -1 * A
def sigma_i(sigma, i, n_dim):
n = n_dim -1 # because of i starting from 0 rather than 1
if i > 0:
out = np.eye(2)
for j_before in range(i - 1):
out = np.kron(out, np.eye(2))
out = np.kron(out, sigma)
else:
out = sigma
for j_after in range(n - i):
out = np.kron(out, np.eye(2))
return out
def hamiltonian_2sat(n, formula):
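    # Build the MAX-2-SAT cost Hamiltonian: each clause contributes
    # (1/4) * (s1*s2*Z_v1*Z_v2 + s1*Z_v1 + s2*Z_v2 + I), i.e. a projector onto
    # the assignments violating that clause (the sign flip below sets s1, s2).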
N = 2 ** n
out = np.zeros((N, N))
sigma_z = np.array([[1, 0],
[0, -1]])
sigma_identity = np.eye(N)
sigma_z_i = np.zeros((n, N, N))
for i in range(n):
sigma_z_i[i] = sigma_i(sigma_z, i, n)
for clause in formula:
v_1 = clause[1]
v_2 = clause[3]
sign_1 = -1 * clause[0] # -1 because signs should be opposite in Hamiltonian
sign_2 = -1 * clause[2]
out += (1/4) * (sign_1*sign_2*sigma_z_i[v_1]*sigma_z_i[v_2]
+ sign_1*sigma_z_i[v_1] + sign_2*sigma_z_i[v_2] + sigma_identity)
return out
def adiabatic(n, T, M, H_driver, H_problem, ground_state_prob, normalise=True, sprs=True):
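    # Simulate a linear-schedule sweep H(t) = (1 - t/T)*H_driver + (t/T)*H_problem,
    # sliced into M steps of exp(-1j*(T/M)*H(t)), starting from the uniform
    # superposition; returns the final squared overlap with the problem ground state.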
N = 2**n
psiN = np.ones(N) * (1 / np.sqrt(N))
H = H_driver
for i in range(1, M + 1):
t = i * (T / M)
H = hamiltonian(t, T, H_driver, H_problem)
if sprs:
A = -1j * (T / M) * H
psiN = expm_multiply(A, psiN)
else:
U = expm(-1j * (T / M) * H)
psiN = np.dot(U, psiN)
return np.abs(np.dot(np.conjugate(ground_state_prob), psiN)) ** 2
def hamiltonian(t, T, H_driver, H_problem):
return (1 - t/T)*H_driver + (t/T)*H_problem
def first_eigv(A, sprs=True):
if sprs:
return eigsh(A, k=1, which='SM')[1][:, 0]
else:
return np.linalg.eigh(A)[1][:, 0]
def optimal_gamma(n):
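    # gamma = (1/2) * (1/N) * sum_{r=1..n} C(n, r) / r with N = 2**n -- the
    # hopping rate usually quoted as optimal for continuous-time quantum-walk
    # search on the n-dimensional hypercube.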
N = 2 ** n
gam = 0
for r in range(1, n+1):
gam += comb(n, r) * (1/r)
gam = (1/2) * (1/N) * gam
return gam
def driver_hamiltonian(n, gamma):
A = hypercube(n)
return (A + n * np.eye(2 ** n))/2 # plus or minus??? keep the half?
def heuristic_gamma(n):
out = "haven't defined heuristic gamma for given n"
if n == 5:
out = 0.56503
if n == 6:
out = 0.587375
if n == 7:
out = 0.5984357142857143
if n == 8:
out = 0.60751875
if n == 9:
out = 0.6139833333333333
if n == 10:
out = 0.619345
if n == 11:
out = 0.6220136363636364
print("heuristic gamma: ", out)
return out
if __name__ == '__main__':
time_start = time.time()
M = 50 # number of slices
max_T = 512
instance_names, instance_n_bits = get_instances()
n = 6
sprs = False
gamma = heuristic_gamma(n) # hopping rate
n_shifted = n - 5 # n_shifted runs from 0 to 15 instead of 5 to 20
if sprs:
H_driver = sparse.csc_matrix(driver_hamiltonian(n, gamma))
else:
H_driver = driver_hamiltonian(n, gamma)
ground_state_calculated = False
ground_state_prob = None
times_array = np.zeros(10000)
for loop, i in enumerate(range(n_shifted * 10000, (n_shifted + 1) * 10000)): # 10000 instances per value of n
abandon = False
success_prob = 0
t_finish_old = 1
t_finish = 1
success_prob_old = 0
T = 0
instance_name = instance_names[i]
sat_formula = get_2sat_formula(instance_name)
if sprs:
H_problem = sparse.csc_matrix(hamiltonian_2sat(n, sat_formula))
else:
H_problem = hamiltonian_2sat(n, sat_formula)
if not ground_state_calculated:
ground_state_prob = first_eigv(H_problem, sprs=sprs)
ground_state_calculated = True
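        # Search for the smallest annealing time reaching 99% success: keep
        # doubling T until the success probability passes 0.99 (or max_T is
        # hit), then linearly interpolate a guess between the last two probed
        # times and scan integer T values around it.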
while success_prob < 0.99 and not abandon:
success_prob_old = success_prob
t_finish_old = t_finish
t_finish *= 2
if t_finish > max_T:
abandon = True
break
success_prob = adiabatic(n, t_finish, M, H_driver, H_problem, ground_state_prob, sprs=sprs)
if not abandon:
m = (success_prob - success_prob_old)/(t_finish - t_finish_old)
guess_t = t_finish_old + int(np.floor((0.99 - success_prob_old)/m))
success_prob = adiabatic(n, guess_t, M, H_driver, H_problem, ground_state_prob, sprs=sprs)
if success_prob < 0.99:
for T in range(guess_t, t_finish+1):
success_prob = adiabatic(n, T, M, H_driver, H_problem, ground_state_prob, sprs=sprs)
if success_prob >= 0.99:
break
else:
for T in range(guess_t, t_finish_old, -1):
success_prob = adiabatic(n, T, M, H_driver, H_problem, ground_state_prob, sprs=sprs)
if success_prob <= 0.99:
T += 1
break
# print(loop, success_prob, T)
if not abandon:
times_array[loop] = T
else:
times_array[loop] = -1
if loop % 10 == 0:
print("Instance:", loop)
time_end = time.time()
print("runtime:", time_end - time_start)
with open("adiabatic_time_n_"+str(n)+".txt", "ab") as f: # saves runtimes using time.time()
np.savetxt(f, times_array)
| 29.884615 | 114 | 0.564189 |
399d8edea2c6b954d07c7896150c2b40b01e8dfe | 583 | py | Python | ex.py | turutle/labLP-6 | 7bbf67624514b501dfeb390f2409f86290e6d3a8 | [
"MIT"
] | null | null | null | ex.py | turutle/labLP-6 | 7bbf67624514b501dfeb390f2409f86290e6d3a8 | [
"MIT"
] | null | null | null | ex.py | turutle/labLP-6 | 7bbf67624514b501dfeb390f2409f86290e6d3a8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
def benchmark(func):
import time
def wrapper(*args, **kwargs):
start = time.time()
return_value = func(*args, **kwargs)
end = time.time()
        print('[*] Execution time: {} seconds.'.format(end-start))
return return_value
return wrapper
@benchmark
def fetch_webpage(url):
import requests
webpage = requests.get(url)
return webpage.text
if __name__ == "__main__":
webpage = fetch_webpage('https://google.com')
print(webpage) | 18.21875 | 67 | 0.578045 |
184908c358ac1ce8e8460192d52049642e467370 | 59,495 | py | Python | pyscf/pbc/cc/kccsd_uhf.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 1 | 2021-01-24T13:35:42.000Z | 2021-01-24T13:35:42.000Z | pyscf/pbc/cc/kccsd_uhf.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 36 | 2018-08-22T19:44:03.000Z | 2020-05-09T10:02:36.000Z | pyscf/pbc/cc/kccsd_uhf.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 4 | 2018-02-14T16:28:28.000Z | 2019-08-12T16:40:30.000Z | #!/usr/bin/env python
# Copyright 2014-2020 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors: James D. McClain
# Mario Motta
# Yang Gao
# Qiming Sun <osirpt.sun@gmail.com>
# Jason Yu
# Alec White
#
import time
from functools import reduce
import numpy as np
import h5py
from pyscf import lib
from pyscf.lib import logger
from pyscf.pbc import scf
from pyscf.cc import uccsd
from pyscf.pbc.lib import kpts_helper
from pyscf.pbc.lib.kpts_helper import gamma_point
from pyscf.lib.parameters import LOOSE_ZERO_TOL, LARGE_DENOM # noqa
from pyscf.pbc.mp.kump2 import (get_frozen_mask, get_nocc, get_nmo,
padded_mo_coeff, padding_k_idx) # noqa
from pyscf.pbc.cc import kintermediates_uhf
from pyscf import __config__
einsum = lib.einsum
# --- list2array
def mo_c_list_to_array(mo_coeff):
mo_coeff_tmp=[]
for js in range(2):
tmp_nk = len(mo_coeff[js])
tmp_nb = mo_coeff[js][0].shape[0]
tmp_array = np.zeros((tmp_nk,tmp_nb,tmp_nb),dtype=complex)
for ik in range(tmp_nk):
tmp_array[ik,:,:]=mo_coeff[js][ik][:,:]
mo_coeff_tmp.append(tmp_array)
return mo_coeff_tmp
def convert_mo_coeff(mo_coeff):
if isinstance(mo_coeff[0], list):
mo_coeff=mo_c_list_to_array(mo_coeff)
return mo_coeff
def update_amps(cc, t1, t2, eris):
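    # One iteration of the k-point UCCSD amplitude equations: build the
    # Fock and W intermediates from the current (t1, t2) and return the
    # updated amplitudes (Ht1a, Ht1b), (Ht2aa, Ht2ab, Ht2bb).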
time0 = time.clock(), time.time()
log = logger.Logger(cc.stdout, cc.verbose)
t1a, t1b = t1
t2aa, t2ab, t2bb = t2
Ht1a = np.zeros_like(t1a)
Ht1b = np.zeros_like(t1b)
Ht2aa = np.zeros_like(t2aa)
Ht2ab = np.zeros_like(t2ab)
Ht2bb = np.zeros_like(t2bb)
nkpts, nocca, nvira = t1a.shape
noccb, nvirb = t1b.shape[1:]
#fvv_ = eris.fock[0][:,nocca:,nocca:]
#fVV_ = eris.fock[1][:,noccb:,noccb:]
#foo_ = eris.fock[0][:,:nocca,:nocca]
#fOO_ = eris.fock[1][:,:noccb,:noccb]
fov_ = eris.fock[0][:,:nocca,nocca:]
fOV_ = eris.fock[1][:,:noccb,noccb:]
# Get location of padded elements in occupied and virtual space
nonzero_padding_alpha, nonzero_padding_beta = padding_k_idx(cc, kind="split")
nonzero_opadding_alpha, nonzero_vpadding_alpha = nonzero_padding_alpha
nonzero_opadding_beta, nonzero_vpadding_beta = nonzero_padding_beta
mo_ea_o = [e[:nocca] for e in eris.mo_energy[0]]
mo_eb_o = [e[:noccb] for e in eris.mo_energy[1]]
mo_ea_v = [e[nocca:] + cc.level_shift for e in eris.mo_energy[0]]
mo_eb_v = [e[noccb:] + cc.level_shift for e in eris.mo_energy[1]]
Fvv_, FVV_ = kintermediates_uhf.cc_Fvv(cc, t1, t2, eris)
Foo_, FOO_ = kintermediates_uhf.cc_Foo(cc, t1, t2, eris)
Fov_, FOV_ = kintermediates_uhf.cc_Fov(cc, t1, t2, eris)
# Move energy terms to the other side
for k in range(nkpts):
Fvv_[k][np.diag_indices(nvira)] -= mo_ea_v[k]
FVV_[k][np.diag_indices(nvirb)] -= mo_eb_v[k]
Foo_[k][np.diag_indices(nocca)] -= mo_ea_o[k]
FOO_[k][np.diag_indices(noccb)] -= mo_eb_o[k]
# Get the momentum conservation array
kconserv = cc.khelper.kconserv
# T1 equation
P = kintermediates_uhf.kconserv_mat(cc.nkpts, cc.khelper.kconserv)
Ht1a += fov_.conj()
Ht1b += fOV_.conj()
Ht1a += einsum('xyximae,yme->xia', t2aa, Fov_)
Ht1a += einsum('xyximae,yme->xia', t2ab, FOV_)
Ht1b += einsum('xyximae,yme->xia', t2bb, FOV_)
Ht1b += einsum('yxymiea,yme->xia', t2ab, Fov_)
Ht1a -= einsum('xyzmnae, xzymine->zia', t2aa, eris.ooov)
Ht1a -= einsum('xyzmNaE, xzymiNE->zia', t2ab, eris.ooOV)
#Ht1a -= einsum('xyzmnae,xzymine,xyzw->zia', t2aa, eris.ooov, P)
#Ht1a -= einsum('xyzmNaE,xzymiNE,xyzw->zia', t2ab, eris.ooOV, P)
Ht1b -= einsum('xyzmnae, xzymine->zia', t2bb, eris.OOOV)
#Ht1b -= einsum('xyzmnae,xzymine,xyzw->zia', t2bb, eris.OOOV, P)
Ht1b -= einsum('yxwnmea,xzymine,xyzw->zia', t2ab, eris.OOov, P)
for ka in range(nkpts):
Ht1a[ka] += einsum('ie,ae->ia', t1a[ka], Fvv_[ka])
Ht1b[ka] += einsum('ie,ae->ia', t1b[ka], FVV_[ka])
Ht1a[ka] -= einsum('ma,mi->ia', t1a[ka], Foo_[ka])
Ht1b[ka] -= einsum('ma,mi->ia', t1b[ka], FOO_[ka])
for km in range(nkpts):
            # ka == ki; kf == km
# <ma||if> = [mi|af] - [mf|ai]
# => [mi|af] - [fm|ia]
Ht1a[ka] += einsum('mf,aimf->ia', t1a[km], eris.voov[ka, ka, km])
Ht1a[ka] -= einsum('mf,miaf->ia', t1a[km], eris.oovv[km, ka, ka])
Ht1a[ka] += einsum('MF,aiMF->ia', t1b[km], eris.voOV[ka, ka, km])
# miaf - mfai => miaf - fmia
Ht1b[ka] += einsum('MF,AIMF->IA', t1b[km], eris.VOOV[ka, ka, km])
Ht1b[ka] -= einsum('MF,MIAF->IA', t1b[km], eris.OOVV[km, ka, ka])
Ht1b[ka] += einsum('mf,fmIA->IA', t1a[km], eris.voOV[km, km, ka].conj())
for kf in range(nkpts):
ki = ka
ke = kconserv[ki, kf, km]
Ht1a[ka] += einsum('imef,fmea->ia', t2aa[ki,km,ke], eris.vovv[kf,km,ke].conj())
Ht1a[ka] += einsum('iMeF,FMea->ia', t2ab[ki,km,ke], eris.VOvv[kf,km,ke].conj())
Ht1b[ka] += einsum('IMEF,FMEA->IA', t2bb[ki,km,ke], eris.VOVV[kf,km,ke].conj())
Ht1b[ka] += einsum('mIfE,fmEA->IA', t2ab[km,ki,kf], eris.voVV[kf,km,ke].conj())
for ki, kj, ka in kpts_helper.loop_kkk(nkpts):
kb = kconserv[ki, ka, kj]
# Fvv equation
Ftmpa_kb = Fvv_[kb] - 0.5 * einsum('mb,me->be', t1a[kb], Fov_[kb])
Ftmpb_kb = FVV_[kb] - 0.5 * einsum('MB,ME->BE', t1b[kb], FOV_[kb])
Ftmpa_ka = Fvv_[ka] - 0.5 * einsum('mb,me->be', t1a[ka], Fov_[ka])
Ftmpb_ka = FVV_[ka] - 0.5 * einsum('MB,ME->BE', t1b[ka], FOV_[ka])
tmp = einsum('ijae,be->ijab', t2aa[ki, kj, ka], Ftmpa_kb)
Ht2aa[ki, kj, ka] += tmp
tmp = einsum('IJAE,BE->IJAB', t2bb[ki, kj, ka], Ftmpb_kb)
Ht2bb[ki, kj, ka] += tmp
tmp = einsum('iJaE,BE->iJaB', t2ab[ki, kj, ka], Ftmpb_kb)
Ht2ab[ki, kj, ka] += tmp
tmp = einsum('iJeB,ae->iJaB', t2ab[ki, kj, ka], Ftmpa_ka)
Ht2ab[ki, kj, ka] += tmp
#P(ab)
tmp = einsum('ijbe,ae->ijab', t2aa[ki, kj, kb], Ftmpa_ka)
Ht2aa[ki, kj, ka] -= tmp
tmp = einsum('IJBE,AE->IJAB', t2bb[ki, kj, kb], Ftmpb_ka)
Ht2bb[ki, kj, ka] -= tmp
# Foo equation
Ftmpa_kj = Foo_[kj] + 0.5 * einsum('je,me->mj', t1a[kj], Fov_[kj])
Ftmpb_kj = FOO_[kj] + 0.5 * einsum('JE,ME->MJ', t1b[kj], FOV_[kj])
Ftmpa_ki = Foo_[ki] + 0.5 * einsum('je,me->mj', t1a[ki], Fov_[ki])
Ftmpb_ki = FOO_[ki] + 0.5 * einsum('JE,ME->MJ', t1b[ki], FOV_[ki])
tmp = einsum('imab,mj->ijab', t2aa[ki, kj, ka], Ftmpa_kj)
Ht2aa[ki, kj, ka] -= tmp
tmp = einsum('IMAB,MJ->IJAB', t2bb[ki, kj, ka], Ftmpb_kj)
Ht2bb[ki, kj, ka] -= tmp
tmp = einsum('iMaB,MJ->iJaB', t2ab[ki, kj, ka], Ftmpb_kj)
Ht2ab[ki, kj, ka] -= tmp
tmp = einsum('mJaB,mi->iJaB', t2ab[ki, kj, ka], Ftmpa_ki)
Ht2ab[ki, kj, ka] -= tmp
#P(ij)
tmp = einsum('jmab,mi->ijab', t2aa[kj, ki, ka], Ftmpa_ki)
Ht2aa[ki, kj, ka] += tmp
tmp = einsum('JMAB,MI->IJAB', t2bb[kj, ki, ka], Ftmpb_ki)
Ht2bb[ki, kj, ka] += tmp
# T2 equation
eris_ovov = np.asarray(eris.ovov)
eris_OVOV = np.asarray(eris.OVOV)
eris_ovOV = np.asarray(eris.ovOV)
Ht2aa += (eris_ovov.transpose(0,2,1,3,5,4,6) - eris_ovov.transpose(2,0,1,5,3,4,6)).conj()
Ht2bb += (eris_OVOV.transpose(0,2,1,3,5,4,6) - eris_OVOV.transpose(2,0,1,5,3,4,6)).conj()
Ht2ab += eris_ovOV.transpose(0,2,1,3,5,4,6).conj()
tauaa, tauab, taubb = kintermediates_uhf.make_tau(cc, t2, t1, t1)
Woooo, WooOO, WOOOO = kintermediates_uhf.cc_Woooo(cc, t1, t2, eris)
# Add the tau contributions to Woooo (the Wvvvv contributions are added in add_vvvv_ below)
for km, ki, kn in kpts_helper.loop_kkk(nkpts):
kj = kconserv[km,ki,kn]
Woooo[km,ki,kn] += .5 * einsum('xmenf, xijef->minj', eris_ovov[km,:,kn], tauaa[ki,kj])
WOOOO[km,ki,kn] += .5 * einsum('xMENF, xIJEF->MINJ', eris_OVOV[km,:,kn], taubb[ki,kj])
WooOO[km,ki,kn] += .5 * einsum('xmeNF, xiJeF->miNJ', eris_ovOV[km,:,kn], tauab[ki,kj])
for km, ki, kn in kpts_helper.loop_kkk(nkpts):
kj = kconserv[km,ki,kn]
Ht2aa[ki,kj,:] += einsum('minj,wmnab->wijab', Woooo[km,ki,kn], tauaa[km,kn]) * .5
Ht2bb[ki,kj,:] += einsum('MINJ,wMNAB->wIJAB', WOOOO[km,ki,kn], taubb[km,kn]) * .5
Ht2ab[ki,kj,:] += einsum('miNJ,wmNaB->wiJaB', WooOO[km,ki,kn], tauab[km,kn])
add_vvvv_(cc, (Ht2aa, Ht2ab, Ht2bb), t1, t2, eris)
Wovvo, WovVO, WOVvo, WOVVO, WoVVo, WOvvO = \
kintermediates_uhf.cc_Wovvo(cc, t1, t2, eris)
#:Ht2ab += einsum('xwzimae,wvumeBJ,xwzv,wuvy->xyziJaB', t2aa, WovVO, P, P)
#:Ht2ab += einsum('xwziMaE,wvuMEBJ,xwzv,wuvy->xyziJaB', t2ab, WOVVO, P, P)
#:Ht2ab -= einsum('xie,zma,uwzBJme,zuwx,xyzu->xyziJaB', t1a, t1a, eris.VOov, P, P)
for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
kv = kconserv[kx, kz, kw]
for ku in range(nkpts):
ky = kconserv[kw, kv, ku]
Ht2ab[kx, ky, kz] += lib.einsum('imae,mebj->ijab', t2aa[kx,kw,kz], WovVO[kw,kv,ku])
Ht2ab[kx, ky, kz] += lib.einsum('imae,mebj->ijab', t2ab[kx,kw,kz], WOVVO[kw,kv,ku])
#for kz, ku, kw in kpts_helper.loop_kkk(nkpts):
# kx = kconserv[kz,kw,ku]
# ky = kconserv[kz,kx,ku]
# continue
# Ht2ab[kx, ky, kz] -= lib.einsum('ie, ma, emjb->ijab', t1a[kx], t1a[kz], eris.voOV[kx,kz,kw].conj())
Ht2ab -= einsum('xie, yma, xyzemjb->xzyijab', t1a, t1a, eris.voOV[:].conj())
#:Ht2ab += einsum('wxvmIeA,wvumebj,xwzv,wuvy->yxujIbA', t2ab, Wovvo, P, P)
#:Ht2ab += einsum('wxvMIEA,wvuMEbj,xwzv,wuvy->yxujIbA', t2bb, WOVvo, P, P)
#:Ht2ab -= einsum('xIE,zMA,uwzbjME,zuwx,xyzu->yxujIbA', t1b, t1b, eris.voOV, P, P)
#for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
# kv = kconserv[kx, kz, kw]
# for ku in range(nkpts):
# ky = kconserv[kw, kv, ku]
#Ht2ab[ky,kx,ku] += lib.einsum('miea, mebj-> jiba', t2ab[kw,kx,kv], Wovvo[kw,kv,ku])
#Ht2ab[ky,kx,ku] += lib.einsum('miea, mebj-> jiba', t2bb[kw,kx,kv], WOVvo[kw,kv,ku])
for km, ke, kb in kpts_helper.loop_kkk(nkpts):
kj = kconserv[km, ke, kb]
Ht2ab[kj,:,kb] += einsum('xmiea, mebj->xjiba', t2ab[km,:,ke], Wovvo[km,ke,kb])
Ht2ab[kj,:,kb] += einsum('xmiea, mebj->xjiba', t2bb[km,:,ke], WOVvo[km,ke,kb])
for kz, ku, kw in kpts_helper.loop_kkk(nkpts):
kx = kconserv[kz, kw, ku]
ky = kconserv[kz, kx, ku]
Ht2ab[ky,kx,ku] -= lib.einsum('ie, ma, bjme->jiba', t1b[kx], t1b[kz], eris.voOV[ku,kw,kz])
#:Ht2ab += einsum('xwviMeA,wvuMebJ,xwzv,wuvy->xyuiJbA', t2ab, WOvvO, P, P)
#:Ht2ab -= einsum('xie,zMA,zwuMJbe,zuwx,xyzu->xyuiJbA', t1a, t1b, eris.OOvv, P, P)
#for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
# kv = kconserv[kx, kz, kw]
# for ku in range(nkpts):
# ky = kconserv[kw, kv, ku]
# Ht2ab[kx,ky,ku] += lib.einsum('imea,mebj->ijba', t2ab[kx,kw,kv],WOvvO[kw,kv,ku])
for km, ke, kb in kpts_helper.loop_kkk(nkpts):
kj = kconserv[km, ke, kb]
Ht2ab[:,kj,kb] += einsum('ximea, mebj->xijba', t2ab[:,km,ke], WOvvO[km,ke,kb])
for kz,ku,kw in kpts_helper.loop_kkk(nkpts):
kx = kconserv[kz, kw, ku]
ky = kconserv[kz, kx, ku]
Ht2ab[kx,ky,ku] -= lib.einsum('ie, ma, mjbe->ijba', t1a[kx], t1b[kz], eris.OOvv[kz, kw, ku])
#:Ht2ab += einsum('wxzmIaE,wvumEBj,xwzv,wuvy->yxzjIaB', t2ab, WoVVo, P, P)
#:Ht2ab -= einsum('xIE,zma,zwumjBE,zuwx,xyzu->yxzjIaB', t1b, t1a, eris.ooVV, P, P)
for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
kv = kconserv[kx, kz, kw]
for ku in range(nkpts):
ky = kconserv[kw, kv, ku]
Ht2ab[ky, kx, kz] += lib.einsum('miae,mebj->jiab', t2ab[kw,kx,kz], WoVVo[kw,kv,ku])
for kz, ku, kw in kpts_helper.loop_kkk(nkpts):
kx = kconserv[kz,kw,ku]
ky = kconserv[kz,kx,ku]
Ht2ab[ky,kx,kz] -= lib.einsum('ie, ma, mjbe->jiab', t1b[kx], t1a[kz], eris.ooVV[kz,kw,ku])
#:u2aa = einsum('xwzimae,wvumebj,xwzv,wuvy->xyzijab', t2aa, Wovvo, P, P)
#:u2aa += einsum('xwziMaE,wvuMEbj,xwzv,wuvy->xyzijab', t2ab, WOVvo, P, P)
# Left this in to keep the proper shape; needs to be replaced later
u2aa = np.zeros_like(t2aa)
for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
kv = kconserv[kx, kz, kw]
for ku in range(nkpts):
ky = kconserv[kw, kv, ku]
u2aa[kx,ky,kz] += lib.einsum('imae, mebj->ijab', t2aa[kx,kw,kz], Wovvo[kw,kv,ku])
u2aa[kx,ky,kz] += lib.einsum('imae, mebj->ijab', t2ab[kx,kw,kz], WOVvo[kw,kv,ku])
#:u2aa += einsum('xie,zma,zwumjbe,zuwx,xyzu->xyzijab', t1a, t1a, eris.oovv, P, P)
#:u2aa -= einsum('xie,zma,uwzbjme,zuwx,xyzu->xyzijab', t1a, t1a, eris.voov, P, P)
for kz, ku, kw in kpts_helper.loop_kkk(nkpts):
kx = kconserv[kz,kw,ku]
ky = kconserv[kz,kx,ku]
u2aa[kx,ky,kz] += lib.einsum('ie,ma,mjbe->ijab',t1a[kx],t1a[kz],eris.oovv[kz,kw,ku])
u2aa[kx,ky,kz] -= lib.einsum('ie,ma,bjme->ijab',t1a[kx],t1a[kz],eris.voov[ku,kw,kz])
#:u2aa += np.einsum('xie,uyzbjae,uzyx->xyzijab', t1a, eris.vovv, P)
#:u2aa -= np.einsum('zma,xzyimjb->xyzijab', t1a, eris.ooov.conj())
for ky, kx, ku in kpts_helper.loop_kkk(nkpts):
kz = kconserv[ky, ku, kx]
u2aa[kx, ky, kz] += lib.einsum('ie, bjae->ijab', t1a[kx], eris.vovv[ku,ky,kz])
u2aa[kx, ky, kz] -= lib.einsum('ma, imjb->ijab', t1a[kz], eris.ooov[kx,kz,ky].conj())
u2aa = u2aa - u2aa.transpose(1,0,2,4,3,5,6)
u2aa = u2aa - einsum('xyzijab,xyzu->xyuijba', u2aa, P)
Ht2aa += u2aa
#:u2bb = einsum('xwzimae,wvumebj,xwzv,wuvy->xyzijab', t2bb, WOVVO, P, P)
#:u2bb += einsum('wxvMiEa,wvuMEbj,xwzv,wuvy->xyzijab', t2ab, WovVO, P, P)
#:u2bb += einsum('xie,zma,zwumjbe,zuwx,xyzu->xyzijab', t1b, t1b, eris.OOVV, P, P)
#:u2bb -= einsum('xie,zma,uwzbjme,zuwx,xyzu->xyzijab', t1b, t1b, eris.VOOV, P, P)
u2bb = np.zeros_like(t2bb)
for kx, kw, kz in kpts_helper.loop_kkk(nkpts):
kv = kconserv[kx, kz, kw]
for ku in range(nkpts):
ky = kconserv[kw,kv, ku]
u2bb[kx, ky, kz] += lib.einsum('imae,mebj->ijab', t2bb[kx,kw,kz], WOVVO[kw,kv,ku])
u2bb[kx, ky, kz] += lib.einsum('miea, mebj-> ijab', t2ab[kw,kx,kv],WovVO[kw,kv,ku])
for kz, ku, kw in kpts_helper.loop_kkk(nkpts):
kx = kconserv[kz, kw, ku]
ky = kconserv[kz, kx, ku]
u2bb[kx, ky, kz] += lib.einsum('ie, ma, mjbe->ijab',t1b[kx],t1b[kz],eris.OOVV[kz,kw,ku])
u2bb[kx, ky, kz] -= lib.einsum('ie, ma, bjme->ijab', t1b[kx], t1b[kz],eris.VOOV[ku,kw,kz])
#:u2bb += np.einsum('xie,uzybjae,uzyx->xyzijab', t1b, eris.VOVV, P)
#:u2bb -= np.einsum('zma,xzyimjb->xyzijab', t1b, eris.OOOV.conj())
for ky, kx, ku in kpts_helper.loop_kkk(nkpts):
kz = kconserv[ky, ku, kx]
u2bb[kx,ky,kz] += lib.einsum('ie,bjae->ijab', t1b[kx], eris.VOVV[ku,ky,kz])
#for kx, kz, ky in kpts_helper.loop_kkk(nkpts):
# u2bb[kx,ky,kz] -= lib.einsum('ma, imjb-> ijab', t1b[kz], eris.OOOV[kx,kz,ky].conj())
u2bb -= einsum('zma, xzyimjb->xyzijab', t1b, eris.OOOV[:].conj())
u2bb = u2bb - u2bb.transpose(1,0,2,4,3,5,6)
u2bb = u2bb - einsum('xyzijab,xyzu->xyuijba', u2bb, P)
Ht2bb += u2bb
#:Ht2ab += np.einsum('xie,uyzBJae,uzyx->xyziJaB', t1a, eris.VOvv, P)
#:Ht2ab += np.einsum('yJE,zxuaiBE,zuxy->xyziJaB', t1b, eris.voVV, P)
#:Ht2ab -= np.einsum('zma,xzyimjb->xyzijab', t1a, eris.ooOV.conj())
#:Ht2ab -= np.einsum('umb,yuxjmia,xyuz->xyzijab', t1b, eris.OOov.conj(), P)
for ky, kx, ku in kpts_helper.loop_kkk(nkpts):
kz = kconserv[ky,ku,kx]
Ht2ab[kx,ky,kz] += lib.einsum('ie, bjae-> ijab', t1a[kx], eris.VOvv[ku,ky,kz])
Ht2ab[kx,ky,kz] += lib.einsum('je, aibe-> ijab', t1b[ky], eris.voVV[kz,kx,ku])
#for kx, kz, ky in kpts_helper.loop_kkk(nkpts):
# Ht2ab[kx,ky,kz] -= lib.einsum('ma, imjb->ijab', t1a[kz], eris.ooOV[kx,kz,ky].conj())
Ht2ab -= einsum('zma, xzyimjb->xyzijab', t1a, eris.ooOV[:].conj())
for kx, ky, ku in kpts_helper.loop_kkk(nkpts):
kz = kconserv[kx, ku, ky]
Ht2ab[kx,ky,kz] -= lib.einsum('mb,jmia->ijab',t1b[ku],eris.OOov[ky,ku,kx].conj())
eia = []
eIA = []
for ki in range(nkpts):
tmp_alpha = []
tmp_beta = []
for ka in range(nkpts):
tmp_eia = LARGE_DENOM * np.ones((nocca, nvira), dtype=eris.mo_energy[0][0].dtype)
tmp_eIA = LARGE_DENOM * np.ones((noccb, nvirb), dtype=eris.mo_energy[0][0].dtype)
n0_ovp_ia = np.ix_(nonzero_opadding_alpha[ki], nonzero_vpadding_alpha[ka])
n0_ovp_IA = np.ix_(nonzero_opadding_beta[ki], nonzero_vpadding_beta[ka])
tmp_eia[n0_ovp_ia] = (mo_ea_o[ki][:,None] - mo_ea_v[ka])[n0_ovp_ia]
tmp_eIA[n0_ovp_IA] = (mo_eb_o[ki][:,None] - mo_eb_v[ka])[n0_ovp_IA]
tmp_alpha.append(tmp_eia)
tmp_beta.append(tmp_eIA)
eia.append(tmp_alpha)
eIA.append(tmp_beta)
for ki in range(nkpts):
ka = ki
# Remove zero/padded elements from denominator
Ht1a[ki] /= eia[ki][ka]
Ht1b[ki] /= eIA[ki][ka]
for ki, kj, ka in kpts_helper.loop_kkk(nkpts):
kb = kconserv[ki, ka, kj]
eijab = eia[ki][ka][:,None,:,None] + eia[kj][kb][:,None,:]
Ht2aa[ki,kj,ka] /= eijab
eijab = eia[ki][ka][:,None,:,None] + eIA[kj][kb][:,None,:]
Ht2ab[ki,kj,ka] /= eijab
eijab = eIA[ki][ka][:,None,:,None] + eIA[kj][kb][:,None,:]
Ht2bb[ki,kj,ka] /= eijab
time0 = log.timer_debug1('update t1 t2', *time0)
return (Ht1a, Ht1b), (Ht2aa, Ht2ab, Ht2bb)
def get_normt_diff(cc, t1, t2, t1new, t2new):
'''Calculates the combined norm of the amplitude changes: sqrt(|t1new - t1|**2 + |t2new - t2|**2).'''
return (np.linalg.norm(t1new[0] - t1[0])**2 +
np.linalg.norm(t1new[1] - t1[1])**2 +
np.linalg.norm(t2new[0] - t2[0])**2 +
np.linalg.norm(t2new[1] - t2[1])**2 +
np.linalg.norm(t2new[2] - t2[2])**2) ** .5
def energy(cc, t1, t2, eris):
t1a, t1b = t1
t2aa, t2ab, t2bb = t2
kka, noa, nva = t1a.shape
kkb, nob, nvb = t1b.shape
assert(kka == kkb)
nkpts = kka
s = 0.0 + 0j
fa, fb = eris.fock
for ki in range(nkpts):
s += einsum('ia,ia', fa[ki, :noa, noa:], t1a[ki, :, :])
s += einsum('ia,ia', fb[ki, :nob, nob:], t1b[ki, :, :])
t1t1aa = np.zeros(shape=t2aa.shape, dtype=t2aa.dtype)
t1t1ab = np.zeros(shape=t2ab.shape, dtype=t2ab.dtype)
t1t1bb = np.zeros(shape=t2bb.shape, dtype=t2bb.dtype)
for ki in range(nkpts):
ka = ki
for kj in range(nkpts):
t1t1aa[ki, kj, ka, :, :, :, :] = einsum('ia,jb->ijab', t1a[ki, :, :], t1a[kj, :, :])
t1t1ab[ki, kj, ka, :, :, :, :] = einsum('ia,jb->ijab', t1a[ki, :, :], t1b[kj, :, :])
t1t1bb[ki, kj, ka, :, :, :, :] = einsum('ia,jb->ijab', t1b[ki, :, :], t1b[kj, :, :])
tauaa = t2aa + 2*t1t1aa
tauab = t2ab + t1t1ab
taubb = t2bb + 2*t1t1bb
d = 0.0 + 0.j
d += 0.25*(einsum('xzyiajb,xyzijab->',eris.ovov,tauaa)
- einsum('yzxjaib,xyzijab->',eris.ovov,tauaa))
d += einsum('xzyiajb,xyzijab->',eris.ovOV,tauab)
d += 0.25*(einsum('xzyiajb,xyzijab->',eris.OVOV,taubb)
- einsum('yzxjaib,xyzijab->',eris.OVOV,taubb))
e = s + d
e /= nkpts
if abs(e.imag) > 1e-4:
logger.warn(cc, 'Non-zero imaginary part found in KCCSD energy %s', e)
return e.real
#def get_nocc(cc, per_kpoint=False):
# '''See also function get_nocc in pyscf/pbc/mp2/kmp2.py'''
# if cc._nocc is not None:
# return cc._nocc
#
# assert(cc.frozen == 0)
#
# if isinstance(cc.frozen, (int, np.integer)):
# nocca = [(np.count_nonzero(cc.mo_occ[0][k] > 0) - cc.frozen) for k in range(cc.nkpts)]
# noccb = [(np.count_nonzero(cc.mo_occ[1][k] > 0) - cc.frozen) for k in range(cc.nkpts)]
#
# else:
# raise NotImplementedError
#
# if not per_kpoint:
# nocca = np.amax(nocca)
# noccb = np.amax(noccb)
# return nocca, noccb
#
#def get_nmo(cc, per_kpoint=False):
# '''See also function get_nmo in pyscf/pbc/mp2/kmp2.py'''
# if cc._nmo is not None:
# return cc._nmo
#
# assert(cc.frozen == 0)
#
# if isinstance(cc.frozen, (int, np.integer)):
# nmoa = [(cc.mo_occ[0][k].size - cc.frozen) for k in range(cc.nkpts)]
# nmob = [(cc.mo_occ[1][k].size - cc.frozen) for k in range(cc.nkpts)]
#
# else:
# raise NotImplementedError
#
# if not per_kpoint:
# nmoa = np.amax(nmoa)
# nmob = np.amax(nmob)
# return nmoa, nmob
#
#def get_frozen_mask(cc):
# '''See also get_frozen_mask function in pyscf/pbc/mp2/kmp2.py'''
#
# moidxa = [np.ones(x.size, dtype=np.bool) for x in cc.mo_occ[0]]
# moidxb = [np.ones(x.size, dtype=np.bool) for x in cc.mo_occ[1]]
# assert(cc.frozen == 0)
#
# if isinstance(cc.frozen, (int, np.integer)):
# for idx in moidxa:
# idx[:cc.frozen] = False
# for idx in moidxb:
# idx[:cc.frozen] = False
# else:
# raise NotImplementedError
#
# return moidxa, moidxb
def amplitudes_to_vector(t1, t2):
return np.hstack((t1[0].ravel(), t1[1].ravel(),
t2[0].ravel(), t2[1].ravel(), t2[2].ravel()))
def vector_to_amplitudes(vec, nmo, nocc, nkpts=1):
nocca, noccb = nocc
nmoa, nmob = nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
sizes = (nkpts*nocca*nvira, nkpts*noccb*nvirb,
nkpts**3*nocca**2*nvira**2, nkpts**3*nocca*noccb*nvira*nvirb,
nkpts**3*noccb**2*nvirb**2)
sections = np.cumsum(sizes[:-1])
t1a, t1b, t2aa, t2ab, t2bb = np.split(vec, sections)
t1a = t1a.reshape(nkpts,nocca,nvira)
t1b = t1b.reshape(nkpts,noccb,nvirb)
t2aa = t2aa.reshape(nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira)
t2ab = t2ab.reshape(nkpts,nkpts,nkpts,nocca,noccb,nvira,nvirb)
t2bb = t2bb.reshape(nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb)
return (t1a,t1b), (t2aa,t2ab,t2bb)
def add_vvvv_(cc, Ht2, t1, t2, eris):
nocca, noccb = cc.nocc
nmoa, nmob = cc.nmo
nkpts = cc.nkpts
kconserv = cc.khelper.kconserv
t1a, t1b = t1
t2aa, t2ab, t2bb = t2
Ht2aa, Ht2ab, Ht2bb = Ht2
if cc.direct and getattr(eris, 'Lpv', None) is not None:
def get_Wvvvv(ka, kc, kb):
kd = kconserv[ka,kc,kb]
Lpv = eris.Lpv
LPV = eris.LPV
Lbd = (Lpv[kb,kd][:,nocca:] -
lib.einsum('Lkd,kb->Lbd', Lpv[kb,kd][:,:nocca], t1a[kb]))
Wvvvv = lib.einsum('Lac,Lbd->acbd', Lpv[ka,kc][:,nocca:], Lbd)
kcbd = lib.einsum('Lkc,Lbd->kcbd', Lpv[ka,kc][:,:nocca],
Lpv[kb,kd][:,nocca:])
Wvvvv -= lib.einsum('kcbd,ka->acbd', kcbd, t1a[ka])
LBD = (LPV[kb,kd][:,noccb:] -
lib.einsum('Lkd,kb->Lbd', LPV[kb,kd][:,:noccb], t1b[kb]))
WvvVV = lib.einsum('Lac,Lbd->acbd', Lpv[ka,kc][:,nocca:], LBD)
kcbd = lib.einsum('Lkc,Lbd->kcbd', Lpv[ka,kc][:,:nocca],
LPV[kb,kd][:,noccb:])
WvvVV -= lib.einsum('kcbd,ka->acbd', kcbd, t1a[ka])
WVVVV = lib.einsum('Lac,Lbd->acbd', LPV[ka,kc][:,noccb:], LBD)
kcbd = lib.einsum('Lkc,Lbd->kcbd', LPV[ka,kc][:,:noccb],
LPV[kb,kd][:,noccb:])
WVVVV -= lib.einsum('kcbd,ka->acbd', kcbd, t1b[ka])
Wvvvv *= (1./nkpts)
WvvVV *= (1./nkpts)
WVVVV *= (1./nkpts)
return Wvvvv, WvvVV, WVVVV
else:
_Wvvvv, _WvvVV, _WVVVV = kintermediates_uhf.cc_Wvvvv_half(cc, t1, t2, eris)
def get_Wvvvv(ka, kc, kb):
return _Wvvvv[ka,kc,kb], _WvvVV[ka,kc,kb], _WVVVV[ka,kc,kb]
#:Ht2aa += np.einsum('xyuijef,zuwaebf,xyuv,zwuv->xyzijab', tauaa, _Wvvvv-_Wvvvv.transpose(2,1,0,5,4,3,6), P, P) * .5
#:Ht2bb += np.einsum('xyuijef,zuwaebf,xyuv,zwuv->xyzijab', taubb, _WVVVV-_WVVVV.transpose(2,1,0,5,4,3,6), P, P) * .5
#:Ht2ab += np.einsum('xyuiJeF,zuwaeBF,xyuv,zwuv->xyziJaB', tauab, _WvvVV, P, P)
for ka, kb, kc in kpts_helper.loop_kkk(nkpts):
kd = kconserv[ka,kc,kb]
Wvvvv, WvvVV, WVVVV = get_Wvvvv(ka, kc, kb)
for ki in range(nkpts):
kj = kconserv[ka,ki,kb]
tauaa = t2aa[ki,kj,kc].copy()
tauab = t2ab[ki,kj,kc].copy()
taubb = t2bb[ki,kj,kc].copy()
if ki == kc and kj == kd:
tauaa += einsum('ic,jd->ijcd', t1a[ki], t1a[kj])
tauab += einsum('ic,jd->ijcd', t1a[ki], t1b[kj])
taubb += einsum('ic,jd->ijcd', t1b[ki], t1b[kj])
if ki == kd and kj == kc:
tauaa -= einsum('id,jc->ijcd', t1a[ki], t1a[kj])
taubb -= einsum('id,jc->ijcd', t1b[ki], t1b[kj])
tmp = lib.einsum('acbd,ijcd->ijab', Wvvvv, tauaa) * .5
Ht2aa[ki,kj,ka] += tmp
Ht2aa[ki,kj,kb] -= tmp.transpose(0,1,3,2)
tmp = lib.einsum('acbd,ijcd->ijab', WVVVV, taubb) * .5
Ht2bb[ki,kj,ka] += tmp
Ht2bb[ki,kj,kb] -= tmp.transpose(0,1,3,2)
Ht2ab[ki,kj,ka] += lib.einsum('acbd,ijcd->ijab', WvvVV, tauab)
Wvvvv = WvvVV = WVVVV = None
_Wvvvv = _WvvVV = _WVVVV = None
# Contractions below are merged to Woooo intermediates
# tauaa, tauab, taubb = kintermediates_uhf.make_tau(cc, t2, t1, t1)
# P = kintermediates_uhf.kconserv_mat(cc.nkpts, cc.khelper.kconserv)
# minj = np.einsum('xwymenf,uvwijef,xywz,uvwz->xuyminj', eris.ovov, tauaa, P, P)
# MINJ = np.einsum('xwymenf,uvwijef,xywz,uvwz->xuyminj', eris.OVOV, taubb, P, P)
# miNJ = np.einsum('xwymeNF,uvwiJeF,xywz,uvwz->xuymiNJ', eris.ovOV, tauab, P, P)
# Ht2aa += np.einsum('xuyminj,xywmnab,xyuv->uvwijab', minj, tauaa, P) * .25
# Ht2bb += np.einsum('xuyminj,xywmnab,xyuv->uvwijab', MINJ, taubb, P) * .25
# Ht2ab += np.einsum('xuymiNJ,xywmNaB,xyuv->uvwiJaB', miNJ, tauab, P) * .5
return (Ht2aa, Ht2ab, Ht2bb)
class KUCCSD(uccsd.UCCSD):
max_space = getattr(__config__, 'pbc_cc_kccsd_uhf_KUCCSD_max_space', 20)
def __init__(self, mf, frozen=None, mo_coeff=None, mo_occ=None):
assert(isinstance(mf, scf.khf.KSCF))
uccsd.UCCSD.__init__(self, mf, frozen, mo_coeff, mo_occ)
self.kpts = mf.kpts
self.mo_energy = mf.mo_energy
self.khelper = kpts_helper.KptsHelper(mf.cell, self.kpts)
self.direct = True # If possible, use GDF to compute Wvvvv on-the-fly
keys = set(['kpts', 'mo_energy', 'khelper', 'max_space', 'direct'])
self._keys = self._keys.union(keys)
@property
def nkpts(self):
return len(self.kpts)
get_normt_diff = get_normt_diff
get_nocc = get_nocc
get_nmo = get_nmo
get_frozen_mask = get_frozen_mask
update_amps = update_amps
energy = energy
def dump_flags(self, verbose=None):
return uccsd.UCCSD.dump_flags(self, verbose)
def ao2mo(self, mo_coeff=None):
from pyscf.pbc.df.df import GDF
cell = self._scf.cell
nkpts = self.nkpts
nmoa, nmob = self.nmo
mem_incore = nkpts**3 * (nmoa**4 + nmob**4) * 8 / 1e6
mem_now = lib.current_memory()[0]
if (mem_incore + mem_now < self.max_memory) or self.mol.incore_anyway:
return _make_eris_incore(self, mo_coeff)
elif (self.direct and type(self._scf.with_df) is GDF
and cell.dimension != 2):
# DFKCCSD does not support MDF
return _make_df_eris(self, mo_coeff)
else:
return _make_eris_outcore(self, mo_coeff)
def init_amps(self, eris):
time0 = time.clock(), time.time()
nocca, noccb = self.nocc
nmoa, nmob = self.nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
nkpts = self.nkpts
t1a = np.zeros((nkpts, nocca, nvira), dtype=np.complex128)
t1b = np.zeros((nkpts, noccb, nvirb), dtype=np.complex128)
t1 = (t1a, t1b)
t2aa = np.zeros((nkpts, nkpts, nkpts, nocca, nocca, nvira, nvira), dtype=np.complex128)
t2ab = np.zeros((nkpts, nkpts, nkpts, nocca, noccb, nvira, nvirb), dtype=np.complex128)
t2bb = np.zeros((nkpts, nkpts, nkpts, noccb, noccb, nvirb, nvirb), dtype=np.complex128)
mo_ea_o = [e[:nocca] for e in eris.mo_energy[0]]
mo_eb_o = [e[:noccb] for e in eris.mo_energy[1]]
mo_ea_v = [e[nocca:] for e in eris.mo_energy[0]]
mo_eb_v = [e[noccb:] for e in eris.mo_energy[1]]
# Get location of padded elements in occupied and virtual space
nonzero_padding_alpha, nonzero_padding_beta = padding_k_idx(self, kind="split")
nonzero_opadding_alpha, nonzero_vpadding_alpha = nonzero_padding_alpha
nonzero_opadding_beta, nonzero_vpadding_beta = nonzero_padding_beta
eia = []
eIA = []
# Create denominators, ignoring padded elements
for ki in range(nkpts):
tmp_alpha = []
tmp_beta = []
for ka in range(nkpts):
tmp_eia = LARGE_DENOM * np.ones((nocca, nvira), dtype=eris.mo_energy[0][0].dtype)
tmp_eIA = LARGE_DENOM * np.ones((noccb, nvirb), dtype=eris.mo_energy[0][0].dtype)
n0_ovp_ia = np.ix_(nonzero_opadding_alpha[ki], nonzero_vpadding_alpha[ka])
n0_ovp_IA = np.ix_(nonzero_opadding_beta[ki], nonzero_vpadding_beta[ka])
tmp_eia[n0_ovp_ia] = (mo_ea_o[ki][:,None] - mo_ea_v[ka])[n0_ovp_ia]
tmp_eIA[n0_ovp_IA] = (mo_eb_o[ki][:,None] - mo_eb_v[ka])[n0_ovp_IA]
tmp_alpha.append(tmp_eia)
tmp_beta.append(tmp_eIA)
eia.append(tmp_alpha)
eIA.append(tmp_beta)
kconserv = kpts_helper.get_kconserv(self._scf.cell, self.kpts)
for ki, kj, ka in kpts_helper.loop_kkk(nkpts):
kb = kconserv[ki, ka, kj]
Daa = eia[ki][ka][:,None,:,None] + eia[kj][kb][:,None,:]
Dab = eia[ki][ka][:,None,:,None] + eIA[kj][kb][:,None,:]
Dbb = eIA[ki][ka][:,None,:,None] + eIA[kj][kb][:,None,:]
t2aa[ki,kj,ka] = eris.ovov[ki,ka,kj].conj().transpose((0,2,1,3)) / Daa
t2aa[ki,kj,ka]-= eris.ovov[kj,ka,ki].conj().transpose((2,0,1,3)) / Daa
t2ab[ki,kj,ka] = eris.ovOV[ki,ka,kj].conj().transpose((0,2,1,3)) / Dab
t2bb[ki,kj,ka] = eris.OVOV[ki,ka,kj].conj().transpose((0,2,1,3)) / Dbb
t2bb[ki,kj,ka]-= eris.OVOV[kj,ka,ki].conj().transpose((2,0,1,3)) / Dbb
t2 = (t2aa,t2ab,t2bb)
d = 0.0 + 0.j
d += 0.25*(einsum('xzyiajb,xyzijab->',eris.ovov,t2aa)
- einsum('yzxjaib,xyzijab->',eris.ovov,t2aa))
d += einsum('xzyiajb,xyzijab->',eris.ovOV,t2ab)
d += 0.25*(einsum('xzyiajb,xyzijab->',eris.OVOV,t2bb)
- einsum('yzxjaib,xyzijab->',eris.OVOV,t2bb))
self.emp2 = d/nkpts
logger.info(self, 'Init t2, MP2 energy = %.15g', self.emp2.real)
logger.timer(self, 'init mp2', *time0)
return self.emp2, t1, t2
def amplitudes_to_vector(self, t1, t2):
return amplitudes_to_vector(t1, t2)
def vector_to_amplitudes(self, vec, nmo=None, nocc=None, nkpts=None):
if nocc is None: nocc = self.nocc
if nmo is None: nmo = self.nmo
if nkpts is None: nkpts = self.nkpts
return vector_to_amplitudes(vec, nmo, nocc, nkpts)
UCCSD = KUCCSD
#######################################
#
# _ERIS.
#
# Note the two electron integrals are stored in different orders from
# kccsd_rhf._ERIS. Integrals (ab|cd) are stored as [ka,kb,kc,a,b,c,d] here
# while the order is [ka,kc,kb,a,c,b,d] in kccsd_rhf._ERIS
#
# TODO: use the same convention as kccsd_rhf
#
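# A hedged illustration (not used anywhere in this module): converting a fully
# in-memory (ab|cd) block from the [ka,kb,kc,a,b,c,d] layout used here to the
# kccsd_rhf-style [ka,kc,kb,a,c,b,d] layout is a single axis permutation. The
# helper name below is an assumption for illustration only.
def _to_kccsd_rhf_order(eri_7d):
    '''Permute integrals stored as [ka,kb,kc,a,b,c,d] into [ka,kc,kb,a,c,b,d].

    Assumes a full numpy array (not an h5py dataset).
    '''
    return np.asarray(eri_7d).transpose(0, 2, 1, 3, 5, 4, 6)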
def _make_eris_incore(cc, mo_coeff=None):
eris = uccsd._ChemistsERIs()
if mo_coeff is None:
mo_coeff = cc.mo_coeff
mo_coeff = convert_mo_coeff(mo_coeff) # FIXME: Remove me!
mo_coeff = padded_mo_coeff(cc, mo_coeff)
eris.mo_coeff = mo_coeff
eris.nocc = cc.nocc
nkpts = cc.nkpts
nocca, noccb = cc.nocc
nmoa, nmob = cc.nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
if gamma_point(cc.kpts):
dtype = np.double
else:
dtype = np.complex128
dtype = np.result_type(dtype, *mo_coeff[0])
eris.oooo = np.empty((nkpts,nkpts,nkpts,nocca,nocca,nocca,nocca), dtype=dtype)
eris.ooov = np.empty((nkpts,nkpts,nkpts,nocca,nocca,nocca,nvira), dtype=dtype)
eris.oovv = np.empty((nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira), dtype=dtype)
eris.ovov = np.empty((nkpts,nkpts,nkpts,nocca,nvira,nocca,nvira), dtype=dtype)
eris.voov = np.empty((nkpts,nkpts,nkpts,nvira,nocca,nocca,nvira), dtype=dtype)
eris.vovv = np.empty((nkpts,nkpts,nkpts,nvira,nocca,nvira,nvira), dtype=dtype)
eris.OOOO = np.empty((nkpts,nkpts,nkpts,noccb,noccb,noccb,noccb), dtype=dtype)
eris.OOOV = np.empty((nkpts,nkpts,nkpts,noccb,noccb,noccb,nvirb), dtype=dtype)
eris.OOVV = np.empty((nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb), dtype=dtype)
eris.OVOV = np.empty((nkpts,nkpts,nkpts,noccb,nvirb,noccb,nvirb), dtype=dtype)
eris.VOOV = np.empty((nkpts,nkpts,nkpts,nvirb,noccb,noccb,nvirb), dtype=dtype)
eris.VOVV = np.empty((nkpts,nkpts,nkpts,nvirb,noccb,nvirb,nvirb), dtype=dtype)
eris.ooOO = np.empty((nkpts,nkpts,nkpts,nocca,nocca,noccb,noccb), dtype=dtype)
eris.ooOV = np.empty((nkpts,nkpts,nkpts,nocca,nocca,noccb,nvirb), dtype=dtype)
eris.ooVV = np.empty((nkpts,nkpts,nkpts,nocca,nocca,nvirb,nvirb), dtype=dtype)
eris.ovOV = np.empty((nkpts,nkpts,nkpts,nocca,nvira,noccb,nvirb), dtype=dtype)
eris.voOV = np.empty((nkpts,nkpts,nkpts,nvira,nocca,noccb,nvirb), dtype=dtype)
eris.voVV = np.empty((nkpts,nkpts,nkpts,nvira,nocca,nvirb,nvirb), dtype=dtype)
eris.OOoo = None
eris.OOov = np.empty((nkpts,nkpts,nkpts,noccb,noccb,nocca,nvira), dtype=dtype)
eris.OOvv = np.empty((nkpts,nkpts,nkpts,noccb,noccb,nvira,nvira), dtype=dtype)
eris.OVov = np.empty((nkpts,nkpts,nkpts,noccb,nvirb,nocca,nvira), dtype=dtype)
eris.VOov = np.empty((nkpts,nkpts,nkpts,nvirb,noccb,nocca,nvira), dtype=dtype)
eris.VOvv = np.empty((nkpts,nkpts,nkpts,nvirb,noccb,nvira,nvira), dtype=dtype)
_kuccsd_eris_common_(cc, eris)
thisdf = cc._scf.with_df
orbva = np.asarray(mo_coeff[0][:,:,nocca:], order='C')
orbvb = np.asarray(mo_coeff[1][:,:,noccb:], order='C')
eris.vvvv = thisdf.ao2mo_7d(orbva, factor=1./nkpts)
eris.VVVV = thisdf.ao2mo_7d(orbvb, factor=1./nkpts)
eris.vvVV = thisdf.ao2mo_7d([orbva,orbva,orbvb,orbvb], factor=1./nkpts)
return eris
def _kuccsd_eris_common_(cc, eris, buf=None):
from pyscf.pbc import tools
from pyscf.pbc.cc.ccsd import _adjust_occ
#if not (cc.frozen is None or cc.frozen == 0):
# raise NotImplementedError('cc.frozen = %s' % str(cc.frozen))
cput0 = (time.clock(), time.time())
log = logger.new_logger(cc)
cell = cc._scf.cell
thisdf = cc._scf.with_df
kpts = cc.kpts
nkpts = cc.nkpts
mo_coeff = eris.mo_coeff
nocca, noccb = eris.nocc
nmoa, nmob = cc.nmo
mo_a, mo_b = mo_coeff
# Re-make our fock MO matrix elements from density and fock AO
dm = cc._scf.make_rdm1(cc.mo_coeff, cc.mo_occ)
hcore = cc._scf.get_hcore()
with lib.temporary_env(cc._scf, exxdiv=None):
vhf = cc._scf.get_veff(cell, dm)
focka = [reduce(np.dot, (mo.conj().T, hcore[k]+vhf[0][k], mo))
for k, mo in enumerate(mo_a)]
fockb = [reduce(np.dot, (mo.conj().T, hcore[k]+vhf[1][k], mo))
for k, mo in enumerate(mo_b)]
eris.fock = (np.asarray(focka), np.asarray(fockb))
eris.e_hf = cc._scf.energy_tot(dm=dm, vhf=vhf)
madelung = tools.madelung(cell, kpts)
mo_ea = [focka[k].diagonal().real for k in range(nkpts)]
mo_eb = [fockb[k].diagonal().real for k in range(nkpts)]
mo_ea = [_adjust_occ(e, nocca, -madelung) for e in mo_ea]
mo_eb = [_adjust_occ(e, noccb, -madelung) for e in mo_eb]
eris.mo_energy = (mo_ea, mo_eb)
orboa = np.asarray(mo_coeff[0][:,:,:nocca], order='C')
orbob = np.asarray(mo_coeff[1][:,:,:noccb], order='C')
#orbva = np.asarray(mo_coeff[0][:,:,nocca:], order='C')
#orbvb = np.asarray(mo_coeff[1][:,:,noccb:], order='C')
dtype = np.result_type(*focka).char
# The momentum conservation array
kconserv = cc.khelper.kconserv
out = None
if isinstance(buf, h5py.Group):
out = buf.create_dataset('tmp', (nkpts,nkpts,nkpts,nocca,nmoa,nmoa,nmoa), dtype)
oppp = thisdf.ao2mo_7d([orboa,mo_coeff[0],mo_coeff[0],mo_coeff[0]], kpts,
factor=1./nkpts, out=out)
for kp, kq, kr in kpts_helper.loop_kkk(nkpts):
ks = kconserv[kp,kq,kr]
tmp = np.asarray(oppp[kp,kq,kr])
eris.oooo[kp,kq,kr] = tmp[:nocca,:nocca,:nocca,:nocca]
eris.ooov[kp,kq,kr] = tmp[:nocca,:nocca,:nocca,nocca:]
eris.oovv[kp,kq,kr] = tmp[:nocca,:nocca,nocca:,nocca:]
eris.ovov[kp,kq,kr] = tmp[:nocca,nocca:,:nocca,nocca:]
eris.voov[kq,kp,ks] = tmp[:nocca,nocca:,nocca:,:nocca].conj().transpose(1,0,3,2)
eris.vovv[kq,kp,ks] = tmp[:nocca,nocca:,nocca:,nocca:].conj().transpose(1,0,3,2)
oppp = None
if isinstance(buf, h5py.Group):
del(buf['tmp'])
out = buf.create_dataset('tmp', (nkpts,nkpts,nkpts,noccb,nmob,nmob,nmob), dtype)
oppp = thisdf.ao2mo_7d([orbob,mo_coeff[1],mo_coeff[1],mo_coeff[1]], kpts,
factor=1./nkpts, out=out)
for kp, kq, kr in kpts_helper.loop_kkk(nkpts):
ks = kconserv[kp,kq,kr]
tmp = np.asarray(oppp[kp,kq,kr])
eris.OOOO[kp,kq,kr] = tmp[:noccb,:noccb,:noccb,:noccb]
eris.OOOV[kp,kq,kr] = tmp[:noccb,:noccb,:noccb,noccb:]
eris.OOVV[kp,kq,kr] = tmp[:noccb,:noccb,noccb:,noccb:]
eris.OVOV[kp,kq,kr] = tmp[:noccb,noccb:,:noccb,noccb:]
eris.VOOV[kq,kp,ks] = tmp[:noccb,noccb:,noccb:,:noccb].conj().transpose(1,0,3,2)
eris.VOVV[kq,kp,ks] = tmp[:noccb,noccb:,noccb:,noccb:].conj().transpose(1,0,3,2)
oppp = None
if isinstance(buf, h5py.Group):
del(buf['tmp'])
out = buf.create_dataset('tmp', (nkpts,nkpts,nkpts,nocca,nmoa,nmob,nmob), dtype)
oppp = thisdf.ao2mo_7d([orboa,mo_coeff[0],mo_coeff[1],mo_coeff[1]], kpts,
factor=1./nkpts, out=out)
for kp, kq, kr in kpts_helper.loop_kkk(nkpts):
ks = kconserv[kp,kq,kr]
tmp = np.asarray(oppp[kp,kq,kr])
eris.ooOO[kp,kq,kr] = tmp[:nocca,:nocca,:noccb,:noccb]
eris.ooOV[kp,kq,kr] = tmp[:nocca,:nocca,:noccb,noccb:]
eris.ooVV[kp,kq,kr] = tmp[:nocca,:nocca,noccb:,noccb:]
eris.ovOV[kp,kq,kr] = tmp[:nocca,nocca:,:noccb,noccb:]
eris.voOV[kq,kp,ks] = tmp[:nocca,nocca:,noccb:,:noccb].conj().transpose(1,0,3,2)
eris.voVV[kq,kp,ks] = tmp[:nocca,nocca:,noccb:,noccb:].conj().transpose(1,0,3,2)
oppp = None
if isinstance(buf, h5py.Group):
del(buf['tmp'])
out = buf.create_dataset('tmp', (nkpts,nkpts,nkpts,noccb,nmob,nmoa,nmoa), dtype)
oppp = thisdf.ao2mo_7d([orbob,mo_coeff[1],mo_coeff[0],mo_coeff[0]], kpts,
factor=1./nkpts, out=out)
for kp, kq, kr in kpts_helper.loop_kkk(nkpts):
ks = kconserv[kp,kq,kr]
tmp = np.asarray(oppp[kp,kq,kr])
#eris.OOoo[kp,kq,kr] = tmp[:noccb,:noccb,:nocca,:nocca]
eris.OOov[kp,kq,kr] = tmp[:noccb,:noccb,:nocca,nocca:]
eris.OOvv[kp,kq,kr] = tmp[:noccb,:noccb,nocca:,nocca:]
eris.OVov[kp,kq,kr] = tmp[:noccb,noccb:,:nocca,nocca:]
eris.VOov[kq,kp,ks] = tmp[:noccb,noccb:,nocca:,:nocca].conj().transpose(1,0,3,2)
eris.VOvv[kq,kp,ks] = tmp[:noccb,noccb:,nocca:,nocca:].conj().transpose(1,0,3,2)
oppp = None
log.timer('CCSD integral transformation', *cput0)
return eris
def _make_eris_outcore(cc, mo_coeff=None):
eris = uccsd._ChemistsERIs()
if mo_coeff is None:
mo_coeff = cc.mo_coeff
mo_coeff = convert_mo_coeff(mo_coeff) # FIXME: Remove me!
mo_coeff = padded_mo_coeff(cc, mo_coeff)
eris.mo_coeff = mo_coeff
eris.nocc = cc.nocc
nkpts = cc.nkpts
nocca, noccb = cc.nocc
nmoa, nmob = cc.nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
if gamma_point(cc.kpts):
dtype = np.double
else:
dtype = np.complex128
dtype = np.result_type(dtype, *mo_coeff[0]).char
eris.feri = feri = lib.H5TmpFile()
eris.oooo = feri.create_dataset('oooo', (nkpts,nkpts,nkpts,nocca,nocca,nocca,nocca), dtype)
eris.ooov = feri.create_dataset('ooov', (nkpts,nkpts,nkpts,nocca,nocca,nocca,nvira), dtype)
eris.oovv = feri.create_dataset('oovv', (nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira), dtype)
eris.ovov = feri.create_dataset('ovov', (nkpts,nkpts,nkpts,nocca,nvira,nocca,nvira), dtype)
eris.voov = feri.create_dataset('voov', (nkpts,nkpts,nkpts,nvira,nocca,nocca,nvira), dtype)
eris.vovv = feri.create_dataset('vovv', (nkpts,nkpts,nkpts,nvira,nocca,nvira,nvira), dtype)
eris.vvvv = feri.create_dataset('vvvv', (nkpts,nkpts,nkpts,nvira,nvira,nvira,nvira), dtype)
eris.OOOO = feri.create_dataset('OOOO', (nkpts,nkpts,nkpts,noccb,noccb,noccb,noccb), dtype)
eris.OOOV = feri.create_dataset('OOOV', (nkpts,nkpts,nkpts,noccb,noccb,noccb,nvirb), dtype)
eris.OOVV = feri.create_dataset('OOVV', (nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb), dtype)
eris.OVOV = feri.create_dataset('OVOV', (nkpts,nkpts,nkpts,noccb,nvirb,noccb,nvirb), dtype)
eris.VOOV = feri.create_dataset('VOOV', (nkpts,nkpts,nkpts,nvirb,noccb,noccb,nvirb), dtype)
eris.VOVV = feri.create_dataset('VOVV', (nkpts,nkpts,nkpts,nvirb,noccb,nvirb,nvirb), dtype)
eris.VVVV = feri.create_dataset('VVVV', (nkpts,nkpts,nkpts,nvirb,nvirb,nvirb,nvirb), dtype)
eris.ooOO = feri.create_dataset('ooOO', (nkpts,nkpts,nkpts,nocca,nocca,noccb,noccb), dtype)
eris.ooOV = feri.create_dataset('ooOV', (nkpts,nkpts,nkpts,nocca,nocca,noccb,nvirb), dtype)
eris.ooVV = feri.create_dataset('ooVV', (nkpts,nkpts,nkpts,nocca,nocca,nvirb,nvirb), dtype)
eris.ovOV = feri.create_dataset('ovOV', (nkpts,nkpts,nkpts,nocca,nvira,noccb,nvirb), dtype)
eris.voOV = feri.create_dataset('voOV', (nkpts,nkpts,nkpts,nvira,nocca,noccb,nvirb), dtype)
eris.voVV = feri.create_dataset('voVV', (nkpts,nkpts,nkpts,nvira,nocca,nvirb,nvirb), dtype)
eris.vvVV = feri.create_dataset('vvVV', (nkpts,nkpts,nkpts,nvira,nvira,nvirb,nvirb), dtype)
eris.OOoo = None
eris.OOov = feri.create_dataset('OOov', (nkpts,nkpts,nkpts,noccb,noccb,nocca,nvira), dtype)
eris.OOvv = feri.create_dataset('OOvv', (nkpts,nkpts,nkpts,noccb,noccb,nvira,nvira), dtype)
eris.OVov = feri.create_dataset('OVov', (nkpts,nkpts,nkpts,noccb,nvirb,nocca,nvira), dtype)
eris.VOov = feri.create_dataset('VOov', (nkpts,nkpts,nkpts,nvirb,noccb,nocca,nvira), dtype)
eris.VOvv = feri.create_dataset('VOvv', (nkpts,nkpts,nkpts,nvirb,noccb,nvira,nvira), dtype)
eris.VVvv = None
fswap = lib.H5TmpFile()
_kuccsd_eris_common_(cc, eris, fswap)
fswap = None
thisdf = cc._scf.with_df
orbva = np.asarray(mo_coeff[0][:,:,nocca:], order='C')
orbvb = np.asarray(mo_coeff[1][:,:,noccb:], order='C')
thisdf.ao2mo_7d(orbva, cc.kpts, factor=1./nkpts, out=eris.vvvv)
thisdf.ao2mo_7d(orbvb, cc.kpts, factor=1./nkpts, out=eris.VVVV)
thisdf.ao2mo_7d([orbva,orbva,orbvb,orbvb], cc.kpts, factor=1./nkpts, out=eris.vvVV)
return eris
def _make_df_eris(cc, mo_coeff=None):
from pyscf.pbc.df import df
from pyscf.ao2mo import _ao2mo
cell = cc._scf.cell
if cell.dimension == 2:
raise NotImplementedError
eris = uccsd._ChemistsERIs()
if mo_coeff is None:
mo_coeff = cc.mo_coeff
mo_coeff = padded_mo_coeff(cc, mo_coeff)
eris.mo_coeff = mo_coeff
eris.nocc = cc.nocc
thisdf = cc._scf.with_df
kpts = cc.kpts
nkpts = cc.nkpts
nocca, noccb = cc.nocc
nmoa, nmob = cc.nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
#if getattr(thisdf, 'auxcell', None):
# naux = thisdf.auxcell.nao_nr()
#else:
# naux = thisdf.get_naoaux()
nao = cell.nao_nr()
mo_kpts_a, mo_kpts_b = eris.mo_coeff
if gamma_point(kpts):
dtype = np.double
else:
dtype = np.complex128
dtype = np.result_type(dtype, *mo_kpts_a)
eris.feri = feri = lib.H5TmpFile()
eris.oooo = feri.create_dataset('oooo', (nkpts,nkpts,nkpts,nocca,nocca,nocca,nocca), dtype)
eris.ooov = feri.create_dataset('ooov', (nkpts,nkpts,nkpts,nocca,nocca,nocca,nvira), dtype)
eris.oovv = feri.create_dataset('oovv', (nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira), dtype)
eris.ovov = feri.create_dataset('ovov', (nkpts,nkpts,nkpts,nocca,nvira,nocca,nvira), dtype)
eris.voov = feri.create_dataset('voov', (nkpts,nkpts,nkpts,nvira,nocca,nocca,nvira), dtype)
eris.vovv = feri.create_dataset('vovv', (nkpts,nkpts,nkpts,nvira,nocca,nvira,nvira), dtype)
eris.vvvv = None
eris.OOOO = feri.create_dataset('OOOO', (nkpts,nkpts,nkpts,noccb,noccb,noccb,noccb), dtype)
eris.OOOV = feri.create_dataset('OOOV', (nkpts,nkpts,nkpts,noccb,noccb,noccb,nvirb), dtype)
eris.OOVV = feri.create_dataset('OOVV', (nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb), dtype)
eris.OVOV = feri.create_dataset('OVOV', (nkpts,nkpts,nkpts,noccb,nvirb,noccb,nvirb), dtype)
eris.VOOV = feri.create_dataset('VOOV', (nkpts,nkpts,nkpts,nvirb,noccb,noccb,nvirb), dtype)
eris.VOVV = feri.create_dataset('VOVV', (nkpts,nkpts,nkpts,nvirb,noccb,nvirb,nvirb), dtype)
eris.VVVV = None
eris.ooOO = feri.create_dataset('ooOO', (nkpts,nkpts,nkpts,nocca,nocca,noccb,noccb), dtype)
eris.ooOV = feri.create_dataset('ooOV', (nkpts,nkpts,nkpts,nocca,nocca,noccb,nvirb), dtype)
eris.ooVV = feri.create_dataset('ooVV', (nkpts,nkpts,nkpts,nocca,nocca,nvirb,nvirb), dtype)
eris.ovOV = feri.create_dataset('ovOV', (nkpts,nkpts,nkpts,nocca,nvira,noccb,nvirb), dtype)
eris.voOV = feri.create_dataset('voOV', (nkpts,nkpts,nkpts,nvira,nocca,noccb,nvirb), dtype)
eris.voVV = feri.create_dataset('voVV', (nkpts,nkpts,nkpts,nvira,nocca,nvirb,nvirb), dtype)
eris.vvVV = None
eris.OOoo = None
eris.OOov = feri.create_dataset('OOov', (nkpts,nkpts,nkpts,noccb,noccb,nocca,nvira), dtype)
eris.OOvv = feri.create_dataset('OOvv', (nkpts,nkpts,nkpts,noccb,noccb,nvira,nvira), dtype)
eris.OVov = feri.create_dataset('OVov', (nkpts,nkpts,nkpts,noccb,nvirb,nocca,nvira), dtype)
eris.VOov = feri.create_dataset('VOov', (nkpts,nkpts,nkpts,nvirb,noccb,nocca,nvira), dtype)
eris.VOvv = feri.create_dataset('VOvv', (nkpts,nkpts,nkpts,nvirb,noccb,nvira,nvira), dtype)
eris.VVvv = None
fswap = lib.H5TmpFile()
_kuccsd_eris_common_(cc, eris, fswap)
fswap = None
eris.Lpv = Lpv = np.empty((nkpts,nkpts), dtype=object)
eris.LPV = LPV = np.empty((nkpts,nkpts), dtype=object)
with h5py.File(thisdf._cderi, 'r') as f:
kptij_lst = f['j3c-kptij'][:]
tao = []
ao_loc = None
for ki, kpti in enumerate(kpts):
for kj, kptj in enumerate(kpts):
kpti_kptj = np.array((kpti,kptj))
Lpq = np.asarray(df._getitem(f, 'j3c', kpti_kptj, kptij_lst))
mo_a = np.hstack((mo_kpts_a[ki], mo_kpts_a[kj][:,nocca:]))
mo_b = np.hstack((mo_kpts_b[ki], mo_kpts_b[kj][:,noccb:]))
mo_a = np.asarray(mo_a, dtype=dtype, order='F')
mo_b = np.asarray(mo_b, dtype=dtype, order='F')
if dtype == np.double:
outa = _ao2mo.nr_e2(Lpq, mo_a, (0, nmoa, nmoa, nmoa+nvira), aosym='s2')
outb = _ao2mo.nr_e2(Lpq, mo_b, (0, nmob, nmob, nmob+nvirb), aosym='s2')
else:
#Note: Lpq.shape[0] != naux if linear dependency is found in auxbasis
if Lpq[0].size != nao**2: # aosym = 's2'
Lpq = lib.unpack_tril(Lpq).astype(np.complex128)
outa = _ao2mo.r_e2(Lpq, mo_a, (0, nmoa, nmoa, nmoa+nvira), tao, ao_loc)
outb = _ao2mo.r_e2(Lpq, mo_b, (0, nmob, nmob, nmob+nvirb), tao, ao_loc)
Lpv[ki,kj] = outa.reshape(-1,nmoa,nvira)
LPV[ki,kj] = outb.reshape(-1,nmob,nvirb)
return eris
scf.kuhf.KUHF.CCSD = lib.class_as_method(KUCCSD)
if __name__ == '__main__':
from pyscf.pbc import gto, cc
from pyscf import lo
cell = gto.Cell()
cell.atom='''
He 0.000000000000 0.000000000000 0.000000000000
He 1.685068664391 1.685068664391 1.685068664391
'''
#cell.basis = [[0, (1., 1.)], [1, (.5, 1.)]]
cell.basis = [[0, (1., 1.)], [0, (.5, 1.)]]
cell.a = '''
0.000000000, 3.370137329, 3.370137329
3.370137329, 0.000000000, 3.370137329
3.370137329, 3.370137329, 0.000000000'''
cell.unit = 'B'
cell.mesh = [13]*3
cell.build()
np.random.seed(2)
# Running HF and CCSD with 1x1x3 Monkhorst-Pack k-point mesh
kmf = scf.KUHF(cell, kpts=cell.make_kpts([1,1,3]), exxdiv=None)
nmo = cell.nao_nr()
kmf.mo_occ = np.zeros((2,3,nmo))
kmf.mo_occ[0,:,:3] = 1
kmf.mo_occ[1,:,:1] = 1
kmf.mo_energy = np.arange(nmo) + np.random.random((2,3,nmo)) * .3
kmf.mo_energy[kmf.mo_occ == 0] += 2
mo = (np.random.random((2,3,nmo,nmo)) +
np.random.random((2,3,nmo,nmo))*1j - .5-.5j)
s = kmf.get_ovlp()
kmf.mo_coeff = np.empty_like(mo)
nkpts = len(kmf.kpts)
for k in range(nkpts):
kmf.mo_coeff[0,k] = lo.orth.vec_lowdin(mo[0,k], s[k])
kmf.mo_coeff[1,k] = lo.orth.vec_lowdin(mo[1,k], s[k])
def rand_t1_t2(mycc):
nkpts = mycc.nkpts
nocca, noccb = mycc.nocc
nmoa, nmob = mycc.nmo
nvira, nvirb = nmoa - nocca, nmob - noccb
np.random.seed(1)
t1a = (np.random.random((nkpts,nocca,nvira)) +
np.random.random((nkpts,nocca,nvira))*1j - .5-.5j)
t1b = (np.random.random((nkpts,noccb,nvirb)) +
np.random.random((nkpts,noccb,nvirb))*1j - .5-.5j)
t2aa = (np.random.random((nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira)) +
np.random.random((nkpts,nkpts,nkpts,nocca,nocca,nvira,nvira))*1j - .5-.5j)
kconserv = kpts_helper.get_kconserv(kmf.cell, kmf.kpts)
t2aa = t2aa - t2aa.transpose(1,0,2,4,3,5,6)
tmp = t2aa.copy()
for ki, kj, kk in kpts_helper.loop_kkk(nkpts):
kl = kconserv[ki, kk, kj]
t2aa[ki,kj,kk] = t2aa[ki,kj,kk] - tmp[ki,kj,kl].transpose(0,1,3,2)
t2ab = (np.random.random((nkpts,nkpts,nkpts,nocca,noccb,nvira,nvirb)) +
np.random.random((nkpts,nkpts,nkpts,nocca,noccb,nvira,nvirb))*1j - .5-.5j)
t2bb = (np.random.random((nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb)) +
np.random.random((nkpts,nkpts,nkpts,noccb,noccb,nvirb,nvirb))*1j - .5-.5j)
t2bb = t2bb - t2bb.transpose(1,0,2,4,3,5,6)
tmp = t2bb.copy()
for ki, kj, kk in kpts_helper.loop_kkk(nkpts):
kl = kconserv[ki, kk, kj]
t2bb[ki,kj,kk] = t2bb[ki,kj,kk] - tmp[ki,kj,kl].transpose(0,1,3,2)
t1 = (t1a, t1b)
t2 = (t2aa, t2ab, t2bb)
return t1, t2
mycc = KUCCSD(kmf)
eris = mycc.ao2mo()
t1, t2 = rand_t1_t2(mycc)
Ht1, Ht2 = mycc.update_amps(t1, t2, eris)
print(lib.finger(Ht1[0]) - (2.2677885702176339-2.5150764056992041j))
print(lib.finger(Ht1[1]) - (-51.643438947846086+526.58026126100458j))
print(lib.finger(Ht2[0]) - (-29.490813482748258-8.7509143690136018j))
print(lib.finger(Ht2[1]) - (2256.0440056839416-193.16480896707569j))
print(lib.finger(Ht2[2]) - (-250.59447681063182-397.57189085666982j))
kmf.mo_occ[:] = 0
kmf.mo_occ[:,:,:2] = 1
mycc = KUCCSD(kmf)
eris = mycc.ao2mo()
t1, t2 = rand_t1_t2(mycc)
Ht1, Ht2 = mycc.update_amps(t1, t2, eris)
print(lib.finger(Ht1[0]) - (5.4622516572705662+1.990046725028729j))
print(lib.finger(Ht1[1]) - (4.8801120611799043-5.9940463787453488j))
print(lib.finger(Ht2[0]) - (-192.38864512375193+305.14191018543983j))
print(lib.finger(Ht2[1]) - (23085.044505825954-11527.802302550244j))
print(lib.finger(Ht2[2]) - (115.57932548288559-40.888597453928604j))
from pyscf.pbc.cc import kccsd
kgcc = kccsd.GCCSD(scf.addons.convert_to_ghf(kmf))
kccsd_eris = kccsd._make_eris_incore(kgcc, kgcc._scf.mo_coeff)
r1 = kgcc.spatial2spin(t1)
r2 = kgcc.spatial2spin(t2)
ge = kccsd.energy(kgcc, r1, r2, kccsd_eris)
r1, r2 = kgcc.update_amps(r1, r2, kccsd_eris)
ue = energy(mycc, t1, t2, eris)
print(abs(ge - ue))
print(abs(r1 - kgcc.spatial2spin(Ht1)).max())
print(abs(r2 - kgcc.spatial2spin(Ht2)).max())
kmf = kmf.density_fit(auxbasis=[[0, (1., 1.)]])
mycc = KUCCSD(kmf)
eris = _make_df_eris(mycc, mycc.mo_coeff)
t1, t2 = rand_t1_t2(mycc)
Ht1, Ht2 = mycc.update_amps(t1, t2, eris)
print(lib.finger(Ht1[0]) - (6.9341372555790013+0.87313546297025901j))
print(lib.finger(Ht1[1]) - (6.7538005829391992-0.95702422534126796j))
print(lib.finger(Ht2[0]) - (-509.24544842179876+448.00925776269855j))
print(lib.finger(Ht2[1]) - (107.5960392010511+40.869216223808067j) )
print(lib.finger(Ht2[2]) - (-196.75910296082139+218.53005038057515j))
kgcc = kccsd.GCCSD(scf.addons.convert_to_ghf(kmf))
kccsd_eris = kccsd._make_eris_incore(kgcc, kgcc._scf.mo_coeff)
r1 = kgcc.spatial2spin(t1)
r2 = kgcc.spatial2spin(t2)
ge = kccsd.energy(kgcc, r1, r2, kccsd_eris)
r1, r2 = kgcc.update_amps(r1, r2, kccsd_eris)
print(abs(r1 - kgcc.spatial2spin(Ht1)).max())
print(abs(r2 - kgcc.spatial2spin(Ht2)).max())
print(all([abs(lib.finger(eris.oooo) - (-0.18290712163391809-0.13839081039521306j) )<1e-8,
abs(lib.finger(eris.ooOO) - (-0.084752145202964035-0.28496525042110676j) )<1e-8,
#abs(lib.finger(eris.OOoo) - (0.43054922768629345-0.27990237216969871j) )<1e-8,
abs(lib.finger(eris.OOOO) - (-0.2941475969103261-0.047247498899840978j) )<1e-8,
abs(lib.finger(eris.ooov) - (0.23381463349517045-0.11703340936984277j) )<1e-8,
abs(lib.finger(eris.ooOV) - (-0.052655392703214066+0.69533309442418556j) )<1e-8,
abs(lib.finger(eris.OOov) - (-0.2111361247200903+0.85087916975274647j) )<1e-8,
abs(lib.finger(eris.OOOV) - (-0.36995992208047412-0.18887278030885621j) )<1e-8,
abs(lib.finger(eris.oovv) - (0.21107397525051516+0.0048714991438174871j) )<1e-8,
abs(lib.finger(eris.ooVV) - (-0.076411225687065987+0.11080438166425896j) )<1e-8,
abs(lib.finger(eris.OOvv) - (-0.17880337626095003-0.24174716216954206j) )<1e-8,
abs(lib.finger(eris.OOVV) - (0.059186286356424908+0.68433866387500164j) )<1e-8,
abs(lib.finger(eris.ovov) - (0.15402983765151051+0.064359681685222214j) )<1e-8,
abs(lib.finger(eris.ovOV) - (-0.10697649196044598+0.30351249676253234j) )<1e-8,
#abs(lib.finger(eris.OVov) - (-0.17619329728836752-0.56585020976035816j) )<1e-8,
abs(lib.finger(eris.OVOV) - (-0.63963235318492118+0.69863219317718828j) )<1e-8,
abs(lib.finger(eris.voov) - (-0.24137641647339092+0.18676684336011531j) )<1e-8,
abs(lib.finger(eris.voOV) - (0.19257709151227204+0.38929027819406414j) )<1e-8,
#abs(lib.finger(eris.VOov) - (0.07632606729926053-0.70350947950650355j) )<1e-8,
abs(lib.finger(eris.VOOV) - (-0.47970203195500816+0.46735207193861927j) )<1e-8,
abs(lib.finger(eris.vovv) - (-0.1342049915673903-0.23391327821719513j) )<1e-8,
abs(lib.finger(eris.voVV) - (-0.28989635223866056+0.9644368822688475j) )<1e-8,
abs(lib.finger(eris.VOvv) - (-0.32428269235420271+0.0029847254383674748j))<1e-8,
abs(lib.finger(eris.VOVV) - (0.45031779746222456-0.36858577475752041j) )<1e-8]))
eris = _make_eris_outcore(mycc, mycc.mo_coeff)
print(all([abs(lib.finger(eris.oooo) - (-0.18290712163391809-0.13839081039521306j) )<1e-8,
abs(lib.finger(eris.ooOO) - (-0.084752145202964035-0.28496525042110676j) )<1e-8,
#abs(lib.finger(eris.OOoo) - (0.43054922768629345-0.27990237216969871j) )<1e-8,
abs(lib.finger(eris.OOOO) - (-0.2941475969103261-0.047247498899840978j) )<1e-8,
abs(lib.finger(eris.ooov) - (0.23381463349517045-0.11703340936984277j) )<1e-8,
abs(lib.finger(eris.ooOV) - (-0.052655392703214066+0.69533309442418556j) )<1e-8,
abs(lib.finger(eris.OOov) - (-0.2111361247200903+0.85087916975274647j) )<1e-8,
abs(lib.finger(eris.OOOV) - (-0.36995992208047412-0.18887278030885621j) )<1e-8,
abs(lib.finger(eris.oovv) - (0.21107397525051516+0.0048714991438174871j) )<1e-8,
abs(lib.finger(eris.ooVV) - (-0.076411225687065987+0.11080438166425896j) )<1e-8,
abs(lib.finger(eris.OOvv) - (-0.17880337626095003-0.24174716216954206j) )<1e-8,
abs(lib.finger(eris.OOVV) - (0.059186286356424908+0.68433866387500164j) )<1e-8,
abs(lib.finger(eris.ovov) - (0.15402983765151051+0.064359681685222214j) )<1e-8,
abs(lib.finger(eris.ovOV) - (-0.10697649196044598+0.30351249676253234j) )<1e-8,
#abs(lib.finger(eris.OVov) - (-0.17619329728836752-0.56585020976035816j) )<1e-8,
abs(lib.finger(eris.OVOV) - (-0.63963235318492118+0.69863219317718828j) )<1e-8,
abs(lib.finger(eris.voov) - (-0.24137641647339092+0.18676684336011531j) )<1e-8,
abs(lib.finger(eris.voOV) - (0.19257709151227204+0.38929027819406414j) )<1e-8,
#abs(lib.finger(eris.VOov) - (0.07632606729926053-0.70350947950650355j) )<1e-8,
abs(lib.finger(eris.VOOV) - (-0.47970203195500816+0.46735207193861927j) )<1e-8,
abs(lib.finger(eris.vovv) - (-0.1342049915673903-0.23391327821719513j) )<1e-8,
abs(lib.finger(eris.voVV) - (-0.28989635223866056+0.9644368822688475j) )<1e-8,
abs(lib.finger(eris.VOvv) - (-0.32428269235420271+0.0029847254383674748j))<1e-8,
abs(lib.finger(eris.VOVV) - (0.45031779746222456-0.36858577475752041j) )<1e-8,
abs(lib.finger(eris.vvvv) - (-0.080512851258903173-0.2868384266725581j) )<1e-8,
abs(lib.finger(eris.vvVV) - (-0.5137063762484736+1.1036785801263898j) )<1e-8,
#abs(lib.finger(eris.VVvv) - (0.16468487082491939+0.25730725586992997j) )<1e-8,
abs(lib.finger(eris.VVVV) - (-0.56714875196802295+0.058636785679170501j) )<1e-8]))
| 45.800616 | 120 | 0.608354 |
2ea945829dbd0f950943c3200bb362b0095dcb82 | 5,361 | py | Python | oauth_dropins/reddit.py | ravenscroftj/oauth-dropins | 59cc4bfc8157142249c5eb561b1f665da560e6c1 | [
"Unlicense"
] | null | null | null | oauth_dropins/reddit.py | ravenscroftj/oauth-dropins | 59cc4bfc8157142249c5eb561b1f665da560e6c1 | [
"Unlicense"
] | null | null | null | oauth_dropins/reddit.py | ravenscroftj/oauth-dropins | 59cc4bfc8157142249c5eb561b1f665da560e6c1 | [
"Unlicense"
] | null | null | null | """reddit OAuth drop-in.
reddit API docs:
https://github.com/reddit-archive/reddit/wiki/API
https://www.reddit.com/dev/api
https://www.reddit.com/prefs/apps
praw API docs:
https://praw.readthedocs.io/en/v3.6.0/pages/oauth.html
"""
import logging
import urllib.parse
from flask import request
from google.cloud import ndb
import praw
from . import views, models
from .webutil import appengine_info, flask_util, util
from .webutil.util import json_dumps, json_loads
from random import randint
if appengine_info.DEBUG:
REDDIT_APP_KEY = util.read('reddit_app_key_local')
REDDIT_APP_SECRET = util.read('reddit_app_secret_local')
else:
REDDIT_APP_KEY = util.read('reddit_app_key')
REDDIT_APP_SECRET = util.read('reddit_app_secret')
class RedditAuth(models.BaseAuth):
"""An authenticated reddit user.
Provides methods that return information about this user and make OAuth-signed
requests to the Reddit API. Stores OAuth credentials in the datastore. See
models.BaseAuth for usage details.
Reddit-specific details: the stored "access_token" is actually a refresh token;
see: https://stackoverflow.com/questions/28955541/how-to-get-access-token-reddit-api
The datastore entity key name is the reddit username.
"""
# refresh token
refresh_token = ndb.StringProperty(required=True)
user_json = ndb.TextProperty()
def site_name(self):
return 'Reddit'
def user_display_name(self):
"""Returns the username.
"""
return self.key_id()
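# A hedged sketch (not part of the original class): the stored refresh token can
# be used to rebuild an authorized praw client for later API calls. The helper
# name is an illustrative assumption.
def _praw_client_for(auth):
    """Return a praw.Reddit client authorized as the given RedditAuth user."""
    return praw.Reddit(client_id=REDDIT_APP_KEY,
                       client_secret=REDDIT_APP_SECRET,
                       refresh_token=auth.refresh_token,
                       user_agent='oauth-dropin reddit api')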
class Start(views.Start):
"""Starts reddit auth. goes directly to redirect. passes to_path in "state"
"""
NAME = 'reddit'
LABEL = 'Reddit'
DEFAULT_SCOPE = 'identity,read'
def redirect_url(self, state=None):
# if state is None the reddit API redirect breaks, so fall back to a random string
if not state:
state = str(randint(100000, 999999))
assert REDDIT_APP_KEY and REDDIT_APP_SECRET, \
"Please fill in the reddit_app_key and reddit_app_secret files in your app's root directory."
url = urllib.parse.urljoin(request.host_url, self.to_path)
reddit = praw.Reddit(client_id=REDDIT_APP_KEY,
client_secret=REDDIT_APP_SECRET,
redirect_uri=url,
user_agent='oauth-dropin reddit api')
# store the state for later use in the callback view
models.OAuthRequestToken(id=state,
token_secret=state,
state=state).put()
st = util.encode_oauth_state({'state': state, 'to_path': self.to_path})
return reddit.auth.url(self.scope.split(self.SCOPE_SEPARATOR), st, 'permanent')
@classmethod
def button_html(cls, *args, **kwargs):
return super(cls, cls).button_html(
*args,
input_style='background-color: #CEE3F8; padding: 10px',
**kwargs)
class Callback(views.Callback):
"""OAuth callback. Only ensures that identity access was granted.
"""
def dispatch_request(self):
error = request.values.get('error')
st = util.decode_oauth_state(request.values.get('state'))
state = st.get('state')
to_path = st.get('to_path')
code = request.values.get('code')
if error or not state or not code:
if error == 'access_denied':
logging.info(f"User declined: {request.values.get('error_description')}")
return self.finish(None, state=state)
else:
flask_util.error(error)
# look up the stored state to check authenticity
request_token = models.OAuthRequestToken.get_by_id(state)
if request_token is None:
flask_util.error(f'Invalid oauth_token: {state}')
url = urllib.parse.urljoin(request.host_url, to_path)
reddit = praw.Reddit(client_id=REDDIT_APP_KEY,
client_secret=REDDIT_APP_SECRET,
redirect_uri=url,
user_agent='oauth-dropin reddit api')
refresh_token = reddit.auth.authorize(code)
praw_user = reddit.user.me()
user_json = praw_to_user(praw_user)
user_id = user_json.get('name')
auth = RedditAuth(id=user_id,
refresh_token=refresh_token,
user_json=json_dumps(user_json))
auth.put()
return self.finish(auth, state=state)
def praw_to_user(user):
"""
Converts a PRAW user to a dict user.
Args:
user: :class:`praw.models.Redditor`
Note 1: accessing redditor attributes lazily calls reddit API
Note 2: if user.is_suspended is True, other attributes will not exist
Note 3: subreddit refers to a user profile (stored as a subreddit)
Ref: https://praw.readthedocs.io/en/latest/code_overview/models/redditor.html
Returns: dict
Raises:
:class:`prawcore.exceptions.NotFound` if the user doesn't exist or has been
deleted
"""
if getattr(user, 'is_suspended', False):
return {}
subreddit = getattr(user, 'subreddit', None)
if subreddit:
subreddit = {
'id': getattr(subreddit, 'id', None),
'display_name': getattr(subreddit, 'display_name', None),
'name': getattr(subreddit, 'name', None),
'description': getattr(subreddit, 'public_description', None),
}
return {
'name': getattr(user, 'name', None),
'subreddit': subreddit,
'icon_img': getattr(user, 'icon_img', None),
'id': getattr(user, 'id', None),
'created_utc': getattr(user, 'created_utc', None)
}
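# Hedged usage sketch (illustrative only): converting the authenticated user,
# guarding against deleted accounts as described in the docstring above.
# "reddit" is assumed to be an authorized praw.Reddit client.
#
#   try:
#       user = praw_to_user(reddit.user.me())
#   except prawcore.exceptions.NotFound:
#       user = {}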
| 32.101796 | 99 | 0.684574 |
d6b9f9292621585da0c4a1efdf1ad875bc29fd9a | 24,226 | py | Python | tests/test_modeling_gptj.py | Sanger2000/transformers | 5de2046e12cffa5a0381219363cd521e8fe1a2bb | [
"Apache-2.0"
] | 2 | 2022-02-19T07:02:52.000Z | 2022-02-19T07:02:55.000Z | tests/test_modeling_gptj.py | Sanger2000/transformers | 5de2046e12cffa5a0381219363cd521e8fe1a2bb | [
"Apache-2.0"
] | 1 | 2022-02-17T12:40:59.000Z | 2022-02-17T12:40:59.000Z | tests/test_modeling_gptj.py | Sanger2000/transformers | 5de2046e12cffa5a0381219363cd521e8fe1a2bb | [
"Apache-2.0"
] | 1 | 2022-02-20T11:47:53.000Z | 2022-02-20T11:47:53.000Z | # coding=utf-8
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import unittest
from transformers import GPTJConfig, is_torch_available
from transformers.testing_utils import require_torch, slow, tooslow, torch_device
from .test_configuration_common import ConfigTester
from .test_generation_utils import GenerationTesterMixin
from .test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor, random_attention_mask
if is_torch_available():
import torch
from transformers import (
GPTJ_PRETRAINED_MODEL_ARCHIVE_LIST,
AutoTokenizer,
GPTJForCausalLM,
GPTJForQuestionAnswering,
GPTJForSequenceClassification,
GPTJModel,
)
class GPTJModelTester:
def __init__(
self,
parent,
batch_size=14,
seq_length=7,
is_training=True,
use_token_type_ids=True,
use_input_mask=True,
use_labels=True,
use_mc_token_ids=True,
vocab_size=99,
hidden_size=32,
rotary_dim=4,
num_hidden_layers=5,
num_attention_heads=4,
intermediate_size=37,
hidden_act="gelu",
hidden_dropout_prob=0.0,
attention_probs_dropout_prob=0.0,
max_position_embeddings=512,
type_vocab_size=16,
type_sequence_label_size=2,
initializer_range=0.02,
num_labels=3,
num_choices=4,
):
self.parent = parent
self.batch_size = batch_size
self.seq_length = seq_length
self.is_training = is_training
self.use_token_type_ids = use_token_type_ids
self.use_input_mask = use_input_mask
self.use_labels = use_labels
self.use_mc_token_ids = use_mc_token_ids
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.rotary_dim = rotary_dim
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.hidden_act = hidden_act
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.type_sequence_label_size = type_sequence_label_size
self.initializer_range = initializer_range
self.num_labels = num_labels
self.num_choices = num_choices
self.scope = None
self.bos_token_id = vocab_size - 1
self.eos_token_id = vocab_size - 1
self.pad_token_id = vocab_size - 1
def get_large_model_config(self):
return GPTJConfig.from_pretrained("EleutherAI/gpt-j-6B")
def prepare_config_and_inputs(self):
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
input_mask = None
if self.use_input_mask:
input_mask = random_attention_mask([self.batch_size, self.seq_length])
token_type_ids = None
if self.use_token_type_ids:
token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size)
mc_token_ids = None
if self.use_mc_token_ids:
mc_token_ids = ids_tensor([self.batch_size, self.num_choices], self.seq_length)
sequence_labels = None
token_labels = None
choice_labels = None
if self.use_labels:
sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size)
token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels)
choice_labels = ids_tensor([self.batch_size], self.num_choices)
config = self.get_config()
head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
return (
config,
input_ids,
input_mask,
head_mask,
token_type_ids,
mc_token_ids,
sequence_labels,
token_labels,
choice_labels,
)
def get_config(self):
return GPTJConfig(
vocab_size=self.vocab_size,
n_embd=self.hidden_size,
n_layer=self.num_hidden_layers,
n_head=self.num_attention_heads,
intermediate_size=self.intermediate_size,
hidden_act=self.hidden_act,
hidden_dropout_prob=self.hidden_dropout_prob,
attention_probs_dropout_prob=self.attention_probs_dropout_prob,
n_positions=self.max_position_embeddings,
type_vocab_size=self.type_vocab_size,
initializer_range=self.initializer_range,
use_cache=True,
bos_token_id=self.bos_token_id,
eos_token_id=self.eos_token_id,
pad_token_id=self.pad_token_id,
rotary_dim=self.rotary_dim,
)
def prepare_config_and_inputs_for_decoder(self):
(
config,
input_ids,
input_mask,
head_mask,
token_type_ids,
mc_token_ids,
sequence_labels,
token_labels,
choice_labels,
) = self.prepare_config_and_inputs()
encoder_hidden_states = floats_tensor([self.batch_size, self.seq_length, self.hidden_size])
encoder_attention_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2)
return (
config,
input_ids,
input_mask,
head_mask,
token_type_ids,
sequence_labels,
token_labels,
choice_labels,
encoder_hidden_states,
encoder_attention_mask,
)
def create_and_check_gptj_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
model = GPTJModel(config=config)
model.to(torch_device)
model.eval()
result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask)
result = model(input_ids, token_type_ids=token_type_ids)
result = model(input_ids)
self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size))
self.parent.assertEqual(len(result.past_key_values), config.n_layer)
def create_and_check_gptj_model_past(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
model = GPTJModel(config=config)
model.to(torch_device)
model.eval()
# first forward pass
outputs = model(input_ids, token_type_ids=token_type_ids, use_cache=True)
outputs_use_cache_conf = model(input_ids, token_type_ids=token_type_ids)
outputs_no_past = model(input_ids, token_type_ids=token_type_ids, use_cache=False)
self.parent.assertTrue(len(outputs) == len(outputs_use_cache_conf))
self.parent.assertTrue(len(outputs) == len(outputs_no_past) + 1)
output, past = outputs.to_tuple()
        # create hypothetical next token and extend to next_input_ids
next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size)
next_token_types = ids_tensor([self.batch_size, 1], self.type_vocab_size)
# append to next input_ids and token_type_ids
next_input_ids = torch.cat([input_ids, next_tokens], dim=-1)
next_token_type_ids = torch.cat([token_type_ids, next_token_types], dim=-1)
output_from_no_past = model(next_input_ids, token_type_ids=next_token_type_ids)["last_hidden_state"]
output_from_past = model(next_tokens, token_type_ids=next_token_types, past_key_values=past)[
"last_hidden_state"
]
# select random slice
random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item()
output_from_no_past_slice = output_from_no_past[:, -1, random_slice_idx].detach()
output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach()
# test that outputs are equal for slice
self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))
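    # Editorial note (not part of the original test): the slice comparison above verifies
    # incremental decoding -- feeding only the new token together with `past_key_values`
    # must reproduce the last-position hidden state of a full forward pass, e.g. (illustrative):
    #   full = model(next_input_ids)["last_hidden_state"][:, -1]
    #   cached = model(next_tokens, past_key_values=past)["last_hidden_state"][:, 0]
    #   torch.allclose(full, cached, atol=1e-3)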
def create_and_check_gptj_model_attention_mask_past(
self, config, input_ids, input_mask, head_mask, token_type_ids, *args
):
model = GPTJModel(config=config)
model.to(torch_device)
model.eval()
# create attention mask
attn_mask = torch.ones(input_ids.shape, dtype=torch.long, device=torch_device)
half_seq_length = self.seq_length // 2
attn_mask[:, half_seq_length:] = 0
# first forward pass
output, past = model(input_ids, attention_mask=attn_mask).to_tuple()
        # create hypothetical next token and extend to next_input_ids
next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size)
# change a random masked slice from input_ids
random_seq_idx_to_change = ids_tensor((1,), half_seq_length).item() + 1
random_other_next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size).squeeze(-1)
input_ids[:, -random_seq_idx_to_change] = random_other_next_tokens
# append to next input_ids and attn_mask
next_input_ids = torch.cat([input_ids, next_tokens], dim=-1)
attn_mask = torch.cat(
[attn_mask, torch.ones((attn_mask.shape[0], 1), dtype=torch.long, device=torch_device)],
dim=1,
)
# get two different outputs
output_from_no_past = model(next_input_ids, attention_mask=attn_mask)["last_hidden_state"]
output_from_past = model(next_tokens, past_key_values=past, attention_mask=attn_mask)["last_hidden_state"]
# select random slice
random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item()
output_from_no_past_slice = output_from_no_past[:, -1, random_slice_idx].detach()
output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach()
# test that outputs are equal for slice
self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))
def create_and_check_gptj_model_past_large_inputs(
self, config, input_ids, input_mask, head_mask, token_type_ids, *args
):
model = GPTJModel(config=config)
model.to(torch_device)
model.eval()
# first forward pass
outputs = model(input_ids, token_type_ids=token_type_ids, attention_mask=input_mask, use_cache=True)
output, past = outputs.to_tuple()
        # create hypothetical next token and extend to next_input_ids
next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size)
next_token_types = ids_tensor([self.batch_size, 3], self.type_vocab_size)
next_mask = ids_tensor((self.batch_size, 3), vocab_size=2)
# append to next input_ids and token_type_ids
next_input_ids = torch.cat([input_ids, next_tokens], dim=-1)
next_token_type_ids = torch.cat([token_type_ids, next_token_types], dim=-1)
next_attention_mask = torch.cat([input_mask, next_mask], dim=-1)
output_from_no_past = model(
next_input_ids, token_type_ids=next_token_type_ids, attention_mask=next_attention_mask
)["last_hidden_state"]
output_from_past = model(
next_tokens, token_type_ids=next_token_types, attention_mask=next_attention_mask, past_key_values=past
)["last_hidden_state"]
self.parent.assertTrue(output_from_past.shape[1] == next_tokens.shape[1])
# select random slice
random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item()
output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach()
output_from_past_slice = output_from_past[:, :, random_slice_idx].detach()
# test that outputs are equal for slice
self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))
def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
model = GPTJForCausalLM(config)
model.to(torch_device)
model.eval()
result = model(input_ids, token_type_ids=token_type_ids, labels=input_ids)
self.parent.assertEqual(result.loss.shape, ())
self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size))
def create_and_check_forward_and_backwards(
self, config, input_ids, input_mask, head_mask, token_type_ids, *args, gradient_checkpointing=False
):
model = GPTJForCausalLM(config)
if gradient_checkpointing:
model.gradient_checkpointing_enable()
model.to(torch_device)
result = model(input_ids, token_type_ids=token_type_ids, labels=input_ids)
self.parent.assertEqual(result.loss.shape, ())
self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size))
result.loss.backward()
def prepare_config_and_inputs_for_common(self):
config_and_inputs = self.prepare_config_and_inputs()
(
config,
input_ids,
input_mask,
head_mask,
token_type_ids,
mc_token_ids,
sequence_labels,
token_labels,
choice_labels,
) = config_and_inputs
inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask}
return config, inputs_dict
@require_torch
class GPTJModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
all_model_classes = (
(GPTJModel, GPTJForCausalLM, GPTJForSequenceClassification, GPTJForQuestionAnswering)
if is_torch_available()
else ()
)
all_generative_model_classes = (GPTJForCausalLM,) if is_torch_available() else ()
fx_compatible = True
test_pruning = False
test_missing_keys = False
test_model_parallel = False
test_head_masking = False
# special case for DoubleHeads model
def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels)
return inputs_dict
def setUp(self):
self.model_tester = GPTJModelTester(self)
self.config_tester = ConfigTester(self, config_class=GPTJConfig, n_embd=37)
def test_config(self):
self.config_tester.run_common_tests()
def test_gptj_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_gptj_model(*config_and_inputs)
def test_gptj_model_past(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_gptj_model_past(*config_and_inputs)
def test_gptj_model_att_mask_past(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_gptj_model_attention_mask_past(*config_and_inputs)
def test_gptj_model_past_large_inputs(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_gptj_model_past_large_inputs(*config_and_inputs)
def test_gptj_lm_head_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_lm_head_model(*config_and_inputs)
def test_gptj_gradient_checkpointing(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_forward_and_backwards(*config_and_inputs, gradient_checkpointing=True)
@tooslow
def test_batch_generation(self):
# Marked as @tooslow due to GPU OOM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)
model.to(torch_device)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", revision="float16")
tokenizer.padding_side = "left"
# Define PAD Token = EOS Token = 50256
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id
# use different length sentences to test batching
sentences = [
"Hello, my dog is a little",
"Today, I",
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
input_ids = inputs["input_ids"].to(torch_device)
token_type_ids = torch.cat(
[
input_ids.new_full((input_ids.shape[0], input_ids.shape[1] - 1), 0),
input_ids.new_full((input_ids.shape[0], 1), 500),
],
dim=-1,
)
outputs = model.generate(
input_ids=input_ids,
attention_mask=inputs["attention_mask"].to(torch_device),
)
outputs_tt = model.generate(
input_ids=input_ids,
attention_mask=inputs["attention_mask"].to(torch_device),
token_type_ids=token_type_ids,
)
inputs_non_padded = tokenizer(sentences[0], return_tensors="pt").input_ids.to(torch_device)
output_non_padded = model.generate(input_ids=inputs_non_padded)
num_paddings = inputs_non_padded.shape[-1] - inputs["attention_mask"][-1].long().sum().cpu().item()
inputs_padded = tokenizer(sentences[1], return_tensors="pt").input_ids.to(torch_device)
output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings)
batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch_out_sentence_tt = tokenizer.batch_decode(outputs_tt, skip_special_tokens=True)
non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True)
padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True)
expected_output_sentence = [
"Hello, my dog is a little over a year old and has been diagnosed with a heart murmur",
"Today, I’m going to talk about the most important thing in the",
]
self.assertListEqual(expected_output_sentence, batch_out_sentence)
self.assertTrue(batch_out_sentence_tt != batch_out_sentence) # token_type_ids should change output
self.assertListEqual(expected_output_sentence, [non_padded_sentence, padded_sentence])
@slow
def test_model_from_pretrained(self):
for model_name in GPTJ_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
model = GPTJModel.from_pretrained(model_name, revision="float16", torch_dtype=torch.float16)
self.assertIsNotNone(model)
@require_torch
class GPTJModelLanguageGenerationTest(unittest.TestCase):
@tooslow
def test_lm_generate_gptj(self):
# Marked as @tooslow due to GPU OOM
for checkpointing in [True, False]:
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16
)
if checkpointing:
model.gradient_checkpointing_enable()
else:
model.gradient_checkpointing_disable()
model.to(torch_device)
input_ids = torch.tensor([[464, 3290]], dtype=torch.long, device=torch_device) # The dog
# fmt: off
# The dog is a man's best friend. It is a loyal companion, and it is a friend
expected_output_ids = [464, 3290, 318, 257, 582, 338, 1266, 1545, 13, 632, 318, 257, 9112, 15185, 11, 290, 340, 318, 257, 1545]
# fmt: on
output_ids = model.generate(input_ids, do_sample=False)
self.assertListEqual(output_ids[0].tolist(), expected_output_ids)
@tooslow
def test_gptj_sample(self):
# Marked as @tooslow due to GPU OOM (issue #13676)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", revision="float16")
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)
model.to(torch_device)
torch.manual_seed(0)
tokenized = tokenizer("Today is a nice day and", return_tensors="pt", return_token_type_ids=True)
input_ids = tokenized.input_ids.to(torch_device)
output_ids = model.generate(input_ids, do_sample=True)
output_str = tokenizer.decode(output_ids[0], skip_special_tokens=True)
token_type_ids = tokenized.token_type_ids.to(torch_device)
output_seq = model.generate(input_ids=input_ids, do_sample=True, num_return_sequences=5)
output_seq_tt = model.generate(
input_ids=input_ids, token_type_ids=token_type_ids, do_sample=True, num_return_sequences=5
)
output_seq_strs = tokenizer.batch_decode(output_seq, skip_special_tokens=True)
output_seq_tt_strs = tokenizer.batch_decode(output_seq_tt, skip_special_tokens=True)
if torch_device == "cuda":
EXPECTED_OUTPUT_STR = (
"Today is a nice day and I've already been enjoying it. I walked to work with my wife"
)
else:
EXPECTED_OUTPUT_STR = "Today is a nice day and one of those days that feels a bit more alive. I am ready"
self.assertEqual(output_str, EXPECTED_OUTPUT_STR)
self.assertTrue(
all([output_seq_strs[idx] != output_seq_tt_strs[idx] for idx in range(len(output_seq_tt_strs))])
) # token_type_ids should change output
@slow
def test_gptj_sample_max_time(self):
tokenizer = AutoTokenizer.from_pretrained("anton-l/gpt-j-tiny-random")
model = GPTJForCausalLM.from_pretrained("anton-l/gpt-j-tiny-random")
model.to(torch_device)
torch.manual_seed(0)
tokenized = tokenizer("Today is a nice day and", return_tensors="pt", return_token_type_ids=True)
input_ids = tokenized.input_ids.to(torch_device)
MAX_TIME = 0.5
start = datetime.datetime.now()
model.generate(input_ids, do_sample=True, max_time=MAX_TIME, max_length=256)
duration = datetime.datetime.now() - start
self.assertGreater(duration, datetime.timedelta(seconds=MAX_TIME))
self.assertLess(duration, datetime.timedelta(seconds=1.5 * MAX_TIME))
start = datetime.datetime.now()
model.generate(input_ids, do_sample=False, max_time=MAX_TIME, max_length=256)
duration = datetime.datetime.now() - start
self.assertGreater(duration, datetime.timedelta(seconds=MAX_TIME))
self.assertLess(duration, datetime.timedelta(seconds=1.5 * MAX_TIME))
start = datetime.datetime.now()
model.generate(input_ids, do_sample=False, num_beams=2, max_time=MAX_TIME, max_length=256)
duration = datetime.datetime.now() - start
self.assertGreater(duration, datetime.timedelta(seconds=MAX_TIME))
self.assertLess(duration, datetime.timedelta(seconds=1.5 * MAX_TIME))
start = datetime.datetime.now()
model.generate(input_ids, do_sample=True, num_beams=2, max_time=MAX_TIME, max_length=256)
duration = datetime.datetime.now() - start
self.assertGreater(duration, datetime.timedelta(seconds=MAX_TIME))
self.assertLess(duration, datetime.timedelta(seconds=1.5 * MAX_TIME))
start = datetime.datetime.now()
model.generate(input_ids, do_sample=False, max_time=None, max_length=256)
duration = datetime.datetime.now() - start
self.assertGreater(duration, datetime.timedelta(seconds=1.5 * MAX_TIME))
| 42.501754 | 139 | 0.689672 |
6b07f1f42e3b96b78e95b2e0fbd0268fe16986ab | 4,736 | py | Python | venv/lib/python3.8/site-packages/vsts/graph/v4_1/models/graph_scope.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | ["Unlicense", "MIT"] | null | null | null | venv/lib/python3.8/site-packages/vsts/graph/v4_1/models/graph_scope.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | ["Unlicense", "MIT"] | null | null | null | venv/lib/python3.8/site-packages/vsts/graph/v4_1/models/graph_scope.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | ["Unlicense", "MIT"] | 2 | 2021-05-23T16:46:31.000Z | 2021-05-26T23:51:09.000Z |
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from .graph_subject import GraphSubject
class GraphScope(GraphSubject):
"""GraphScope.
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <graph.v4_1.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
:param legacy_descriptor: [Internal Use Only] The legacy descriptor is here in case you need to access old version IMS using identity descriptor.
:type legacy_descriptor: str
    :param origin: The type of source provider for the origin identifier (ex: AD, AAD, MSA)
:type origin: str
    :param origin_id: The unique identifier from the system of origin. Typically a sid, object id or Guid. Linking and unlinking operations can cause this value to change for a user because the user is now backed by a different provider and has a different unique id in the new provider.
:type origin_id: str
:param subject_kind: This field identifies the type of the graph subject (ex: Group, Scope, User).
:type subject_kind: str
:param administrator_descriptor: The subject descriptor that references the administrators group for this scope. Only members of this group can change the contents of this scope or assign other users permissions to access this scope.
:type administrator_descriptor: str
:param is_global: When true, this scope is also a securing host for one or more scopes.
:type is_global: bool
:param parent_descriptor: The subject descriptor for the closest account or organization in the ancestor tree of this scope.
:type parent_descriptor: str
:param scope_type: The type of this scope. Typically ServiceHost or TeamProject.
:type scope_type: object
:param securing_host_descriptor: The subject descriptor for the containing organization in the ancestor tree of this scope.
:type securing_host_descriptor: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'legacy_descriptor': {'key': 'legacyDescriptor', 'type': 'str'},
'origin': {'key': 'origin', 'type': 'str'},
'origin_id': {'key': 'originId', 'type': 'str'},
'subject_kind': {'key': 'subjectKind', 'type': 'str'},
'administrator_descriptor': {'key': 'administratorDescriptor', 'type': 'str'},
'is_global': {'key': 'isGlobal', 'type': 'bool'},
'parent_descriptor': {'key': 'parentDescriptor', 'type': 'str'},
'scope_type': {'key': 'scopeType', 'type': 'object'},
'securing_host_descriptor': {'key': 'securingHostDescriptor', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None, legacy_descriptor=None, origin=None, origin_id=None, subject_kind=None, administrator_descriptor=None, is_global=None, parent_descriptor=None, scope_type=None, securing_host_descriptor=None):
super(GraphScope, self).__init__(_links=_links, descriptor=descriptor, display_name=display_name, url=url, legacy_descriptor=legacy_descriptor, origin=origin, origin_id=origin_id, subject_kind=subject_kind)
self.administrator_descriptor = administrator_descriptor
self.is_global = is_global
self.parent_descriptor = parent_descriptor
self.scope_type = scope_type
self.securing_host_descriptor = securing_host_descriptor
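# Editorial sketch (not part of the generated client): a GraphScope can be constructed by
# passing any subset of the keyword arguments above, with unspecified fields left as None,
# e.g. GraphScope(display_name="MyProject", scope_type="TeamProject", is_global=False).
# Field names come from _attribute_map; the example values here are hypothetical.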
| 71.757576 | 288 | 0.675887 |
395f8faa2b8b9ce6494f589cb6344b95ed463932 | 991 | py | Python | pyemto/examples/B2_Cr.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | ["MIT"] | 11 | 2018-04-10T02:01:12.000Z | 2021-12-10T06:44:54.000Z | pyemto/examples/B2_Cr.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | ["MIT"] | null | null | null | pyemto/examples/B2_Cr.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | ["MIT"] | 2 | 2020-02-01T19:59:50.000Z | 2020-04-07T20:53:40.000Z |
from pyemto.examples.emto_input_generator import *
import numpy as np
folder = os.getcwd() # Get current working directory.
emtopath = folder+"/Cr_B2_antiferro" # Folder where the calculations will be performed.
latpath = emtopath
prims = np.array([[1.0,0.0,0.0],
[0.0,1.0,0.0],
[0.0,0.0,1.0]])
basis = np.array([[0.0,0.0,0.0],
[0.5,0.5,0.5]])
species = ["Cr2-","Cr2+"]
species_cpa = ["Cr","Cr"]
input_creator = EMTO(folder=emtopath)
input_creator.init_structure(latpath=latpath,
prims=prims,
basis=basis,
species=species,
latname='B2')
input_creator.init_bulk(atoms_cpa=species_cpa,
splts=[-2,2])
sws_range = np.linspace(2,3,6)
input_creator.write_bmdl_kstr_shape_input()
input_creator.write_kgrn_kfcd_swsrange(sws=sws_range)
#input_creator.draw_structure('standard_conv')
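# Editorial note: as written, the two calls above emit the EMTO structure inputs
# (write_bmdl_kstr_shape_input) for the B2 lattice and the KGRN/KFCD inputs
# (write_kgrn_kfcd_swsrange) for each of the six Wigner-Seitz radii in sws_range (2.0-3.0),
# using the antiferromagnetic Cr setup defined by splts=[-2, 2].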
| 28.314286 | 88 | 0.586276 |
c8a1283989797dbf53efc59d97b5ab6ed1fe2159 | 833 | py | Python | digsby/src/gui/uberwidgets/autoheightstatictext.py | ifwe/digsby | f5fe00244744aa131e07f09348d10563f3d8fa99 | ["Python-2.0"] | 35 | 2015-08-15T14:32:38.000Z | 2021-12-09T16:21:26.000Z | digsby/src/gui/uberwidgets/autoheightstatictext.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | ["Python-2.0"] | 4 | 2015-09-12T10:42:57.000Z | 2017-02-27T04:05:51.000Z | digsby/src/gui/uberwidgets/autoheightstatictext.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | ["Python-2.0"] | 15 | 2015-07-10T23:58:07.000Z | 2022-01-23T22:16:33.000Z |
import wx
from gui.textutil import Wrap
class AutoHeightStaticText(wx.StaticText):
'''
    Extension of wxStaticText that handles wrapping and automatically figures out its minimum height from the contained text
'''
def __init__(self, parent, id, label, pos = wx.DefaultPosition, size = wx.DefaultSize, style = 0, name = 'staticText'):
wx.StaticText.__init__(self, parent, id, label, pos, size, style, name)
self.Bind(wx.EVT_SIZE, self.OnSize)
self.CalcSize()
def OnSize(self, event):
event.Skip()
self.CalcSize()
def CalcSize(self):
dc = wx.MemoryDC()
wlabel = Wrap(self.Label, self.Size.width, self.Font, dc, 0)
exts = dc.GetMultiLineTextExtent(wlabel, self.Font)[:2]
self.SetMinSize((self.MinSize.width, exts[1]))
        self.Top.Layout()
| 32.038462 | 123 | 0.654262 |
6d9fcc5389e49251c518451ecbe8680da84ccca6 | 504 | py | Python | src/app/main.py | qooba/ainewface | a6b1ad86957304ac5b3452bb084a14256fb080f6 | ["MIT"] | null | null | null | src/app/main.py | qooba/ainewface | a6b1ad86957304ac5b3452bb084a14256fb080f6 | ["MIT"] | null | null | null | src/app/main.py | qooba/ainewface | a6b1ad86957304ac5b3452bb084a14256fb080f6 | ["MIT"] | null | null | null |
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, FileResponse
from fastapi.staticfiles import StaticFiles
from common import Bootstapper
from routers import face
app = FastAPI()
container=Bootstapper().bootstrap()
#app.mount("/static", StaticFiles(directory="static"), name="static")
#@app.get("/", response_class=HTMLResponse)
#async def homepage(include_in_schema=False):
# return FileResponse("static/index.html")
app.include_router(face.router, tags=['images'])
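# Editorial sketch: with this module saved as src/app/main.py, the service can be run
# locally with uvicorn (assuming uvicorn is installed), e.g.:
#   uvicorn main:app --host 0.0.0.0 --port 8000
# The endpoints themselves come from the included face router (routers/face.py).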
| 31.5 | 69 | 0.787698 |
7adddf67c3851b37d6d266d38170977ad763a71d | 16,341 | py | Python | customScrape.py | jacobchh/CourseNotifier | 6f25ec9840d516f2e5094d16ef8267e58038c260 | ["MIT"] | 1 | 2020-08-04T05:10:59.000Z | 2020-08-04T05:10:59.000Z | customScrape.py | jacobchh/CourseNotifier | 6f25ec9840d516f2e5094d16ef8267e58038c260 | ["MIT"] | 2 | 2020-09-09T03:28:51.000Z | 2021-08-01T02:14:11.000Z | customScrape.py | jacobchh/CourseNotifier | 6f25ec9840d516f2e5094d16ef8267e58038c260 | ["MIT"] | null | null | null |
import requests
import pandas as pd
import time as t
from datetime import datetime, time
from bs4 import BeautifulSoup
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from dotenv import load_dotenv
import os
TIMETABLE_URL = 'https://studentservices.uwo.ca/secure/timetables/mastertt/ttindex.cfm'
SUBJECT_CODES = {'ACTURSCI': 'Actuarial Science', 'AMERICAN': 'American Studies',
'ANATCELL': 'Anatomy and Cell Biology', 'ANTHRO': 'Anthropology', 'APPLMATH': 'Applied Mathematics',
'ARABIC': 'Arabic', 'AH': 'Art History', 'ARTHUM': 'Arts and Humanities', 'ASTRONOM': 'Astronomy',
'BIBLSTUD': 'Biblical Studies', 'BIOCHEM': 'Biochemistry', 'BIOLOGY': 'Biology',
'BME': 'Biomedical Engineering', 'BIOSTATS': 'Biostatistics', 'BUSINESS': 'Business Administration',
'CALCULUS': 'Calculus', 'CGS': 'Centre for Global Studies', 'CBE': 'Chem & Biochem Engineering',
'CHEMBIO': 'Chemical Biology', 'CHEM': 'Chemistry', 'CSI': 'Childhood & Social Institutns',
'CHINESE': 'Chinese', 'CHURCH': 'Church History', 'CHURLAW': 'Church Law',
'CHURMUSI': 'Church Music', 'CEE': 'Civil & Envrnmntl Engineering', 'CLASSICS': 'Classical Studies',
'CMBPROG': 'Combined Program Enrollment', 'COMMSCI': 'Communication Sci & Disorders',
'COMPLIT': 'Comparative Lit & Culture', 'COMPSCI': 'Computer Science', 'DANCE': 'Dance',
'DIGICOMM': 'Digital Communication', 'DIGIHUM': 'Digital Humanities',
'DISABST': 'Disability Studies', 'EARTHSCI': 'Earth Sciences', 'ECONOMIC': 'Economics',
'EELC': 'Education English Language Cen', 'ECE': 'Elect & Computer Engineering',
'ENGSCI': 'Engineering Science', 'ENGLISH': 'English', 'ENVIRSCI': 'Environmental Science',
'EPID': 'Epidemiology', 'EPIDEMIO': 'Epidemiology & Biostatistics', 'FIMS': 'FIMS',
'FAMLYSTU': 'Family Studies & Human Develop', 'FLDEDUC': 'Field Education', 'FILM': 'Film Studies',
'FINMOD': 'Financial Modelling', 'FOODNUTR': 'Foods and Nutrition', 'FRENCH': 'French',
'GEOGRAPH': 'Geography', 'GEOLOGY': 'Geology', 'GERMAN': 'German', 'GGB': 'Global Great Books',
'GLE': 'Governance,Leadership & Ethics', 'GREEK': 'Greek', 'GPE': 'Green Process Engineering',
'HEALTSCI': 'Health Sciences', 'HEBREW': 'Hebrew', 'HISTTHEO': 'Historical Theology',
'HISTORY': 'History', 'HISTSCI': 'History of Science', 'HOMILET': 'Homiletics',
'HUMANECO': 'Human Ecology', 'HUMANRS': 'Human Rights Studies', 'INDIGSTU': 'Indigenous Studies',
'INTEGSCI': 'Integrated Science', 'ICC': 'Intercultural Communications',
'INTERDIS': 'Interdisciplinary Studies', 'INTREL': 'International Relations', 'ITALIAN': 'Italian',
'JAPANESE': 'Japanese', 'JEWISH': 'Jewish Studies', 'MTP-BRJR': 'Journalism-Broadcasting Fanshw',
'KINESIOL': 'Kinesiology', 'LATIN': 'Latin', 'LAW': 'Law', 'LS': 'Leadership Studies',
'LINGUIST': 'Linguistics', 'LITURST': 'Liturgical Studies', 'LITURGIC': 'Liturgics',
'MOS': 'Management & Organizational St', 'MTP-MKTG': 'Marketing - Fanshawe', 'MATH': 'Mathematics',
'MME': 'Mech & Materials Engineering', 'MSE': 'Mechatronic Systems Engineerin',
'MIT': 'Media, Information &Technocult', 'MEDBIO': 'Medical Biophysics',
'MEDHINFO': 'Medical Health Informatics', 'MEDSCIEN': 'Medical Sciences',
'MEDIEVAL': 'Medieval Studies', 'MICROIMM': 'Microbiology & Immunology',
'MORALTHE': 'Moral Theology', 'MTP-MMED': 'Multimed Dsgn & Prod Fanshawe',
'MCS': 'Museum and Curatorial Studies', 'MUSIC': 'Music', 'NEURO': 'Neuroscience',
'NURSING': 'Nursing', 'ONEHEALT': 'One Health', 'PASTTHEO': 'Pastoral Theology',
'PATHOL': 'Pathology', 'PHARM': 'Pharmacology', 'PHILST': 'Philosophical Studies',
'P HILOSOP': 'Philosophy', 'PHYSICS': 'Physics', 'PHYSIOL': 'Physiology',
'PHYSPHRM': 'Physiology and Pharmacology', 'POLISCI': 'Political Science',
'PPE': 'Politics, Philosophy, Economic', 'PSYCHOL': 'Psychology',
'REHABSCI': 'Rehabilitation Sciences', 'RELEDUC': 'Religious Education',
'RELSTUD': 'Religious Studies', 'SACRTHEO': 'Sacramental Theology', 'SCHOLARS': 'Scholars Electives',
'SCIENCE': 'Science', 'SOCLJUST': 'Social Justice & Peace Studies', 'SOCSCI': 'Social Science',
'SOCWORK': 'Social Work', 'SOCIOLOG': 'Sociology', 'SE': 'Software Engineering',
'SPANISH': 'Spanish', 'SPEECH': 'Speech', 'SPIRTHEO': 'Spiritual Theology',
'STATS': 'Statistical Sciences', 'SA': 'Studio Art', 'SYSTHEO': 'Systematic Theology',
'THANAT': 'Thanatology', 'THEATRE': 'Theatre Studies', 'THEOETH': 'Theological Ethics',
'THEOLST': 'Theological Studies', 'THESIS': 'Thesis', 'TJ': 'Transitional Justice',
'WTC': 'Western Thought & Civilization', 'WOMENST': "Women's Studies",
'WORLDLIT': 'World Literatures and Cultures', 'WRITING': 'Writing'}
# returns a dictionary of all courseCodes and their statuses for a given subject
# subject code is in all caps
def getCourseCodeStatus(subjectCode):
data = {'subject': subjectCode, 'command': 'search'}
r = requests.post(TIMETABLE_URL, data)
soup = BeautifulSoup(r.text, 'html.parser')
# retrieves all different courses for given subject
courseList = soup.findAll("table", class_="table table-striped")
for i in range(len(courseList)):
courseList[i] = courseList[i].find("tbody")
statusDict = {}
# store all courses in a list
for course in courseList:
# store all sections per course in a list
tempSectionList = course.findAll("tr")
# every other 'tr' tag is a day of week table therefore it is ignored
tempSectionList = tempSectionList[::2]
for section in tempSectionList:
sectionInfo = section.findAll("td")
# td 3 and 10 are course number and class status
statusDict[sectionInfo[2].text] = sectionInfo[-3].text.strip()
return statusDict
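# Illustrative only (class numbers and statuses are hypothetical): the returned mapping
# looks like {'1301A 001': 'Not Full', '1301A 002': 'Full', ...}, keyed by the timetable's
# class-number column and valued with the stripped status-column text.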
# returns a list of all class names (1000-4000) for a given subject
def findCourseTitles(subjectCode):
data = {'subject': subjectCode, 'command': 'search'}
r = requests.post(TIMETABLE_URL, data)
soup = BeautifulSoup(r.text, 'html.parser')
# Get the course titles through filtering 'h4' header tags
courseTitles = []
courseList = soup.find_all("h4")
for course in courseList:
courseTitles.append(course.text)
return courseTitles
# imports user information from csv file with pandas, returns a sorted data frame
def readUserInformation(csvFilePath):
df = pd.read_csv(csvFilePath, names=["First Name", "Last Name", "Subject", "Class Number", "Email", "Phone Number"], skiprows=1, dtype="string")
df = df.sort_values(["Subject", "Class Number"])
return df
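# Illustrative only: the CSV is expected to carry a header row (hence skiprows=1) followed by
# records such as
#   Jane,Doe,CALCULUS,1301,jane@example.com,5551234567
# (hypothetical values); every field is read as a string and the frame is sorted by
# Subject and then Class Number.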
# deletes user from csv with pandas, based on class number
def deleteUserInformation(csvFilePath, classNumber):
df = pd.read_csv(csvFilePath, dtype="string")
    df = df[df.classnumber != classNumber]
df.to_csv(csvFilePath, index=False, columns=["firstname", "lastname", "subject", "classnumber", "email", "phonenumber"], encoding='utf-8')
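# Editorial note: the filter above uses the lowercase column names stored in the CSV
# (firstname, lastname, subject, classnumber, email, phonenumber), and because the frame is
# read with dtype="string" the classNumber argument must match the CSV text exactly.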
# sends an email notification to the user
def emailUser(name, email, courseNumber, subject):
SENDER_EMAIL = os.getenv("EMAIL")
SENDER_PASSWORD = os.getenv("EMAIL_PASS")
receiverName = name
receiverEmail = email
courseNumber = courseNumber
subject = subject # "Calculus"
message = MIMEMultipart("alternative")
message["Subject"] = "Your UWO Course Is Now Open!"
message["From"] = SENDER_EMAIL
message["To"] = receiverEmail
# write the plain text part
text = """\
Hi {name},
A spot in your {subject} class #{xxxx} just opened up! Log into Student Center right away to register.
Your information will be deleted from our database soon. If you could not get into your class, please sign up again on our website to receive another notification.
- MyUWOCourseIsFull
"""
# write the HTML part
html = """\
<html>
<body>
<div>
<div>Hi {name},</div>
<div><br></div>
<div>A spot in your {subject} class #{xxxx} just opened up! Log into <a href="https://student.uwo.ca/">Student Center</a> right away to register.<br></div>
<div><br></div>
<div>
Your information will be deleted from our database soon. If you could not get into your class, please sign up again on our <a href="https://www.coursenotifier.com/">website</a> to receive another notification.<br>
<div><br></div>
<div>
- MyUWOCourseIsFull
<div></div>
<div><br></div>
</div>
</div>
</div>
</body>
</html>
"""
# convert both parts to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message.attach(part1)
message.attach(part2)
# set up the SMTP server
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(SENDER_EMAIL, SENDER_PASSWORD)
server.sendmail(SENDER_EMAIL, receiverEmail,
message.as_string().format(name=receiverName, subject=subject, xxxx=courseNumber))
server.close()
# sends an email notification to the user saying that the course number was invalid
def emailUserCourseError(name, email, courseNumber, subject):
SENDER_EMAIL = os.getenv("EMAIL")
SENDER_PASSWORD = os.getenv("EMAIL_PASS")
receiverName = name
receiverEmail = email
courseNumber = courseNumber
subject = subject # "Calculus"
message = MIMEMultipart("alternative")
message["Subject"] = "Your Class Number Is Invalid!"
message["From"] = SENDER_EMAIL
message["To"] = receiverEmail
# write the plain text part
text = """\
Hi {name},
The class number you entered for {subject} class #{xxxx} is invalid!
Please refer to the "What's This?" link on our website, and sign up again to receive another notification.
- MyUWOCourseIsFull
"""
# write the HTML part
html = """\
<html>
<body>
<div>
<div>Hi {name},</div>
<div><br></div>
<div>The class number you entered for {subject} class #{xxxx} is invalid!<br></div>
<div><br></div>
<div>
Please refer to this <a href="https://www.coursenotifier.com/class-number.png">picture</a> on identifying your class number, and sign up again on our <a href="https://www.coursenotifier.com/">website</a> to receive another notification.<br>
<div><br></div>
<div>
- MyUWOCourseIsFull
<div></div>
<div><br></div>
</div>
</div>
</div>
</body>
</html>
"""
# convert both parts to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message.attach(part1)
message.attach(part2)
# set up the SMTP server
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(SENDER_EMAIL, SENDER_PASSWORD)
server.sendmail(SENDER_EMAIL, receiverEmail,
message.as_string().format(name=receiverName, subject=subject, xxxx=courseNumber))
server.close()
# sends an email notification to the administrator informing of an error
def emailAdminError(errorMsg):
SENDER_EMAIL = os.getenv("EMAIL")
SENDER_PASSWORD = os.getenv("EMAIL_PASS")
receiverEmail = "jacob.chun@gmail.com"
message = MIMEMultipart("alternative")
message["Subject"] = "Program Error"
message["From"] = SENDER_EMAIL
message["To"] = receiverEmail
# write the plain text
text = "There was an error with your program. Please check the components. Error message: {error}"
# convert the text to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
message.attach(part1)
# set up the SMTP server
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(SENDER_EMAIL, SENDER_PASSWORD)
server.sendmail(SENDER_EMAIL, receiverEmail, message.as_string().format(error=errorMsg))
server.close()
# loops through the pandas data frame and checks all requested courses' status
# emails all users with "Not Full" courses and then deletes their information
# saves the updated csv file
def loopThroughDataFrame(userDataFrame, csvFilePath):
# check if the data frame is empty before processing it
if userDataFrame.empty:
t.sleep(20)
return
# get all the subjects present in the data frame
subjectList = userDataFrame["Subject"].unique()
# create a dictionary where subject is the key and a list of all class numbers present in data frame is the value
presentUserCourses = {}
for subject in subjectList:
tempDf = userDataFrame.loc[userDataFrame["Subject"] == subject]
classNumberList = tempDf["Class Number"].unique()
presentUserCourses[subject] = classNumberList
# loop through each subject and each list of class numbers to find which ones are "Not Full"
# create a list of course numbers that are open
listOfOpenCourses = []
for subject in presentUserCourses.keys():
courseStatus = getCourseCodeStatus(subject)
# check if the program encountered a captcha
if courseStatus == {}:
t.sleep(20)
continue
for classNumber in presentUserCourses[subject]:
# removes the user if the class number is not valid
try:
if courseStatus[classNumber] == "Not Full":
listOfOpenCourses.append(classNumber)
except KeyError:
dfFiltered = userDataFrame.loc[userDataFrame["Class Number"] == classNumber]
for index, row in dfFiltered.iterrows():
emailUserCourseError(row["First Name"], row["Email"], row["Class Number"],
SUBJECT_CODES[row["Subject"]])
# delete user from csv
deleteUserInformation(csvFilePath, classNumber)
# pauses for 6 secs before looping to avoid captcha code from website
t.sleep(6)
# loop through each open course, return all users that have selected that course and email them
for classNumber in listOfOpenCourses:
dfFiltered = userDataFrame.loc[userDataFrame["Class Number"] == classNumber]
for index, row in dfFiltered.iterrows():
emailUser(row["First Name"], row["Email"], row["Class Number"], SUBJECT_CODES[row["Subject"]])
# delete users from csv
deleteUserInformation(csvFilePath, classNumber)
def main():
load_dotenv(override=True)
csv = "custom.csv"
start = time(23, 50)
end = time(23, 55)
while True:
try:
# retrieve user information from csv and return a data frame of user information
userDataFrame = readUserInformation(csv)
# loop through data frame, email users, delete users from database that have been messaged
loopThroughDataFrame(userDataFrame, csv)
except Exception as e:
# emails the admin if there is any error
emailAdminError(str(e))
exit()
# time to restart script
now = datetime.now().time()
if start < now < end:
exit()
if __name__ == "__main__":
main()
| 45.773109 | 260 | 0.624197 |
c40ce854706859f944849edeac5d9e57d2aa3d82 | 29,678 | py | Python | test/unit/common/ring/test_utils.py | kevin-wyx/swift | d46b0f29f9e023249b582bfa1fbf80cb8f577182 | ["Apache-2.0"] | null | null | null | test/unit/common/ring/test_utils.py | kevin-wyx/swift | d46b0f29f9e023249b582bfa1fbf80cb8f577182 | ["Apache-2.0"] | null | null | null | test/unit/common/ring/test_utils.py | kevin-wyx/swift | d46b0f29f9e023249b582bfa1fbf80cb8f577182 | ["Apache-2.0"] | null | null | null |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from swift.common import ring
from swift.common.ring.utils import (tiers_for_dev, build_tier_tree,
validate_and_normalize_ip,
validate_and_normalize_address,
is_valid_hostname,
is_local_device, parse_search_value,
parse_search_values_from_opts,
parse_change_values_from_opts,
validate_args, parse_args,
parse_builder_ring_filename_args,
build_dev_from_opts, dispersion_report,
parse_address)
class TestUtils(unittest.TestCase):
def setUp(self):
self.test_dev = {'region': 1, 'zone': 1, 'ip': '192.168.1.1',
'port': '6200', 'id': 0}
def get_test_devs():
dev0 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1',
'port': '6200', 'id': 0}
dev1 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1',
'port': '6200', 'id': 1}
dev2 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1',
'port': '6200', 'id': 2}
dev3 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2',
'port': '6200', 'id': 3}
dev4 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2',
'port': '6200', 'id': 4}
dev5 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2',
'port': '6200', 'id': 5}
dev6 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1',
'port': '6200', 'id': 6}
dev7 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1',
'port': '6200', 'id': 7}
dev8 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1',
'port': '6200', 'id': 8}
dev9 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2',
'port': '6200', 'id': 9}
dev10 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2',
'port': '6200', 'id': 10}
dev11 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2',
'port': '6200', 'id': 11}
return [dev0, dev1, dev2, dev3, dev4, dev5,
dev6, dev7, dev8, dev9, dev10, dev11]
self.test_devs = get_test_devs()
def test_tiers_for_dev(self):
self.assertEqual(
tiers_for_dev(self.test_dev),
((1,),
(1, 1),
(1, 1, '192.168.1.1'),
(1, 1, '192.168.1.1', 0)))
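        # Editorial note: the expected tuple shows the tier hierarchy used throughout this module:
        # (region,), (region, zone), (region, zone, ip) and (region, zone, ip, device id).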
def test_build_tier_tree(self):
ret = build_tier_tree(self.test_devs)
self.assertEqual(len(ret), 8)
self.assertEqual(ret[()], set([(1,)]))
self.assertEqual(ret[(1,)], set([(1, 1), (1, 2)]))
self.assertEqual(ret[(1, 1)],
set([(1, 1, '192.168.1.2'),
(1, 1, '192.168.1.1')]))
self.assertEqual(ret[(1, 2)],
set([(1, 2, '192.168.2.2'),
(1, 2, '192.168.2.1')]))
self.assertEqual(ret[(1, 1, '192.168.1.1')],
set([(1, 1, '192.168.1.1', 0),
(1, 1, '192.168.1.1', 1),
(1, 1, '192.168.1.1', 2)]))
self.assertEqual(ret[(1, 1, '192.168.1.2')],
set([(1, 1, '192.168.1.2', 3),
(1, 1, '192.168.1.2', 4),
(1, 1, '192.168.1.2', 5)]))
self.assertEqual(ret[(1, 2, '192.168.2.1')],
set([(1, 2, '192.168.2.1', 6),
(1, 2, '192.168.2.1', 7),
(1, 2, '192.168.2.1', 8)]))
self.assertEqual(ret[(1, 2, '192.168.2.2')],
set([(1, 2, '192.168.2.2', 9),
(1, 2, '192.168.2.2', 10),
(1, 2, '192.168.2.2', 11)]))
def test_is_valid_hostname(self):
self.assertTrue(is_valid_hostname("local"))
self.assertTrue(is_valid_hostname("test.test.com"))
hostname = "test." * 51
self.assertTrue(is_valid_hostname(hostname))
hostname = hostname.rstrip('.')
self.assertTrue(is_valid_hostname(hostname))
hostname = hostname + "00"
self.assertFalse(is_valid_hostname(hostname))
self.assertFalse(is_valid_hostname("$blah#"))
def test_is_local_device(self):
# localhost shows up in whataremyips() output as "::1" for IPv6
my_ips = ["127.0.0.1", "::1"]
my_port = 6200
self.assertTrue(is_local_device(my_ips, my_port,
"127.0.0.1", my_port))
self.assertTrue(is_local_device(my_ips, my_port,
"::1", my_port))
self.assertTrue(is_local_device(
my_ips, my_port,
"0000:0000:0000:0000:0000:0000:0000:0001", my_port))
self.assertTrue(is_local_device(my_ips, my_port,
"localhost", my_port))
self.assertFalse(is_local_device(my_ips, my_port,
"localhost", my_port + 1))
self.assertFalse(is_local_device(my_ips, my_port,
"127.0.0.2", my_port))
# for those that don't have a local port
self.assertTrue(is_local_device(my_ips, None,
my_ips[0], None))
# When servers_per_port is active, the "my_port" passed in is None
# which means "don't include port in the determination of locality
# because it's not reliable in this deployment scenario"
self.assertTrue(is_local_device(my_ips, None,
"127.0.0.1", 6666))
self.assertTrue(is_local_device(my_ips, None,
"::1", 6666))
self.assertTrue(is_local_device(
my_ips, None,
"0000:0000:0000:0000:0000:0000:0000:0001", 6666))
self.assertTrue(is_local_device(my_ips, None,
"localhost", 6666))
self.assertFalse(is_local_device(my_ips, None,
"127.0.0.2", my_port))
def test_validate_and_normalize_ip(self):
ipv4 = "10.0.0.1"
self.assertEqual(ipv4, validate_and_normalize_ip(ipv4))
ipv6 = "fe80::204:61ff:fe9d:f156"
self.assertEqual(ipv6, validate_and_normalize_ip(ipv6.upper()))
hostname = "test.test.com"
self.assertRaises(ValueError,
validate_and_normalize_ip, hostname)
hostname = "$blah#"
self.assertRaises(ValueError,
validate_and_normalize_ip, hostname)
def test_validate_and_normalize_address(self):
ipv4 = "10.0.0.1"
self.assertEqual(ipv4, validate_and_normalize_address(ipv4))
ipv6 = "fe80::204:61ff:fe9d:f156"
self.assertEqual(ipv6, validate_and_normalize_address(ipv6.upper()))
hostname = "test.test.com"
self.assertEqual(hostname,
validate_and_normalize_address(hostname.upper()))
hostname = "$blah#"
self.assertRaises(ValueError,
validate_and_normalize_address, hostname)
def test_parse_search_value(self):
res = parse_search_value('r0')
self.assertEqual(res, {'region': 0})
res = parse_search_value('r1')
self.assertEqual(res, {'region': 1})
res = parse_search_value('r1z2')
self.assertEqual(res, {'region': 1, 'zone': 2})
res = parse_search_value('d1')
self.assertEqual(res, {'id': 1})
res = parse_search_value('z1')
self.assertEqual(res, {'zone': 1})
res = parse_search_value('-127.0.0.1')
self.assertEqual(res, {'ip': '127.0.0.1'})
res = parse_search_value('127.0.0.1')
self.assertEqual(res, {'ip': '127.0.0.1'})
res = parse_search_value('-[127.0.0.1]:10001')
self.assertEqual(res, {'ip': '127.0.0.1', 'port': 10001})
res = parse_search_value(':10001')
self.assertEqual(res, {'port': 10001})
res = parse_search_value('R127.0.0.10')
self.assertEqual(res, {'replication_ip': '127.0.0.10'})
res = parse_search_value('R[127.0.0.10]:20000')
self.assertEqual(res, {'replication_ip': '127.0.0.10',
'replication_port': 20000})
res = parse_search_value('R:20000')
self.assertEqual(res, {'replication_port': 20000})
res = parse_search_value('/sdb1')
self.assertEqual(res, {'device': 'sdb1'})
res = parse_search_value('_meta1')
self.assertEqual(res, {'meta': 'meta1'})
self.assertRaises(ValueError, parse_search_value, 'OMGPONIES')
def test_parse_search_values_from_opts(self):
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "test.test.com",
"--port", "6200",
"--replication-ip", "r.test.com",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "change.test.test.com",
"--change-port", "6201",
"--change-replication-ip", "change.r.test.com",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
expected = {
'id': 1,
'region': 2,
'zone': 3,
'ip': "test.test.com",
'port': 6200,
'replication_ip': "r.test.com",
'replication_port': 7000,
'device': "sda3",
'meta': "some meta data",
'weight': 3.14159265359,
}
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_search_values_from_opts(opts)
self.assertEqual(search_values, expected)
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "127.0.0.1",
"--port", "6200",
"--replication-ip", "127.0.0.10",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "127.0.0.2",
"--change-port", "6201",
"--change-replication-ip", "127.0.0.20",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
expected = {
'id': 1,
'region': 2,
'zone': 3,
'ip': "127.0.0.1",
'port': 6200,
'replication_ip': "127.0.0.10",
'replication_port': 7000,
'device': "sda3",
'meta': "some meta data",
'weight': 3.14159265359,
}
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_search_values_from_opts(opts)
self.assertEqual(search_values, expected)
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "[127.0.0.1]",
"--port", "6200",
"--replication-ip", "[127.0.0.10]",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "[127.0.0.2]",
"--change-port", "6201",
"--change-replication-ip", "[127.0.0.20]",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_search_values_from_opts(opts)
self.assertEqual(search_values, expected)
def test_parse_change_values_from_opts(self):
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "test.test.com",
"--port", "6200",
"--replication-ip", "r.test.com",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "change.test.test.com",
"--change-port", "6201",
"--change-replication-ip", "change.r.test.com",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
expected = {
'ip': "change.test.test.com",
'port': 6201,
'replication_ip': "change.r.test.com",
'replication_port': 7001,
'device': "sdb3",
'meta': "some meta data for change",
}
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_change_values_from_opts(opts)
self.assertEqual(search_values, expected)
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "127.0.0.1",
"--port", "6200",
"--replication-ip", "127.0.0.10",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "127.0.0.2",
"--change-port", "6201",
"--change-replication-ip", "127.0.0.20",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
expected = {
'ip': "127.0.0.2",
'port': 6201,
'replication_ip': "127.0.0.20",
'replication_port': 7001,
'device': "sdb3",
'meta': "some meta data for change",
}
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_change_values_from_opts(opts)
self.assertEqual(search_values, expected)
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "[127.0.0.1]",
"--port", "6200",
"--replication-ip", "[127.0.0.10]",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "[127.0.0.2]",
"--change-port", "6201",
"--change-replication-ip", "[127.0.0.20]",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
new_cmd_format, opts, args = validate_args(argv)
search_values = parse_change_values_from_opts(opts)
self.assertEqual(search_values, expected)
def test_validate_args(self):
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "test.test.com",
"--port", "6200",
"--replication-ip", "r.test.com",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "change.test.test.com",
"--change-port", "6201",
"--change-replication-ip", "change.r.test.com",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
self.assertEqual(opts.id, 1)
self.assertEqual(opts.region, 2)
self.assertEqual(opts.zone, 3)
self.assertEqual(opts.ip, "test.test.com")
self.assertEqual(opts.port, 6200)
self.assertEqual(opts.replication_ip, "r.test.com")
self.assertEqual(opts.replication_port, 7000)
self.assertEqual(opts.device, "sda3")
self.assertEqual(opts.meta, "some meta data")
self.assertEqual(opts.weight, 3.14159265359)
self.assertEqual(opts.change_ip, "change.test.test.com")
self.assertEqual(opts.change_port, 6201)
self.assertEqual(opts.change_replication_ip, "change.r.test.com")
self.assertEqual(opts.change_replication_port, 7001)
self.assertEqual(opts.change_device, "sdb3")
self.assertEqual(opts.change_meta, "some meta data for change")
def test_validate_args_new_cmd_format(self):
argv = \
["--id", "0", "--region", "0", "--zone", "0",
"--ip", "",
"--port", "0",
"--replication-ip", "",
"--replication-port", "0",
"--device", "",
"--meta", "",
"--weight", "0",
"--change-ip", "",
"--change-port", "0",
"--change-replication-ip", "",
"--change-replication-port", "0",
"--change-device", "",
"--change-meta", ""]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
argv = \
["--id", None, "--region", None, "--zone", None,
"--ip", "",
"--port", "0",
"--replication-ip", "",
"--replication-port", "0",
"--device", "",
"--meta", "",
"--weight", None,
"--change-ip", "change.test.test.com",
"--change-port", "6201",
"--change-replication-ip", "change.r.test.com",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
new_cmd_format, opts, args = validate_args(argv)
self.assertFalse(new_cmd_format)
argv = \
["--id", "0"]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
argv = \
["--region", "0"]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
argv = \
["--zone", "0"]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
argv = \
["--weight", "0"]
new_cmd_format, opts, args = validate_args(argv)
self.assertTrue(new_cmd_format)
def test_parse_args(self):
argv = \
["--id", "1", "--region", "2", "--zone", "3",
"--ip", "test.test.com",
"--port", "6200",
"--replication-ip", "r.test.com",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359",
"--change-ip", "change.test.test.com",
"--change-port", "6201",
"--change-replication-ip", "change.r.test.com",
"--change-replication-port", "7001",
"--change-device", "sdb3",
"--change-meta", "some meta data for change"]
opts, args = parse_args(argv)
self.assertEqual(opts.id, 1)
self.assertEqual(opts.region, 2)
self.assertEqual(opts.zone, 3)
self.assertEqual(opts.ip, "test.test.com")
self.assertEqual(opts.port, 6200)
self.assertEqual(opts.replication_ip, "r.test.com")
self.assertEqual(opts.replication_port, 7000)
self.assertEqual(opts.device, "sda3")
self.assertEqual(opts.meta, "some meta data")
self.assertEqual(opts.weight, 3.14159265359)
self.assertEqual(opts.change_ip, "change.test.test.com")
self.assertEqual(opts.change_port, 6201)
self.assertEqual(opts.change_replication_ip, "change.r.test.com")
self.assertEqual(opts.change_replication_port, 7001)
self.assertEqual(opts.change_device, "sdb3")
self.assertEqual(opts.change_meta, "some meta data for change")
self.assertEqual(len(args), 0)
def test_parse_builder_ring_filename_args(self):
args = 'swift-ring-builder object.builder write_ring'
self.assertEqual((
'object.builder', 'object.ring.gz'
), parse_builder_ring_filename_args(args.split()))
args = 'swift-ring-builder container.ring.gz write_builder'
self.assertEqual((
'container.builder', 'container.ring.gz'
), parse_builder_ring_filename_args(args.split()))
# builder name arg should always fall through
args = 'swift-ring-builder test create'
self.assertEqual((
'test', 'test.ring.gz'
), parse_builder_ring_filename_args(args.split()))
args = 'swift-ring-builder my.file.name create'
self.assertEqual((
'my.file.name', 'my.file.name.ring.gz'
), parse_builder_ring_filename_args(args.split()))
def test_build_dev_from_opts(self):
argv = \
["--region", "0", "--zone", "3",
"--ip", "test.test.com",
"--port", "6200",
"--replication-ip", "r.test.com",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359"]
expected = {
'region': 0,
'zone': 3,
'ip': "test.test.com",
'port': 6200,
'replication_ip': "r.test.com",
'replication_port': 7000,
'device': "sda3",
'meta': "some meta data",
'weight': 3.14159265359,
}
opts, args = parse_args(argv)
device = build_dev_from_opts(opts)
self.assertEqual(device, expected)
argv = \
["--region", "2", "--zone", "3",
"--ip", "[test.test.com]",
"--port", "6200",
"--replication-ip", "[r.test.com]",
"--replication-port", "7000",
"--device", "sda3",
"--meta", "some meta data",
"--weight", "3.14159265359"]
opts, args = parse_args(argv)
self.assertRaises(ValueError, build_dev_from_opts, opts)
argv = \
["--region", "2", "--zone", "3",
"--ip", "[test.test.com]",
"--port", "6200",
"--replication-ip", "[r.test.com]",
"--replication-port", "7000",
"--meta", "some meta data",
"--weight", "3.14159265359"]
opts, args = parse_args(argv)
self.assertRaises(ValueError, build_dev_from_opts, opts)
def test_replication_defaults(self):
args = '-r 1 -z 1 -i 127.0.0.1 -p 6010 -d d1 -w 100'.split()
opts, _ = parse_args(args)
device = build_dev_from_opts(opts)
expected = {
'device': 'd1',
'ip': '127.0.0.1',
'meta': '',
'port': 6010,
'region': 1,
'replication_ip': '127.0.0.1',
'replication_port': 6010,
'weight': 100.0,
'zone': 1,
}
self.assertEqual(device, expected)
args = '-r 1 -z 1 -i test.com -p 6010 -d d1 -w 100'.split()
opts, _ = parse_args(args)
device = build_dev_from_opts(opts)
expected = {
'device': 'd1',
'ip': 'test.com',
'meta': '',
'port': 6010,
'region': 1,
'replication_ip': 'test.com',
'replication_port': 6010,
'weight': 100.0,
'zone': 1,
}
self.assertEqual(device, expected)
def test_dispersion_report(self):
rb = ring.RingBuilder(8, 3, 0)
rb.add_dev({'id': 0, 'region': 1, 'zone': 0, 'weight': 100,
'ip': '127.0.0.0', 'port': 10000, 'device': 'sda1'})
rb.add_dev({'id': 3, 'region': 1, 'zone': 0, 'weight': 100,
'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb1'})
rb.add_dev({'id': 4, 'region': 1, 'zone': 0, 'weight': 100,
'ip': '127.0.0.0', 'port': 10000, 'device': 'sdc1'})
rb.add_dev({'id': 5, 'region': 1, 'zone': 0, 'weight': 100,
'ip': '127.0.0.0', 'port': 10000, 'device': 'sdd1'})
rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'})
rb.add_dev({'id': 6, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1'})
rb.add_dev({'id': 7, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.1', 'port': 10001, 'device': 'sdc1'})
rb.add_dev({'id': 8, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd1'})
rb.add_dev({'id': 2, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.2', 'port': 10002, 'device': 'sda1'})
rb.add_dev({'id': 9, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.2', 'port': 10002, 'device': 'sdb1'})
rb.add_dev({'id': 10, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc1'})
rb.add_dev({'id': 11, 'region': 1, 'zone': 1, 'weight': 200,
'ip': '127.0.0.2', 'port': 10002, 'device': 'sdd1'})
# this ring is pretty volatile and the assertions are pretty brittle
# so we use a specific seed
rb.rebalance(seed=100)
rb.validate()
self.assertEqual(rb.dispersion, 39.84375)
report = dispersion_report(rb)
self.assertEqual(report['worst_tier'], 'r1z1')
self.assertEqual(report['max_dispersion'], 39.84375)
def build_tier_report(max_replicas, placed_parts, dispersion,
replicas):
return {
'max_replicas': max_replicas,
'placed_parts': placed_parts,
'dispersion': dispersion,
'replicas': replicas,
}
# Each node should store 256 partitions to avoid multiple replicas
# 2/5 of total weight * 768 ~= 307 -> 51 partitions on each node in
# zone 1 are stored at least twice on the nodes
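        # Worked numbers behind that expectation (derived from the setup
        # above, not from ring internals): part_power=8 and replicas=3 give
        # 2 ** 8 * 3 = 768 part-replicas in total. Each of the two zone-1
        # nodes holds 4 devices * weight 200 = 800 of the 2000 total weight,
        # i.e. 2/5, so proportionally it would receive ~307 part-replicas,
        # but a node can only hold 256 (one per partition) without
        # duplicates; the ~51 extra show up below as the "two replicas on
        # one node" bucket (205 + 51 = 256).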
expected = [
['r1z1', build_tier_report(
2, 256, 39.84375, [0, 0, 154, 102])],
['r1z1-127.0.0.1', build_tier_report(
1, 256, 19.921875, [0, 205, 51, 0])],
['r1z1-127.0.0.2', build_tier_report(
1, 256, 19.921875, [0, 205, 51, 0])],
]
report = dispersion_report(rb, 'r1z1[^/]*$', verbose=True)
graph = report['graph']
for i, (expected_key, expected_report) in enumerate(expected):
key, report = graph[i]
self.assertEqual(
(key, report),
(expected_key, expected_report)
)
# overcompensate in r1z0
rb.add_dev({'id': 12, 'region': 1, 'zone': 0, 'weight': 500,
'ip': '127.0.0.3', 'port': 10003, 'device': 'sda1'})
rb.add_dev({'id': 13, 'region': 1, 'zone': 0, 'weight': 500,
'ip': '127.0.0.3', 'port': 10003, 'device': 'sdb1'})
rb.add_dev({'id': 14, 'region': 1, 'zone': 0, 'weight': 500,
'ip': '127.0.0.3', 'port': 10003, 'device': 'sdc1'})
rb.add_dev({'id': 15, 'region': 1, 'zone': 0, 'weight': 500,
'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'})
# when the biggest tier has the smallest devices things get ugly
# can't move all the part-replicas in one rebalance
rb.rebalance(seed=100)
report = dispersion_report(rb, verbose=True)
self.assertEqual(rb.dispersion, 9.375)
self.assertEqual(report['worst_tier'], 'r1z1-127.0.0.1')
self.assertEqual(report['max_dispersion'], 7.18562874251497)
        # do a second rebalance
rb.rebalance(seed=100)
report = dispersion_report(rb, verbose=True)
self.assertEqual(rb.dispersion, 50.0)
self.assertEqual(report['worst_tier'], 'r1z0-127.0.0.3')
self.assertEqual(report['max_dispersion'], 50.0)
# ... but overload can square it
rb.set_overload(rb.get_required_overload())
rb.rebalance()
self.assertEqual(rb.dispersion, 0.0)
def test_parse_address_old_format(self):
# Test old format
argv = "127.0.0.1:6200R127.0.0.1:6200/sda1_some meta data"
ip, port, rest = parse_address(argv)
self.assertEqual(ip, '127.0.0.1')
self.assertEqual(port, 6200)
self.assertEqual(rest, 'R127.0.0.1:6200/sda1_some meta data')
if __name__ == '__main__':
unittest.main()
| 42.640805 | 76 | 0.4969 |
c1b245dc6aed99ea16e44575069a11542b0c4eca | 45,635 | py | Python | api/viz.py | mariokart345/air-bnb-solo | efcd46781fb78cc80f47020f03077131f27c1f8d | [
"MIT"
] | null | null | null | api/viz.py | mariokart345/air-bnb-solo | efcd46781fb78cc80f47020f03077131f27c1f8d | [
"MIT"
] | null | null | null | api/viz.py | mariokart345/air-bnb-solo | efcd46781fb78cc80f47020f03077131f27c1f8d | [
"MIT"
] | null | null | null | import os
from fastapi import APIRouter
import plotly.express as px
import pandas as pd
from joblib import load
import dotenv
router = APIRouter()
dotenv.load_dotenv(dotenv.find_dotenv())
MAPBOX_KEY = os.getenv('MAPBOX_KEY')
@router.get('/visualize', name='Plotly Map Box',
            summary='Uses a knn model to get the nearest 500 latitude and longitude coordinates to display on an '
'interactive scatter mapbox with plotly. Returns the converted plot in json',
response_description='Jsonified plotly fig',
responses={
200: {
"content": {
"application/json": {
'example': {
"data": [{"hovertemplate": "latitude=%{lat}<br>longitude=%{lon}<extra></extra>",
"lat": [40.75356, 40.75327, 40.754129999999996, 40.7536, 40.752959999999995,
40.75207, 40.75225, 40.75172, 40.751870000000004, 40.75165, 40.75547,
40.75547, 40.75547, 40.75547, 40.75547, 40.75547, 40.751670000000004,
40.75178, 40.75136, 40.75564, 40.7552, 40.75181, 40.75181, 40.75181,
40.75127, 40.75137, 40.75122, 40.75162, 40.75122, 40.7511, 40.75117,
40.75117, 40.75098, 40.75591, 40.751020000000004, 40.75083, 40.75088,
40.7557, 40.75093, 40.750890000000005, 40.756009999999996, 40.75073,
40.75105, 40.75054, 40.75078, 40.75053, 40.75656, 40.75595, 40.75544,
40.75544, 40.75544, 40.75544, 40.75544, 40.75544, 40.75544, 40.75544,
40.75544, 40.75544, 40.75544, 40.75544, 40.75544, 40.750659999999996,
40.75674, 40.75015, 40.754509999999996, 40.75694, 40.75715, 40.74995,
40.750040000000006, 40.7572, 40.757259999999995, 40.75713, 40.74988,
40.757090000000005, 40.75104, 40.751090000000005, 40.75043,
40.757220000000004, 40.75322, 40.75052, 40.7508, 40.75105, 40.74972,
40.75035, 40.75107, 40.750859999999996, 40.75076, 40.74947, 40.75427,
40.757740000000005, 40.757, 40.74948, 40.749959999999994, 40.75262,
40.75262, 40.75262, 40.75262, 40.75674, 40.749340000000004, 40.74929,
40.74928, 40.74925, 40.75736, 40.750820000000004, 40.750890000000005,
40.757020000000004, 40.75634, 40.75634, 40.749309999999994, 40.75111,
40.75797, 40.75725, 40.74923, 40.749179999999996, 40.749990000000004,
40.75013, 40.74935, 40.75727, 40.74924, 40.75745, 40.74987, 40.75812,
40.75776, 40.749, 40.75704, 40.749629999999996, 40.75775, 40.74881,
40.74964, 40.75795, 40.749309999999994, 40.75026, 40.74863, 40.7488,
40.75091, 40.750890000000005, 40.75738, 40.75096, 40.75485, 40.74893,
40.753190000000004, 40.74905, 40.75843, 40.74896, 40.74943,
40.757529999999996, 40.7587, 40.75738, 40.75868, 40.75632,
40.754740000000005, 40.74842, 40.74838, 40.75877, 40.74836, 40.75016,
40.74994, 40.7517, 40.749179999999996, 40.7483, 40.75716, 40.74897,
40.748259999999995, 40.74831, 40.75678, 40.74814, 40.7487, 40.75042,
40.74834, 40.75672, 40.75793, 40.759, 40.7492, 40.74811, 40.74904,
40.74818, 40.75867, 40.74943, 40.74823, 40.74838, 40.75906, 40.75197,
40.748259999999995, 40.7532, 40.748979999999996, 40.7484, 40.75844,
40.74814, 40.75634, 40.74886, 40.75169, 40.7591, 40.75053, 40.74913,
40.75694, 40.74823, 40.74827, 40.7512, 40.748090000000005, 40.75108,
40.74823, 40.7578, 40.74796, 40.759159999999994, 40.75882, 40.74804,
40.74862, 40.74808, 40.748290000000004, 40.74816, 40.75185, 40.74821,
40.7592, 40.7592, 40.74811, 40.75893, 40.74801, 40.75864, 40.75906,
40.75529, 40.748459999999994, 40.74822, 40.74816, 40.74852,
40.747859999999996, 40.7481, 40.7488, 40.759009999999996, 40.75727,
40.747840000000004, 40.74808, 40.75704, 40.75864, 40.7591, 40.74817,
40.7491, 40.7495, 40.748020000000004, 40.74811, 40.759170000000005,
40.74793, 40.75932, 40.74795, 40.74795, 40.7479, 40.75763, 40.74792,
40.75927, 40.7479, 40.75778, 40.74788, 40.748290000000004, 40.74798,
40.748059999999995, 40.74838, 40.759370000000004, 40.74794, 40.7593,
40.74796, 40.7481, 40.74787, 40.7573, 40.75736, 40.7478, 40.75938,
40.75692, 40.7479, 40.757540000000006, 40.74822, 40.75782, 40.74777,
40.758990000000004, 40.74827, 40.7478, 40.74861, 40.75947, 40.75946,
40.759479999999996, 40.74823, 40.759240000000005, 40.758759999999995,
40.758759999999995, 40.758759999999995, 40.758759999999995,
40.758759999999995, 40.758759999999995, 40.758759999999995,
40.758759999999995, 40.758759999999995, 40.74817, 40.750440000000005,
40.747640000000004, 40.759159999999994, 40.75949, 40.759409999999995,
40.75923, 40.74772, 40.74759, 40.74841, 40.74772, 40.75512,
40.748059999999995, 40.74767, 40.7476, 40.74822, 40.75946,
40.759209999999996, 40.75956, 40.74767, 40.75933, 40.748129999999996,
40.75925, 40.747679999999995, 40.747679999999995, 40.75932,
40.758759999999995, 40.754509999999996, 40.7476, 40.75928, 40.74877,
40.74818, 40.75962, 40.7551, 40.75963, 40.747659999999996,
40.759609999999995, 40.74808, 40.74929, 40.75965, 40.748059999999995,
40.747659999999996, 40.7476, 40.74808, 40.748129999999996, 40.74922,
40.748290000000004, 40.74749, 40.74811, 40.75049, 40.74821, 40.74808,
40.74752, 40.75956, 40.75956, 40.75956, 40.75956, 40.75956, 40.75943,
40.748129999999996, 40.75936, 40.74878, 40.74831, 40.74799, 40.75732,
40.7593, 40.74745, 40.74838, 40.748259999999995, 40.75647, 40.75902,
40.74747, 40.75935, 40.75707, 40.748000000000005, 40.747440000000005,
40.74797, 40.75427, 40.7549, 40.747690000000006, 40.759679999999996,
40.74822, 40.75857, 40.74733, 40.75365, 40.74807, 40.751529999999995,
40.75727, 40.75707, 40.74726, 40.748000000000005, 40.7599, 40.74908,
40.75922, 40.747840000000004, 40.74815, 40.75465, 40.74817, 40.74812,
40.75855, 40.74787, 40.75993, 40.748020000000004, 40.74795, 40.75725,
40.75996, 40.75997, 40.75995, 40.747479999999996, 40.75311,
40.758829999999996, 40.75053, 40.747479999999996, 40.747479999999996,
40.760020000000004, 40.74834, 40.7479, 40.74715, 40.7596,
40.758790000000005, 40.75848, 40.759570000000004, 40.75976, 40.7478,
40.747820000000004, 40.759879999999995, 40.74864, 40.7478, 40.75891,
40.74797, 40.74803, 40.75725, 40.74783, 40.749359999999996,
40.759890000000006, 40.75779, 40.75739, 40.749629999999996, 40.75955,
40.759479999999996, 40.74783, 40.7478, 40.74755, 40.75983,
40.747840000000004, 40.74771, 40.76018, 40.7533, 40.750479999999996,
40.759879999999995, 40.75574, 40.74749, 40.75993, 40.74736,
40.746959999999994, 40.75902, 40.754259999999995, 40.74776,
40.746970000000005, 40.750820000000004, 40.749340000000004, 40.76019,
40.76019, 40.76019, 40.76019, 40.76019, 40.748329999999996, 40.74691,
40.75164, 40.747209999999995, 40.74797, 40.74757, 40.75405, 40.75956,
40.74749, 40.75823, 40.74732, 40.74757, 40.75512, 40.74955, 40.75331,
40.748000000000005, 40.74759, 40.7477, 40.76032, 40.747690000000006,
40.74707, 40.74745, 40.755340000000004, 40.74756, 40.7598, 40.74667,
40.74677, 40.76042, 40.747440000000005, 40.75905, 40.74742, 40.7478,
40.760459999999995, 40.74747, 40.75922, 40.74773, 40.746629999999996,
40.76052, 40.74669, 40.75555, 40.76039, 40.74672, 40.74731, 40.75711,
40.74763], "legendgroup": "",
"lon": [-73.98559, -73.9862, -73.98608, -73.98445, -73.98451, -73.98591,
-73.98668, -73.98526, -73.98429, -73.98461, -73.98456999999999,
-73.98456999999999, -73.98456999999999, -73.98456999999999,
-73.98456999999999, -73.98456999999999, -73.98451, -73.98428,
-73.98507, -73.98463000000001, -73.98719, -73.98406, -73.98406,
-73.98406, -73.98621999999999, -73.98465, -73.98510999999999,
-73.98703, -73.98634, -73.98605, -73.98651, -73.98656, -73.98484,
-73.98425, -73.98448, -73.98625, -73.98644, -73.98375, -73.98453,
-73.98461, -73.98713000000001, -73.98631, -73.98716999999999,
-73.98572, -73.98439, -73.98604, -73.98699, -73.98326,
-73.98836999999999, -73.98836999999999, -73.98836999999999,
-73.98836999999999, -73.98836999999999, -73.98836999999999,
-73.98836999999999, -73.98836999999999, -73.98836999999999,
-73.98836999999999, -73.98836999999999, -73.98836999999999,
-73.98836999999999, -73.98736, -73.98438, -73.98567, -73.98897,
-73.98436, -73.98521, -73.98565, -73.98646, -73.98509, -73.98523,
-73.98451, -73.98642, -73.98425, -73.98276, -73.98271, -73.98778,
-73.98449000000001, -73.98176, -73.9832, -73.98284,
-73.98255999999999, -73.98467, -73.98318, -73.98244,
-73.98255999999999, -73.98262, -73.98493, -73.98968, -73.98552,
-73.98316, -73.98443, -73.98334, -73.98975, -73.98975, -73.98975,
-73.98975, -73.98844, -73.98491999999999, -73.98528, -73.98571,
-73.9857, -73.98354, -73.98222, -73.98216, -73.98293000000001,
-73.98222, -73.98222, -73.98662, -73.98196, -73.98543000000001,
-73.98314, -73.98653, -73.98486, -73.98827, -73.98846, -73.98398,
-73.98298, -73.98722, -73.98309, -73.98843000000001,
-73.98463000000001, -73.98352, -73.9845, -73.98881, -73.98291,
-73.98331999999999, -73.98633000000001, -73.98267, -73.9834,
-73.98813, -73.98931, -73.98493, -73.98716999999999, -73.9813,
-73.98129, -73.98893000000001, -73.98123000000001, -73.99051,
-73.98346, -73.99068, -73.9831, -73.98733, -73.98321, -73.98872,
-73.98225, -73.98647, -73.98916, -73.98453, -73.98113000000001,
-73.99072, -73.98676, -73.98666999999999, -73.98465,
-73.98676999999999, -73.98148, -73.98167, -73.99061999999999,
-73.98869, -73.98669, -73.9816, -73.98839, -73.9845, -73.98693,
-73.98995, -73.98545, -73.98316, -73.98115, -73.98711999999999,
-73.99003, -73.98886, -73.98508000000001, -73.98229,
-73.98508000000001, -73.9825, -73.98663, -73.98756999999999,
-73.98198000000001, -73.9869, -73.98743, -73.9854, -73.99086,
-73.98708, -73.99109, -73.98252, -73.98756999999999, -73.98819,
-73.98449000000001, -73.99038, -73.98853000000001, -73.99081,
-73.98586999999999, -73.98094, -73.98223, -73.98116999999999,
-73.98722, -73.98736, -73.98053, -73.98678000000001, -73.98057,
-73.98731, -73.98193, -73.98546, -73.98532, -73.98362,
-73.98664000000001, -73.98827, -73.98686, -73.98756999999999,
-73.98719, -73.98021999999999, -73.98737, -73.98571, -73.98533,
-73.98706999999999, -73.98384, -73.98666999999999, -73.98309,
-73.98423000000001, -73.99099, -73.98808000000001, -73.98753,
-73.98737, -73.98824, -73.98585, -73.98725, -73.98244, -73.98388,
-73.98994, -73.98558, -73.98723000000001, -73.99013000000001,
-73.98824, -73.98411999999999, -73.98754, -73.98196999999999,
-73.98966, -73.98713000000001, -73.98743, -73.98431,
-73.98680999999999, -73.98534000000001, -73.98692, -73.98693,
-73.9867, -73.98968, -73.98682, -73.98646, -73.98675, -73.98955,
-73.98673000000001, -73.98801, -73.98716999999999, -73.98745,
-73.98822, -73.98550999999999, -73.98706999999999, -73.98468000000001,
-73.98715, -73.98759, -73.98680999999999, -73.98111, -73.98114,
-73.98663, -73.98494000000001, -73.99039, -73.98711, -73.98128,
-73.98801999999999, -73.98965, -73.98666999999999, -73.98329,
-73.98821, -73.98689, -73.98236999999999, -73.98562, -73.98519,
-73.98538, -73.98818, -73.98388, -73.98846, -73.98846, -73.98846,
-73.98846, -73.98846, -73.98846, -73.98846, -73.98846, -73.98846,
-73.98809, -73.98053, -73.98504, -73.98359, -73.98606,
-73.98666999999999, -73.98376, -73.98682, -73.98539, -73.98862,
-73.98689, -73.99136999999999, -73.98321999999999, -73.98671999999999,
-73.98491, -73.98833, -73.9867, -73.98763000000001, -73.98598,
-73.98682, -73.98385999999999, -73.9882, -73.98758000000001,
-73.98692, -73.98693, -73.98376999999999, -73.98869, -73.99157,
-73.98666, -73.9876, -73.98187, -73.98839, -73.98529, -73.99146,
-73.98564, -73.98702, -73.98504, -73.98822, -73.98125, -73.98563,
-73.98821, -73.98711, -73.98691, -73.98828, -73.98839, -73.98129,
-73.9887, -73.98652, -73.98841999999999, -73.98026999999999,
-73.98861, -73.98838, -73.98675, -73.98704000000001,
-73.98704000000001, -73.98704000000001, -73.98704000000001,
-73.98704000000001, -73.98366999999999, -73.98854, -73.98343,
-73.98164, -73.98889, -73.98832, -73.99053, -73.98321999999999,
-73.98674, -73.98214, -73.98886, -73.98006, -73.98863, -73.98416,
-73.98796999999999, -73.98038000000001, -73.98852, -73.98702,
-73.98846999999999, -73.99184, -73.99174000000001, -73.98788,
-73.98709000000001, -73.98894, -73.98942, -73.98659, -73.9919,
-73.98871, -73.97961, -73.99070999999999, -73.98031999999999,
-73.98491999999999, -73.98863, -73.98541999999999, -73.9811,
-73.98846, -73.98834000000001, -73.98892, -73.99185, -73.98897,
-73.98889, -73.98956, -73.98846999999999, -73.986, -73.98239000000001,
-73.98867, -73.98036, -73.98528, -73.98566, -73.98617, -73.98354,
-73.992, -73.98928000000001, -73.97991, -73.98345, -73.98344,
-73.98559, -73.98177, -73.98874, -73.9846, -73.98796, -73.98943,
-73.98984, -73.98807, -73.98759, -73.98864, -73.98868, -73.98719,
-73.98131, -73.98865, -73.98935999999999, -73.98899999999999,
-73.98208000000001, -73.99101, -73.98878, -73.98054, -73.98736,
-73.99064, -73.99095, -73.99088, -73.98834000000001, -73.9885,
-73.98886999999999, -73.98883000000001, -73.9828, -73.98777,
-73.98895999999999, -73.98875, -73.98496999999999, -73.99224,
-73.97968, -73.98347, -73.97929, -73.98281999999999, -73.98759,
-73.98810999999999, -73.98676999999999, -73.98949, -73.99226999999999,
-73.98898, -73.98694, -73.97944, -73.99085, -73.98435, -73.98435,
-73.98435, -73.98435, -73.98435, -73.98132, -73.98683, -73.9791,
-73.98795, -73.98942, -73.98877, -73.99237, -73.9888, -73.9825,
-73.99056, -73.98281999999999, -73.98886999999999, -73.99224,
-73.98005, -73.99243, -73.98159, -73.98895, -73.98204, -73.98444,
-73.98915, -73.98785, -73.98874, -73.99223, -73.98895999999999,
-73.98267, -73.9856, -73.98676, -73.98491, -73.98877, -73.98977,
-73.98875, -73.98942, -73.98503000000001, -73.98892, -73.98962,
-73.98939, -73.98487, -73.98524, -73.98676, -73.99226999999999,
-73.98702, -73.987, -73.98871, -73.99161, -73.98931],
"marker": {"color": "#636efa"}, "mode": "markers", "name": "",
"showlegend": False, "subplot": "mapbox", "type": "scattermapbox"}],
"layout": {"legend": {"tracegroupgap": 0}, "mapbox": {
"accesstoken": "pk.eyJ1IjoibWFyaW9rYXJ0MzQ1IiwiYSI6ImNrcTlodnplOTA4bnoyd282ZnF3MWxuOHkifQ.MoUsFyqkQkGSJuakvu3N2g",
"center": {"lat": 40.75269062, "lon": -73.98624919999999},
"domain": {"x": [0.0, 1.0], "y": [0.0, 1.0]}, "zoom": 13}, "margin": {"t": 60},
"template": {"data": {"bar": [
{"error_x": {"color": "#2a3f5f"}, "error_y": {"color": "#2a3f5f"},
"marker": {"line": {"color": "#E5ECF6", "width": 0.5},
"pattern": {"fillmode": "overlay", "size": 10,
"solidity": 0.2}}, "type": "bar"}], "barpolar": [
{"marker": {"line": {"color": "#E5ECF6", "width": 0.5},
"pattern": {"fillmode": "overlay", "size": 10,
"solidity": 0.2}}, "type": "barpolar"}],
"carpet": [{"aaxis": {"endlinecolor": "#2a3f5f",
"gridcolor": "white",
"linecolor": "white",
"minorgridcolor": "white",
"startlinecolor": "#2a3f5f"},
"baxis": {"endlinecolor": "#2a3f5f",
"gridcolor": "white",
"linecolor": "white",
"minorgridcolor": "white",
"startlinecolor": "#2a3f5f"},
"type": "carpet"}], "choropleth": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"type": "choropleth"}], "contour": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"], [0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"], [1.0, "#f0f921"]],
"type": "contour"}], "contourcarpet": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"type": "contourcarpet"}], "heatmap": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"], [0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"], [1.0, "#f0f921"]],
"type": "heatmap"}], "heatmapgl": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"], [0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"], [1.0, "#f0f921"]],
"type": "heatmapgl"}], "histogram": [{"marker": {
"pattern": {"fillmode": "overlay", "size": 10, "solidity": 0.2}},
"type": "histogram"}],
"histogram2d": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"],
[0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"],
[1.0, "#f0f921"]],
"type": "histogram2d"}], "histogram2dcontour": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"], [0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"], [1.0, "#f0f921"]],
"type": "histogram2dcontour"}], "mesh3d": [
{"colorbar": {"outlinewidth": 0, "ticks": ""}, "type": "mesh3d"}],
"parcoords": [{"line": {
"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "parcoords"}],
"pie": [{"automargin": True, "type": "pie"}],
"scatter": [{"marker": {
"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scatter"}], "scatter3d": [
{"line": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scatter3d"}], "scattercarpet": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scattercarpet"}], "scattergeo": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scattergeo"}], "scattergl": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scattergl"}], "scattermapbox": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scattermapbox"}], "scatterpolar": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scatterpolar"}], "scatterpolargl": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scatterpolargl"}], "scatterternary": [
{"marker": {"colorbar": {"outlinewidth": 0, "ticks": ""}},
"type": "scatterternary"}], "surface": [
{"colorbar": {"outlinewidth": 0, "ticks": ""},
"colorscale": [[0.0, "#0d0887"], [0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"], [1.0, "#f0f921"]],
"type": "surface"}], "table": [
{"cells": {"fill": {"color": "#EBF0F8"}, "line": {"color": "white"}},
"header": {"fill": {"color": "#C8D4E3"},
"line": {"color": "white"}}, "type": "table"}]},
"layout": {"annotationdefaults": {"arrowcolor": "#2a3f5f",
"arrowhead": 0,
"arrowwidth": 1},
"autotypenumbers": "strict", "coloraxis": {
"colorbar": {"outlinewidth": 0, "ticks": ""}},
"colorscale": {
"diverging": [[0, "#8e0152"], [0.1, "#c51b7d"],
[0.2, "#de77ae"], [0.3, "#f1b6da"],
[0.4, "#fde0ef"], [0.5, "#f7f7f7"],
[0.6, "#e6f5d0"], [0.7, "#b8e186"],
[0.8, "#7fbc41"], [0.9, "#4d9221"],
[1, "#276419"]],
"sequential": [[0.0, "#0d0887"],
[0.1111111111111111, "#46039f"],
[0.2222222222222222, "#7201a8"],
[0.3333333333333333, "#9c179e"],
[0.4444444444444444, "#bd3786"],
[0.5555555555555556, "#d8576b"],
[0.6666666666666666, "#ed7953"],
[0.7777777777777778, "#fb9f3a"],
[0.8888888888888888, "#fdca26"],
[1.0, "#f0f921"]],
"sequentialminus": [[0.0, "#0d0887"],
[0.1111111111111111,
"#46039f"],
[0.2222222222222222,
"#7201a8"],
[0.3333333333333333,
"#9c179e"],
[0.4444444444444444,
"#bd3786"],
[0.5555555555555556,
"#d8576b"],
[0.6666666666666666,
"#ed7953"],
[0.7777777777777778,
"#fb9f3a"],
[0.8888888888888888,
"#fdca26"],
[1.0, "#f0f921"]]},
"colorway": ["#636efa", "#EF553B", "#00cc96",
"#ab63fa", "#FFA15A", "#19d3f3",
"#FF6692", "#B6E880", "#FF97FF",
"#FECB52"],
"font": {"color": "#2a3f5f"},
"geo": {"bgcolor": "white", "lakecolor": "white",
"landcolor": "#E5ECF6", "showlakes": True,
"showland": True, "subunitcolor": "white"},
"hoverlabel": {"align": "left"},
"hovermode": "closest", "mapbox": {"style": "light"},
"paper_bgcolor": "white", "plot_bgcolor": "#E5ECF6",
"polar": {"angularaxis": {"gridcolor": "white",
"linecolor": "white",
"ticks": ""},
"bgcolor": "#E5ECF6",
"radialaxis": {"gridcolor": "white",
"linecolor": "white",
"ticks": ""}}, "scene": {
"xaxis": {"backgroundcolor": "#E5ECF6",
"gridcolor": "white", "gridwidth": 2,
"linecolor": "white", "showbackground": True,
"ticks": "", "zerolinecolor": "white"},
"yaxis": {"backgroundcolor": "#E5ECF6",
"gridcolor": "white", "gridwidth": 2,
"linecolor": "white", "showbackground": True,
"ticks": "", "zerolinecolor": "white"},
"zaxis": {"backgroundcolor": "#E5ECF6",
"gridcolor": "white", "gridwidth": 2,
"linecolor": "white", "showbackground": True,
"ticks": "", "zerolinecolor": "white"}},
"shapedefaults": {"line": {"color": "#2a3f5f"}},
"ternary": {"aaxis": {"gridcolor": "white",
"linecolor": "white",
"ticks": ""},
"baxis": {"gridcolor": "white",
"linecolor": "white",
"ticks": ""},
"bgcolor": "#E5ECF6",
"caxis": {"gridcolor": "white",
"linecolor": "white",
"ticks": ""}},
"title": {"x": 0.05},
"xaxis": {"automargin": True, "gridcolor": "white",
"linecolor": "white", "ticks": "",
"title": {"standoff": 15},
"zerolinecolor": "white",
"zerolinewidth": 2},
"yaxis": {"automargin": True, "gridcolor": "white",
"linecolor": "white", "ticks": "",
"title": {"standoff": 15},
"zerolinecolor": "white",
"zerolinewidth": 2}}}}
}
}
}
}
}
)
def visualize(dictionary):
nn = load('models/knn_api.pkl')
nearby_query = nn.kneighbors([[dictionary['latitude'], dictionary['longitude']]])
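    # kneighbors() returns a (distances, indices) tuple of arrays, so
    # nearby_query[1][0] below is the array of row indices of the nearest
    # listings (presumably the model was fit to return 500 neighbours, per
    # the route summary above); those rows are then looked up with iloc.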
vis_data = pd.read_csv('data/clean/latlng_prices.csv').drop(['Unnamed: 0'], axis=1)
df = pd.DataFrame([vis_data.iloc[num] for num in nearby_query[1][0]])
px.set_mapbox_access_token(MAPBOX_KEY)
fig = px.scatter_mapbox(df, lat='latitude', lon='longitude', color='price', color_continuous_scale='agsunset',
size_max=15, zoom=12)
fig.write_html('api/templates/viz.html')
return fig.to_json()
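# Illustrative call (coordinates are placeholders, not a real listing):
#
#     fig_json = visualize({'latitude': 40.7527, 'longitude': -73.9862})
#
# fig_json is the Plotly figure serialized via fig.to_json(); the same figure
# is also written to api/templates/viz.html by the call above.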
| 101.636971 | 150 | 0.327599 |
ec17db7c2044e8a815dc6122efe9817e9fa550bc | 253 | py | Python | more/colander/app.py | morepath/more.colander | fa5b2f07aa78e73c39064ed7cb197e7731474381 | [
"BSD-3-Clause"
] | null | null | null | more/colander/app.py | morepath/more.colander | fa5b2f07aa78e73c39064ed7cb197e7731474381 | [
"BSD-3-Clause"
] | null | null | null | more/colander/app.py | morepath/more.colander | fa5b2f07aa78e73c39064ed7cb197e7731474381 | [
"BSD-3-Clause"
] | null | null | null | import morepath
from .errors import Error
class App(morepath.App):
pass
@App.json(model=Error)
def validation_error_default(self, request):
@request.after
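    # request.after registers adjust_status() to run on the response once the
    # view has returned, so the JSON body below is sent with HTTP status 422.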
def adjust_status(response):
response.status = 422
return self.errors
| 16.866667 | 44 | 0.719368 |
d333b612022285ccd03d69bec00c01dda636884a | 6,916 | py | Python | sdks/python/http_client/v1/polyaxon_sdk/models/v1_interval_schedule.py | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | sdks/python/http_client/v1/polyaxon_sdk/models/v1_interval_schedule.py | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | sdks/python/http_client/v1/polyaxon_sdk/models/v1_interval_schedule.py | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#
# Copyright 2018-2020 Polyaxon, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# coding: utf-8
"""
Polyaxon SDKs and REST API specification.
Polyaxon SDKs and REST API specification. # noqa: E501
The version of the OpenAPI document: 1.1.7
Contact: contact@polyaxon.com
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from polyaxon_sdk.configuration import Configuration
class V1IntervalSchedule(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
"kind": "str",
"start_at": "datetime",
"end_at": "datetime",
"frequency": "int",
"depends_on_past": "bool",
}
attribute_map = {
"kind": "kind",
"start_at": "start_at",
"end_at": "end_at",
"frequency": "frequency",
"depends_on_past": "depends_on_past",
}
def __init__(
self,
kind="interval",
start_at=None,
end_at=None,
frequency=None,
depends_on_past=None,
local_vars_configuration=None,
): # noqa: E501
"""V1IntervalSchedule - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._kind = None
self._start_at = None
self._end_at = None
self._frequency = None
self._depends_on_past = None
self.discriminator = None
if kind is not None:
self.kind = kind
if start_at is not None:
self.start_at = start_at
if end_at is not None:
self.end_at = end_at
if frequency is not None:
self.frequency = frequency
if depends_on_past is not None:
self.depends_on_past = depends_on_past
@property
def kind(self):
"""Gets the kind of this V1IntervalSchedule. # noqa: E501
:return: The kind of this V1IntervalSchedule. # noqa: E501
:rtype: str
"""
return self._kind
@kind.setter
def kind(self, kind):
"""Sets the kind of this V1IntervalSchedule.
:param kind: The kind of this V1IntervalSchedule. # noqa: E501
:type: str
"""
self._kind = kind
@property
def start_at(self):
"""Gets the start_at of this V1IntervalSchedule. # noqa: E501
:return: The start_at of this V1IntervalSchedule. # noqa: E501
:rtype: datetime
"""
return self._start_at
@start_at.setter
def start_at(self, start_at):
"""Sets the start_at of this V1IntervalSchedule.
:param start_at: The start_at of this V1IntervalSchedule. # noqa: E501
:type: datetime
"""
self._start_at = start_at
@property
def end_at(self):
"""Gets the end_at of this V1IntervalSchedule. # noqa: E501
:return: The end_at of this V1IntervalSchedule. # noqa: E501
:rtype: datetime
"""
return self._end_at
@end_at.setter
def end_at(self, end_at):
"""Sets the end_at of this V1IntervalSchedule.
:param end_at: The end_at of this V1IntervalSchedule. # noqa: E501
:type: datetime
"""
self._end_at = end_at
@property
def frequency(self):
"""Gets the frequency of this V1IntervalSchedule. # noqa: E501
:return: The frequency of this V1IntervalSchedule. # noqa: E501
:rtype: int
"""
return self._frequency
@frequency.setter
def frequency(self, frequency):
"""Sets the frequency of this V1IntervalSchedule.
:param frequency: The frequency of this V1IntervalSchedule. # noqa: E501
:type: int
"""
self._frequency = frequency
@property
def depends_on_past(self):
"""Gets the depends_on_past of this V1IntervalSchedule. # noqa: E501
:return: The depends_on_past of this V1IntervalSchedule. # noqa: E501
:rtype: bool
"""
return self._depends_on_past
@depends_on_past.setter
def depends_on_past(self, depends_on_past):
"""Sets the depends_on_past of this V1IntervalSchedule.
:param depends_on_past: The depends_on_past of this V1IntervalSchedule. # noqa: E501
:type: bool
"""
self._depends_on_past = depends_on_past
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(
map(lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value)
)
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(
map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict")
else item,
value.items(),
)
)
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1IntervalSchedule):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1IntervalSchedule):
return True
return self.to_dict() != other.to_dict()
| 27.444444 | 93 | 0.593696 |
b6887134c248a8e17421ebed181ac559df386e02 | 7,676 | py | Python | cmk.py | patricia-cahill/CPU-Manager-for-Kubernetes | 1a9f09bd95789d83aa111bdbed9109f94f5772e9 | [
"Apache-2.0"
] | null | null | null | cmk.py | patricia-cahill/CPU-Manager-for-Kubernetes | 1a9f09bd95789d83aa111bdbed9109f94f5772e9 | [
"Apache-2.0"
] | null | null | null | cmk.py | patricia-cahill/CPU-Manager-for-Kubernetes | 1a9f09bd95789d83aa111bdbed9109f94f5772e9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""cmk.
Usage:
cmk (-h | --help)
cmk --version
cmk cluster-init (--host-list=<list>|--all-hosts) [--cmk-cmd-list=<list>]
[--cmk-img=<img>] [--cmk-img-pol=<pol>] [--conf-dir=<dir>]
[--install-dir=<dir>] [--num-exclusive-cores=<num>]
[--num-shared-cores=<num>] [--pull-secret=<name>]
[--saname=<name>] [--shared-mode=<mode>]
[--exclusive-mode=<mode>] [--namespace=<name>]
[--excl-non-isolcpus=<list>]
cmk init [--conf-dir=<dir>] [--num-exclusive-cores=<num>]
[--num-shared-cores=<num>] [--socket-id=<num>]
[--shared-mode=<mode>] [--exclusive-mode=<mode>]
[--excl-non-isolcpus=<list>]
cmk discover [--conf-dir=<dir>]
cmk describe [--conf-dir=<dir>]
cmk reconcile [--conf-dir=<dir>] [--publish] [--interval=<seconds>]
cmk isolate [--conf-dir=<dir>] [--socket-id=<num>] --pool=<pool> <command>
[-- <args> ...][--no-affinity]
cmk install [--install-dir=<dir>]
cmk node-report [--conf-dir=<dir>] [--publish] [--interval=<seconds>]
cmk uninstall [--install-dir=<dir>] [--conf-dir=<dir>] [--namespace=<name>]
cmk webhook [--conf-file=<file>]
Options:
-h --help Show this screen.
--version Show version.
  --host-list=<list>           Comma separated list of Kubernetes nodes to
prepare for CMK software.
--all-hosts Prepare all Kubernetes nodes for the CMK
software.
  --cmk-cmd-list=<list>        Comma separated list of CMK sub-commands to run
on each host
[default: init,reconcile,install,discover,nodereport].
--cmk-img=<img> CMK Docker image [default: cmk:v1.4.1].
--cmk-img-pol=<pol> Image pull policy for the CMK Docker image
[default: IfNotPresent].
--conf-dir=<dir> CMK configuration directory [default: /etc/cmk].
--install-dir=<dir> CMK install directory [default: /opt/bin].
--interval=<seconds> Number of seconds to wait between rerunning.
If set to 0, will only run once. [default: 0]
--num-exclusive-cores=<num> Number of cores in exclusive pool. [default: 4].
--num-shared-cores=<num> Number of cores in shared pool. [default: 1].
--pool=<pool> Pool name: either infra, shared or exclusive.
--shared-mode=<mode> Shared pool core allocation mode. Possible
modes: packed and spread [default: packed].
--exclusive-mode=<mode> Exclusive pool core allocation mode. Possible
modes: packed and spread [default: packed].
--publish Whether to publish reports to the Kubernetes
API server.
--pull-secret=<name> Name of secret used for pulling Docker images
from restricted Docker registry.
--saname=<name> ServiceAccount name to pass
[default: cmk-serviceaccount].
--socket-id=<num> ID of socket where allocated core should come
from. If it's set to -1 then child command will
be assigned to any socket [default: -1].
--no-affinity Do not set cpu affinity before forking the child
command. In this mode the user program is
responsible for reading the `CMK_CPUS_ASSIGNED`
environment variable and moving a subset of its
own processes and/or tasks to the assigned CPUs.
--namespace=<name> Set the namespace to deploy pods to during the
cluster-init deployment process.
[default: default].
--excl-non-isolcpus=<list> List of physical cores to be added to the extra
exclusive pool, not governed by isolcpus. Both
hyperthreads of the core will be added to the pool
[default: -1]
""" # noqa: E501
from intel import (
clusterinit, describe, discover, init, install,
isolate, nodereport, reconcile, uninstall, webhook)
from docopt import docopt
import logging
import os
import sys
def main():
setup_logging()
args = docopt(__doc__, version="CMK v1.4.1")
if args["cluster-init"]:
clusterinit.cluster_init(args["--host-list"], args["--all-hosts"],
args["--cmk-cmd-list"], args["--cmk-img"],
args["--cmk-img-pol"], args["--conf-dir"],
args["--install-dir"],
args["--num-exclusive-cores"],
args["--num-shared-cores"],
args["--pull-secret"],
args["--saname"], args["--exclusive-mode"],
args["--shared-mode"], args["--namespace"],
args["--excl-non-isolcpus"])
return
if args["init"]:
init.init(args["--conf-dir"],
int(args["--num-exclusive-cores"]),
int(args["--num-shared-cores"]),
args["--exclusive-mode"],
args["--shared-mode"],
args["--excl-non-isolcpus"])
return
if args["discover"]:
discover.discover(args["--conf-dir"])
return
if args["describe"]:
describe.describe(args["--conf-dir"])
return
if args["isolate"]:
isolate.isolate(args["--conf-dir"],
args["--pool"],
args["--no-affinity"],
args["<command>"],
args["<args>"],
args["--socket-id"])
return
if args["reconcile"]:
reconcile.reconcile(args["--conf-dir"],
int(args["--interval"]),
args["--publish"])
return
if args["install"]:
install.install(args["--install-dir"])
return
if args["uninstall"]:
uninstall.uninstall(args["--install-dir"],
args["--conf-dir"],
args["--namespace"])
return
if args["node-report"]:
nodereport.nodereport(args["--conf-dir"],
int(args["--interval"]),
args["--publish"])
return
if args["webhook"]:
webhook.webhook(args["--conf-file"])
def setup_logging():
level = os.getenv("CMK_LOG_LEVEL", logging.INFO)
logging.basicConfig(level=level)
if __name__ == "__main__":
try:
main()
except RuntimeError as e:
logging.error(e)
sys.exit(1)
| 45.152941 | 85 | 0.517327 |
2078f335884a3423e5e2336bb13366ea472fd306 | 5,435 | py | Python | plasmapy/atomic/tests/test_nuclear.py | HaraldNordgren/PlasmaPy | 62589c37109dbdd0f8c00b5b92042d7a460d4db5 | [
"MIT",
"BSD-2-Clause-Patent",
"BSD-2-Clause",
"BSD-3-Clause"
] | 1 | 2019-04-10T06:00:46.000Z | 2019-04-10T06:00:46.000Z | plasmapy/atomic/tests/test_nuclear.py | HaraldNordgren/PlasmaPy | 62589c37109dbdd0f8c00b5b92042d7a460d4db5 | [
"MIT",
"BSD-2-Clause-Patent",
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | plasmapy/atomic/tests/test_nuclear.py | HaraldNordgren/PlasmaPy | 62589c37109dbdd0f8c00b5b92042d7a460d4db5 | [
"MIT",
"BSD-2-Clause-Patent",
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | from astropy import units as u
import numpy as np
from ..nuclear import nuclear_binding_energy, nuclear_reaction_energy
from ...utils import (
InvalidParticleError,
AtomicError,
run_test,
run_test_equivalent_calls,
)
import pytest
test_nuclear_table = [
[nuclear_binding_energy, 'p', {}, 0 * u.J],
[nuclear_binding_energy, 'n', {}, 0 * u.J],
[nuclear_binding_energy, 'p', {}, 0 * u.J],
[nuclear_binding_energy, "H", {}, AtomicError],
[nuclear_binding_energy, 'He-99', {}, InvalidParticleError],
[nuclear_binding_energy, "He", {"mass_numb": 99}, InvalidParticleError],
[nuclear_binding_energy, 3.1415926535j, {}, TypeError],
[nuclear_reaction_energy, (), {'reactants': ['n'], 'products': 3}, TypeError],
[nuclear_reaction_energy, (), {'reactants': ['n'], 'products': ['He-4']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['h'], 'products': ['H-1']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['e-', 'n'], 'products': ['p+']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['e+', 'n'], 'products': ['p-']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['ksdf'], 'products': ['H-3']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['H'], 'products': ['H-1']}, AtomicError],
[nuclear_reaction_energy, (), {'reactants': ['p'], 'products': ['n', 'n', 'e-']}, AtomicError],
[nuclear_reaction_energy, 'H + H --> H', {}, AtomicError],
[nuclear_reaction_energy, 'H + H', {}, AtomicError],
[nuclear_reaction_energy, 1, {}, TypeError],
[nuclear_reaction_energy, 'H-1 + H-1 --> H-1', {}, AtomicError],
[nuclear_reaction_energy, 'p --> n', {}, AtomicError],
[nuclear_reaction_energy, 'p --> p', {'reactants': 'p', 'products': 'p'}, AtomicError],
]
@pytest.mark.parametrize('test_inputs', test_nuclear_table)
def test_nuclear(test_inputs):
run_test(*test_inputs, rtol=1e-3)
test_nuclear_equivalent_calls = [
[nuclear_binding_energy, ['He-4', {}], ['alpha', {}], ['He', {'mass_numb': 4}]],
]
@pytest.mark.parametrize('test_inputs', test_nuclear_equivalent_calls)
def test_nuclear_equivalent_calls(test_inputs):
run_test_equivalent_calls(test_inputs)
def test_nuclear_binding_energy_D_T():
before = nuclear_binding_energy("D") + nuclear_binding_energy("T")
after = nuclear_binding_energy("alpha")
E_in_MeV = (after - before).to(u.MeV).value # D + T --> alpha + n + E
assert np.isclose(E_in_MeV, 17.58, rtol=0.01)
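    # Rough numbers behind the assertion (approximate values): the binding
    # energies are ~2.22 MeV (D), ~8.48 MeV (T) and ~28.30 MeV (He-4), so the
    # released energy is about 28.30 - (2.22 + 8.48) = 17.6 MeV.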
def test_nuclear_reaction_energy():
reaction1 = 'D + T --> alpha + n'
reaction2 = 'T + D -> n + alpha'
released_energy1 = nuclear_reaction_energy(reaction1)
released_energy2 = nuclear_reaction_energy(reaction2)
assert np.isclose((released_energy1.to(u.MeV)).value, 17.58, rtol=0.01)
assert released_energy1 == released_energy2
assert nuclear_reaction_energy('n + p+ --> n + p+ + p- + p+') == \
nuclear_reaction_energy('n + p+ --> n + 2*p+ + p-')
nuclear_reaction_energy('neutron + antineutron --> neutron + antineutron')
def test_nuclear_reaction_energy_triple_alpha():
triple_alpha1 = 'alpha + He-4 --> Be-8'
triple_alpha2 = 'Be-8 + alpha --> carbon-12'
energy_triplealpha1 = nuclear_reaction_energy(triple_alpha1)
energy_triplealpha2 = nuclear_reaction_energy(triple_alpha2)
assert np.isclose(energy_triplealpha1.to(u.keV).value, -91.8, atol=0.1)
assert np.isclose(energy_triplealpha2.to(u.MeV).value, 7.367, atol=0.1)
reactants = ['He-4', 'alpha']
products = ['Be-8']
energy = nuclear_reaction_energy(reactants=reactants, products=products)
assert np.isclose(energy.to(u.keV).value, -91.8, atol=0.1)
def test_nuclear_reaction_energy_alpha_decay():
alpha_decay_example = 'U-238 --> Th-234 + alpha'
energy_alpha_decay = nuclear_reaction_energy(alpha_decay_example)
assert np.isclose(energy_alpha_decay.to(u.MeV).value, 4.26975, atol=1e-5)
def test_nuclear_reaction_energy_triple_alpha_r():
triple_alpha1_r = '4*He-4 --> 2*Be-8'
energy_triplealpha1_r = nuclear_reaction_energy(triple_alpha1_r)
assert np.isclose(energy_triplealpha1_r.to(u.keV).value,
-91.8 * 2, atol=0.1)
def test_nuclear_reaction_energy_beta():
energy1 = nuclear_reaction_energy(reactants=['n'], products=['p', 'e-'])
assert np.isclose(energy1.to(u.MeV).value, 0.78, atol=0.01)
energy2 = nuclear_reaction_energy(
reactants=['Mg-23'], products=['Na-23', 'e+'])
assert np.isclose(energy2.to(u.MeV).value, 3.034591, atol=1e-5)
# (reactants, products, expectedMeV, tol)
nuclear_reaction_energy_kwargs_table = [
('H-1', 'p', 0.0, 0.0),
(['B-10', 'n'], ['Li-7', 'He-4'], 2.8, 0.06),
(['Li-6', 'D'], ['2*alpha'], 22.2, 0.06),
(['C-12', 'p'], 'N-13', 1.95, 0.006),
(['N-13'], ['C-13', 'e+'], 1.20, 0.006),
(['C-13', 'hydrogen-1'], ['Nitrogen-14'], 7.54, 0.006),
(['N-14', 'H-1'], ['O-15'], 7.35, 0.006),
(['O-15'], ['N-15', 'e+'], 1.73, 0.006),
(('N-15', 'H-1'), ('C-12', 'He-4'), 4.96, 0.006),
]
@pytest.mark.parametrize(
"reactants, products, expectedMeV, tol",
nuclear_reaction_energy_kwargs_table)
def test_nuclear_reaction_energy_kwargs(reactants, products, expectedMeV, tol):
energy = nuclear_reaction_energy(reactants=reactants, products=products).si
expected = (expectedMeV * u.MeV).si
assert np.isclose(expected.value, energy.value, atol=tol)
| 42.795276 | 99 | 0.658142 |
529dfb711d1334b0ae293d022968a8a0a65a75ac | 1,464 | py | Python | flaski/apps/external.py | mpg-age-bioinformatics/flaski | f56e00dd80d8706ecb8593ba6585a97eed881896 | [
"MIT"
] | 9 | 2020-08-03T01:22:59.000Z | 2022-03-03T02:02:04.000Z | flaski/apps/external.py | mpg-age-bioinformatics/flaski | f56e00dd80d8706ecb8593ba6585a97eed881896 | [
"MIT"
] | 79 | 2020-06-03T06:34:46.000Z | 2021-09-22T13:31:43.000Z | flaski/apps/external.py | mpg-age-bioinformatics/flaski | f56e00dd80d8706ecb8593ba6585a97eed881896 | [
"MIT"
] | 5 | 2020-10-05T10:20:23.000Z | 2022-03-01T14:23:12.000Z | ###########################################################################
#
# example of Apps imported as plugin through docker-compose.yml mapping
#
# external.py (this file):
#
# from flaski.apps.routes import histogram
# EXTERNAL_APPS=[{ "name":"Histogram", "id":'histogram_more',"link":'histogram' ,"java":"javascript:ReverseDisplay('histogram_more')", "description":"A histogram."}]
#
# from flaski.apps.main.histogram import figure_defaults as histogram_def
# EXT_DEFAULTS_DIC={"histogram":histogram_def}
#
# docker-compose.yml:
# volumes:
# - ~/histogram/route.py:/flaski/flaski/apps/routes/histogram.py
# - ~/histogram/main.py:/flaski/flaski/apps/main/histogram.py
# - ~/histogram/external.py:/flaski/flaski/apps/external.py
#
###########################################################################
# from flaski.apps.routes import igseaplot
# EXTERNAL_APPS=[{ "name":"iGSEA plot", "id":'igseaplot_more',"link":'igseaplot' ,"java":"javascript:ReverseDisplay('igseaplot_more')", "description":"An app to customize GSEA plots."}]
# from flaski.apps.main.igseaplot import figure_defaults as igseaplot_def
# EXT_DEFAULTS_DIC={"igseaplot":igseaplot_def}
# docker-compose.yml:
# volumes:
#    - ~/igseaplot/route.py:/flaski/flaski/apps/routes/igseaplot.py
#    - ~/igseaplot/main.py:/flaski/flaski/apps/main/igseaplot.py
#    - ~/igseaplot/external.py:/flaski/flaski/apps/external.py
EXTERNAL_APPS=[]
EXT_DEFAULTS_DIC={} | 43.058824 | 185 | 0.653005 |
792a52171a25f68262fbc5274d9cd48feb449ad3 | 40,398 | py | Python | rbtools/clients/tfs.py | torcolvin/rbtools | 3fbea5f57d0768488f56f398a174056e837f51b1 | [
"MIT"
] | null | null | null | rbtools/clients/tfs.py | torcolvin/rbtools | 3fbea5f57d0768488f56f398a174056e837f51b1 | [
"MIT"
] | null | null | null | rbtools/clients/tfs.py | torcolvin/rbtools | 3fbea5f57d0768488f56f398a174056e837f51b1 | [
"MIT"
] | null | null | null | """A client for Team Foundation Server."""
from __future__ import unicode_literals
import logging
import os
import re
import sys
import tempfile
import xml.etree.ElementTree as ET
from six.moves.urllib.parse import unquote
from rbtools.clients import RepositoryInfo, SCMClient
from rbtools.clients.errors import (InvalidRevisionSpecError,
SCMError,
TooManyRevisionsError)
from rbtools.utils.appdirs import user_data_dir
from rbtools.utils.checks import check_gnu_diff, check_install
from rbtools.utils.diffs import filename_match_any_patterns
from rbtools.utils.process import execute
class TFExeWrapper(object):
"""Implementation wrapper for using VS2017's tf.exe."""
REVISION_WORKING_COPY = '--rbtools-working-copy'
def __init__(self, config=None, options=None):
"""Initialize the wrapper.
Args:
config (dict, optional):
The loaded configuration.
options (argparse.Namespace, optional):
The command line options.
"""
self.config = config
self.options = options
def get_local_path(self):
"""Return the local path to the working tree.
Returns:
unicode:
The filesystem path of the repository on the client system.
"""
workfold = self._run_tf(['vc', 'workfold', os.getcwd()])
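        # The workfold listing is expected to contain a line such as
        # "Collection: http://tfs.example.com:8080/tfs/DefaultCollection"
        # (the URL here is only illustrative); that collection URL is what
        # gets returned as the repository path.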
m = re.search('^Collection: (.*)$', workfold, re.MULTILINE)
if m:
return unquote(m.group(1))
logging.debug('Could not find the collection from "tf vc workfold"')
return None
def get_repository_info(self):
"""Return repository information for the current working tree.
Returns:
rbtools.clients.RepositoryInfo:
The repository info structure.
"""
path = self.get_local_path()
if path:
# Now that we know it's TFS, make sure we have GNU diff installed, and
# error out if we don't.
check_gnu_diff()
return RepositoryInfo(path=path, local_path=path)
return None
def parse_revision_spec(self, revisions):
"""Parse the given revision spec.
Args:
revisions (list of unicode):
A list of revisions as specified by the user. Items in the list
do not necessarily represent a single revision, since the user
can use the TFS-native syntax of ``r1~r2``. Versions passed in
can be any versionspec, such as a changeset number,
``L``-prefixed label name, ``W`` (latest workspace version), or
``T`` (latest upstream version).
Raises:
rbtools.clients.errors.TooManyRevisionsError:
Too many revisions were specified.
rbtools.clients.errors.InvalidRevisionSpecError:
The given revision spec could not be parsed.
Returns:
dict:
A dictionary with the following keys:
``base`` (:py:class:`unicode`):
A revision to use as the base of the resulting diff.
``tip`` (:py:class:`unicode`):
A revision to use as the tip of the resulting diff.
``parent_base`` (:py:class:`unicode`, optional):
The revision to use as the base of a parent diff.
These will be used to generate the diffs to upload to Review Board
(or print). The diff for review will include the changes in (base,
tip], and the parent diff (if necessary) will include (parent,
base].
If a single revision is passed in, this will return the parent of
that revision for "base" and the passed-in revision for "tip".
If zero revisions are passed in, this will return revisions
relevant for the "current change" (changes in the work folder which
have not yet been checked in).
"""
n_revisions = len(revisions)
if n_revisions == 1 and '~' in revisions[0]:
revisions = revisions[0].split('~')
n_revisions = len(revisions)
if n_revisions == 0:
# Most recent checked-out revision -- working copy
return {
'base': self._convert_symbolic_revision('W'),
'tip': self.REVISION_WORKING_COPY,
}
elif n_revisions == 1:
# Either a numeric revision (n-1:n) or a changelist
revision = self._convert_symbolic_revision(revisions[0])
return {
'base': revision - 1,
'tip': revision,
}
elif n_revisions == 2:
# Diff between two numeric revisions
return {
'base': self._convert_symbolic_revision(revisions[0]),
'tip': self._convert_symbolic_revision(revisions[1]),
}
else:
raise TooManyRevisionsError
return {
'base': None,
'tip': None,
}
def _convert_symbolic_revision(self, revision, path=None):
"""Convert a symbolic revision into a numeric changeset.
Args:
revision (unicode):
The TFS versionspec to convert.
path (unicode, optional):
The itemspec that the revision applies to.
Returns:
int:
The changeset number corresponding to the versionspec.
"""
# We pass results_unicode=False because that uses the filesystem
# encoding to decode the output, but the XML results we get should
# always be UTF-8, and are well-formed with the encoding specified. We
# can therefore let ElementTree determine how to decode it.
data = self._run_tf(['vc', 'history', '/stopafter:1', '/recursive',
'/format:detailed', '/version:%s' % revision,
path or os.getcwd()])
        m = re.search(r'^Changeset: (\d+)$', data, re.MULTILINE)
if not m:
logging.debug('Failed to parse output from "tf vc history":\n%s',
data)
raise InvalidRevisionSpecError(
'"%s" does not appear to be a valid versionspec' % revision)
def diff(self, revisions, include_files, exclude_patterns, **kwargs):
"""Return the generated diff.
Args:
revisions (dict):
A dictionary containing ``base`` and ``tip`` keys.
include_files (list):
A list of file paths to include in the diff.
exclude_patterns (list):
A list of file paths to exclude from the diff.
**kwargs (dict, unused):
Unused keyword arguments.
Returns:
dict:
A dictionary containing the following keys:
``diff`` (:py:class:`bytes`):
The contents of the diff to upload.
                ``base_commit_id`` (:py:class:`unicode`, optional):
The ID of the commit that the change is based on, if available.
This is necessary for some hosting services that don't provide
individual file access.
"""
base = str(revisions['base'])
tip = str(revisions['tip'])
if tip == self.REVISION_WORKING_COPY:
# TODO: support committed revisions
return self._diff_working_copy(base, include_files,
exclude_patterns)
else:
raise SCMError('Posting committed changes is not yet supported '
'for TFS when using the tf.exe wrapper.')
def _diff_working_copy(self, base, include_files, exclude_patterns):
"""Return a diff of the working copy.
Args:
base (unicode):
The base revision to diff against.
include_files (list):
A list of file paths to include in the diff.
exclude_patterns (list):
A list of file paths to exclude from the diff.
Returns:
dict:
A dictionary containing ``diff``, ``parent_diff``, and
``base_commit_id`` keys. In the case of TFS, the parent diff key
will always be ``None``.
"""
# We pass results_unicode=False because that uses the filesystem
# encoding, but the XML results we get should always be UTF-8, and are
# well-formed with the encoding specified. We can therefore let
# ElementTree determine how to decode it.
status = self._run_tf(['vc', 'status', '/format:xml'],
results_unicode=False)
root = ET.fromstring(status)
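        # Each <PendingChange> element in the "tf vc status /format:xml"
        # output carries attributes such as item, srcitem, local, chg, type,
        # enc and svrfm (e.g. chg="Edit", with enc="-1" marking binary
        # files); the loop below turns each one into a unified-diff hunk.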
diff = []
for pending_change in root.findall(
'./PendingSet/PendingChanges/PendingChange'):
action = pending_change.attrib['chg'].split(' ')
old_filename = \
pending_change.attrib.get('srcitem', '').encode('utf-8')
new_filename = pending_change.attrib['item'].encode('utf-8')
local_filename = pending_change.attrib['local']
old_version = \
pending_change.attrib.get('svrfm', '0').encode('utf-8')
file_type = pending_change.attrib['type']
encoding = pending_change.attrib['enc']
new_version = b'(pending)'
old_data = b''
new_data = b''
binary = (encoding == '-1')
copied = 'Branch' in action
if (not file_type or (not os.path.isfile(local_filename) and
'Delete' not in action)):
continue
if (exclude_patterns and
filename_match_any_patterns(local_filename,
exclude_patterns,
base_dir=None)):
continue
if 'Add' in action:
old_filename = b'/dev/null'
if not binary:
with open(local_filename, 'rb') as f:
new_data = f.read()
old_data = b''
elif 'Delete' in action:
old_data = self._run_tf(
['vc', 'view', '/version:%s' % old_version.decode('utf-8'),
old_filename.decode('utf-8')],
results_unicode=False)
new_data = b''
new_version = b'(deleted)'
elif 'Edit' in action:
if not binary:
old_data = self._run_tf(
['vc', 'view', old_filename.decode('utf-8'),
'/version:%s' % old_version.decode('utf-8')],
results_unicode=False)
with open(local_filename, 'rb') as f:
new_data = f.read()
old_label = b'%s\t%s' % (old_filename, old_version)
new_label = b'%s\t%s' % (new_filename, new_version)
if copied:
diff.append(b'Copied from: %s\n' % old_filename)
if binary:
if 'Add' in action:
old_filename = new_filename
diff.append(b'--- %s\n' % old_label)
diff.append(b'+++ %s\n' % new_label)
diff.append(b'Binary files %s and %s differ\n'
% (old_filename, new_filename))
elif old_filename != new_filename and old_data == new_data:
# Renamed file with no changes.
diff.append(b'--- %s\n' % old_label)
diff.append(b'+++ %s\n' % new_label)
else:
old_tmp = tempfile.NamedTemporaryFile(delete=False)
old_tmp.write(old_data)
old_tmp.close()
new_tmp = tempfile.NamedTemporaryFile(delete=False)
new_tmp.write(new_data)
new_tmp.close()
unified_diff = execute(
['diff', '-u',
'--label', old_label.decode('utf-8'),
'--label', new_label.decode('utf-8'),
old_tmp.name, new_tmp.name],
extra_ignore_errors=(1,),
log_output_on_error=False,
results_unicode=False)
diff.append(unified_diff)
os.unlink(old_tmp.name)
os.unlink(new_tmp.name)
return {
'diff': b''.join(diff),
'parent_diff': None,
'base_commit_id': base,
}
def _run_tf(self, args, **kwargs):
"""Run the "tf" command.
Args:
args (list):
A list of arguments to pass to rb-tfs.
**kwargs (dict):
Additional keyword arguments for the :py:meth:`execute` call.
Returns:
unicode:
The output of the command.
"""
command = ['tf'] + args + ['/noprompt']
if getattr(self.options, 'tfs_login', None):
command.append('/login:%s' % self.options.tfs_login)
return execute(command, ignore_errors=True, **kwargs)
class TEEWrapper(object):
"""Implementation wrapper for using Team Explorer Everywhere."""
REVISION_WORKING_COPY = '--rbtools-working-copy'
def __init__(self, config=None, options=None):
"""Initialize the wrapper.
Args:
config (dict, optional):
The loaded configuration.
options (argparse.Namespace, optional):
The command line options.
"""
self.config = config
self.options = options
self.tf = None
tf_locations = []
if options and getattr(options, 'tf_cmd', None):
tf_locations.append(options.tf_cmd)
if sys.platform.startswith('win'):
# First check in the system path. If that doesn't work, look in the
# two standard install locations.
tf_locations.extend([
'tf.cmd',
(r'%programfiles(x86)%\Microsoft Visual Studio 12.0\Common7'
r'\IDE\tf.cmd'),
(r'%programfiles%\Microsoft Team Foundation Server 12.0\Tools'
r'\tf.cmd'),
])
else:
tf_locations.append('tf')
for location in tf_locations:
location = os.path.expandvars(location)
if check_install([location, 'help']):
self.tf = location
break
def get_local_path(self):
"""Return the local path to the working tree.
Returns:
unicode:
The filesystem path of the repository on the client system.
"""
if self.tf is None:
logging.debug('Unable to execute "tf help": skipping TFS')
return None
workfold = self._run_tf(['workfold', os.getcwd()])
m = re.search('^Collection: (.*)$', workfold, re.MULTILINE)
if m:
return unquote(m.group(1))
logging.debug('Could not find the collection from "tf workfold"')
return None
def get_repository_info(self):
"""Return repository information for the current working tree.
Returns:
rbtools.clients.RepositoryInfo:
The repository info structure.
"""
path = self.get_local_path()
if path:
# Now that we know it's TFS, make sure we have GNU diff installed,
# and error out if we don't.
check_gnu_diff()
return RepositoryInfo(path=path, local_path=path)
return None
def parse_revision_spec(self, revisions):
"""Parse the given revision spec.
Args:
revisions (list of unicode):
A list of revisions as specified by the user. Items in the list
do not necessarily represent a single revision, since the user
can use the TFS-native syntax of ``r1~r2``. Versions passed in
can be any versionspec, such as a changeset number,
``L``-prefixed label name, ``W`` (latest workspace version), or
``T`` (latest upstream version).
Returns:
dict:
A dictionary with the following keys:
``base`` (:py:class:`unicode`):
A revision to use as the base of the resulting diff.
``tip`` (:py:class:`unicode`):
A revision to use as the tip of the resulting diff.
``parent_base`` (:py:class:`unicode`, optional):
The revision to use as the base of a parent diff.
These will be used to generate the diffs to upload to Review Board
(or print). The diff for review will include the changes in (base,
tip], and the parent diff (if necessary) will include (parent,
base].
If a single revision is passed in, this will return the parent of
that revision for "base" and the passed-in revision for "tip".
If zero revisions are passed in, this will return revisions
relevant for the "current change" (changes in the work folder which
have not yet been checked in).
Raises:
rbtools.clients.errors.TooManyRevisionsError:
Too many revisions were specified.
rbtools.clients.errors.InvalidRevisionSpecError:
The given revision spec could not be parsed.
"""
n_revisions = len(revisions)
if n_revisions == 1 and '~' in revisions[0]:
revisions = revisions[0].split('~')
n_revisions = len(revisions)
if n_revisions == 0:
# Most recent checked-out revision -- working copy
return {
'base': self._convert_symbolic_revision('W'),
'tip': self.REVISION_WORKING_COPY,
}
elif n_revisions == 1:
# Either a numeric revision (n-1:n) or a changelist
revision = self._convert_symbolic_revision(revisions[0])
return {
'base': revision - 1,
'tip': revision,
}
elif n_revisions == 2:
# Diff between two numeric revisions
return {
'base': self._convert_symbolic_revision(revisions[0]),
'tip': self._convert_symbolic_revision(revisions[1]),
}
else:
raise TooManyRevisionsError
return {
'base': None,
'tip': None,
}
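    # Illustrative sketch of the return values above (changeset numbers are
    # hypothetical; every versionspec is resolved through
    # _convert_symbolic_revision(), so the real values come from "tf history"):
    #
    #     parse_revision_spec([])           # {'base': <W changeset>, 'tip': REVISION_WORKING_COPY}
    #     parse_revision_spec(['123~456'])  # {'base': <changeset for '123'>, 'tip': <changeset for '456'>}
    #     parse_revision_spec(['456'])      # {'base': 455, 'tip': 456}, assuming '456' resolves to 456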
def _convert_symbolic_revision(self, revision, path=None):
"""Convert a symbolic revision into a numeric changeset.
Args:
revision (unicode):
The TFS versionspec to convert.
path (unicode, optional):
The itemspec that the revision applies to.
Returns:
int:
The changeset number corresponding to the versionspec.
"""
args = ['history', '-stopafter:1', '-recursive', '-format:xml']
        # 'tf history -version:W' doesn't seem to work (even though it's
# supposed to). Luckily, W is the default when -version isn't passed,
# so just elide it.
if revision != 'W':
args.append('-version:%s' % revision)
args.append(path or os.getcwd())
# We pass results_unicode=False because that uses the filesystem
# encoding to decode the output, but the XML results we get should
# always be UTF-8, and are well-formed with the encoding specified. We
# can therefore let ElementTree determine how to decode it.
data = self._run_tf(args, results_unicode=False)
try:
root = ET.fromstring(data)
item = root.find('./changeset')
if item is not None:
return int(item.attrib['id'])
else:
raise Exception('No changesets found')
except Exception as e:
logging.debug('Failed to parse output from "tf history": %s\n%s',
e, data, exc_info=True)
raise InvalidRevisionSpecError(
'"%s" does not appear to be a valid versionspec' % revision)
def diff(self, revisions, include_files, exclude_patterns):
"""Return the generated diff.
Args:
revisions (dict):
A dictionary containing ``base`` and ``tip`` keys.
include_files (list):
A list of file paths to include in the diff.
exclude_patterns (list):
A list of file paths to exclude from the diff.
Returns:
dict:
A dictionary containing the following keys:
``diff`` (:py:class:`bytes`):
The contents of the diff to upload.
            ``base_commit_id`` (:py:class:`unicode`, optional):
The ID of the commit that the change is based on, if available.
This is necessary for some hosting services that don't provide
individual file access.
"""
base = str(revisions['base'])
tip = str(revisions['tip'])
if tip == self.REVISION_WORKING_COPY:
return self._diff_working_copy(base, include_files,
exclude_patterns)
else:
raise SCMError('Posting committed changes is not yet supported '
'for TFS when using the Team Explorer Everywhere '
'wrapper.')
def _diff_working_copy(self, base, include_files, exclude_patterns):
"""Return a diff of the working copy.
Args:
base (unicode):
The base revision to diff against.
include_files (list):
A list of file paths to include in the diff.
exclude_patterns (list):
A list of file paths to exclude from the diff.
Returns:
dict:
A dictionary containing ``diff``, ``parent_diff``, and
``base_commit_id`` keys. In the case of TFS, the parent diff key
will always be ``None``.
"""
# We pass results_unicode=False because that uses the filesystem
# encoding, but the XML results we get should always be UTF-8, and are
# well-formed with the encoding specified. We can therefore let
# ElementTree determine how to decode it.
status = self._run_tf(['status', '-format:xml'], results_unicode=False)
root = ET.fromstring(status)
diff = []
for pending_change in root.findall('./pending-changes/pending-change'):
action = pending_change.attrib['change-type'].split(', ')
new_filename = pending_change.attrib['server-item'].encode('utf-8')
local_filename = pending_change.attrib['local-item']
old_version = pending_change.attrib['version'].encode('utf-8')
file_type = pending_change.attrib.get('file-type')
new_version = b'(pending)'
old_data = b''
new_data = b''
copied = 'branch' in action
if (not file_type or (not os.path.isfile(local_filename) and
'delete' not in action)):
continue
if (exclude_patterns and
filename_match_any_patterns(local_filename,
exclude_patterns,
base_dir=None)):
continue
if 'rename' in action:
old_filename = \
pending_change.attrib['source-item'].encode('utf-8')
else:
old_filename = new_filename
if copied:
old_filename = \
pending_change.attrib['source-item'].encode('utf-8')
old_version = (
'%d' % self._convert_symbolic_revision(
'W', old_filename.decode('utf-8')))
if 'add' in action:
old_filename = b'/dev/null'
if file_type != 'binary':
                    with open(local_filename, 'rb') as f:
new_data = f.read()
old_data = b''
elif 'delete' in action:
old_data = self._run_tf(
['print', '-version:%s' % old_version.decode('utf-8'),
old_filename.decode('utf-8')],
results_unicode=False)
new_data = b''
new_version = b'(deleted)'
elif 'edit' in action:
old_data = self._run_tf(
['print', '-version:%s' % old_version.decode('utf-8'),
old_filename.decode('utf-8')],
results_unicode=False)
                with open(local_filename, 'rb') as f:
new_data = f.read()
old_label = b'%s\t%s' % (old_filename, old_version)
new_label = b'%s\t%s' % (new_filename, new_version)
if copied:
diff.append(b'Copied from: %s\n' % old_filename)
if file_type == 'binary':
if 'add' in action:
old_filename = new_filename
diff.append(b'--- %s\n' % old_label)
diff.append(b'+++ %s\n' % new_label)
diff.append(b'Binary files %s and %s differ\n'
% (old_filename, new_filename))
elif old_filename != new_filename and old_data == new_data:
# Renamed file with no changes
diff.append(b'--- %s\n' % old_label)
diff.append(b'+++ %s\n' % new_label)
else:
old_tmp = tempfile.NamedTemporaryFile(delete=False)
old_tmp.write(old_data)
old_tmp.close()
new_tmp = tempfile.NamedTemporaryFile(delete=False)
new_tmp.write(new_data)
new_tmp.close()
unified_diff = execute(
['diff', '-u',
'--label', old_label.decode('utf-8'),
'--label', new_label.decode('utf-8'),
old_tmp.name, new_tmp.name],
extra_ignore_errors=(1,),
log_output_on_error=False,
results_unicode=False)
diff.append(unified_diff)
os.unlink(old_tmp.name)
os.unlink(new_tmp.name)
if len(root.findall('./candidate-pending-changes/pending-change')) > 0:
logging.warning('There are added or deleted files which have not '
'been added to TFS. These will not be included '
'in your review request.')
return {
'diff': b''.join(diff),
'parent_diff': None,
'base_commit_id': base,
}
def _run_tf(self, args, **kwargs):
"""Run the "tf" command.
Args:
args (list):
                A list of arguments to pass to the "tf" command.
**kwargs (dict):
Additional keyword arguments for the :py:meth:`execute` call.
Returns:
unicode:
The output of the command.
"""
cmdline = [self.tf, '-noprompt']
if getattr(self.options, 'tfs_login', None):
cmdline.append('-login:%s' % self.options.tfs_login)
cmdline += args
        # Use /-style arguments when running on Windows.
if sys.platform.startswith('win'):
for i, arg in enumerate(cmdline):
if arg.startswith('-'):
cmdline[i] = '/' + arg[1:]
return execute(cmdline, ignore_errors=True, **kwargs)
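    # Illustrative example of the argument rewriting above (the login value is
    # hypothetical): on Windows, _run_tf(['status', '-format:xml']) with
    # --tfs-login set ends up executing roughly
    #
    #     tf /noprompt /login:user,pass status /format:xml
    #
    # while on other platforms the '-' style arguments are passed unchanged.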
class TFHelperWrapper(object):
"""Implementation wrapper using our own helper."""
def __init__(self, helper_path, config=None, options=None):
"""Initialize the wrapper.
Args:
helper_path (unicode):
The path to the helper binary.
config (dict, optional):
The loaded configuration.
options (argparse.Namespace, optional):
The command line options.
"""
self.helper_path = helper_path
self.config = config
self.options = options
def get_local_path(self):
"""Return the local path to the working tree.
Returns:
unicode:
The filesystem path of the repository on the client system.
"""
rc, path, errors = self._run_helper(['get-collection'],
ignore_errors=True)
if rc == 0:
return path.strip()
return None
def get_repository_info(self):
"""Return repository information for the current working tree.
Returns:
rbtools.clients.RepositoryInfo:
The repository info structure.
"""
path = self.get_local_path()
if path:
return RepositoryInfo(path=path, local_path=path)
return None
def parse_revision_spec(self, revisions):
"""Parse the given revision spec.
Args:
revisions (list of unicode):
A list of revisions as specified by the user. Items in the list
do not necessarily represent a single revision, since the user
can use the TFS-native syntax of ``r1~r2``. Versions passed in
can be any versionspec, such as a changeset number,
``L``-prefixed label name, ``W`` (latest workspace version), or
``T`` (latest upstream version).
Returns:
dict:
A dictionary with the following keys:
``base`` (:py:class:`unicode`):
A revision to use as the base of the resulting diff.
``tip`` (:py:class:`unicode`):
A revision to use as the tip of the resulting diff.
``parent_base`` (:py:class:`unicode`, optional):
The revision to use as the base of a parent diff.
These will be used to generate the diffs to upload to Review Board
(or print). The diff for review will include the changes in (base,
tip], and the parent diff (if necessary) will include (parent,
base].
If a single revision is passed in, this will return the parent of
that revision for "base" and the passed-in revision for "tip".
If zero revisions are passed in, this will return revisions
relevant for the "current change" (changes in the work folder which
have not yet been checked in).
Raises:
rbtools.clients.errors.TooManyRevisionsError:
Too many revisions were specified.
rbtools.clients.errors.InvalidRevisionSpecError:
The given revision spec could not be parsed.
"""
if len(revisions) > 2:
raise TooManyRevisionsError
rc, revisions, errors = self._run_helper(
['parse-revision'] + revisions, split_lines=True)
if rc == 0:
return {
'base': revisions[0].strip(),
'tip': revisions[1].strip()
}
else:
raise InvalidRevisionSpecError('\n'.join(errors))
def diff(self, revisions, include_files, exclude_patterns):
"""Return the generated diff.
Args:
revisions (dict):
A dictionary containing ``base`` and ``tip`` keys.
include_files (list):
A list of file paths to include in the diff.
exclude_patterns (list):
A list of file paths to exclude from the diff.
Returns:
dict:
A dictionary containing the following keys:
``diff`` (:py:class:`bytes`):
The contents of the diff to upload.
            ``base_commit_id`` (:py:class:`unicode`, optional):
The ID of the commit that the change is based on, if available.
This is necessary for some hosting services that don't provide
individual file access.
Raises:
rbtools.clients.errors.SCMError:
Something failed when creating the diff.
"""
base = revisions['base']
tip = revisions['tip']
rc, diff, errors = self._run_helper(['diff', '--', base, tip],
ignore_errors=True,
results_unicode=False,
log_output_on_error=False)
if rc in (0, 2):
if rc == 2:
# Magic return code that means success, but there were
# un-tracked files in the working directory.
logging.warning('There are added or deleted files which have '
'not been added to TFS. These will not be '
'included in your review request.')
return {
'diff': diff,
'parent_diff': None,
'base_commit_id': None,
}
else:
raise SCMError(errors.strip())
def _run_helper(self, args, **kwargs):
"""Run the rb-tfs binary.
Args:
args (list):
A list of arguments to pass to rb-tfs.
**kwargs (dict):
Additional keyword arguments for the :py:meth:`execute` call.
Returns:
tuple:
A 3-tuple of return code, output, and error output. The output and
error output may be lists depending on the contents of ``kwargs``.
"""
if len(args) == 0:
raise ValueError('_run_helper called without any arguments')
cmdline = ['java']
cmdline += getattr(self.config, 'JAVA_OPTS', ['-Xmx2048M'])
cmdline += ['-jar', self.helper_path]
cmdline.append(args[0])
if self.options:
if self.options.debug:
cmdline.append('--debug')
if getattr(self.options, 'tfs_shelveset_owner', None):
cmdline += ['--shelveset-owner',
self.options.tfs_shelveset_owner]
if getattr(self.options, 'tfs_login', None):
cmdline += ['--login', self.options.tfs_login]
cmdline += args[1:]
return execute(cmdline,
with_errors=False,
results_unicode=False,
return_error_code=True,
return_errors=True,
**kwargs)
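    # Illustrative sketch of the assembled command (paths and credentials are
    # hypothetical): _run_helper(['diff', '--', '123', '456']) with --tfs-login
    # set builds roughly
    #
    #     java -Xmx2048M -jar /path/to/rb-tfs.jar diff --login user,pass -- 123 456
    #
    # with --debug and --shelveset-owner inserted in the same position when the
    # corresponding options are present.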
class TFSClient(SCMClient):
"""A client for Team Foundation Server."""
name = 'Team Foundation Server'
server_tool_names = 'Team Foundation Server'
supports_diff_exclude_patterns = True
supports_patch_revert = True
def __init__(self, config=None, options=None):
"""Initialize the client.
Args:
config (dict, optional):
The loaded configuration.
options (argparse.Namespace, optional):
The command line options.
"""
super(TFSClient, self).__init__(config, options)
# There are three different backends that can be used to access the
# underlying TFS repository. We try them in this order:
# - VS2017+ tf.exe
# - Our custom rb-tfs wrapper, built on the TFS Java SDK
# - Team Explorer Everywhere's tf command
use_tf_exe = False
try:
tf_vc_output = execute(['tf', 'vc', 'help'], ignore_errors=True,
none_on_ignored_error=True)
# VS2015 has a tf.exe but it's not good enough.
if (tf_vc_output and
'Version Control Tool, Version 15' in tf_vc_output):
use_tf_exe = True
except OSError:
pass
helper_path = os.path.join(user_data_dir('rbtools'), 'packages', 'tfs',
'rb-tfs.jar')
if use_tf_exe:
self.tf_wrapper = TFExeWrapper(config, options)
elif os.path.exists(helper_path):
self.tf_wrapper = TFHelperWrapper(helper_path, config, options)
else:
self.tf_wrapper = TEEWrapper(config, options)
def get_local_path(self):
"""Return the local path to the working tree.
Returns:
unicode:
The filesystem path of the repository on the client system.
"""
return self.tf_wrapper.get_local_path()
def get_repository_info(self):
"""Return repository information for the current working tree.
Returns:
rbtools.clients.RepositoryInfo:
The repository info structure.
"""
return self.tf_wrapper.get_repository_info()
def parse_revision_spec(self, revisions):
"""Parse the given revision spec.
Args:
revisions (list of unicode):
A list of revisions as specified by the user. Items in the list
do not necessarily represent a single revision, since the user
can use the TFS-native syntax of ``r1~r2``. Versions passed in
can be any versionspec, such as a changeset number,
``L``-prefixed label name, ``W`` (latest workspace version), or
``T`` (latest upstream version).
Returns:
dict:
A dictionary with the following keys:
``base`` (:py:class:`unicode`):
A revision to use as the base of the resulting diff.
``tip`` (:py:class:`unicode`):
A revision to use as the tip of the resulting diff.
``parent_base`` (:py:class:`unicode`, optional):
The revision to use as the base of a parent diff.
These will be used to generate the diffs to upload to Review Board
(or print). The diff for review will include the changes in (base,
tip], and the parent diff (if necessary) will include (parent,
base].
If a single revision is passed in, this will return the parent of
that revision for "base" and the passed-in revision for "tip".
If zero revisions are passed in, this will return revisions
relevant for the "current change" (changes in the work folder which
have not yet been checked in).
Raises:
rbtools.clients.errors.TooManyRevisionsError:
Too many revisions were specified.
rbtools.clients.errors.InvalidRevisionSpecError:
The given revision spec could not be parsed.
"""
return self.tf_wrapper.parse_revision_spec(revisions)
def diff(self, revisions, include_files=[], exclude_patterns=[],
no_renames=False, extra_args=[]):
"""Return the generated diff.
Args:
revisions (dict):
A dictionary containing ``base`` and ``tip`` keys.
include_files (list, optional):
A list of file paths to include in the diff.
exclude_patterns (list, optional):
A list of file paths to exclude from the diff.
extra_args (list, optional):
Unused.
Returns:
dict:
A dictionary containing the following keys:
``diff`` (:py:class:`bytes`):
The contents of the diff to upload.
            ``base_commit_id`` (:py:class:`unicode`, optional):
The ID of the commit that the change is based on, if available.
This is necessary for some hosting services that don't provide
individual file access.
"""
return self.tf_wrapper.diff(revisions, include_files, exclude_patterns)
| 35.718833 | 82 | 0.542626 |
0b0d6f760e2e70def6963f184b8121db80c63bda | 294 | py | Python | eval.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | eval.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | eval.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | import sys
sys.path.append('..')
from evaluation.eval_detection import ANETdetection
evaluator = ANETdetection(ground_truth_filename='A3D_i3d_label.json',
prediction_filename='tmp',
subset='val',
check_status=False) | 36.75 | 69 | 0.602041 |
e51629d83366ea77fd8af04f9275d77a9159411e | 7,796 | py | Python | tensorflow_estimator/python/estimator/head/base_head_test.py | ziky90/estimator | 825c02ce244ce21ec4f01360dfdf90cbf92f6bde | [
"Apache-2.0"
] | null | null | null | tensorflow_estimator/python/estimator/head/base_head_test.py | ziky90/estimator | 825c02ce244ce21ec4f01360dfdf90cbf92f6bde | [
"Apache-2.0"
] | null | null | null | tensorflow_estimator/python/estimator/head/base_head_test.py | ziky90/estimator | 825c02ce244ce21ec4f01360dfdf90cbf92f6bde | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for base_head.py."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.core.framework import summary_pb2
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
from tensorflow.python.saved_model import signature_constants
from tensorflow_estimator.python.estimator import model_fn
from tensorflow_estimator.python.estimator.head import base_head
_DEFAULT_SERVING_KEY = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
def _assert_simple_summaries(test_case, expected_summaries, summary_str,
tol=1e-6):
"""Assert summary the specified simple values.
Args:
test_case: test case.
expected_summaries: Dict of expected tags and simple values.
summary_str: Serialized `summary_pb2.Summary`.
tol: Tolerance for relative and absolute.
"""
summary = summary_pb2.Summary()
summary.ParseFromString(summary_str)
test_case.assertAllClose(expected_summaries, {
v.tag: v.simple_value for v in summary.value
}, rtol=tol, atol=tol)
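# Illustrative usage (hypothetical values): after running a summary op in a test
# and fetching its serialized string, a check might look like
#
#     _assert_simple_summaries(self, {'loss': 0.5}, summary_str)
#
# where summary_str is a serialized summary_pb2.Summary.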
def _assert_no_hooks(test_case, spec):
test_case.assertAllEqual([], spec.training_chief_hooks)
test_case.assertAllEqual([], spec.training_hooks)
class CreateEstimatorSpecTest(test.TestCase):
class _HeadWithTPUSupport(base_head.Head):
"""Head that overrides _create_tpu_estimator_spec."""
def name(self):
return 'HeadWithTPUSupport'
def logits_dimension(self):
return None
def loss_reduction(self):
return None
def loss(self, features, mode, logits, labels):
return None
def predictions(self, logits):
return None
def metrics(self, regularization_losses=None):
return None
def update_metrics(self, eval_metrics, features, logits, labels,
mode=None, regularization_losses=None):
return None
def _create_tpu_estimator_spec(self, features, mode, logits, labels=None,
optimizer=None, train_op_fn=None,
regularization_losses=None):
return model_fn._TPUEstimatorSpec(
mode=model_fn.ModeKeys.EVAL,
loss=constant_op.constant(0.0, dtype=dtypes.float32))
class _HeadWithOutTPUSupport(base_head.Head):
"""Head that overrides create_estimator_spec."""
def name(self):
return 'HeadWithOutTPUSupport'
def logits_dimension(self):
return None
def loss_reduction(self):
return None
def loss(self, features, mode, logits, labels):
return None
def predictions(self, logits):
return None
def metrics(self, regularization_losses=None):
return None
def update_metrics(self, eval_metrics, features, logits, labels,
mode=None, regularization_losses=None):
return None
def create_estimator_spec(self, features, mode, logits, labels=None,
optimizer=None, train_op_fn=None,
regularization_losses=None):
return model_fn.EstimatorSpec(
mode=model_fn.ModeKeys.EVAL,
loss=constant_op.constant(0.0, dtype=dtypes.float32))
class _InvalidHead(base_head.Head):
"""Head that overrides neither estimator_spec functions."""
def name(self):
return 'InvalidHead'
def logits_dimension(self):
return None
def loss_reduction(self):
return None
def loss(self, features, mode, logits, labels):
return None
def predictions(self, logits):
return None
def metrics(self, regularization_losses=None):
return None
def update_metrics(self, eval_metrics, features, logits, labels,
mode=None, regularization_losses=None):
return None
def test_head_override_tpu_estimator_spec(self):
"""Test for `_Head` that overrides _create_tpu_estimator_spec."""
head = self._HeadWithTPUSupport()
tpu_spec = head._create_tpu_estimator_spec(
features=None, mode=None, logits=None)
self.assertTrue(isinstance(tpu_spec, model_fn._TPUEstimatorSpec))
est_spec = head.create_estimator_spec(
features=None, mode=None, logits=None)
self.assertTrue(isinstance(est_spec, model_fn.EstimatorSpec))
def test_head_override_estimator_spec(self):
"""Test for `Head` that overrides create_estimator_spec."""
head = self._HeadWithOutTPUSupport()
with self.assertRaisesRegexp(
NotImplementedError,
'TPUEstimatorSpec not available for this model head.'):
_ = head._create_tpu_estimator_spec(
features=None, mode=None, logits=None)
est_spec = head.create_estimator_spec(
features=None, mode=None, logits=None)
self.assertTrue(isinstance(est_spec, model_fn.EstimatorSpec))
def test_invalid_head_class(self):
head = self._InvalidHead()
with self.assertRaisesRegexp(
NotImplementedError,
'TPUEstimatorSpec not available for this model head.'):
_ = head._create_tpu_estimator_spec(
features=None, mode=None, logits=None)
with self.assertRaisesRegexp(
NotImplementedError,
r'Subclasses of Head must implement `create_estimator_spec\(\)` or '
r'_create_tpu_estimator_spec\(\).'):
_ = head.create_estimator_spec(
features=None, mode=None, logits=None)
@test_util.deprecated_graph_mode_only
def test_tensor_shape_checking_in_graph_mode(self):
"""Test for shape checking of tensor with partially defined shape."""
labels_placeholder = array_ops.placeholder(
dtype=dtypes.float32, shape=(None, 1))
logits_placeholder = array_ops.placeholder(
dtype=dtypes.float32, shape=(None, 1))
labels_input = np.array([[-10.], [10.]], dtype=np.float32)
logits_input = np.array([[1.], [0.]], dtype=np.float32)
loss = np.array([[1.], [2.]], dtype=np.float32)
def _loss_fn(labels, logits):
check_labels = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(labels, labels_input)),
data=[labels])
check_logits = control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(logits, logits_input)),
data=[logits])
with ops.control_dependencies([check_labels, check_logits]):
return constant_op.constant(loss)
unweighted_loss = base_head.call_loss_fn(
loss_fn=_loss_fn,
labels=labels_placeholder,
logits=logits_placeholder,
features={'x': np.array(((42,),), dtype=np.int32)})
with self.cached_session():
self.assertAllClose(
unweighted_loss.eval({
labels_placeholder: labels_input,
logits_placeholder: logits_input
}),
loss)
if __name__ == '__main__':
test.main()
| 34.343612 | 80 | 0.697922 |
1becdf29e21ac472f8b631985003d04825eb3396 | 164 | py | Python | barrelseq/version.py | BeckResearchLab/barrelseq | 044b9f69f10b4b0413231d821ea80af1c7c31544 | [
"MIT"
] | 1 | 2021-11-27T08:35:15.000Z | 2021-11-27T08:35:15.000Z | barrelseq/version.py | BeckResearchLab/barrelseq | 044b9f69f10b4b0413231d821ea80af1c7c31544 | [
"MIT"
] | 5 | 2018-09-19T21:50:01.000Z | 2019-07-16T22:14:52.000Z | barrelseq/version.py | BeckResearchLab/barrelseq | 044b9f69f10b4b0413231d821ea80af1c7c31544 | [
"MIT"
] | null | null | null |
_version_major = 0
_version_minor = 1
_version_build = 0
_version = [ _version_major, _version_minor, _version_build ]
__version__ = '.'.join(map(str, _version))
| 20.5 | 61 | 0.756098 |
f7cd64b134db5195f529bca36236f27c45b48559 | 489 | py | Python | gpio/sound.py | mc-b/iotkitmp | a526617c3f5347d1ae607063ae8c759a46b4715d | [
"MIT"
] | null | null | null | gpio/sound.py | mc-b/iotkitmp | a526617c3f5347d1ae607063ae8c759a46b4715d | [
"MIT"
] | null | null | null | gpio/sound.py | mc-b/iotkitmp | a526617c3f5347d1ae607063ae8c759a46b4715d | [
"MIT"
] | 1 | 2022-03-04T09:38:26.000Z | 2022-03-04T09:38:26.000Z | import time, math, machine
from lib.config import *
def pulse(l, t):
for i in range(20):
l.duty(int(math.sin(i / 10 * math.pi) * 500 + 500))
time.sleep_ms(t)
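# pulse() is not exercised below; an illustrative call (hypothetical values)
# would sweep the duty cycle of an already-created PWM channel once:
#
#     p = machine.PWM(machine.Pin(DEFAULT_IOTKIT_BUZZER), freq=1000, duty=0)
#     pulse(p, 20)  # ~20 ms per step, one sine-shaped duty-cycle sweep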
buzzer = machine.Pin(DEFAULT_IOTKIT_BUZZER)
while True:
try:
p = machine.PWM(buzzer, freq=3969, duty=50 )
time.sleep( 0.5 )
p = machine.PWM(buzzer, freq=2800, duty=50 )
time.sleep( 0.5 )
except:
p = machine.PWM(buzzer, freq=0, duty=0 )
break
| 19.56 | 59 | 0.566462 |
14d8ffbaf9e801f7090959f4f9c3db571019c1a7 | 23,061 | py | Python | tests/grid/test_create_network.py | schmidtjonathan/landlab | b5fd0f84090002c2f888efbc8be01661729e980d | [
"MIT"
] | 1 | 2022-01-07T02:36:07.000Z | 2022-01-07T02:36:07.000Z | tests/grid/test_create_network.py | schmidtjonathan/landlab | b5fd0f84090002c2f888efbc8be01661729e980d | [
"MIT"
] | null | null | null | tests/grid/test_create_network.py | schmidtjonathan/landlab | b5fd0f84090002c2f888efbc8be01661729e980d | [
"MIT"
] | 2 | 2019-08-19T08:58:10.000Z | 2022-01-07T02:36:01.000Z | import hypothesis.extra.numpy as hynp
import numpy as np
import pytest
from hypothesis import given, settings
from hypothesis.strategies import composite, floats, integers, lists
from numpy.testing import assert_array_equal
from landlab import RasterModelGrid
from landlab.grid.create_network import (
_reduce_to_fewest_nodes,
_reduce_nodes,
create_network_links,
create_xy_of_node,
get_node_fields,
network_grid_from_raster,
network_grid_from_segments,
pairwise,
reindex_network_nodes,
spacing_from_drainage_area,
AlongChannelSpacingAtLeast,
ChannelSegment,
ChannelSegmentConnector,
AtMostNodes,
JustEndNodes,
SegmentLinkCollector,
SegmentNodeCoordinateCollector,
SegmentNodeReindexer,
SpacingAtLeast,
)
@given(
drainage_area=hynp.arrays(
dtype=hynp.floating_dtypes(),
shape=hynp.array_shapes(),
elements=floats(min_value=0, width=16),
)
)
def test_calc_spacing_always_positive(drainage_area):
assert np.all(spacing_from_drainage_area(drainage_area) >= 0.0)
@given(
drainage_area=hynp.arrays(
dtype=hynp.floating_dtypes(),
shape=hynp.array_shapes(),
elements=floats(min_value=0, width=16),
)
)
def test_calc_spacing_unit_keywords(drainage_area):
spacing = spacing_from_drainage_area(drainage_area, a=1, b=1, n_widths=1)
assert np.allclose(spacing, drainage_area / 1e6)
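# Worked instance of the assertion above (illustrative number): with the unit
# keywords a=1, b=1, n_widths=1, a drainage area of 2e6 gives a spacing of
# 2e6 / 1e6 == 2.0.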
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_channel_segment(nodes):
segment = ChannelSegment(nodes)
assert segment.downstream_node == nodes[0]
assert segment.upstream_node == nodes[-1]
assert_array_equal(segment.nodes, nodes)
assert len(segment) == len(nodes)
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_channel_segment_set_nodes(nodes):
segment = ChannelSegment([0, 1])
segment.nodes = nodes
assert segment.downstream_node == nodes[0]
assert segment.upstream_node == nodes[-1]
assert_array_equal(segment.nodes, nodes)
assert len(segment) == len(nodes)
def test_channel_segment_add_downstream_node():
segment = ChannelSegment([0, 1])
downstream = ChannelSegment([5, 6])
assert segment.downstream is None
assert len(downstream.upstream) == 0
segment.downstream = downstream
assert segment.downstream is downstream
assert segment in downstream.upstream
def test_channel_segment_add_upstream_node():
segment = ChannelSegment([0, 1])
upstream = ChannelSegment([5, 6])
assert len(segment.upstream) == 0
assert upstream.downstream is None
segment.add_upstream(upstream)
assert upstream in segment.upstream
assert upstream.downstream is segment
@given(segments=lists(lists(integers(), min_size=2, max_size=1024), min_size=1))
def test_channel_segment_many_upstream(segments):
segments = [ChannelSegment(segment) for segment in segments]
root = segments[0]
for current, next in pairwise(segments):
current.add_upstream(next)
assert root.count_segments(direction="upstream") == len(segments) - 1
assert root.count_segments(direction="downstream") == 0
@given(segments=lists(lists(integers(), min_size=2, max_size=1024), min_size=1))
def test_channel_segment_many_flat_upstream(segments):
segments = [ChannelSegment(segment) for segment in segments]
root = segments[0]
for segment in segments[1:]:
root.add_upstream(segment)
assert root.downstream is None
assert len(root.upstream) == len(segments) - 1
assert root.count_segments(direction="upstream") == len(segments) - 1
assert root.count_segments(direction="downstream") == 0
@given(segments=lists(lists(integers(), min_size=2, max_size=1024), min_size=1))
def test_channel_segment_many_downstream(segments):
segments = [ChannelSegment(segment) for segment in segments]
root = segments[0]
for current, next in pairwise(segments):
current.downstream = next
root = segments[0]
leaf = segments[-1]
assert root.count_segments(direction="upstream") == 0
assert root.count_segments(direction="downstream") == len(segments) - 1
assert leaf.count_segments(direction="upstream") == len(segments) - 1
assert leaf.count_segments(direction="downstream") == 0
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_channel_segment_for_each(nodes):
all_nodes = []
def collect_nodes(segment):
all_nodes.extend(list(segment.nodes))
segment = ChannelSegment(nodes)
segment.for_each(collect_nodes)
assert_array_equal(all_nodes, segment.nodes)
def test_connector_add_upstream():
segment = ChannelSegment([0, 1])
connector = ChannelSegmentConnector(segment)
assert connector.root is segment
assert len(connector.orphans) == 0
connector.add(ChannelSegment([1, 2]))
assert connector.root is segment
assert connector.root.count_segments(direction="upstream") == 1
assert connector.root.downstream is None
def test_connector_add_downstream():
segment_1 = ChannelSegment([0, 1])
segment_2 = ChannelSegment([2, 0])
connector = ChannelSegmentConnector(segment_1)
connector.add(segment_2)
assert connector.root is segment_2
assert connector.root.count_segments(direction="upstream") == 1
assert connector.root.downstream is None
def test_connector_add_orphan():
segment_1 = ChannelSegment([0, 1])
segment_2 = ChannelSegment([2, 3])
connector = ChannelSegmentConnector(segment_1)
connector.add(segment_2)
assert connector.root is segment_1
assert connector.root.count_segments(direction="upstream") == 0
assert connector.root.downstream is None
assert len(connector.orphans) == 1
assert connector.orphans == (segment_2,)
connector.add(ChannelSegment([1, 2]))
assert connector.root.count_segments(direction="upstream") == 2
assert connector.orphans == ()
_grid_dims_to_test = integers(min_value=3, max_value=128)
@composite
def shape_and_indices(draw, elements=_grid_dims_to_test):
shape = draw(lists(elements, min_size=2, max_size=2))
indices = draw(
lists(
integers(min_value=0, max_value=shape[0] * shape[1] - 1),
min_size=1,
max_size=1024,
)
)
return shape, indices
@given(shape_and_segment=shape_and_indices())
def test_construct_xy_of_node(shape_and_segment):
shape, segment = shape_and_segment
grid = RasterModelGrid(shape)
collect_coordinates = SegmentNodeCoordinateCollector(grid)
collect_coordinates(ChannelSegment(segment))
xy_of_node = collect_coordinates.xy_of_node
assert len(xy_of_node) == len(segment)
x_of_node, y_of_node = zip(*xy_of_node)
assert (x_of_node == grid.x_of_node[segment]).all()
assert (y_of_node == grid.y_of_node[segment]).all()
@given(nodes=lists(integers(), min_size=0, max_size=1024))
def test_reindex_segment_nodes_orphan(nodes):
segment = ChannelSegment(nodes)
reindex = SegmentNodeReindexer()
reindex(segment)
assert segment.nodes == list(range(len(nodes)))
@given(nodes=lists(integers(), min_size=0, max_size=1024), last_node=integers())
def test_reindex_segment_nodes_with_last_node(nodes, last_node):
segment = ChannelSegment(nodes)
reindex = SegmentNodeReindexer(nodes=[last_node])
reindex(segment)
assert segment.nodes == list(range(last_node + 1, last_node + 1 + len(nodes)))
@given(nodes=lists(integers(), min_size=0, max_size=1024), last_node=integers())
def test_reindex_segment_nodes_with_downstream(nodes, last_node):
root = ChannelSegment([0, 1])
segment = ChannelSegment(nodes)
root.add_upstream(segment)
reindex = SegmentNodeReindexer(nodes=[last_node])
reindex(segment)
assert segment.nodes[0] == root.nodes[-1]
assert segment.nodes[1:] == list(range(last_node + 1, last_node + len(nodes)))
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_create_links(nodes):
segment = ChannelSegment(nodes)
collect_links = SegmentLinkCollector()
collect_links(segment)
links = collect_links.links
assert len(links) == len(segment) - 1
heads, tails = zip(*links)
assert list(heads) == nodes[:-1]
assert list(tails) == nodes[1:]
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_create_links_with_existing(nodes):
segment = ChannelSegment(nodes)
collect_links = SegmentLinkCollector(links=[(1, 2), (3, 4)])
collect_links(segment)
links = collect_links.links
assert links[:2] == [(1, 2), (3, 4)]
assert len(links[2:]) == len(segment) - 1
heads, tails = zip(*links[2:])
assert list(heads) == nodes[:-1]
assert list(tails) == nodes[1:]
@given(nodes=lists(integers(), min_size=2, max_size=1024))
def test_create_links_with_downstream(nodes):
root = ChannelSegment([0, 1])
segment = ChannelSegment(nodes)
root.add_upstream(segment)
collect_links = SegmentLinkCollector()
collect_links(segment)
links = collect_links.links
assert len(links) == len(segment) - 1
assert links[0] == (root.nodes[-1], segment.nodes[1])
if len(links) > 1:
heads, tails = zip(*links[1:])
assert list(heads) == nodes[1:-1]
assert list(tails) == nodes[2:]
def test_reindex_network_nodes():
root = ChannelSegmentConnector([10, 11, 12], [12, 13], [12, 14], [14, 15]).root
reindex_network_nodes(root)
assert list(root.nodes) == [0, 1, 2]
assert list(root.upstream[0].nodes) == [2, 3]
assert list(root.upstream[1].nodes) == [2, 4]
assert list(root.upstream[1].upstream[0].nodes) == [4, 5]
def test_create_network_links():
root = ChannelSegmentConnector([0, 1, 2], [2, 3], [2, 4], [4, 5]).root
links = create_network_links(root)
assert links == [(0, 1), (1, 2), (2, 3), (2, 4), (4, 5)]
def test_graph_from_segments():
r"""
::
*
|
* *
\ /
* *
| |
* * *
| | /
* * - *
\ /
*
\
*
|
*
"""
grid = RasterModelGrid((8, 6))
grid.at_node["z"] = list(range(grid.number_of_nodes))
segments = [
[3, 9, 14],
[14, 19, 25, 31],
[14, 21],
[21, 27, 33],
[33, 40],
[33, 38, 44],
[21, 22, 29],
]
graph = network_grid_from_segments(grid, segments)
assert graph.number_of_nodes == 14
assert graph.number_of_links == 13
assert list(zip(graph.x_of_node, graph.y_of_node)) == [
(3.0, 0.0),
(3.0, 1.0),
(2.0, 2.0),
(1.0, 3.0),
(3.0, 3.0),
(4.0, 3.0),
(1.0, 4.0),
(3.0, 4.0),
(5.0, 4.0),
(1.0, 5.0),
(3.0, 5.0),
(2.0, 6.0),
(4.0, 6.0),
(2.0, 7.0),
]
assert "z" in graph.at_node
assert list(graph.at_node["z"]) == [
3,
9,
14,
19,
21,
22,
25,
27,
29,
31,
33,
38,
40,
44,
]
def test_reduce_nodes():
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 3.5], spacing=1.0)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 3.5], spacing=0.5)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0], spacing=1.75)
assert nodes == [0, 2, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0], spacing=[1.0, 1.0, 2.0, 2.0, 2.0])
assert nodes == [0, 1, 2, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0], spacing=1000.0)
assert nodes == [0, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], spacing=2.0)
assert nodes == [0, 2, 4, 5]
def test_reduce_to_fewest_nodes():
x = [0.0, 1.0, 2.0, 3.0, 3.5]
y = [0.0] * len(x)
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=1.0)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=0.5)
assert nodes == [0, 1, 2, 3, 4]
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0] * len(x)
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=1.75)
assert nodes == [0, 2, 4]
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=[1.0, 1.0, 2.0, 2.0, 2.0])
assert nodes == [0, 1, 2, 4]
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=1000.0)
assert nodes == [0, 4]
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0] * len(x)
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=2.0)
assert nodes == [0, 2, 4, 5]
def test_reduce_nodes_stay_the_same():
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0], spacing=1.0)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_nodes([0.0, 2.0, 4.0, 6.0, 8.0], spacing=2.0)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_nodes([0.0, 1.0, 2.0, 3.0, 4.0], spacing=0.5)
assert nodes == [0, 1, 2, 3, 4]
nodes = _reduce_nodes([0.0, 1.0, 3.0, 6.0, 10.0], spacing=[1, 2, 3, 4, 5])
assert nodes == [0, 1, 2, 3, 4]
@pytest.mark.parametrize(
"x,spacing",
[
([0.0, 1.0, 2.0, 3.0, 4.0], 1.0),
([0.0, 2.0, 4.0, 6.0, 8.0], 2.0),
([0.0, 1.0, 2.0, 3.0, 4.0], 0.5),
([0.0, 1.0, 3.0, 6.0, 10.0], [1, 2, 3, 4, 5]),
],
)
def test_reduce_to_fewest_nodes_stay_the_same(x, spacing):
y = [0.0] * len(x)
nodes = _reduce_to_fewest_nodes(list(zip(x, y)), spacing=spacing)
assert nodes == [0, 1, 2, 3, 4]
@given(
spacing=hynp.arrays(
dtype=float,
shape=hynp.array_shapes(min_dims=1, max_dims=1, min_side=2),
elements=floats(min_value=1e-3, max_value=1e3),
)
)
def test_reduce_nodes_min_max_spacing(spacing):
distance_along_segment = np.cumsum(spacing)
if np.any(np.diff(distance_along_segment) <= 0):
raise ValueError(f"array not sorted ({distance_along_segment})")
nodes = _reduce_nodes(distance_along_segment, spacing=spacing.min())
assert np.all(nodes == np.arange(len(spacing)))
nodes = _reduce_nodes(
distance_along_segment,
spacing=distance_along_segment[-1] - distance_along_segment[0],
)
assert nodes == [0, len(spacing) - 1]
@given(
spacing=hynp.arrays(
dtype=float,
shape=hynp.array_shapes(min_dims=1, max_dims=1, min_side=2),
elements=floats(min_value=1e-3, max_value=1e3),
)
)
def test_reduce_to_fewest_nodes_min_max_spacing(spacing):
distance_along_segment = np.cumsum(spacing)
if np.any(np.diff(distance_along_segment) <= 0):
raise ValueError(f"array not sorted ({distance_along_segment})")
xy_of_node = list(zip(distance_along_segment, [0.0] * len(distance_along_segment)))
min_spacing = np.diff(distance_along_segment).min()
nodes = _reduce_to_fewest_nodes(xy_of_node, spacing=min_spacing)
assert np.all(nodes == np.arange(len(spacing)))
nodes = _reduce_to_fewest_nodes(
xy_of_node,
spacing=distance_along_segment[-1] - distance_along_segment[0],
)
assert nodes == [0, len(spacing) - 1]
def test_reduce_to_fewest_nodes_wraparound():
x = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
y = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
assert _reduce_to_fewest_nodes(list(zip(x, y)), spacing=1.001) == [0, 5]
def test_create_xy_of_node_with_branch():
grid = RasterModelGrid((3, 4), xy_spacing=(2.0, 3.0))
network = ChannelSegmentConnector([4, 5], [5, 2, 3], [5, 10, 11])
xy_of_node = create_xy_of_node(network.root, grid)
assert np.allclose(xy_of_node, [[0, 3], [2, 3], [4, 0], [6, 0], [4, 6], [6, 6]])
@given(
nodes=lists(
integers(min_value=0, max_value=1023),
min_size=2,
max_size=1024,
),
)
@settings(deadline=None)
def test_create_xy_of_node_one_segment(nodes):
grid = RasterModelGrid((16, 64), xy_spacing=(2.0, 3.0))
network = ChannelSegmentConnector(nodes)
xy_of_node = create_xy_of_node(network.root, grid)
assert np.allclose(xy_of_node[:, 0], grid.x_of_node[nodes])
assert np.allclose(xy_of_node[:, 1], grid.y_of_node[nodes])
def test_xy_of_node_if_not_network_root():
grid = RasterModelGrid((3, 4), xy_spacing=(2.0, 3.0))
network = ChannelSegmentConnector([4, 5], [5, 2], [5, 10, 11], [2, 3], [2, 7])
base = network.root.upstream[0]
xy_of_node = create_xy_of_node(base, grid)
assert np.allclose(xy_of_node, [[4, 0], [6, 0], [6, 3]])
def test_get_node_fields_one_field():
grid = RasterModelGrid((3, 4))
grid.at_node["foo"] = np.arange(12) * 10
network = ChannelSegmentConnector([0, 5], [5, 6, 7], [5, 9])
fields = get_node_fields(network.root, grid)
assert list(fields) == ["foo"]
assert_array_equal(fields["foo"], [0, 50, 60, 70, 90])
def test_get_node_fields_two_fields():
grid = RasterModelGrid((3, 4))
grid.at_node["foo"] = np.arange(12) * 10
grid.at_node["bar"] = np.arange(12) * 100
network = ChannelSegmentConnector([0, 5], [5, 6, 7], [5, 9])
fields = get_node_fields(network.root, grid)
assert sorted(list(fields)) == ["bar", "foo"]
assert_array_equal(fields["foo"], [0, 50, 60, 70, 90])
assert_array_equal(fields["bar"], [0, 500, 600, 700, 900])
def test_get_node_fields_include():
grid = RasterModelGrid((3, 4))
grid.at_node["foo"] = np.arange(12) * 10
grid.at_node["bar"] = np.arange(12) * 100
grid.at_node["baz"] = np.arange(12) * 1000
network = ChannelSegmentConnector([0, 5], [5, 6, 7], [5, 9])
fields = get_node_fields(network.root, grid, include="at_node:f*")
assert list(fields) == ["foo"]
assert_array_equal(fields["foo"], [0, 50, 60, 70, 90])
fields = get_node_fields(network.root, grid, include="at_node:b*")
assert sorted(list(fields)) == ["bar", "baz"]
assert_array_equal(fields["bar"], [0, 500, 600, 700, 900])
assert_array_equal(fields["baz"], [0, 5000, 6000, 7000, 9000])
def test_get_node_fields_exclude():
grid = RasterModelGrid((3, 4))
grid.add_empty("foo", at="node")
grid.add_empty("bar", at="node")
grid.add_empty("baz", at="node")
network = ChannelSegmentConnector([0, 5], [5, 6, 7], [5, 9])
expected = get_node_fields(network.root, grid, include="at_node:b*")
actual = get_node_fields(network.root, grid, exclude="at_node:f*")
assert actual.keys() == expected.keys()
for name in actual:
assert_array_equal(actual[name], expected[name])
def test_get_node_fields_ignore_non_node_fields():
grid = RasterModelGrid((3, 4))
grid.add_empty("foo", at="node")
grid.add_empty("bar", at="node")
grid.add_empty("baz", at="link")
network = ChannelSegmentConnector([0, 5], [5, 6, 7], [5, 9])
fields = get_node_fields(network.root, grid, include="*")
assert sorted(list(fields)) == ["bar", "foo"]
def test_network_grid_from_raster():
grid = RasterModelGrid((4, 5))
grid.at_node["topographic__elevation"] = np.flipud(
np.asarray(
[
[4, 4, 4, 4, 4],
[4, 4, 2, 4, 4],
[4, 4, 1, 4, 4],
[4, 4, 0, 4, 4],
],
dtype=float,
)
)
network = network_grid_from_raster(grid)
assert network.number_of_nodes == 7
assert network.number_of_links == 6
assert np.allclose(
network.xy_of_node,
[
[2.0, 0.0],
[1.0, 1.0],
[2.0, 1.0],
[3.0, 1.0],
[1.0, 2.0],
[2.0, 2.0],
[3.0, 2.0],
],
)
assert_array_equal(
network.nodes_at_link, [[0, 2], [1, 2], [2, 3], [4, 2], [2, 5], [2, 6]]
)
@given(nodes=lists(integers(), min_size=2))
def test_reducer_just_end_nodes(nodes):
reduce = JustEndNodes()
assert_array_equal(reduce(nodes), [nodes[0], nodes[-1]])
@given(nodes=lists(integers(), min_size=3))
def test_reducer_min_three_nodes(nodes):
reduce = AtMostNodes()
assert_array_equal(reduce(nodes), [nodes[0], nodes[len(nodes) // 2], nodes[-1]])
@given(nodes=lists(integers(), min_size=2))
def test_reducer_min_nodes_matches_just_end_nodes(nodes):
just_end_nodes = JustEndNodes()
at_most_two_nodes = AtMostNodes(count=2)
assert_array_equal(just_end_nodes(nodes), at_most_two_nodes(nodes))
@given(nodes=lists(integers(), min_size=2))
def test_reducer_min_nodes_no_change(nodes):
reduce = AtMostNodes(count=len(nodes) + 1)
assert_array_equal(reduce(nodes), nodes)
@pytest.mark.parametrize("count", [-1, 0, 1])
def test_reducer_min_nodes_less_than_two(count):
with pytest.raises(ValueError):
AtMostNodes(count=count)
def test_reducer_spacing_at_least():
grid = RasterModelGrid((3, 6), xy_spacing=(3.0, 4.0))
reduce = SpacingAtLeast(xy_of_node=grid.xy_of_node)
assert_array_equal(
reduce.calc_distance_along_segment([6, 7, 8, 9, 10, 11]), [0, 3, 6, 9, 12, 15]
)
assert_array_equal(reduce.calc_distance_along_segment([0, 7, 14]), [0, 5, 10])
assert_array_equal(reduce.calc_distance_along_segment([0, 6, 12]), [0, 4, 8])
reduce = SpacingAtLeast(xy_of_node=grid.xy_of_node, spacing=3.0)
assert_array_equal(reduce([6, 7, 8, 9, 10, 11]), [6, 7, 8, 9, 10, 11])
reduce = SpacingAtLeast(xy_of_node=grid.xy_of_node, spacing=1.5)
assert_array_equal(reduce([6, 7, 8, 9, 10, 11]), [6, 7, 8, 9, 10, 11])
reduce = SpacingAtLeast(xy_of_node=grid.xy_of_node, spacing=6.0)
assert_array_equal(reduce([6, 7, 8, 9, 10, 11]), [6, 8, 10, 11])
def test_reducer_spacing_at_least_variable():
xy_of_node = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [5, 0]]
spacing = [1, 2, 3, 4, 5, 6]
reduce = SpacingAtLeast(xy_of_node=xy_of_node, spacing=spacing)
assert_array_equal(reduce([0, 1, 2, 3, 4, 5]), [0, 1, 3, 5])
assert_array_equal(reduce([0, 1, 2, 3, 4, 5]), reduce(reduce([0, 1, 2, 3, 4, 5])))
@given(
xy_of_node=hynp.arrays(
dtype=float,
shape=hynp.array_shapes(min_dims=1, max_dims=1, min_side=4, max_side=128),
elements=integers(min_value=-1024, max_value=1024),
unique=True,
),
spacing=floats(min_value=0.0, exclude_min=True),
)
def test_reducer_spacing_at_least_all_greater(xy_of_node, spacing):
xy_of_node = xy_of_node[: len(xy_of_node) - len(xy_of_node) % 2].reshape((-1, 2))
xy_of_node /= 100.0
segment = np.arange(len(xy_of_node))
reduce = SpacingAtLeast(xy_of_node=xy_of_node, spacing=spacing)
reduced_segment = reduce(segment)
distance_along_segment = reduce.calc_distance_along_segment(reduced_segment[:-1])
assert reduced_segment[0] == segment[0]
assert reduced_segment[-1] == segment[-1]
assert np.all(np.diff(distance_along_segment) >= spacing)
def test_reducer_along_channel_spacing_at_least_variable():
xy_of_node = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [5, 0]]
spacing = [1, 2, 3, 4, 5, 6]
reduce = AlongChannelSpacingAtLeast(xy_of_node=xy_of_node, spacing=spacing)
assert_array_equal(reduce([0, 1, 2, 3, 4, 5]), [0, 1, 3, 5])
assert_array_equal(reduce([0, 1, 2, 3, 4, 5]), reduce(reduce([0, 1, 2, 3, 4, 5])))
| 30.584881 | 87 | 0.644074 |
559fd239ecd87013b8f1ed1dc65567d8b8da3956 | 1,699 | py | Python | final/170401074/client.py | hasan-se/blm304 | 893d15282497a426ff96b0c8b6c77d57c406742e | [
"Unlicense"
] | 1 | 2021-05-04T21:46:08.000Z | 2021-05-04T21:46:08.000Z | final/170401074/client.py | hasan-se/blm304 | 893d15282497a426ff96b0c8b6c77d57c406742e | [
"Unlicense"
] | null | null | null | final/170401074/client.py | hasan-se/blm304 | 893d15282497a426ff96b0c8b6c77d57c406742e | [
"Unlicense"
] | null | null | null | #Batuhan :OZALP - 170401074
import socket
import time
import os
import datetime
import subprocess
import shlex
from decimal import Decimal
ip = input("Enter the server IP address >>")
port = 142
buffer = 1024
def main():
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((ip, port))
simdiki_zaman = time.time()
zaman = str(s.recv(buffer).decode())
zaman = Decimal(zaman)
        zaman *= 1000  # convert to milliseconds
        print("Elapsed time from the server in milliseconds >>", zaman)
        zaman = int(zaman / 1000)  # convert back to seconds
time.sleep(0.3)
        utc = str(s.recv(buffer).decode())  # UTC value received from the server
        utc = int(utc)
        if utc > 0:
            print("Time zone on the server side >> UTC+%d" % utc)
        else:
            print("Time zone on the server side >> UTC%d" % utc)
utc_saati = zaman
time.sleep(0.3)
        server_istegi = str(s.recv(buffer).decode())  # value of the time zone to be set
istek_zamani = time.time()
server_istegi = int(server_istegi)
        print("Requested time zone >>", server_istegi)
gecikme = istek_zamani - simdiki_zaman
utc_saati = (3600 * server_istegi) + gecikme + utc_saati
saat = str(time.ctime(utc_saati))
        print("Time expected to be set >>", time.ctime(utc_saati))
subprocess.call(shlex.split("timedatectl set-ntp false"))
subprocess.call(shlex.split("sudo date -s '%s'" % saat))
subprocess.call(shlex.split("sudo hwclock -w"))
except:
s.close()
main()
| 24.271429 | 87 | 0.597999 |
069e9ff4aba4ac69a6ad153238a3b656eed65f07 | 934 | py | Python | compare/copy_missing_lfw.py | corganhejijun/frontal-face-trans | a8d4d99bf537b4947258666272622f5a4bc759b5 | [
"MIT"
] | null | null | null | compare/copy_missing_lfw.py | corganhejijun/frontal-face-trans | a8d4d99bf537b4947258666272622f5a4bc759b5 | [
"MIT"
] | null | null | null | compare/copy_missing_lfw.py | corganhejijun/frontal-face-trans | a8d4d99bf537b4947258666272622f5a4bc759b5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
from shutil import copyfile
LFW_DIR = "../datasets/lfw"
COPY_DIR = "result/lfw_fill"
DEST_DIR = COPY_DIR + "_full"
FILE_EXT = ".png"
if not os.path.exists(DEST_DIR):
os.mkdir(DEST_DIR)
dirList = os.listdir(LFW_DIR)
for index, dirName in enumerate(dirList):
print("processing {0}, {1} out of total {2}".format(dirName, index, len(dirList)))
subDir = os.path.join(LFW_DIR, dirName)
for file in os.listdir(subDir):
copyFilePath = os.path.join(COPY_DIR, dirName, file[:-4] + FILE_EXT)
if not os.path.exists(os.path.join(DEST_DIR, dirName)):
os.mkdir(os.path.join(DEST_DIR, dirName))
if not os.path.exists(copyFilePath):
copyfile(os.path.join(subDir, file), os.path.join(DEST_DIR, dirName, file))
continue
destFilePath = os.path.join(DEST_DIR, dirName, file[:-4] + FILE_EXT)
copyfile(copyFilePath, destFilePath) | 38.916667 | 87 | 0.663812 |
50fe4fdd07c23db786bb34df2caa835524976823 | 1,425 | py | Python | pastepwn/analyzers/logicalanalyzers/tests/logicalbaseanalyzer_test.py | robotboyfriend/pastepwn | ca6dd87afd053b5032857eb0615a947c3b9dfad9 | [
"MIT"
] | 113 | 2018-09-06T22:14:52.000Z | 2022-02-17T01:32:29.000Z | pastepwn/analyzers/logicalanalyzers/tests/logicalbaseanalyzer_test.py | robotboyfriend/pastepwn | ca6dd87afd053b5032857eb0615a947c3b9dfad9 | [
"MIT"
] | 199 | 2018-09-15T22:17:58.000Z | 2022-01-23T23:45:09.000Z | pastepwn/analyzers/logicalanalyzers/tests/logicalbaseanalyzer_test.py | robotboyfriend/pastepwn | ca6dd87afd053b5032857eb0615a947c3b9dfad9 | [
"MIT"
] | 88 | 2018-09-09T13:02:06.000Z | 2022-01-23T22:56:09.000Z | # -*- coding: utf-8 -*-
import unittest
from unittest import mock
from pastepwn.actions.basicaction import BasicAction
from pastepwn.analyzers.logicalanalyzers import LogicalBaseAnalyzer
class TestLogicalBaseAnalyzer(unittest.TestCase):
def setUp(self):
self.paste = mock.Mock()
def test_exception(self):
analyzer = LogicalBaseAnalyzer([], [])
self.assertRaises(NotImplementedError, analyzer.match, mock.Mock())
def test_actions_present(self):
action = mock.MagicMock(spec=BasicAction)
analyzer = LogicalBaseAnalyzer(action, None)
self.assertEqual([action], analyzer.actions)
def test_analyzers_present(self):
analyzer = LogicalBaseAnalyzer(None, self.paste)
self.assertEqual([self.paste], analyzer.analyzers)
def test_merge_actions(self):
action1 = mock.Mock()
action2 = mock.Mock()
action3 = mock.Mock()
analyzer1 = mock.Mock()
analyzer1.actions = [action1, action2]
analyzer2 = mock.Mock()
analyzer2.actions = [action3]
analyzer = LogicalBaseAnalyzer(analyzers=[analyzer1, analyzer2], actions=[], merge_actions=True)
self.assertEqual(3, len(analyzer.actions), "Wrong amount of actions in LogicalBaseAnalyzer!")
self.assertEqual([action1, action2, action3], analyzer.actions, "Actions do not match!")
if __name__ == "__main__":
unittest.main()
| 33.139535 | 104 | 0.690526 |
90eaa0b4cc8e0205269965981644f6e790efe5f8 | 9,069 | py | Python | common/api/views.py | AdamCottrill/fwsb_common | f888747e26fd2cd9c581ec86c16c3722e503a4dd | [
"MIT"
] | null | null | null | common/api/views.py | AdamCottrill/fwsb_common | f888747e26fd2cd9c581ec86c16c3722e503a4dd | [
"MIT"
] | null | null | null | common/api/views.py | AdamCottrill/fwsb_common | f888747e26fd2cd9c581ec86c16c3722e503a4dd | [
"MIT"
] | null | null | null | """Views for the api for our common models
The views in this file should all be publicly available as read-only.
"""
from rest_framework import viewsets, status, generics
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticatedOrReadOnly, AllowAny
from rest_framework.response import Response
from ..models import Species, Lake, ManagementUnit, Grid5
from .filters import Grid5Filter, ManagementUnitFilter, LakeFilter
from .utils import parse_point
from .serializers import (
SpeciesSerializer,
SpeciesDetailSerializer,
LakeSerializer,
LakeDetailSerializer,
ManagementUnitSerializer,
Grid5Serializer,
Grid5DetailSerializer,
)
class LakeListView(generics.ListAPIView):
queryset = Lake.objects.all()
serializer_class = LakeSerializer
filterset_class = LakeFilter
class LakeDetailView(generics.RetrieveAPIView):
queryset = Lake.objects.all()
serializer_class = LakeDetailSerializer
lookup_field = "abbrev"
class ManagementUnitListView(generics.ListAPIView):
queryset = ManagementUnit.objects.all().prefetch_related("lake")
serializer_class = ManagementUnitSerializer
filterset_class = ManagementUnitFilter
class ManagementUnitDetailView(generics.RetrieveAPIView):
queryset = ManagementUnit.objects.all()
serializer_class = ManagementUnitSerializer
lookup_field = "slug"
class Grid5ListView(generics.ListAPIView):
queryset = Grid5.objects.all().prefetch_related("lake")
serializer_class = Grid5Serializer
filterset_class = Grid5Filter
class Grid5DetailView(generics.RetrieveAPIView):
queryset = Grid5.objects.all()
serializer_class = Grid5DetailSerializer
lookup_field = "slug"
class SpeciesListView(generics.ListAPIView):
queryset = Species.objects.all()
serializer_class = SpeciesSerializer
class SpeciesDetailView(generics.RetrieveAPIView):
queryset = Species.objects.all()
serializer_class = SpeciesDetailSerializer
lookup_field = "spc"
@api_view(["POST"])
@permission_classes([AllowAny])
def get_lake_from_pt(request):
"""This function accepts post requests that contain a geojson
    representation of a point. The view returns a dictionary containing
    the id, abbreviation, name, centroid and bounds (extent) of the lake
    containing the point, or an empty dictionary if the data is not geojson
or falls outside of any lake.
TODO: add options for 'pure' and 'plus' geometries
"""
geom = request.query_params.get("geom")
pt = parse_point(request.data.get("point"))
if pt is None:
return Response({}, status=status.HTTP_400_BAD_REQUEST)
lake = Lake.objects.filter(geom__contains=pt).first()
if lake:
ret = dict(
id=lake.id,
abbrev=lake.abbrev,
lake_name=lake.lake_name,
centroid=lake.centroid.wkt if lake.centroid else "",
envelope=lake.envelope.wkt if lake.envelope else "",
centroid_ontario=lake.centroid_ontario.wkt if lake.centroid_ontario else "",
envelope_ontario=lake.envelope_ontario.wkt if lake.envelope_ontario else "",
)
# return one geom or the other - not both
if geom == "geom":
ret["geom"] = lake.geom.geojson
elif geom == "geom_ontario":
ret["geom"] = lake.geom_ontario.geojson
return Response(ret, status=status.HTTP_200_OK)
else:
# no lake object could be associated with that point.
        # Return a 404.
return Response({}, status=status.HTTP_404_NOT_FOUND)
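# Usage sketch for get_lake_from_pt (illustration only). The route
# "/api/lake_from_pt/" is an assumption for this example; the real path is
# defined in the project's urlconf, not in this module.
#
#     from rest_framework.test import APIClient
#
#     client = APIClient()
#     response = client.post(
#         "/api/lake_from_pt/?geom=geom",        # assumed route; optional geom flag
#         {"point": "POINT(-81.5 44.5)"},        # WKT point parsed by parse_point()
#         format="json",
#     )
#     lake_abbrev = response.data.get("abbrev")  # also: lake_name, centroid, envelope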
def manUnit_dict(obj, geom=None):
"""Serialize a management unit to a python dictionary
Arguments:
- `obj`: a ManagementUnit instance
"""
item = dict(
id=obj.id,
slug=obj.slug,
label=obj.label,
mu_type=obj.mu_type,
centroid=obj.centroid.wkt,
envelope=obj.envelope.wkt,
)
if geom == "geom":
item["geom"] = obj.geom.geojson
return item
@api_view(["POST"])
@permission_classes([AllowAny])
def get_management_unit_from_pt(request):
"""This function accepts post requests that contains a geojson
representation of a point. The view returns a dictionary contianing
the id, label, mu_type, centroid and bounds (extent) of the management_unit
containing the point, or an empty dictionary if the data is not geojson
or falls outside of any management_unit.
This function takes an additional argument (mu_type) as a query
    parameter that controls what type of management unit is returned -
current options are stat_dist, mu, qma, ltrz. Others could be
added in the future. If the mu_type argument is not included in
the request, the management_unit with primary=True is returned by
default.
TODO: add options for 'pure' and 'plus' geometries
"""
geom = request.query_params.get("geom")
mu_type = request.query_params.get("mu_type")
all_mus = request.query_params.get("all")
pt = parse_point(request.data.get("point"))
if pt is None:
return Response({}, status=status.HTTP_400_BAD_REQUEST)
qs = ManagementUnit.objects.filter(geom__contains=pt)
if all_mus and all_mus in ["T", "t", "TRUE", "True", "true"]:
qs = qs.all()
elif mu_type:
qs = qs.filter(mu_type=mu_type).first()
else:
qs = qs.filter(primary=True).first()
if qs:
if all_mus:
ret = [manUnit_dict(x, geom) for x in qs]
else:
ret = manUnit_dict(qs, geom)
return Response(ret, status=status.HTTP_200_OK)
else:
# no qs object could be associated with that point.
        # Return a 404.
return Response({}, status=status.HTTP_404_NOT_FOUND)
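# Usage sketch for get_management_unit_from_pt (same assumed client and route
# style as above; the path is illustrative only):
#
#     client.post("/api/management_unit_from_pt/?mu_type=ltrz",
#                 {"point": "POINT(-81.5 44.5)"}, format="json")  # one unit of a given type
#     client.post("/api/management_unit_from_pt/?all=true",
#                 {"point": "POINT(-81.5 44.5)"}, format="json")  # every unit containing the point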
@api_view(["POST"])
@permission_classes([AllowAny])
def get_grid5_from_pt(request):
"""This function accepts post requests that contain a geojson
    representation of a point. The view returns a dictionary containing
    the id, abbreviation, name, centroid and bounds (extent) of the grid5
    containing the point, or an empty dictionary if the data is not geojson
or falls outside of any grid5.
post request should be of the form:
{"point": "POINT(-81.5 44.5)"}
"""
pt = parse_point(request.data.get("point"))
if pt is None:
return Response({}, status=status.HTTP_400_BAD_REQUEST)
grid5 = Grid5.objects.select_related("lake").filter(geom__contains=pt).first()
geom = request.query_params.get("geom")
if grid5:
ret = dict(
id=grid5.id,
grid=grid5.grid,
slug=grid5.slug,
centroid=grid5.centroid.wkt,
envelope=grid5.envelope.wkt,
# lake attributes:
lake=dict(
lake_id=grid5.lake.id,
lake_abbrev=grid5.lake.abbrev,
lake_name=grid5.lake.lake_name,
),
)
if geom == "geom":
ret["geom"] = grid5.geom.geojson
return Response(ret, status=status.HTTP_200_OK)
else:
# no grid5 object could be associated with that point.
        # Return a 404.
return Response({}, status=status.HTTP_404_NOT_FOUND)
@api_view(["POST"])
@permission_classes([AllowAny])
def pt_spatial_attrs(request):
"""This function accepts post requests that contain a geojson
representation of a point and returns a dictionary containing the
    basic lake, management unit(s) and 5-minute grid that contain that point. Given a lat-lon, return
a dictionary with the following elements:
+ lake - with id, lake name, lake abbrev
+ manUnit(s) - [(id, label), ....]
+ grid5 - id, label, lake abbrev.
post data should contain a json string of the form:
{"point": "POINT(-81.5 44.5)"}
    TODO: implement the management unit array.
"""
pt = parse_point(request.data.get("point"))
if pt is None:
return Response({}, status=status.HTTP_400_BAD_REQUEST)
ret = dict()
lake = Lake.objects.filter(geom__contains=pt).first()
if lake:
ret["lake"] = dict(
id=lake.id,
abbrev=lake.abbrev,
lake_name=lake.lake_name,
centroid=lake.centroid.wkt,
)
else:
ret["lake"] = ""
manUnit = (
ManagementUnit.objects.filter(geom__contains=pt)
.filter(mu_type="stat_dist")
.first()
)
if manUnit:
ret["manUnit"] = dict(
id=manUnit.id,
slug=manUnit.slug,
label=manUnit.label,
centroid=manUnit.centroid.wkt,
)
else:
ret["manUnit"] = ""
grid5 = Grid5.objects.select_related("lake").filter(geom__contains=pt).first()
if grid5:
ret["grid5"] = dict(
id=grid5.id,
grid=grid5.grid,
slug=grid5.slug,
centroid=grid5.centroid.wkt,
lake_abbrev=grid5.lake.abbrev,
)
else:
ret["grid5"] = ""
return Response(ret, status=status.HTTP_200_OK)
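# The TODO in the docstring above asks for a management unit array. A minimal
# sketch, assuming every unit containing the point is wanted and that "manUnits"
# is an acceptable key name, could reuse manUnit_dict() defined earlier:
#
#     man_units = ManagementUnit.objects.filter(geom__contains=pt)
#     ret["manUnits"] = [manUnit_dict(mu) for mu in man_units]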
| 29.160772 | 103 | 0.663138 |
3fc6623f3b22674c370601fe1833ce77b1c4b1c4 | 26,433 | py | Python | SRC/engine/IO/GUI/meshparamwidgets.py | usnistgov/OOF3D | 4fd423a48aea9c5dc207520f02de53ae184be74c | [
"X11"
] | 31 | 2015-04-01T15:59:36.000Z | 2022-03-18T20:21:47.000Z | SRC/engine/IO/GUI/meshparamwidgets.py | usnistgov/OOF3D | 4fd423a48aea9c5dc207520f02de53ae184be74c | [
"X11"
] | 3 | 2015-02-06T19:30:24.000Z | 2017-05-25T14:14:31.000Z | SRC/engine/IO/GUI/meshparamwidgets.py | usnistgov/OOF3D | 4fd423a48aea9c5dc207520f02de53ae184be74c | [
"X11"
] | 7 | 2015-01-23T15:19:22.000Z | 2021-06-09T09:03:59.000Z | # -*- python -*-
# This software was produced by NIST, an agency of the U.S. government,
# and by statute is not subject to copyright in the United States.
# Recipients of this software assume all responsibilities associated
# with its operation, modification and maintenance. However, to
# facilitate maintenance we ask that before distributing modified
# versions of this software, you first contact the authors at
# oof_manager@nist.gov.
# ParameterWidgets for things that depend on a Mesh, such as Fields,
# Fluxes, and Equations defined on the Mesh.
from ooflib.SWIG.common import config
from ooflib.SWIG.common import ooferror
from ooflib.SWIG.common import switchboard
from ooflib.SWIG.engine import equation
from ooflib.SWIG.engine import field
from ooflib.SWIG.engine import flux
from ooflib.SWIG.engine import planarity
from ooflib.common import debug
from ooflib.common.IO.GUI import chooser
from ooflib.common.IO.GUI import parameterwidgets
from ooflib.common.IO.GUI import whowidget
from ooflib.engine import mesh
from ooflib.engine import skeletoncontext
from ooflib.engine import subproblemcontext
from ooflib.engine.IO import meshparameters
import gtk
import string
#Interface branch
from ooflib.common.IO import placeholder
class MeshParamWidgetBase(parameterwidgets.ParameterWidget):
# Base class for a widget that displays and allows choices from a
# list of things from a Mesh (eg, defined Fields or active
# Equations). The meshfunc constructor argument is a function
# that returns the list of things to display. It's called with
# the mesh as its first argument, so it can be a Mesh member
# function. self.chooser is the ChooserWidget that displays the
# items returned by meshfunc.
def __init__(self, param, whoclass, meshfunc, scope, name=None,
separator_func=None, verbose=False):
self.meshfunc = meshfunc
self.whoclass = whoclass
self.chooser = chooser.ChooserWidget([], callback=self.chooserCB,
name=name,
separator_func=separator_func,)
parameterwidgets.ParameterWidget.__init__(self, self.chooser.gtk, scope,
verbose=verbose)
self.meshwidget = scope.findWidget(
lambda w: isinstance(w, whowidget.WhoWidget)
and w.whoclass is whoclass)
assert self.meshwidget is not None
if self.meshwidget is None:
raise ooferror.ErrPyProgrammingError("Can't find WhoWidget for %s"
% `whoclass`)
self.sbcallbacks = [
switchboard.requestCallbackMain(self.meshwidget, self.update),
switchboard.requestCallbackMain("mesh changed", self.meshChangeCB),
switchboard.requestCallbackMain("subproblem changed",
self.meshChangeCB)
]
self.update(interactive=0)
self.set_value(param.value)
def update(self, interactive):
msh = self.getSource()
self.vals = {None:None}
if msh is not None:
namelist = []
for obj in self.meshfunc(msh):
self.vals[obj.name()] = obj
namelist.append(obj.name())
self.chooser.update(namelist)
self.widgetChanged(len(namelist) > 0, interactive)
else:
self.chooser.update([])
self.widgetChanged(0, interactive)
#def dumpCycle(self, subp):
#msh = self.getSource()
#self.vals = {None:None}
#if msh is not None:
#namelist = []
#for obj in self.meshfunc(msh):
#if obj.path() != subp.path():
##The current subproblem cannot depend on a possible subproblem subproblem that depend on it already: Loop deep 1
#if subp.path() not in obj.subptype.get_dependencies():
##The current subproblem dependents cannot be in a possible selectable subproblem dependenties: Loop deep 2
#intersection = False
#for sub in obj.subptype.get_dependencies():
#if sub in subp.subptype.get_dependents():
#intersection = True
#if not intersection:
##self.subps.append(subproblem)
#self.vals[obj.name()] = obj
#namelist.append(obj.name())
#self.chooser.update(namelist, to_update=False)
#self.widgetChanged(len(namelist) > 0, False)
#else:
#self.chooser.update([], to_update=False)
#self.widgetChanged(0, False)
def chooserCB(self, *args):
self.widgetChanged(1, interactive=1)
def meshChangeCB(self, meshcontext):
if self.meshwidget: # we haven't been cleaned up
src = self.getSource()
# 'meshcontext' might be a subproblem, actually.
if meshcontext is src or meshcontext.getParent() is src:
self.update(interactive=0)
def getSource(self):
if self.meshwidget is not None:
meshname = self.meshwidget.get_value()
try:
return self.whoclass[meshname]
except KeyError:
pass
def get_value(self):
return self.vals[self.chooser.get_value()]
def set_value(self, obj):
if obj is not None and obj.name() in self.chooser.choices():
self.chooser.set_state(obj.name())
self.widgetChanged(1, interactive=0)
def cleanUp(self):
debug.mainthreadTest()
self.meshwidget = None
map(switchboard.removeCallback, self.sbcallbacks)
parameterwidgets.ParameterWidget.cleanUp(self)
class MeshParamWidget(MeshParamWidgetBase):
def __init__(self, param, meshfunc, scope, name=None, separator_func=None,
verbose=False):
MeshParamWidgetBase.__init__(self, param,
mesh.meshes,
meshfunc, scope, name, separator_func,
verbose=verbose)
class SubProblemParamWidget(MeshParamWidgetBase):
def __init__(self, param, meshfunc, scope, name=None, separator_func=None,
verbose=False):
MeshParamWidgetBase.__init__(self, param,
subproblemcontext.subproblems,
meshfunc, scope, name, separator_func,
verbose=verbose)
# Widgets for quantities to which Invariants can be calculated should
# be subclasses of InvariandWidget.
class InvariandWidget: pass
# Widgets for quantities for which components can be extracted should
# be subclasses of IndexableWidget.
class IndexableWidget: pass
#############################
def meshfieldlister(meshctxt): # meshfunc for FieldParameterWidgets
# list fields and out-of-plane fields, if defined.
try:
compoundfields = meshctxt.all_compound_subproblem_fields()
flds = []
for fld in compoundfields:
flds.append(fld)
if config.dimension() == 2:
if not meshctxt.getObject().in_plane(fld):
flds.append(fld.out_of_plane())
return flds
except:
# This might be called with an unresolvable proxy if there
# are no meshes.
return []
class FieldParameterWidget(MeshParamWidget, InvariandWidget, IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
if param.outofplane:
flist = meshfieldlister
else:
flist = mesh.Mesh.all_compound_subproblem_fields
        MeshParamWidget.__init__(self, param, flist, scope, name, verbose=verbose)
## self.sbcallbacks.append(
## switchboard.requestCallbackMain("field defined", self.fieldDefCB))
def fieldDefCB(self, *args):
self.update(interactive=False)
def _makeFieldWidget(param, scope, verbose=False):
return FieldParameterWidget(param, scope, name=param.name, verbose=verbose)
meshparameters.FieldParameter.makeWidget = _makeFieldWidget
def subpfieldlister(subp): # meshfunc for SubProblemFieldParameterWidgets
# list fields and out-of-plane fields, if defined.
try:
compoundfields = subp.all_compound_fields()
flds = []
for fld in compoundfields:
flds.append(fld)
if (config.dimension() == 2 and
not subp.getParent().getObject().in_plane(fld)):
flds.append(fld.out_of_plane())
return flds
except:
# This might be called with an unresolvable proxy if there
# are no meshes.
return []
class SubProblemFieldParameterWidget(SubProblemParamWidget, InvariandWidget,
IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
if param.outofplane:
flist = subpfieldlister
else:
flist = subproblemcontext.SubProblemContext.all_compound_fields
SubProblemParamWidget.__init__(self, param, flist, scope, name,
verbose=verbose)
def _makeSubPFieldWidget(param, scope, verbose=False):
    return SubProblemFieldParameterWidget(param, scope, name=param.name, verbose=verbose)
meshparameters.SubProblemFieldParameter.makeWidget = _makeSubPFieldWidget
#############################
class FluxParameterWidget(MeshParamWidget, InvariandWidget, IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param,
mesh.Mesh.all_subproblem_fluxes,
scope,
name=name,
verbose=verbose)
def _makeFluxWidget(param, scope, verbose=False):
return FluxParameterWidget(param, scope, name=param.name, verbose=verbose)
meshparameters.FluxParameter.makeWidget = _makeFluxWidget
class SubProblemFluxParameterWidget(SubProblemParamWidget, InvariandWidget,
IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
SubProblemParamWidget.__init__(
self, param,
subproblemcontext.SubProblemContext.all_fluxes,
scope,
name=name,
verbose=verbose)
def _makeSubPFluxWidget(param, scope, verbose=False):
    return SubProblemFluxParameterWidget(param, scope, name=param.name, verbose=verbose)
meshparameters.SubProblemFluxParameter.makeWidget = _makeSubPFluxWidget
#############################
class EquationParameterWidget(MeshParamWidget, IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
eqfunc = mesh.Mesh.all_subproblem_equations
MeshParamWidget.__init__(self, param, eqfunc, scope, name=name,
verbose=verbose)
def _makeEquationWidget(param, scope, verbose=False):
return EquationParameterWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.EquationParameter.makeWidget = _makeEquationWidget
class SubProblemEquationParameterWidget(SubProblemParamWidget, IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
eqfunc = subproblemcontext.SubProblemContext.all_equations
SubProblemParamWidget.__init__(self, param, eqfunc, scope, name=name,
verbose=verbose)
def _makeSubPEquationWidget(param, scope, verbose=False):
return SubProblemEquationParameterWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.SubProblemEquationParameter.makeWidget = _makeSubPEquationWidget
class EquationBCParameterWidget(MeshParamWidget, IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
eqfunc = mesh.Mesh.all_subproblem_equations_bc
MeshParamWidget.__init__(self, param, eqfunc, scope, name,
verbose=verbose)
def _makeEquationBCWidget(param, scope, verbose=False):
return EquationBCParameterWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.EquationBCParameter.makeWidget = _makeEquationBCWidget
class SubProblemEquationBCParameterWidget(SubProblemParamWidget,
IndexableWidget):
def __init__(self, param, scope, name=None, verbose=False):
eqfunc = subproblemcontext.SubProblemContext.all_equations_bc
SubProblemParamWidget.__init__(self, param, eqfunc, scope, name,
verbose=verbose)
def _makeSubPEquationBCWidget(param, scope, verbose=False):
return SubProblemEquationBCParameterWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.SubProblemEquationBCParameter.makeWidget = \
_makeSubPEquationBCWidget
#############################################
#############################################
class FieldIndexParameterWidget(parameterwidgets.ParameterWidget):
def __init__(self, param, scope, name=None, verbose=False):
debug.mainthreadTest()
self.chooser = chooser.ChooserWidget([], callback=self.chooserCB,
name=name)
box = gtk.VBox()
box.pack_start(self.chooser.gtk, expand=0, fill=0)
parameterwidgets.ParameterWidget.__init__(self, box, scope, verbose)
self.fieldwidget = scope.findWidget(
lambda w: isinstance(w, IndexableWidget))
self.sbcallback = switchboard.requestCallbackMain(self.fieldwidget,
self.fieldCB)
self.notapplicable = gtk.Label('(Not Applicable)')
self.notapplicable.set_alignment(0.0, 0.5)
box.pack_start(self.notapplicable, expand=0, fill=0)
self.nIndices = 0
self.update()
if param.value in self.chooser.choices():
self.chooser.set_state(param.value)
self.widgetChanged(1, interactive=0)
def chooserCB(self, *args):
self.widgetChanged(1, interactive=1)
def fieldCB(self, interactive):
self.update()
self.widgetChanged(1, interactive)
def update(self): # field has changed
itlist = []
self.nIndices = 0
field = self.fieldwidget.get_value()
if field is not None:
iterator = field.iterator_all()
while not iterator.end():
self.nIndices += 1
it = iterator.cloneIndex()
itrepr = it.shortrepr()
itlist.append(itrepr)
iterator.next()
self.chooser.update(itlist)
self.show()
def show(self):
debug.mainthreadTest()
self.gtk.show()
if self.nIndices > 1:
self.chooser.show()
self.notapplicable.hide()
else:
self.chooser.gtk.hide()
self.notapplicable.show()
def get_value(self):
val = self.chooser.get_value()
if val is None:
return ''
return val
def cleanUp(self):
parameterwidgets.ParameterWidget.cleanUp(self)
switchboard.removeCallback(self.sbcallback)
def _makeFieldIndexParameterWidget(param, scope, verbose=False):
return FieldIndexParameterWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.FieldIndexParameter.makeWidget = _makeFieldIndexParameterWidget
class SubProblemExcluder:
def __init__(self, widget, base):
self.widget = widget
self.subps = []
self.base = base
def __call__(self, mesh):
if self.widget is None:
return mesh.subproblems()
else:
subp = self.base[self.widget.get_value()]
for subproblem in mesh.subproblems():
if (subproblem.path() != subp.path() or
subproblem.name() == "default"):
self.subps.append(subproblem)
return self.subps
############################################################
# Not a widget for listing things in a SubProblem, but for listing the
# SubProblems in a Mesh. This is only a bit of a hack, since it's
# different than the other widgets in the class. The value of a
# SubProblemParameter is the SubProblem's path (similar to a
# WhoWidget), whereas the value of the other types of MeshParamWidget
# is a real object.
class SubProblemWidget(MeshParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
self.scope = scope
self.targetwidget = self.scope.findWidget(
lambda w: isinstance(w, whowidget.WhoParameterWidget))
self.exclude = SubProblemExcluder(
self.targetwidget, subproblemcontext.subproblems)
MeshParamWidget.__init__(self, param, self.exclude, scope, name,
verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain(("new who", "SubProblem"),
self.newSubProblemCB))
def newSubProblemCB(self, subp):
self.update(interactive=False)
def get_value(self):
subp = MeshParamWidget.get_value(self)
if subp:
return subp.path()
def set_value(self, path):
try:
subp = subproblemcontext.subproblems[path]
MeshParamWidget.set_value(self, subp)
except KeyError:
pass
def _makeSubProblemWidget(param, scope, verbose=False):
return SubProblemWidget(param, scope, name=param.name, verbose=verbose)
subproblemcontext.SubProblemParameter.makeWidget = _makeSubProblemWidget
############################################################
# Special widget for mesh boundary parameters. As it turns out, these
# params take strings, not boundary objects, so we override a
# significant fraction of the foregoing machinery, which is really
# designed for parameters which take the actual object.
class MeshBoundaryParamWidget(MeshParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param, _getSortedBdyNames,
scope, name=name, separator_func=_bdysepfunc,
verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def newBdys(self, msh):
if msh is self.getSource():
self.update(interactive=0)
def update(self, interactive):
msh = self.getSource()
if msh is not None:
namelist = self.meshfunc(msh)
self.chooser.update(namelist)
## # remove from namelist the boundaries with visible=False
## for name in namelist:
## try: # ignore the separator which will cause a key error
## print name, msh.getObject().boundaries[name].visible
## if not msh.getObject().boundaries[name].visible:
## namelist.remove(name)
## except:
## pass
if namelist: # If list has nonzero length, widget is valid.
self.widgetChanged(1, interactive)
else:
self.chooser.update([])
self.widgetChanged(0, interactive)
def set_value(self, value):
if value in self.chooser.choices():
self.chooser.set_state(value)
self.widgetChanged(1, interactive=0)
def get_value(self):
return self.chooser.get_value()
def _makeBoundaryWidget(param, scope, verbose=False):
return MeshBoundaryParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshBoundaryParameter.makeWidget = _makeBoundaryWidget
# The MeshBoundaryParamWidget puts _separator_proxy in the list of
# boundaries to divide the edge boundaries from the point boundaries.
# The ChooserWidget replaces it with a real separator, using
# _bdysepfunc as the predicate.
_separator_proxy = "----------"
def _getSortedBdyNames(msh):
if config.dimension() == 2:
return (msh.edgeBoundaryNames() + [_separator_proxy] +
msh.visiblePointBoundaryNames())
return (msh.faceBoundaryNamesSorted() + [_separator_proxy] +
msh.edgeBoundaryNamesSorted() + [_separator_proxy] +
msh.visiblePointBoundaryNamesSorted())
def _bdysepfunc(model, iter):
return model[iter][0] == _separator_proxy
# Special cases for nontrivial face, edge and point boundaries.
if config.dimension() == 3:
class MeshFaceBdyParamWidget(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param,
mesh.Mesh.faceBoundaryNamesSorted,
scope, name=name, verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makeFaceBdyWidget(param, scope, verbose=False):
return MeshFaceBdyParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshFaceBdyParameter.makeWidget = _makeFaceBdyWidget
class MeshEdgeBdyParamWidget(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param,
mesh.Mesh.edgeBoundaryNames,
scope, name=name, verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makeEdgeBdyWidget(param, scope, verbose=False):
return MeshEdgeBdyParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshEdgeBdyParameter.makeWidget = _makeEdgeBdyWidget
class MeshPeriodicEdgeBdyParamWidget(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param,
mesh.Mesh.periodicEdgeBoundaryNames,
scope, name=name, verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makePeriodicEdgeBdyWidget(param, scope, verbose=False):
return MeshPeriodicEdgeBdyParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshPeriodicEdgeBdyParameter.makeWidget = _makePeriodicEdgeBdyWidget
class MeshPointBdyParamWidget(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param,
mesh.Mesh.visiblePointBoundaryNames,
scope, name=name, verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makePointBdyWidget(param, scope, verbose=False):
return MeshPointBdyParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshPointBdyParameter.makeWidget = _makePointBdyWidget
#Interface branch
def _getSortedBdyInterfaceNames(msh):
interfacemsplugin=msh.getMicrostructure().getPlugIn("Interfaces")
if config.dimension() == 2:
return msh.edgeBoundaryNames() + [_separator_proxy] + \
interfacemsplugin.getInterfaceNames()
return msh.faceBoundaryNamesSorted() + [_separator_proxy] + \
msh.edgeBoundaryNames() + [_separator_proxy] + \
interfacemsplugin.getInterfaceNames()
# This one includes the string "<every>"
if config.dimension() == 3:
def _getMeshFaceBdyNamesExtra(msh):
return [placeholder.every.IDstring] + msh.faceBoundaryNamesSorted()
def _getMeshEdgeBdyNamesExtra(msh):
return [placeholder.every.IDstring] + msh.edgeBoundaryNames()
#This one lists mesh boundary names and interface names together.
#It is no longer used, as new mesh boundaries are also
#created from interfaces (originally, mesh boundaries are
#created only based on a skeleton boundary template).
class MeshEdgeBdyInterfaceParamWidget(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param, _getSortedBdyInterfaceNames,
scope, name=name, separator_func=_bdysepfunc,
verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makeEdgeBdyInterfaceWidget(param, scope, verbose=False):
return MeshEdgeBdyInterfaceParamWidget(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshEdgeBdyInterfaceParameter.makeWidget = _makeEdgeBdyInterfaceWidget
class MeshEdgeBdyParamWidgetExtra(MeshBoundaryParamWidget):
def __init__(self, param, scope, name=None, verbose=False):
MeshParamWidget.__init__(self, param, _getMeshEdgeBdyNamesExtra,
scope, name=name, separator_func=_bdysepfunc,
verbose=verbose)
self.sbcallbacks.append(
switchboard.requestCallbackMain('mesh boundaries changed',
self.newBdys) )
def _makeEdgeBdyWidgetExtra(param, scope, verbose=False):
return MeshEdgeBdyParamWidgetExtra(param, scope, name=param.name,
verbose=verbose)
meshparameters.MeshEdgeBdyParameterExtra.makeWidget = _makeEdgeBdyWidgetExtra
| 41.89065 | 119 | 0.630538 |
51f6543a413edc1c3f4074ad94867a3838939d40 | 2,439 | py | Python | connect4/Connect4Players.py | user01/alpha-zero-general | 7edf122015e02a2e78168ac9f6eaa5c5e20600cc | [
"MIT"
] | 2,836 | 2017-12-18T02:11:38.000Z | 2022-03-30T09:07:15.000Z | connect4/Connect4Players.py | user01/alpha-zero-general | 7edf122015e02a2e78168ac9f6eaa5c5e20600cc | [
"MIT"
] | 212 | 2017-12-28T06:47:57.000Z | 2022-01-06T20:22:26.000Z | connect4/Connect4Players.py | user01/alpha-zero-general | 7edf122015e02a2e78168ac9f6eaa5c5e20600cc | [
"MIT"
] | 892 | 2017-12-18T08:56:45.000Z | 2022-03-29T23:00:45.000Z | import numpy as np
class RandomPlayer():
def __init__(self, game):
self.game = game
def play(self, board):
a = np.random.randint(self.game.getActionSize())
valids = self.game.getValidMoves(board, 1)
while valids[a] != 1:
a = np.random.randint(self.game.getActionSize())
return a
class HumanConnect4Player():
def __init__(self, game):
self.game = game
def play(self, board):
valid_moves = self.game.getValidMoves(board, 1)
print('\nMoves:', [i for (i, valid) in enumerate(valid_moves) if valid])
while True:
move = int(input())
if valid_moves[move]: break
else: print('Invalid move')
return move
class OneStepLookaheadConnect4Player():
"""Simple player who always takes a win if presented, or blocks a loss if obvious, otherwise is random."""
def __init__(self, game, verbose=True):
self.game = game
self.player_num = 1
self.verbose = verbose
def play(self, board):
valid_moves = self.game.getValidMoves(board, self.player_num)
win_move_set = set()
fallback_move_set = set()
stop_loss_move_set = set()
for move, valid in enumerate(valid_moves):
if not valid: continue
if self.player_num == self.game.getGameEnded(*self.game.getNextState(board, self.player_num, move)):
win_move_set.add(move)
if -self.player_num == self.game.getGameEnded(*self.game.getNextState(board, -self.player_num, move)):
stop_loss_move_set.add(move)
else:
fallback_move_set.add(move)
if len(win_move_set) > 0:
ret_move = np.random.choice(list(win_move_set))
if self.verbose: print('Playing winning action %s from %s' % (ret_move, win_move_set))
elif len(stop_loss_move_set) > 0:
ret_move = np.random.choice(list(stop_loss_move_set))
if self.verbose: print('Playing loss stopping action %s from %s' % (ret_move, stop_loss_move_set))
elif len(fallback_move_set) > 0:
ret_move = np.random.choice(list(fallback_move_set))
if self.verbose: print('Playing random action %s from %s' % (ret_move, fallback_move_set))
else:
            raise Exception('No valid moves remaining: %s' % self.game.stringRepresentation(board))
return ret_move
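# Usage sketch (illustrative only): every player class here exposes
# play(board) -> action, so a single move, assuming `game` is a Connect4 game
# object with the usual getInitBoard/getNextState interface used above, is:
#
#     player = OneStepLookaheadConnect4Player(game, verbose=False)
#     board = game.getInitBoard()
#     action = player.play(board)
#     board, _ = game.getNextState(board, 1, action)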
| 37.523077 | 114 | 0.621976 |
3ef16c812607e628552fee3773ff284e49010be7 | 743 | py | Python | distributed/tests/test_sizeof.py | ogrisel/distributed | bf0b861881e55be740b1987c1e9d69f90328b2b4 | [
"BSD-3-Clause"
] | 1 | 2016-07-21T04:03:22.000Z | 2016-07-21T04:03:22.000Z | distributed/tests/test_sizeof.py | minrk/distributed | 6da80822c75a069c14c55297cf9fc798416d3cd4 | [
"BSD-3-Clause"
] | null | null | null | distributed/tests/test_sizeof.py | minrk/distributed | 6da80822c75a069c14c55297cf9fc798416d3cd4 | [
"BSD-3-Clause"
] | 1 | 2019-11-12T19:17:22.000Z | 2019-11-12T19:17:22.000Z | import sys
import pytest
from distributed.sizeof import sizeof
def test_base():
assert sizeof(1) == sys.getsizeof(1)
def test_containers():
assert sizeof([1, 2, [3]]) > (sys.getsizeof(3) * 3 + sys.getsizeof([]))
def test_numpy():
np = pytest.importorskip('numpy')
assert sizeof(np.empty(1000, dtype='f8')) >= 8000
def test_pandas():
pd = pytest.importorskip('pandas')
df = pd.DataFrame({'x': [1, 2, 3], 'y': ['a'*1000, 'b'*1000, 'c'*1000]},
index=[10, 20, 30])
assert sizeof(df) >= sizeof(df.x) + sizeof(df.y) - sizeof(df.index)
assert sizeof(df.x) >= sizeof(df.index)
if pd.__version__ >= '0.17.1':
assert sizeof(df.y) >= 1000 * 3
assert sizeof(df.index) >= 20
| 23.967742 | 76 | 0.592194 |
212a6ec44d957649c5a42fe5d328daa329318fea | 7,187 | py | Python | modin/backends/pyarrow/query_compiler.py | xrmx/modin | 7f19fa2200993a0b8f009b6b603afb4a4022cec8 | [
"Apache-2.0"
] | null | null | null | modin/backends/pyarrow/query_compiler.py | xrmx/modin | 7f19fa2200993a0b8f009b6b603afb4a4022cec8 | [
"Apache-2.0"
] | null | null | null | modin/backends/pyarrow/query_compiler.py | xrmx/modin | 7f19fa2200993a0b8f009b6b603afb4a4022cec8 | [
"Apache-2.0"
] | null | null | null | from modin.backends.pandas.query_compiler import PandasQueryCompiler
import pyarrow as pa
import pandas
from pandas.core.computation.expr import Expr
from pandas.core.computation.scope import Scope
from pandas.core.computation.ops import UnaryOp, BinOp, Term, MathCall, Constant
class FakeSeries:
def __init__(self, dtype):
self.dtype = dtype
class PyarrowQueryCompiler(PandasQueryCompiler):
def query(self, expr, **kwargs):
"""Query columns of the QueryCompiler with a boolean expression.
Args:
expr: Boolean expression to query the columns with.
Returns:
QueryCompiler containing the rows where the boolean expression is satisfied.
"""
def gen_table_expr(table, expr):
resolver = {
name: FakeSeries(dtype.to_pandas_dtype())
for name, dtype in zip(table.schema.names, table.schema.types)
}
scope = Scope(level=0, resolvers=(resolver,))
return Expr(expr=expr, env=scope)
import pyarrow.gandiva as gandiva
unary_ops = {"~": "not"}
math_calls = {"log": "log", "exp": "exp", "log10": "log10", "cbrt": "cbrt"}
bin_ops = {
"+": "add",
"-": "subtract",
"*": "multiply",
"/": "divide",
"**": "power",
}
cmp_ops = {
"==": "equal",
"!=": "not_equal",
">": "greater_than",
"<": "less_than",
"<=": "less_than_or_equal_to",
">": "greater_than",
">=": "greater_than_or_equal_to",
"like": "like",
}
def build_node(table, terms, builder):
if isinstance(terms, Constant):
return builder.make_literal(
terms.value, (pa.from_numpy_dtype(terms.return_type))
)
if isinstance(terms, Term):
return builder.make_field(table.schema.field_by_name(terms.name))
if isinstance(terms, BinOp):
lnode = build_node(table, terms.lhs, builder)
rnode = build_node(table, terms.rhs, builder)
return_type = pa.from_numpy_dtype(terms.return_type)
if terms.op == "&":
return builder.make_and([lnode, rnode])
if terms.op == "|":
return builder.make_or([lnode, rnode])
if terms.op in cmp_ops:
assert return_type == pa.bool_()
return builder.make_function(
cmp_ops[terms.op], [lnode, rnode], return_type
)
if terms.op in bin_ops:
return builder.make_function(
bin_ops[terms.op], [lnode, rnode], return_type
)
if isinstance(terms, UnaryOp):
return_type = pa.from_numpy_dtype(terms.return_type)
return builder.make_function(
unary_ops[terms.op],
[build_node(table, terms.operand, builder)],
return_type,
)
if isinstance(terms, MathCall):
return_type = pa.from_numpy_dtype(terms.return_type)
                children = [
                    build_node(table, child, builder) for child in terms.operands
                ]
                return builder.make_function(
                    math_calls[terms.op], children, return_type
)
raise TypeError("Unsupported term type: %s" % terms)
def can_be_condition(expr):
if isinstance(expr.terms, BinOp):
if expr.terms.op in cmp_ops or expr.terms.op in ("&", "|"):
return True
elif isinstance(expr.terms, UnaryOp):
if expr.terms.op == "~":
return True
return False
def filter_with_selection_vector(table, s):
record_batch = table.to_batches()[0]
indices = s.to_array() # .to_numpy()
new_columns = [
pa.array(c.to_numpy()[indices]) for c in record_batch.columns
]
return pa.Table.from_arrays(new_columns, record_batch.schema.names)
def gandiva_query(table, query):
expr = gen_table_expr(table, query)
if not can_be_condition(expr):
raise ValueError("Root operation should be a filter.")
builder = gandiva.TreeExprBuilder()
root = build_node(table, expr.terms, builder)
cond = builder.make_condition(root)
filt = gandiva.make_filter(table.schema, cond)
sel_vec = filt.evaluate(table.to_batches()[0], pa.default_memory_pool())
result = filter_with_selection_vector(table, sel_vec)
return result
def gandiva_query2(table, query):
expr = gen_table_expr(table, query)
if not can_be_condition(expr):
raise ValueError("Root operation should be a filter.")
builder = gandiva.TreeExprBuilder()
root = build_node(table, expr.terms, builder)
cond = builder.make_condition(root)
filt = gandiva.make_filter(table.schema, cond)
return filt
def query_builder(arrow_table, **kwargs):
return gandiva_query(arrow_table, kwargs.get("expr", ""))
kwargs["expr"] = expr
func = self._prepare_method(query_builder, **kwargs)
new_data = self._map_across_full_axis(1, func)
# Query removes rows, so we need to update the index
new_index = self.compute_index(0, new_data, False)
return self.__constructor__(
new_data, new_index, self.columns, self._dtype_cache
)
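    # A condensed sketch of the gandiva flow that query() drives above, written
    # against a literal condition instead of a parsed pandas expression (the
    # table and column names are illustrative only):
    #
    #     import pyarrow as pa
    #     import pyarrow.gandiva as gandiva
    #
    #     table = pa.Table.from_arrays([pa.array([1.0, 2.5, 3.0])], ["x"])
    #     builder = gandiva.TreeExprBuilder()
    #     node = builder.make_function(
    #         "greater_than",
    #         [builder.make_field(table.schema.field_by_name("x")),
    #          builder.make_literal(2.0, pa.float64())],
    #         pa.bool_(),
    #     )
    #     filt = gandiva.make_filter(table.schema, builder.make_condition(node))
    #     selection = filt.evaluate(table.to_batches()[0], pa.default_memory_pool())
    #     # filter_with_selection_vector(table, selection) then keeps the rows
    #     # where x > 2.0, just as gandiva_query() does for a parsed expression.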
def compute_index(self, axis, data_object, compute_diff=True):
def arrow_index_extraction(table, axis):
if not axis:
return pandas.Index(table.column(table.num_columns - 1))
else:
try:
return pandas.Index(table.columns)
except AttributeError:
return []
index_obj = self.index if not axis else self.columns
old_blocks = self.data if compute_diff else None
new_indices = data_object.get_indices(
axis=axis,
index_func=lambda df: arrow_index_extraction(df, axis),
old_blocks=old_blocks,
)
return index_obj[new_indices] if compute_diff else new_indices
def to_pandas(self):
"""Converts Modin DataFrame to Pandas DataFrame.
Returns:
Pandas DataFrame of the QueryCompiler.
"""
return self._modin_frame.to_pandas()
def to_numpy(self):
"""Converts Modin DataFrame to NumPy Array.
Returns:
NumPy Array of the QueryCompiler.
"""
return self._modin_frame.to_numpy()
| 38.433155 | 89 | 0.54362 |
579182bcdd9d6fa5156445b464011a1a97ef2a71 | 3,080 | py | Python | Lib/site-packages/tensorflow_probability/python/experimental/inference_gym/targets/ground_truth/_numpy/german_credit_numeric_logistic_regression.py | caiyongji/tf2.3.1-py3.7.9-full-built | ace4efcbf05b2b494388739718a18c13eab83c71 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/site-packages/tensorflow_probability/python/experimental/inference_gym/targets/ground_truth/_numpy/german_credit_numeric_logistic_regression.py | caiyongji/tf2.3.1-py3.7.9-full-built | ace4efcbf05b2b494388739718a18c13eab83c71 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/site-packages/tensorflow_probability/python/experimental/inference_gym/targets/ground_truth/_numpy/german_credit_numeric_logistic_regression.py | caiyongji/tf2.3.1-py3.7.9-full-built | ace4efcbf05b2b494388739718a18c13eab83c71 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | # Lint as: python2, python3
# Copyright 2020 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
r"""Ground truth values for `german_credit_numeric_logistic_regression`.
Automatically generated using the command:
```
bazel run //tools/inference_gym_ground_truth:get_ground_truth -- \
--target \
german_credit_numeric_logistic_regression \
```
"""
import numpy as np
IDENTITY_MEAN = np.array([
-0.7351048553686668,
0.41854235448568405,
-0.4140361022849266,
0.12687262544490638,
-0.36453584119271787,
-0.1786990235929577,
-0.1528721119830067,
0.0130935930161775,
0.18071618213137836,
-0.11077840748059746,
-0.22434837978228872,
0.12239538160879522,
0.028775849958589513,
-0.13628208974727007,
-0.29222110498210363,
0.2783575897857832,
-0.2996277708109526,
0.30372734184257766,
0.27038791575592425,
0.12251564641333557,
-0.062930540861664,
-0.09271734036278598,
-0.025386265018982113,
-0.022952091856998594,
-1.2033366774193333,
]).reshape((25,))
IDENTITY_MEAN_STANDARD_ERROR = np.array([
5.842293909946494e-05,
7.242951181494356e-05,
6.287678982885978e-05,
7.585193280148798e-05,
6.115211849593741e-05,
6.021116416974708e-05,
5.204191507761724e-05,
5.860998969304511e-05,
7.29503297927934e-05,
6.490239025755679e-05,
4.990373753354614e-05,
6.283413887066306e-05,
5.430645722326503e-05,
6.406386782855579e-05,
7.892840871425272e-05,
5.308342894035861e-05,
6.703376967839617e-05,
8.521129854167403e-05,
7.765561215475798e-05,
0.00010413139262019992,
0.00010841073917598099,
6.237296545620734e-05,
9.654815236395932e-05,
9.49005719330975e-05,
6.225181243823337e-05,
]).reshape((25,))
IDENTITY_STANDARD_DEVIATION = np.array([
0.0898313720177512,
0.10433392890125515,
0.09494358976312321,
0.10821559696336329,
0.09451801286114327,
0.09209986501802636,
0.08194570882231808,
0.09096249386093944,
0.10427142118193244,
0.09706664883314095,
0.07886872456716118,
0.09415178440121623,
0.08568162266412561,
0.094635647710843,
0.11794843143366165,
0.08278578466826157,
0.10338649406760281,
0.12112997506000497,
0.11129990766341216,
0.13748697192324197,
0.14311733514054628,
0.09036915198426924,
0.12757406812435373,
0.12488837996746398,
0.09189586059142167,
]).reshape((25,))
| 27.256637 | 78 | 0.707468 |
284d53ea4fb1bd840c65a484a6f5f705aa7f15ad | 546 | py | Python | manage.py | LeeSinLiang/budget_project | 8684fe64640e900f48f042172aebe604fd94d061 | [
"MIT"
] | null | null | null | manage.py | LeeSinLiang/budget_project | 8684fe64640e900f48f042172aebe604fd94d061 | [
"MIT"
] | 6 | 2019-01-22T03:54:53.000Z | 2019-01-25T04:49:18.000Z | manage.py | joyliao07/budget_tool | a20974f47d5bfa8ef2ef285f57c7e1aafde42f29 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os
import sys
if __name__ == '__main__':
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'budget_project.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
| 34.125 | 78 | 0.690476 |
d4322d26eeebe679eb8137d6b3b652448d23d604 | 69 | py | Python | aib/releases.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 3 | 2015-02-25T19:44:43.000Z | 2020-12-18T05:49:09.000Z | aib/releases.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 1 | 2019-11-20T12:31:34.000Z | 2019-11-20T12:31:35.000Z | aib/releases.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 1 | 2020-06-07T06:25:19.000Z | 2020-06-07T06:25:19.000Z | program_version_info = (0, 1, 1)
datamodel_version_info = (0, 1, 11)
| 23 | 35 | 0.710145 |
52c803e542178f1c0d423ca8cf9d0aa762ef584f | 36,317 | py | Python | python/ccxt/kuna.py | ChristianCoenen/ccxt | 261e3549b4cfe9fa4ecf1a00feb0450337eab686 | [
"MIT"
] | null | null | null | python/ccxt/kuna.py | ChristianCoenen/ccxt | 261e3549b4cfe9fa4ecf1a00feb0450337eab686 | [
"MIT"
] | null | null | null | python/ccxt/kuna.py | ChristianCoenen/ccxt | 261e3549b4cfe9fa4ecf1a00feb0450337eab686 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# PLEASE DO NOT EDIT THIS FILE, IT IS GENERATED AND WILL BE OVERWRITTEN:
# https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-contribute-code
from ccxt.base.exchange import Exchange
from ccxt.base.errors import ArgumentsRequired
from ccxt.base.errors import InsufficientFunds
from ccxt.base.errors import OrderNotFound
from ccxt.base.errors import NotSupported
from ccxt.base.decimal_to_precision import TICK_SIZE
class kuna(Exchange):
def describe(self):
return self.deep_extend(super(kuna, self).describe(), {
'id': 'kuna',
'name': 'Kuna',
'countries': ['UA'],
'rateLimit': 1000,
'version': 'v2',
'has': {
'CORS': None,
'spot': True,
'margin': None,
'swap': False,
'future': False,
'option': False,
'cancelOrder': True,
'createOrder': True,
'fetchBalance': True,
'fetchFundingHistory': False,
'fetchFundingRate': False,
'fetchFundingRateHistory': False,
'fetchFundingRates': False,
'fetchIndexOHLCV': False,
'fetchL3OrderBook': True,
'fetchLeverage': False,
'fetchMarkets': True,
'fetchMarkOHLCV': False,
'fetchMyTrades': True,
'fetchOHLCV': 'emulated',
'fetchOpenInterestHistory': False,
'fetchOpenOrders': True,
'fetchOrder': True,
'fetchOrderBook': True,
'fetchPositions': False,
'fetchPositionsRisk': False,
'fetchPremiumIndexOHLCV': False,
'fetchTicker': True,
'fetchTickers': True,
'fetchTime': True,
'fetchTrades': True,
'fetchTradingFee': False,
'fetchTradingFees': False,
'reduceMargin': False,
'setLeverage': False,
'setPositionMode': False,
'withdraw': None,
},
'timeframes': None,
'urls': {
'extension': '.json',
'referral': 'https://kuna.io?r=kunaid-gvfihe8az7o4',
'logo': 'https://user-images.githubusercontent.com/51840849/87153927-f0578b80-c2c0-11ea-84b6-74612568e9e1.jpg',
'api': {
'xreserve': 'https://api.xreserve.fund',
'v3': 'https://api.kuna.io',
'public': 'https://kuna.io', # v2
'private': 'https://kuna.io', # v2
},
'www': 'https://kuna.io',
'doc': 'https://kuna.io/documents/api',
'fees': 'https://kuna.io/documents/api',
},
'api': {
'xreserve': {
'get': {
'nonce': 1,
'fee': 1,
'delegated-transactions': 1,
},
'post': {
'delegate-transfer': 1,
},
},
'v3': {
'public': {
'get': {
'timestamp': 1,
'currencies': 1,
'markets': 1,
'tickers': 1,
'k': 1,
'trades_history': 1,
'fees': 1,
'exchange-rates': 1,
'exchange-rates/currency': 1,
'book/market': 1,
'kuna_codes/code/check': 1,
'landing_page_statistic': 1,
'translations/locale': 1,
'trades/market/hist': 1,
},
'post': {
'http_test': 1,
'deposit_channels': 1,
'withdraw_channels': 1,
'subscription_plans': 1,
'send_to': 1,
'confirm_token': 1,
'kunaid': 1,
'withdraw/prerequest': 1,
'deposit/prerequest': 1,
'deposit/exchange-rates': 1,
},
},
'sign': {
'get': {
'reset_password/token': 1,
},
'post': {
'signup/google': 1,
'signup/resend_confirmation': 1,
'signup': 1,
'signin': 1,
'signin/two_factor': 1,
'signin/resend_confirm_device': 1,
'signin/confirm_device': 1,
'reset_password': 1,
'cool-signin': 1,
},
'put': {
'reset_password/token': 1,
'signup/code/confirm': 1,
},
},
'private': {
'post': {
'auth/w/order/submit': 1,
'auth/r/orders': 1,
'auth/r/orders/market': 1,
'auth/r/orders/markets': 1,
'auth/api_tokens/delete': 1,
'auth/api_tokens/create': 1,
'auth/api_tokens': 1,
'auth/signin_history/uniq': 1,
'auth/signin_history': 1,
'auth/disable_withdraw_confirmation': 1,
'auth/change_password': 1,
'auth/deposit_address': 1,
'auth/announcements/accept': 1,
'auth/announcements/unaccepted': 1,
'auth/otp/deactivate': 1,
'auth/otp/activate': 1,
'auth/otp/secret': 1,
'auth/r/order/market/:order_id/trades': 1,
'auth/r/orders/market/hist': 1,
'auth/r/orders/hist': 1,
'auth/r/orders/hist/markets': 1,
'auth/r/orders/details': 1,
'auth/assets-history': 1,
'auth/assets-history/withdraws': 1,
'auth/assets-history/deposits': 1,
'auth/r/wallets': 1,
'auth/markets/favorites': 1,
'auth/markets/favorites/list': 1,
'auth/me/update': 1,
'auth/me': 1,
'auth/fund_sources': 1,
'auth/fund_sources/list': 1,
'auth/withdraw/resend_confirmation': 1,
'auth/withdraw': 1,
'auth/withdraw/details': 1,
'auth/withdraw/info': 1,
'auth/payment_addresses': 1,
'auth/deposit/prerequest': 1,
'auth/deposit/exchange-rates': 1,
'auth/deposit': 1,
'auth/deposit/details': 1,
'auth/deposit/info': 1,
'auth/kuna_codes/count': 1,
'auth/kuna_codes/details': 1,
'auth/kuna_codes/edit': 1,
'auth/kuna_codes/send-pdf': 1,
'auth/kuna_codes': 1,
'auth/kuna_codes/redeemed-by-me': 1,
'auth/kuna_codes/issued-by-me': 1,
'auth/payment_requests/invoice': 1,
'auth/payment_requests/type': 1,
'auth/referral_program/weekly_earnings': 1,
'auth/referral_program/stats': 1,
'auth/merchant/payout_services': 1,
'auth/merchant/withdraw': 1,
'auth/merchant/payment_services': 1,
'auth/merchant/deposit': 1,
'auth/verification/auth_token': 1,
'auth/kunaid_purchase/create': 1,
'auth/devices/list': 1,
'auth/sessions/list': 1,
'auth/subscriptions/reactivate': 1,
'auth/subscriptions/cancel': 1,
'auth/subscriptions/prolong': 1,
'auth/subscriptions/create': 1,
'auth/subscriptions/list': 1,
'auth/kuna_ids/list': 1,
'order/cancel/multi': 1,
'order/cancel': 1,
},
'put': {
'auth/fund_sources/id': 1,
'auth/kuna_codes/redeem': 1,
},
'delete': {
'auth/markets/favorites': 1,
'auth/fund_sources': 1,
'auth/devices': 1,
'auth/devices/list': 1,
'auth/sessions/list': 1,
'auth/sessions': 1,
},
},
},
'public': {
'get': [
                        'depth', # Get depth of specified market. Both asks and bids are sorted from highest price to lowest.
                        'k_with_pending_trades', # Get K data with pending trades, which are the trades not yet included in the K data, because there's a delay between when a trade is generated and when it is processed by the K data generator
'k', # Get OHLC(k line) of specific market
'markets', # Get all available markets
'order_book', # Get the order book of specified market
'order_book/{market}',
'tickers', # Get ticker of all markets
'tickers/{market}', # Get ticker of specific market
'timestamp', # Get server current time, in seconds since Unix epoch
                        'trades', # Get recent trades on market, each trade is included only once. Trades are sorted in reverse creation order.
'trades/{market}',
],
},
'private': {
'get': [
'members/me', # Get your profile and accounts info
'deposits', # Get your deposits history
'deposit', # Get details of specific deposit
                        'deposit_address', # Where to deposit. The address field could be empty when a new address is being generated (e.g. for bitcoin); you should try again later in that case.
'orders', # Get your orders, results is paginated
'order', # Get information of specified order
                        'trades/my', # Get your executed trades. Trades are sorted in reverse creation order.
'withdraws', # Get your cryptocurrency withdraws
'withdraw', # Get your cryptocurrency withdraw
],
'post': [
'orders', # Create a Sell/Buy order
'orders/multi', # Create multiple sell/buy orders
'orders/clear', # Cancel all my orders
'order/delete', # Cancel an order
'withdraw', # Create a withdraw
],
},
},
'fees': {
'trading': {
'tierBased': False,
'percentage': True,
'taker': self.parse_number('0.0025'),
'maker': self.parse_number('0.0025'),
},
'funding': {
'withdraw': {
'UAH': '1%',
'BTC': 0.001,
'BCH': 0.001,
'ETH': 0.01,
'WAVES': 0.01,
'GOL': 0.0,
'GBG': 0.0,
# 'RMC': 0.001 BTC
# 'ARN': 0.01 ETH
# 'R': 0.01 ETH
# 'EVR': 0.01 ETH
},
'deposit': {
# 'UAH': (amount) => amount * 0.001 + 5
},
},
},
'commonCurrencies': {
'PLA': 'Plair',
},
'precisionMode': TICK_SIZE,
'exceptions': {
'2002': InsufficientFunds,
'2003': OrderNotFound,
},
})
def fetch_time(self, params={}):
"""
fetches the current integer timestamp in milliseconds from the exchange server
:param dict params: extra parameters specific to the kuna api endpoint
:returns int: the current integer timestamp in milliseconds from the exchange server
"""
response = self.publicGetTimestamp(params)
#
# 1594911427
#
return response * 1000
def fetch_markets(self, params={}):
"""
retrieves data on all markets for kuna
:param dict params: extra parameters specific to the exchange api endpoint
:returns [dict]: an array of objects representing market data
"""
quotes = ['btc', 'rub', 'uah', 'usd', 'usdt', 'usdc']
markets = []
response = self.publicGetTickers(params)
#
# {
# shibuah: {
# at: '1644463685',
# ticker: {
# buy: '0.000911',
# sell: '0.00092',
# low: '0.000872',
# high: '0.000963',
# last: '0.000911',
# vol: '1539278096.0',
# price: '1434244.211249'
# }
# }
# }
#
ids = list(response.keys())
for i in range(0, len(ids)):
id = ids[i]
for j in range(0, len(quotes)):
quoteId = quotes[j]
# usd gets matched before usdt in usdtusd USDT/USD
# https://github.com/ccxt/ccxt/issues/9868
slicedId = id[1:]
index = slicedId.find(quoteId)
slice = slicedId[index:]
if (index > 0) and (slice == quoteId):
# usd gets matched before usdt in usdtusd USDT/USD
# https://github.com/ccxt/ccxt/issues/9868
baseId = id[0] + slicedId.replace(quoteId, '')
base = self.safe_currency_code(baseId)
quote = self.safe_currency_code(quoteId)
markets.append({
'id': id,
'symbol': base + '/' + quote,
'base': base,
'quote': quote,
'settle': None,
'baseId': baseId,
'quoteId': quoteId,
'settleId': None,
'type': 'spot',
'spot': True,
'margin': False,
'swap': False,
'future': False,
'option': False,
'active': None,
'contract': False,
'linear': None,
'inverse': None,
'contractSize': None,
'expiry': None,
'expiryDatetime': None,
'strike': None,
'optionType': None,
'precision': {
'amount': None,
'price': None,
},
'limits': {
'leverage': {
'min': None,
'max': None,
},
'amount': {
'min': None,
'max': None,
},
'price': {
'min': None,
'max': None,
},
'cost': {
'min': None,
'max': None,
},
},
'info': None,
})
return markets
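    # Worked example of the quote-matching slice above (a sketch derived from
    # the code itself): for id = 'usdtusd' and quoteId = 'usd',
    #
    #     slicedId = id[1:]               # 'sdtusd'
    #     index = slicedId.find(quoteId)  # 3
    #     slice = slicedId[index:]        # 'usd', equals quoteId
    #     baseId = id[0] + slicedId.replace(quoteId, '')  # 'u' + 'sdt' = 'usdt'
    #
    # so the market resolves to USDT/USD, while quoteId = 'usdt' never matches
    # because 'usdt' does not occur in 'sdtusd'.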
def fetch_order_book(self, symbol, limit=None, params={}):
"""
fetches information on open orders with bid(buy) and ask(sell) prices, volumes and other data
:param str symbol: unified symbol of the market to fetch the order book for
:param int|None limit: the maximum amount of order book entries to return
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: A dictionary of `order book structures <https://docs.ccxt.com/en/latest/manual.html#order-book-structure>` indexed by market symbols
"""
self.load_markets()
market = self.market(symbol)
request = {
'market': market['id'],
}
if limit is not None:
request['limit'] = limit # default = 300
orderbook = self.publicGetDepth(self.extend(request, params))
timestamp = self.safe_timestamp(orderbook, 'timestamp')
return self.parse_order_book(orderbook, symbol, timestamp)
def parse_ticker(self, ticker, market=None):
timestamp = self.safe_timestamp(ticker, 'at')
ticker = ticker['ticker']
symbol = self.safe_symbol(None, market)
last = self.safe_string(ticker, 'last')
return self.safe_ticker({
'symbol': symbol,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'high': self.safe_string(ticker, 'high'),
'low': self.safe_string(ticker, 'low'),
'bid': self.safe_string(ticker, 'buy'),
'bidVolume': None,
'ask': self.safe_string(ticker, 'sell'),
'askVolume': None,
'vwap': None,
'open': self.safe_string(ticker, 'open'),
'close': last,
'last': last,
'previousClose': None,
'change': None,
'percentage': None,
'average': None,
'baseVolume': self.safe_string(ticker, 'vol'),
'quoteVolume': None,
'info': ticker,
}, market)
def fetch_tickers(self, symbols=None, params={}):
"""
        fetches price tickers for multiple markets, statistical calculations with the information calculated over the past 24 hours for each market
:param [str]|None symbols: unified symbols of the markets to fetch the ticker for, all market tickers are returned if not assigned
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: an array of `ticker structures <https://docs.ccxt.com/en/latest/manual.html#ticker-structure>`
"""
self.load_markets()
response = self.publicGetTickers(params)
ids = list(response.keys())
result = {}
for i in range(0, len(ids)):
id = ids[i]
market = None
symbol = id
if id in self.markets_by_id:
market = self.markets_by_id[id]
symbol = market['symbol']
else:
base = id[0:3]
quote = id[3:6]
base = base.upper()
quote = quote.upper()
base = self.safe_currency_code(base)
quote = self.safe_currency_code(quote)
symbol = base + '/' + quote
result[symbol] = self.parse_ticker(response[id], market)
return self.filter_by_array(result, 'symbol', symbols)
def fetch_ticker(self, symbol, params={}):
"""
fetches a price ticker, a statistical calculation with the information calculated over the past 24 hours for a specific market
:param str symbol: unified symbol of the market to fetch the ticker for
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: a `ticker structure <https://docs.ccxt.com/en/latest/manual.html#ticker-structure>`
"""
self.load_markets()
market = self.market(symbol)
request = {
'market': market['id'],
}
response = self.publicGetTickersMarket(self.extend(request, params))
return self.parse_ticker(response, market)
def fetch_l3_order_book(self, symbol, limit=None, params={}):
return self.fetch_order_book(symbol, limit, params)
def fetch_trades(self, symbol, since=None, limit=None, params={}):
"""
get the list of most recent trades for a particular symbol
:param str symbol: unified symbol of the market to fetch trades for
:param int|None since: timestamp in ms of the earliest trade to fetch
:param int|None limit: the maximum amount of trades to fetch
:param dict params: extra parameters specific to the kuna api endpoint
:returns [dict]: a list of `trade structures <https://docs.ccxt.com/en/latest/manual.html?#public-trades>`
"""
self.load_markets()
market = self.market(symbol)
request = {
'market': market['id'],
}
response = self.publicGetTrades(self.extend(request, params))
#
# [
# {
# "id":11353466,
# "price":"3000.16",
# "volume":"0.000397",
# "funds":"1.19106352",
# "market":"ethusdt",
# "created_at":"2022-04-12T18:32:36Z",
# "side":null,
# "trend":"sell"
# },
# ]
#
return self.parse_trades(response, market, since, limit)
def parse_trade(self, trade, market=None):
#
# fetchTrades(public)
#
# {
# "id":11353466,
# "price":"3000.16",
# "volume":"0.000397",
# "funds":"1.19106352",
# "market":"ethusdt",
# "created_at":"2022-04-12T18:32:36Z",
# "side":null,
# "trend":"sell"
# }
#
# fetchMyTrades(private)
#
# {
# "id":11353719,
# "price":"0.13566",
# "volume":"99.0",
# "funds":"13.43034",
# "market":"dogeusdt",
# "created_at":"2022-04-12T18:58:44Z",
# "side":"ask",
# "order_id":1665670371,
# "trend":"buy"
# }
#
timestamp = self.parse8601(self.safe_string(trade, 'created_at'))
symbol = None
if market:
symbol = market['symbol']
side = self.safe_string_2(trade, 'side', 'trend')
if side is not None:
sideMap = {
'ask': 'sell',
'bid': 'buy',
}
side = self.safe_string(sideMap, side, side)
priceString = self.safe_string(trade, 'price')
amountString = self.safe_string(trade, 'volume')
costString = self.safe_number(trade, 'funds')
orderId = self.safe_string(trade, 'order_id')
id = self.safe_string(trade, 'id')
return self.safe_trade({
'id': id,
'info': trade,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'symbol': symbol,
'type': None,
'side': side,
'order': orderId,
'takerOrMaker': None,
'price': priceString,
'amount': amountString,
'cost': costString,
'fee': None,
}, market)
def fetch_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}):
"""
fetches historical candlestick data containing the open, high, low, and close price, and the volume of a market
:param str symbol: unified symbol of the market to fetch OHLCV data for
:param str timeframe: the length of time each candle represents
:param int|None since: timestamp in ms of the earliest candle to fetch
:param int|None limit: the maximum amount of candles to fetch
:param dict params: extra parameters specific to the kuna api endpoint
:returns [[int]]: A list of candles ordered as timestamp, open, high, low, close, volume
"""
self.load_markets()
trades = self.fetch_trades(symbol, since, limit, params)
ohlcvc = self.build_ohlcvc(trades, timeframe, since, limit)
result = []
for i in range(0, len(ohlcvc)):
ohlcv = ohlcvc[i]
result.append([
ohlcv[0],
ohlcv[1],
ohlcv[2],
ohlcv[3],
ohlcv[4],
ohlcv[5],
])
return result
def parse_balance(self, response):
balances = self.safe_value(response, 'accounts', [])
result = {'info': balances}
for i in range(0, len(balances)):
balance = balances[i]
currencyId = self.safe_string(balance, 'currency')
code = self.safe_currency_code(currencyId)
account = self.account()
account['free'] = self.safe_string(balance, 'balance')
account['used'] = self.safe_string(balance, 'locked')
result[code] = account
return self.safe_balance(result)
def fetch_balance(self, params={}):
"""
query for balance and get the amount of funds available for trading or funds locked in orders
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: a `balance structure <https://docs.ccxt.com/en/latest/manual.html?#balance-structure>`
"""
self.load_markets()
response = self.privateGetMembersMe(params)
return self.parse_balance(response)
def create_order(self, symbol, type, side, amount, price=None, params={}):
"""
create a trade order
:param str symbol: unified symbol of the market to create an order in
:param str type: 'market' or 'limit'
:param str side: 'buy' or 'sell'
:param float amount: how much of currency you want to trade in units of base currency
        :param float price: the price at which the order is to be fulfilled, in units of the quote currency, ignored in market orders
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: an `order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
self.load_markets()
request = {
'market': self.market_id(symbol),
'side': side,
'volume': str(amount),
'ord_type': type,
}
if type == 'limit':
request['price'] = str(price)
response = self.privatePostOrders(self.extend(request, params))
marketId = self.safe_value(response, 'market')
market = self.safe_value(self.markets_by_id, marketId)
return self.parse_order(response, market)
def cancel_order(self, id, symbol=None, params={}):
"""
cancels an open order
:param str id: order id
:param str|None symbol: not used by kuna cancelOrder()
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: An `order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
self.load_markets()
request = {
'id': id,
}
response = self.privatePostOrderDelete(self.extend(request, params))
order = self.parse_order(response)
status = order['status']
if status == 'closed' or status == 'canceled':
raise OrderNotFound(self.id + ' ' + self.json(order))
return order
def parse_order_status(self, status):
statuses = {
'done': 'closed',
'wait': 'open',
'cancel': 'canceled',
}
return self.safe_string(statuses, status, status)
def parse_order(self, order, market=None):
marketId = self.safe_string(order, 'market')
symbol = self.safe_symbol(marketId, market)
timestamp = self.parse8601(self.safe_string(order, 'created_at'))
status = self.parse_order_status(self.safe_string(order, 'state'))
type = self.safe_string(order, 'type')
side = self.safe_string(order, 'side')
id = self.safe_string(order, 'id')
return self.safe_order({
'id': id,
'clientOrderId': None,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'lastTradeTimestamp': None,
'status': status,
'symbol': symbol,
'type': type,
'timeInForce': None,
'postOnly': None,
'side': side,
'price': self.safe_string(order, 'price'),
'stopPrice': None,
'amount': self.safe_string(order, 'volume'),
'filled': self.safe_string(order, 'executed_volume'),
'remaining': self.safe_string(order, 'remaining_volume'),
'trades': None,
'fee': None,
'info': order,
'cost': None,
'average': None,
}, market)
def fetch_order(self, id, symbol=None, params={}):
"""
fetches information on an order made by the user
        :param str id: order id
        :param str|None symbol: not used by kuna fetchOrder
:param dict params: extra parameters specific to the kuna api endpoint
:returns dict: An `order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
self.load_markets()
request = {
'id': int(id),
}
response = self.privateGetOrder(self.extend(request, params))
return self.parse_order(response)
def fetch_open_orders(self, symbol=None, since=None, limit=None, params={}):
if symbol is None:
raise ArgumentsRequired(self.id + ' fetchOpenOrders() requires a symbol argument')
self.load_markets()
market = self.market(symbol)
request = {
'market': market['id'],
}
response = self.privateGetOrders(self.extend(request, params))
# todo emulation of fetchClosedOrders, fetchOrders, fetchOrder
# with order cache + fetchOpenOrders
# as in BTC-e, Liqui, Yobit, DSX, Tidex, WEX
return self.parse_orders(response, market, since, limit)
def fetch_my_trades(self, symbol=None, since=None, limit=None, params={}):
#
# [
# {
# "id":11353719,
# "price":"0.13566",
# "volume":"99.0",
# "funds":"13.43034",
# "market":"dogeusdt",
# "created_at":"2022-04-12T18:58:44Z",
# "side":"ask",
# "order_id":1665670371,
# "trend":"buy"
# },
# ]
#
if symbol is None:
raise ArgumentsRequired(self.id + ' fetchMyTrades() requires a symbol argument')
self.load_markets()
market = self.market(symbol)
request = {
'market': market['id'],
}
response = self.privateGetTradesMy(self.extend(request, params))
return self.parse_trades(response, market, since, limit)
def nonce(self):
return self.milliseconds()
def encode_params(self, params):
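        # kuna expects batched orders to be url-encoded as repeated orders[][<key>]=<value>
        # pairs; '%5B' and '%5D' below are simply the percent-encoded '[' and ']' characters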
if 'orders' in params:
orders = params['orders']
query = self.urlencode(self.keysort(self.omit(params, 'orders')))
for i in range(0, len(orders)):
order = orders[i]
keys = list(order.keys())
for k in range(0, len(keys)):
key = keys[k]
value = order[key]
query += '&orders%5B%5D%5B' + key + '%5D=' + str(value)
return query
return self.urlencode(self.keysort(params))
def sign(self, path, api='public', method='GET', params={}, headers=None, body=None):
url = None
if isinstance(api, list):
version, access = api
url = self.urls['api'][version] + '/' + version + '/' + self.implode_params(path, params)
if access == 'public':
if method == 'GET':
if params:
url += '?' + self.urlencode(params)
elif (method == 'POST') or (method == 'PUT'):
headers = {'Content-Type': 'application/json'}
body = self.json(params)
elif access == 'private':
raise NotSupported(self.id + ' private v3 API is not supported yet')
else:
request = '/api/' + self.version + '/' + self.implode_params(path, params)
if 'extension' in self.urls:
request += self.urls['extension']
query = self.omit(params, self.extract_params(path))
url = self.urls['api'][api] + request
if api == 'public':
if query:
url += '?' + self.urlencode(query)
else:
self.check_required_credentials()
nonce = str(self.nonce())
query = self.encode_params(self.extend({
'access_key': self.apiKey,
'tonce': nonce,
}, params))
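                # private v2 requests are signed with an HMAC of the string
                # "<METHOD>|<request path>|<urlencoded query>" using the API secret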
auth = method + '|' + request + '|' + query
signed = self.hmac(self.encode(auth), self.encode(self.secret))
suffix = query + '&signature=' + signed
if method == 'GET':
url += '?' + suffix
else:
body = suffix
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
return {'url': url, 'method': method, 'body': body, 'headers': headers}
def handle_errors(self, code, reason, url, method, headers, body, response, requestHeaders, requestBody):
if response is None:
return
if code == 400:
error = self.safe_value(response, 'error')
errorCode = self.safe_string(error, 'code')
feedback = self.id + ' ' + self.json(response)
self.throw_exactly_matched_exception(self.exceptions, errorCode, feedback)
# fallback to default error handler
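# A minimal usage sketch of the public endpoints implemented above, assuming this
# module is installed as the standard ccxt `kuna` exchange and network access to
# the exchange is available (the symbol and limit below are illustrative only):
if __name__ == '__main__':
    import ccxt
    exchange = ccxt.kuna()
    print(exchange.fetch_ticker('BTC/UAH'))                # unified ticker structure
    print(exchange.fetch_order_book('BTC/UAH', limit=5))   # parsed order book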
| 43.286055 | 212 | 0.459372 |
10ccd2fcdf6883a985844a6176f6c907bbbc7a50 | 368 | py | Python | lims/addressbook/serializers.py | sqilz/LIMS-Backend | b64e1fa512f89e4492803d44c6b8c35e4d4724cc | [
"MIT"
] | 12 | 2017-03-01T10:39:36.000Z | 2022-01-04T06:17:19.000Z | lims/addressbook/serializers.py | sqilz/LIMS-Backend | b64e1fa512f89e4492803d44c6b8c35e4d4724cc | [
"MIT"
] | 29 | 2017-04-25T14:05:08.000Z | 2021-06-21T14:41:53.000Z | lims/addressbook/serializers.py | sqilz/LIMS-Backend | b64e1fa512f89e4492803d44c6b8c35e4d4724cc | [
"MIT"
] | 4 | 2017-10-11T16:22:53.000Z | 2021-02-23T15:45:21.000Z | from django.contrib.auth.models import User
from rest_framework import serializers
from .models import Address
class AddressSerializer(serializers.ModelSerializer):
user = serializers.SlugRelatedField(queryset=User.objects.all(),
slug_field='username')
class Meta:
model = Address
fields = "__all__"
| 26.285714 | 68 | 0.673913 |
d398f8db92fc5d3154bab08187e3c155a8023778 | 2,244 | py | Python | tests/test_plots.py | jwise77/ytree | 8bd905bb0995383c1285aeba586d41859f494a9b | [
"BSD-3-Clause-Clear"
] | null | null | null | tests/test_plots.py | jwise77/ytree | 8bd905bb0995383c1285aeba586d41859f494a9b | [
"BSD-3-Clause-Clear"
] | null | null | null | tests/test_plots.py | jwise77/ytree | 8bd905bb0995383c1285aeba586d41859f494a9b | [
"BSD-3-Clause-Clear"
] | null | null | null | """
tests for plotting
"""
#-----------------------------------------------------------------------------
# Copyright (c) ytree development team. All rights reserved.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
import os
from ytree.utilities.testing import \
requires_file, \
TempDirTest
import ytree
CT = "consistent_trees/tree_0_0_0.dat"
class TreePlotTest(TempDirTest):
@requires_file(CT)
def test_default_plot(self):
a = ytree.load(CT)
p = ytree.TreePlot(a[0])
p.save()
@requires_file(CT)
def test_non_defaults(self):
attrs = {'size_field': 'virial_radius',
'size_log': False,
'min_mass': 1e14,
'min_mass_ratio': 0.1}
a = ytree.load(CT)
for attr, val in attrs.items():
p = ytree.TreePlot(a[0])
setattr(p, attr, val)
p.save()
@requires_file(CT)
def test_save(self):
a = ytree.load(CT)
p = ytree.TreePlot(a[0])
p.save('tree.png')
@requires_file(CT)
def test_dot_kwargs(self):
a = ytree.load(CT)
p = ytree.TreePlot(a[0], dot_kwargs={'dpi': 200})
p.save()
@requires_file(CT)
def test_node_function(self):
def my_func(halo):
label = "%d" % halo['uid']
return {"label": label}
a = ytree.load(CT)
p = ytree.TreePlot(a[0], node_function=my_func)
p.save()
@requires_file(CT)
def test_node_function_bad(self):
a = ytree.load(CT)
with self.assertRaises(RuntimeError):
ytree.TreePlot(a[0], node_function='notafunc')
@requires_file(CT)
def test_edge_function(self):
def my_func(desc, anc):
return {"color": "red"}
a = ytree.load(CT)
p = ytree.TreePlot(a[0], edge_function=my_func)
p.save()
@requires_file(CT)
def test_edge_function_bad(self):
a = ytree.load(CT)
with self.assertRaises(RuntimeError):
ytree.TreePlot(a[0], edge_function='notafunc')
| 25.793103 | 78 | 0.543672 |
8b32b5ad3d4bc0615f42aa3b3ebd30ae2aec93c9 | 2,519 | py | Python | docs/conf.py | fighterpoul/gitflow_linter | f856284b2545a7b307d158fcc524bc047884c0a0 | [
"MIT"
] | null | null | null | docs/conf.py | fighterpoul/gitflow_linter | f856284b2545a7b307d158fcc524bc047884c0a0 | [
"MIT"
] | null | null | null | docs/conf.py | fighterpoul/gitflow_linter | f856284b2545a7b307d158fcc524bc047884c0a0 | [
"MIT"
] | null | null | null | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
import gitflow_linter
project = 'gitflow_linter'
copyright = '2021, Poul Fighter'
author = 'Poul Fighter'
# The full version, including alpha/beta/rc tags
release = '0.0.5'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosectionlabel']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'nature'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
command = 'gitflow-linter'
url = 'https://github.com/fighterpoul/gitflow_linter.git'
rst_epilog = """
.. |version| replace:: {versionnum}
.. |project| replace:: {project}
.. |command| replace:: {command}
.. |url| replace:: {url}
.. |doc_url| replace:: {doc_url}
.. |settings_file| replace:: {settings_file}
""".format(
versionnum=release,
project=project,
command=command,
url=url,
doc_url='https://fighterpoul.github.io/gitflow_linter/',
settings_file=gitflow_linter.DEFAULT_LINTER_OPTIONS,
)
| 34.506849 | 79 | 0.674474 |
5269ad9515b35d968279819f2e0a951dc0da6ea4 | 367 | py | Python | django_project/MVC/models.py | GudniNatan/Super_Duper_Ultra_Lokaverkefni | 7360c65c854154cb86d8f2e8a0da93e753562de2 | [
"MIT"
] | null | null | null | django_project/MVC/models.py | GudniNatan/Super_Duper_Ultra_Lokaverkefni | 7360c65c854154cb86d8f2e8a0da93e753562de2 | [
"MIT"
] | 4 | 2017-12-03T00:09:11.000Z | 2017-12-03T00:21:11.000Z | django_project/MVC/models.py | GudniNatan/Super_Duper_Ultra_Lokaverkefni | 7360c65c854154cb86d8f2e8a0da93e753562de2 | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
class Book(models.Model):
book_title = models.CharField(max_length=255)
book_author = models.CharField(max_length=255)
book_publisher = models.CharField(max_length=255)
book_year = models.PositiveIntegerField()
def __str__(self):
return self.book_title + ' by ' + self.book_author | 33.363636 | 59 | 0.73297 |
a8e7565e8810f2f462f5b807e2a222bb271a5501 | 19,753 | py | Python | localstack/services/dynamodb/dynamodb_listener.py | nghiadhd/localstack | 4b932c5b0e2203d064e7e4577562cf88c42c5e38 | [
"Apache-2.0"
] | null | null | null | localstack/services/dynamodb/dynamodb_listener.py | nghiadhd/localstack | 4b932c5b0e2203d064e7e4577562cf88c42c5e38 | [
"Apache-2.0"
] | null | null | null | localstack/services/dynamodb/dynamodb_listener.py | nghiadhd/localstack | 4b932c5b0e2203d064e7e4577562cf88c42c5e38 | [
"Apache-2.0"
] | 6 | 2019-07-10T10:27:54.000Z | 2021-04-08T09:59:54.000Z | import re
import json
import random
import logging
import threading
from binascii import crc32
from requests.models import Response
from localstack import config
from localstack.utils.aws import aws_stack
from localstack.utils.common import to_bytes, to_str, clone
from localstack.utils.analytics import event_publisher
from localstack.services.awslambda import lambda_api
from localstack.services.generic_proxy import ProxyListener
from localstack.services.dynamodbstreams import dynamodbstreams_api
# cache table definitions - used for testing
TABLE_DEFINITIONS = {}
# action header prefix
ACTION_PREFIX = 'DynamoDB_20120810'
# set up logger
LOGGER = logging.getLogger(__name__)
class ProxyListenerDynamoDB(ProxyListener):
thread_local = threading.local()
def __init__(self):
self._table_ttl_map = {}
def forward_request(self, method, path, data, headers):
if path.startswith('/shell'):
return True
data = json.loads(to_str(data))
ddb_client = aws_stack.connect_to_service('dynamodb')
if random.random() < config.DYNAMODB_ERROR_PROBABILITY:
return error_response_throughput()
action = headers.get('X-Amz-Target')
if action == '%s.CreateTable' % ACTION_PREFIX:
# Check if table exists, to avoid error log output from DynamoDBLocal
table_names = ddb_client.list_tables()['TableNames']
if to_str(data['TableName']) in table_names:
return 200
elif action in ('%s.PutItem' % ACTION_PREFIX, '%s.UpdateItem' % ACTION_PREFIX, '%s.DeleteItem' % ACTION_PREFIX):
# find an existing item and store it in a thread-local, so we can access it in return_response,
# in order to determine whether an item already existed (MODIFY) or not (INSERT)
ProxyListenerDynamoDB.thread_local.existing_item = find_existing_item(data)
elif action == '%s.DescribeTable' % ACTION_PREFIX:
# Check if table exists, to avoid error log output from DynamoDBLocal
table_names = ddb_client.list_tables()['TableNames']
if to_str(data['TableName']) not in table_names:
response = error_response(message='Cannot do operations on a non-existent table',
error_type='ResourceNotFoundException')
fix_headers_for_updated_response(response)
return response
elif action == '%s.DeleteTable' % ACTION_PREFIX:
# Check if table exists, to avoid error log output from DynamoDBLocal
table_names = ddb_client.list_tables()['TableNames']
if to_str(data['TableName']) not in table_names:
response = error_response(message='Cannot do operations on a non-existent table',
error_type='ResourceNotFoundException')
fix_headers_for_updated_response(response)
return response
elif action == '%s.BatchWriteItem' % ACTION_PREFIX:
existing_items = []
for table_name in sorted(data['RequestItems'].keys()):
for request in data['RequestItems'][table_name]:
for key in ['PutRequest', 'DeleteRequest']:
inner_request = request.get(key)
if inner_request:
existing_items.append(find_existing_item(inner_request, table_name))
ProxyListenerDynamoDB.thread_local.existing_items = existing_items
elif action == '%s.TransactWriteItems' % ACTION_PREFIX:
existing_items = []
for item in data['TransactItems']:
for key in ['Put', 'Update', 'Delete']:
inner_item = item.get(key)
if inner_item:
existing_items.append(find_existing_item(inner_item))
ProxyListenerDynamoDB.thread_local.existing_items = existing_items
elif action == '%s.UpdateTimeToLive' % ACTION_PREFIX:
# TODO: TTL status is maintained/mocked but no real expiry is happening for items
response = Response()
response.status_code = 200
self._table_ttl_map[data['TableName']] = {
'AttributeName': data['TimeToLiveSpecification']['AttributeName'],
'Status': data['TimeToLiveSpecification']['Enabled']
}
response._content = json.dumps({'TimeToLiveSpecification': data['TimeToLiveSpecification']})
fix_headers_for_updated_response(response)
return response
elif action == '%s.DescribeTimeToLive' % ACTION_PREFIX:
response = Response()
response.status_code = 200
if data['TableName'] in self._table_ttl_map:
if self._table_ttl_map[data['TableName']]['Status']:
ttl_status = 'ENABLED'
else:
ttl_status = 'DISABLED'
response._content = json.dumps({
'TimeToLiveDescription': {
'AttributeName': self._table_ttl_map[data['TableName']]['AttributeName'],
'TimeToLiveStatus': ttl_status
}
})
else: # TTL for dynamodb table not set
response._content = json.dumps({'TimeToLiveDescription': {'TimeToLiveStatus': 'DISABLED'}})
fix_headers_for_updated_response(response)
return response
elif action == '%s.TagResource' % ACTION_PREFIX or action == '%s.UntagResource' % ACTION_PREFIX:
response = Response()
response.status_code = 200
response._content = '' # returns an empty body on success.
fix_headers_for_updated_response(response)
return response
elif action == '%s.ListTagsOfResource' % ACTION_PREFIX:
response = Response()
response.status_code = 200
response._content = json.dumps({'Tags': []}) # TODO: mocked and returns an empty list of tags for now.
fix_headers_for_updated_response(response)
return response
return True
def return_response(self, method, path, data, headers, response):
if path.startswith('/shell'):
return
data = json.loads(to_str(data))
# update table definitions
if data and 'TableName' in data and 'KeySchema' in data:
TABLE_DEFINITIONS[data['TableName']] = data
if response._content:
# fix the table and latest stream ARNs (DynamoDBLocal hardcodes "ddblocal" as the region)
content_replaced = re.sub(r'("TableArn"|"LatestStreamArn"|"StreamArn")\s*:\s*"arn:aws:dynamodb:' +
'ddblocal:([^"]+)"', r'\1: "arn:aws:dynamodb:%s:\2"' % aws_stack.get_local_region(),
to_str(response._content))
if content_replaced != response._content:
response._content = content_replaced
fix_headers_for_updated_response(response)
action = headers.get('X-Amz-Target')
if not action:
return
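        # skeleton of a DynamoDB Streams event record; eventName, Keys and the
        # Old/New images are filled in below depending on the DynamoDB action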
record = {
'eventID': '1',
'eventVersion': '1.0',
'dynamodb': {
'StreamViewType': 'NEW_AND_OLD_IMAGES',
'SizeBytes': -1
},
'awsRegion': config.DEFAULT_REGION,
'eventSource': 'aws:dynamodb'
}
records = [record]
if action == '%s.UpdateItem' % ACTION_PREFIX:
if response.status_code == 200:
updated_item = find_existing_item(data)
if not updated_item:
return
record['eventName'] = 'MODIFY'
record['dynamodb']['Keys'] = data['Key']
record['dynamodb']['OldImage'] = self._thread_local('existing_item')
record['dynamodb']['NewImage'] = updated_item
record['dynamodb']['SizeBytes'] = len(json.dumps(updated_item))
elif action == '%s.BatchWriteItem' % ACTION_PREFIX:
records = self.prepare_batch_write_item_records(record, data)
elif action == '%s.TransactWriteItems' % ACTION_PREFIX:
records = self.prepare_transact_write_item_records(record, data)
elif action == '%s.PutItem' % ACTION_PREFIX:
if response.status_code == 200:
existing_item = self._thread_local('existing_item')
record['eventName'] = 'INSERT' if not existing_item else 'MODIFY'
keys = dynamodb_extract_keys(item=data['Item'], table_name=data['TableName'])
if isinstance(keys, Response):
return keys
record['dynamodb']['Keys'] = keys
record['dynamodb']['NewImage'] = data['Item']
record['dynamodb']['SizeBytes'] = len(json.dumps(data['Item']))
if existing_item:
record['dynamodb']['OldImage'] = existing_item
elif action == '%s.GetItem' % ACTION_PREFIX:
if response.status_code == 200:
content = json.loads(to_str(response.content))
# make sure we append 'ConsumedCapacity', which is properly
# returned by dynalite, but not by AWS's DynamoDBLocal
if 'ConsumedCapacity' not in content and data.get('ReturnConsumedCapacity') in ('TOTAL', 'INDEXES'):
content['ConsumedCapacity'] = {
'CapacityUnits': 0.5, # TODO hardcoded
'TableName': data['TableName']
}
response._content = json.dumps(content)
fix_headers_for_updated_response(response)
elif action == '%s.DeleteItem' % ACTION_PREFIX:
if response.status_code == 200:
old_item = self._thread_local('existing_item')
record['eventName'] = 'REMOVE'
record['dynamodb']['Keys'] = data['Key']
record['dynamodb']['OldImage'] = old_item
elif action == '%s.CreateTable' % ACTION_PREFIX:
if 'StreamSpecification' in data:
if response.status_code == 200:
content = json.loads(to_str(response._content))
create_dynamodb_stream(data, content['TableDescription']['LatestStreamLabel'])
event_publisher.fire_event(event_publisher.EVENT_DYNAMODB_CREATE_TABLE,
payload={'n': event_publisher.get_hash(data['TableName'])})
return
elif action == '%s.DeleteTable' % ACTION_PREFIX:
event_publisher.fire_event(event_publisher.EVENT_DYNAMODB_DELETE_TABLE,
payload={'n': event_publisher.get_hash(data['TableName'])})
return
elif action == '%s.UpdateTable' % ACTION_PREFIX:
if 'StreamSpecification' in data:
if response.status_code == 200:
content = json.loads(to_str(response._content))
create_dynamodb_stream(data, content['TableDescription']['LatestStreamLabel'])
return
else:
# nothing to do
return
if len(records) > 0 and 'eventName' in records[0]:
if 'TableName' in data:
records[0]['eventSourceARN'] = aws_stack.dynamodb_table_arn(data['TableName'])
forward_to_lambda(records)
forward_to_ddb_stream(records)
def prepare_batch_write_item_records(self, record, data):
records = []
i = 0
for table_name in sorted(data['RequestItems'].keys()):
for request in data['RequestItems'][table_name]:
put_request = request.get('PutRequest')
if put_request:
existing_item = self._thread_local('existing_items')[i]
keys = dynamodb_extract_keys(item=put_request['Item'], table_name=table_name)
if isinstance(keys, Response):
return keys
new_record = clone(record)
new_record['eventName'] = 'INSERT' if not existing_item else 'MODIFY'
new_record['dynamodb']['Keys'] = keys
new_record['dynamodb']['NewImage'] = put_request['Item']
if existing_item:
new_record['dynamodb']['OldImage'] = existing_item
new_record['eventSourceARN'] = aws_stack.dynamodb_table_arn(table_name)
records.append(new_record)
delete_request = request.get('DeleteRequest')
if delete_request:
keys = delete_request['Key']
if isinstance(keys, Response):
return keys
new_record = clone(record)
new_record['eventName'] = 'REMOVE'
new_record['dynamodb']['Keys'] = keys
new_record['dynamodb']['OldImage'] = self._thread_local('existing_items')[i]
new_record['eventSourceARN'] = aws_stack.dynamodb_table_arn(table_name)
records.append(new_record)
i += 1
return records
def prepare_transact_write_item_records(self, record, data):
records = []
for i, request in enumerate(data['TransactItems']):
put_request = request.get('Put')
if put_request:
existing_item = self._thread_local('existing_items')[i]
table_name = put_request['TableName']
keys = dynamodb_extract_keys(item=put_request['Item'], table_name=table_name)
if isinstance(keys, Response):
return keys
new_record = clone(record)
new_record['eventName'] = 'INSERT' if not existing_item else 'MODIFY'
new_record['dynamodb']['Keys'] = keys
new_record['dynamodb']['NewImage'] = put_request['Item']
if existing_item:
new_record['dynamodb']['OldImage'] = existing_item
new_record['eventSourceARN'] = aws_stack.dynamodb_table_arn(table_name)
records.append(new_record)
update_request = request.get('Update')
if update_request:
table_name = update_request['TableName']
keys = update_request['Key']
if isinstance(keys, Response):
return keys
updated_item = find_existing_item(update_request, table_name)
if not updated_item:
return
new_record = clone(record)
new_record['eventName'] = 'MODIFY'
new_record['dynamodb']['Keys'] = keys
new_record['dynamodb']['OldImage'] = self._thread_local('existing_items')[i]
new_record['dynamodb']['NewImage'] = updated_item
new_record['eventSourceARN'] = aws_stack.dynamodb_table_arn(table_name)
records.append(new_record)
delete_request = request.get('Delete')
if delete_request:
table_name = delete_request['TableName']
keys = delete_request['Key']
if isinstance(keys, Response):
return keys
new_record = clone(record)
new_record['eventName'] = 'REMOVE'
new_record['dynamodb']['Keys'] = keys
new_record['dynamodb']['OldImage'] = self._thread_local('existing_items')[i]
new_record['eventSourceARN'] = aws_stack.dynamodb_table_arn(table_name)
records.append(new_record)
return records
def _thread_local(self, name, default=None):
try:
return getattr(ProxyListenerDynamoDB.thread_local, name)
except AttributeError:
return default
# instantiate listener
UPDATE_DYNAMODB = ProxyListenerDynamoDB()
def find_existing_item(put_item, table_name=None):
table_name = table_name or put_item['TableName']
ddb_client = aws_stack.connect_to_service('dynamodb')
search_key = {}
if 'Key' in put_item:
search_key = put_item['Key']
else:
schema = ddb_client.describe_table(TableName=table_name)
schemas = [schema['Table']['KeySchema']]
for index in schema['Table'].get('GlobalSecondaryIndexes', []):
# schemas.append(index['KeySchema'])
pass
for schema in schemas:
for key in schema:
key_name = key['AttributeName']
search_key[key_name] = put_item['Item'][key_name]
if not search_key:
return
req = {'TableName': table_name, 'Key': search_key}
existing_item = aws_stack.dynamodb_get_item_raw(req)
if 'Item' not in existing_item:
if 'message' in existing_item:
table_names = ddb_client.list_tables()['TableNames']
msg = ('Unable to get item from DynamoDB (existing tables: %s): %s' %
(table_names, existing_item['message']))
LOGGER.warning(msg)
return
return existing_item.get('Item')
def fix_headers_for_updated_response(response):
response.headers['content-length'] = len(to_bytes(response.content))
response.headers['x-amz-crc32'] = calculate_crc32(response)
def calculate_crc32(response):
return crc32(to_bytes(response.content)) & 0xffffffff
def create_dynamodb_stream(data, latest_stream_label):
stream = data['StreamSpecification']
enabled = stream.get('StreamEnabled')
if enabled not in [False, 'False']:
table_name = data['TableName']
view_type = stream['StreamViewType']
dynamodbstreams_api.add_dynamodb_stream(table_name=table_name,
latest_stream_label=latest_stream_label, view_type=view_type, enabled=enabled)
def forward_to_lambda(records):
for record in records:
sources = lambda_api.get_event_sources(source_arn=record['eventSourceARN'])
event = {
'Records': [record]
}
for src in sources:
lambda_api.run_lambda(event=event, context={}, func_arn=src['FunctionArn'])
def forward_to_ddb_stream(records):
dynamodbstreams_api.forward_events(records)
def dynamodb_extract_keys(item, table_name):
result = {}
if table_name not in TABLE_DEFINITIONS:
LOGGER.warning('Unknown table: %s not found in %s' % (table_name, TABLE_DEFINITIONS))
return None
for key in TABLE_DEFINITIONS[table_name]['KeySchema']:
attr_name = key['AttributeName']
if attr_name not in item:
return error_response(error_type='ValidationException',
message='One of the required keys was not given a value')
result[attr_name] = item[attr_name]
return result
def error_response(message=None, error_type=None, code=400):
if not message:
message = 'Unknown error'
if not error_type:
error_type = 'UnknownError'
if 'com.amazonaws.dynamodb' not in error_type:
error_type = 'com.amazonaws.dynamodb.v20120810#%s' % error_type
response = Response()
response.status_code = code
content = {
'message': message,
'__type': error_type
}
response._content = json.dumps(content)
return response
def error_response_throughput():
message = ('The level of configured provisioned throughput for the table was exceeded. ' +
'Consider increasing your provisioning level with the UpdateTable API')
error_type = 'ProvisionedThroughputExceededException'
return error_response(message, error_type)
| 46.259953 | 120 | 0.602896 |
5c08db5b485aec0384444dc9182f9abe85f788a6 | 4,175 | py | Python | argo/workflows/client/models/v1alpha1_workflow_suspend_request.py | zgs225/argo-client-python | 2e49a0df9b4f8fc9e90f7808caf22819ff54166c | [
"Apache-2.0"
] | 75 | 2020-03-17T03:55:23.000Z | 2021-11-08T09:38:37.000Z | argo/workflows/client/models/v1alpha1_workflow_suspend_request.py | zgs225/argo-client-python | 2e49a0df9b4f8fc9e90f7808caf22819ff54166c | [
"Apache-2.0"
] | 24 | 2020-04-18T13:02:36.000Z | 2021-10-20T09:01:23.000Z | argo/workflows/client/models/v1alpha1_workflow_suspend_request.py | zgs225/argo-client-python | 2e49a0df9b4f8fc9e90f7808caf22819ff54166c | [
"Apache-2.0"
] | 26 | 2020-04-18T12:56:28.000Z | 2022-01-05T04:47:30.000Z | # coding: utf-8
"""
Argo Server API
You can get examples of requests and responses by using the CLI with `--gloglevel=9`, e.g. `argo list --gloglevel=9` # noqa: E501
The version of the OpenAPI document: v2.12.2
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from argo.workflows.client.configuration import Configuration
class V1alpha1WorkflowSuspendRequest(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'name': 'str',
'namespace': 'str'
}
attribute_map = {
'name': 'name',
'namespace': 'namespace'
}
def __init__(self, name=None, namespace=None, local_vars_configuration=None): # noqa: E501
"""V1alpha1WorkflowSuspendRequest - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._name = None
self._namespace = None
self.discriminator = None
if name is not None:
self.name = name
if namespace is not None:
self.namespace = namespace
@property
def name(self):
"""Gets the name of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:return: The name of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this V1alpha1WorkflowSuspendRequest.
:param name: The name of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:type: str
"""
self._name = name
@property
def namespace(self):
"""Gets the namespace of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:return: The namespace of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:rtype: str
"""
return self._namespace
@namespace.setter
def namespace(self, namespace):
"""Sets the namespace of this V1alpha1WorkflowSuspendRequest.
:param namespace: The namespace of this V1alpha1WorkflowSuspendRequest. # noqa: E501
:type: str
"""
self._namespace = namespace
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1alpha1WorkflowSuspendRequest):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1alpha1WorkflowSuspendRequest):
return True
return self.to_dict() != other.to_dict()
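# A minimal usage sketch (hypothetical values), showing how this generated model is
# typically constructed and serialized before being sent to the Argo server API:
#
#     req = V1alpha1WorkflowSuspendRequest(name='my-workflow', namespace='argo')
#     payload = req.to_dict()   # {'name': 'my-workflow', 'namespace': 'argo'}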
| 28.401361 | 134 | 0.590898 |
3b3ee7273f931160b42a4c424d5caf48b8593fc2 | 7,416 | py | Python | demo.py | khoroo/deep-text-recognition-benchmark | 6089c4035c5b8136c2f055126e5dd43a121501d9 | [
"Apache-2.0"
] | null | null | null | demo.py | khoroo/deep-text-recognition-benchmark | 6089c4035c5b8136c2f055126e5dd43a121501d9 | [
"Apache-2.0"
] | null | null | null | demo.py | khoroo/deep-text-recognition-benchmark | 6089c4035c5b8136c2f055126e5dd43a121501d9 | [
"Apache-2.0"
] | null | null | null | import string
import argparse
import torch
import torch.backends.cudnn as cudnn
import torch.utils.data
import torch.nn.functional as F
from utils import CTCLabelConverter, AttnLabelConverter
from dataset import RawDataset, AlignCollate, MapDataset
from model import Model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def demo(opt):
""" model configuration """
if 'CTC' in opt.Prediction:
converter = CTCLabelConverter(opt.character)
else:
converter = AttnLabelConverter(opt.character)
opt.num_class = len(converter.character)
if opt.rgb:
opt.input_channel = 3
model = Model(opt)
print('model input parameters', opt.imgH, opt.imgW, opt.num_fiducial, opt.input_channel, opt.output_channel,
opt.hidden_size, opt.num_class, opt.batch_max_length, opt.Transformation, opt.FeatureExtraction,
opt.SequenceModeling, opt.Prediction)
model = torch.nn.DataParallel(model).to(device)
# load model
print('loading pretrained model from %s' % opt.saved_model)
model.load_state_dict(torch.load(opt.saved_model, map_location=device))
# prepare data. two demo images from https://github.com/bgshih/crnn#run-demo
AlignCollate_demo = AlignCollate(imgH=opt.imgH, imgW=opt.imgW, keep_ratio_with_pad=opt.PAD)
if opt.map_mode:
demo_data = MapDataset(root=opt.image_folder, opt=opt)
else:
demo_data = RawDataset(root=opt.image_folder, opt=opt) # use RawDataset
demo_loader = torch.utils.data.DataLoader(
demo_data, batch_size=opt.batch_size,
shuffle=False,
num_workers=int(opt.workers),
collate_fn=AlignCollate_demo, pin_memory=True)
# predict
model.eval()
with torch.no_grad():
for image_tensors, labels in demo_loader:
batch_size = image_tensors.size(0)
image = image_tensors.to(device)
# For max length prediction
length_for_pred = torch.IntTensor([opt.batch_max_length] * batch_size).to(device)
text_for_pred = torch.LongTensor(batch_size, opt.batch_max_length + 1).fill_(0).to(device)
if 'CTC' in opt.Prediction:
preds = model(image, text_for_pred)
# Select max probabilty (greedy decoding) then decode index to character
preds_size = torch.IntTensor([preds.size(1)] * batch_size)
_, preds_index = preds.max(2)
preds_index = preds_index.view(-1)
preds_str = converter.decode(preds_index.data, preds_size.data)
else:
preds = model(image, text_for_pred, is_train=False)
# select max probabilty (greedy decoding) then decode index to character
_, preds_index = preds.max(2)
preds_str = converter.decode(preds_index, length_for_pred)
log = open(f'./log_demo_result.txt', 'a')
dashed_line = '-' * 80
head = f'{"image_path":25s}\t{"predicted_labels":25s}\tconfidence score'
print(f'{dashed_line}\n{head}\n{dashed_line}')
log.write(f'{dashed_line}\n{head}\n{dashed_line}\n')
preds_prob = F.softmax(preds, dim=2)
preds_max_prob, _ = preds_prob.max(dim=2)
def _append_line(fpath, line_number, string):
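                # helper: rewrite the file in place, appending `string` to the line at
                # `line_number`; used in map_mode below to append the predicted text and
                # its confidence to the corresponding line of each label's txt_file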
import fileinput
for pos,line in enumerate(fileinput.input(fpath, inplace=True)):
if pos == line_number:
print(line.strip('\n')+string, end='\n')
else:
print(line, end='')
fileinput.close()
for label, pred, pred_max_prob in zip(labels, preds_str, preds_max_prob):
if 'Attn' in opt.Prediction:
pred_EOS = pred.find('[s]')
pred = pred[:pred_EOS] # prune after "end of sentence" token ([s])
pred_max_prob = pred_max_prob[:pred_EOS]
# calculate confidence score (= multiply of pred_max_prob)
confidence_score = pred_max_prob.cumprod(dim=0)[-1]
if opt.map_mode:
print(f'{label.txt_file:25s}\t{label.line_num}\t{pred:25s}\t{confidence_score:0.4f}')
_append_line(label.txt_file, label.line_num, f',"{pred}",{confidence_score:0.4f}')
else:
print(f'{label:25s}\t{pred:25s}\t{confidence_score:0.4f}')
log.write(f'{label:25s}\t{pred:25s}\t{confidence_score:0.4f}\n')
log.close()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--image_folder', required=True, help='path to image_folder which contains text images')
parser.add_argument('--workers', type=int, help='number of data loading workers', default=4)
parser.add_argument('--batch_size', type=int, default=192, help='input batch size')
parser.add_argument('--saved_model', required=True, help="path to saved_model to evaluation")
""" Data processing """
parser.add_argument('--batch_max_length', type=int, default=25, help='maximum-label-length')
parser.add_argument('--imgH', type=int, default=32, help='the height of the input image')
parser.add_argument('--imgW', type=int, default=100, help='the width of the input image')
parser.add_argument('--rgb', action='store_true', help='use rgb input')
parser.add_argument('--character', type=str, default='0123456789abcdefghijklmnopqrstuvwxyz', help='character label')
parser.add_argument('--sensitive', action='store_true', help='for sensitive character mode')
parser.add_argument('--PAD', action='store_true', help='whether to keep ratio then pad for image resize')
""" Model Architecture """
parser.add_argument('--Transformation', type=str, required=True, help='Transformation stage. None|TPS')
parser.add_argument('--FeatureExtraction', type=str, required=True, help='FeatureExtraction stage. VGG|RCNN|ResNet')
parser.add_argument('--SequenceModeling', type=str, required=True, help='SequenceModeling stage. None|BiLSTM')
parser.add_argument('--Prediction', type=str, required=True, help='Prediction stage. CTC|Attn')
parser.add_argument('--num_fiducial', type=int, default=20, help='number of fiducial points of TPS-STN')
parser.add_argument('--input_channel', type=int, default=1, help='the number of input channel of Feature extractor')
parser.add_argument('--output_channel', type=int, default=512,
help='the number of output channel of Feature extractor')
parser.add_argument('--hidden_size', type=int, default=256, help='the size of the LSTM hidden state')
parser.add_argument('--map_mode', action='store_true', help='map mode')
parser.add_argument('--map_ext', type=str, default='tif', help='File extension of map tile images')
opt = parser.parse_args()
""" vocab / character number configuration """
if opt.sensitive:
opt.character = string.printable[:-6] # same with ASTER setting (use 94 char).
cudnn.benchmark = True
cudnn.deterministic = True
opt.num_gpu = torch.cuda.device_count()
demo(opt)
| 49.112583 | 121 | 0.637136 |
0f551643679cc8978bac3675840778ee4675d6f0 | 674 | py | Python | p099.py | yehnan/project_euler_python | 9c8a50e992b71c1c313b08a16ea24298ce5cf020 | [
"MIT"
] | 1 | 2017-03-29T19:30:32.000Z | 2017-03-29T19:30:32.000Z | p099.py | yehnan/project_euler_python | 9c8a50e992b71c1c313b08a16ea24298ce5cf020 | [
"MIT"
] | null | null | null | p099.py | yehnan/project_euler_python | 9c8a50e992b71c1c313b08a16ea24298ce5cf020 | [
"MIT"
] | 1 | 2018-10-29T02:40:06.000Z | 2018-10-29T02:40:06.000Z |
# Problem 99: Largest exponential
# https://projecteuler.net/problem=99
from io import open
from math import log
def main(filename):
be_max_log = 0
base_max = 0
exp_max = 0
line_number_max = 0
with open(filename, 'r', encoding='ascii') as fin:
for ln, line in enumerate(fin):
be = line.split(',')
base, exp = int(be[0]), int(be[1])
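            # compare base**exp values via logarithms: log(base**exp) = exp * log(base),
            # and log is monotonic, so the largest m identifies the largest number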
m = exp * log(base)
if be_max_log < m:
be_max_log = m
base_max = base
exp_max = exp
line_number_max = ln + 1
return line_number_max
print(main('p099_base_exp.txt'))
| 24.071429 | 55 | 0.529674 |
53bae094d5ed5fd85cfb2c93fb76d460728cd05c | 819 | py | Python | Retos/POO/ponencia.py | juanpanu/Juan_DS_Path | 24e71616dae692e931e95cd3815ca88fa9b8a46a | [
"MIT"
] | null | null | null | Retos/POO/ponencia.py | juanpanu/Juan_DS_Path | 24e71616dae692e931e95cd3815ca88fa9b8a46a | [
"MIT"
] | null | null | null | Retos/POO/ponencia.py | juanpanu/Juan_DS_Path | 24e71616dae692e931e95cd3815ca88fa9b8a46a | [
"MIT"
] | null | null | null | class Ponencia(Contribucion):
"""Clase que representa la Ponencia"""
def __init__(self,id, titulo, idAutor,calificacion,fecha,eje):
        super().__init__(id, titulo, idAutor, calificacion)
self.fechaPublicacion = fecha
self.ejeTematico = eje
#fechaPublicacion Getter function
@property
def fechaPublicacion(self):
return self._fechaPublicacion
#fechaPublicacion Setter function
@fechaPublicacion.setter
def fechaPublicacion(self,fechaPublicacion):
self._fechaPublicacion = fechaPublicacion
#ejeTematico Getter function
@property
def ejeTematico(self):
return self._ejeTematico
#ejeTematico Setter function
@ejeTematico.setter
def ejeTematico(self,ejeTematico):
self._ejeTematico = ejeTematico
| 30.333333 | 66 | 0.704518 |
01a6b10c283e8104c742016a4bb590f43f2272d2 | 7,888 | py | Python | getearnings.py | peterfabakker/zipline-utils | 07ad6f56910b7c34c2cb45442b282213eae0efc6 | [
"MIT"
] | null | null | null | getearnings.py | peterfabakker/zipline-utils | 07ad6f56910b7c34c2cb45442b282213eae0efc6 | [
"MIT"
] | null | null | null | getearnings.py | peterfabakker/zipline-utils | 07ad6f56910b7c34c2cb45442b282213eae0efc6 | [
"MIT"
] | null | null | null |
"""
getearnings.py
Created by Peter Bakker on 2017-10-26.
Copyright (c) 2018 unhedged. All rights reserved.
"""
import sys
import os
import pandas as pd
import numpy as np
import datetime
from shutil import copy
from yahoo_earnings_calendar import YahooEarningsCalendar
from zipline.data import bundles as bundles_module
from zipline.data.bundles.core import load, bundles
from zipline.utils.math_utils import nanmean, nanstd
#import zipline.pipeline.loaders.blaze
from IPython import embed
import imp
ext = imp.load_source('ext', '/root/.zipline/extension.py')
import getopt
symbols = ["A","AAL","AAOI","AAP","AAPL","ABB","ABBV","ABC","ABT","ACN","ADBE","ADI","ADM","ADP","ADS","ADSK","AEE","AEP","AES","AET","AFL","AGN","AGNC","AIA","AIG","AIV","AIZ","AJG","AKAM","AKAO","ALB","ALE","ALGN","ALK","ALL","ALLE","ALRM","ALXN","AMAT","AMBA","AMD","AME","AMG","AMGN","AMP","AMT","AMZN","ANDV","ANSS","ANTM","AON","AOS","APA","APC","APD","APH","APTV","ARE","ARNC","ASML","ATVI","AUTO","AVB","AVGO","AVX","AVY","AWK","AXP","AYI","AZO","BA","BABA","BAC","BAX","BBT","BBY","BDX","BEN","BHF","BHGE","BIDU","BIIB","BK","BKNG","BLD","BLK","BLL","BMY","BP","BSX","BUX","BWA","BX","BXP","BZUN","C","CA","CAG","CAH","CARA","CAT","CB","CBOE","CBRE","CBS","CCI","CCL","CDNS","CELG","CERN","CF","CFG","CG","CHD","CHK","CHRW","CHTR","CHW","CI","CINF","CL","CLX","CMA","CMCSA","CME","CMG","CMI","CMS","CNA","CNC","CNP","COF","COG","COHR","COL","COO","COP","COST","COTY","CPB","CRM","CSCO","CSX","CTAS","CTL","CTRP","CTSH","CTXS","CUTR","CVS","CVX","CXO","D","DAL","DAR","DE","DFS","DG","DGX","DHI","DHR","DIS","DISCA","DISCK","DISH","DLR","DLTR","DOV","DPS","DRE","DRI","DTE","DUK","DVA","DVN","DWDP","DXC","EA","EBAY","ECL","ED","EDV","EE","EFX","EIX","EL","EME","EMN","EMR","EOG","EPZM","EQIX","EQR","EQT","ERV","ES","ESRX","ESS","ETFC","ETN","ETR","EVHC","EW","EXC","EXPD","EXPE","EXR","F","FAST","FB","FBHS","FCX","FDX","FE","FFIV","FIS","FISV","FITB","FL","FLIR","FLR","FLS","FMC","FOX","FOXA","FRT","FTI","FTV","GALT","GD","GDX","GE","GGP","GILD","GIS","GKOS","GLD","GLW","GM","GOOG","GOOGL","GPC","GPN","GPRO","GPS","GRMN","GS","GT","GWW","HAL","HAS","HBAN","HBI","HCA","HCP","HD","HES","HF","HIG","HII","HLT","HOG","HOLX","HON","HOP","HP","HPE","HPQ","HRB","HRL","HRS","HSIC","HST","HSY","HUM","HW","IBM","ICE","IDXX","IFF","IJH","IJR","ILMN","INCY","INFO","INO","INTC","INTU","IP","IPG","IPGP","IQV","IR","IRM","ISRG","IT","ITW","IVB","IVZ","IWM","JBHT","JCI","JEC","JM","JNJ","JNPR","JNUG","JPM","JWN","K","KEY","KHC","KIM","KLAC","KMB","KMI","KMPR","KMX","KNDI","KO","KORS","KR","KSS","KSU","L","LABL","LB","LEG","LEN","LG","LH","LITE","LKQ","LLL","LLY","LMT","LNC","LNT","LOW","LRCX","LUK","LUV","LYB","M","MA","MAA","MAC","MAR","MAS","MAT","MCD","MCHP","MCK","MCO","MDLZ","MDT","MET","MGM","MGPI","MHK","MKC","MLM","MMC","MMM","MNST","MO","MOMO","MON","MOS","MOV","MP","MPC","MRCC","MRK","MRO","MRVL","MS","MSCI","MSFT","MSI","MTB","MTD","MU","MYL","NA","NANO","NAP","NAVI","NBL","NCLH","NDAQ","NEE","NEM","NFLX","NFX","NI","NKE","NKTR","NLSN","NM","NOC","NOV","NPS","NRG","NSC","NTAP","NTRS","NUE","NUGT","NVDA","NVLN","NWL","NWS","NWSA","O","OCLR","OKE","OLED","OMC","ORCL","ORLY","OXY","PANW","PAYX","PBCT","PCAR","PCG","PEG","PEP","PFE","PFG","PG","PGI","PGR","PH","PHM","PIR","PKG","PKI","PLAY","PLD","PLV","PM","PNC","PNR","PNW","PPG","PPL","PRFT","PRGO","PRU","PSA","PSX","PVH","PWR","PX","PXD","PY","PYG","PYPL","Q","QCOM","QQQ","QRVO","QTM","RCL","RE","REG","REGN","RF","RH","RHI","RHT","RJF","RL","RMD","ROK","ROP","ROST","RRC","RSG","RTN","SO","T","TAP","TDG","TEL","TEVA","TGT","THO","TI","TIF","TJX","TLO","TLT","TMF","TMK","TMO","TPR","TQQQ","TRIP","TROW","TRV","TSCO","TSN","TSS","TT","TTWO","TWLO","TWTR","TWX","TX","TXN","TXT","TZ","UA","UAA","UAL","UDR","UHS","ULTA","UNH","UNM","UNP","UPRO","UPS","URBN","URI","USB","USO","UTX","UVXY","V","VAR","VFC","VGIT","VHT","VIAB","VIX","VIX3M","VLO","VMC","VNO","VRSK","VRSN","VRTX","VRX","VTR","VXMT","VXST","VXX","VZ","WAT","WBA","WDC","WEC","WELL","WFC","WHR","WIX","WK","WKS","WLTW","WM","WMB","WMT","WR","WRK","WRLD","WU","WY","WYN","WYNN","XEC","XEL","XIV","XL","XLNX","XLP","XLY","XOM","XRAY","XRX","XYL","YF","YK","YMC","YUM","YY","ZBH","ZION","ZIV","ZTS"]
def get_tickers_from_bundle(bundle_name):
"""Gets a list of tickers from a given bundle"""
bundle_data = load(bundle_name, os.environ, None)
# get a list of all sids
lifetimes = bundle_data.asset_finder._compute_asset_lifetimes()
all_sids = lifetimes.sid
# retreive all assets in the bundle
all_assets = bundle_data.asset_finder.retrieve_all(all_sids)
# return only tickers
return map(lambda x: (x.symbol, x.sid), all_assets)
def get_ticker_sid_dict_from_bundle(bundle_name):
"""Packs the (ticker,sid) tuples into a dict."""
all_equities = get_tickers_from_bundle(bundle_name)
return dict(all_equities)
def main(argv=None):
DL_earningsdates()
convert_to_blaze()
#convert_to_blaze(bundle,'/root/data/temp/earningsdata.csv')
def DL_earningsdates(argv=None):
today = datetime.datetime.today()
oneweeks = datetime.timedelta(days=21)
threeweeks = datetime.timedelta(days=21)
yec = YahooEarningsCalendar()
earningsDF = None
frames = []
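    # step the anchor date back three weeks per iteration and pull the Yahoo
    # earnings calendar for the +/- three week window around it; the 25 overlapping
    # windows cover roughly the past year and a half of announcements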
for i in xrange(1,26,1):
if i > 1: today = today - (threeweeks)
threeweeksago = today - threeweeks
inthreeweeks = today + threeweeks
earnings = yec.earnings_between(threeweeksago, inthreeweeks)
df = pd.DataFrame(earnings)
print 'Found '+str(len(df))+' results for between '+ str(threeweeksago) + ' and '+str(inthreeweeks)
if ('startdatetime' not in df.columns) or len(df) == 0:
print "no startdatetime detected/no data"
continue
df['startdatetime'] = pd.to_datetime(df['startdatetime']) # TODO make timezone aware and convert to UTC
pdtoday = pd.to_datetime(today)
df['today'] = pdtoday
df['temp'] = df['startdatetime'].view('int64')-df['today'].view('int64')
df['days'] = df['temp'].apply(lambda x: np.timedelta64(x, 'ns').astype('timedelta64[D]')/np.timedelta64(1, 'D') )
frames.append(df)
df.to_csv('/root/data/temp/earningsdata-'+str(threeweeksago)+'-'+str(inthreeweeks)+'.csv')
earningsDF = pd.concat(frames)
earningsDF.to_csv('/root/data/temp/earningsdata.csv')
def convert_to_blaze(bundle=None, filename= '/root/data/temp/earningsdata.csv'):
df = pd.read_csv(filename)
df.rename(columns = {'ticker':'symbol', 'today':'asof_date'}, inplace=True)
bu = bundles
cleaned_days_dfs =[]
cleaned_date_dfs =[]
for b in bu.keys():
data = df.copy()
tickers = get_tickers_from_bundle(b)
t = pd.DataFrame(tickers)
t.rename(columns = {0:'symbol',1:'sid'}, inplace=True)
data = pd.merge(left=data, right=t,on=['symbol','symbol'], how= 'left').dropna(subset=['sid'])
data['sid'] = data['sid'].astype(int)
data.rename(columns = {'today':'asof_date'}, inplace=True)
        cleaned_days_df = data[['asof_date','sid','days']]
cleaned_days_df.rename(columns = {'days':'value'}, inplace=True)
cleaned_days_df.to_csv('/root/data/temp/earningsdata_days_'+b+'.csv')
        cleaned_date_df = data[['asof_date','sid','startdatetime']]
cleaned_date_df.rename(columns = {'startdatetime':'value'}, inplace=True)
cleaned_date_df.to_csv('/root/data/temp/earningsdata_date_'+b+'.csv')
def addfiles(dir='/root/data/temp/'):
frames = []
for item in os.listdir(dir):
if item.startswith('earningsdata-') and item.endswith('.csv'):
df = pd.read_csv(dir+item)
frames.append(df)
earningsDF = pd.concat(frames)
earningsDF.to_csv('/root/data/temp/earningsdata.csv')
if __name__ == '__main__':
main()
| 61.625 | 3,643 | 0.595208 |
f884dee2754ed402b80b4844d7b32bdccabbb6dd | 1,851 | py | Python | moviepy/audio/fx/audio_delay.py | va6996/moviepy | 60b95c37816413da6bf304e85f8c0ba8e2d2c6e7 | [
"MIT"
] | null | null | null | moviepy/audio/fx/audio_delay.py | va6996/moviepy | 60b95c37816413da6bf304e85f8c0ba8e2d2c6e7 | [
"MIT"
] | null | null | null | moviepy/audio/fx/audio_delay.py | va6996/moviepy | 60b95c37816413da6bf304e85f8c0ba8e2d2c6e7 | [
"MIT"
] | null | null | null | import cupy as np
from moviepy.audio.AudioClip import CompositeAudioClip
from moviepy.audio.fx.multiply_volume import multiply_volume
from moviepy.decorators import audio_video_fx
@audio_video_fx
def audio_delay(clip, offset=0.2, n_repeats=8, decay=1):
"""Repeats audio certain number of times at constant intervals multiplying
their volume levels using a linear space in the range 1 to ``decay`` argument
value.
Parameters
----------
offset : float, optional
Gap between repetitions start times, in seconds.
n_repeats : int, optional
Number of repetitions (without including the clip itself).
decay : float, optional
Multiplication factor for the volume level of the last repetition. Each
repetition will have a value in the linear function between 1 and this value,
increasing or decreasing constantly. Keep in mind that the last repetition
will be muted if this is 0, and if is greater than 1, the volume will increase
for each repetition.
Examples
--------
>>> from moviepy import *
>>> videoclip = AudioFileClip('myaudio.wav').fx(
    ...     audio_delay, offset=.2, n_repeats=10, decay=.2
... )
>>> # stereo A note
>>> make_frame = lambda t: np.array(
... [np.sin(440 * 2 * np.pi * t), np.sin(880 * 2 * np.pi * t)]
... ).T
... clip = AudioClip(make_frame=make_frame, duration=0.1, fps=44100)
... clip = audio_delay(clip, offset=.2, n_repeats=11, decay=0)
"""
decayments = np.linspace(1, max(0, decay), n_repeats + 1)
return CompositeAudioClip(
[
clip.copy(),
*[
multiply_volume(
clip.with_start((rep + 1) * offset), decayments[rep + 1]
)
for rep in range(n_repeats)
],
]
)
| 32.473684 | 84 | 0.627769 |
457494f7343d316dedff69acf430cc877e8c9155 | 1,125 | py | Python | configs/localization/bsn/bsn_pgm_800x100_activitynet_feature_train_with_label.py | Naoki-Wake/mmaction2 | a2032605db82509744a18d993c94a06feb1efd15 | [
"Apache-2.0"
] | null | null | null | configs/localization/bsn/bsn_pgm_800x100_activitynet_feature_train_with_label.py | Naoki-Wake/mmaction2 | a2032605db82509744a18d993c94a06feb1efd15 | [
"Apache-2.0"
] | null | null | null | configs/localization/bsn/bsn_pgm_800x100_activitynet_feature_train_with_label.py | Naoki-Wake/mmaction2 | a2032605db82509744a18d993c94a06feb1efd15 | [
"Apache-2.0"
] | null | null | null | # dataset settings
dataset_type = 'ActivityNetDataset'
data_root = '/dataset/maction_feat_with_label/'#'/dataset/activitynet/maction_feat/'
data_root_val = '/dataset/maction_feat_with_label/'
ann_file_train = '/dataset/annotation/anet_anno_train.json'
ann_file_val = '/dataset/annotation/anet_anno_test.json'
ann_file_test = '/dataset/annotation/anet_anno_test.json'
work_dir = 'work_dirs/bsn_800x100_20e_1x16_activitynet_feature_train_with_label/'
tem_results_dir = f'{work_dir}/tem_results/'
pgm_proposals_dir = f'{work_dir}/pgm_proposals/'
pgm_features_dir = f'{work_dir}/pgm_features/'
temporal_scale = 100
pgm_proposals_cfg = dict(
pgm_proposals_thread=8, temporal_scale=temporal_scale, peak_threshold=0.5)
pgm_features_test_cfg = dict(
pgm_features_thread=4,
top_k=150,#1000,
num_sample_start=8,
num_sample_end=8,
num_sample_action=16,
num_sample_interp=3,
bsp_boundary_ratio=0.2)
pgm_features_train_cfg = dict(
pgm_features_thread=4,
top_k=150,#500,
num_sample_start=8,
num_sample_end=8,
num_sample_action=16,
num_sample_interp=3,
bsp_boundary_ratio=0.2)
| 33.088235 | 84 | 0.784 |
181efe6d0c9e71e1321dfd460411a6154bfe1a71 | 4,021 | py | Python | kplr/ld.py | danielrios12/kplr---acesso-a-dados-do-kepler | 4c6a823ad6a88ccd2d5cf8d9eed912a1e57489a2 | [
"MIT"
] | 35 | 2015-01-21T22:38:12.000Z | 2020-08-05T21:15:19.000Z | kplr/ld.py | danielrios12/kplr---acesso-a-dados-do-kepler | 4c6a823ad6a88ccd2d5cf8d9eed912a1e57489a2 | [
"MIT"
] | 12 | 2015-03-17T18:54:15.000Z | 2021-08-06T18:19:13.000Z | kplr/ld.py | danielrios12/kplr---acesso-a-dados-do-kepler | 4c6a823ad6a88ccd2d5cf8d9eed912a1e57489a2 | [
"MIT"
] | 17 | 2015-02-11T19:49:00.000Z | 2019-10-15T18:06:28.000Z | # -*- coding: utf-8 -*-
from __future__ import (division, print_function, absolute_import,
unicode_literals)
__all__ = ["get_quad_coeffs"]
import os
import shutil
import sqlite3
import logging
from tempfile import NamedTemporaryFile
from six.moves import urllib
from .config import KPLR_ROOT
DB_FILENAME = "ldcoeffs.db"
def get_quad_coeffs(teff=5778, logg=None, feh=None, data_root=None,
clobber=False):
"""
Get the quadratic coefficients for the standard Kepler limb-darkening
profile.
:param teff: (optional)
The effective temperature in degrees K.
:param logg: (optional)
The log10 surface gravity in cm/s/s.
:param feh: (optional)
The metallicity [Fe/H].
:param data_root: (optional)
The local base directory where the grids will be downloaded to. This
can also be set using the ``KPLR_ROOT`` environment variable. The
default value is ``~/.kplr``.
:param clobber: (optional)
Should the database file be overwritten even if it already exists?
(default: False)
"""
assert teff is not None
# Make sure that the database is saved locally.
filename = download_database(data_root=data_root, clobber=clobber)
# Construct the SQL query.
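    # Nearest-neighbour lookup: snap to the closest tabulated teff first, then
    # take the grid row that minimises the squared distance in (logg, feh).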
q = """
SELECT mu1,mu2 FROM claret11 WHERE
teff=(SELECT teff FROM claret11 ORDER BY abs(teff-?) LIMIT 1)
ORDER BY (logg-?) * (logg-?) + (feh-?) * (feh-?) LIMIT 1
"""
pars = [teff, logg, logg, feh, feh]
# Execute the command.
with sqlite3.connect(filename) as conn:
c = conn.cursor()
rows = c.execute(q, pars)
mu1, mu2 = rows.fetchone()
return mu1, mu2
def download_database(data_root=None, clobber=False):
"""
Download a SQLite database containing the limb darkening coefficients
computed by `Claret & Bloemen (2011)
<http://adsabs.harvard.edu/abs/2011A%26A...529A..75C>`_. The table is
available online on `Vizier
<http://vizier.cfa.harvard.edu/viz-bin/VizieR?-source=J/A+A/529/A75>`_.
Using the ASCII data table, the SQLite database was generated with the
following Python commands:
.. code-block:: python
import sqlite3
import numpy as np
with sqlite3.connect("ldcoeffs.db") as conn:
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS claret11 "
"(teff REAL, logg REAL, feh REAL, veloc REAL, mu1 REAL, "
"mu2 REAL)")
data = np.loadtxt("claret11.txt", skiprows=59, delimiter="|",
usecols=range(1, 7))
c.executemany("INSERT INTO claret11 (logg,teff,feh,veloc,mu1,mu2) "
"VALUES (?,?,?,?,?,?)", data)
"""
# Figure out the local filename for the database.
if data_root is None:
data_root = KPLR_ROOT
filename = os.path.join(data_root, DB_FILENAME)
if not clobber and os.path.exists(filename):
return filename
# Make sure that the target directory exists.
try:
os.makedirs(data_root)
except os.error:
pass
# MAGIC: specify the URL for the remote file.
url = "http://bbq.dfm.io/~dfm/ldcoeffs.db"
# Fetch the database from the server.
logging.info("Downloading file from: '{0}'".format(url))
r = urllib.request.Request(url)
handler = urllib.request.urlopen(r)
code = handler.getcode()
if int(code) != 200:
raise RuntimeError("Couldn't download file from {0}. Returned: {1}"
.format(url, code))
# Save the contents of the file.
logging.info("Saving file to: '{0}'".format(filename))
# Atomically write to disk.
# http://stackoverflow.com/questions/2333872/ \
# atomic-writing-to-file-with-python
f = NamedTemporaryFile("wb", delete=False)
f.write(handler.read())
f.flush()
os.fsync(f.fileno())
f.close()
shutil.move(f.name, filename)
return filename
| 30.007463 | 79 | 0.625964 |
e2729f8616eb04affdf06b927d1b22df1aef29d8 | 2,653 | py | Python | demos/HFL/example/oneflow/bert_client.py | monadyn/fedlearn-algo | c4459d421139b0bb765527d636fff123bf17bda4 | [
"Apache-2.0"
] | 86 | 2021-07-20T01:54:21.000Z | 2021-10-06T04:02:40.000Z | demos/HFL/example/oneflow/bert_client.py | fedlearnAI/fedlearnalgo | 63d9ceb64d331ff2b5103ae49e54229cad7e2095 | [
"Apache-2.0"
] | 5 | 2021-07-23T21:22:16.000Z | 2021-09-12T15:48:35.000Z | demos/HFL/example/oneflow/bert_client.py | fedlearnAI/fedlearnalgo | 63d9ceb64d331ff2b5103ae49e54229cad7e2095 | [
"Apache-2.0"
] | 28 | 2021-07-20T07:15:33.000Z | 2021-08-22T20:04:57.000Z | # Copyright 2021 Fedlearn authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os,sys
import numpy as np
from typing import Callable, List, Dict
root_path = os.getcwd()
sys.path.append(root_path)
sys.path.append(os.path.join(root_path,'demos/HFL'))
from demos.HFL.example.oneflow.local_bert_classifier import train_classifier
from demos.HFL.example.oneflow.local_bert_classifier.train_classifier import oneFlowBertClassifier
from demos.HFL.base_client import Client
from demos.HFL.common.param_util import(
Params,
ParamsRes,
TrainArgs,
EvalArgs,
TrainRes,
EvalRes
)
class BertClient(Client):
def __init__(self):
load_pretrained_model = True
self.model = oneFlowBertClassifier(train_classifier.args, load_pretrained_model)
def _init_params(self):
W = self.__get_dummy_weights( num_layers =16)
self.params = Params(
names = list(W.keys()),
weights= list(W.values()),
weight_type= 'float'
)
def get_params(self)->ParamsRes:
param_dict: Dict[str,np.ndarray] = \
self.model.get_model_parameters()
return ParamsRes(
Params(
names=list(param_dict.keys()),
weights=list(param_dict.values()),
weight_type='float'
),
            response_messages={'dummy_res': 1}
)
def set_params(self, params:Params)->None:
model_params = dict(zip(params.names, params.weights))
self.model.load(model_params)
def train(self, trainArgs:TrainArgs)->TrainRes:
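        # One local round in the usual HFL pattern: load the global parameters
        # received from the server, fine-tune locally, then return the updated
        # weights together with the training metric.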
self.set_params(trainArgs.params)
param_dict, acc = self.model.train()
trainRes = TrainRes(
params = Params(
names=list(param_dict.keys()),
weights=list(param_dict.values()),
weight_type='float'
),
num_samples= 100,
metrics = {'acc':acc}
)
return trainRes
    def evaluate(self, evalArgs:EvalArgs)->EvalRes:
pass
| 28.223404 | 98 | 0.636638 |
00690a13d7ca916d3fa16960c3dc38758db94bfa | 18,707 | py | Python | ruruki/entities.py | terence-bigtt/ruruki | 4c02cb1277e08fa55bf4b5b23f68e66385c76995 | [
"Apache-2.0"
] | 90 | 2016-02-17T06:51:26.000Z | 2022-01-23T15:16:21.000Z | ruruki/entities.py | terence-bigtt/ruruki | 4c02cb1277e08fa55bf4b5b23f68e66385c76995 | [
"Apache-2.0"
] | 49 | 2016-02-17T05:10:19.000Z | 2017-05-20T04:45:11.000Z | ruruki/entities.py | otherJL0/ruruki | 4c02cb1277e08fa55bf4b5b23f68e66385c76995 | [
"Apache-2.0"
] | 25 | 2016-02-17T06:45:01.000Z | 2022-02-26T11:39:10.000Z | """
Entities
"""
from ruruki import interfaces
class Entity(interfaces.IEntity):
"""
Base class for containing the common methods used for the other
entities like vertices and edges.
.. note::
See :class:`~.IEntity` for doco.
.. note::
The properties can be accessed as if they are attributes
directly by prepending ``prop__`` to the key.
.. code-block:: python
>>> e = Entity("Entity", name="Example")
>>> e.prop__name
'Example'
:param label: :class:`.IEntity` label.
    :type label: :class:`str` or :obj:`None`
:param kwargs: Additional properties for the :class:`.IEntity`.
:type kwargs: :class:`str`=value or :class:`dict`
"""
__slots__ = ["ident", "label", "properties", "graph"]
def __init__(self, label=None, **kwargs):
self.graph = None
self.ident = None
self.label = label
self.properties = kwargs
def is_bound(self):
return self.graph is not None
def remove_property(self, key):
if key in self.properties:
del self.properties[key]
def _update_properties(self, kwargs):
self.properties.update(kwargs)
def set_property(self, **kwargs):
if not kwargs:
raise interfaces.EntityUpdateError(
"Can not update with no key and values."
)
if self.is_bound():
self.graph.set_property(self, **kwargs)
self._update_properties(kwargs)
def as_dict(self, include_privates=False):
if include_privates is True:
properties = self.properties
else:
properties = {
key: value
for key, value in self.properties.items()
if key.startswith("_") is False
}
return {
"metadata": {},
"id": self.ident,
"label": self.label,
"properties": properties,
}
def __lt__(self, other):
if self.ident is None:
return True
if other.ident is None:
return False
return self.ident < other.ident
def __getattribute__(self, name):
if name.startswith("prop__"):
_, key = name.split("prop__", 1)
try:
return self.properties[key]
except KeyError:
pass
return super(Entity, self).__getattribute__(name)
def __str__(self):
return "<{0}> {1}".format(
self.__class__.__name__, self.ident
)
def __repr__(self): # pragma: no cover
return "<{0}> ident: {1}, label: {2}, properties: {3}".format(
self.__class__.__name__, self.ident, self.label, self.properties
)
class Vertex(interfaces.IVertex, Entity):
"""
Vertex/Node is the representation of a entity. It can be anything
and contains properties for additional information.
.. note::
See :class:`~.IVertex` for doco.
.. note::
The properties can be accessed as if they are attributes
directly by prepending ``prop__`` to the key.
.. code-block:: python
>>> v = Vertex("Person", name="Foo")
>>> v.prop__name
'Foo'
:param label: :class:`.IEntity` label.
    :type label: :class:`str` or :obj:`None`
:param kwargs: Additional properties for the :class:`.IEntity`.
:type kwargs: :class:`str`=value or :class:`dict`
"""
__slots__ = ["in_edges", "out_edges"]
def __init__(self, label=None, **kwargs):
super(Vertex, self).__init__(label=label, **kwargs)
self.in_edges = EntitySet()
self.out_edges = EntitySet()
def in_edge_count(self):
return len(self.in_edges)
def out_edge_count(self):
return len(self.out_edges)
def add_in_edge(self, vertex, label=None, **kwargs):
# if the vertex is bound to a graph, then let the graph
# handle the edge creation.
if self.is_bound():
return self.graph.add_edge(vertex, label, self, **kwargs)
edge = Edge(vertex, label, self, **kwargs)
self.in_edges.add(edge)
return edge
def add_out_edge(self, vertex, label, **kwargs):
# if the vertex is bound to a graph, then let the graph
# handle the edge creation.
if self.is_bound():
return self.graph.add_edge(self, label, vertex, **kwargs)
edge = Edge(self, label, vertex, **kwargs)
self.out_edges.add(edge)
return edge
def remove_edge(self, edge):
head = edge.head
tail = edge.tail
if head == self:
self.out_edges.remove(edge)
elif tail == self:
self.in_edges.remove(edge)
else:
raise interfaces.VertexError(
"Unknown edge to this vertex: {}".format(edge)
)
def get_in_edges(self, label=None, **kwargs):
return self.in_edges.filter(label, **kwargs)
def get_out_edges(self, label=None, **kwargs):
return self.out_edges.filter(label, **kwargs)
def get_both_edges(self, label=None, **kwargs):
edges = self.in_edges | self.out_edges
return edges.filter(label, **kwargs) # pylint: disable=no-member
def get_in_vertices(self, label=None, **kwargs):
vertices = [
each.get_in_vertex() for each in self.get_in_edges()
]
return EntitySet(vertices).filter(label, **kwargs)
def get_out_vertices(self, label=None, **kwargs):
vertices = [
each.get_out_vertex() for each in self.get_out_edges()
]
return EntitySet(vertices).filter(label, **kwargs)
def get_both_vertices(self, label=None, **kwargs):
in_set = self.get_in_vertices(label=label, **kwargs)
out_set = self.get_out_vertices(label=label, **kwargs)
return in_set | out_set
def as_dict(self, include_privates=False):
as_dict = super(Vertex, self).as_dict(include_privates)
as_dict["metadata"].update(
{
"in_edge_count": self.in_edge_count(),
"out_edge_count": self.out_edge_count(),
}
)
return as_dict
class PersistentVertex(Vertex):
"""
Persistent Vertex behaves exactly the same as a :class:`~.Vertex` but has
an additional path attribute which is the disk location.
"""
__slots__ = ["path"]
def __init__(self, *args, **kwargs):
super(PersistentVertex, self).__init__(*args, **kwargs)
self.path = None
class Edge(interfaces.IEdge, Entity):
"""
Edge/Relationship is the representation of a relationship between two
entities. A edge has properties for additional information.
.. note::
See :class:`~.IEdge` for doco.
.. note::
The properties can be accessed as if they are attributes
directly by prepending ``prop__`` to the key.
.. code-block:: python
>>> v1 = Vertex("Person", name="Foo")
>>> v2 = Vertex("Person", name="Bar")
>>> e = Edge(v1, "knows", v2, since="school")
>>> e.prop__since
'school'
:param head: Head :class:`.IVertex` of the edge.
:type head: :class:`.IVertex`
:param label: :class:`.IEntity` label.
    :type label: :class:`str` or :obj:`None`
:param tail: Tail :class:`.IVertex` of the edge.
:type tail: :class:`.IVertex`
:param kwargs: Additional properties for the :class:`.IEntity`.
:type kwargs: :class:`str`=value or :class:`dict`
"""
__slots__ = ["head", "tail"]
def __init__(self, head, label, tail, **kwargs):
super(Edge, self).__init__(label=label, **kwargs)
self.head = head
self.tail = tail
def get_in_vertex(self):
return self.head
def get_out_vertex(self):
return self.tail
def as_dict(self, include_privates=False):
as_dict = super(Edge, self).as_dict(include_privates)
as_dict["head_id"] = self.head.ident
as_dict["tail_id"] = self.tail.ident
return as_dict
def __str__(self): # pragma: no cover
return "<{0}> ident: {1} [{3}-{2}-{4}]".format(
self.__class__.__name__, self.ident, self.label,
self.head.ident, self.tail.ident
)
def __repr__(self): # pragma: no cover
return (
"<{0}> ident: {1}, label: {2}, properties: "
"{3} [{4}-{2}-{5}]".format(
self.__class__.__name__, self.ident, self.label,
self.properties, self.head.ident, self.tail.ident
)
)
class PersistentEdge(Edge):
"""
Persistent Edge behaves exactly the same as a :class:`~.Edge` but has an
additional path attribute which is the disk location.
"""
__slots__ = ["path"]
def __init__(self, *args, **kwargs):
super(PersistentEdge, self).__init__(*args, **kwargs)
self.path = None
def _split_key_into_noun_verb(key):
"""
Internal helper function that takes the key and splits it into the
noun and verb, and returns the noun and verb.
.. note::
Example of a key with the special operator.
key: name__contains
return: name, contains
:param key: Key that you are splitting into the noun and verb. The key
should end with __<operator>
:type key: :class:`str`
:returns: Key name and the operator.
:rtype: :class:`tuple` (:class:`str`, :class:`str` or :obj:`None`)
"""
split = key.rsplit("__", 1)
if len(split) == 2:
return split[0], split[1]
return key, None
def _contains(prop_value, cmp_value, ignore_case=False):
"""
    Helper function that takes two arguments and checks if :param cmp_value:
    is in :param prop_value:.
    :param prop_value: Property value that you are checking.
    :type prop_value: :class:`str`
    :param cmp_value: Value that you are checking for within the property
    value.
    :type cmp_value: :class:`str`
    :param ignore_case: True to compare case-insensitively.
    :type ignore_case: :class:`bool`
    :returns: True if :param cmp_value: is in :param prop_value:
    :rtype: :class:`bool`
"""
if ignore_case is True:
prop_value = prop_value.lower()
cmp_value = cmp_value.lower()
return cmp_value in prop_value
def _startswith(prop_value, cmp_value, ignore_case=False):
"""
    Helper function that takes two arguments and checks if :param prop_value:
    startswith :param cmp_value:
    :param prop_value: Property value that you are checking.
    :type prop_value: :class:`str`
    :param cmp_value: Value that you are checking the property value
    startswith.
    :type cmp_value: :class:`str`
    :param ignore_case: True to compare case-insensitively.
    :type ignore_case: :class:`bool`
    :returns: True if :param prop_value: startswith :param cmp_value:
    :rtype: :class:`bool`
"""
if ignore_case is True:
prop_value = prop_value.lower()
cmp_value = cmp_value.lower()
return prop_value.startswith(cmp_value)
def _endswith(prop_value, cmp_value, ignore_case=False):
"""
    Helper function that takes two arguments and checks if :param prop_value:
    endswith :param cmp_value:
    :param prop_value: Property value that you are checking.
    :type prop_value: :class:`str`
    :param cmp_value: Value that you are checking the property value
    endswith.
    :type cmp_value: :class:`str`
    :param ignore_case: True to compare case-insensitively.
    :type ignore_case: :class:`bool`
    :returns: True if :param prop_value: endswith :param cmp_value:
    :rtype: :class:`bool`
"""
if ignore_case is True:
prop_value = prop_value.lower()
cmp_value = cmp_value.lower()
return prop_value.endswith(cmp_value)
def _eq(prop_value, cmp_value, ignore_case=False):
"""
    Helper function that takes two arguments and checks if :param prop_value:
    equals :param cmp_value:
    :param prop_value: Property value that you are checking.
    :type prop_value: :class:`str`
    :param cmp_value: Value that you are checking if they are equal.
    :type cmp_value: :class:`str`
    :param ignore_case: True to compare case-insensitively.
    :type ignore_case: :class:`bool`
    :returns: True if :param prop_value: and :param cmp_value: are
    equal.
    :rtype: :class:`bool`
"""
if ignore_case is True:
prop_value = prop_value.lower()
cmp_value = cmp_value.lower()
return cmp_value == prop_value
def _ne(prop_value, cmp_value, ignore_case=False):
"""
    Helper function that takes two arguments and checks if :param prop_value:
    is not equal to :param cmp_value:
    :param prop_value: Property value that you are checking.
    :type prop_value: :class:`str`
    :param cmp_value: Value that you are checking if they are not equal.
    :type cmp_value: :class:`str`
    :param ignore_case: True to compare case-insensitively.
    :type ignore_case: :class:`bool`
    :returns: True if :param prop_value: and :param cmp_value: are
    not equal.
    :rtype: :class:`bool`
"""
if ignore_case is True:
prop_value = prop_value.lower()
cmp_value = cmp_value.lower()
return cmp_value != prop_value
OPERATORS = {
"contains": _contains,
"icontains": _contains, # require to be called with ignore_case
"startswith": _startswith,
"istartswith": _startswith, # require to be called with ignore_case
"endswith": _endswith,
"iendswith": _endswith, # require to be called with ignore_case
"le": lambda prop_value, value, ignore_case: value >= prop_value,
"lt": lambda prop_value, value, ignore_case: value > prop_value,
"ge": lambda prop_value, value, ignore_case: value <= prop_value,
"gt": lambda prop_value, value, ignore_case: value < prop_value,
"eq": _eq,
"ieq": _eq, # require to be called with ignore_case
"ne": _ne,
"ine": _ne, # require to be called with ignore_case
}
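# Illustrative usage (not from the original source): a call such as
# entity_set.filter("person", name__icontains="foo") splits the key into
# ("name", "icontains") and dispatches the comparison through this table.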
class EntitySet(interfaces.IEntitySet):
"""
EntitySet used for storing, filtering, and iterating over
:class:`~.IEntity` objects.
.. note::
        See :class:`~.IEntitySet` for documentation.
:param entities: Entities being added to the set.
:type entities: Iterable of :class:`.IEntity`
"""
def __init__(self, entities=None):
super(EntitySet, self).__init__()
self._prop_reference = {}
self._id_reference = {}
if entities is not None:
for entity in entities:
self.add(entity)
def all(self, label=None, **kwargs):
return list(self.filter(label, **kwargs))
def sorted(self, key=None, reverse=False):
return sorted(self, key=key, reverse=reverse)
def get_labels(self):
return self._prop_reference.keys()
def get_indexes(self):
for label in self._prop_reference:
for key in self._prop_reference[label]:
if not key.startswith("_all"):
yield label, key
def get(self, ident):
entity = self._id_reference.get(ident)
if entity is None:
raise KeyError("No such id {0!r} exists.".format(ident))
return entity
def update_index(self, entity, **kwargs):
collection = self._prop_reference.setdefault(
entity.label,
{"_all": set()},
)
collection["_all"].add(entity)
# Add in a indexed property reference.
for key in kwargs:
collection.setdefault(key, set()).add(entity)
def add(self, entity):
if entity.ident in self._id_reference:
if entity != self._id_reference[entity.ident]:
raise KeyError(
"Conflict: {0} (current) <-> {1} (conflict)".format(
self._id_reference[entity.ident], entity
)
)
# Add in a reference for fast id search.
self._id_reference[entity.ident] = entity
self.update_index(entity, **entity.properties)
super(EntitySet, self).add(entity)
def remove(self, entity):
if entity.ident in self._id_reference:
del self._id_reference[entity.ident]
else:
raise KeyError("No such id {0!r} exists.".format(entity.ident))
# unbind the entity from the Graph
entity.graph = None
# remove the entity from the _all protected reference
self._prop_reference[entity.label]["_all"].discard(entity)
collection = self._prop_reference[entity.label]
for key in entity.properties:
if key in collection:
collection[key].discard(entity)
super(EntitySet, self).remove(entity)
def filter(self, label=None, **kwargs): # pylint: disable=too-many-locals,too-many-branches
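        # Fast paths: no constraints returns the set itself, and a bare label is
        # served from the per-label "_all" index; otherwise candidates are drawn
        # from the property indexes and checked key by key below.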
if label is None and not kwargs:
return self
if label and not kwargs:
if label in self._prop_reference:
return EntitySet(entities=self._prop_reference[label]["_all"])
keys_values = kwargs.items()
get_func = OPERATORS.get
noun_verb_cache = {
key: _split_key_into_noun_verb(key)
for key, value in keys_values
}
elements = set()
if label is None:
elements = set(self._id_reference.values())
elif label in self._prop_reference:
for key, value in keys_values:
key, verb = noun_verb_cache[key]
if key not in self._prop_reference[label]:
return EntitySet()
elements = elements | self._prop_reference[label][key]
container = EntitySet()
for entity in elements:
mismatch = False
for key, value in keys_values:
key, verb = noun_verb_cache[key]
icase = verb[0] == "i" if verb else False
func = get_func(verb)
if key not in entity.properties:
mismatch = True
break
prop_value = entity.properties[key]
if prop_value is None:
mismatch = True
break
if not func:
if prop_value != value:
mismatch = True
break
elif not func(prop_value, value, icase):
mismatch = True
break
if not mismatch:
container.add(entity)
return container
| 31.760611 | 96 | 0.602234 |
e2bbac35647499a885278b21858940077cca3af5 | 3,466 | py | Python | azuresite/settings.py | baronnavy/djangoapp | 641868e74c27f430df3d8bdd5788c17915f86580 | [
"MIT"
] | null | null | null | azuresite/settings.py | baronnavy/djangoapp | 641868e74c27f430df3d8bdd5788c17915f86580 | [
"MIT"
] | null | null | null | azuresite/settings.py | baronnavy/djangoapp | 641868e74c27f430df3d8bdd5788c17915f86580 | [
"MIT"
] | null | null | null | """
Django settings for azuresite project.
Generated by 'django-admin startproject' using Django 2.1.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '-^rq(x*d--6_#635*j84d5(fz9@-3(9vdr_s$9+^@cw08dq(ja'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'polls.apps.PollsConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'azuresite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'azuresite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
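# Azure SQL Database via the `sql_server.pyodbc` backend (typically provided by
# the django-pyodbc-azure package); the original sqlite3 settings are kept
# commented out just below.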
DATABASES = {
'default': {
#'ENGINE': 'django.db.backends.sqlite3',
#'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
'ENGINE': 'sql_server.pyodbc',
'NAME': 'djangopractice',
'USER': 'uehara@Nwgh2018',
'PASSWORD': 'Nwgh2018',
'HOST': 'djangoserver1.database.windows.net',
'PORT': '',
'OPTIONS': {
'driver': 'ODBC Driver 13 for SQL Server',
},
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
| 26.257576 | 91 | 0.678015 |
5eca414b16b8dd26e80a2feb5bc457ead64b1bce | 1,696 | py | Python | divmachines/demo/classifiers/mf/movielens.py | DanielMorales9/FactorizationPyTorch | 50f0644fdb4a903550fb3f1ba78fb9fb8649ceb1 | [
"MIT"
] | 4 | 2017-12-14T22:34:35.000Z | 2019-07-12T17:18:34.000Z | divmachines/demo/classifiers/mf/movielens.py | DanielMorales9/FactorizationPyTorch | 50f0644fdb4a903550fb3f1ba78fb9fb8649ceb1 | [
"MIT"
] | null | null | null | divmachines/demo/classifiers/mf/movielens.py | DanielMorales9/FactorizationPyTorch | 50f0644fdb4a903550fb3f1ba78fb9fb8649ceb1 | [
"MIT"
] | 1 | 2017-12-14T22:35:00.000Z | 2017-12-14T22:35:00.000Z | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from divmachines.classifiers import MF
from divmachines.logging import TrainingLogger as TLogger
cols = ['user', 'item', 'rating', 'timestamp']
train = pd.read_csv('../../../../data/ua.base', delimiter='\t', names=cols)
# map_user = train.groupby('user').count().reset_index()[['user']].reset_index()
# map_user.columns = ['u_idx', 'user']
# map_item = train.groupby('item').count().reset_index()[['item']].reset_index()
# map_item.columns = ['i_idx', 'item']
# train = pd.merge(pd.merge(train, map_user, on="user"), map_item, on="item")
logger = TLogger()
model = MF(n_iter=100,
n_jobs=2,
batch_size=1000,
learning_rate=0.60653066,
use_cuda=False,
logger=logger,
early_stopping=True,
verbose=True)
interactions = train[['user', 'item', 'rating']].values
n_users = np.unique(train[["user"]].values).shape[0]
n_items = np.unique(train[["item"]].values).shape[0]
print("Number of users: %s" % n_users)
print("Number of items: %s" % n_items)
x = interactions[:100, :-1]
y = interactions[:100, -1]
model.fit(x,
y,
dic={'users': 0, 'items': 1},
n_users=n_users, n_items=n_items)
print(model.predict(x))
model.save("./time.pth.tar")
model = MF(n_iter=1,
n_jobs=8,
batch_size=10,
learning_rate=0.60653066,
use_cuda=False,
logger=logger,
early_stopping=True,
model="./time.pth.tar",
verbose=True)
x = interactions[:100, :-1]
y = interactions[:100, -1]
print(model.predict(x))
plt.plot(logger.epochs, logger.losses)
plt.show()
| 27.803279 | 80 | 0.618514 |