Column schema:

| column | type / stats |
|---|---|
| `Unnamed: 0` | int64, range 0–10k |
| `repository_name` | string, lengths 7–54 |
| `func_path_in_repository` | string, lengths 5–223 |
| `func_name` | string, lengths 1–134 |
| `whole_func_string` | string, lengths 100–30.3k |
| `language` | 1 class (`python`) |
| `func_code_string` | string, lengths 100–30.3k |
| `func_code_tokens` | string, lengths 138–33.2k |
| `func_documentation_string` | string, lengths 1–15k |
| `func_documentation_tokens` | string, lengths 5–5.14k |
| `split_name` | 1 class (`train`) |
| `func_code_url` | string, lengths 91–315 |
4,100 | OLC-Bioinformatics/sipprverse | method.py | Method.complete | python | train | https://github.com/OLC-Bioinformatics/sipprverse/blob/d4f10cdf8e1a39dac0953db61c21c97efc6006de/method.py#L228-L261

```python
def complete(self):
    """
    Determine if the analyses of the strains are complete e.g. there are no missing GDCS genes, and the
    sample.general.bestassemblyfile != 'NA'
    """
    # Boolean to store the completeness of the analyses
    allcomplete = True
    # Clear the list of samples that still require more sequence data
    self.incomplete = list()
    for sample in self.runmetadata.samples:
        if...
```
4,101 | blakev/python-syncthing | syncthing/__init__.py | Database.browse | python | train | https://github.com/blakev/python-syncthing/blob/a7f4930f86f7543cd96990277945467896fb523d/syncthing/__init__.py#L581-L603

```python
def browse(self, folder, levels=None, prefix=None):
    """ Returns the directory tree of the global model.
    Directories are always JSON objects (map/dictionary), and files are
    always arrays of modification time and size. The first integer is
    the files modification time, and the second integer is the file
    size.
    Args:
        ...
    """
    assert isinstance(levels, int) or levels is None
    assert isinstance(prefix, string_types) or prefix is None
    return self.get...
```
4,102 | nugget/python-insteonplm | insteonplm/states/x10.py | X10DimmableSwitch.set_level | python | train | https://github.com/nugget/python-insteonplm/blob/65548041f1b0729ae1ae904443dd81b0c6cbf1bf/insteonplm/states/x10.py#L107-L134

```python
def set_level(self, val):
    """Set the device ON LEVEL."""
    if val == 0:
        self.off()
    elif val == 255:
        self.on()
    else:
        setlevel = 255
        if val < 1:
            setlevel = val * 255
        elif val <= 0xff:
            setlevel = val
        ...
```
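The truncated `set_level` above routes 0 to `off()`, 255 to `on()`, and scales everything else into the 1–255 range; a device-free sketch of that mapping (the `scale_level` name and string return values are illustrative, not the insteonplm API):

```python
def scale_level(val):
    """Map a requested level to an X10-style command.

    Returns 'off' for 0, 'on' for 255, otherwise an integer level
    in 1..255; fractional inputs below 1 are treated as a fraction
    of full brightness, and anything above 0xFF is clamped.
    """
    if val == 0:
        return "off"
    if val == 255:
        return "on"
    if val < 1:          # e.g. 0.5 -> half brightness
        return int(val * 255)
    if val <= 0xFF:
        return int(val)
    return 255           # clamp out-of-range requests
```

The clamp branch is an assumption; the original body is cut off before showing how over-range values are handled.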
4,103 | jkokorian/pyqt2waybinding | pyqt2waybinding/__init__.py | Observer.bindToProperty | python | train | https://github.com/jkokorian/pyqt2waybinding/blob/fb1fb84f55608cfbf99c6486650100ba81743117/pyqt2waybinding/__init__.py#L142-L166

```python
def bindToProperty(self, instance, propertyName, useGetter=False):
    """
    2-way binds to an instance property.
    Parameters:
    - instance -- the object instance
    - propertyName -- the name of the property to bind to
    - useGetter: when True, calls the getter method to obtain the value. When False, the signal argument is used as input for the target setter. (default ...
    """
    endpoint = BindingEndpoint.forProperty(instance, propertyName, useGetter=useGetter)
    self.bindToEndPoint(endpoint)
```
4,104 | kensho-technologies/graphql-compiler | graphql_compiler/schema_generation/schema_graph.py | SchemaGraph.get_default_property_values | python | train | https://github.com/kensho-technologies/graphql-compiler/blob/f6079c6d10f64932f6b3af309b79bcea2123ca8f/graphql_compiler/schema_generation/schema_graph.py#L297-L311

```python
def get_default_property_values(self, classname):
    """Return a dict with default values for all properties declared on this class."""
    schema_element = self.get_element_by_class_name(classname)
    result = {
        property_name: property_descriptor.default
        for property_name, property_descriptor in six....
```
4,105 | tensorpack/tensorpack | tensorpack/models/layer_norm.py | LayerNorm | python | train | https://github.com/tensorpack/tensorpack/blob/d7a13cb74c9066bc791d7aafc3b744b60ee79a9f/tensorpack/models/layer_norm.py#L14-L63

```python
def LayerNorm(
        x, epsilon=1e-5,
        use_bias=True, use_scale=True,
        gamma_init=None, data_format='channels_last'):
    """
    Layer Normalization layer, as described in the paper:
    `Layer Normalization <https://arxiv.org/abs/1607.06450>`_.
    Args:
        x (tf.Tensor): a 4D or 2D tensor. When 4D, the layout should match data_format.
        epsilon (float): epsilon to avoid divide-by-zero.
        use_scale, use_bias (bool): whether to...
    """
    data_format = get_data_format(data_format, keras_mode=False)
    shape = x...
```
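The docstring above describes layer normalization of a tensor; a framework-free, 1-D sketch of the same transform (the real tensorpack layer works on 4-D/2-D tensors with learnable `gamma`/`beta`, which here are fixed scalars):

```python
import math

def layer_norm(xs, epsilon=1e-5, gamma=1.0, beta=0.0):
    """Normalize a 1-D list to zero mean / unit variance, then scale and shift.

    epsilon avoids divide-by-zero, matching the docstring in the row above.
    """
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    inv_std = 1.0 / math.sqrt(var + epsilon)
    return [gamma * (x - mean) * inv_std + beta for x in xs]
```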
4,106 | SiLab-Bonn/pyBAR | pybar/utils/utils.py | get_iso_time | python | train | https://github.com/SiLab-Bonn/pyBAR/blob/5ad95bbcd41cd358825823fb78f396cfce23593e/pybar/utils/utils.py#L267-L282

```python
def get_iso_time():
    '''returns time as ISO string, mapping to and from datetime in ugly way
    convert to string with str()
    '''
    t1 = time.time()
    t2 = datetime.datetime.fromtimestamp(t1)
    t4 = t2.__str__()
    try:
        t4a, t4b = t4.split(".", 1)
    except ValueError:
        t4a = t...
```
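The helper above stringifies a timestamp and splits off the fractional seconds; a runnable sketch of that idea (a reconstruction of the docstring's intent with clearer names, not the exact pyBAR code, which is cut off above):

```python
import datetime
import time

def iso_time_no_fraction():
    """Return the current local time as 'YYYY-MM-DD HH:MM:SS'.

    str(datetime) yields e.g. '2024-01-01 12:00:00.123456'; splitting on
    the first '.' drops the microseconds, and the ValueError branch covers
    timestamps that have no fractional part at all.
    """
    text = str(datetime.datetime.fromtimestamp(time.time()))
    try:
        whole, _fraction = text.split(".", 1)
    except ValueError:
        whole = text
    return whole
```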
4,107 | PmagPy/PmagPy | programs/s_hext.py | main | python | train | https://github.com/PmagPy/PmagPy/blob/c7984f8809bf40fe112e53dcc311a33293b62d0b/programs/s_hext.py#L8-L76

```python
def main():
    """
    NAME
        s_hext.py
    DESCRIPTION
        calculates Hext statistics for tensor data
    SYNTAX
        s_hext.py [-h][-i][-f file] [<filename]
    OPTIONS
        -h prints help message and quits
        -f file specifies filename on command line
        -l NMEAS do line by line instead of whole file, use number ...
    """
    ave = 1
    if '-h' in sys.argv:
        print(main.__doc__)
        sys.exit()
    if '-l' in sys.argv:
        ind = sys.argv.index('-l')
        npts = int(sys.argv...
```
4,108 | bcbio/bcbio-nextgen | bcbio/qc/qsignature.py | summary | python | train | https://github.com/bcbio/bcbio-nextgen/blob/6a9348c0054ccd5baffd22f1bb7d0422f6978b20/bcbio/qc/qsignature.py#L71-L118

```python
def summary(*samples):
    """Run SignatureCompareRelatedSimple module from qsignature tool.
    Creates a matrix of pairwise comparison among samples. The
    function will not run if the output exists
    :param samples: list with only one element containing all samples information
    :returns: (dict) with the path of the output to be joined t...
    """
    warnings, similar = [], []
    qsig = config_utils.get_program("qsignature", samples[0][0]["config"])
    if not qsig:
        return [[]]
    res_qsig = ...
```
4,109 | pypa/pipenv | pipenv/vendor/distlib/_backport/tarfile.py | _Stream._read | python | train | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/vendor/distlib/_backport/tarfile.py#L583-L602

```python
def _read(self, size):
    """Return size bytes from the stream.
    """
    if self.comptype == "tar":
        return self.__read(size)
    c = len(self.dbuf)
    while c < size:
        buf = self.__read(self.bufsize)
        if not buf:
            break
        try:
            ...
```
4,110 | mitsei/dlkit | dlkit/json_/repository/sessions.py | CompositionLookupSession.get_composition | python | train | https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/repository/sessions.py#L3225-L3247

```python
def get_composition(self, composition_id):
    """Gets the ``Composition`` specified by its ``Id``.
    arg: composition_id (osid.id.Id): ``Id`` of the
        ``Composiiton``
    return: (osid.repository.Composition) - the composition
    raise: NotFound - ``composition_id`` not found
    raise: NullArgument - ``composition_id`` is ``nul...
    """
    # Implemented from template for
    # osid.resource.ResourceLookupSession.get_resource
    # NOTE: This implementation currently ignores plenary view
    collection = JSONClientValidated('repository', collection='Composi...
```
4,111 | pyQode/pyqode.core | pyqode/core/widgets/tabs.py | TabWidget._on_dirty_changed | python | train | https://github.com/pyQode/pyqode.core/blob/a99ec6cd22d519394f613309412f8329dc4e90cb/pyqode/core/widgets/tabs.py#L428-L441

```python
def _on_dirty_changed(self, dirty):
    """
    Adds a star in front of a dirtt tab and emits dirty_changed.
    """
    try:
        title = self._current._tab_name
        index = self.indexOf(self._current)
        if dirty:
            self.setTabText(index, "* " + title)
        else:
            self...
```
4,112 | AoiKuiyuyou/AoikI18n | src/aoiki18n/aoiki18n_.py | I18n.yaml_force_unicode | python | train | https://github.com/AoiKuiyuyou/AoikI18n/blob/8d60ea6a2be24e533a9cf92b433a8cfdb67f813e/src/aoiki18n/aoiki18n_.py#L78-L88

```python
def yaml_force_unicode():
    """
    Force pyyaml to return unicode values.
    """
    #/
    ## modified from |http://stackoverflow.com/a/2967461|
    if sys.version_info[0] == 2:
        def construct_func(self, node):
            return self.construct_scalar(node)
        yaml.Loader....
```
4,113 | orb-framework/orb | orb/core/query.py | QueryCompound.columns | python | train | https://github.com/orb-framework/orb/blob/575be2689cb269e65a0a2678232ff940acc19e5a/orb/core/query.py#L1384-L1392

```python
def columns(self, model=None):
    """
    Returns any columns used within this query.
    :return [<orb.Column>, ..]
    """
    for query in self.__queries:
        for column in query.columns(model=model):
            yield column
```
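The generator above flattens the columns of every nested sub-query into one stream; a minimal stand-in showing the same delegation pattern (the `Query` class here is illustrative, not orb's real API):

```python
class Query:
    """Leaf query holding a plain list of column names."""
    def __init__(self, columns):
        self._columns = list(columns)

    def columns(self):
        for column in self._columns:
            yield column

class QueryCompound:
    """Compound query: yields every column of every sub-query, in order."""
    def __init__(self, *queries):
        self._queries = queries

    def columns(self):
        for query in self._queries:
            for column in query.columns():
                yield column
```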
4,114 | numenta/htmresearch | htmresearch/algorithms/lateral_pooler.py | LateralPooler.feedforward | python | train | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/lateral_pooler.py#L201-L213

```python
def feedforward(self):
    """
    Soon to be depriciated.
    Needed to make the SP implementation compatible
    with some older code.
    """
    m = self._numInputs
    n = self._numColumns
    W = np.zeros((n, m))
    for i in range(self._numColumns):
        self.getPermanence(i, W[i, :])
    return W
```
4,115 | ConsenSys/mythril-classic | mythril/laser/ethereum/state/machine_state.py | MachineState.mem_extend | python | train | https://github.com/ConsenSys/mythril-classic/blob/27af71c34b2ce94f4fae5613ec457f93df1a8f56/mythril/laser/ethereum/state/machine_state.py#L151-L163

```python
def mem_extend(self, start: int, size: int) -> None:
    """Extends the memory of this machine state.
    :param start: Start of memory extension
    :param size: Size of memory extension
    """
    m_extend = self.calculate_extension_size(start, size)
    if m_extend:
        extend_gas = self.calculate_memory_gas(start, size)
        self...
```
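The row above grows machine memory and charges gas for the growth; a hedged sketch of the kind of word-count calculation `calculate_extension_size` might perform, assuming 32-byte EVM words (mythril's actual formula is truncated above, so this is an illustration of the idea, not its code):

```python
def calculate_extension_size(current_words, start, size):
    """Extra 32-byte words needed so memory covers [start, start + size).

    Returns 0 when size is 0 or the range already fits; otherwise the
    word count is rounded up (ceiling division by 32) before subtracting
    what is already allocated.
    """
    if size == 0:
        return 0
    needed_words = -(-(start + size) // 32)   # ceil((start + size) / 32)
    return max(0, needed_words - current_words)
```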
4,116 | toumorokoshi/sprinter | sprinter/formula/base.py | FormulaBase.validate | python | train | https://github.com/toumorokoshi/sprinter/blob/846697a7a087e69c61d075232e754d6975a64152/sprinter/formula/base.py#L141-L163

```python
def validate(self):
    """
    validates the feature configuration, and returns a list of errors (empty list if no error)
    validate should:
    * required variables
    * warn on unused variables
    errors should either be reported via self._log_error(), or raise an exception
    """
    if self.target:
        for k in self.target.keys():
            if k in self.deprecated_options:
                self.logger.warn(self.deprecated_options[k].format(optio...
```
4,117 | dmlc/gluon-nlp | scripts/sentiment_analysis/text_cnn.py | model | python | train | https://github.com/dmlc/gluon-nlp/blob/4b83eb6bcc8881e5f1081a3675adaa19fac5c0ba/scripts/sentiment_analysis/text_cnn.py#L40-L46

```python
def model(dropout, vocab, model_mode, output_size):
    """Construct the model."""
    textCNN = SentimentNet(dropout=dropout, vocab_size=len(vocab), model_mode=model_mode,
                           output_size=output_size)
    textCNN.hybridize()
    return textCNN
```
4,118 | saltstack/salt | salt/netapi/rest_cherrypy/app.py | salt_api_acl_tool | python | train | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/netapi/rest_cherrypy/app.py#L696-L762

```python
def salt_api_acl_tool(username, request):
    '''
    ..versionadded:: 2016.3.0
    Verifies user requests against the API whitelist. (User/IP pair)
    in order to provide whitelisting for the API similar to the
    master, but over the API.
    ..code-block:: yaml
        rest_cherrypy:
            api_acl:
                users:
                    '*':
                        ...
    '''
    failure_str = ("[api_acl] Authentication failed for "
                   "user %s from IP %s")
    success_str = ("[api_acl] Authentication sucessful for "
                   "user %s from IP %s")
    pass_str = ("[api_acl] Authentication not ...
```
4,119 | go-macaroon-bakery/py-macaroon-bakery | macaroonbakery/httpbakery/_browser.py | WebBrowserInteractor._wait_for_token | python | train | https://github.com/go-macaroon-bakery/py-macaroon-bakery/blob/63ce1ef1dabe816eb8aaec48fbb46761c34ddf77/macaroonbakery/httpbakery/_browser.py#L49-L69

```python
def _wait_for_token(self, ctx, wait_token_url):
    ''' Returns a token from a the wait token URL
    @param wait_token_url URL to wait for (string)
    :return DischargeToken
    '''
    resp = requests.get(wait_token_url)
    if resp.status_code != 200:
        raise InteractionError('cannot get {}'.format(wait_token_url))
    json_resp = ...
```
4,120 | cbclab/MOT | mot/lib/cl_function.py | _ProcedureWorker._build_kernel | python | train | https://github.com/cbclab/MOT/blob/fb3243b65025705842e82704705c00902f9a35af/mot/lib/cl_function.py#L706-L715

```python
def _build_kernel(self, kernel_source, compile_flags=()):
    """Convenience function for building the kernel for this worker.
    Args:
        kernel_source (str): the kernel source to use for building the kernel
    Returns:
        cl.Program: a compiled CL kernel
    """
    return cl.Program(self._cl_context, kernel_source).build(' '.join(compile_flags))
```
4,121 | openstack/networking-cisco | networking_cisco/plugins/cisco/cfg_agent/device_drivers/iosxe/iosxe_routing_driver.py | IosXeRoutingDriver._cfg_exists | python | train | https://github.com/openstack/networking-cisco/blob/aa58a30aec25b86f9aa5952b0863045975debfa9/networking_cisco/plugins/cisco/cfg_agent/device_drivers/iosxe/iosxe_routing_driver.py#L405-L415

```python
def _cfg_exists(self, cfg_str):
    """Check a partial config string exists in the running config.
    :param cfg_str: config string to check
    :return : True or False
    """
    ios_cfg = self._get_running_config()
    parse = HTParser(ios_cfg)
    cfg_raw = parse.find_lines("^" + cfg_str)
    LOG.debug("_cfg_exists(): Found lines %s", cfg_r...
```
4,122 | idmillington/layout | layout/rl_utils.py | ReportlabOutput.draw_polygon | python | train | https://github.com/idmillington/layout/blob/c452d1d7a74c9a74f7639c1b49e2a41c4e354bb5/layout/rl_utils.py#L98-L127

```python
def draw_polygon(
        self,
        *pts,
        close_path=True,
        stroke=None,
        stroke_width=1,
        stroke_dash=None,
        fill=None
) -> None:
    """Draws the given polygon."""
    c = self.c
    c.saveState()
    if stroke is not Non...
```
4,123 | plivo/sharq-server | sharq_server/server.py | SharQServer._view_interval | python | train | https://github.com/plivo/sharq-server/blob/9f4c50eb5ee28d1084591febc4a3a34d7ffd0556/sharq_server/server.py#L133-L159

```python
def _view_interval(self, queue_type, queue_id):
    """Updates the queue interval in SharQ."""
    response = {
        'status': 'failure'
    }
    try:
        request_data = json.loads(request.data)
        interval = request_data['interval']
    except Exception, e:
        resp...
```
4,124 | allenai/allennlp | allennlp/data/dataset_readers/dataset_utils/text2sql_utils.py | clean_and_split_sql | def clean_and_split_sql(sql: str) -> List[str]:
"""
Cleans up and unifies a SQL query. This involves unifying quoted strings
and splitting brackets which aren't formatted consistently in the data.
"""
sql_tokens: List[str] = []
for token in sql.strip().split():
token = token.replace('"',... | python | def clean_and_split_sql(sql: str) -> List[str]:
"""
Cleans up and unifies a SQL query. This involves unifying quoted strings
and splitting brackets which aren't formatted consistently in the data.
"""
sql_tokens: List[str] = []
for token in sql.strip().split():
token = token.replace('"',... | ['def', 'clean_and_split_sql', '(', 'sql', ':', 'str', ')', '->', 'List', '[', 'str', ']', ':', 'sql_tokens', ':', 'List', '[', 'str', ']', '=', '[', ']', 'for', 'token', 'in', 'sql', '.', 'strip', '(', ')', '.', 'split', '(', ')', ':', 'token', '=', 'token', '.', 'replace', '(', '\'"\'', ',', '"\'"', ')', '.', 'replac... | Cleans up and unifies a SQL query. This involves unifying quoted strings
and splitting brackets which aren't formatted consistently in the data. | ['Cleans', 'up', 'and', 'unifies', 'a', 'SQL', 'query', '.', 'This', 'involves', 'unifying', 'quoted', 'strings', 'and', 'splitting', 'brackets', 'which', 'aren', 't', 'formatted', 'consistently', 'in', 'the', 'data', '.'] | train | https://github.com/allenai/allennlp/blob/648a36f77db7e45784c047176074f98534c76636/allennlp/data/dataset_readers/dataset_utils/text2sql_utils.py#L89-L102 |
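The record above truncates the function body; judging from the docstring and the visible token fragments, the cleanup step unifies double quotes to single quotes and detaches parentheses before re-splitting. A minimal sketch under that assumption (the exact replacement rules in the original are not fully visible):

```python
from typing import List

def clean_and_split_sql(sql: str) -> List[str]:
    # Unify quoting (double -> single) and pad parentheses with spaces so
    # that "count(*)" tokenizes as "count", "(", "*", ")". These two rules
    # are inferred from the record's docstring and tokens, not verbatim.
    tokens: List[str] = []
    for token in sql.strip().split():
        token = token.replace('"', "'")
        token = token.replace("(", " ( ").replace(")", " ) ")
        tokens.extend(token.split())
    return tokens

print(clean_and_split_sql('SELECT count(*) FROM city WHERE name = "x"'))
```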
4,125 | django-fluent/django-fluent-blogs | fluent_blogs/sitemaps.py | TagArchiveSitemap.lastmod | def lastmod(self, tag):
"""Return the last modification of the entry."""
lastitems = EntryModel.objects.published().order_by('-modification_date').filter(tags=tag).only('modification_date')
return lastitems[0].modification_date | python | def lastmod(self, tag):
"""Return the last modification of the entry."""
lastitems = EntryModel.objects.published().order_by('-modification_date').filter(tags=tag).only('modification_date')
return lastitems[0].modification_date | ['def', 'lastmod', '(', 'self', ',', 'tag', ')', ':', 'lastitems', '=', 'EntryModel', '.', 'objects', '.', 'published', '(', ')', '.', 'order_by', '(', "'-modification_date'", ')', '.', 'filter', '(', 'tags', '=', 'tag', ')', '.', 'only', '(', "'modification_date'", ')', 'return', 'lastitems', '[', '0', ']', '.', 'modi... | Return the last modification of the entry. | ['Return', 'the', 'last', 'modification', 'of', 'the', 'entry', '.'] | train | https://github.com/django-fluent/django-fluent-blogs/blob/86b148549a010eaca9a2ea987fe43be250e06c50/fluent_blogs/sitemaps.py#L86-L89 |
4,126 | ClimateImpactLab/DataFS | datafs/managers/manager_dynamo.py | DynamoDBManager._update_spec_config | def _update_spec_config(self, document_name, spec):
'''
Dynamo implementation of project specific metadata spec
'''
# add the updated archive_metadata object to Dynamo
self._spec_table.update_item(
Key={'_id': '{}'.format(document_name)},
UpdateExpression... | python | def _update_spec_config(self, document_name, spec):
'''
Dynamo implementation of project specific metadata spec
'''
# add the updated archive_metadata object to Dynamo
self._spec_table.update_item(
Key={'_id': '{}'.format(document_name)},
UpdateExpression... | ['def', '_update_spec_config', '(', 'self', ',', 'document_name', ',', 'spec', ')', ':', '# add the updated archive_metadata object to Dynamo', 'self', '.', '_spec_table', '.', 'update_item', '(', 'Key', '=', '{', "'_id'", ':', "'{}'", '.', 'format', '(', 'document_name', ')', '}', ',', 'UpdateExpression', '=', '"SET c... | Dynamo implementation of project specific metadata spec | ['Dynamo', 'implementation', 'of', 'project', 'specific', 'metadata', 'spec'] | train | https://github.com/ClimateImpactLab/DataFS/blob/0d32c2b4e18d300a11b748a552f6adbc3dd8f59d/datafs/managers/manager_dynamo.py#L182-L192 |
4,127 | openstack/networking-cisco | networking_cisco/apps/saf/agent/vdp/lldpad.py | LldpadDriver.construct_vdp_dict | def construct_vdp_dict(self, mode, mgrid, typeid, typeid_ver, vsiid_frmt,
vsiid, filter_frmt, gid, mac, vlan, oui_id,
oui_data):
"""Constructs the VDP Message.
Please refer http://www.ieee802.org/1/pages/802.1bg.html VDP
Section for more det... | python | def construct_vdp_dict(self, mode, mgrid, typeid, typeid_ver, vsiid_frmt,
vsiid, filter_frmt, gid, mac, vlan, oui_id,
oui_data):
"""Constructs the VDP Message.
Please refer http://www.ieee802.org/1/pages/802.1bg.html VDP
Section for more det... | ['def', 'construct_vdp_dict', '(', 'self', ',', 'mode', ',', 'mgrid', ',', 'typeid', ',', 'typeid_ver', ',', 'vsiid_frmt', ',', 'vsiid', ',', 'filter_frmt', ',', 'gid', ',', 'mac', ',', 'vlan', ',', 'oui_id', ',', 'oui_data', ')', ':', 'vdp_keyword_str', '=', '{', '}', 'if', 'mgrid', 'is', 'None', ':', 'mgrid', '=', 's... | Constructs the VDP Message.
Please refer http://www.ieee802.org/1/pages/802.1bg.html VDP
Section for more detailed information
:param mode: Associate or De-associate
:param mgrid: MGR ID
:param typeid: Type ID
:param typeid_ver: Version of the Type ID
:param vsii... | ['Constructs', 'the', 'VDP', 'Message', '.'] | train | https://github.com/openstack/networking-cisco/blob/aa58a30aec25b86f9aa5952b0863045975debfa9/networking_cisco/apps/saf/agent/vdp/lldpad.py#L330-L402 |
4,128 | DistrictDataLabs/yellowbrick | yellowbrick/datasets/base.py | Dataset.to_dataframe | def to_dataframe(self):
"""
Returns the entire dataset as a single pandas DataFrame.
Returns
-------
df : DataFrame with shape (n_instances, n_columns)
A pandas DataFrame containing the complete original data table
including all targets (specified by the ... | python | def to_dataframe(self):
"""
Returns the entire dataset as a single pandas DataFrame.
Returns
-------
df : DataFrame with shape (n_instances, n_columns)
A pandas DataFrame containing the complete original data table
including all targets (specified by the ... | ['def', 'to_dataframe', '(', 'self', ')', ':', 'if', 'pd', 'is', 'None', ':', 'raise', 'DatasetsError', '(', '"pandas is required to load DataFrame, it can be installed with pip"', ')', 'path', '=', 'find_dataset_path', '(', 'self', '.', 'name', ',', 'ext', '=', '".csv.gz"', ',', 'data_home', '=', 'self', '.', 'data_ho... | Returns the entire dataset as a single pandas DataFrame.
Returns
-------
df : DataFrame with shape (n_instances, n_columns)
A pandas DataFrame containing the complete original data table
including all targets (specified by the meta data) and all
features (inc... | ['Returns', 'the', 'entire', 'dataset', 'as', 'a', 'single', 'pandas', 'DataFrame', '.'] | train | https://github.com/DistrictDataLabs/yellowbrick/blob/59b67236a3862c73363e8edad7cd86da5b69e3b2/yellowbrick/datasets/base.py#L232-L249 |
4,129 | paypal/baler | baler/baler.py | static_uint8_variable_for_data | def static_uint8_variable_for_data(variable_name, data, max_line_length=120, comment="", indent=2):
r"""
>>> static_uint8_variable_for_data("v", "abc")
'static uint8_t v[3] = {\n 0x61, 0x62, 0x63,\n}; // v'
>>> static_uint8_variable_for_data("v", "abc", comment="hi")
'static uint8_t v[3] = { // hi\... | python | def static_uint8_variable_for_data(variable_name, data, max_line_length=120, comment="", indent=2):
r"""
>>> static_uint8_variable_for_data("v", "abc")
'static uint8_t v[3] = {\n 0x61, 0x62, 0x63,\n}; // v'
>>> static_uint8_variable_for_data("v", "abc", comment="hi")
'static uint8_t v[3] = { // hi\... | ['def', 'static_uint8_variable_for_data', '(', 'variable_name', ',', 'data', ',', 'max_line_length', '=', '120', ',', 'comment', '=', '""', ',', 'indent', '=', '2', ')', ':', 'hex_components', '=', '[', ']', 'for', 'byte', 'in', 'data', ':', 'byte_as_hex', '=', '"0x{u:02X}"', '.', 'format', '(', 'u', '=', 'ord', '(', '... | r"""
>>> static_uint8_variable_for_data("v", "abc")
'static uint8_t v[3] = {\n 0x61, 0x62, 0x63,\n}; // v'
>>> static_uint8_variable_for_data("v", "abc", comment="hi")
'static uint8_t v[3] = { // hi\n 0x61, 0x62, 0x63,\n}; // v'
>>> static_uint8_variable_for_data("v", "abc", indent=4)
'static ... | ['r', '>>>', 'static_uint8_variable_for_data', '(', 'v', 'abc', ')', 'static', 'uint8_t', 'v', '[', '3', ']', '=', '{', '\\', 'n', '0x61', '0x62', '0x63', '\\', 'n', '}', ';', '//', 'v', '>>>', 'static_uint8_variable_for_data', '(', 'v', 'abc', 'comment', '=', 'hi', ')', 'static', 'uint8_t', 'v', '[', '3', ']', '=', '{... | train | https://github.com/paypal/baler/blob/db4f09dd2c7729b2df5268c87ad3b4cb43396abf/baler/baler.py#L44-L77 |
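The doctests in the record pin down the expected output exactly; the following re-sketch reproduces the core byte-to-hex formatting for the default case (line-wrapping at `max_line_length` and the `comment` parameter are omitted, and the loop structure is an assumption):

```python
def static_uint8_array(name, data, indent=2):
    # Render each byte as uppercase 0xNN, matching the record's doctests.
    hex_bytes = ", ".join("0x{:02X}".format(b) for b in data.encode("latin-1"))
    pad = " " * indent
    return ("static uint8_t {n}[{l}] = {{\n{p}{h},\n}}; // {n}"
            .format(n=name, l=len(data), p=pad, h=hex_bytes))

print(static_uint8_array("v", "abc"))
# -> static uint8_t v[3] = {
#      0x61, 0x62, 0x63,
#    }; // v
```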
4,130 | TylerTemp/docpie | docpie/pie.py | Docpie.set_config | def set_config(self, **config):
"""Shadow all the current config."""
reinit = False
if 'stdopt' in config:
stdopt = config.pop('stdopt')
reinit = (stdopt != self.stdopt)
self.stdopt = stdopt
if 'attachopt' in config:
attachopt = config.pop(... | python | def set_config(self, **config):
"""Shadow all the current config."""
reinit = False
if 'stdopt' in config:
stdopt = config.pop('stdopt')
reinit = (stdopt != self.stdopt)
self.stdopt = stdopt
if 'attachopt' in config:
attachopt = config.pop(... | ['def', 'set_config', '(', 'self', ',', '*', '*', 'config', ')', ':', 'reinit', '=', 'False', 'if', "'stdopt'", 'in', 'config', ':', 'stdopt', '=', 'config', '.', 'pop', '(', "'stdopt'", ')', 'reinit', '=', '(', 'stdopt', '!=', 'self', '.', 'stdopt', ')', 'self', '.', 'stdopt', '=', 'stdopt', 'if', "'attachopt'", 'in',... | Shadow all the current config. | ['Shadow', 'all', 'the', 'current', 'config', '.'] | train | https://github.com/TylerTemp/docpie/blob/e658454b81b6c79a020d499f12ad73496392c09a/docpie/pie.py#L655-L714 |
4,131 | AndrewAnnex/SpiceyPy | spiceypy/spiceypy.py | ckgpav | def ckgpav(inst, sclkdp, tol, ref):
"""
Get pointing (attitude) and angular velocity
for a specified spacecraft clock time.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ckgpav_c.html
:param inst: NAIF ID of instrument, spacecraft, or structure.
:type inst: int
:param sclkdp: Enc... | python | def ckgpav(inst, sclkdp, tol, ref):
"""
Get pointing (attitude) and angular velocity
for a specified spacecraft clock time.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ckgpav_c.html
:param inst: NAIF ID of instrument, spacecraft, or structure.
:type inst: int
:param sclkdp: Enc... | ['def', 'ckgpav', '(', 'inst', ',', 'sclkdp', ',', 'tol', ',', 'ref', ')', ':', 'inst', '=', 'ctypes', '.', 'c_int', '(', 'inst', ')', 'sclkdp', '=', 'ctypes', '.', 'c_double', '(', 'sclkdp', ')', 'tol', '=', 'ctypes', '.', 'c_double', '(', 'tol', ')', 'ref', '=', 'stypes', '.', 'stringToCharP', '(', 'ref', ')', 'cmat'... | Get pointing (attitude) and angular velocity
for a specified spacecraft clock time.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ckgpav_c.html
:param inst: NAIF ID of instrument, spacecraft, or structure.
:type inst: int
:param sclkdp: Encoded spacecraft clock time.
:type sclkdp: fl... | ['Get', 'pointing', '(', 'attitude', ')', 'and', 'angular', 'velocity', 'for', 'a', 'specified', 'spacecraft', 'clock', 'time', '.'] | train | https://github.com/AndrewAnnex/SpiceyPy/blob/fc20a9b9de68b58eed5b332f0c051fb343a6e335/spiceypy/spiceypy.py#L1031-L1063 |
4,132 | datadesk/python-documentcloud | documentcloud/toolbox.py | credentials_required | def credentials_required(method_func):
"""
Decorator for methods that checks that the client has credentials.
Throws a CredentialsMissingError when they are absent.
"""
def _checkcredentials(self, *args, **kwargs):
if self.username and self.password:
return method_func(self, *ar... | python | def credentials_required(method_func):
"""
Decorator for methods that checks that the client has credentials.
Throws a CredentialsMissingError when they are absent.
"""
def _checkcredentials(self, *args, **kwargs):
if self.username and self.password:
return method_func(self, *ar... | ['def', 'credentials_required', '(', 'method_func', ')', ':', 'def', '_checkcredentials', '(', 'self', ',', '*', 'args', ',', '*', '*', 'kwargs', ')', ':', 'if', 'self', '.', 'username', 'and', 'self', '.', 'password', ':', 'return', 'method_func', '(', 'self', ',', '*', 'args', ',', '*', '*', 'kwargs', ')', 'else', ':... | Decorator for methods that checks that the client has credentials.
Throws a CredentialsMissingError when they are absent. | ['Decorator', 'for', 'methods', 'that', 'checks', 'that', 'the', 'client', 'has', 'credentials', '.'] | train | https://github.com/datadesk/python-documentcloud/blob/0d7f42cbf1edf5c61fca37ed846362cba4abfd76/documentcloud/toolbox.py#L45-L59 |
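The decorator in the record is a self-contained example of attribute-gated dispatch: only call the wrapped method when the client carries credentials, otherwise raise. The names `credentials_required` and `CredentialsMissingError` come from the record; the `Client` class and the error message below are invented for illustration:

```python
from functools import wraps

class CredentialsMissingError(Exception):
    pass

def credentials_required(method_func):
    # Gate dispatch on the presence of username/password on the instance.
    @wraps(method_func)
    def _checkcredentials(self, *args, **kwargs):
        if self.username and self.password:
            return method_func(self, *args, **kwargs)
        raise CredentialsMissingError("This method requires credentials.")
    return _checkcredentials

class Client:
    def __init__(self, username=None, password=None):
        self.username = username
        self.password = password

    @credentials_required
    def whoami(self):
        return self.username

print(Client("alice", "s3cret").whoami())
```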
4,133 | jart/fabulous | fabulous/utils.py | TerminalInfo.dimensions | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
""... | python | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
""... | ['def', 'dimensions', '(', 'self', ')', ':', 'try', ':', 'call', '=', 'fcntl', '.', 'ioctl', '(', 'self', '.', 'termfd', ',', 'termios', '.', 'TIOCGWINSZ', ',', '"\\000"', '*', '8', ')', 'except', 'IOError', ':', 'return', '(', '79', ',', '40', ')', 'else', ':', 'height', ',', 'width', '=', 'struct', '.', 'unpack', '('... | Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``. | ['Returns', 'terminal', 'dimensions'] | train | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/utils.py#L100-L115 |
4,134 | DLR-RM/RAFCON | source/rafcon/gui/mygaphas/connector.py | RectanglePointPort.glue | def glue(self, pos):
"""Calculates the distance between the given position and the port
:param (float, float) pos: Distance to this position is calculated
:return: Distance to port
:rtype: float
"""
# Distance between border of rectangle and point
# Equation from... | python | def glue(self, pos):
"""Calculates the distance between the given position and the port
:param (float, float) pos: Distance to this position is calculated
:return: Distance to port
:rtype: float
"""
# Distance between border of rectangle and point
# Equation from... | ['def', 'glue', '(', 'self', ',', 'pos', ')', ':', '# Distance between border of rectangle and point', '# Equation from http://stackoverflow.com/a/18157551/3568069', 'dx', '=', 'max', '(', 'self', '.', 'point', '.', 'x', '-', 'self', '.', 'width', '/', '2.', '-', 'pos', '[', '0', ']', ',', '0', ',', 'pos', '[', '0', ']... | Calculates the distance between the given position and the port
:param (float, float) pos: Distance to this position is calculated
:return: Distance to port
:rtype: float | ['Calculates', 'the', 'distance', 'between', 'the', 'given', 'position', 'and', 'the', 'port'] | train | https://github.com/DLR-RM/RAFCON/blob/24942ef1a904531f49ab8830a1dbb604441be498/source/rafcon/gui/mygaphas/connector.py#L43-L55 |
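The `dx` clamping visible in the record's tokens follows the point-to-rectangle-border distance formula from the linked Stack Overflow answer. A standalone numeric sketch (combining the two components with `hypot` is an assumption, since the record's final combination step is truncated):

```python
import math

def rect_distance(cx, cy, width, height, px, py):
    # Per-axis overshoot past the rectangle border, clamped to zero
    # when the point lies inside on that axis
    # (approach from http://stackoverflow.com/a/18157551).
    dx = max(cx - width / 2.0 - px, 0.0, px - cx - width / 2.0)
    dy = max(cy - height / 2.0 - py, 0.0, py - cy - height / 2.0)
    return math.hypot(dx, dy)

print(rect_distance(0, 0, 2, 2, 4, 0))  # 3.0: point 3 units past the right edge
```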
4,135 | apache/incubator-mxnet | example/gluon/lipnet/trainer.py | Train.infer_batch | def infer_batch(self, dataloader):
"""
Description : inference for LipNet
"""
sum_losses = 0
len_losses = 0
for input_data, input_label in dataloader:
data = gluon.utils.split_and_load(input_data, self.ctx, even_split=False)
label = gluon.utils.spl... | python | def infer_batch(self, dataloader):
"""
Description : inference for LipNet
"""
sum_losses = 0
len_losses = 0
for input_data, input_label in dataloader:
data = gluon.utils.split_and_load(input_data, self.ctx, even_split=False)
label = gluon.utils.spl... | ['def', 'infer_batch', '(', 'self', ',', 'dataloader', ')', ':', 'sum_losses', '=', '0', 'len_losses', '=', '0', 'for', 'input_data', ',', 'input_label', 'in', 'dataloader', ':', 'data', '=', 'gluon', '.', 'utils', '.', 'split_and_load', '(', 'input_data', ',', 'self', '.', 'ctx', ',', 'even_split', '=', 'False', ')', ... | Description : inference for LipNet | ['Description', ':', 'inference', 'for', 'LipNet'] | train | https://github.com/apache/incubator-mxnet/blob/1af29e9c060a4c7d60eeaacba32afdb9a7775ba7/example/gluon/lipnet/trainer.py#L188-L201 |
4,136 | Kortemme-Lab/klab | klab/google/gcalendar.py | GoogleCalendar.get_upcoming_event_lists_for_the_remainder_of_the_month | def get_upcoming_event_lists_for_the_remainder_of_the_month(self, year = None, month = None):
'''Return the set of events as triple of (today's events, events for the remainder of the week, events for the remainder of the month).'''
events = []
if year == None and month == None:
now... | python | def get_upcoming_event_lists_for_the_remainder_of_the_month(self, year = None, month = None):
'''Return the set of events as triple of (today's events, events for the remainder of the week, events for the remainder of the month).'''
events = []
if year == None and month == None:
now... | ['def', 'get_upcoming_event_lists_for_the_remainder_of_the_month', '(', 'self', ',', 'year', '=', 'None', ',', 'month', '=', 'None', ')', ':', 'events', '=', '[', ']', 'if', 'year', '==', 'None', 'and', 'month', '==', 'None', ':', 'now', '=', 'datetime', '.', 'now', '(', 'tz', '=', 'self', '.', 'timezone', ')', '# time... | Return the set of events as triple of (today's events, events for the remainder of the week, events for the remainder of the month). | ['Return', 'the', 'set', 'of', 'events', 'as', 'triple', 'of', '(', 'today', 's', 'events', 'events', 'for', 'the', 'remainder', 'of', 'the', 'week', 'events', 'for', 'the', 'remainder', 'of', 'the', 'month', ')', '.'] | train | https://github.com/Kortemme-Lab/klab/blob/6d410ad08f1bd9f7cbbb28d7d946e94fbaaa2b6b/klab/google/gcalendar.py#L236-L277 |
4,137 | koszullab/metaTOR | metator/scripts/hicstuff.py | distance_to_contact | def distance_to_contact(D, alpha=1):
"""Compute contact matrix from input distance matrix. Distance values of
zeroes are given the largest contact count otherwise inferred non-zero
distance values.
"""
if callable(alpha):
distance_function = alpha
else:
try:
a = np.f... | python | def distance_to_contact(D, alpha=1):
"""Compute contact matrix from input distance matrix. Distance values of
zeroes are given the largest contact count otherwise inferred non-zero
distance values.
"""
if callable(alpha):
distance_function = alpha
else:
try:
a = np.f... | ['def', 'distance_to_contact', '(', 'D', ',', 'alpha', '=', '1', ')', ':', 'if', 'callable', '(', 'alpha', ')', ':', 'distance_function', '=', 'alpha', 'else', ':', 'try', ':', 'a', '=', 'np', '.', 'float64', '(', 'alpha', ')', 'def', 'distance_function', '(', 'x', ')', ':', 'return', '1', '/', '(', 'x', '**', '(', '1'... | Compute contact matrix from input distance matrix. Distance values of
zeroes are given the largest contact count otherwise inferred non-zero
distance values. | ['Compute', 'contact', 'matrix', 'from', 'input', 'distance', 'matrix', '.', 'Distance', 'values', 'of', 'zeroes', 'are', 'given', 'the', 'largest', 'contact', 'count', 'otherwise', 'inferred', 'non', '-', 'zero', 'distance', 'values', '.'] | train | https://github.com/koszullab/metaTOR/blob/0c1203d1dffedfa5ea380c0335b4baa9cfb7e89a/metator/scripts/hicstuff.py#L918-L942 |
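The docstring states the two rules: invert non-zero distances into contact counts, and give zero distances the largest otherwise-inferred contact. A numpy sketch under those rules (the original's callable-`alpha` branch and exact zero-handling are truncated in the record, so this is an approximation):

```python
import numpy as np

def distance_to_contact(D, alpha=1):
    # Contact ~ 1 / d**alpha for non-zero distances; zero distances are
    # assigned the maximum contact inferred from the non-zero entries.
    D = np.asarray(D, dtype=np.float64)
    contacts = np.zeros_like(D)
    nonzero = D > 0
    contacts[nonzero] = 1.0 / D[nonzero] ** alpha
    if nonzero.any():
        contacts[~nonzero] = contacts[nonzero].max()
    return contacts

print(distance_to_contact(np.array([[0.0, 2.0], [4.0, 0.0]])))
```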
4,138 | yougov/pmxbot | pmxbot/quotes.py | MongoDBQuotes.delete | def delete(self, lookup):
"""
If exactly one quote matches, delete it. Otherwise,
raise a ValueError.
"""
lookup, num = self.split_num(lookup)
if num:
result = self.find_matches(lookup)[num - 1]
else:
result, = self.find_matches(lookup)
self.db.delete_one(result) | python | def delete(self, lookup):
"""
If exactly one quote matches, delete it. Otherwise,
raise a ValueError.
"""
lookup, num = self.split_num(lookup)
if num:
result = self.find_matches(lookup)[num - 1]
else:
result, = self.find_matches(lookup)
self.db.delete_one(result) | ['def', 'delete', '(', 'self', ',', 'lookup', ')', ':', 'lookup', ',', 'num', '=', 'self', '.', 'split_num', '(', 'lookup', ')', 'if', 'num', ':', 'result', '=', 'self', '.', 'find_matches', '(', 'lookup', ')', '[', 'num', '-', '1', ']', 'else', ':', 'result', ',', '=', 'self', '.', 'find_matches', '(', 'lookup', ')', ... | If exactly one quote matches, delete it. Otherwise,
raise a ValueError. | ['If', 'exactly', 'one', 'quote', 'matches', 'delete', 'it', '.', 'Otherwise', 'raise', 'a', 'ValueError', '.'] | train | https://github.com/yougov/pmxbot/blob/5da84a3258a0fd73cb35b60e39769a5d7bfb2ba7/pmxbot/quotes.py#L153-L163 |
4,139 | pip-services3-python/pip-services3-commons-python | pip_services3_commons/refer/References.py | References.get_all | def get_all(self):
"""
Gets all component references registered in this reference map.
:return: a list with component references.
"""
components = []
self._lock.acquire()
try:
for reference in self._references:
components.appe... | python | def get_all(self):
"""
Gets all component references registered in this reference map.
:return: a list with component references.
"""
components = []
self._lock.acquire()
try:
for reference in self._references:
components.appe... | ['def', 'get_all', '(', 'self', ')', ':', 'components', '=', '[', ']', 'self', '.', '_lock', '.', 'acquire', '(', ')', 'try', ':', 'for', 'reference', 'in', 'self', '.', '_references', ':', 'components', '.', 'append', '(', 'reference', '.', 'get_component', '(', ')', ')', 'finally', ':', 'self', '.', '_lock', '.', 're... | Gets all component references registered in this reference map.
:return: a list with component references. | ['Gets', 'all', 'component', 'references', 'registered', 'in', 'this', 'reference', 'map', '.'] | train | https://github.com/pip-services3-python/pip-services3-commons-python/blob/22cbbb3e91e49717f65c083d36147fdb07ba9e3b/pip_services3_commons/refer/References.py#L147-L162 |
4,140 | StackStorm/pybind | pybind/nos/v6_0_2f/brocade_firmware_rpc/firmware_download/input/__init__.py | input._set_usb | def _set_usb(self, v, load=False):
"""
Setter method for usb, mapped from YANG variable /brocade_firmware_rpc/firmware_download/input/usb (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_usb is considered as a private
method. Backends looking to populate t... | python | def _set_usb(self, v, load=False):
"""
Setter method for usb, mapped from YANG variable /brocade_firmware_rpc/firmware_download/input/usb (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_usb is considered as a private
method. Backends looking to populate t... | ['def', '_set_usb', '(', 'self', ',', 'v', ',', 'load', '=', 'False', ')', ':', 'if', 'hasattr', '(', 'v', ',', '"_utype"', ')', ':', 'v', '=', 'v', '.', '_utype', '(', 'v', ')', 'try', ':', 't', '=', 'YANGDynClass', '(', 'v', ',', 'base', '=', 'usb', '.', 'usb', ',', 'is_container', '=', "'container'", ',', 'presence'... | Setter method for usb, mapped from YANG variable /brocade_firmware_rpc/firmware_download/input/usb (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_usb is considered as a private
method. Backends looking to populate this variable should
do so via calling thisO... | ['Setter', 'method', 'for', 'usb', 'mapped', 'from', 'YANG', 'variable', '/', 'brocade_firmware_rpc', '/', 'firmware_download', '/', 'input', '/', 'usb', '(', 'container', ')', 'If', 'this', 'variable', 'is', 'read', '-', 'only', '(', 'config', ':', 'false', ')', 'in', 'the', 'source', 'YANG', 'file', 'then', '_set_usb... | train | https://github.com/StackStorm/pybind/blob/44c467e71b2b425be63867aba6e6fa28b2cfe7fb/pybind/nos/v6_0_2f/brocade_firmware_rpc/firmware_download/input/__init__.py#L200-L221 |
4,141 | srittau/python-asserts | asserts/__init__.py | assert_dict_equal | def assert_dict_equal(
first, second, key_msg_fmt="{msg}", value_msg_fmt="{msg}"
):
"""Fail unless first dictionary equals second.
The dictionaries are considered equal, if they both contain the same
keys, and their respective values are also equal.
>>> assert_dict_equal({"foo": 5}, {"foo": 5})
... | python | def assert_dict_equal(
first, second, key_msg_fmt="{msg}", value_msg_fmt="{msg}"
):
"""Fail unless first dictionary equals second.
The dictionaries are considered equal, if they both contain the same
keys, and their respective values are also equal.
>>> assert_dict_equal({"foo": 5}, {"foo": 5})
... | ['def', 'assert_dict_equal', '(', 'first', ',', 'second', ',', 'key_msg_fmt', '=', '"{msg}"', ',', 'value_msg_fmt', '=', '"{msg}"', ')', ':', 'first_keys', '=', 'set', '(', 'first', '.', 'keys', '(', ')', ')', 'second_keys', '=', 'set', '(', 'second', '.', 'keys', '(', ')', ')', 'missing_keys', '=', 'list', '(', 'first... | Fail unless first dictionary equals second.
The dictionaries are considered equal, if they both contain the same
keys, and their respective values are also equal.
>>> assert_dict_equal({"foo": 5}, {"foo": 5})
>>> assert_dict_equal({"foo": 5}, {})
Traceback (most recent call last):
...
... | ['Fail', 'unless', 'first', 'dictionary', 'equals', 'second', '.'] | train | https://github.com/srittau/python-asserts/blob/1d5c797031c68ee27552d1c94e7f918c3d3d0453/asserts/__init__.py#L305-L380 |
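The doctests in the record define the contract: two dicts are equal when their key sets match and each shared key maps to equal values. A minimal sketch of that contract (the original's `key_msg_fmt`/`value_msg_fmt` message formatting is omitted, and the error wording here is invented):

```python
def assert_dict_equal(first, second):
    # Check key sets first, then compare values key by key.
    missing = sorted(set(first) - set(second))
    extra = sorted(set(second) - set(first))
    if missing or extra:
        raise AssertionError(
            "key mismatch: missing {0!r}, extra {1!r}".format(missing, extra))
    differing = {k: (first[k], second[k]) for k in first if first[k] != second[k]}
    if differing:
        raise AssertionError("value mismatch: {0!r}".format(differing))

assert_dict_equal({"foo": 5}, {"foo": 5})  # passes silently
```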
4,142 | gwastro/pycbc | pycbc/inference/io/base_hdf.py | BaseInferenceFile.read_samples | def read_samples(self, parameters, array_class=None, **kwargs):
"""Reads samples for the given parameter(s).
The ``parameters`` can be the name of any dataset in ``samples_group``,
a virtual field or method of ``FieldArray`` (as long as the file
contains the necessary fields to derive t... | python | def read_samples(self, parameters, array_class=None, **kwargs):
"""Reads samples for the given parameter(s).
The ``parameters`` can be the name of any dataset in ``samples_group``,
a virtual field or method of ``FieldArray`` (as long as the file
contains the necessary fields to derive t... | ['def', 'read_samples', '(', 'self', ',', 'parameters', ',', 'array_class', '=', 'None', ',', '*', '*', 'kwargs', ')', ':', '# get the type of array class to use', 'if', 'array_class', 'is', 'None', ':', 'array_class', '=', 'FieldArray', '# get the names of fields needed for the given parameters', 'possible_fields', '=... | Reads samples for the given parameter(s).
The ``parameters`` can be the name of any dataset in ``samples_group``,
a virtual field or method of ``FieldArray`` (as long as the file
contains the necessary fields to derive the virtual field or method),
and/or any numpy function of these.
... | ['Reads', 'samples', 'for', 'the', 'given', 'parameter', '(', 's', ')', '.'] | train | https://github.com/gwastro/pycbc/blob/7a64cdd104d263f1b6ea0b01e6841837d05a4cb3/pycbc/inference/io/base_hdf.py#L141-L188 |
4,143 | danielfrg/word2vec | word2vec/wordclusters.py | WordClusters.get_cluster | def get_cluster(self, word):
"""
Returns the cluster number for a word in the vocabulary
"""
idx = self.ix(word)
return self.clusters[idx] | python | def get_cluster(self, word):
"""
Returns the cluster number for a word in the vocabulary
"""
idx = self.ix(word)
return self.clusters[idx] | ['def', 'get_cluster', '(', 'self', ',', 'word', ')', ':', 'idx', '=', 'self', '.', 'ix', '(', 'word', ')', 'return', 'self', '.', 'clusters', '[', 'idx', ']'] | Returns the cluster number for a word in the vocabulary | ['Returns', 'the', 'cluster', 'number', 'for', 'a', 'word', 'in', 'the', 'vocabulary'] | train | https://github.com/danielfrg/word2vec/blob/762200acec2941a030abed69e946838af35eb2ae/word2vec/wordclusters.py#L23-L28 |
4,144 | adrn/gala | gala/dynamics/core.py | PhaseSpacePosition.angular_momentum | def angular_momentum(self):
r"""
Compute the angular momentum for the phase-space positions contained
in this object::
.. math::
\boldsymbol{{L}} = \boldsymbol{{q}} \times \boldsymbol{{p}}
See :ref:`shape-conventions` for more information about the shapes of
... | python | def angular_momentum(self):
r"""
Compute the angular momentum for the phase-space positions contained
in this object::
.. math::
\boldsymbol{{L}} = \boldsymbol{{q}} \times \boldsymbol{{p}}
See :ref:`shape-conventions` for more information about the shapes of
... | ['def', 'angular_momentum', '(', 'self', ')', ':', 'cart', '=', 'self', '.', 'represent_as', '(', 'coord', '.', 'CartesianRepresentation', ')', 'return', 'cart', '.', 'pos', '.', 'cross', '(', 'cart', '.', 'vel', ')', '.', 'xyz'] | r"""
Compute the angular momentum for the phase-space positions contained
in this object::
.. math::
\boldsymbol{{L}} = \boldsymbol{{q}} \times \boldsymbol{{p}}
See :ref:`shape-conventions` for more information about the shapes of
input and output objects.
... | ['r', 'Compute', 'the', 'angular', 'momentum', 'for', 'the', 'phase', '-', 'space', 'positions', 'contained', 'in', 'this', 'object', '::'] | train | https://github.com/adrn/gala/blob/ea95575a0df1581bb4b0986aebd6eea8438ab7eb/gala/dynamics/core.py#L689-L718 |
4,145 | projectatomic/atomic-reactor | docs/manpage/generate_manpage.py | ManPageFormatter.create_subcommand_synopsis | def create_subcommand_synopsis(self, parser):
""" show usage with description for commands """
self.add_usage(parser.usage, parser._get_positional_actions(),
None, prefix='')
usage = self._format_usage(parser.usage, parser._get_positional_actions(),
... | python | def create_subcommand_synopsis(self, parser):
""" show usage with description for commands """
self.add_usage(parser.usage, parser._get_positional_actions(),
None, prefix='')
usage = self._format_usage(parser.usage, parser._get_positional_actions(),
... | ['def', 'create_subcommand_synopsis', '(', 'self', ',', 'parser', ')', ':', 'self', '.', 'add_usage', '(', 'parser', '.', 'usage', ',', 'parser', '.', '_get_positional_actions', '(', ')', ',', 'None', ',', 'prefix', '=', "''", ')', 'usage', '=', 'self', '.', '_format_usage', '(', 'parser', '.', 'usage', ',', 'parser', ... | show usage with description for commands | ['show', 'usage', 'with', 'description', 'for', 'commands'] | train | https://github.com/projectatomic/atomic-reactor/blob/fd31c01b964097210bf169960d051e5f04019a80/docs/manpage/generate_manpage.py#L89-L95 |
4,146 | tanghaibao/jcvi | jcvi/formats/coords.py | blast | def blast(args):
"""
%prog blast <deltafile|coordsfile>
Convert delta or coordsfile to BLAST tabular output.
"""
p = OptionParser(blast.__doc__)
opts, args = p.parse_args(args)
if len(args) != 1:
sys.exit(not p.print_help())
deltafile, = args
blastfile = deltafile.rsplit(".... | python | def blast(args):
"""
%prog blast <deltafile|coordsfile>
Convert delta or coordsfile to BLAST tabular output.
"""
p = OptionParser(blast.__doc__)
opts, args = p.parse_args(args)
if len(args) != 1:
sys.exit(not p.print_help())
deltafile, = args
blastfile = deltafile.rsplit(".... | ['def', 'blast', '(', 'args', ')', ':', 'p', '=', 'OptionParser', '(', 'blast', '.', '__doc__', ')', 'opts', ',', 'args', '=', 'p', '.', 'parse_args', '(', 'args', ')', 'if', 'len', '(', 'args', ')', '!=', '1', ':', 'sys', '.', 'exit', '(', 'not', 'p', '.', 'print_help', '(', ')', ')', 'deltafile', ',', '=', 'args', 'b... | %prog blast <deltafile|coordsfile>
Convert delta or coordsfile to BLAST tabular output. | ['%prog', 'blast', '<deltafile|coordsfile', '>'] | train | https://github.com/tanghaibao/jcvi/blob/d2e31a77b6ade7f41f3b321febc2b4744d1cdeca/jcvi/formats/coords.py#L294-L313 |
4,147 | spyder-ide/spyder | spyder/plugins/ipythonconsole/plugin.py | IPythonConsole.interpreter_versions | def interpreter_versions(self):
"""Python and IPython versions used by clients"""
if CONF.get('main_interpreter', 'default'):
from IPython.core import release
versions = dict(
python_version = sys.version.split("\n")[0].strip(),
ipython_versi... | python | def interpreter_versions(self):
"""Python and IPython versions used by clients"""
if CONF.get('main_interpreter', 'default'):
from IPython.core import release
versions = dict(
python_version = sys.version.split("\n")[0].strip(),
ipython_versi... | ['def', 'interpreter_versions', '(', 'self', ')', ':', 'if', 'CONF', '.', 'get', '(', "'main_interpreter'", ',', "'default'", ')', ':', 'from', 'IPython', '.', 'core', 'import', 'release', 'versions', '=', 'dict', '(', 'python_version', '=', 'sys', '.', 'version', '.', 'split', '(', '"\\n"', ')', '[', '0', ']', '.', 's... | Python and IPython versions used by clients | ['Python', 'and', 'IPython', 'versions', 'used', 'by', 'clients'] | train | https://github.com/spyder-ide/spyder/blob/f76836ce1b924bcc4efd3f74f2960d26a4e528e0/spyder/plugins/ipythonconsole/plugin.py#L824-L852 |
4,148 | JoeVirtual/KonFoo | konfoo/utils.py | d3flare_json | def d3flare_json(metadata, file=None, **options):
""" Converts the *metadata* dictionary of a container or field into a
``flare.json`` formatted string or formatted stream written to the *file*
The ``flare.json`` format is defined by the `d3.js <https://d3js.org/>`_ graphic
library.
The ``flare.js... | python | def d3flare_json(metadata, file=None, **options):
""" Converts the *metadata* dictionary of a container or field into a
``flare.json`` formatted string or formatted stream written to the *file*
The ``flare.json`` format is defined by the `d3.js <https://d3js.org/>`_ graphic
library.
The ``flare.js... | ['def', 'd3flare_json', '(', 'metadata', ',', 'file', '=', 'None', ',', '*', '*', 'options', ')', ':', 'def', 'convert', '(', 'root', ')', ':', 'dct', '=', 'OrderedDict', '(', ')', 'item_type', '=', 'root', '.', 'get', '(', "'type'", ')', 'dct', '[', "'class'", ']', '=', 'root', '.', 'get', '(', "'class'", ')', 'dct', ... | Converts the *metadata* dictionary of a container or field into a
``flare.json`` formatted string or formatted stream written to the *file*
The ``flare.json`` format is defined by the `d3.js <https://d3js.org/>`_ graphic
library.
The ``flare.json`` format looks like this:
.. code-block:: JSON
... | ['Converts', 'the', '*', 'metadata', '*', 'dictionary', 'of', 'a', 'container', 'or', 'field', 'into', 'a', 'flare', '.', 'json', 'formatted', 'string', 'or', 'formatted', 'stream', 'written', 'to', 'the', '*', 'file', '*'] | train | https://github.com/JoeVirtual/KonFoo/blob/0c62ef5c2bed4deaf908b34082e4de2544532fdc/konfoo/utils.py#L168-L228 |
4,149 | santoshphilip/eppy | eppy/geometry/height_surface.py | height | def height(poly):
"""height"""
num = len(poly)
hgt = 0.0
for i in range(num):
hgt += (poly[i][2])
return hgt/num | python | def height(poly):
"""height"""
num = len(poly)
hgt = 0.0
for i in range(num):
hgt += (poly[i][2])
return hgt/num | ['def', 'height', '(', 'poly', ')', ':', 'num', '=', 'len', '(', 'poly', ')', 'hgt', '=', '0.0', 'for', 'i', 'in', 'range', '(', 'num', ')', ':', 'hgt', '+=', '(', 'poly', '[', 'i', ']', '[', '2', ']', ')', 'return', 'hgt', '/', 'num'] | height | ['height'] | train | https://github.com/santoshphilip/eppy/blob/55410ff7c11722f35bc4331ff5e00a0b86f787e1/eppy/geometry/height_surface.py#L40-L46 |
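The eppy record above takes a polygon's "height" to be the mean z-coordinate of its vertices; a stand-alone sketch of the same computation (the function name here is illustrative, not eppy's):

```python
def mean_height(poly):
    """Average z-coordinate of a polygon given as (x, y, z) vertex tuples,
    mirroring eppy's height() above."""
    return sum(vertex[2] for vertex in poly) / len(poly)
```

For a flat surface every vertex contributes the same z, so the mean equals that z exactly.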
4,150 | RudolfCardinal/pythonlib | cardinal_pythonlib/exceptions.py | recover_info_from_exception | def recover_info_from_exception(err: Exception) -> Dict:
"""
Retrieves the information added to an exception by
:func:`add_info_to_exception`.
"""
if len(err.args) < 1:
return {}
info = err.args[-1]
if not isinstance(info, dict):
return {}
return info | python | def recover_info_from_exception(err: Exception) -> Dict:
"""
Retrieves the information added to an exception by
:func:`add_info_to_exception`.
"""
if len(err.args) < 1:
return {}
info = err.args[-1]
if not isinstance(info, dict):
return {}
return info | ['def', 'recover_info_from_exception', '(', 'err', ':', 'Exception', ')', '->', 'Dict', ':', 'if', 'len', '(', 'err', '.', 'args', ')', '<', '1', ':', 'return', '{', '}', 'info', '=', 'err', '.', 'args', '[', '-', '1', ']', 'if', 'not', 'isinstance', '(', 'info', ',', 'dict', ')', ':', 'return', '{', '}', 'return', 'in... | Retrieves the information added to an exception by
:func:`add_info_to_exception`. | ['Retrieves', 'the', 'information', 'added', 'to', 'an', 'exception', 'by', ':', 'func', ':', 'add_info_to_exception', '.'] | train | https://github.com/RudolfCardinal/pythonlib/blob/0b84cb35f38bd7d8723958dae51b480a829b7227/cardinal_pythonlib/exceptions.py#L58-L68 |
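The record above is only the read half of a pair; a minimal sketch of both halves, with the `add_info_to_exception` writer reconstructed from the docstring rather than quoted from the library:

```python
def add_info_to_exception(err, info):
    # Stash the info dict as an extra positional argument on the exception.
    err.args = err.args + (info,)

def recover_info_from_exception(err):
    # Mirror of the cardinal_pythonlib helper above: the dict, if present,
    # is expected as the last element of err.args.
    if len(err.args) < 1:
        return {}
    info = err.args[-1]
    if not isinstance(info, dict):
        return {}
    return info
```

The `isinstance` guard means an exception that never had info attached simply yields an empty dict instead of raising.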
4,151 | spulec/moto | moto/core/models.py | BaseBackend.url_paths | def url_paths(self):
"""
A dictionary of the paths of the urls to be mocked with this service and
the handlers that should be called in their place
"""
unformatted_paths = self._url_module.url_paths
paths = {}
for unformatted_path, handler in unformatted_paths.it... | python | def url_paths(self):
"""
A dictionary of the paths of the urls to be mocked with this service and
the handlers that should be called in their place
"""
unformatted_paths = self._url_module.url_paths
paths = {}
for unformatted_path, handler in unformatted_paths.it... | ['def', 'url_paths', '(', 'self', ')', ':', 'unformatted_paths', '=', 'self', '.', '_url_module', '.', 'url_paths', 'paths', '=', '{', '}', 'for', 'unformatted_path', ',', 'handler', 'in', 'unformatted_paths', '.', 'items', '(', ')', ':', 'path', '=', 'unformatted_path', '.', 'format', '(', '""', ')', 'paths', '[', 'pa... | A dictionary of the paths of the urls to be mocked with this service and
the handlers that should be called in their place | ['A', 'dictionary', 'of', 'the', 'paths', 'of', 'the', 'urls', 'to', 'be', 'mocked', 'with', 'this', 'service', 'and', 'the', 'handlers', 'that', 'should', 'be', 'called', 'in', 'their', 'place'] | train | https://github.com/spulec/moto/blob/4a286c4bc288933bb023396e2784a6fdbb966bc9/moto/core/models.py#L501-L513 |
4,152 | eventbrite/pysoa | pysoa/common/transport/local.py | LocalClientTransport.send_request_message | def send_request_message(self, request_id, meta, body, _=None):
"""
Receives a request from the client and handles and dispatches in in-thread. `message_expiry_in_seconds` is not
supported. Messages do not expire, as the server handles the request immediately in the same thread before
th... | python | def send_request_message(self, request_id, meta, body, _=None):
"""
Receives a request from the client and handles and dispatches in in-thread. `message_expiry_in_seconds` is not
supported. Messages do not expire, as the server handles the request immediately in the same thread before
th... | ['def', 'send_request_message', '(', 'self', ',', 'request_id', ',', 'meta', ',', 'body', ',', '_', '=', 'None', ')', ':', 'self', '.', '_current_request', '=', '(', 'request_id', ',', 'meta', ',', 'body', ')', 'try', ':', 'self', '.', 'server', '.', 'handle_next_request', '(', ')', 'finally', ':', 'self', '.', '_curre... | Receives a request from the client and handles and dispatches in in-thread. `message_expiry_in_seconds` is not
supported. Messages do not expire, as the server handles the request immediately in the same thread before
this method returns. This method blocks until the server has completed handling the re... | ['Receives', 'a', 'request', 'from', 'the', 'client', 'and', 'handles', 'and', 'dispatches', 'in', 'in', '-', 'thread', '.', 'message_expiry_in_seconds', 'is', 'not', 'supported', '.', 'Messages', 'do', 'not', 'expire', 'as', 'the', 'server', 'handles', 'the', 'request', 'immediately', 'in', 'the', 'same', 'thread', 'b... | train | https://github.com/eventbrite/pysoa/blob/9c052cae2397d13de3df8ae2c790846a70b53f18/pysoa/common/transport/local.py#L78-L88 |
4,153 | monarch-initiative/dipper | dipper/models/Genotype.py | Genotype.addPartsToVSLC | def addPartsToVSLC(
self, vslc_id, allele1_id, allele2_id, zygosity_id=None,
allele1_rel=None, allele2_rel=None):
"""
Here we add the parts to the VSLC. While traditionally alleles
(reference or variant loci) are traditionally added, you can add any
node (such as... | python | def addPartsToVSLC(
self, vslc_id, allele1_id, allele2_id, zygosity_id=None,
allele1_rel=None, allele2_rel=None):
"""
Here we add the parts to the VSLC. While traditionally alleles
(reference or variant loci) are traditionally added, you can add any
node (such as... | ['def', 'addPartsToVSLC', '(', 'self', ',', 'vslc_id', ',', 'allele1_id', ',', 'allele2_id', ',', 'zygosity_id', '=', 'None', ',', 'allele1_rel', '=', 'None', ',', 'allele2_rel', '=', 'None', ')', ':', '# vslc has parts allele1/allele2', 'if', 'allele1_id', 'is', 'not', 'None', ':', 'self', '.', 'addParts', '(', 'allel... | Here we add the parts to the VSLC. While traditionally alleles
(reference or variant loci) are traditionally added, you can add any
node (such as sequence_alterations for unlocated variations) to a vslc
if they are known to be paired. However, if a sequence_alteration's
loci is unknown... | ['Here', 'we', 'add', 'the', 'parts', 'to', 'the', 'VSLC', '.', 'While', 'traditionally', 'alleles', '(', 'reference', 'or', 'variant', 'loci', ')', 'are', 'traditionally', 'added', 'you', 'can', 'add', 'any', 'node', '(', 'such', 'as', 'sequence_alterations', 'for', 'unlocated', 'variations', ')', 'to', 'a', 'vslc', '... | train | https://github.com/monarch-initiative/dipper/blob/24cc80db355bbe15776edc5c7b41e0886959ba41/dipper/models/Genotype.py#L205-L241 |
4,154 | apache/spark | python/pyspark/sql/dataframe.py | _to_corrected_pandas_type | def _to_corrected_pandas_type(dt):
"""
When converting Spark SQL records to Pandas DataFrame, the inferred data type may be wrong.
This method gets the corrected data type for Pandas if that type may be inferred incorrectly.
"""
import numpy as np
if type(dt) == ByteType:
return np.int8
... | python | def _to_corrected_pandas_type(dt):
"""
When converting Spark SQL records to Pandas DataFrame, the inferred data type may be wrong.
This method gets the corrected data type for Pandas if that type may be inferred incorrectly.
"""
import numpy as np
if type(dt) == ByteType:
return np.int8
... | ['def', '_to_corrected_pandas_type', '(', 'dt', ')', ':', 'import', 'numpy', 'as', 'np', 'if', 'type', '(', 'dt', ')', '==', 'ByteType', ':', 'return', 'np', '.', 'int8', 'elif', 'type', '(', 'dt', ')', '==', 'ShortType', ':', 'return', 'np', '.', 'int16', 'elif', 'type', '(', 'dt', ')', '==', 'IntegerType', ':', 'retu... | When converting Spark SQL records to Pandas DataFrame, the inferred data type may be wrong.
This method gets the corrected data type for Pandas if that type may be inferred incorrectly. | ['When', 'converting', 'Spark', 'SQL', 'records', 'to', 'Pandas', 'DataFrame', 'the', 'inferred', 'data', 'type', 'may', 'be', 'wrong', '.', 'This', 'method', 'gets', 'the', 'corrected', 'data', 'type', 'for', 'Pandas', 'if', 'that', 'type', 'may', 'be', 'inferred', 'incorrectly', '.'] | train | https://github.com/apache/spark/blob/618d6bff71073c8c93501ab7392c3cc579730f0b/python/pyspark/sql/dataframe.py#L2239-L2254 |
4,155 | newville/asteval | asteval/asteval.py | Interpreter.on_return | def on_return(self, node): # ('value',)
"""Return statement: look for None, return special sentinal."""
self.retval = self.run(node.value)
if self.retval is None:
self.retval = ReturnedNone
return | python | def on_return(self, node): # ('value',)
"""Return statement: look for None, return special sentinal."""
self.retval = self.run(node.value)
if self.retval is None:
self.retval = ReturnedNone
return | ['def', 'on_return', '(', 'self', ',', 'node', ')', ':', "# ('value',)", 'self', '.', 'retval', '=', 'self', '.', 'run', '(', 'node', '.', 'value', ')', 'if', 'self', '.', 'retval', 'is', 'None', ':', 'self', '.', 'retval', '=', 'ReturnedNone', 'return'] | Return statement: look for None, return special sentinel. | ['Return', 'statement', ':', 'look', 'for', 'None', 'return', 'special', 'sentinel', '.'] | train | https://github.com/newville/asteval/blob/bb7d3a95079f96ead75ea55662014bbcc82f9b28/asteval/asteval.py#L367-L372 |
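The asteval record relies on a sentinel object to distinguish "the user explicitly returned None" from "no return has run yet"; the trick in isolation (class and attribute names here are illustrative):

```python
ReturnedNone = object()  # unique sentinel: an explicit `return None` happened

class ReturnTracker:
    def __init__(self):
        self.retval = None  # plain None means "no return seen yet"

    def on_return(self, value):
        # As in the asteval record above: map an explicit None onto the
        # sentinel so callers can tell the two cases apart with `is`.
        self.retval = value if value is not None else ReturnedNone
```

Because `ReturnedNone` is a fresh object, identity checks against it can never collide with user values.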
4,156 | ewels/MultiQC | multiqc/modules/qualimap/QM_BamQC.py | parse_coverage | def parse_coverage(self, f):
""" Parse the contents of the Qualimap BamQC Coverage Histogram file """
# Get the sample name from the parent parent directory
# Typical path: <sample name>/raw_data_qualimapReport/coverage_histogram.txt
s_name = self.get_s_name(f)
d = dict()
for l in f['f']:
... | python | def parse_coverage(self, f):
""" Parse the contents of the Qualimap BamQC Coverage Histogram file """
# Get the sample name from the parent parent directory
# Typical path: <sample name>/raw_data_qualimapReport/coverage_histogram.txt
s_name = self.get_s_name(f)
d = dict()
for l in f['f']:
... | ['def', 'parse_coverage', '(', 'self', ',', 'f', ')', ':', '# Get the sample name from the parent parent directory', '# Typical path: <sample name>/raw_data_qualimapReport/coverage_histogram.txt', 's_name', '=', 'self', '.', 'get_s_name', '(', 'f', ')', 'd', '=', 'dict', '(', ')', 'for', 'l', 'in', 'f', '[', "'f'", ']'... | Parse the contents of the Qualimap BamQC Coverage Histogram file | ['Parse', 'the', 'contents', 'of', 'the', 'Qualimap', 'BamQC', 'Coverage', 'Histogram', 'file'] | train | https://github.com/ewels/MultiQC/blob/2037d6322b2554146a74efbf869156ad20d4c4ec/multiqc/modules/qualimap/QM_BamQC.py#L122-L156 |
4,157 | apache/spark | python/pyspark/rdd.py | RDD.mapPartitions | def mapPartitions(self, f, preservesPartitioning=False):
"""
Return a new RDD by applying a function to each partition of this RDD.
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
>>> rdd.mapPartitions(f).collect()
[3, 7]
"""
... | python | def mapPartitions(self, f, preservesPartitioning=False):
"""
Return a new RDD by applying a function to each partition of this RDD.
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
>>> rdd.mapPartitions(f).collect()
[3, 7]
"""
... | ['def', 'mapPartitions', '(', 'self', ',', 'f', ',', 'preservesPartitioning', '=', 'False', ')', ':', 'def', 'func', '(', 's', ',', 'iterator', ')', ':', 'return', 'f', '(', 'iterator', ')', 'return', 'self', '.', 'mapPartitionsWithIndex', '(', 'func', ',', 'preservesPartitioning', ')'] | Return a new RDD by applying a function to each partition of this RDD.
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
>>> rdd.mapPartitions(f).collect()
[3, 7] | ['Return', 'a', 'new', 'RDD', 'by', 'applying', 'a', 'function', 'to', 'each', 'partition', 'of', 'this', 'RDD', '.'] | train | https://github.com/apache/spark/blob/618d6bff71073c8c93501ab7392c3cc579730f0b/python/pyspark/rdd.py#L344-L355 |
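Outside Spark, the `mapPartitions` contract shown in the doctest above can be imitated on a plain list: split the data into contiguous partitions, hand each partition's iterator to `f`, and collect everything the generator yields. This sketch only mimics the semantics, not Spark's distributed execution:

```python
def map_partitions(data, num_partitions, f):
    data = list(data)
    size, extra = divmod(len(data), num_partitions)
    out, start = [], 0
    for i in range(num_partitions):
        # Earlier partitions absorb the remainder, one extra element each.
        end = start + size + (1 if i < extra else 0)
        out.extend(f(iter(data[start:end])))
        start = end
    return out

def partition_sum(iterator):
    yield sum(iterator)
```

With two partitions of `[1, 2, 3, 4]`, `partition_sum` reproduces the doctest's `[3, 7]`.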
4,158 | langloisjp/pysvclog | servicelog.py | UDPLogger.send | def send(self, jsonstr):
"""
Send jsonstr to the UDP collector
>>> logger = UDPLogger()
>>> logger.send('{"key": "value"}')
"""
udp_sock = socket(AF_INET, SOCK_DGRAM)
udp_sock.sendto(jsonstr.encode('utf-8'), self.addr) | python | def send(self, jsonstr):
"""
Send jsonstr to the UDP collector
>>> logger = UDPLogger()
>>> logger.send('{"key": "value"}')
"""
udp_sock = socket(AF_INET, SOCK_DGRAM)
udp_sock.sendto(jsonstr.encode('utf-8'), self.addr) | ['def', 'send', '(', 'self', ',', 'jsonstr', ')', ':', 'udp_sock', '=', 'socket', '(', 'AF_INET', ',', 'SOCK_DGRAM', ')', 'udp_sock', '.', 'sendto', '(', 'jsonstr', '.', 'encode', '(', "'utf-8'", ')', ',', 'self', '.', 'addr', ')'] | Send jsonstr to the UDP collector
>>> logger = UDPLogger()
>>> logger.send('{"key": "value"}') | ['Send', 'jsonstr', 'to', 'the', 'UDP', 'collector'] | train | https://github.com/langloisjp/pysvclog/blob/ab429bb12e13dca63ffce082e633d8879b6e3854/servicelog.py#L66-L74 |
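The `UDPLogger.send` record above is fire-and-forget UDP; a self-contained sketch of the same pattern, with an explicit socket close added (the original opens a new socket per call and never closes it):

```python
from socket import socket, AF_INET, SOCK_DGRAM

class UDPLogger:
    def __init__(self, addr=("127.0.0.1", 9999)):
        self.addr = addr

    def send(self, jsonstr):
        # Datagram send: no connection setup, no delivery guarantee.
        udp_sock = socket(AF_INET, SOCK_DGRAM)
        try:
            udp_sock.sendto(jsonstr.encode("utf-8"), self.addr)
        finally:
            udp_sock.close()
```

The default collector address is an assumption for illustration; a real collector would be configured by the caller.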
4,159 | fabaff/python-glances-api | glances_api/__init__.py | Glances.get_data | async def get_data(self):
"""Retrieve the data."""
url = '{}/{}'.format(self.url, 'all')
try:
with async_timeout.timeout(5, loop=self._loop):
if self.password is None:
response = await self._session.get(url)
else:
... | python | async def get_data(self):
"""Retrieve the data."""
url = '{}/{}'.format(self.url, 'all')
try:
with async_timeout.timeout(5, loop=self._loop):
if self.password is None:
response = await self._session.get(url)
else:
... | ['async', 'def', 'get_data', '(', 'self', ')', ':', 'url', '=', "'{}/{}'", '.', 'format', '(', 'self', '.', 'url', ',', "'all'", ')', 'try', ':', 'with', 'async_timeout', '.', 'timeout', '(', '5', ',', 'loop', '=', 'self', '.', '_loop', ')', ':', 'if', 'self', '.', 'password', 'is', 'None', ':', 'response', '=', 'await... | Retrieve the data. | ['Retrieve', 'the', 'data', '.'] | train | https://github.com/fabaff/python-glances-api/blob/7ed8a688617d0d0b1c8d5b107559fc4afcdbaaac/glances_api/__init__.py#L31-L50 |
4,160 | ramses-tech/ramses | ramses/utils.py | get_route_name | def get_route_name(resource_uri):
""" Get route name from RAML resource URI.
:param resource_uri: String representing RAML resource URI.
:returns string: String with route name, which is :resource_uri:
stripped of non-word characters.
"""
resource_uri = resource_uri.strip('/')
resource_... | python | def get_route_name(resource_uri):
""" Get route name from RAML resource URI.
:param resource_uri: String representing RAML resource URI.
:returns string: String with route name, which is :resource_uri:
stripped of non-word characters.
"""
resource_uri = resource_uri.strip('/')
resource_... | ['def', 'get_route_name', '(', 'resource_uri', ')', ':', 'resource_uri', '=', 'resource_uri', '.', 'strip', '(', "'/'", ')', 'resource_uri', '=', 're', '.', 'sub', '(', "'\\W'", ',', "''", ',', 'resource_uri', ')', 'return', 'resource_uri'] | Get route name from RAML resource URI.
:param resource_uri: String representing RAML resource URI.
:returns string: String with route name, which is :resource_uri:
stripped of non-word characters. | ['Get', 'route', 'name', 'from', 'RAML', 'resource', 'URI', '.'] | train | https://github.com/ramses-tech/ramses/blob/ea2e1e896325b7256cdf5902309e05fd98e0c14c/ramses/utils.py#L345-L354 |
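The ramses helper above reduces a RAML resource URI to a route name in two steps; the same transformation as a self-contained sketch:

```python
import re

def get_route_name(resource_uri):
    # Same two steps as the ramses helper above: trim surrounding slashes,
    # then delete every non-word character.
    resource_uri = resource_uri.strip("/")
    return re.sub(r"\W", "", resource_uri)
```

Note that URI template braces count as non-word characters, so parameter placeholders collapse into the surrounding segments.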
4,161 | vimalloc/flask-jwt-simple | flask_jwt_simple/jwt_manager.py | JWTManager._set_default_configuration_options | def _set_default_configuration_options(app):
"""
Sets the default configuration options used by this extension
"""
# Options for JWTs when the TOKEN_LOCATION is headers
app.config.setdefault('JWT_HEADER_NAME', 'Authorization')
app.config.setdefault('JWT_HEADER_TYPE', 'Bea... | python | def _set_default_configuration_options(app):
"""
Sets the default configuration options used by this extension
"""
# Options for JWTs when the TOKEN_LOCATION is headers
app.config.setdefault('JWT_HEADER_NAME', 'Authorization')
app.config.setdefault('JWT_HEADER_TYPE', 'Bea... | ['def', '_set_default_configuration_options', '(', 'app', ')', ':', '# Options for JWTs when the TOKEN_LOCATION is headers', 'app', '.', 'config', '.', 'setdefault', '(', "'JWT_HEADER_NAME'", ',', "'Authorization'", ')', 'app', '.', 'config', '.', 'setdefault', '(', "'JWT_HEADER_TYPE'", ',', "'Bearer'", ')', "# How lon... | Sets the default configuration options used by this extension | ['Sets', 'the', 'default', 'configuration', 'options', 'used', 'by', 'this', 'extension'] | train | https://github.com/vimalloc/flask-jwt-simple/blob/ed930340cfcff5a6ddc49248d4682e87204dd3be/flask_jwt_simple/jwt_manager.py#L80-L109 |
4,162 | datalib/libextract | libextract/core.py | parse_html | def parse_html(fileobj, encoding):
"""
Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8.
"""
parser = HTMLParser(encoding=encoding, remove_blank_text=True)
return parse(fileobj, parser) | python | def parse_html(fileobj, encoding):
"""
Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8.
"""
parser = HTMLParser(encoding=encoding, remove_blank_text=True)
return parse(fileobj, parser) | ['def', 'parse_html', '(', 'fileobj', ',', 'encoding', ')', ':', 'parser', '=', 'HTMLParser', '(', 'encoding', '=', 'encoding', ',', 'remove_blank_text', '=', 'True', ')', 'return', 'parse', '(', 'fileobj', ',', 'parser', ')'] | Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8. | ['Given', 'a', 'file', 'object', '*', 'fileobj', '*', 'get', 'an', 'ElementTree', 'instance', '.', 'The', '*', 'encoding', '*', 'is', 'assumed', 'to', 'be', 'utf8', '.'] | train | https://github.com/datalib/libextract/blob/9cf9d55c7f8cd622eab0a50f009385f0a39b1200/libextract/core.py#L20-L26 |
4,163 | econ-ark/HARK | HARK/ConsumptionSaving/ConsIndShockModel.py | ConsIndShockSolver.solve | def solve(self):
'''
Solves the single period consumption-saving problem using the method of
endogenous gridpoints. Solution includes a consumption function cFunc
(using cubic or linear splines), a marginal value function vPfunc, a min-
imum acceptable level of normalized market... | python | def solve(self):
'''
Solves the single period consumption-saving problem using the method of
endogenous gridpoints. Solution includes a consumption function cFunc
(using cubic or linear splines), a marginal value function vPfunc, a min-
imum acceptable level of normalized market... | ['def', 'solve', '(', 'self', ')', ':', '# Make arrays of end-of-period assets and end-of-period marginal value', 'aNrm', '=', 'self', '.', 'prepareToCalcEndOfPrdvP', '(', ')', 'EndOfPrdvP', '=', 'self', '.', 'calcEndOfPrdvP', '(', ')', '# Construct a basic solution for this period', 'if', 'self', '.', 'CubicBool', ':'... | Solves the single period consumption-saving problem using the method of
endogenous gridpoints. Solution includes a consumption function cFunc
(using cubic or linear splines), a marginal value function vPfunc, a min-
imum acceptable level of normalized market resources mNrmMin, normalized
... | ['Solves', 'the', 'single', 'period', 'consumption', '-', 'saving', 'problem', 'using', 'the', 'method', 'of', 'endogenous', 'gridpoints', '.', 'Solution', 'includes', 'a', 'consumption', 'function', 'cFunc', '(', 'using', 'cubic', 'or', 'linear', 'splines', ')', 'a', 'marginal', 'value', 'function', 'vPfunc', 'a', 'mi... | train | https://github.com/econ-ark/HARK/blob/3d184153a189e618a87c9540df1cd12044039cc5/HARK/ConsumptionSaving/ConsIndShockModel.py#L1144-L1180 |
4,164 | gwww/elkm1 | elkm1_lib/elements.py | Elements.get_descriptions | def get_descriptions(self, description_type):
"""
Gets the descriptions for specified type.
When complete the callback is called with a list of descriptions
"""
(desc_type, max_units) = description_type
results = [None] * max_units
self.elk._descriptions_in_progre... | python | def get_descriptions(self, description_type):
"""
Gets the descriptions for specified type.
When complete the callback is called with a list of descriptions
"""
(desc_type, max_units) = description_type
results = [None] * max_units
self.elk._descriptions_in_progre... | ['def', 'get_descriptions', '(', 'self', ',', 'description_type', ')', ':', '(', 'desc_type', ',', 'max_units', ')', '=', 'description_type', 'results', '=', '[', 'None', ']', '*', 'max_units', 'self', '.', 'elk', '.', '_descriptions_in_progress', '[', 'desc_type', ']', '=', '(', 'max_units', ',', 'results', ',', 'self... | Gets the descriptions for specified type.
When complete the callback is called with a list of descriptions | ['Gets', 'the', 'descriptions', 'for', 'specified', 'type', '.', 'When', 'complete', 'the', 'callback', 'is', 'called', 'with', 'a', 'list', 'of', 'descriptions'] | train | https://github.com/gwww/elkm1/blob/078d0de30840c3fab46f1f8534d98df557931e91/elkm1_lib/elements.py#L90-L100 |
4,165 | CivicSpleen/ambry | ambry/orm/partition.py | Partition.analysis | def analysis(self):
"""Return an AnalysisPartition proxy, which wraps this partition to provide acess to
dataframes, shapely shapes and other analysis services"""
if isinstance(self, PartitionProxy):
return AnalysisPartition(self._obj)
else:
return AnalysisPartiti... | python | def analysis(self):
"""Return an AnalysisPartition proxy, which wraps this partition to provide acess to
dataframes, shapely shapes and other analysis services"""
if isinstance(self, PartitionProxy):
return AnalysisPartition(self._obj)
else:
return AnalysisPartiti... | ['def', 'analysis', '(', 'self', ')', ':', 'if', 'isinstance', '(', 'self', ',', 'PartitionProxy', ')', ':', 'return', 'AnalysisPartition', '(', 'self', '.', '_obj', ')', 'else', ':', 'return', 'AnalysisPartition', '(', 'self', ')'] | Return an AnalysisPartition proxy, which wraps this partition to provide access to
dataframes, shapely shapes and other analysis services | ['Return', 'an', 'AnalysisPartition', 'proxy', 'which', 'wraps', 'this', 'partition', 'to', 'provide', 'access', 'to', 'dataframes', 'shapely', 'shapes', 'and', 'other', 'analysis', 'services'] | train | https://github.com/CivicSpleen/ambry/blob/d7f2be4bf1f7ffd086f3fadd4fcae60c32473e42/ambry/orm/partition.py#L790-L796 |
4,166 | aloetesting/aloe_django | aloe_django/__init__.py | django_url | def django_url(step, url=None):
"""
The URL for a page from the test server.
:param step: A Gherkin step
:param url: If specified, the relative URL to append.
"""
base_url = step.test.live_server_url
if url:
return urljoin(base_url, url)
else:
return base_url | python | def django_url(step, url=None):
"""
The URL for a page from the test server.
:param step: A Gherkin step
:param url: If specified, the relative URL to append.
"""
base_url = step.test.live_server_url
if url:
return urljoin(base_url, url)
else:
return base_url | ['def', 'django_url', '(', 'step', ',', 'url', '=', 'None', ')', ':', 'base_url', '=', 'step', '.', 'test', '.', 'live_server_url', 'if', 'url', ':', 'return', 'urljoin', '(', 'base_url', ',', 'url', ')', 'else', ':', 'return', 'base_url'] | The URL for a page from the test server.
:param step: A Gherkin step
:param url: If specified, the relative URL to append. | ['The', 'URL', 'for', 'a', 'page', 'from', 'the', 'test', 'server', '.'] | train | https://github.com/aloetesting/aloe_django/blob/672eac97c97644bfe334e70696a6dc5ddf4ced02/aloe_django/__init__.py#L38-L51 |
4,167 | econ-ark/HARK | HARK/utilities.py | combineIndepDstns | def combineIndepDstns(*distributions):
'''
Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points. Can take multivariate discrete
distributions as inputs.
... | python | def combineIndepDstns(*distributions):
'''
Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points. Can take multivariate discrete
distributions as inputs.
... | ['def', 'combineIndepDstns', '(', '*', 'distributions', ')', ':', '# Very quick and incomplete parameter check:', 'for', 'dist', 'in', 'distributions', ':', 'assert', 'len', '(', 'dist', '[', '0', ']', ')', '==', 'len', '(', 'dist', '[', '-', '1', ']', ')', ',', '"len(dist[0]) != len(dist[-1])"', '# Get information on ... | Given n lists (or tuples) whose elements represent n independent, discrete
probability spaces (probabilities and values), construct a joint pmf over
all combinations of these independent points. Can take multivariate discrete
distributions as inputs.
Parameters
----------
distributions : [np.a... | ['Given', 'n', 'lists', '(', 'or', 'tuples', ')', 'whose', 'elements', 'represent', 'n', 'independent', 'discrete', 'probability', 'spaces', '(', 'probabilities', 'and', 'values', ')', 'construct', 'a', 'joint', 'pmf', 'over', 'all', 'combinations', 'of', 'these', 'independent', 'points', '.', 'Can', 'take', 'multivari... | train | https://github.com/econ-ark/HARK/blob/3d184153a189e618a87c9540df1cd12044039cc5/HARK/utilities.py#L832-L907 |
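The joint-pmf construction described in the HARK docstring above can be sketched in pure Python for the univariate-inputs case: each input is a (probabilities, values) pair for one independent discrete distribution, and the joint pmf multiplies probabilities across every combination of values. This is a simplified illustration, not HARK's numpy implementation:

```python
from itertools import product

def combine_indep_dstns(*distributions):
    # Cartesian product over (probability, value) pairs; independence means
    # the joint probability of a combination is the product of its parts.
    joint_probs, joint_values = [], []
    for combo in product(*[list(zip(p, v)) for p, v in distributions]):
        prob = 1.0
        for p_i, _ in combo:
            prob *= p_i
        joint_probs.append(prob)
        joint_values.append(tuple(v_i for _, v_i in combo))
    return joint_probs, joint_values
```

The returned probabilities always sum to one when each input pmf does, which is a cheap sanity check on the construction.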
4,168 | CiscoUcs/UcsPythonSDK | src/UcsSdk/UcsHandle_Edit.py | UcsHandle.SendUcsFirmware | def SendUcsFirmware(self, path=None, dumpXml=False):
"""
Uploads a specific CCO Image on UCS.
- path specifies the path of the image to be uploaded.
"""
from UcsBase import WriteUcsWarning, UcsUtils, ManagedObject, WriteObject, UcsUtils, UcsValidationException, \
UcsException
from Ucs import ConfigCon... | python | def SendUcsFirmware(self, path=None, dumpXml=False):
"""
Uploads a specific CCO Image on UCS.
- path specifies the path of the image to be uploaded.
"""
from UcsBase import WriteUcsWarning, UcsUtils, ManagedObject, WriteObject, UcsUtils, UcsValidationException, \
UcsException
from Ucs import ConfigCon... | ['def', 'SendUcsFirmware', '(', 'self', ',', 'path', '=', 'None', ',', 'dumpXml', '=', 'False', ')', ':', 'from', 'UcsBase', 'import', 'WriteUcsWarning', ',', 'UcsUtils', ',', 'ManagedObject', ',', 'WriteObject', ',', 'UcsUtils', ',', 'UcsValidationException', ',', 'UcsException', 'from', 'Ucs', 'import', 'ConfigConfig... | Uploads a specific CCO Image on UCS.
- path specifies the path of the image to be uploaded. | ['Uploads', 'a', 'specific', 'CCO', 'Image', 'on', 'UCS', '.', '-', 'path', 'specifies', 'the', 'path', 'of', 'the', 'image', 'to', 'be', 'uploaded', '.'] | train | https://github.com/CiscoUcs/UcsPythonSDK/blob/bf6b07d6abeacb922c92b198352eda4eb9e4629b/src/UcsSdk/UcsHandle_Edit.py#L1303-L1375 |
4,169 | rpkilby/SurveyGizmo | surveygizmo/api/base.py | Resource.page | def page(self, value):
""" Set the page which will be returned.
:param value: 'page' parameter value for the rest api call
:type value: str
Take a look at https://apihelp.surveygizmo.com/help/surveyresponse-sub-object
"""
instance = copy(self)
ins... | python | def page(self, value):
""" Set the page which will be returned.
:param value: 'page' parameter value for the rest api call
:type value: str
Take a look at https://apihelp.surveygizmo.com/help/surveyresponse-sub-object
"""
instance = copy(self)
ins... | ['def', 'page', '(', 'self', ',', 'value', ')', ':', 'instance', '=', 'copy', '(', 'self', ')', 'instance', '.', '_filters', '.', 'append', '(', '{', "'page'", ':', 'value', '}', ')', 'return', 'instance'] | Set the page which will be returned.
:param value: 'page' parameter value for the rest api call
:type value: str
Take a look at https://apihelp.surveygizmo.com/help/surveyresponse-sub-object | ['Set', 'the', 'page', 'which', 'will', 'be', 'returned', '.', ':', 'param', 'value', ':', 'page', 'parameter', 'value', 'for', 'the', 'rest', 'api', 'call', ':', 'type', 'value', ':', 'str'] | train | https://github.com/rpkilby/SurveyGizmo/blob/a097091dc7dcfb58f70242fb1becabc98df049a5/surveygizmo/api/base.py#L86-L100 |
4,170 | edibledinos/pwnypack | pwnypack/shellcode/base.py | BaseEnvironment.reg_add | def reg_add(self, reg, value):
"""
Add a value to a register. The value can be another :class:`Register`,
an :class:`Offset`, a :class:`Buffer`, an integer or ``None``.
Arguments:
reg(pwnypack.shellcode.types.Register): The register to add the
value to.
... | python | def reg_add(self, reg, value):
"""
Add a value to a register. The value can be another :class:`Register`,
an :class:`Offset`, a :class:`Buffer`, an integer or ``None``.
Arguments:
reg(pwnypack.shellcode.types.Register): The register to add the
value to.
... | ['def', 'reg_add', '(', 'self', ',', 'reg', ',', 'value', ')', ':', 'if', 'value', 'is', 'None', ':', 'return', '[', ']', 'elif', 'isinstance', '(', 'value', ',', 'Register', ')', ':', 'return', 'self', '.', 'reg_add_reg', '(', 'reg', ',', 'value', ')', 'elif', 'isinstance', '(', 'value', ',', '(', 'Buffer', ',', 'six'... | Add a value to a register. The value can be another :class:`Register`,
an :class:`Offset`, a :class:`Buffer`, an integer or ``None``.
Arguments:
reg(pwnypack.shellcode.types.Register): The register to add the
value to.
value: The value to add to the register.
... | ['Add', 'a', 'value', 'to', 'a', 'register', '.', 'The', 'value', 'can', 'be', 'another', ':', 'class', ':', 'Register', 'an', ':', 'class', ':', 'Offset', 'a', ':', 'class', ':', 'Buffer', 'an', 'integer', 'or', 'None', '.'] | train | https://github.com/edibledinos/pwnypack/blob/e0a5a8e6ef3f4f1f7e1b91ee379711f4a49cb0e6/pwnypack/shellcode/base.py#L228-L267 |
4,171 | PyGithub/PyGithub | github/Label.py | Label.edit | def edit(self, name, color, description=github.GithubObject.NotSet):
"""
:calls: `PATCH /repos/:owner/:repo/labels/:name <http://developer.github.com/v3/issues/labels>`_
:param name: string
:param color: string
:param description: string
:rtype: None
"""
a... | python | def edit(self, name, color, description=github.GithubObject.NotSet):
"""
:calls: `PATCH /repos/:owner/:repo/labels/:name <http://developer.github.com/v3/issues/labels>`_
:param name: string
:param color: string
:param description: string
:rtype: None
"""
a... | ['def', 'edit', '(', 'self', ',', 'name', ',', 'color', ',', 'description', '=', 'github', '.', 'GithubObject', '.', 'NotSet', ')', ':', 'assert', 'isinstance', '(', 'name', ',', '(', 'str', ',', 'unicode', ')', ')', ',', 'name', 'assert', 'isinstance', '(', 'color', ',', '(', 'str', ',', 'unicode', ')', ')', ',', 'col... | :calls: `PATCH /repos/:owner/:repo/labels/:name <http://developer.github.com/v3/issues/labels>`_
:param name: string
:param color: string
:param description: string
:rtype: None | [':', 'calls', ':', 'PATCH', '/', 'repos', '/', ':', 'owner', '/', ':', 'repo', '/', 'labels', '/', ':', 'name', '<http', ':', '//', 'developer', '.', 'github', '.', 'com', '/', 'v3', '/', 'issues', '/', 'labels', '>', '_', ':', 'param', 'name', ':', 'string', ':', 'param', 'color', ':', 'string', ':', 'param', 'descri... | train | https://github.com/PyGithub/PyGithub/blob/f716df86bbe7dc276c6596699fa9712b61ef974c/github/Label.py#L91-L114 |
4,172 | cbclab/MOT | mot/mcmc_diagnostics.py | minimum_multivariate_ess | def minimum_multivariate_ess(nmr_params, alpha=0.05, epsilon=0.05):
r"""Calculate the minimum multivariate Effective Sample Size you will need to obtain the desired precision.
This implements the inequality from Vats et al. (2016):
.. math::
\widehat{ESS} \geq \frac{2^{2/p}\pi}{(p\Gamma(p/2))^{2/... | python | def minimum_multivariate_ess(nmr_params, alpha=0.05, epsilon=0.05):
r"""Calculate the minimum multivariate Effective Sample Size you will need to obtain the desired precision.
This implements the inequality from Vats et al. (2016):
.. math::
\widehat{ESS} \geq \frac{2^{2/p}\pi}{(p\Gamma(p/2))^{2/... | ['def', 'minimum_multivariate_ess', '(', 'nmr_params', ',', 'alpha', '=', '0.05', ',', 'epsilon', '=', '0.05', ')', ':', 'tmp', '=', '2.0', '/', 'nmr_params', 'log_min_ess', '=', 'tmp', '*', 'np', '.', 'log', '(', '2', ')', '+', 'np', '.', 'log', '(', 'np', '.', 'pi', ')', '-', 'tmp', '*', '(', 'np', '.', 'log', '(', '... | r"""Calculate the minimum multivariate Effective Sample Size you will need to obtain the desired precision.
This implements the inequality from Vats et al. (2016):
.. math::
\widehat{ESS} \geq \frac{2^{2/p}\pi}{(p\Gamma(p/2))^{2/p}} \frac{\chi^{2}_{1-\alpha,p}}{\epsilon^{2}}
Where :math:`p` is t... | ['r', 'Calculate', 'the', 'minimum', 'multivariate', 'Effective', 'Sample', 'Size', 'you', 'will', 'need', 'to', 'obtain', 'the', 'desired', 'precision', '.'] | train | https://github.com/cbclab/MOT/blob/fb3243b65025705842e82704705c00902f9a35af/mot/mcmc_diagnostics.py#L258-L288 |
4,173 | latchset/custodia | src/custodia/log.py | getLogger | def getLogger(name):
"""Create logger with custom exception() method
"""
def exception(self, msg, *args, **kwargs):
extra = kwargs.setdefault('extra', {})
extra['exc_fullstack'] = self.isEnabledFor(logging.DEBUG)
kwargs['exc_info'] = True
self.log(logging.ERROR, msg, *args, *... | python | def getLogger(name):
"""Create logger with custom exception() method
"""
def exception(self, msg, *args, **kwargs):
extra = kwargs.setdefault('extra', {})
extra['exc_fullstack'] = self.isEnabledFor(logging.DEBUG)
kwargs['exc_info'] = True
self.log(logging.ERROR, msg, *args, *... | ['def', 'getLogger', '(', 'name', ')', ':', 'def', 'exception', '(', 'self', ',', 'msg', ',', '*', 'args', ',', '*', '*', 'kwargs', ')', ':', 'extra', '=', 'kwargs', '.', 'setdefault', '(', "'extra'", ',', '{', '}', ')', 'extra', '[', "'exc_fullstack'", ']', '=', 'self', '.', 'isEnabledFor', '(', 'logging', '.', 'DEBUG... | Create logger with custom exception() method | ['Create', 'logger', 'with', 'custom', 'exception', '()', 'method'] | train | https://github.com/latchset/custodia/blob/5ad4cd7a2f40babc6b8b5d16215b7e27ca993b6d/src/custodia/log.py#L68-L79 |
4,174 | timothydmorton/VESPA | vespa/stars/utils.py | semimajor | def semimajor(P,mstar=1):
"""Returns semimajor axis in AU given P in days, mstar in solar masses.
"""
return ((P*DAY/2/np.pi)**2*G*mstar*MSUN)**(1./3)/AU | python | def semimajor(P,mstar=1):
"""Returns semimajor axis in AU given P in days, mstar in solar masses.
"""
return ((P*DAY/2/np.pi)**2*G*mstar*MSUN)**(1./3)/AU | ['def', 'semimajor', '(', 'P', ',', 'mstar', '=', '1', ')', ':', 'return', '(', '(', 'P', '*', 'DAY', '/', '2', '/', 'np', '.', 'pi', ')', '**', '2', '*', 'G', '*', 'mstar', '*', 'MSUN', ')', '**', '(', '1.', '/', '3', ')', '/', 'AU'] | Returns semimajor axis in AU given P in days, mstar in solar masses. | ['Returns', 'semimajor', 'axis', 'in', 'AU', 'given', 'P', 'in', 'days', 'mstar', 'in', 'solar', 'masses', '.'] | train | https://github.com/timothydmorton/VESPA/blob/0446b54d48009f3655cfd1a3957ceea21d3adcaa/vespa/stars/utils.py#L159-L162 |
4,175 | orbingol/NURBS-Python | geomdl/voxelize.py | save_voxel_grid | def save_voxel_grid(voxel_grid, file_name):
""" Saves binary voxel grid as a binary file.
The binary file is structured in little-endian unsigned int format.
:param voxel_grid: binary voxel grid
:type voxel_grid: list, tuple
:param file_name: file name to save
:type file_name: str
"""
... | python | def save_voxel_grid(voxel_grid, file_name):
""" Saves binary voxel grid as a binary file.
The binary file is structured in little-endian unsigned int format.
:param voxel_grid: binary voxel grid
:type voxel_grid: list, tuple
:param file_name: file name to save
:type file_name: str
"""
... | ['def', 'save_voxel_grid', '(', 'voxel_grid', ',', 'file_name', ')', ':', 'try', ':', 'with', 'open', '(', 'file_name', ',', "'wb'", ')', 'as', 'fp', ':', 'for', 'voxel', 'in', 'voxel_grid', ':', 'fp', '.', 'write', '(', 'struct', '.', 'pack', '(', '"<I"', ',', 'voxel', ')', ')', 'except', 'IOError', 'as', 'e', ':', 'p... | Saves binary voxel grid as a binary file.
The binary file is structured in little-endian unsigned int format.
:param voxel_grid: binary voxel grid
:type voxel_grid: list, tuple
:param file_name: file name to save
:type file_name: str | ['Saves', 'binary', 'voxel', 'grid', 'as', 'a', 'binary', 'file', '.'] | train | https://github.com/orbingol/NURBS-Python/blob/b1c6a8b51cf143ff58761438e93ba6baef470627/geomdl/voxelize.py#L89-L107 |
4,176 | bcbio/bcbio-nextgen | bcbio/variation/vcfanno.py | find_annotations | def find_annotations(data, retriever=None):
"""Find annotation configuration files for vcfanno, using pre-installed inputs.
Creates absolute paths for user specified inputs and finds locally
installed defaults.
Default annotations:
- gemini for variant pipelines
- somatic for variant tumor... | python | def find_annotations(data, retriever=None):
"""Find annotation configuration files for vcfanno, using pre-installed inputs.
Creates absolute paths for user specified inputs and finds locally
installed defaults.
Default annotations:
- gemini for variant pipelines
- somatic for variant tumor... | ['def', 'find_annotations', '(', 'data', ',', 'retriever', '=', 'None', ')', ':', 'conf_files', '=', 'dd', '.', 'get_vcfanno', '(', 'data', ')', 'if', 'not', 'isinstance', '(', 'conf_files', ',', '(', 'list', ',', 'tuple', ')', ')', ':', 'conf_files', '=', '[', 'conf_files', ']', 'for', 'c', 'in', '_default_conf_files'... | Find annotation configuration files for vcfanno, using pre-installed inputs.
Creates absolute paths for user specified inputs and finds locally
installed defaults.
Default annotations:
- gemini for variant pipelines
- somatic for variant tumor pipelines
- rnaedit for RNA-seq variant call... | ['Find', 'annotation', 'configuration', 'files', 'for', 'vcfanno', 'using', 'pre', '-', 'installed', 'inputs', '.'] | train | https://github.com/bcbio/bcbio-nextgen/blob/6a9348c0054ccd5baffd22f1bb7d0422f6978b20/bcbio/variation/vcfanno.py#L91-L137 |
4,177 | vsoch/pokemon | pokemon/skills.py | get_ascii | def get_ascii(pid=None, name=None, pokemons=None, return_pokemons=False, message=None):
'''get_ascii will return ascii art for a pokemon based on a name or pid.
:param pid: the pokemon ID to return
:param name: the pokemon name to return
:param return_pokemons: return catches (default False)
:param ... | python | def get_ascii(pid=None, name=None, pokemons=None, return_pokemons=False, message=None):
'''get_ascii will return ascii art for a pokemon based on a name or pid.
:param pid: the pokemon ID to return
:param name: the pokemon name to return
:param return_pokemons: return catches (default False)
:param ... | ['def', 'get_ascii', '(', 'pid', '=', 'None', ',', 'name', '=', 'None', ',', 'pokemons', '=', 'None', ',', 'return_pokemons', '=', 'False', ',', 'message', '=', 'None', ')', ':', 'pokemon', '=', 'get_pokemon', '(', 'name', '=', 'name', ',', 'pid', '=', 'pid', ',', 'pokemons', '=', 'pokemons', ')', 'printme', '=', 'mess... | get_ascii will return ascii art for a pokemon based on a name or pid.
:param pid: the pokemon ID to return
:param name: the pokemon name to return
:param return_pokemons: return catches (default False)
:param message: add a message to the ascii | ['get_ascii', 'will', 'return', 'ascii', 'art', 'for', 'a', 'pokemon', 'based', 'on', 'a', 'name', 'or', 'pid', '.', ':', 'param', 'pid', ':', 'the', 'pokemon', 'ID', 'to', 'return', ':', 'param', 'name', ':', 'the', 'pokemon', 'name', 'to', 'return', ':', 'param', 'return_pokemons', ':', 'return', 'catches', '(', 'def... | train | https://github.com/vsoch/pokemon/blob/c9cd8c5d64897617867d38d45183476ea64a0620/pokemon/skills.py#L27-L43 |
4,178 | honzamach/pynspect | pynspect/traversers.py | BaseFilteringTreeTraverser.decorate_function | def decorate_function(self, name, decorator):
"""
Decorate function with given name with given decorator.
:param str name: Name of the function.
:param callable decorator: Decorator callback.
"""
self.functions[name] = decorator(self.functions[name]) | python | def decorate_function(self, name, decorator):
"""
Decorate function with given name with given decorator.
:param str name: Name of the function.
:param callable decorator: Decorator callback.
"""
self.functions[name] = decorator(self.functions[name]) | ['def', 'decorate_function', '(', 'self', ',', 'name', ',', 'decorator', ')', ':', 'self', '.', 'functions', '[', 'name', ']', '=', 'decorator', '(', 'self', '.', 'functions', '[', 'name', ']', ')'] | Decorate function with given name with given decorator.
:param str name: Name of the function.
:param callable decorator: Decorator callback. | ['Decorate', 'function', 'with', 'given', 'name', 'with', 'given', 'decorator', '.'] | train | https://github.com/honzamach/pynspect/blob/0582dcc1f7aafe50e25a21c792ea1b3367ea5881/pynspect/traversers.py#L753-L760 |
4,179 | ic-labs/django-icekit | icekit_events/models.py | get_occurrence_times_for_event | def get_occurrence_times_for_event(event):
"""
Return a tuple with two sets containing the (start, end) *naive* datetimes
of an Event's Occurrences, or the original start datetime if an
Occurrence's start was modified by a user.
"""
occurrences_starts = set()
occurrences_ends = set()
for... | python | def get_occurrence_times_for_event(event):
"""
Return a tuple with two sets containing the (start, end) *naive* datetimes
of an Event's Occurrences, or the original start datetime if an
Occurrence's start was modified by a user.
"""
occurrences_starts = set()
occurrences_ends = set()
for... | ['def', 'get_occurrence_times_for_event', '(', 'event', ')', ':', 'occurrences_starts', '=', 'set', '(', ')', 'occurrences_ends', '=', 'set', '(', ')', 'for', 'o', 'in', 'event', '.', 'occurrence_list', ':', 'occurrences_starts', '.', 'add', '(', 'coerce_naive', '(', 'o', '.', 'original_start', 'or', 'o', '.', 'start',... | Return a tuple with two sets containing the (start, end) *naive* datetimes
of an Event's Occurrences, or the original start datetime if an
Occurrence's start was modified by a user. | ['Return', 'a', 'tuple', 'with', 'two', 'sets', 'containing', 'the', '(', 'start', 'end', ')', '*', 'naive', '*', 'datetimes', 'of', 'an', 'Event', 's', 'Occurrences', 'or', 'the', 'original', 'start', 'datetime', 'if', 'an', 'Occurrence', 's', 'start', 'was', 'modified', 'by', 'a', 'user', '.'] | train | https://github.com/ic-labs/django-icekit/blob/c507ea5b1864303732c53ad7c5800571fca5fa94/icekit_events/models.py#L933-L948 |
4,180 | F5Networks/f5-common-python | f5/utils/responses/handlers.py | Stats._get_nest_stats | def _get_nest_stats(self):
"""Helper method to deal with nestedStats
as json format changed in v12.x
"""
for x in self.rdict:
check = urlparse(x)
if check.scheme:
nested_dict = self.rdict[x]['nestedStats']
tmp_dict = nested_dict['e... | python | def _get_nest_stats(self):
"""Helper method to deal with nestedStats
as json format changed in v12.x
"""
for x in self.rdict:
check = urlparse(x)
if check.scheme:
nested_dict = self.rdict[x]['nestedStats']
tmp_dict = nested_dict['e... | ['def', '_get_nest_stats', '(', 'self', ')', ':', 'for', 'x', 'in', 'self', '.', 'rdict', ':', 'check', '=', 'urlparse', '(', 'x', ')', 'if', 'check', '.', 'scheme', ':', 'nested_dict', '=', 'self', '.', 'rdict', '[', 'x', ']', '[', "'nestedStats'", ']', 'tmp_dict', '=', 'nested_dict', '[', "'entries'", ']', 'return', ... | Helper method to deal with nestedStats
as json format changed in v12.x | ['Helper', 'method', 'to', 'deal', 'with', 'nestedStats'] | train | https://github.com/F5Networks/f5-common-python/blob/7e67d5acd757a60e3d5f8c88c534bd72208f5494/f5/utils/responses/handlers.py#L52-L64 |
4,181 | rigetti/pyquil | pyquil/api/_benchmark.py | BenchmarkConnection.generate_rb_sequence | def generate_rb_sequence(self, depth, gateset, seed=None, interleaver=None):
"""
Construct a randomized benchmarking experiment on the given qubits, decomposing into
gateset. If interleaver is not provided, the returned sequence will have the form
C_1 C_2 ... C_(depth-1) C_inv ,
... | python | def generate_rb_sequence(self, depth, gateset, seed=None, interleaver=None):
"""
Construct a randomized benchmarking experiment on the given qubits, decomposing into
gateset. If interleaver is not provided, the returned sequence will have the form
C_1 C_2 ... C_(depth-1) C_inv ,
... | ['def', 'generate_rb_sequence', '(', 'self', ',', 'depth', ',', 'gateset', ',', 'seed', '=', 'None', ',', 'interleaver', '=', 'None', ')', ':', '# Support QubitPlaceholders: we temporarily index to arbitrary integers.', '# `generate_rb_sequence` handles mapping back to the original gateset gates.', 'gateset_as_program'... | Construct a randomized benchmarking experiment on the given qubits, decomposing into
gateset. If interleaver is not provided, the returned sequence will have the form
C_1 C_2 ... C_(depth-1) C_inv ,
where each C is a Clifford element drawn from gateset, C_{< depth} are randomly selected,
... | ['Construct', 'a', 'randomized', 'benchmarking', 'experiment', 'on', 'the', 'given', 'qubits', 'decomposing', 'into', 'gateset', '.', 'If', 'interleaver', 'is', 'not', 'provided', 'the', 'returned', 'sequence', 'will', 'have', 'the', 'form'] | train | https://github.com/rigetti/pyquil/blob/ec98e453084b0037d69d8c3245f6822a5422593d/pyquil/api/_benchmark.py#L82-L141 |
4,182 | dls-controls/pymalcolm | malcolm/core/views.py | Attribute.put_value | def put_value(self, value, timeout=None):
"""Put a value to the Attribute and wait for completion"""
self._context.put(self._data.path + ["value"], value, timeout=timeout) | python | def put_value(self, value, timeout=None):
"""Put a value to the Attribute and wait for completion"""
self._context.put(self._data.path + ["value"], value, timeout=timeout) | ['def', 'put_value', '(', 'self', ',', 'value', ',', 'timeout', '=', 'None', ')', ':', 'self', '.', '_context', '.', 'put', '(', 'self', '.', '_data', '.', 'path', '+', '[', '"value"', ']', ',', 'value', ',', 'timeout', '=', 'timeout', ')'] | Put a value to the Attribute and wait for completion | ['Put', 'a', 'value', 'to', 'the', 'Attribute', 'and', 'wait', 'for', 'completion'] | train | https://github.com/dls-controls/pymalcolm/blob/80ea667e4da26365a6cebc0249f52fdc744bd983/malcolm/core/views.py#L78-L80 |
4,183 | evhub/coconut | coconut/compiler/matching.py | Matcher.duplicate | def duplicate(self):
"""Duplicates the matcher to others."""
other = Matcher(self.loc, self.check_var, self.checkdefs, self.names, self.var_index)
other.insert_check(0, "not " + self.check_var)
self.others.append(other)
return other | python | def duplicate(self):
"""Duplicates the matcher to others."""
other = Matcher(self.loc, self.check_var, self.checkdefs, self.names, self.var_index)
other.insert_check(0, "not " + self.check_var)
self.others.append(other)
return other | ['def', 'duplicate', '(', 'self', ')', ':', 'other', '=', 'Matcher', '(', 'self', '.', 'loc', ',', 'self', '.', 'check_var', ',', 'self', '.', 'checkdefs', ',', 'self', '.', 'names', ',', 'self', '.', 'var_index', ')', 'other', '.', 'insert_check', '(', '0', ',', '"not "', '+', 'self', '.', 'check_var', ')', 'self', '.... | Duplicates the matcher to others. | ['Duplicates', 'the', 'matcher', 'to', 'others', '.'] | train | https://github.com/evhub/coconut/blob/ff97177344e7604e89a0a98a977a87ed2a56fc6d/coconut/compiler/matching.py#L120-L125 |
4,184 | BD2KGenomics/toil-scripts | src/toil_scripts/bwa_alignment/bwa_alignment.py | main | def main():
"""
Computational Genomics Lab, Genomics Institute, UC Santa Cruz
Toil BWA pipeline
Alignment of fastq reads via BWA-kit
General usage:
1. Type "toil-bwa generate" to create an editable manifest and config in the current working directory.
2. Parameterize the pipeline by editin... | python | def main():
"""
Computational Genomics Lab, Genomics Institute, UC Santa Cruz
Toil BWA pipeline
Alignment of fastq reads via BWA-kit
General usage:
1. Type "toil-bwa generate" to create an editable manifest and config in the current working directory.
2. Parameterize the pipeline by editin... | ['def', 'main', '(', ')', ':', '# Define Parser object and add to Toil', 'parser', '=', 'argparse', '.', 'ArgumentParser', '(', 'description', '=', 'main', '.', '__doc__', ',', 'formatter_class', '=', 'argparse', '.', 'RawTextHelpFormatter', ')', 'subparsers', '=', 'parser', '.', 'add_subparsers', '(', 'dest', '=', "'c... | Computational Genomics Lab, Genomics Institute, UC Santa Cruz
Toil BWA pipeline
Alignment of fastq reads via BWA-kit
General usage:
1. Type "toil-bwa generate" to create an editable manifest and config in the current working directory.
2. Parameterize the pipeline by editing the config.
3. Fil... | ['Computational', 'Genomics', 'Lab', 'Genomics', 'Institute', 'UC', 'Santa', 'Cruz', 'Toil', 'BWA', 'pipeline'] | train | https://github.com/BD2KGenomics/toil-scripts/blob/f878d863defcdccaabb7fe06f991451b7a198fb7/src/toil_scripts/bwa_alignment/bwa_alignment.py#L215-L292 |
4,185 | Alignak-monitoring/alignak | alignak/external_command.py | ExternalCommandManager.launch_host_event_handler | def launch_host_event_handler(self, host):
"""Launch event handler for a service
Format of the line that triggers function call::
LAUNCH_HOST_EVENT_HANDLER;<host_name>
:param host: host to execute the event handler
:type host: alignak.objects.host.Host
:return: None
... | python | def launch_host_event_handler(self, host):
"""Launch event handler for a service
Format of the line that triggers function call::
LAUNCH_HOST_EVENT_HANDLER;<host_name>
:param host: host to execute the event handler
:type host: alignak.objects.host.Host
:return: None
... | ['def', 'launch_host_event_handler', '(', 'self', ',', 'host', ')', ':', 'host', '.', 'get_event_handlers', '(', 'self', '.', 'hosts', ',', 'self', '.', 'daemon', '.', 'macromodulations', ',', 'self', '.', 'daemon', '.', 'timeperiods', ',', 'ext_cmd', '=', 'True', ')'] | Launch event handler for a service
Format of the line that triggers function call::
LAUNCH_HOST_EVENT_HANDLER;<host_name>
:param host: host to execute the event handler
:type host: alignak.objects.host.Host
:return: None | ['Launch', 'event', 'handler', 'for', 'a', 'service', 'Format', 'of', 'the', 'line', 'that', 'triggers', 'function', 'call', '::'] | train | https://github.com/Alignak-monitoring/alignak/blob/f3c145207e83159b799d3714e4241399c7740a64/alignak/external_command.py#L4021-L4032 |
4,186 | mitsei/dlkit | dlkit/json_/relationship/managers.py | RelationshipManager.get_relationship_admin_session_for_family | def get_relationship_admin_session_for_family(self, family_id):
"""Gets the ``OsidSession`` associated with the relationship administration service for the given family.
arg: family_id (osid.id.Id): the ``Id`` of the ``Family``
return: (osid.relationship.RelationshipAdminSession) - a
... | python | def get_relationship_admin_session_for_family(self, family_id):
"""Gets the ``OsidSession`` associated with the relationship administration service for the given family.
arg: family_id (osid.id.Id): the ``Id`` of the ``Family``
return: (osid.relationship.RelationshipAdminSession) - a
... | ['def', 'get_relationship_admin_session_for_family', '(', 'self', ',', 'family_id', ')', ':', 'if', 'not', 'self', '.', 'supports_relationship_admin', '(', ')', ':', 'raise', 'errors', '.', 'Unimplemented', '(', ')', '##', '# Also include check to see if the catalog Id is found otherwise raise errors.NotFound', '##', '... | Gets the ``OsidSession`` associated with the relationship administration service for the given family.
arg: family_id (osid.id.Id): the ``Id`` of the ``Family``
return: (osid.relationship.RelationshipAdminSession) - a
``RelationshipAdminSession``
raise: NotFound - no family ... | ['Gets', 'the', 'OsidSession', 'associated', 'with', 'the', 'relationship', 'administration', 'service', 'for', 'the', 'given', 'family', '.'] | train | https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/relationship/managers.py#L334-L356 |
4,187 | PGower/PyCanvas | pycanvas/apis/groups.py | GroupsAPI.get_single_group_membership_users | def get_single_group_membership_users(self, user_id, group_id):
"""
Get a single group membership.
Returns the group membership with the given membership id or user id.
"""
path = {}
data = {}
params = {}
# REQUIRED - PATH - group_id
"... | python | def get_single_group_membership_users(self, user_id, group_id):
"""
Get a single group membership.
Returns the group membership with the given membership id or user id.
"""
path = {}
data = {}
params = {}
# REQUIRED - PATH - group_id
"... | ['def', 'get_single_group_membership_users', '(', 'self', ',', 'user_id', ',', 'group_id', ')', ':', 'path', '=', '{', '}', 'data', '=', '{', '}', 'params', '=', '{', '}', '# REQUIRED - PATH - group_id\r', '"""ID"""', 'path', '[', '"group_id"', ']', '=', 'group_id', '# REQUIRED - PATH - user_id\r', '"""ID"""', 'path', ... | Get a single group membership.
Returns the group membership with the given membership id or user id. | ['Get', 'a', 'single', 'group', 'membership', '.', 'Returns', 'the', 'group', 'membership', 'with', 'the', 'given', 'membership', 'id', 'or', 'user', 'id', '.'] | train | https://github.com/PGower/PyCanvas/blob/68520005382b440a1e462f9df369f54d364e21e8/pycanvas/apis/groups.py#L478-L497 |
4,188 | chdzq/ARPAbetAndIPAConvertor | arpabetandipaconvertor/model/syllable.py | Syllable.translate_to_international_phonetic_alphabet | def translate_to_international_phonetic_alphabet(self, hide_stress_mark=False):
'''
        Convert to the International Phonetic Alphabet. When there is only one vowel, the stress mark needs to be hidden.
:param hide_stress_mark:
:return:
'''
translations = self.stress.mark_ipa() if (not hide_stress_mark) and self.have_vowel else ""
for phoneme in self._p... | python | def translate_to_international_phonetic_alphabet(self, hide_stress_mark=False):
'''
        Convert to the International Phonetic Alphabet. When there is only one vowel, the stress mark needs to be hidden.
:param hide_stress_mark:
:return:
'''
translations = self.stress.mark_ipa() if (not hide_stress_mark) and self.have_vowel else ""
for phoneme in self._p... | ['def', 'translate_to_international_phonetic_alphabet', '(', 'self', ',', 'hide_stress_mark', '=', 'False', ')', ':', 'translations', '=', 'self', '.', 'stress', '.', 'mark_ipa', '(', ')', 'if', '(', 'not', 'hide_stress_mark', ')', 'and', 'self', '.', 'have_vowel', 'else', '""', 'for', 'phoneme', 'in', 'self', '.', '_p... | 转换成国际音标。只要一个元音的时候需要隐藏重音标识
:param hide_stress_mark:
:return: | ['转换成国际音标。只要一个元音的时候需要隐藏重音标识', ':', 'param', 'hide_stress_mark', ':', ':', 'return', ':'] | train | https://github.com/chdzq/ARPAbetAndIPAConvertor/blob/e8b2fdbb5b7134c4f779f4d6dcd5dc30979a0a26/arpabetandipaconvertor/model/syllable.py#L86-L98 |
4,189 | bcbio/bcbio-nextgen | bcbio/cwl/tool.py | _run_cwltool | def _run_cwltool(args):
"""Run with cwltool -- reference implementation.
"""
main_file, json_file, project_name = _get_main_and_json(args.directory)
work_dir = utils.safe_makedir(os.path.join(os.getcwd(), "cwltool_work"))
tmp_dir = utils.safe_makedir(os.path.join(work_dir, "tmpcwl"))
log_file = ... | python | def _run_cwltool(args):
"""Run with cwltool -- reference implementation.
"""
main_file, json_file, project_name = _get_main_and_json(args.directory)
work_dir = utils.safe_makedir(os.path.join(os.getcwd(), "cwltool_work"))
tmp_dir = utils.safe_makedir(os.path.join(work_dir, "tmpcwl"))
log_file = ... | ['def', '_run_cwltool', '(', 'args', ')', ':', 'main_file', ',', 'json_file', ',', 'project_name', '=', '_get_main_and_json', '(', 'args', '.', 'directory', ')', 'work_dir', '=', 'utils', '.', 'safe_makedir', '(', 'os', '.', 'path', '.', 'join', '(', 'os', '.', 'getcwd', '(', ')', ',', '"cwltool_work"', ')', ')', 'tmp_... | Run with cwltool -- reference implementation. | ['Run', 'with', 'cwltool', '--', 'reference', 'implementation', '.'] | train | https://github.com/bcbio/bcbio-nextgen/blob/6a9348c0054ccd5baffd22f1bb7d0422f6978b20/bcbio/cwl/tool.py#L83-L97 |
4,190 | sckott/habanero | habanero/cn/styles.py | csl_styles | def csl_styles(**kwargs):
'''
Get list of styles from https://github.com/citation-style-language/styles
:param kwargs: any additional arguments will be passed on to `requests.get`
:return: list, of CSL styles
Usage::
from habanero import cn
cn.csl_styles()
'''
base = "https://api.github.co... | python | def csl_styles(**kwargs):
'''
Get list of styles from https://github.com/citation-style-language/styles
:param kwargs: any additional arguments will be passed on to `requests.get`
:return: list, of CSL styles
Usage::
from habanero import cn
cn.csl_styles()
'''
base = "https://api.github.co... | ['def', 'csl_styles', '(', '*', '*', 'kwargs', ')', ':', 'base', '=', '"https://api.github.com/repos/citation-style-language/styles"', 'tt', '=', 'requests', '.', 'get', '(', 'base', '+', "'/commits?per_page=1'", ',', '*', '*', 'kwargs', ')', 'tt', '.', 'raise_for_status', '(', ')', 'check_json', '(', 'tt', ')', 'commr... | Get list of styles from https://github.com/citation-style-language/styles
:param kwargs: any additional arguments will be passed on to `requests.get`
:return: list, of CSL styles
Usage::
from habanero import cn
cn.csl_styles() | ['Get', 'list', 'of', 'styles', 'from', 'https', ':', '//', 'github', '.', 'com', '/', 'citation', '-', 'style', '-', 'language', '/', 'styles'] | train | https://github.com/sckott/habanero/blob/a17d87070378786bbb138e1c9712ecad9aacf38e/habanero/cn/styles.py#L7-L33 |
4,191 | collectiveacuity/labPack | labpack/platforms/aws/ec2.py | ec2Client.delete_instance | def delete_instance(self, instance_id):
'''
method for removing an instance from AWS EC2
:param instance_id: string of instance id on AWS
:return: string reporting state of instance
'''
title = '%s.delete_instance' % self.__class__.__name__
# validate inputs
... | python | def delete_instance(self, instance_id):
'''
method for removing an instance from AWS EC2
:param instance_id: string of instance id on AWS
:return: string reporting state of instance
'''
title = '%s.delete_instance' % self.__class__.__name__
# validate inputs
... | ['def', 'delete_instance', '(', 'self', ',', 'instance_id', ')', ':', 'title', '=', "'%s.delete_instance'", '%', 'self', '.', '__class__', '.', '__name__', '# validate inputs', 'input_fields', '=', '{', "'instance_id'", ':', 'instance_id', '}', 'for', 'key', ',', 'value', 'in', 'input_fields', '.', 'items', '(', ')', '... | method for removing an instance from AWS EC2
:param instance_id: string of instance id on AWS
:return: string reporting state of instance | ['method', 'for', 'removing', 'an', 'instance', 'from', 'AWS', 'EC2'] | train | https://github.com/collectiveacuity/labPack/blob/52949ece35e72e3cc308f54d9ffa6bfbd96805b8/labpack/platforms/aws/ec2.py#L613-L685 |
4,192 | telminov/sw-django-utils | djutils/date_utils.py | date_to_timestamp | def date_to_timestamp(date):
"""
date to unix timestamp in milliseconds
"""
date_tuple = date.timetuple()
timestamp = calendar.timegm(date_tuple) * 1000
return timestamp | python | def date_to_timestamp(date):
"""
date to unix timestamp in milliseconds
"""
date_tuple = date.timetuple()
timestamp = calendar.timegm(date_tuple) * 1000
return timestamp | ['def', 'date_to_timestamp', '(', 'date', ')', ':', 'date_tuple', '=', 'date', '.', 'timetuple', '(', ')', 'timestamp', '=', 'calendar', '.', 'timegm', '(', 'date_tuple', ')', '*', '1000', 'return', 'timestamp'] | date to unix timestamp in milliseconds | ['date', 'to', 'unix', 'timestamp', 'in', 'milliseconds'] | train | https://github.com/telminov/sw-django-utils/blob/43b8491c87a5dd8fce145834c00198f4de14ceb9/djutils/date_utils.py#L39-L45 |
4,193 | pandas-dev/pandas | pandas/io/pytables.py | _convert_string_array | def _convert_string_array(data, encoding, errors, itemsize=None):
"""
we take a string-like that is object dtype and coerce to a fixed size
string type
Parameters
----------
data : a numpy array of object dtype
encoding : None or string-encoding
errors : handler for encoding errors
... | python | def _convert_string_array(data, encoding, errors, itemsize=None):
"""
we take a string-like that is object dtype and coerce to a fixed size
string type
Parameters
----------
data : a numpy array of object dtype
encoding : None or string-encoding
errors : handler for encoding errors
... | ['def', '_convert_string_array', '(', 'data', ',', 'encoding', ',', 'errors', ',', 'itemsize', '=', 'None', ')', ':', '# encode if needed', 'if', 'encoding', 'is', 'not', 'None', 'and', 'len', '(', 'data', ')', ':', 'data', '=', 'Series', '(', 'data', '.', 'ravel', '(', ')', ')', '.', 'str', '.', 'encode', '(', 'encodi... | we take a string-like that is object dtype and coerce to a fixed size
string type
Parameters
----------
data : a numpy array of object dtype
encoding : None or string-encoding
errors : handler for encoding errors
itemsize : integer, optional, defaults to the max length of the strings
R... | ['we', 'take', 'a', 'string', '-', 'like', 'that', 'is', 'object', 'dtype', 'and', 'coerce', 'to', 'a', 'fixed', 'size', 'string', 'type'] | train | https://github.com/pandas-dev/pandas/blob/9feb3ad92cc0397a04b665803a49299ee7aa1037/pandas/io/pytables.py#L4521-L4549 |
4,194 | bsolomon1124/pyfinance | pyfinance/returns.py | TSeries.batting_avg | def batting_avg(self, benchmark):
"""Percentage of periods when `self` outperformed `benchmark`.
Parameters
----------
benchmark : {pd.Series, TSeries, 1d np.ndarray}
The benchmark security to which `self` is compared.
Returns
-------
float
"... | python | def batting_avg(self, benchmark):
"""Percentage of periods when `self` outperformed `benchmark`.
Parameters
----------
benchmark : {pd.Series, TSeries, 1d np.ndarray}
The benchmark security to which `self` is compared.
Returns
-------
float
"... | ['def', 'batting_avg', '(', 'self', ',', 'benchmark', ')', ':', 'diff', '=', 'self', '.', 'excess_ret', '(', 'benchmark', ')', 'return', 'np', '.', 'count_nonzero', '(', 'diff', '>', '0.0', ')', '/', 'diff', '.', 'count', '(', ')'] | Percentage of periods when `self` outperformed `benchmark`.
Parameters
----------
benchmark : {pd.Series, TSeries, 1d np.ndarray}
The benchmark security to which `self` is compared.
Returns
-------
float | ['Percentage', 'of', 'periods', 'when', 'self', 'outperformed', 'benchmark', '.'] | train | https://github.com/bsolomon1124/pyfinance/blob/c95925209a809b4e648e79cbeaf7711d8e5ff1a6/pyfinance/returns.py#L174-L188 |
4,195 | grahame/sedge | sedge/engine.py | SedgeEngine.parse | def parse(self, fd):
"""very simple parser - but why would we want it to be complex?"""
def resolve_args(args):
# FIXME break this out, it's in common with the templating stuff elsewhere
root = self.sections[0]
val_dict = dict(('<' + t + '>', u) for (t, u) in root.ge... | python | def parse(self, fd):
"""very simple parser - but why would we want it to be complex?"""
def resolve_args(args):
# FIXME break this out, it's in common with the templating stuff elsewhere
root = self.sections[0]
val_dict = dict(('<' + t + '>', u) for (t, u) in root.ge... | ['def', 'parse', '(', 'self', ',', 'fd', ')', ':', 'def', 'resolve_args', '(', 'args', ')', ':', "# FIXME break this out, it's in common with the templating stuff elsewhere", 'root', '=', 'self', '.', 'sections', '[', '0', ']', 'val_dict', '=', 'dict', '(', '(', "'<'", '+', 't', '+', "'>'", ',', 'u', ')', 'for', '(', '... | very simple parser - but why would we want it to be complex? | ['very', 'simple', 'parser', '-', 'but', 'why', 'would', 'we', 'want', 'it', 'to', 'be', 'complex?'] | train | https://github.com/grahame/sedge/blob/60dc6a0c5ef3bf802fe48a2571a8524a6ea33878/sedge/engine.py#L328-L464 |
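Row 4,195's snippet shows how sedge's parser resolves arguments: it builds a dict mapping `<name>` placeholders to values and substitutes them into each argument. A self-contained sketch of just that substitution step (the surrounding parser machinery is omitted and the names are illustrative):

```python
def resolve_args(args, values):
    # Build '<name>' -> value pairs, as in the dict comprehension
    # shown in the row, then substitute into each argument.
    val_dict = {'<' + name + '>': val for name, val in values.items()}
    resolved = []
    for arg in args:
        for placeholder, val in val_dict.items():
            arg = arg.replace(placeholder, val)
        resolved.append(arg)
    return resolved

print(resolve_args(['Host <host>', 'Port <port>'],
                   {'host': 'example.com', 'port': '22'}))
# → ['Host example.com', 'Port 22']
```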
4,196 | openstack/networking-cisco | networking_cisco/plugins/cisco/cfg_agent/cfg_agent.py | CiscoCfgAgent.process_services | def process_services(self, device_ids=None, removed_devices_info=None):
"""Process services managed by this config agent.
This method is invoked by any of three scenarios.
1. Invoked by a periodic task running every `RPC_LOOP_INTERVAL`
seconds. This is the most common scenario.
... | python | def process_services(self, device_ids=None, removed_devices_info=None):
"""Process services managed by this config agent.
This method is invoked by any of three scenarios.
1. Invoked by a periodic task running every `RPC_LOOP_INTERVAL`
seconds. This is the most common scenario.
... | ['def', 'process_services', '(', 'self', ',', 'device_ids', '=', 'None', ',', 'removed_devices_info', '=', 'None', ')', ':', 'LOG', '.', 'debug', '(', '"Processing services started"', ')', '# Now we process only routing service, additional services will be', '# added in future', 'if', 'self', '.', 'routing_service_help... | Process services managed by this config agent.
This method is invoked by any of three scenarios.
1. Invoked by a periodic task running every `RPC_LOOP_INTERVAL`
seconds. This is the most common scenario.
In this mode, the method is called without any arguments.
2. Called by th... | ['Process', 'services', 'managed', 'by', 'this', 'config', 'agent', '.'] | train | https://github.com/openstack/networking-cisco/blob/aa58a30aec25b86f9aa5952b0863045975debfa9/networking_cisco/plugins/cisco/cfg_agent/cfg_agent.py#L217-L267 |
4,197 | brantai/python-rightscale | rightscale/httpclient.py | HTTPClient.request | def request(self, method, path='/', url=None, ignore_codes=[], **kwargs):
"""
Wrapper for the ._request method that verifies if we're logged into
RightScale before making a call, and sanity checks the oauth expiration
time.
:param str method: An HTTP method (e.g. 'get', 'post', ... | python | def request(self, method, path='/', url=None, ignore_codes=[], **kwargs):
"""
Wrapper for the ._request method that verifies if we're logged into
RightScale before making a call, and sanity checks the oauth expiration
time.
:param str method: An HTTP method (e.g. 'get', 'post', ... | ['def', 'request', '(', 'self', ',', 'method', ',', 'path', '=', "'/'", ',', 'url', '=', 'None', ',', 'ignore_codes', '=', '[', ']', ',', '*', '*', 'kwargs', ')', ':', "# On every call, check if we're both logged in, and if the token is", "# expiring. If it is, we'll re-login with the information passed into", '# us at... | Wrapper for the ._request method that verifies if we're logged into
RightScale before making a call, and sanity checks the oauth expiration
time.
:param str method: An HTTP method (e.g. 'get', 'post', 'PUT', etc...)
:param str path: A path component of the target URL. This will be
... | ['Wrapper', 'for', 'the', '.', '_request', 'method', 'that', 'verifies', 'if', 'we', 're', 'logged', 'into', 'RightScale', 'before', 'making', 'a', 'call', 'and', 'sanity', 'checks', 'the', 'oauth', 'expiration', 'time', '.'] | train | https://github.com/brantai/python-rightscale/blob/5fbf4089922917247be712d58645a7b1504f0944/rightscale/httpclient.py#L94-L127 |
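Row 4,197's `request` wrapper checks the login state and the OAuth token's expiry before delegating to `._request`. A generic sketch of that refresh-before-request pattern (`TokenClient` and its internals are our own, not python-rightscale's API):

```python
import time

class TokenClient:
    """Re-authenticate before a request when the token nears expiry."""

    def __init__(self, login, margin=60):
        self._login = login      # callable returning (token, expires_at)
        self._margin = margin    # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def _ensure_login(self):
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, self._expires_at = self._login()

    def request(self, method, path='/'):
        self._ensure_login()
        # A real client would issue the HTTP call here; we just echo it.
        return (method.upper(), path, self._token)

calls = []
def fake_login():
    calls.append(1)
    return 'tok%d' % len(calls), time.time() + 3600

client = TokenClient(fake_login)
print(client.request('get', '/servers'))  # → ('GET', '/servers', 'tok1')
print(client.request('get', '/clouds'))   # token still fresh: no re-login
```

Refreshing a little before the actual expiry (`margin`) avoids racing the server clock, which is the sanity check the original docstring alludes to.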
4,198 | cameronbwhite/Flask-CAS | flask_cas/routing.py | login | def login():
"""
This route has two purposes. First, it is used by the user
to login. Second, it is used by the CAS to respond with the
`ticket` after the user logs in successfully.
When the user accesses this url, they are redirected to the CAS
to login. If the login was successful, the CAS wi... | python | def login():
"""
This route has two purposes. First, it is used by the user
to login. Second, it is used by the CAS to respond with the
`ticket` after the user logs in successfully.
When the user accesses this url, they are redirected to the CAS
to login. If the login was successful, the CAS wi... | ['def', 'login', '(', ')', ':', 'cas_token_session_key', '=', 'current_app', '.', 'config', '[', "'CAS_TOKEN_SESSION_KEY'", ']', 'redirect_url', '=', 'create_cas_login_url', '(', 'current_app', '.', 'config', '[', "'CAS_SERVER'", ']', ',', 'current_app', '.', 'config', '[', "'CAS_LOGIN_ROUTE'", ']', ',', 'flask', '.', ... | This route has two purposes. First, it is used by the user
to login. Second, it is used by the CAS to respond with the
`ticket` after the user logs in successfully.
When the user accesses this url, they are redirected to the CAS
to login. If the login was successful, the CAS will respond to this
ro... | ['This', 'route', 'has', 'two', 'purposes', '.', 'First', 'it', 'is', 'used', 'by', 'the', 'user', 'to', 'login', '.', 'Second', 'it', 'is', 'used', 'by', 'the', 'CAS', 'to', 'respond', 'with', 'the', 'ticket', 'after', 'the', 'user', 'logs', 'in', 'successfully', '.'] | train | https://github.com/cameronbwhite/Flask-CAS/blob/f85173938654cb9b9316a5c869000b74b008422e/flask_cas/routing.py#L18-L58 |
4,199 | markovmodel/PyEMMA | pyemma/datasets/api.py | get_multi_temperature_data | def get_multi_temperature_data(kt0=1.0, kt1=5.0, length0=10000, length1=10000, n0=10, n1=10):
"""
Continuous MCMC process in an asymmetric double well potential at multiple temperatures.
Parameters
----------
kt0: double, optional, default=1.0
Temperature in kT for the first thermodynamic s... | python | def get_multi_temperature_data(kt0=1.0, kt1=5.0, length0=10000, length1=10000, n0=10, n1=10):
"""
Continuous MCMC process in an asymmetric double well potential at multiple temperatures.
Parameters
----------
kt0: double, optional, default=1.0
Temperature in kT for the first thermodynamic s... | ['def', 'get_multi_temperature_data', '(', 'kt0', '=', '1.0', ',', 'kt1', '=', '5.0', ',', 'length0', '=', '10000', ',', 'length1', '=', '10000', ',', 'n0', '=', '10', ',', 'n1', '=', '10', ')', ':', 'dws', '=', '_DWS', '(', ')', 'mt_data', '=', 'dws', '.', 'mt_sample', '(', 'kt0', '=', 'kt0', ',', 'kt1', '=', 'kt1', '... | Continuous MCMC process in an asymmetric double well potential at multiple temperatures.
Parameters
----------
kt0: double, optional, default=1.0
Temperature in kT for the first thermodynamic state.
kt1: double, optional, default=5.0
Temperature in kT for the second thermodynamic state.... | ['Continuous', 'MCMC', 'process', 'in', 'an', 'asymmetric', 'double', 'well', 'potential', 'at', 'multiple', 'temperatures', '.'] | train | https://github.com/markovmodel/PyEMMA/blob/5c3124398217de05ba5ce9c8fb01519222481ab8/pyemma/datasets/api.py#L89-L119 |
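Row 4,199 describes continuous MCMC in an asymmetric double-well potential at two temperatures (`kt0`, `kt1`). The PyEMMA internals (`_DWS().mt_sample(...)`) aren't shown, so here is a hand-rolled Metropolis walk illustrating the kind of data such a helper produces — the potential and parameters are our own choices:

```python
import numpy as np

def sample_double_well(kt=1.0, length=2000, step=0.1, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    # An asymmetric double well: two minima of unequal depth.
    energy = lambda x: x**4 - 4.0 * x**2 + 0.5 * x
    traj = np.empty(length)
    x = x0
    for i in range(length):
        prop = x + rng.normal(0.0, step)
        # Metropolis acceptance at temperature kt.
        if rng.random() < np.exp(-(energy(prop) - energy(x)) / kt):
            x = prop
        traj[i] = x
    return traj

cold = sample_double_well(kt=1.0)
hot = sample_double_well(kt=5.0)  # typically crosses the barrier more readily
```

Multi-temperature datasets like this are the input for estimators (e.g. TRAM/dTRAM) that combine information across thermodynamic states.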