Dataset schema (field names, types, and dataset-viewer length/value statistics):

  body                    string, length 26 to 98.2k
  body_hash               int64, -9,222,864,604,528,158,000 to 9,221,803,474B
  docstring               string, length 1 to 16.8k
  path                    string, length 5 to 230
  name                    string, length 1 to 96
  repository_name         string, length 7 to 89
  lang                    string, 1 class
  body_without_docstring  string, length 20 to 98.2k

Each record below repeats these eight fields, one per line, in the order
listed: body, body_hash, docstring, path, name, repository_name, lang,
body_without_docstring. Truncated values ("...") are as emitted by the
dataset viewer.
def pack_range(key, packing, grad_vars, rng): 'Form the concatenation of a specified range of gradient tensors.\n\n Args:\n key: Value under which to store meta-data in packing that will be used\n later to restore the grad_var list structure.\n packing: Dict holding data describing packed ranges of smal...
-8,537,589,089,792,121,000
Form the concatenation of a specified range of gradient tensors. Args: key: Value under which to store meta-data in packing that will be used later to restore the grad_var list structure. packing: Dict holding data describing packed ranges of small tensors. grad_vars: List of (grad, var) pairs for one replic...
tensorflow/python/distribute/cross_device_utils.py
pack_range
DeuroIO/Deuro-tensorflow
python
def pack_range(key, packing, grad_vars, rng): 'Form the concatenation of a specified range of gradient tensors.\n\n Args:\n key: Value under which to store meta-data in packing that will be used\n later to restore the grad_var list structure.\n packing: Dict holding data describing packed ranges of smal...
def unpack_grad_tuple(gv, gpt): 'Unpack a previously packed collection of gradient tensors.\n\n Args:\n gv: A (grad, var) pair to be unpacked.\n gpt: A GradPackTuple describing the packing operation that produced gv.\n\n Returns:\n A list of (grad, var) pairs corresponding to the values that were\n o...
588,470,004,200,064,300
Unpack a previously packed collection of gradient tensors. Args: gv: A (grad, var) pair to be unpacked. gpt: A GradPackTuple describing the packing operation that produced gv. Returns: A list of (grad, var) pairs corresponding to the values that were originally packed into gv, maybe following subsequent oper...
tensorflow/python/distribute/cross_device_utils.py
unpack_grad_tuple
DeuroIO/Deuro-tensorflow
python
def unpack_grad_tuple(gv, gpt): 'Unpack a previously packed collection of gradient tensors.\n\n Args:\n gv: A (grad, var) pair to be unpacked.\n gpt: A GradPackTuple describing the packing operation that produced gv.\n\n Returns:\n A list of (grad, var) pairs corresponding to the values that were\n o...
def pack_small_tensors(replica_grads, max_bytes=0, max_group=0): "Concatenate small gradient tensors together for reduction.\n\n Args:\n replica_grads: List of lists of (gradient, variable) tuples.\n max_bytes: Int giving max number of bytes in a tensor that\n may be considered small.\n max_group: In...
-3,869,403,477,617,599,000
Concatenate small gradient tensors together for reduction. Args: replica_grads: List of lists of (gradient, variable) tuples. max_bytes: Int giving max number of bytes in a tensor that may be considered small. max_group: Int giving max number of small tensors that may be concatenated into one new tensor....
tensorflow/python/distribute/cross_device_utils.py
pack_small_tensors
DeuroIO/Deuro-tensorflow
python
def pack_small_tensors(replica_grads, max_bytes=0, max_group=0): "Concatenate small gradient tensors together for reduction.\n\n Args:\n replica_grads: List of lists of (gradient, variable) tuples.\n max_bytes: Int giving max number of bytes in a tensor that\n may be considered small.\n max_group: In...
def unpack_small_tensors(replica_grads, packing): 'Undo the structure alterations to replica_grads done by pack_small_tensors.\n\n Args:\n replica_grads: List of List of (grad, var) tuples.\n packing: A dict generated by pack_small_tensors describing the changes\n it made to replica_grads.\n\n Returns:...
4,266,833,636,242,527,700
Undo the structure alterations to replica_grads done by pack_small_tensors. Args: replica_grads: List of List of (grad, var) tuples. packing: A dict generated by pack_small_tensors describing the changes it made to replica_grads. Returns: new_replica_grads: identical to replica_grads except that concatenati...
tensorflow/python/distribute/cross_device_utils.py
unpack_small_tensors
DeuroIO/Deuro-tensorflow
python
def unpack_small_tensors(replica_grads, packing): 'Undo the structure alterations to replica_grads done by pack_small_tensors.\n\n Args:\n replica_grads: List of List of (grad, var) tuples.\n packing: A dict generated by pack_small_tensors describing the changes\n it made to replica_grads.\n\n Returns:...
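The pack/unpack records above describe concatenating small gradient tensors into one buffer and later restoring the original structure. A minimal illustrative sketch of that idea using NumPy (not the TensorFlow implementation; `pack`/`unpack` and the shape-list "packing" metadata are simplifications of the real GradPackTuple bookkeeping):

```python
# Illustrative sketch only: pack small arrays into one flat buffer and
# unpack them by replaying the recorded shapes.
import numpy as np

def pack(arrays):
    """Concatenate flattened arrays; return the buffer plus shape metadata."""
    packing = [a.shape for a in arrays]          # metadata needed to restore
    buf = np.concatenate([a.ravel() for a in arrays])
    return buf, packing

def unpack(buf, packing):
    """Split the flat buffer back into arrays of the recorded shapes."""
    out, offset = [], 0
    for shape in packing:
        n = int(np.prod(shape))
        out.append(buf[offset:offset + n].reshape(shape))
        offset += n
    return out
```

The real code additionally groups tensors by byte size (`max_bytes`) and group count (`max_group`); this sketch keeps only the concatenate/restore round trip.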
def aggregate_tensors_or_indexed_slices(values, accumulation_fn=math_ops.add_n): 'Aggregate tensors using `accumulation_fn` and IndexedSlices via concat.' if any((isinstance(v, ops.IndexedSlices) for v in values)): return gradients_impl._AggregateIndexedSlicesGradients(values) else: return a...
-8,755,428,239,939,372,000
Aggregate tensors using `accumulation_fn` and IndexedSlices via concat.
tensorflow/python/distribute/cross_device_utils.py
aggregate_tensors_or_indexed_slices
DeuroIO/Deuro-tensorflow
python
def aggregate_tensors_or_indexed_slices(values, accumulation_fn=math_ops.add_n): if any((isinstance(v, ops.IndexedSlices) for v in values)): return gradients_impl._AggregateIndexedSlicesGradients(values) else: return accumulation_fn(values)
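The record above dispatches on value type: dense tensors go through an accumulation function, `IndexedSlices` take a sparse path. A hedged sketch of that dispatch pattern, with plain dicts standing in for sparse values (an assumption; the real code calls `_AggregateIndexedSlicesGradients`):

```python
# Hypothetical sketch of type-based aggregation dispatch: dicts model
# sparse (index -> gradient) values, anything else is treated as dense.
def aggregate(values, accumulation_fn=sum):
    if any(isinstance(v, dict) for v in values):
        merged = {}
        for v in values:
            for k, g in v.items():
                merged[k] = merged.get(k, 0) + g   # concat/merge sparse entries
        return merged
    return accumulation_fn(values)                  # dense path
```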
def contains_indexed_slices(value): 'Check whether the value is `IndexedSlices` or contains `IndexedSlices`.' if isinstance(value, ops.IndexedSlices): return True elif (isinstance(value, (list, tuple)) and value): return any((contains_indexed_slices(v) for v in value)) elif isinstance(va...
4,811,654,315,259,058,000
Check whether the value is `IndexedSlices` or contains `IndexedSlices`.
tensorflow/python/distribute/cross_device_utils.py
contains_indexed_slices
DeuroIO/Deuro-tensorflow
python
def contains_indexed_slices(value): if isinstance(value, ops.IndexedSlices): return True elif (isinstance(value, (list, tuple)) and value): return any((contains_indexed_slices(v) for v in value)) elif isinstance(value, value_lib.DistributedValues): return contains_indexed_slices...
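The containment check above recurses through lists and tuples looking for `IndexedSlices`. The same recursive shape, sketched with a placeholder `Sparse` marker class instead of `ops.IndexedSlices` (the `DistributedValues` branch is omitted):

```python
# Sketch of a recursive "does this structure contain a sparse value" check.
class Sparse:
    """Placeholder standing in for ops.IndexedSlices."""

def contains_sparse(value):
    if isinstance(value, Sparse):
        return True
    if isinstance(value, (list, tuple)) and value:
        return any(contains_sparse(v) for v in value)
    return False
```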
def __init__(self, group_key_start=1, instance_key_start=100, instance_key_with_id_start=10000): 'Initializes the object.\n\n Args:\n group_key_start: the starting integer of group key.\n instance_key_start: the starting integer of instance key.\n instance_key_with_id_start: the starting integer o...
-8,157,370,672,428,255,000
Initializes the object. Args: group_key_start: the starting integer of group key. instance_key_start: the starting integer of instance key. instance_key_with_id_start: the starting integer of instance key that is recorded with an id.
tensorflow/python/distribute/cross_device_utils.py
__init__
DeuroIO/Deuro-tensorflow
python
def __init__(self, group_key_start=1, instance_key_start=100, instance_key_with_id_start=10000): 'Initializes the object.\n\n Args:\n group_key_start: the starting integer of group key.\n instance_key_start: the starting integer of instance key.\n instance_key_with_id_start: the starting integer o...
def get_group_key(self, devices): 'Returns a group key for the set of devices.\n\n Args:\n devices: list of strings naming devices in a collective group.\n\n Returns:\n int key uniquely identifying the set of device names.\n ' parsed = [pydev.DeviceSpec.from_string(d) for d in devices] na...
-1,122,532,047,166,860,800
Returns a group key for the set of devices. Args: devices: list of strings naming devices in a collective group. Returns: int key uniquely identifying the set of device names.
tensorflow/python/distribute/cross_device_utils.py
get_group_key
DeuroIO/Deuro-tensorflow
python
def get_group_key(self, devices): 'Returns a group key for the set of devices.\n\n Args:\n devices: list of strings naming devices in a collective group.\n\n Returns:\n int key uniquely identifying the set of device names.\n ' parsed = [pydev.DeviceSpec.from_string(d) for d in devices] na...
def get_instance_key(self, key_id=None): 'Returns a new instance key for use in defining a collective op.\n\n Args:\n key_id: optional string. If set, key will be recorded and the same key\n will be returned when the same key_id is provided. If not, an increasing\n instance key will be returne...
6,131,911,157,188,649,000
Returns a new instance key for use in defining a collective op. Args: key_id: optional string. If set, key will be recorded and the same key will be returned when the same key_id is provided. If not, an increasing instance key will be returned.
tensorflow/python/distribute/cross_device_utils.py
get_instance_key
DeuroIO/Deuro-tensorflow
python
def get_instance_key(self, key_id=None): 'Returns a new instance key for use in defining a collective op.\n\n Args:\n key_id: optional string. If set, key will be recorded and the same key\n will be returned when the same key_id is provided. If not, an increasing\n instance key will be returne...
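The `__init__`/`get_instance_key` records above describe key bookkeeping: keys increase monotonically, except that a `key_id` lookup is memoized so the same id always yields the same key. A hedged sketch of that allocator (class name and field names are illustrative, not TensorFlow's):

```python
# Sketch of the instance-key bookkeeping described in the docstrings above.
class KeyAllocator:
    def __init__(self, instance_key_start=100, instance_key_with_id_start=10000):
        self._next = instance_key_start
        self._with_id_next = instance_key_with_id_start
        self._by_id = {}                      # key_id -> recorded key

    def get_instance_key(self, key_id=None):
        if key_id is None:                    # anonymous: always a fresh key
            key = self._next
            self._next += 1
            return key
        if key_id not in self._by_id:         # named: memoize first allocation
            self._by_id[key_id] = self._with_id_next
            self._with_id_next += 1
        return self._by_id[key_id]
```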
def test_user_commands(): 'test all user commands.' assert (__main__.main(['user', 'create', 'ec2mc_test_user', 'setup_users', '--default']) is not False) sleep(5) assert (__main__.main(['user', 'list']) is not False) assert (__main__.main(['user', 'set_group', 'EC2MC_TEST_USER', 'basic_users']) is ...
1,676,041,740,222,803,000
test all user commands.
tests/test_user_commands.py
test_user_commands
TakingItCasual/easymc
python
def test_user_commands(): assert (__main__.main(['user', 'create', 'ec2mc_test_user', 'setup_users', '--default']) is not False) sleep(5) assert (__main__.main(['user', 'list']) is not False) assert (__main__.main(['user', 'set_group', 'EC2MC_TEST_USER', 'basic_users']) is not False) assert (__...
def coalesce(edge_index, edge_attr=None, num_nodes=None, reduce='add', is_sorted=False, sort_by_row=True): 'Row-wise sorts :obj:`edge_index` and removes its duplicated entries.\n Duplicate entries in :obj:`edge_attr` are merged by scattering them\n together according to the given :obj:`reduce` option.\n\n ...
-587,692,267,582,630,000
Row-wise sorts :obj:`edge_index` and removes its duplicated entries. Duplicate entries in :obj:`edge_attr` are merged by scattering them together according to the given :obj:`reduce` option. Args: edge_index (LongTensor): The edge indices. edge_attr (Tensor or List[Tensor], optional): Edge weights or multi- ...
gammagl/utils/coalesce.py
coalesce
BUPT-GAMMA/GammaGL
python
def coalesce(edge_index, edge_attr=None, num_nodes=None, reduce='add', is_sorted=False, sort_by_row=True): 'Row-wise sorts :obj:`edge_index` and removes its duplicated entries.\n Duplicate entries in :obj:`edge_attr` are merged by scattering them\n together according to the given :obj:`reduce` option.\n\n ...
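The `coalesce` record above sorts edge indices row-wise and merges duplicate edges by reducing their attributes. An illustrative pure-Python version of that behaviour with `reduce='add'` (GammaGL's actual implementation is tensor-based and supports more reduce options):

```python
# Illustrative edge coalescing: sort (src, dst) pairs row-wise and sum
# the attributes of duplicate edges.
def coalesce_edges(edge_index, edge_attr):
    merged = {}
    for (src, dst), w in zip(edge_index, edge_attr):
        merged[(src, dst)] = merged.get((src, dst), 0) + w
    pairs = sorted(merged.items())            # row-major (src, then dst) order
    return [e for e, _ in pairs], [w for _, w in pairs]
```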
def __init__(self, force=False): '\n Creates an exporter instance. If force flag is set, all data already\n present in a destination folder are overwritten on export.\n\n @param force: If True, force flag is set. It is not otherwise.\n @type force: bool\n ' self._force = force
-4,257,012,996,661,849,600
Creates an exporter instance. If force flag is set, all data already present in a destination folder are overwritten on export. @param force: If True, force flag is set. It is not otherwise. @type force: bool
comodit_client/api/exporter.py
__init__
geoco84/comodit-client
python
def __init__(self, force=False): '\n Creates an exporter instance. If force flag is set, all data already\n present in a destination folder are overwritten on export.\n\n @param force: If True, force flag is set. It is not otherwise.\n @type force: bool\n ' self._force = force
def export_application(self, app, path, backup=False): '\n Exports an application to a local folder.\n\n @param app: The application to export.\n @type app: L{Application}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a backup.\n ...
-2,885,658,103,454,825,000
Exports an application to a local folder. @param app: The application to export. @type app: L{Application} @param path: Path to local directory. @type path: string @param backup: indicate is a backup. @type path: bool
comodit_client/api/exporter.py
export_application
geoco84/comodit-client
python
def export_application(self, app, path, backup=False): '\n Exports an application to a local folder.\n\n @param app: The application to export.\n @type app: L{Application}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a backup.\n ...
def export_distribution(self, dist, path, backup=False): '\n Exports a distribution to a local folder.\n\n @param dist: The distribution to export.\n @type dist: L{Distribution}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a back...
-7,077,236,834,681,137,000
Exports a distribution to a local folder. @param dist: The distribution to export. @type dist: L{Distribution} @param path: Path to local directory. @type path: string @param backup: indicate is a backup. @type path: bool
comodit_client/api/exporter.py
export_distribution
geoco84/comodit-client
python
def export_distribution(self, dist, path, backup=False): '\n Exports a distribution to a local folder.\n\n @param dist: The distribution to export.\n @type dist: L{Distribution}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a back...
def export_platform(self, plat, path, backup=False): '\n Exports a platform to a local folder.\n\n @param plat: The platform to export.\n @type plat: L{Platform}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a backup.\n @ty...
5,090,413,098,656,828,000
Exports a platform to a local folder. @param plat: The platform to export. @type plat: L{Platform} @param path: Path to local directory. @type path: string @param backup: indicate is a backup. @type path: bool
comodit_client/api/exporter.py
export_platform
geoco84/comodit-client
python
def export_platform(self, plat, path, backup=False): '\n Exports a platform to a local folder.\n\n @param plat: The platform to export.\n @type plat: L{Platform}\n @param path: Path to local directory.\n @type path: string\n @param backup: indicate is a backup.\n @ty...
def export_environment(self, env, path): '\n Exports an environment to a local folder. Hosts of the environment\n are exported also.\n\n @param env: The environment to export.\n @type env: L{Environment}\n @param path: Path to local directory.\n @type path: string\n ...
-7,664,905,090,788,252,000
Exports an environment to a local folder. Hosts of the environment are exported also. @param env: The environment to export. @type env: L{Environment} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_environment
geoco84/comodit-client
python
def export_environment(self, env, path): '\n Exports an environment to a local folder. Hosts of the environment\n are exported also.\n\n @param env: The environment to export.\n @type env: L{Environment}\n @param path: Path to local directory.\n @type path: string\n ...
def export_job(self, job, path): '\n Exports a job to a local folder.\n\n @param job: The job to export.\n @type job: L{Job}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(job, path)
-6,253,046,233,192,386,000
Exports a job to a local folder. @param job: The job to export. @type job: L{Job} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_job
geoco84/comodit-client
python
def export_job(self, job, path): '\n Exports a job to a local folder.\n\n @param job: The job to export.\n @type job: L{Job}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(job, path)
def export_orchestration(self, orchestration, path): '\n Exports a orchestration to a local folder.\n\n @param job: The orchestration to export.\n @type orchestration: L{Orchestration}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(...
-5,400,655,714,325,233,000
Exports a orchestration to a local folder. @param job: The orchestration to export. @type orchestration: L{Orchestration} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_orchestration
geoco84/comodit-client
python
def export_orchestration(self, orchestration, path): '\n Exports a orchestration to a local folder.\n\n @param job: The orchestration to export.\n @type orchestration: L{Orchestration}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(...
def export_notification(self, notification, path): '\n Exports a jobnotificationto a local folder.\n\n @param notification: The notification to export.\n @type notification: L{Notification}\n @param path: Path to local directory.\n @type path: string\n ' self._export_en...
-3,760,152,571,254,754,000
Exports a jobnotificationto a local folder. @param notification: The notification to export. @type notification: L{Notification} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_notification
geoco84/comodit-client
python
def export_notification(self, notification, path): '\n Exports a jobnotificationto a local folder.\n\n @param notification: The notification to export.\n @type notification: L{Notification}\n @param path: Path to local directory.\n @type path: string\n ' self._export_en...
def export_host(self, host, path): '\n Exports a host to a local folder. Contexts and instance are exported\n also.\n\n @param host: The host to export.\n @type host: L{Host}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(hos...
-7,571,313,200,186,343,000
Exports a host to a local folder. Contexts and instance are exported also. @param host: The host to export. @type host: L{Host} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_host
geoco84/comodit-client
python
def export_host(self, host, path): '\n Exports a host to a local folder. Contexts and instance are exported\n also.\n\n @param host: The host to export.\n @type host: L{Host}\n @param path: Path to local directory.\n @type path: string\n ' self._export_entity(hos...
def export_organization(self, org, path): '\n Exports an organization to a local folder. Environments, applications,\n distributions and platforms are exported also.\n\n @param org: The organization to export.\n @type org: L{Organization}\n @param path: Path to local directory.\n ...
3,655,422,668,646,644,000
Exports an organization to a local folder. Environments, applications, distributions and platforms are exported also. @param org: The organization to export. @type org: L{Organization} @param path: Path to local directory. @type path: string
comodit_client/api/exporter.py
export_organization
geoco84/comodit-client
python
def export_organization(self, org, path): '\n Exports an organization to a local folder. Environments, applications,\n distributions and platforms are exported also.\n\n @param org: The organization to export.\n @type org: L{Organization}\n @param path: Path to local directory.\n ...
def test_multiple_metavar_help(self, parser): '\n Help text for options with a metavar tuple should display help\n in the form "--preferences=value1 value2 value3" (#2004).\n ' group = parser.getgroup('general') group.addoption('--preferences', metavar=('value1', 'value2', 'value3'), na...
2,200,379,206,769,270,800
Help text for options with a metavar tuple should display help in the form "--preferences=value1 value2 value3" (#2004).
tools/third_party/pytest/testing/test_parseopt.py
test_multiple_metavar_help
2-GARIK20/wpt
python
def test_multiple_metavar_help(self, parser): '\n Help text for options with a metavar tuple should display help\n in the form "--preferences=value1 value2 value3" (#2004).\n ' group = parser.getgroup('general') group.addoption('--preferences', metavar=('value1', 'value2', 'value3'), na...
@classmethod def from_string(cls, permission): 'Create a FileSystemSasPermissions from a string.\n\n To specify read, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw".\n\n ...
-2,302,456,270,963,383,800
Create a FileSystemSasPermissions from a string. To specify read, write, or delete permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string "rw". :param str permission: The string which dictates the read, add, create, write, o...
sdk/storage/azure-storage-file-datalake/azure/storage/filedatalake/_models.py
from_string
Co0olboi/azure-sdk-for-python
python
@classmethod def from_string(cls, permission): 'Create a FileSystemSasPermissions from a string.\n\n To specify read, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw".\n\n ...
@classmethod def from_string(cls, permission): 'Create a DirectorySasPermissions from a string.\n\n To specify read, create, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw"...
656,015,843,620,330,800
Create a DirectorySasPermissions from a string. To specify read, create, write, or delete permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string "rw". :param str permission: The string which dictates the read, add, create, w...
sdk/storage/azure-storage-file-datalake/azure/storage/filedatalake/_models.py
from_string
Co0olboi/azure-sdk-for-python
python
@classmethod def from_string(cls, permission): 'Create a DirectorySasPermissions from a string.\n\n To specify read, create, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw"...
@classmethod def from_string(cls, permission): 'Create a FileSasPermissions from a string.\n\n To specify read, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw".\n\n ...
6,598,723,634,432,000,000
Create a FileSasPermissions from a string. To specify read, write, or delete permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string "rw". :param str permission: The string which dictates the read, add, create, write, or dele...
sdk/storage/azure-storage-file-datalake/azure/storage/filedatalake/_models.py
from_string
Co0olboi/azure-sdk-for-python
python
@classmethod def from_string(cls, permission): 'Create a FileSasPermissions from a string.\n\n To specify read, write, or delete permissions you need only to\n include the first letter of the word in the string. E.g. For read and\n write permissions, you would provide a string "rw".\n\n ...
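The three `from_string` records above share one pattern: each permission is keyed by the first letter of its name, so "rw" enables read and write. A hedged generic sketch of that parsing (function name and the `known` tuple are illustrative, not the Azure SDK API):

```python
# Sketch of first-letter permission parsing as described in the
# FileSystem/Directory/File SasPermissions docstrings above.
def parse_permissions(permission, known=('read', 'add', 'create', 'write', 'delete')):
    flags = {name: name[0] in permission for name in known}
    unknown = set(permission) - {name[0] for name in known}
    if unknown:
        raise ValueError(f'unknown permission letters: {sorted(unknown)}')
    return flags
```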
def _ensure_tf_install(): 'Attempt to import tensorflow, and ensure its version is sufficient.\n\n Raises:\n ImportError: if either tensorflow is not importable or its version is\n inadequate.\n ' try: import tensorflow as tf except ImportError: print('\n\nFailed to import TensorFlow...
-1,946,075,856,956,076,000
Attempt to import tensorflow, and ensure its version is sufficient. Raises: ImportError: if either tensorflow is not importable or its version is inadequate.
tensorflow_model_optimization/__init__.py
_ensure_tf_install
13957166977/model-optimization
python
def _ensure_tf_install(): 'Attempt to import tensorflow, and ensure its version is sufficient.\n\n Raises:\n ImportError: if either tensorflow is not importable or its version is\n inadequate.\n ' try: import tensorflow as tf except ImportError: print('\n\nFailed to import TensorFlow...
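The `_ensure_tf_install` record above imports TensorFlow and raises `ImportError` if the version is inadequate. The version-gate half of that pattern, sketched generically (the `'2.0.0'` floor is an assumption, not the value from the truncated body):

```python
# Sketch of a minimum-version gate: compare dotted version strings
# numerically and raise ImportError when the floor is not met.
def ensure_min_version(version, minimum='2.0.0'):
    have = tuple(int(p) for p in version.split('.')[:3])
    need = tuple(int(p) for p in minimum.split('.')[:3])
    if have < need:
        raise ImportError(f'version {version} is below required {minimum}')
    return True
```

Comparing integer tuples avoids the classic string-comparison bug where `'10.0' < '9.0'`.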
def array(data: Union[(Sequence[object], AnyArrayLike)], dtype: Optional[Dtype]=None, copy: bool=True) -> ExtensionArray: '\n Create an array.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n data : Sequence of objects\n The scalars inside `data` should be instances of the\n s...
-2,947,146,966,493,969,400
Create an array. .. versionadded:: 0.24.0 Parameters ---------- data : Sequence of objects The scalars inside `data` should be instances of the scalar type for `dtype`. It's expected that `data` represents a 1-dimensional array of data. When `data` is an Index or Series, the underlying array will...
pandas/core/construction.py
array
BhavarthShah/pandas
python
def array(data: Union[(Sequence[object], AnyArrayLike)], dtype: Optional[Dtype]=None, copy: bool=True) -> ExtensionArray: '\n Create an array.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n data : Sequence of objects\n The scalars inside `data` should be instances of the\n s...
def extract_array(obj: AnyArrayLike, extract_numpy: bool=False) -> ArrayLike: "\n Extract the ndarray or ExtensionArray from a Series or Index.\n\n For all other types, `obj` is just returned as is.\n\n Parameters\n ----------\n obj : object\n For Series / Index, the underlying ExtensionArray ...
7,053,903,833,595,036,000
Extract the ndarray or ExtensionArray from a Series or Index. For all other types, `obj` is just returned as is. Parameters ---------- obj : object For Series / Index, the underlying ExtensionArray is unboxed. For Numpy-backed ExtensionArrays, the ndarray is extracted. extract_numpy : bool, default False ...
pandas/core/construction.py
extract_array
BhavarthShah/pandas
python
def extract_array(obj: AnyArrayLike, extract_numpy: bool=False) -> ArrayLike: "\n Extract the ndarray or ExtensionArray from a Series or Index.\n\n For all other types, `obj` is just returned as is.\n\n Parameters\n ----------\n obj : object\n For Series / Index, the underlying ExtensionArray ...
def sanitize_array(data, index: Optional[Index], dtype: Optional[DtypeObj]=None, copy: bool=False, raise_cast_failure: bool=False) -> ArrayLike: '\n Sanitize input data to an ndarray or ExtensionArray, copy if specified,\n coerce to the dtype if specified.\n ' if isinstance(data, ma.MaskedArray): ...
-8,508,708,360,423,698,000
Sanitize input data to an ndarray or ExtensionArray, copy if specified, coerce to the dtype if specified.
pandas/core/construction.py
sanitize_array
BhavarthShah/pandas
python
def sanitize_array(data, index: Optional[Index], dtype: Optional[DtypeObj]=None, copy: bool=False, raise_cast_failure: bool=False) -> ArrayLike: '\n Sanitize input data to an ndarray or ExtensionArray, copy if specified,\n coerce to the dtype if specified.\n ' if isinstance(data, ma.MaskedArray): ...
def _try_cast(arr, dtype: Optional[DtypeObj], copy: bool, raise_cast_failure: bool): "\n Convert input to numpy ndarray and optionally cast to a given dtype.\n\n Parameters\n ----------\n arr : ndarray, scalar, list, tuple, iterator (catchall)\n Excludes: ExtensionArray, Series, Index.\n dtype...
2,792,998,909,573,878,300
Convert input to numpy ndarray and optionally cast to a given dtype. Parameters ---------- arr : ndarray, scalar, list, tuple, iterator (catchall) Excludes: ExtensionArray, Series, Index. dtype : np.dtype, ExtensionDtype or None copy : bool If False, don't copy the data if not needed. raise_cast_failure : bool...
pandas/core/construction.py
_try_cast
BhavarthShah/pandas
python
def _try_cast(arr, dtype: Optional[DtypeObj], copy: bool, raise_cast_failure: bool): "\n Convert input to numpy ndarray and optionally cast to a given dtype.\n\n Parameters\n ----------\n arr : ndarray, scalar, list, tuple, iterator (catchall)\n Excludes: ExtensionArray, Series, Index.\n dtype...
def is_empty_data(data: Any) -> bool: '\n Utility to check if a Series is instantiated with empty data,\n which does not contain dtype information.\n\n Parameters\n ----------\n data : array-like, Iterable, dict, or scalar value\n Contains data stored in Series.\n\n Returns\n -------\n ...
6,240,236,482,168,920,000
Utility to check if a Series is instantiated with empty data, which does not contain dtype information. Parameters ---------- data : array-like, Iterable, dict, or scalar value Contains data stored in Series. Returns ------- bool
pandas/core/construction.py
is_empty_data
BhavarthShah/pandas
python
def is_empty_data(data: Any) -> bool: '\n Utility to check if a Series is instantiated with empty data,\n which does not contain dtype information.\n\n Parameters\n ----------\n data : array-like, Iterable, dict, or scalar value\n Contains data stored in Series.\n\n Returns\n -------\n ...
def create_series_with_explicit_dtype(data: Any=None, index: Optional[Union[(ArrayLike, Index)]]=None, dtype: Optional[Dtype]=None, name: Optional[str]=None, copy: bool=False, fastpath: bool=False, dtype_if_empty: Dtype=object) -> Series: '\n Helper to pass an explicit dtype when instantiating an empty Series.\n...
6,999,774,149,489,639,000
Helper to pass an explicit dtype when instantiating an empty Series. This silences a DeprecationWarning described in GitHub-17261. Parameters ---------- data : Mirrored from Series.__init__ index : Mirrored from Series.__init__ dtype : Mirrored from Series.__init__ name : Mirrored from Series.__init__ copy : Mirrored...
pandas/core/construction.py
create_series_with_explicit_dtype
BhavarthShah/pandas
python
def create_series_with_explicit_dtype(data: Any=None, index: Optional[Union[(ArrayLike, Index)]]=None, dtype: Optional[Dtype]=None, name: Optional[str]=None, copy: bool=False, fastpath: bool=False, dtype_if_empty: Dtype=object) -> Series: '\n Helper to pass an explicit dtype when instantiating an empty Series.\n...
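The `create_series_with_explicit_dtype` record above describes substituting an explicit `dtype_if_empty` when an empty Series would otherwise be built without dtype information. The decision rule alone, sketched without pandas (the emptiness test here is a simplification of what pandas actually checks):

```python
# Sketch of the dtype_if_empty fallback: only when data is empty AND no
# dtype was given does the default kick in.
def choose_dtype(data, dtype=None, dtype_if_empty=object):
    is_empty = data is None or (hasattr(data, '__len__') and len(data) == 0)
    if is_empty and dtype is None:
        return dtype_if_empty
    return dtype
```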
def cli_endpoint(fn): '\n Decorator for command line endpoints that execute dags or tasks. It runs\n the decorated function, captures exception (if any), sends a colored\n traceback to standard error and exits with code 1.\n\n Notes\n -----\n Functions decorated with this must be called with keywo...
-2,612,740,748,282,923,500
Decorator for command line endpoints that execute dags or tasks. It runs the decorated function, captures exception (if any), sends a colored traceback to standard error and exits with code 1. Notes ----- Functions decorated with this must be called with keyword arguments Call some_endpoint(catch_exception=False) to ...
src/ploomber/cli/io.py
cli_endpoint
abhishak3/ploomber
python
def cli_endpoint(fn): '\n Decorator for command line endpoints that execute dags or tasks. It runs\n the decorated function, captures exception (if any), sends a colored\n traceback to standard error and exits with code 1.\n\n Notes\n -----\n Functions decorated with this must be called with keywo...
def command_endpoint(fn): '\n Decorator for command line endpoints that only parse dags or tasks but do\n not execute them. If it tails, it prints error message to stderror, then\n calls with exit code 1.\n ' @wraps(fn) def wrapper(**kwargs): try: fn(**kwargs) except...
-1,015,329,598,697,073,800
Decorator for command line endpoints that only parse dags or tasks but do not execute them. If it tails, it prints error message to stderror, then calls with exit code 1.
src/ploomber/cli/io.py
command_endpoint
abhishak3/ploomber
python
def command_endpoint(fn): '\n Decorator for command line endpoints that only parse dags or tasks but do\n not execute them. If it tails, it prints error message to stderror, then\n calls with exit code 1.\n ' @wraps(fn) def wrapper(**kwargs): try: fn(**kwargs) except...
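The two ploomber records above describe the same decorator shape: run the wrapped endpoint, and on failure print the error to stderr and exit with code 1. A hedged sketch of that wrapper (without ploomber's colored-traceback handling):

```python
# Sketch of a CLI endpoint decorator: catch exceptions, report to
# stderr, exit 1.
import sys
from functools import wraps

def command_endpoint(fn):
    @wraps(fn)
    def wrapper(**kwargs):
        try:
            fn(**kwargs)
        except Exception as e:
            print(f'Error: {e}', file=sys.stderr)
            sys.exit(1)    # raises SystemExit(1)
    return wrapper
```

Note the docstrings require keyword arguments; the sketch enforces that by accepting only `**kwargs`.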
def assert_flash(self, text): 'asserts that message exists in flashes' for flash_dom in self.find_elements('.flash'): if (flash_dom.text == text): return print(flash_dom.text) raise AssertionError(f'Flash not found for text "{text}"')
-740,493,910,525,271,300
asserts that message exists in flashes
qa327_test/frontend/geek_base.py
assert_flash
nicoleooi/cmpe327
python
def assert_flash(self, text): for flash_dom in self.find_elements('.flash'): if (flash_dom.text == text): return print(flash_dom.text) raise AssertionError(f'Flash not found for text "{text}"')
def login_test_user(self, email=TEST_USER.email, password='test_frontend'): 'login our test user' self.open((base_url + '/login')) self.input('#email', email) self.input('#password', password) self.click('#btn-submit')
214,767,214,298,577,860
login our test user
qa327_test/frontend/geek_base.py
login_test_user
nicoleooi/cmpe327
python
def login_test_user(self, email=TEST_USER.email, password='test_frontend'): self.open((base_url + '/login')) self.input('#email', email) self.input('#password', password) self.click('#btn-submit')
def reject_outliers(y, x=None, m=2.0, replaceNaN=True): ' Reject outliers:\n If replaceNaN is true: they are replaced by NaN \n Otherwise they are removed\n ' if (m == 0): pass else: dd = np.abs((y - np.nanmedian(y))) mdev = np.nanmedian(dd) if mdev: ...
2,903,634,235,489,495,000
Reject outliers: if replaceNaN is True, they are replaced by NaN; otherwise they are removed
pydatview/tools/signal.py
reject_outliers
cdrtm/pyDatView
python
def reject_outliers(y, x=None, m=2.0, replaceNaN=True): ' Reject outliers:\n If replaceNaN is true: they are replaced by NaN \n Otherwise they are removed\n ' if (m == 0): pass else: dd = np.abs((y - np.nanmedian(y))) mdev = np.nanmedian(dd) if mdev: ...
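A minimal sketch of the median-deviation rule used by `reject_outliers`: each sample's distance from the median is scaled by the median of those distances, and samples further than `m` scaled units are treated as outliers. The `x` argument and the `m == 0` short-circuit of the original are omitted:

```python
import numpy as np

def reject_outliers(y, m=2.0, replaceNaN=True):
    # Distance of each sample from the median, scaled by the median of
    # those distances; samples further than m scaled units are outliers.
    y = np.asarray(y, dtype=float)
    dd = np.abs(y - np.nanmedian(y))
    mdev = np.nanmedian(dd)
    if not mdev:
        return y  # all samples at the median; nothing to reject
    s = dd / mdev
    if replaceNaN:
        out = y.copy()
        out[s > m] = np.nan
        return out
    return y[s <= m]

# The single large value is rejected; the rest survive.
print(reject_outliers([1.0, 2.0, 3.0, 100.0], replaceNaN=False))  # [1. 2. 3.]
```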
def moving_average(a, n=3): ' \n perform moving average, return a vector of same length as input\n\n NOTE: also in kalman.filters\n ' a = a.ravel() a = np.concatenate((([a[0]] * (n - 1)), a)) ret = np.cumsum(a, dtype=float) ret[n:] = (ret[n:] - ret[:(- n)]) ret = (ret[(n - 1):] / n) ...
-1,125,062,380,859,889,400
Perform a moving average; return a vector of the same length as the input. NOTE: also in kalman.filters
pydatview/tools/signal.py
moving_average
cdrtm/pyDatView
python
def moving_average(a, n=3): ' \n perform moving average, return a vector of same length as input\n\n NOTE: also in kalman.filters\n ' a = a.ravel() a = np.concatenate((([a[0]] * (n - 1)), a)) ret = np.cumsum(a, dtype=float) ret[n:] = (ret[n:] - ret[:(- n)]) ret = (ret[(n - 1):] / n) ...
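The cumulative-sum trick behind `moving_average` can be shown on its own: front-padding with `a[0]` is what keeps the output the same length as the input, at the cost of biasing the first `n-1` values toward `a[0]`:

```python
import numpy as np

def moving_average(a, n=3):
    # Pad with n-1 copies of the first sample, then use a cumulative sum
    # so the running mean costs O(len(a)) regardless of window size n.
    a = np.asarray(a, dtype=float).ravel()
    a = np.concatenate(([a[0]] * (n - 1), a))
    ret = np.cumsum(a)
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n

print(moving_average([1, 2, 3, 4, 5], n=2))  # [1.  1.5 2.5 3.5 4.5]
```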
def lowpass1(y, dt, fc=3): ' \n 1st order low pass filter\n ' tau = (1 / ((2 * np.pi) * fc)) alpha = (dt / (tau + dt)) y_filt = np.zeros(y.shape) y_filt[0] = y[0] for i in np.arange(1, len(y)): y_filt[i] = ((alpha * y[i]) + ((1 - alpha) * y_filt[(i - 1)])) return y_filt
7,886,073,476,410,002,000
1st order low pass filter
pydatview/tools/signal.py
lowpass1
cdrtm/pyDatView
python
def lowpass1(y, dt, fc=3): ' \n \n ' tau = (1 / ((2 * np.pi) * fc)) alpha = (dt / (tau + dt)) y_filt = np.zeros(y.shape) y_filt[0] = y[0] for i in np.arange(1, len(y)): y_filt[i] = ((alpha * y[i]) + ((1 - alpha) * y_filt[(i - 1)])) return y_filt
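The recurrence implemented by `lowpass1` is the standard discrete first-order filter, `y_f[i] = alpha*y[i] + (1-alpha)*y_f[i-1]` with `alpha = dt/(tau + dt)` and time constant `tau = 1/(2*pi*fc)`. A self-contained restatement:

```python
import numpy as np

def lowpass1(y, dt, fc=3):
    # First-order IIR low-pass: each output is a weighted blend of the
    # current input and the previous output.
    tau = 1.0 / (2 * np.pi * fc)
    alpha = dt / (tau + dt)
    y_filt = np.zeros(len(y))
    y_filt[0] = y[0]
    for i in range(1, len(y)):
        y_filt[i] = alpha * y[i] + (1 - alpha) * y_filt[i - 1]
    return y_filt

# A constant signal passes through unchanged; a step is smoothed
# exponentially toward the new level without overshoot.
print(lowpass1(np.ones(5), dt=0.1))  # [1. 1. 1. 1. 1.]
```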
def highpass1(y, dt, fc=3): ' \n 1st order high pass filter\n ' tau = (1 / ((2 * np.pi) * fc)) alpha = (tau / (tau + dt)) y_filt = np.zeros(y.shape) y_filt[0] = 0 for i in np.arange(1, len(y)): y_filt[i] = ((alpha * y_filt[(i - 1)]) + (alpha * (y[i] - y[(i - 1)]))) m0 = np.mean...
-2,147,949,212,982,904,600
1st order high pass filter
pydatview/tools/signal.py
highpass1
cdrtm/pyDatView
python
def highpass1(y, dt, fc=3): ' \n \n ' tau = (1 / ((2 * np.pi) * fc)) alpha = (tau / (tau + dt)) y_filt = np.zeros(y.shape) y_filt[0] = 0 for i in np.arange(1, len(y)): y_filt[i] = ((alpha * y_filt[(i - 1)]) + (alpha * (y[i] - y[(i - 1)]))) m0 = np.mean(y) m1 = np.mean(y_fil...
def zero_crossings(y, x=None, direction=None): "\n Find zero-crossing points in a discrete vector, using linear interpolation.\n\n direction: 'up' or 'down', to select only up-crossings or down-crossings\n\n returns: \n x values xzc such that y(yzc)==0\n indexes izc, such that the z...
4,096,691,605,760,065,000
Find zero-crossing points in a discrete vector, using linear interpolation. direction: 'up' or 'down', to select only up-crossings or down-crossings returns: x values xzc such that y(yzc)==0 indexes izc, such that the zero is between y[izc] (excluded) and y[izc+1] (included) if direction is not provided, al...
pydatview/tools/signal.py
zero_crossings
cdrtm/pyDatView
python
def zero_crossings(y, x=None, direction=None): "\n Find zero-crossing points in a discrete vector, using linear interpolation.\n\n direction: 'up' or 'down', to select only up-crossings or down-crossings\n\n returns: \n x values xzc such that y(yzc)==0\n indexes izc, such that the z...
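The linear-interpolation idea behind `zero_crossings` can be sketched as follows. This version only handles strict sign changes between consecutive samples; samples that are exactly zero and the `direction` filter of the original are omitted:

```python
import numpy as np

def zero_crossings(y, x=None):
    # izc: indices i where the sign flips between y[i] and y[i+1].
    # xzc: crossing abscissae from linear interpolation between the two samples.
    y = np.asarray(y, dtype=float)
    if x is None:
        x = np.arange(len(y))
    izc = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
    xzc = x[izc] - y[izc] * (x[izc + 1] - x[izc]) / (y[izc + 1] - y[izc])
    return xzc, izc

xzc, izc = zero_crossings(np.array([-1.0, 1.0, 2.0, -2.0]))
print(xzc, izc)  # [0.5 2.5] [0 2]
```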
def correlation(x, nMax=80, dt=1, method='manual'): ' \n Compute auto correlation of a signal\n ' nvec = np.arange(0, nMax) sigma2 = np.var(x) R = np.zeros(nMax) R[0] = 1 for (i, nDelay) in enumerate(nvec[1:]): R[(i + 1)] = (np.mean((x[0:(- nDelay)] * x[nDelay:])) / sigma2) tau...
-753,259,868,257,545,100
Compute auto correlation of a signal
pydatview/tools/signal.py
correlation
cdrtm/pyDatView
python
def correlation(x, nMax=80, dt=1, method='manual'): ' \n \n ' nvec = np.arange(0, nMax) sigma2 = np.var(x) R = np.zeros(nMax) R[0] = 1 for (i, nDelay) in enumerate(nvec[1:]): R[(i + 1)] = (np.mean((x[0:(- nDelay)] * x[nDelay:])) / sigma2) tau = (nvec * dt) return (R, tau)
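The 'manual' estimator in `correlation` computes `R[k] = mean(x[t] * x[t+k]) / var(x)` over the overlapping samples, which assumes a zero-mean signal (the mean is not subtracted). A sketch:

```python
import numpy as np

def correlation(x, nMax=5, dt=1):
    # Autocorrelation at lags 0..nMax-1; R[0] is 1 by construction.
    x = np.asarray(x, dtype=float)
    sigma2 = np.var(x)
    R = np.zeros(nMax)
    R[0] = 1.0
    for k in range(1, nMax):
        R[k] = np.mean(x[:-k] * x[k:]) / sigma2
    tau = np.arange(nMax) * dt
    return R, tau

# White noise decorrelates immediately: R[k] for k >= 1 is close to zero.
R, tau = correlation(np.random.default_rng(0).standard_normal(10000), nMax=3)
```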
def correlated_signal(coeff, n=1000): '\n Create a correlated random signal of length `n` based on the correlation coefficient `coeff`\n value[t] = coeff * value[t-1] + (1-coeff) * random\n ' if ((coeff < 0) or (coeff > 1)): raise Exception('Correlation coefficient should be between 0 an...
734,792,984,720,503,700
Create a correlated random signal of length `n` based on the correlation coefficient `coeff` value[t] = coeff * value[t-1] + (1-coeff) * random
pydatview/tools/signal.py
correlated_signal
cdrtm/pyDatView
python
def correlated_signal(coeff, n=1000): '\n Create a correlated random signal of length `n` based on the correlation coefficient `coeff`\n value[t] = coeff * value[t-1] + (1-coeff) * random\n ' if ((coeff < 0) or (coeff > 1)): raise Exception('Correlation coefficient should be between 0 an...
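The recursion in the docstring, `value[t] = coeff * value[t-1] + (1-coeff) * random`, is an AR(1)-style process. A sketch with a `seed` parameter added for reproducibility (an addition, not part of the original signature):

```python
import numpy as np

def correlated_signal(coeff, n=1000, seed=None):
    # coeff = 0 reproduces the white noise; coeff close to 1 yields a
    # slowly varying signal dominated by its own history.
    if not 0 <= coeff <= 1:
        raise ValueError('Correlation coefficient should be between 0 and 1')
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = coeff * x[t - 1] + (1 - coeff) * eps[t]
    return x
```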
def build_colormap2label(): 'Build an RGB color-to-label mapping for segmentation.' colormap2label = np.zeros((256 ** 3)) for (i, colormap) in enumerate(VOC_COLORMAP): colormap2label[((((colormap[0] * 256) + colormap[1]) * 256) + colormap[2])] = i return colormap2label
2,136,949,172,295,368,700
Build an RGB color-to-label mapping for segmentation.
tools/convet_voc2coco/voc2coco.py
build_colormap2label
yhpengtu/CenterIMask
python
def build_colormap2label(): colormap2label = np.zeros((256 ** 3)) for (i, colormap) in enumerate(VOC_COLORMAP): colormap2label[((((colormap[0] * 256) + colormap[1]) * 256) + colormap[2])] = i return colormap2label
def voc_label_indices(colormap, colormap2label): 'Map an RGB color to a label.' colormap = colormap.astype('int32') idx = ((((colormap[:, :, 0] * 256) + colormap[:, :, 1]) * 256) + colormap[:, :, 2]) return colormap2label[idx]
-3,168,059,356,542,787,600
Map an RGB color to a label.
tools/convet_voc2coco/voc2coco.py
voc_label_indices
yhpengtu/CenterIMask
python
def voc_label_indices(colormap, colormap2label): colormap = colormap.astype('int32') idx = ((((colormap[:, :, 0] * 256) + colormap[:, :, 1]) * 256) + colormap[:, :, 2]) return colormap2label[idx]
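These two functions encode each 24-bit RGB color as the integer `(r*256 + g)*256 + b` and use it to index a lookup table. The sketch below uses a hypothetical two-entry palette standing in for the full `VOC_COLORMAP`, and a `uint8` table (an adjustment to keep the 256^3-entry array small):

```python
import numpy as np

# Hypothetical stand-in for VOC_COLORMAP: background and one class color.
VOC_COLORMAP = [[0, 0, 0], [128, 0, 0]]

def build_colormap2label():
    # One slot per possible 24-bit RGB value; uint8 keeps the table at ~16 MB.
    colormap2label = np.zeros(256 ** 3, dtype=np.uint8)
    for i, colormap in enumerate(VOC_COLORMAP):
        colormap2label[(colormap[0] * 256 + colormap[1]) * 256 + colormap[2]] = i
    return colormap2label

def voc_label_indices(colormap, colormap2label):
    # Vectorized lookup: every pixel's RGB triple becomes a table index.
    colormap = colormap.astype('int32')
    idx = (colormap[:, :, 0] * 256 + colormap[:, :, 1]) * 256 + colormap[:, :, 2]
    return colormap2label[idx]

img = np.array([[[0, 0, 0], [128, 0, 0]]], dtype=np.uint8)  # a 1x2 RGB image
print(voc_label_indices(img, build_colormap2label()))  # [[0 1]]
```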
def inference(model, dataset, limit): 'Run detection on images in the given directory.' if (not os.path.exists(RESULTS_DIR)): os.makedirs(RESULTS_DIR) time_dir = '{:%Y%m%dT%H%M%S}'.format(datetime.datetime.now()) time_dir = os.path.join(RESULTS_DIR, time_dir) os.makedirs(time_dir) for im...
-6,147,800,993,385,219,000
Run detection on images in the given directory.
tools/convet_voc2coco/voc2coco.py
inference
yhpengtu/CenterIMask
python
def inference(model, dataset, limit): if (not os.path.exists(RESULTS_DIR)): os.makedirs(RESULTS_DIR) time_dir = '{:%Y%m%dT%H%M%S}'.format(datetime.datetime.now()) time_dir = os.path.join(RESULTS_DIR, time_dir) os.makedirs(time_dir) for image_id in dataset.image_ids[:limit]: imag...
def load_voc(self, dataset_dir, trainval, year='2012'): "Load a voc_year of the VOC dataset.\n dataset_dir: The root directory of the VOC dataset, example: '/mnt/disk1/VOCdevkit'\n trainval: 'train' or 'val' for Training or Validation\n year: '2007' or '2012' for VOC dataset\n " voc_...
-2,206,683,044,585,229,000
Load a voc_year of the VOC dataset. dataset_dir: The root directory of the VOC dataset, example: '/mnt/disk1/VOCdevkit' trainval: 'train' or 'val' for Training or Validation year: '2007' or '2012' for VOC dataset
tools/convet_voc2coco/voc2coco.py
load_voc
yhpengtu/CenterIMask
python
def load_voc(self, dataset_dir, trainval, year='2012'): "Load a voc_year of the VOC dataset.\n dataset_dir: The root directory of the VOC dataset, example: '/mnt/disk1/VOCdevkit'\n trainval: 'train' or 'val' for Training or Validation\n year: '2007' or '2012' for VOC dataset\n " voc_...
def load_raw_mask(self, image_id, class_or_object): "Load two kinds of mask of the VOC dataset.\n image_id: id of mask\n class_or_object: 'class_mask' or 'object_mask' for SegmentationClass or SegmentationObject\n Returns:\n image: numpy array of the mask image.\n " assert (class_or_object ...
2,401,107,649,179,283,500
Load two kinds of mask of the VOC dataset. image_id: id of mask class_or_object: 'class_mask' or 'object_mask' for SegmentationClass or SegmentationObject Returns: image: numpy array of the mask image.
tools/convet_voc2coco/voc2coco.py
load_raw_mask
yhpengtu/CenterIMask
python
def load_raw_mask(self, image_id, class_or_object): "Load two kinds of mask of the VOC dataset.\n image_id: id of mask\n class_or_object: 'class_mask' or 'object_mask' for SegmentationClass or SegmentationObject\n Returns:\n image: numpy array of the mask image.\n " assert (class_or_object ...
def load_class_label(self, image_id): "Mapping SegmentationClass image's color to indices of the ground truth \n image_id: id of mask\n Return:\n class_label: [height, width] matrix contains values from 0 to 20\n " raw_mask = self.load_raw_mask(image_id, 'class_mask') class_label = vo...
8,214,092,490,726,870,000
Mapping SegmentationClass image's color to indices of the ground truth image_id: id of mask Return: class_label: [height, width] matrix contains values from 0 to 20
tools/convet_voc2coco/voc2coco.py
load_class_label
yhpengtu/CenterIMask
python
def load_class_label(self, image_id): "Mapping SegmentationClass image's color to indices of the ground truth \n image_id: id of mask\n Return:\n class_label: [height, width] matrix contains values from 0 to 20\n " raw_mask = self.load_raw_mask(image_id, 'class_mask') class_label = vo...
def load_mask(self, image_id): 'Mapping annotation images to real Masks(MRCNN needed)\n image_id: id of mask\n Returns:\n masks: A bool array of shape [height, width, instance count] with\n one mask per instance.\n class_ids: a 1D array of class IDs of the instance masks.\n ...
-5,372,344,417,436,508,000
Mapping annotation images to real Masks(MRCNN needed) image_id: id of mask Returns: masks: A bool array of shape [height, width, instance count] with one mask per instance. class_ids: a 1D array of class IDs of the instance masks.
tools/convet_voc2coco/voc2coco.py
load_mask
yhpengtu/CenterIMask
python
def load_mask(self, image_id): 'Mapping annotation images to real Masks(MRCNN needed)\n image_id: id of mask\n Returns:\n masks: A bool array of shape [height, width, instance count] with\n one mask per instance.\n class_ids: a 1D array of class IDs of the instance masks.\n ...
def getKeyId(self): "\n Get the keyId used by this peer (this peer's identifier).\n\n This is stored in the key store.\n " return self.keyStore.getKeyId()
-688,107,004,979,895,800
Get the keyId used by this peer (this peer's identifier). This is stored in the key store.
tint/peer.py
getKeyId
8468/tint
python
def getKeyId(self): "\n Get the keyId used by this peer (this peer's identifier).\n\n This is stored in the key store.\n " return self.keyStore.getKeyId()
def getPublicKey(self): "\n Get the keyId used by this peer (this peer's identifier).\n\n This is stored in the key store.\n " return self.keyStore.getPublicKey()
-1,210,259,793,139,717,600
Get the keyId used by this peer (this peer's identifier). This is stored in the key store.
tint/peer.py
getPublicKey
8468/tint
python
def getPublicKey(self): "\n Get the keyId used by this peer (this peer's identifier).\n\n This is stored in the key store.\n " return self.keyStore.getPublicKey()
def set(self, hostKeyId, storagePath, storageValue): "\n Set a value on a host.\n\n @param hostKeyId: The key id for the destination host to set the\n given key. This could be the local host, in which case the hostKey\n will be the same as this C{Peer}'s keyStore keyId.\n\n @para...
6,969,237,322,244,427,000
Set a value on a host. @param hostKeyId: The key id for the destination host to set the given key. This could be the local host, in which case the hostKey will be the same as this C{Peer}'s keyStore keyId. @param storagePath: The path to the key to set. For instance, this could be something like /chat/<somekey>/inb...
tint/peer.py
set
8468/tint
python
def set(self, hostKeyId, storagePath, storageValue): "\n Set a value on a host.\n\n @param hostKeyId: The key id for the destination host to set the\n given key. This could be the local host, in which case the hostKey\n will be the same as this C{Peer}'s keyStore keyId.\n\n @para...
def get(self, hostKeyId, storagePath): "\n Get a value from a host.\n\n @param hostKeyId: The key id for the destination host to get the\n given key. This could be the local host, in which case the hostKey\n will be the same as this C{Peer}'s keyStore keyId.\n\n @param storagePat...
1,392,539,997,521,182,500
Get a value from a host. @param hostKeyId: The key id for the destination host to get the given key. This could be the local host, in which case the hostKey will be the same as this C{Peer}'s keyStore keyId. @param storagePath: The path to the key to get. For instance, this could be something like /chat/<somekey>/i...
tint/peer.py
get
8468/tint
python
def get(self, hostKeyId, storagePath): "\n Get a value from a host.\n\n @param hostKeyId: The key id for the destination host to get the\n given key. This could be the local host, in which case the hostKey\n will be the same as this C{Peer}'s keyStore keyId.\n\n @param storagePat...
def push(self, hostKeyId, storagePath, storageValue): '\n Given key, create a new key at <key>/<id> with the given value, where <id>\n is an auto-incrementing integer value starting at 0.\n ' if (hostKeyId == self.getKeyId()): return self.storage.push(hostKeyId, storagePath, storage...
7,086,835,767,524,889,000
Given key, create a new key at <key>/<id> with the given value, where <id> is an auto-incrementing integer value starting at 0.
tint/peer.py
push
8468/tint
python
def push(self, hostKeyId, storagePath, storageValue): '\n Given key, create a new key at <key>/<id> with the given value, where <id>\n is an auto-incrementing integer value starting at 0.\n ' if (hostKeyId == self.getKeyId()): return self.storage.push(hostKeyId, storagePath, storage...
def ls(self, hostKeyId, storagePath, offset, length): '\n Given key, get all children keys (with the given offset and length). Length cannot\n be more than 1000.\n ' if (hostKeyId == self.getKeyId()): return self.storage.ls(hostKeyId, storagePath, offset, length) return self.po...
4,173,619,235,199,410,000
Given key, get all children keys (with the given offset and length). Length cannot be more than 1000.
tint/peer.py
ls
8468/tint
python
def ls(self, hostKeyId, storagePath, offset, length): '\n Given key, get all children keys (with the given offset and length). Length cannot\n be more than 1000.\n ' if (hostKeyId == self.getKeyId()): return self.storage.ls(hostKeyId, storagePath, offset, length) return self.po...
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown=None, handle_missing='count', min_group_size=None, combine_min_nan_groups=True, min_group_name=None, normalize=False): 'Count encoding for categorical features.\n\n For a given categorical feature, replace the names...
4,560,433,223,553,619,000
Count encoding for categorical features. For a given categorical feature, replace the names of the groups with the group counts. Parameters ---------- verbose: int integer indicating verbosity of output. 0 for none. cols: list a list of columns to encode, if None, all string and categorical columns will ...
category_encoders/count.py
__init__
JoshuaC3/categorical-encoding
python
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown=None, handle_missing='count', min_group_size=None, combine_min_nan_groups=True, min_group_name=None, normalize=False): 'Count encoding for categorical features.\n\n For a given categorical feature, replace the names...
def fit(self, X, y=None, **kwargs): 'Fit encoder according to X.\n\n Parameters\n ----------\n X : array-like, shape = [n_samples, n_features]\n Training vectors, where n_samples is the number of samples\n and n_features is the number of features.\n y : array-like, ...
8,885,542,150,674,907,000
Fit encoder according to X. Parameters ---------- X : array-like, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. Returns ------- self : encoder Returns self.
category_encoders/count.py
fit
JoshuaC3/categorical-encoding
python
def fit(self, X, y=None, **kwargs): 'Fit encoder according to X.\n\n Parameters\n ----------\n X : array-like, shape = [n_samples, n_features]\n Training vectors, where n_samples is the number of samples\n and n_features is the number of features.\n y : array-like, ...
def transform(self, X, y=None): 'Perform the transformation to new categorical data.\n\n Parameters\n ----------\n X : array-like, shape = [n_samples, n_features]\n y : array-like, shape = [n_samples]\n \n Returns\n -------\n p : array, shape = [n_samples,...
-7,646,785,548,588,537,000
Perform the transformation to new categorical data. Parameters ---------- X : array-like, shape = [n_samples, n_features] y : array-like, shape = [n_samples] Returns ------- p : array, shape = [n_samples, n_numeric + N] Transformed values with encoding applied.
category_encoders/count.py
transform
JoshuaC3/categorical-encoding
python
def transform(self, X, y=None): 'Perform the transformation to new categorical data.\n\n Parameters\n ----------\n X : array-like, shape = [n_samples, n_features]\n y : array-like, shape = [n_samples]\n \n Returns\n -------\n p : array, shape = [n_samples,...
def _fit_count_encode(self, X_in, y): 'Perform the count encoding.' X = X_in.copy(deep=True) if (self.cols is None): self.cols = X.columns.values self.mapping = {} for col in self.cols: if X[col].isna().any(): if (self._handle_missing[col] == 'error'): rai...
-7,622,429,411,437,038,000
Perform the count encoding.
category_encoders/count.py
_fit_count_encode
JoshuaC3/categorical-encoding
python
def _fit_count_encode(self, X_in, y): X = X_in.copy(deep=True) if (self.cols is None): self.cols = X.columns.values self.mapping = {} for col in self.cols: if X[col].isna().any(): if (self._handle_missing[col] == 'error'): raise ValueError(('Missing data ...
def _transform_count_encode(self, X_in, y): 'Perform the transform count encoding.' X = X_in.copy(deep=True) for col in self.cols: if (self._min_group_size is not None): if (col in self._min_group_categories.keys()): X[col] = X[col].map(self._min_group_categories[col]).fi...
-5,871,763,005,789,190,000
Perform the transform count encoding.
category_encoders/count.py
_transform_count_encode
JoshuaC3/categorical-encoding
python
def _transform_count_encode(self, X_in, y): X = X_in.copy(deep=True) for col in self.cols: if (self._min_group_size is not None): if (col in self._min_group_categories.keys()): X[col] = X[col].map(self._min_group_categories[col]).fillna(X[col]) X[col] = X[col].ma...
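Stripped of the grouping and missing-value options handled by `_fit_count_encode` and `_transform_count_encode`, count encoding reduces to a value-counts lookup. A minimal sketch (`count_encode` is a hypothetical helper, not part of the encoder's API):

```python
import pandas as pd

def count_encode(series, normalize=False):
    # Replace each category with how often it occurs in the column;
    # normalize=True yields relative frequencies instead of raw counts.
    counts = series.value_counts(normalize=normalize)
    return series.map(counts)

s = pd.Series(['a', 'b', 'a', 'a'])
print(count_encode(s).tolist())  # [3, 1, 3, 3]
```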
def combine_min_categories(self, X): 'Combine small categories into a single category.' for (col, mapper) in self.mapping.items(): if (self._normalize[col] and isinstance(self._min_group_size[col], int)): self._min_group_size[col] = (self._min_group_size[col] / X.shape[0]) elif ((not...
6,089,501,862,832,393,000
Combine small categories into a single category.
category_encoders/count.py
combine_min_categories
JoshuaC3/categorical-encoding
python
def combine_min_categories(self, X): for (col, mapper) in self.mapping.items(): if (self._normalize[col] and isinstance(self._min_group_size[col], int)): self._min_group_size[col] = (self._min_group_size[col] / X.shape[0]) elif ((not self._normalize) and isinstance(self._min_group_s...
def _check_set_create_dict_attrs(self): 'Check attributes that can be dicts and format for all self.cols.' dict_attrs = {'normalize': False, 'min_group_name': None, 'combine_min_nan_groups': True, 'min_group_size': None, 'handle_unknown': 'value', 'handle_missing': 'value'} for (attr_name, attr_default) in ...
-5,527,060,141,706,155,000
Check attributes that can be dicts and format for all self.cols.
category_encoders/count.py
_check_set_create_dict_attrs
JoshuaC3/categorical-encoding
python
def _check_set_create_dict_attrs(self): dict_attrs = {'normalize': False, 'min_group_name': None, 'combine_min_nan_groups': True, 'min_group_size': None, 'handle_unknown': 'value', 'handle_missing': 'value'} for (attr_name, attr_default) in dict_attrs.items(): attr = copy(getattr(self, attr_name)) ...
def _test_null_distribution_basic(self, test: str, lazy: bool, quick_scale: bool=False, n_cells: int=3000, n_genes: int=200, n_groups: int=3): '\n Test if de.wald() generates a uniform p-value distribution\n if it is given data simulated based on the null model. Returns the p-value\n of the two...
-1,909,257,724,457,016,800
Test if de.wald() generates a uniform p-value distribution if it is given data simulated based on the null model. Returns the p-value of the two-sided Kolmogorov-Smirnov test for equality of the observed p-value distribution and a uniform distribution. :param n_cells: Number of cells to simulate (number of observation...
diffxpy/unit_test/test_pairwise.py
_test_null_distribution_basic
gokceneraslan/diffxpy
python
def _test_null_distribution_basic(self, test: str, lazy: bool, quick_scale: bool=False, n_cells: int=3000, n_genes: int=200, n_groups: int=3): '\n Test if de.wald() generates a uniform p-value distribution\n if it is given data simulated based on the null model. Returns the p-value\n of the two...
def info(self): 'Print info about this unit, overrides superclass method.' print('Grrr, I am the Orc Fighter!')
-3,397,929,164,373,522,400
Print info about this unit, overrides superclass method.
wargame/designpatterns/pythonic_orcfighter.py
info
jeantardelli/wargameRepo
python
def info(self): print('Grrr, I am the Orc Fighter!')
def all_equal(left, right, cache=None): 'Check whether two objects `left` and `right` are equal.\n\n Parameters\n ----------\n left : Union[object, Expr, Node]\n right : Union[object, Expr, Node]\n cache : Optional[Dict[Tuple[Node, Node], bool]]\n A dictionary indicating whether two Nodes are ...
-8,700,499,191,523,920,000
Check whether two objects `left` and `right` are equal. Parameters ---------- left : Union[object, Expr, Node] right : Union[object, Expr, Node] cache : Optional[Dict[Tuple[Node, Node], bool]] A dictionary indicating whether two Nodes are equal
ibis/expr/operations.py
all_equal
odidev/ibis
python
def all_equal(left, right, cache=None): 'Check whether two objects `left` and `right` are equal.\n\n Parameters\n ----------\n left : Union[object, Expr, Node]\n right : Union[object, Expr, Node]\n cache : Optional[Dict[Tuple[Node, Node], bool]]\n A dictionary indicating whether two Nodes are ...
def __getstate__(self) -> Dict[(str, Any)]: 'The attributes _expr_cached and _hash are\n used as caches; they can be excluded from\n serialization without affecting correctness.\n\n Excluding _expr_cached and _hash from serialization\n will allow the serialized bytes to be the same for\n...
5,526,020,258,835,681,000
The attributes _expr_cached and _hash are used as caches; they can be excluded from serialization without affecting correctness. Excluding _expr_cached and _hash from serialization will allow the serialized bytes to be the same for equivalent Node objects. Returns ------- Dict[str, Any] A dictionary storing the ob...
ibis/expr/operations.py
__getstate__
odidev/ibis
python
def __getstate__(self) -> Dict[(str, Any)]: 'The attributes _expr_cached and _hash are\n used as caches; they can be excluded from\n serialization without affecting correctness.\n\n Excluding _expr_cached and _hash from serialization\n will allow the serialized bytes to be the same for\n...
def __setstate__(self, state: Dict[(str, Any)]) -> None: '\n Parameters\n ----------\n state: Dict[str, Any]\n A dictionary storing the attributes of the object.\n ' for slot in state: setattr(self, slot, state[slot])
5,854,483,499,252,395,000
Parameters ---------- state: Dict[str, Any] A dictionary storing the attributes of the object.
ibis/expr/operations.py
__setstate__
odidev/ibis
python
def __setstate__(self, state: Dict[(str, Any)]) -> None: '\n Parameters\n ----------\n state: Dict[str, Any]\n A dictionary storing the attributes of the object.\n ' for slot in state: setattr(self, slot, state[slot])
def output_type(self): '\n This function must resolve the output type of the expression and return\n the node wrapped in the appropriate ValueExpr type.\n ' raise NotImplementedError
5,740,557,941,522,150,000
This function must resolve the output type of the expression and return the node wrapped in the appropriate ValueExpr type.
ibis/expr/operations.py
output_type
odidev/ibis
python
def output_type(self): '\n This function must resolve the output type of the expression and return\n the node wrapped in the appropriate ValueExpr type.\n ' raise NotImplementedError
def count(self): 'Only valid if the distinct contains a single column' return CountDistinct(self.arg)
3,605,110,978,898,989,600
Only valid if the distinct contains a single column
ibis/expr/operations.py
count
odidev/ibis
python
def count(self): return CountDistinct(self.arg)
def else_(self, result_expr): '\n Specify the default result, used when no case matches\n\n Returns\n -------\n builder : CaseBuilder\n ' kwargs = {slot: getattr(self, slot) for slot in self.__slots__ if (slot != 'default')} result_expr = ir.as_value_expr(result_expr) kwargs['default'] = result_expr return typ...
-5,472,837,417,554,490,000
Specify the default result, used when no case matches Returns ------- builder : CaseBuilder
ibis/expr/operations.py
else_
odidev/ibis
python
def else_(self, result_expr): '\n Specify the default result, used when no case matches\n\n Returns\n -------\n builder : CaseBuilder\n ' kwargs = {slot: getattr(self, slot) for slot in self.__slots__ if (slot != 'default')} result_expr = ir.as_value_expr(result_expr) kwargs['default'] = result_expr return typ...
def when(self, case_expr, result_expr): '\n Add a new case-result pair.\n\n Parameters\n ----------\n case : Expr\n Expression to equality-compare with base expression. Must be\n comparable with the base.\n result : Expr\n Value when the case predicate e...
-23,041,507,715,912,564
Add a new case-result pair. Parameters ---------- case : Expr Expression to equality-compare with base expression. Must be comparable with the base. result : Expr Value when the case predicate evaluates to true. Returns ------- builder : CaseBuilder
ibis/expr/operations.py
when
odidev/ibis
python
def when(self, case_expr, result_expr): '\n Add a new case-result pair.\n\n Parameters\n ----------\n case : Expr\n Expression to equality-compare with base expression. Must be\n comparable with the base.\n result : Expr\n Value when the case predicate e...
def when(self, case_expr, result_expr): '\n Add a new case-result pair.\n\n Parameters\n ----------\n case : Expr\n Expression to equality-compare with base expression. Must be\n comparable with the base.\n result : Expr\n Value when the case predicate e...
7,831,332,286,804,111,000
Add a new case-result pair. Parameters ---------- case : Expr Expression to equality-compare with base expression. Must be comparable with the base. result : Expr Value when the case predicate evaluates to true. Returns ------- builder : CaseBuilder
ibis/expr/operations.py
when
odidev/ibis
python
def when(self, case_expr, result_expr): '\n Add a new case-result pair.\n\n Parameters\n ----------\n case : Expr\n Expression to equality-compare with base expression. Must be\n comparable with the base.\n result : Expr\n Value when the case predicate e...
def __init__(self, left, right): '\n Casting rules for type promotions (for resolving the output type) may\n depend in some cases on the target backend.\n\n TODO: how will overflows be handled? Can we provide anything useful in\n Ibis to help the user avoid them?\n\n :param left:\...
4,501,114,707,235,070,000
Casting rules for type promotions (for resolving the output type) may depend in some cases on the target backend. TODO: how will overflows be handled? Can we provide anything useful in Ibis to help the user avoid them? :param left: :param right:
ibis/expr/operations.py
__init__
odidev/ibis
python
def __init__(self, left, right): '\n Casting rules for type promotions (for resolving the output type) may\n depend in some cases on the target backend.\n\n TODO: how will overflows be handled? Can we provide anything useful in\n Ibis to help the user avoid them?\n\n :param left:\...
def __hash__(self) -> int: "Return the hash of a literal value.\n\n We override this method to make sure that we can handle things that\n aren't eminently hashable like an ``array<array<int64>>``.\n\n " return hash(self.dtype._literal_value_hash_key(self.value))
-8,880,341,266,466,899,000
Return the hash of a literal value. We override this method to make sure that we can handle things that aren't eminently hashable like an ``array<array<int64>>``.
ibis/expr/operations.py
__hash__
odidev/ibis
python
def __hash__(self) -> int: "Return the hash of a literal value.\n\n We override this method to make sure that we can handle things that\n aren't eminently hashable like an ``array<array<int64>>``.\n\n " return hash(self.dtype._literal_value_hash_key(self.value))
def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions]=None, daily_recurrence: Optional[pulumi.Input[pulumi.InputType['DayDetailsArgs']]]=None, hourly_recurrence: Optional[pulumi.Input[pulumi.InputType['HourDetailsArgs']]]=None, lab_name: Optional[pulumi.Input[str]]=None, location: Optional[p...
6,186,674,530,000,249,000
A schedule. API Version: 2018-09-15. :param str resource_name: The name of the resource. :param pulumi.ResourceOptions opts: Options for the resource. :param pulumi.Input[pulumi.InputType['DayDetailsArgs']] daily_recurrence: If the schedule will occur once each day of the week, specify the daily recurrence. :param pul...
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
__init__
pulumi/pulumi-azure-nextgen
python
def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions]=None, daily_recurrence: Optional[pulumi.Input[pulumi.InputType['DayDetailsArgs']]]=None, hourly_recurrence: Optional[pulumi.Input[pulumi.InputType['HourDetailsArgs']]]=None, lab_name: Optional[pulumi.Input[str]]=None, location: Optional[p...
@staticmethod def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions]=None) -> 'Schedule': "\n Get an existing Schedule resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n\n :param str resource_name: The uniqu...
8,867,794,031,495,096,000
Get an existing Schedule resource's state with the given name, id, and optional extra properties used to qualify the lookup. :param str resource_name: The unique name of the resulting resource. :param pulumi.Input[str] id: The unique provider ID of the resource to lookup. :param pulumi.ResourceOptions opts: Options fo...
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
get
pulumi/pulumi-azure-nextgen
python
@staticmethod def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions]=None) -> 'Schedule': "\n Get an existing Schedule resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n\n :param str resource_name: The uniqu...
@property @pulumi.getter(name='createdDate') def created_date(self) -> pulumi.Output[str]: '\n The creation date of the schedule.\n ' return pulumi.get(self, 'created_date')
-4,870,187,973,321,437,000
The creation date of the schedule.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
created_date
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='createdDate') def created_date(self) -> pulumi.Output[str]: '\n \n ' return pulumi.get(self, 'created_date')
@property @pulumi.getter(name='dailyRecurrence') def daily_recurrence(self) -> pulumi.Output[Optional['outputs.DayDetailsResponse']]: '\n If the schedule will occur once each day of the week, specify the daily recurrence.\n ' return pulumi.get(self, 'daily_recurrence')
6,480,351,104,012,898,000
If the schedule will occur once each day of the week, specify the daily recurrence.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
daily_recurrence
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='dailyRecurrence') def daily_recurrence(self) -> pulumi.Output[Optional['outputs.DayDetailsResponse']]: '\n \n ' return pulumi.get(self, 'daily_recurrence')
@property @pulumi.getter(name='hourlyRecurrence') def hourly_recurrence(self) -> pulumi.Output[Optional['outputs.HourDetailsResponse']]: '\n If the schedule will occur multiple times a day, specify the hourly recurrence.\n ' return pulumi.get(self, 'hourly_recurrence')
-210,472,516,599,062,460
If the schedule will occur multiple times a day, specify the hourly recurrence.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
hourly_recurrence
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='hourlyRecurrence') def hourly_recurrence(self) -> pulumi.Output[Optional['outputs.HourDetailsResponse']]: '\n \n ' return pulumi.get(self, 'hourly_recurrence')
@property @pulumi.getter def location(self) -> pulumi.Output[Optional[str]]: '\n The location of the resource.\n ' return pulumi.get(self, 'location')
-6,989,812,945,498,137,000
The location of the resource.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
location
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter def location(self) -> pulumi.Output[Optional[str]]: '\n \n ' return pulumi.get(self, 'location')
@property @pulumi.getter def name(self) -> pulumi.Output[str]: '\n The name of the resource.\n ' return pulumi.get(self, 'name')
7,945,008,266,317,837,000
The name of the resource.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
name
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter def name(self) -> pulumi.Output[str]: '\n \n ' return pulumi.get(self, 'name')
@property @pulumi.getter(name='notificationSettings') def notification_settings(self) -> pulumi.Output[Optional['outputs.NotificationSettingsResponse']]: '\n Notification settings.\n ' return pulumi.get(self, 'notification_settings')
873,310,138,035,010,600
Notification settings.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
notification_settings
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='notificationSettings') def notification_settings(self) -> pulumi.Output[Optional['outputs.NotificationSettingsResponse']]: '\n \n ' return pulumi.get(self, 'notification_settings')
@property @pulumi.getter(name='provisioningState') def provisioning_state(self) -> pulumi.Output[str]: '\n The provisioning status of the resource.\n ' return pulumi.get(self, 'provisioning_state')
-5,777,047,059,194,198,000
The provisioning status of the resource.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
provisioning_state
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='provisioningState') def provisioning_state(self) -> pulumi.Output[str]: '\n \n ' return pulumi.get(self, 'provisioning_state')
@property @pulumi.getter def status(self) -> pulumi.Output[Optional[str]]: '\n The status of the schedule (i.e. Enabled, Disabled)\n ' return pulumi.get(self, 'status')
1,623,179,802,714,244,400
The status of the schedule (i.e. Enabled, Disabled)
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
status
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter def status(self) -> pulumi.Output[Optional[str]]: '\n \n ' return pulumi.get(self, 'status')
@property @pulumi.getter def tags(self) -> pulumi.Output[Optional[Mapping[(str, str)]]]: '\n The tags of the resource.\n ' return pulumi.get(self, 'tags')
4,713,149,495,578,682,000
The tags of the resource.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
tags
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter def tags(self) -> pulumi.Output[Optional[Mapping[(str, str)]]]: '\n \n ' return pulumi.get(self, 'tags')
@property @pulumi.getter(name='targetResourceId') def target_resource_id(self) -> pulumi.Output[Optional[str]]: '\n The resource ID to which the schedule belongs\n ' return pulumi.get(self, 'target_resource_id')
1,420,396,896,255,958,500
The resource ID to which the schedule belongs
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
target_resource_id
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='targetResourceId') def target_resource_id(self) -> pulumi.Output[Optional[str]]: '\n \n ' return pulumi.get(self, 'target_resource_id')
@property @pulumi.getter(name='taskType') def task_type(self) -> pulumi.Output[Optional[str]]: '\n The task type of the schedule (e.g. LabVmsShutdownTask, LabVmAutoStart).\n ' return pulumi.get(self, 'task_type')
5,791,849,811,834,436,000
The task type of the schedule (e.g. LabVmsShutdownTask, LabVmAutoStart).
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
task_type
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='taskType') def task_type(self) -> pulumi.Output[Optional[str]]: '\n \n ' return pulumi.get(self, 'task_type')
@property @pulumi.getter(name='timeZoneId') def time_zone_id(self) -> pulumi.Output[Optional[str]]: '\n The time zone ID (e.g. Pacific Standard time).\n ' return pulumi.get(self, 'time_zone_id')
4,756,117,501,304,452,000
The time zone ID (e.g. Pacific Standard time).
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
time_zone_id
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='timeZoneId') def time_zone_id(self) -> pulumi.Output[Optional[str]]: '\n \n ' return pulumi.get(self, 'time_zone_id')
@property @pulumi.getter def type(self) -> pulumi.Output[str]: '\n The type of the resource.\n ' return pulumi.get(self, 'type')
3,589,901,220,239,403,500
The type of the resource.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
type
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter def type(self) -> pulumi.Output[str]: '\n \n ' return pulumi.get(self, 'type')
@property @pulumi.getter(name='uniqueIdentifier') def unique_identifier(self) -> pulumi.Output[str]: '\n The unique immutable identifier of a resource (Guid).\n ' return pulumi.get(self, 'unique_identifier')
2,468,897,841,730,923,500
The unique immutable identifier of a resource (Guid).
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
unique_identifier
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='uniqueIdentifier') def unique_identifier(self) -> pulumi.Output[str]: '\n \n ' return pulumi.get(self, 'unique_identifier')
@property @pulumi.getter(name='weeklyRecurrence') def weekly_recurrence(self) -> pulumi.Output[Optional['outputs.WeekDetailsResponse']]: '\n If the schedule will occur only some days of the week, specify the weekly recurrence.\n ' return pulumi.get(self, 'weekly_recurrence')
-1,530,063,684,734,173,200
If the schedule will occur only some days of the week, specify the weekly recurrence.
sdk/python/pulumi_azure_nextgen/devtestlab/schedule.py
weekly_recurrence
pulumi/pulumi-azure-nextgen
python
@property @pulumi.getter(name='weeklyRecurrence') def weekly_recurrence(self) -> pulumi.Output[Optional['outputs.WeekDetailsResponse']]: '\n \n ' return pulumi.get(self, 'weekly_recurrence')
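The `Schedule` records above all follow one accessor pattern: `@property` stacked on `@pulumi.getter(name='camelCase')`, with the body delegating to `pulumi.get(self, 'snake_case')`. A minimal sketch of that idea without the pulumi SDK — the `getter` decorator, `Resource` base class, and sample values here are hypothetical stand-ins, not pulumi's actual internals:

```python
# Sketch of the @property + getter(name=...) pattern from the Schedule
# class above, with a toy decorator in place of pulumi.getter.

def getter(name=None):
    """Record the camelCase wire name on the wrapped accessor."""
    def wrap(fn):
        fn._wire_name = name or fn.__name__
        return fn
    return wrap

class Resource:
    def __init__(self, **outputs):
        # Outputs keyed by snake_case attribute name, like pulumi.get uses.
        self._outputs = outputs

    def get_output(self, key):
        return self._outputs.get(key)

class Schedule(Resource):
    @property
    @getter(name='timeZoneId')
    def time_zone_id(self):
        return self.get_output('time_zone_id')

    @property
    @getter(name='taskType')
    def task_type(self):
        return self.get_output('task_type')

s = Schedule(time_zone_id='Pacific Standard Time',
             task_type='LabVmsShutdownTask')
```

The wire name survives on the property's `fget` (`Schedule.time_zone_id.fget._wire_name`), which is how a serializer could map snake_case attributes back to the camelCase API fields.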
def run(self, document, number_sentences): '\n :param: number_sentences, starts with 0 for the first sentence\n ' boundaries = (document.sentences_boundaries[0][0], document.sentences_boundaries[:(number_sentences + 1)][(- 1)][1]) document.text = document.text[boundaries[0]:boundaries[1]] d...
3,765,172,281,749,577,000
:param: number_sentences, starts with 0 for the first sentence
pipeline/filter.py
run
hadyelsahar/RE-NLG-Dataset
python
def run(self, document, number_sentences): '\n \n ' boundaries = (document.sentences_boundaries[0][0], document.sentences_boundaries[:(number_sentences + 1)][(- 1)][1]) document.text = document.text[boundaries[0]:boundaries[1]] document.sentences_boundaries = self._limitSenteceBoundaries(d...
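The `run` body above trims a document to its first `number_sentences + 1` sentences using character-offset pairs in `sentences_boundaries`. A standalone sketch of that boundary arithmetic (the sample text and offsets are illustrative):

```python
# Sketch of the slicing in filter.run: keep text from the start of the
# first sentence through the end of sentence index `number_sentences`
# (0-based), where each boundary is a (start, end) character offset.

def trim_to_sentences(text, sentences_boundaries, number_sentences):
    start = sentences_boundaries[0][0]
    end = sentences_boundaries[: number_sentences + 1][-1][1]
    return text[start:end]

text = "One. Two. Three."
bounds = [(0, 4), (5, 9), (10, 16)]
trim_to_sentences(text, bounds, 1)  # "One. Two."
```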
def __init__(self, all_triples, entities): '\n :param: input TripleReaderTriples object\n :param: a list of entity that should be filtered\n ' self.wikidata_triples = all_triples self.entities = entities
-2,811,689,684,874,151,400
:param: input TripleReaderTriples object :param: a list of entity that should be filtered
pipeline/filter.py
__init__
hadyelsahar/RE-NLG-Dataset
python
def __init__(self, all_triples, entities): '\n :param: input TripleReaderTriples object\n :param: a list of entity that should be filtered\n ' self.wikidata_triples = all_triples self.entities = entities
def grabArtifactFromJenkins(**context): "\n Grab an artifact from the previous job\n The python-jenkins library doesn't expose a method for that\n But it's totally possible to build manually the request for that\n " hook = JenkinsHook('jenkins_nqa') jenkins_server = hook.get_jenkins_server() ...
2,763,114,726,950,187,500
Grab an artifact from the previous job The python-jenkins library doesn't expose a method for that But it's totally possible to build manually the request for that
dags/jenkins_dag.py
grabArtifactFromJenkins
shameerb/incubator-airflow
python
def grabArtifactFromJenkins(**context): "\n Grab an artifact from the previous job\n The python-jenkins library doesn't expose a method for that\n But it's totally possible to build manually the request for that\n " hook = JenkinsHook('jenkins_nqa') jenkins_server = hook.get_jenkins_server() ...
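The docstring above notes that python-jenkins exposes no artifact-download method, so the request is built by hand. A minimal sketch of composing the artifact URL from a build's URL — the job name, build number, and artifact path below are hypothetical:

```python
# Jenkins serves build artifacts under <build_url>/artifact/<relative_path>.
# This sketches the URL construction only; the hostname/job values are
# made up for illustration.

def artifact_url(build_url, relative_path):
    return build_url.rstrip('/') + '/artifact/' + relative_path

url = artifact_url('https://jenkins.example.com/job/nqa/42/',
                   'output/report.xml')
# The fetch itself could then reuse the hook's authenticated connection,
# e.g. jenkins_server.jenkins_open(Request(url)) in python-jenkins.
```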
def api23_link_aggregation_groups_delete_with_http_info(self, ids=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'DELETE link-aggregation-groups\n\n Remove a link aggregation group to unbind the ports.\n This method makes a synchronous H...
-3,378,384,162,495,183,400
DELETE link-aggregation-groups Remove a link aggregation group to unbind the ports. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.api23_link_aggregation_groups_delete_with_http_info(async_req=True) >>> result = thread.get() :...
pypureclient/flashblade/FB_2_3/api/link_aggregation_groups_api.py
api23_link_aggregation_groups_delete_with_http_info
Flav-STOR-WL/py-pure-client
python
def api23_link_aggregation_groups_delete_with_http_info(self, ids=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'DELETE link-aggregation-groups\n\n Remove a link aggregation group to unbind the ports.\n This method makes a synchronous H...
def api23_link_aggregation_groups_get_with_http_info(self, continuation_token=None, filter=None, ids=None, limit=None, names=None, offset=None, sort=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): "GET link-aggregation-groups\n\n List the status and attribu...
-7,130,267,852,678,943,000
GET link-aggregation-groups List the status and attributes of the Ethernet ports in the configured link aggregation groups. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.api23_link_aggregation_groups_get_with_http_info(async_r...
pypureclient/flashblade/FB_2_3/api/link_aggregation_groups_api.py
api23_link_aggregation_groups_get_with_http_info
Flav-STOR-WL/py-pure-client
python
def api23_link_aggregation_groups_get_with_http_info(self, continuation_token=None, filter=None, ids=None, limit=None, names=None, offset=None, sort=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): "GET link-aggregation-groups\n\n List the status and attribu...
def api23_link_aggregation_groups_patch_with_http_info(self, link_aggregation_group=None, ids=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'PATCH link-aggregation-groups\n\n Modify link aggregation groups by adding and removing Ethernet ports...
916,694,224,354,934,500
PATCH link-aggregation-groups Modify link aggregation groups by adding and removing Ethernet ports. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.api23_link_aggregation_groups_patch_with_http_info(link_aggregation_group, async...
pypureclient/flashblade/FB_2_3/api/link_aggregation_groups_api.py
api23_link_aggregation_groups_patch_with_http_info
Flav-STOR-WL/py-pure-client
python
def api23_link_aggregation_groups_patch_with_http_info(self, link_aggregation_group=None, ids=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'PATCH link-aggregation-groups\n\n Modify link aggregation groups by adding and removing Ethernet ports...
def api23_link_aggregation_groups_post_with_http_info(self, link_aggregation_group=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'POST link-aggregation-groups\n\n Create a link aggregation group of Ethernet ports on the array.\n This me...
5,937,333,888,890,664,000
POST link-aggregation-groups Create a link aggregation group of Ethernet ports on the array. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.api23_link_aggregation_groups_post_with_http_info(link_aggregation_group, names, async_...
pypureclient/flashblade/FB_2_3/api/link_aggregation_groups_api.py
api23_link_aggregation_groups_post_with_http_info
Flav-STOR-WL/py-pure-client
python
def api23_link_aggregation_groups_post_with_http_info(self, link_aggregation_group=None, names=None, async_req=False, _return_http_data_only=False, _preload_content=True, _request_timeout=None): 'POST link-aggregation-groups\n\n Create a link aggregation group of Ethernet ports on the array.\n This me...
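The `api23_*` docstrings above all describe the same calling convention: synchronous by default, but with `async_req=True` they return a handle whose `.get()` yields the result. A sketch of that pattern using a thread pool — the `call_api` function and its stand-in response are assumptions, not the py-pure-client internals:

```python
# Sketch of the async_req=True convention from the API methods above:
# async callers get back an AsyncResult and call .get() for the value.
from multiprocessing.pool import ThreadPool

_pool = ThreadPool(4)

def call_api(params, async_req=False):
    def do_request():
        # Stand-in for the actual HTTP round trip.
        return {'status': 'ok', 'params': params}
    if async_req:
        return _pool.apply_async(do_request)  # caller does .get() later
    return do_request()

thread = call_api({'names': ['lag1']}, async_req=True)
result = thread.get()
```

This mirrors the documented usage `thread = api.api23_link_aggregation_groups_post_with_http_info(..., async_req=True)` followed by `result = thread.get()`.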
def onnxifi_caffe2_net(pred_net, input_shapes, max_batch_size=1, max_seq_size=1, debug=False, use_onnx=True, merge_fp32_inputs_into_fp16=False, adjust_batch=True, black_list=None, weight_names=None): '\n Transform the caffe2_net by collapsing ONNXIFI-runnable nodes into Onnxifi c2 ops\n ' shape_hints = {}...
383,177,673,734,143,000
Transform the caffe2_net by collapsing ONNXIFI-runnable nodes into Onnxifi c2 ops
detectron/lib/python3.6/site-packages/caffe2/python/onnx/onnxifi.py
onnxifi_caffe2_net
JustinBear99/Mask_RCNN
python
def onnxifi_caffe2_net(pred_net, input_shapes, max_batch_size=1, max_seq_size=1, debug=False, use_onnx=True, merge_fp32_inputs_into_fp16=False, adjust_batch=True, black_list=None, weight_names=None): '\n \n ' shape_hints = {} for (k, v) in input_shapes.items(): shape_hints[k] = v pred_net_...