| body (string, 26–98.2k chars) | body_hash (int64) | docstring (string, 1–16.8k chars) | path (string, 5–230 chars) | name (string, 1–96 chars) | repository_name (string, 7–89 chars) | lang (1 class: python) | body_without_docstring (string, 20–98.2k chars) |
|---|---|---|---|---|---|---|---|
def inference(net, img_path='', output_path='./', output_name='f', use_gpu=True):
'\n\n :param net:\n :param img_path:\n :param output_path:\n :return:\n '
adj2_ = torch.from_numpy(graph.cihp2pascal_nlp_adj).float()
adj2_test = adj2_.unsqueeze(0).unsqueeze(0).expand(1, 1, 7, 20).cuda().transp... | 7,895,869,433,386,588,000 | :param net:
:param img_path:
:param output_path:
:return: | exp/inference/inference_dir.py | inference | ericwang0701/Graphonomy | python | def inference(net, img_path='', output_path='./', output_name='f', use_gpu=True):
'\n\n :param net:\n :param img_path:\n :param output_path:\n :return:\n '
adj2_ = torch.from_numpy(graph.cihp2pascal_nlp_adj).float()
adj2_test = adj2_.unsqueeze(0).unsqueeze(0).expand(1, 1, 7, 20).cuda().transpos... |
def chunk_date_range(start_date: DateTime, interval=pendulum.duration(days=1)) -> Iterable[Period]:
'\n Yields a list of the beginning and ending timestamps of each day between the start date and now.\n The return value is a pendulum.period\n '
now = pendulum.now()
while (start_date <= now):
... | 3,912,673,268,839,119,000 | Yields a list of the beginning and ending timestamps of each day between the start date and now.
The return value is a pendulum.period | airbyte-integrations/connectors/source-slack/source_slack/source.py | chunk_date_range | AetherUnbound/airbyte | python | def chunk_date_range(start_date: DateTime, interval=pendulum.duration(days=1)) -> Iterable[Period]:
'\n Yields a list of the beginning and ending timestamps of each day between the start date and now.\n The return value is a pendulum.period\n '
now = pendulum.now()
while (start_date <= now):
... |
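The `chunk_date_range` row above yields consecutive time windows from a start date up to now. The same windowing arithmetic can be sketched with the standard library alone (no pendulum; the explicit `end` parameter is an adaptation for testability):

```python
from datetime import datetime, timedelta
from typing import Iterable, Tuple

def chunk_date_range(start: datetime, end: datetime,
                     interval: timedelta = timedelta(days=1)) -> Iterable[Tuple[datetime, datetime]]:
    """Yield (window_start, window_end) pairs covering [start, end)."""
    while start < end:
        window_end = min(start + interval, end)
        yield (start, window_end)
        start = window_end

# Three one-day windows over 2021-01-01 .. 2021-01-04
windows = list(chunk_date_range(datetime(2021, 1, 1), datetime(2021, 1, 4)))
```

The original uses `pendulum.now()` as the open upper bound; passing `end` explicitly keeps the loop logic identical while making it deterministic.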
@property
@abstractmethod
def data_field(self) -> str:
'The name of the field in the response which contains the data' | 4,770,618,322,837,041,000 | The name of the field in the response which contains the data | airbyte-integrations/connectors/source-slack/source_slack/source.py | data_field | AetherUnbound/airbyte | python | @property
@abstractmethod
def data_field(self) -> str:
|
def stream_slices(self, stream_state: Mapping[(str, Any)]=None, **kwargs) -> Iterable[Optional[Mapping[(str, any)]]]:
"\n The logic for incrementally syncing threads is not very obvious, so buckle up.\n\n To get all messages in a thread, one must specify the channel and timestamp of the parent (first)... | -7,067,957,066,529,734,000 | The logic for incrementally syncing threads is not very obvious, so buckle up.
To get all messages in a thread, one must specify the channel and timestamp of the parent (first) message of that thread, basically its ID.
One complication is that threads can be updated at any time in the future. Therefore, if we wanted ... | airbyte-integrations/connectors/source-slack/source_slack/source.py | stream_slices | AetherUnbound/airbyte | python | def stream_slices(self, stream_state: Mapping[(str, Any)]=None, **kwargs) -> Iterable[Optional[Mapping[(str, any)]]]:
"\n The logic for incrementally syncing threads is not very obvious, so buckle up.\n\n To get all messages in a thread, one must specify the channel and timestamp of the parent (first)... |
def run_dask_function(config):
'Start a Dask Cluster using dask-kubernetes and run a function.\n\n Talks to kubernetes to create `n` amount of new `pods` with a dask worker inside of each\n forming a `dask` cluster. Then, a function specified from `config` is being imported and\n run with the given argumen... | -2,584,494,591,846,567,000 | Start a Dask Cluster using dask-kubernetes and run a function.
Talks to kubernetes to create `n` amount of new `pods` with a dask worker inside of each
forming a `dask` cluster. Then, a function specified from `config` is being imported and
run with the given arguments. The tasks created by this `function` are being r... | benchmark/btb_benchmark/kubernetes.py | run_dask_function | HDI-Project/BTB | python | def run_dask_function(config):
'Start a Dask Cluster using dask-kubernetes and run a function.\n\n Talks to kubernetes to create `n` amount of new `pods` with a dask worker inside of each\n forming a `dask` cluster. Then, a function specified from `config` is being imported and\n run with the given argumen... |
def run_on_kubernetes(config, namespace='default'):
'Run dask function inside a pod using the given config.\n\n Create a pod, using the local kubernetes configuration that starts a Dask Cluster\n using dask-kubernetes and runs a function specified within the `config` dictionary.\n\n Args:\n config... | 2,622,271,829,564,217,300 | Run dask function inside a pod using the given config.
Create a pod, using the local kubernetes configuration that starts a Dask Cluster
using dask-kubernetes and runs a function specified within the `config` dictionary.
Args:
config (dict):
Config dictionary.
namespace (str):
Kubernetes nam... | benchmark/btb_benchmark/kubernetes.py | run_on_kubernetes | HDI-Project/BTB | python | def run_on_kubernetes(config, namespace='default'):
'Run dask function inside a pod using the given config.\n\n Create a pod, using the local kubernetes configuration that starts a Dask Cluster\n using dask-kubernetes and runs a function specified within the `config` dictionary.\n\n Args:\n config... |
def generate_corpus_seeds(*, fuzz_pool, build_dir, seed_dir, targets):
'Generates new corpus seeds.\n\n Run {targets} without input, and outputs the generated corpus seeds to\n {seed_dir}.\n '
logging.info('Generating corpus seeds to {}'.format(seed_dir))
def job(command):
logging.debug("R... | -8,586,212,390,711,142,000 | Generates new corpus seeds.
Run {targets} without input, and outputs the generated corpus seeds to
{seed_dir}. | test/fuzz/test_runner.py | generate_corpus_seeds | BlockMechanic/crown | python | def generate_corpus_seeds(*, fuzz_pool, build_dir, seed_dir, targets):
'Generates new corpus seeds.\n\n Run {targets} without input, and outputs the generated corpus seeds to\n {seed_dir}.\n '
logging.info('Generating corpus seeds to {}'.format(seed_dir))
def job(command):
logging.debug("R... |
def get_event_categories(source_type: Optional[str]=None, opts: Optional[pulumi.InvokeOptions]=None) -> AwaitableGetEventCategoriesResult:
'\n ## Example Usage\n\n List the event categories of all the RDS resources.\n\n ```python\n import pulumi\n import pulumi_aws as aws\n\n example_event_categor... | 6,330,624,011,440,082,000 | ## Example Usage
List the event categories of all the RDS resources.
```python
import pulumi
import pulumi_aws as aws
example_event_categories = aws.rds.get_event_categories()
pulumi.export("example", example_event_categories.event_categories)
```
List the event categories specific to the RDS resource `db-snapshot`... | sdk/python/pulumi_aws/rds/get_event_categories.py | get_event_categories | mdop-wh/pulumi-aws | python | def get_event_categories(source_type: Optional[str]=None, opts: Optional[pulumi.InvokeOptions]=None) -> AwaitableGetEventCategoriesResult:
'\n ## Example Usage\n\n List the event categories of all the RDS resources.\n\n ```python\n import pulumi\n import pulumi_aws as aws\n\n example_event_categor... |
@property
@pulumi.getter(name='eventCategories')
def event_categories(self) -> List[str]:
'\n A list of the event categories.\n '
return pulumi.get(self, 'event_categories') | 7,065,916,001,102,644,000 | A list of the event categories. | sdk/python/pulumi_aws/rds/get_event_categories.py | event_categories | mdop-wh/pulumi-aws | python | @property
@pulumi.getter(name='eventCategories')
def event_categories(self) -> List[str]:
'\n \n '
return pulumi.get(self, 'event_categories') |
@property
@pulumi.getter
def id(self) -> str:
'\n The provider-assigned unique ID for this managed resource.\n '
return pulumi.get(self, 'id') | 3,214,403,723,836,065,300 | The provider-assigned unique ID for this managed resource. | sdk/python/pulumi_aws/rds/get_event_categories.py | id | mdop-wh/pulumi-aws | python | @property
@pulumi.getter
def id(self) -> str:
'\n \n '
return pulumi.get(self, 'id') |
def model_fn(model_dir):
'Load the PyTorch model from the `model_dir` directory.'
print('Loading model.')
model_info = {}
model_info_path = os.path.join(model_dir, 'model_info.pth')
with open(model_info_path, 'rb') as f:
model_info = torch.load(f)
print('model_info: {}'.format(model_info... | 7,045,043,767,301,961,000 | Load the PyTorch model from the `model_dir` directory. | Project_Plagiarism_Detection/source_pytorch/train.py | model_fn | ngocpc/Project_Plagiarism_Detection | python | def model_fn(model_dir):
print('Loading model.')
model_info = {}
model_info_path = os.path.join(model_dir, 'model_info.pth')
with open(model_info_path, 'rb') as f:
model_info = torch.load(f)
print('model_info: {}'.format(model_info))
device = torch.device(('cuda' if torch.cuda.is_av... |
def train(model, train_loader, epochs, criterion, optimizer, device):
'\n This is the training method that is called by the PyTorch training script. The parameters\n passed are as follows:\n model - The PyTorch model that we wish to train.\n train_loader - The PyTorch DataLoader that should be us... | -4,570,357,159,223,916,000 | This is the training method that is called by the PyTorch training script. The parameters
passed are as follows:
model - The PyTorch model that we wish to train.
train_loader - The PyTorch DataLoader that should be used during training.
epochs - The total number of epochs to train for.
criterion - The l... | Project_Plagiarism_Detection/source_pytorch/train.py | train | ngocpc/Project_Plagiarism_Detection | python | def train(model, train_loader, epochs, criterion, optimizer, device):
'\n This is the training method that is called by the PyTorch training script. The parameters\n passed are as follows:\n model - The PyTorch model that we wish to train.\n train_loader - The PyTorch DataLoader that should be us... |
def available(self):
'True if the solver is available'
return self.executable(self.path) | 4,336,824,989,999,015,000 | True if the solver is available | pulp/apis/gurobi_api.py | available | KCachel/pulp | python | def available(self):
return self.executable(self.path) |
def actualSolve(self, lp):
'Solve a well formulated lp problem'
if ('GUROBI_HOME' in os.environ):
if ('LD_LIBRARY_PATH' not in os.environ):
os.environ['LD_LIBRARY_PATH'] = ''
os.environ['LD_LIBRARY_PATH'] += ((':' + os.environ['GUROBI_HOME']) + '/lib')
if (not self.executable(sel... | -8,829,734,950,521,837,000 | Solve a well formulated lp problem | pulp/apis/gurobi_api.py | actualSolve | KCachel/pulp | python | def actualSolve(self, lp):
if ('GUROBI_HOME' in os.environ):
if ('LD_LIBRARY_PATH' not in os.environ):
os.environ['LD_LIBRARY_PATH'] = ''
os.environ['LD_LIBRARY_PATH'] += ((':' + os.environ['GUROBI_HOME']) + '/lib')
if (not self.executable(self.path)):
raise PulpSolverErro... |
def readsol(self, filename):
'Read a Gurobi solution file'
with open(filename) as my_file:
try:
next(my_file)
except StopIteration:
warnings.warn('GUROBI_CMD does not provide a good solution status for non-optimal solutions')
status = constants.LpStatusNotSolved
... | -1,829,265,237,492,473,000 | Read a Gurobi solution file | pulp/apis/gurobi_api.py | readsol | KCachel/pulp | python | def readsol(self, filename):
with open(filename) as my_file:
try:
next(my_file)
except StopIteration:
warnings.warn('GUROBI_CMD does not provide a good solution status for non-optimal solutions')
status = constants.LpStatusNotSolved
return (status, {}, {... |
def writesol(self, filename, vs):
'Writes a GUROBI solution file'
values = [(v.name, v.value()) for v in vs if (v.value() is not None)]
rows = []
for (name, value) in values:
rows.append('{} {}'.format(name, value))
with open(filename, 'w') as f:
f.write('\n'.join(rows))
return T... | -5,954,528,396,085,368,000 | Writes a GUROBI solution file | pulp/apis/gurobi_api.py | writesol | KCachel/pulp | python | def writesol(self, filename, vs):
values = [(v.name, v.value()) for v in vs if (v.value() is not None)]
rows = []
for (name, value) in values:
rows.append('{} {}'.format(name, value))
with open(filename, 'w') as f:
f.write('\n'.join(rows))
return True |
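The `writesol`/`readsol` pair above uses a plain text format of one `name value` line per variable. A minimal round trip of that format (hypothetical helper names; note the real `readsol` first skips a header line that Gurobi writes, which this sketch omits because it also omits writing one):

```python
import os
import tempfile

def write_solution(filename, values):
    # One "name value" line per variable, as in GUROBI_CMD.writesol
    with open(filename, 'w') as f:
        f.write('\n'.join('{} {}'.format(name, value) for name, value in values))

def read_solution(filename):
    # Parse the same format back into a {name: value} dict
    sol = {}
    with open(filename) as f:
        for line in f:
            name, value = line.split()
            sol[name] = float(value)
    return sol

fd, path = tempfile.mkstemp()
os.close(fd)
write_solution(path, [('x', 1.0), ('y', 2.5)])
solution = read_solution(path)
os.remove(path)
```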
def __init__(self, mip=True, msg=True, timeLimit=None, epgap=None, **solverParams):
'\n Initializes the Gurobi solver.\n\n @param mip: if False the solver will solve a MIP as an LP\n @param msg: displays information from the solver to stdout\n @param timeLimit: sets the m... | -6,638,805,552,735,199,000 | Initializes the Gurobi solver.
@param mip: if False the solver will solve a MIP as an LP
@param msg: displays information from the solver to stdout
@param timeLimit: sets the maximum time for solution
@param epgap: sets the integer bound gap | pulp/apis/gurobi_api.py | __init__ | KCachel/pulp | python | def __init__(self, mip=True, msg=True, timeLimit=None, epgap=None, **solverParams):
'\n Initializes the Gurobi solver.\n\n @param mip: if False the solver will solve a MIP as an LP\n @param msg: displays information from the solver to stdout\n @param timeLimit: sets the m... |
def available(self):
'True if the solver is available'
return True | -8,466,514,769,147,015,000 | True if the solver is available | pulp/apis/gurobi_api.py | available | KCachel/pulp | python | def available(self):
return True |
def callSolver(self, lp, callback=None):
'Solves the problem with gurobi\n '
self.solveTime = (- clock())
lp.solverModel.optimize(callback=callback)
self.solveTime += clock() | 8,155,221,425,289,192,000 | Solves the problem with gurobi | pulp/apis/gurobi_api.py | callSolver | KCachel/pulp | python | def callSolver(self, lp, callback=None):
'\n '
self.solveTime = (- clock())
lp.solverModel.optimize(callback=callback)
self.solveTime += clock() |
def buildSolverModel(self, lp):
'\n Takes the pulp lp model and translates it into a gurobi model\n '
log.debug('create the gurobi model')
lp.solverModel = gurobipy.Model(lp.name)
log.debug('set the sense of the problem')
if (lp.sense == constants.LpMaximize):
lp.solver... | -7,679,258,590,792,931,000 | Takes the pulp lp model and translates it into a gurobi model | pulp/apis/gurobi_api.py | buildSolverModel | KCachel/pulp | python | def buildSolverModel(self, lp):
'\n \n '
log.debug('create the gurobi model')
lp.solverModel = gurobipy.Model(lp.name)
log.debug('set the sense of the problem')
if (lp.sense == constants.LpMaximize):
lp.solverModel.setAttr('ModelSense', (- 1))
if self.timeLimit:
... |
def actualSolve(self, lp, callback=None):
'\n Solve a well formulated lp problem\n\n creates a gurobi model, variables and constraints and attaches\n them to the lp model which it then solves\n '
self.buildSolverModel(lp)
log.debug('Solve the Model using gurobi')
... | 5,262,698,481,114,730,000 | Solve a well formulated lp problem
creates a gurobi model, variables and constraints and attaches
them to the lp model which it then solves | pulp/apis/gurobi_api.py | actualSolve | KCachel/pulp | python | def actualSolve(self, lp, callback=None):
'\n Solve a well formulated lp problem\n\n creates a gurobi model, variables and constraints and attaches\n them to the lp model which it then solves\n '
self.buildSolverModel(lp)
log.debug('Solve the Model using gurobi')
... |
def actualResolve(self, lp, callback=None):
'\n Solve a well formulated lp problem\n\n uses the old solver and modifies the rhs of the modified constraints\n '
log.debug('Resolve the Model using gurobi')
for constraint in lp.constraints.values():
if constraint.modifi... | 2,901,966,951,382,373,400 | Solve a well formulated lp problem
uses the old solver and modifies the rhs of the modified constraints | pulp/apis/gurobi_api.py | actualResolve | KCachel/pulp | python | def actualResolve(self, lp, callback=None):
'\n Solve a well formulated lp problem\n\n uses the old solver and modifies the rhs of the modified constraints\n '
log.debug('Resolve the Model using gurobi')
for constraint in lp.constraints.values():
if constraint.modifi... |
def available(self):
'True if the solver is available'
return False | 3,812,336,950,956,385,300 | True if the solver is available | pulp/apis/gurobi_api.py | available | KCachel/pulp | python | def available(self):
return False |
def actualSolve(self, lp, callback=None):
'Solve a well formulated lp problem'
raise PulpSolverError('GUROBI: Not Available') | 7,349,007,719,746,268,000 | Solve a well formulated lp problem | pulp/apis/gurobi_api.py | actualSolve | KCachel/pulp | python | def actualSolve(self, lp, callback=None):
raise PulpSolverError('GUROBI: Not Available') |
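The three `available` variants in this file implement a common fallback: a solver class reports availability, and any attempt to solve through an unavailable backend raises immediately. A stripped-down sketch of that pattern (class and exception names here are illustrative, not pulp's):

```python
class SolverError(Exception):
    """Raised when a solver backend is used but not installed."""

class MissingSolver:
    def available(self):
        # Mirrors the GUROBI stub: always unavailable
        return False

    def solve(self, problem):
        raise SolverError('GUROBI: Not Available')

solver = MissingSolver()
is_up = solver.available()
try:
    solver.solve(None)
    failed = False
except SolverError:
    failed = True
```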
def model_architecture(self, num_features, num_actions, max_history_len):
'Build a keras model and return a compiled model.\n\n :param max_history_len: The maximum number of historical\n turns used to decide on next action\n '
from keras.layers import LSTM, Activatio... | 2,872,324,262,310,207,500 | Build a keras model and return a compiled model.
:param max_history_len: The maximum number of historical
turns used to decide on next action | rasa_core/policies/keras_policy.py | model_architecture | AdrianAdamiec/rasa_core | python | def model_architecture(self, num_features, num_actions, max_history_len):
'Build a keras model and return a compiled model.\n\n :param max_history_len: The maximum number of historical\n turns used to decide on next action\n '
from keras.layers import LSTM, Activatio... |
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
'3x3 convolution with padding'
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) | 5,146,063,946,393,382,000 | 3x3 convolution with padding | segmentation_models_pytorch/encoders/zerocenter.py | conv3x3 | vinnamkim/segmentation_models.pytorch | python | def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) |
def conv1x1(in_planes, out_planes, stride=1):
'1x1 convolution'
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) | 2,748,212,586,768,409,000 | 1x1 convolution | segmentation_models_pytorch/encoders/zerocenter.py | conv1x1 | vinnamkim/segmentation_models.pytorch | python | def conv1x1(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) |
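`conv3x3` sets `padding=dilation`, which keeps the spatial size unchanged at stride 1 for a 3x3 kernel. That follows from the standard output-size formula, out = floor((in + 2*pad - dilation*(k-1) - 1) / stride) + 1, which can be checked without torch:

```python
def conv_out_size(n, k, stride=1, pad=0, dilation=1):
    # Standard convolution output-size formula (matches nn.Conv2d docs)
    return (n + 2 * pad - dilation * (k - 1) - 1) // stride + 1

# conv3x3 with padding == dilation preserves size at stride 1 ...
assert conv_out_size(56, k=3, stride=1, pad=1, dilation=1) == 56
assert conv_out_size(56, k=3, stride=1, pad=2, dilation=2) == 56
# ... and conv1x1 needs no padding at all
assert conv_out_size(56, k=1) == 56
```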
def resnet18(pretrained=False, progress=True, **kwargs):
'ResNet-18 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... | 3,628,755,405,227,026,400 | ResNet-18 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnet18 | vinnamkim/segmentation_models.pytorch | python | def resnet18(pretrained=False, progress=True, **kwargs):
'ResNet-18 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... |
def resnet34(pretrained=False, progress=True, **kwargs):
'ResNet-34 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... | 2,970,724,482,834,467,300 | ResNet-34 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnet34 | vinnamkim/segmentation_models.pytorch | python | def resnet34(pretrained=False, progress=True, **kwargs):
'ResNet-34 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... |
def resnet50(pretrained=False, progress=True, **kwargs):
'ResNet-50 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... | 8,498,227,001,201,233,000 | ResNet-50 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnet50 | vinnamkim/segmentation_models.pytorch | python | def resnet50(pretrained=False, progress=True, **kwargs):
'ResNet-50 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progres... |
def resnet101(pretrained=False, progress=True, **kwargs):
'ResNet-101 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progr... | -4,235,029,202,871,245,000 | ResNet-101 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnet101 | vinnamkim/segmentation_models.pytorch | python | def resnet101(pretrained=False, progress=True, **kwargs):
'ResNet-101 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progr... |
def resnet152(pretrained=False, progress=True, **kwargs):
'ResNet-152 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progr... | -1,663,645,882,722,182,100 | ResNet-152 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnet152 | vinnamkim/segmentation_models.pytorch | python | def resnet152(pretrained=False, progress=True, **kwargs):
'ResNet-152 model from\n `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progr... |
def resnext50_32x4d(pretrained=False, progress=True, **kwargs):
'ResNeXt-50 32x4d model from\n `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool)... | -4,696,768,546,196,444,000 | ResNeXt-50 32x4d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnext50_32x4d | vinnamkim/segmentation_models.pytorch | python | def resnext50_32x4d(pretrained=False, progress=True, **kwargs):
'ResNeXt-50 32x4d model from\n `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool)... |
def resnext101_32x8d(pretrained=False, progress=True, **kwargs):
'ResNeXt-101 32x8d model from\n `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (boo... | -3,328,597,472,209,807,400 | ResNeXt-101 32x8d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr | segmentation_models_pytorch/encoders/zerocenter.py | resnext101_32x8d | vinnamkim/segmentation_models.pytorch | python | def resnext101_32x8d(pretrained=False, progress=True, **kwargs):
'ResNeXt-101 32x8d model from\n `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (boo... |
def wide_resnet50_2(pretrained=False, progress=True, **kwargs):
'Wide ResNet-50-2 model from\n `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n The model is the same as ResNet except for the bottleneck number of channels\n which is twice larger in every block. The number of channels i... | 2,511,253,809,351,651,300 | Wide ResNet-50-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in every block. The number of channels in outer 1x1
convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
... | segmentation_models_pytorch/encoders/zerocenter.py | wide_resnet50_2 | vinnamkim/segmentation_models.pytorch | python | def wide_resnet50_2(pretrained=False, progress=True, **kwargs):
'Wide ResNet-50-2 model from\n `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n The model is the same as ResNet except for the bottleneck number of channels\n which is twice larger in every block. The number of channels i... |
def wide_resnet101_2(pretrained=False, progress=True, **kwargs):
'Wide ResNet-101-2 model from\n `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n The model is the same as ResNet except for the bottleneck number of channels\n which is twice larger in every block. The number of channels... | -1,834,030,766,958,855,400 | Wide ResNet-101-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in every block. The number of channels in outer 1x1
convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048... | segmentation_models_pytorch/encoders/zerocenter.py | wide_resnet101_2 | vinnamkim/segmentation_models.pytorch | python | def wide_resnet101_2(pretrained=False, progress=True, **kwargs):
'Wide ResNet-101-2 model from\n `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n The model is the same as ResNet except for the bottleneck number of channels\n which is twice larger in every block. The number of channels... |
@staticmethod
def _parse(results):
'Parse the test output.\n\n See also https://github.com/axboe/fio/blob/master/HOWTO\n '
stats = defaultdict(int)
for (host, output) in results.items():
for job in output.split():
stats[host] += int(job.split(';')[7])
stats[host... | 3,945,342,443,185,179,000 | Parse the test output.
See also https://github.com/axboe/fio/blob/master/HOWTO | perfrunner/tests/fio.py | _parse | agyryk/perfrunner | python | @staticmethod
def _parse(results):
'Parse the test output.\n\n See also https://github.com/axboe/fio/blob/master/HOWTO\n '
stats = defaultdict(int)
for (host, output) in results.items():
for job in output.split():
stats[host] += int(job.split(';')[7])
stats[host... |
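The fio `_parse` row above reads fio's terse (semicolon-separated) output and sums field index 7 per host; per the fio HOWTO that region of the terse format carries read statistics. A self-contained sketch of the aggregation, fed a fabricated terse string so it runs without fio:

```python
from collections import defaultdict

def parse_fio(results):
    """Sum field 7 of each whitespace-separated job's terse output, per host."""
    stats = defaultdict(int)
    for host, output in results.items():
        for job in output.split():
            stats[host] += int(job.split(';')[7])
    return stats

# Fabricated terse output: fields are "0;1;...;19", so field 7 is 7
fake_job = ';'.join(str(i) for i in range(20))
stats = parse_fio({'host1': '{} {}'.format(fake_job, fake_job)})
```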
def forward(self, outputs, batch):
'\n :param outputs:\n :param batch:\n :return:\n '
opt = self.opt
(hm_loss, wh_loss, off_loss, id_loss) = (0.0, 0.0, 0.0, 0.0)
for s in range(opt.num_stacks):
output = outputs[s]
if (not opt.mse_loss):
output['hm'... | 1,284,021,264,384,355,000 | :param outputs:
:param batch:
:return: | src/lib/trains/mot.py | forward | CaptainEven/FairMOTVehicle | python | def forward(self, outputs, batch):
'\n :param outputs:\n :param batch:\n :return:\n '
opt = self.opt
(hm_loss, wh_loss, off_loss, id_loss) = (0.0, 0.0, 0.0, 0.0)
for s in range(opt.num_stacks):
output = outputs[s]
if (not opt.mse_loss):
output['hm'... |
def _get_metadata(self, result: Dict[(str, Any)]) -> List[EventMetadataEntry]:
'\n Here, we run queries against our output Snowflake database tables to add additional context\n to our asset materializations.\n '
table_name = result['unique_id'].split('.')[(- 1)]
with connect_snowflake(c... | -9,076,837,992,323,186,000 | Here, we run queries against our output Snowflake database tables to add additional context
to our asset materializations. | examples/hacker_news/hacker_news/resources/dbt_asset_resource.py | _get_metadata | AndreaGiardini/dagster | python | def _get_metadata(self, result: Dict[(str, Any)]) -> List[EventMetadataEntry]:
'\n Here, we run queries against our output Snowflake database tables to add additional context\n to our asset materializations.\n '
table_name = result['unique_id'].split('.')[(- 1)]
with connect_snowflake(c... |
def get_form_instance_from_request(request):
' Get the form class from the request. '
form_id = request.POST.get('form_id')
if (form_id and form_id.isdigit()):
try:
return Form.objects.get(pk=int(form_id))
except Form.DoesNotExist:
pass
return None | -7,506,938,458,354,489,000 | Get the form class from the request. | wagtailstreamforms/utils/requests.py | get_form_instance_from_request | AsankaL/wagtailstreamforms | python | def get_form_instance_from_request(request):
' '
form_id = request.POST.get('form_id')
if (form_id and form_id.isdigit()):
try:
return Form.objects.get(pk=int(form_id))
except Form.DoesNotExist:
pass
return None |
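`get_form_instance_from_request` shows a common guard pattern: read a POSTed id, validate that it is a digit string, then look the object up and swallow a missing-object error. The same logic sketched against a plain dict standing in for the Django ORM (all names below are illustrative):

```python
def get_form_instance(post_data, registry):
    """Return registry[form_id] if the POSTed id is a digit string, else None."""
    form_id = post_data.get('form_id')
    if form_id and form_id.isdigit():
        # dict.get plays the role of Form.objects.get + DoesNotExist handling
        return registry.get(int(form_id))
    return None

registry = {1: 'contact-form'}
found = get_form_instance({'form_id': '1'}, registry)
missing = get_form_instance({'form_id': 'abc'}, registry)
```

The `isdigit()` check matters: it rejects both an absent `form_id` and a non-numeric one before `int()` can raise.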
def min_ea():
'\n Return the lowest mapped address of the IDB.\n Wrapper on :meth:`BipIdb.min_ea`.\n '
return BipIdb.min_ea() | 3,291,513,428,824,812,000 | Return the lowest mapped address of the IDB.
Wrapper on :meth:`BipIdb.min_ea`. | bip/base/bipidb.py | min_ea | BrunoPujos/bip | python | def min_ea():
'\n Return the lowest mapped address of the IDB.\n Wrapper on :meth:`BipIdb.min_ea`.\n '
return BipIdb.min_ea() |
def max_ea():
'\n Return the highest mapped address of the IDB.\n Wrapper on :meth:`BipIdb.max_ea`.\n '
return BipIdb.max_ea() | 1,639,269,415,500,825,000 | Return the highest mapped address of the IDB.
Wrapper on :meth:`BipIdb.max_ea`. | bip/base/bipidb.py | max_ea | BrunoPujos/bip | python | def max_ea():
'\n Return the highest mapped address of the IDB.\n Wrapper on :meth:`BipIdb.max_ea`.\n '
return BipIdb.max_ea() |
def Here():
'\n Return current screen address.\n\n :return: The current address.\n '
return BipIdb.current_addr() | 2,565,687,747,275,942,400 | Return current screen address.
:return: The current address. | bip/base/bipidb.py | Here | BrunoPujos/bip | python | def Here():
'\n Return current screen address.\n\n :return: The current address.\n '
return BipIdb.current_addr() |
@staticmethod
def ptr_size():
'\n Return the number of bits in a pointer.\n \n :rtype: int\n '
info = idaapi.get_inf_structure()
if info.is_64bit():
bits = 64
elif info.is_32bit():
bits = 32
else:
bits = 16
return bits | 8,048,461,866,271,615,000 | Return the number of bits in a pointer.
:rtype: int | bip/base/bipidb.py | ptr_size | BrunoPujos/bip | python | @staticmethod
def ptr_size():
'\n Return the number of bits in a pointer.\n \n :rtype: int\n '
info = idaapi.get_inf_structure()
if info.is_64bit():
bits = 64
elif info.is_32bit():
bits = 32
else:
bits = 16
return bits |
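The `ptr_size` record reduces IDA's bitness flags to a pointer width, defaulting to 16 bits. The same three-way dispatch decoupled from `idaapi`; the two boolean parameters stand in for `info.is_64bit()` / `info.is_32bit()` and are assumptions for illustration:

```python
def ptr_size_bits(is_64bit, is_32bit):
    """Pointer width in bits from the two IDA bitness flags.

    64-bit wins over 32-bit, and anything else is treated as 16-bit,
    matching the if/elif/else order in the record above.
    """
    if is_64bit:
        return 64
    if is_32bit:
        return 32
    return 16
```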
@staticmethod
def min_ea():
'\n Return the lowest mapped address of the IDB.\n '
return idc.get_inf_attr(idc.INF_MIN_EA) | -6,052,441,912,866,462,000 | Return the lowest mapped address of the IDB. | bip/base/bipidb.py | min_ea | BrunoPujos/bip | python | @staticmethod
def min_ea():
'\n \n '
return idc.get_inf_attr(idc.INF_MIN_EA) |
@staticmethod
def max_ea():
'\n Return the highest mapped address of the IDB.\n '
return idc.get_inf_attr(idc.INF_MAX_EA) | -9,029,114,986,457,386,000 | Return the highest mapped address of the IDB. | bip/base/bipidb.py | max_ea | BrunoPujos/bip | python | @staticmethod
def max_ea():
'\n \n '
return idc.get_inf_attr(idc.INF_MAX_EA) |
@staticmethod
def image_base():
'\n Return the base address of the image loaded in the IDB.\n \n This is different from :meth:`~BipIdb.min_ea` which is the lowest\n *mapped* address.\n '
return idaapi.get_imagebase() | 434,545,259,254,879,040 | Return the base address of the image loaded in the IDB.
This is different from :meth:`~BipIdb.min_ea` which is the lowest
*mapped* address. | bip/base/bipidb.py | image_base | BrunoPujos/bip | python | @staticmethod
def image_base():
'\n Return the base address of the image loaded in the IDB.\n \n This is different from :meth:`~BipIdb.min_ea` which is the lowest\n *mapped* address.\n '
return idaapi.get_imagebase() |
@staticmethod
def current_addr():
'\n Return current screen address.\n\n :return: The current address selected.\n '
return ida_kernwin.get_screen_ea() | -2,356,439,664,913,423,400 | Return current screen address.
:return: The current address selected. | bip/base/bipidb.py | current_addr | BrunoPujos/bip | python | @staticmethod
def current_addr():
'\n Return current screen address.\n\n :return: The current address selected.\n '
return ida_kernwin.get_screen_ea() |
@staticmethod
def relea(addr):
'\n Calculate the relative address compare to the IDA image base.\n The calcul done is ``ADDR - IMGBASE``.\n \n The opposite of this function is :func:`absea`.\n \n :param int addr: The absolute address to translate.\n :retu... | -1,530,190,527,725,429,000 | Calculate the relative address compare to the IDA image base.
The calculation performed is ``ADDR - IMGBASE``.
The opposite of this function is :func:`absea`.
:param int addr: The absolute address to translate.
:return: The offset from image base corresponding to ``addr``.
:rtype: int | bip/base/bipidb.py | relea | BrunoPujos/bip | python | @staticmethod
def relea(addr):
'\n Calculate the relative address compare to the IDA image base.\n The calcul done is ``ADDR - IMGBASE``.\n \n The opposite of this function is :func:`absea`.\n \n :param int addr: The absolute address to translate.\n :retu... |
@staticmethod
def absea(offset):
'\n Calculate the absolute address from an offset of the image base.\n The calcul done is ``OFFSET + IMGBASE`` .\n \n The opposite of this function is :func:`relea`.\n \n :param int offset: The offset from the beginning of the image ... | 3,687,362,688,392,563,000 | Calculate the absolute address from an offset of the image base.
The calculation performed is ``OFFSET + IMGBASE``.
The opposite of this function is :func:`relea`.
:param int offset: The offset from the beginning of the image base
to translate.
:return: The absolute address corresponding to the offset.
:rtype: int | bip/base/bipidb.py | absea | BrunoPujos/bip | python | @staticmethod
def absea(offset):
'\n Calculate the absolute address from an offset of the image base.\n The calcul done is ``OFFSET + IMGBASE`` .\n \n The opposite of this function is :func:`relea`.\n \n :param int offset: The offset from the beginning of the image ... |
def __init__(self, name=None, run_id=None, start_time=None, end_time=None, succeeded=None, local_vars_configuration=None):
'AuditProcess - a model defined in OpenAPI"\n \n :param name: (required)\n :type name: str\n :param run_id: (required)\n :type run_id: str\n :param s... | -2,539,799,742,336,256,000 | AuditProcess - a model defined in OpenAPI"
:param name: (required)
:type name: str
:param run_id: (required)
:type run_id: str
:param start_time: (required)
:type start_time: datetime
:param end_time:
:type end_time: datetime
:param succeeded:
:type succeeded: bool | sdk/finbourne_insights/models/audit_process.py | __init__ | finbourne/finbourne-insights-sdk-python | python | def __init__(self, name=None, run_id=None, start_time=None, end_time=None, succeeded=None, local_vars_configuration=None):
'AuditProcess - a model defined in OpenAPI"\n \n :param name: (required)\n :type name: str\n :param run_id: (required)\n :type run_id: str\n :param s... |
@property
def name(self):
'Gets the name of this AuditProcess. # noqa: E501\n\n\n :return: The name of this AuditProcess. # noqa: E501\n :rtype: str\n '
return self._name | -8,446,075,018,189,653,000 | Gets the name of this AuditProcess. # noqa: E501
:return: The name of this AuditProcess. # noqa: E501
:rtype: str | sdk/finbourne_insights/models/audit_process.py | name | finbourne/finbourne-insights-sdk-python | python | @property
def name(self):
'Gets the name of this AuditProcess. # noqa: E501\n\n\n :return: The name of this AuditProcess. # noqa: E501\n :rtype: str\n '
return self._name |
@name.setter
def name(self, name):
'Sets the name of this AuditProcess.\n\n\n :param name: The name of this AuditProcess. # noqa: E501\n :type name: str\n '
if (self.local_vars_configuration.client_side_validation and (name is None)):
raise ValueError('Invalid value for `name`, mus... | -1,908,435,027,012,927,500 | Sets the name of this AuditProcess.
:param name: The name of this AuditProcess. # noqa: E501
:type name: str | sdk/finbourne_insights/models/audit_process.py | name | finbourne/finbourne-insights-sdk-python | python | @name.setter
def name(self, name):
'Sets the name of this AuditProcess.\n\n\n :param name: The name of this AuditProcess. # noqa: E501\n :type name: str\n '
if (self.local_vars_configuration.client_side_validation and (name is None)):
raise ValueError('Invalid value for `name`, mus... |
@property
def run_id(self):
'Gets the run_id of this AuditProcess. # noqa: E501\n\n\n :return: The run_id of this AuditProcess. # noqa: E501\n :rtype: str\n '
return self._run_id | 8,597,131,932,081,543,000 | Gets the run_id of this AuditProcess. # noqa: E501
:return: The run_id of this AuditProcess. # noqa: E501
:rtype: str | sdk/finbourne_insights/models/audit_process.py | run_id | finbourne/finbourne-insights-sdk-python | python | @property
def run_id(self):
'Gets the run_id of this AuditProcess. # noqa: E501\n\n\n :return: The run_id of this AuditProcess. # noqa: E501\n :rtype: str\n '
return self._run_id |
@run_id.setter
def run_id(self, run_id):
'Sets the run_id of this AuditProcess.\n\n\n :param run_id: The run_id of this AuditProcess. # noqa: E501\n :type run_id: str\n '
if (self.local_vars_configuration.client_side_validation and (run_id is None)):
raise ValueError('Invalid value... | 5,772,166,214,027,327,000 | Sets the run_id of this AuditProcess.
:param run_id: The run_id of this AuditProcess. # noqa: E501
:type run_id: str | sdk/finbourne_insights/models/audit_process.py | run_id | finbourne/finbourne-insights-sdk-python | python | @run_id.setter
def run_id(self, run_id):
'Sets the run_id of this AuditProcess.\n\n\n :param run_id: The run_id of this AuditProcess. # noqa: E501\n :type run_id: str\n '
if (self.local_vars_configuration.client_side_validation and (run_id is None)):
raise ValueError('Invalid value... |
@property
def start_time(self):
'Gets the start_time of this AuditProcess. # noqa: E501\n\n\n :return: The start_time of this AuditProcess. # noqa: E501\n :rtype: datetime\n '
return self._start_time | -1,591,834,829,298,791,700 | Gets the start_time of this AuditProcess. # noqa: E501
:return: The start_time of this AuditProcess. # noqa: E501
:rtype: datetime | sdk/finbourne_insights/models/audit_process.py | start_time | finbourne/finbourne-insights-sdk-python | python | @property
def start_time(self):
'Gets the start_time of this AuditProcess. # noqa: E501\n\n\n :return: The start_time of this AuditProcess. # noqa: E501\n :rtype: datetime\n '
return self._start_time |
@start_time.setter
def start_time(self, start_time):
'Sets the start_time of this AuditProcess.\n\n\n :param start_time: The start_time of this AuditProcess. # noqa: E501\n :type start_time: datetime\n '
if (self.local_vars_configuration.client_side_validation and (start_time is None)):
... | 4,984,940,772,512,307,000 | Sets the start_time of this AuditProcess.
:param start_time: The start_time of this AuditProcess. # noqa: E501
:type start_time: datetime | sdk/finbourne_insights/models/audit_process.py | start_time | finbourne/finbourne-insights-sdk-python | python | @start_time.setter
def start_time(self, start_time):
'Sets the start_time of this AuditProcess.\n\n\n :param start_time: The start_time of this AuditProcess. # noqa: E501\n :type start_time: datetime\n '
if (self.local_vars_configuration.client_side_validation and (start_time is None)):
... |
@property
def end_time(self):
'Gets the end_time of this AuditProcess. # noqa: E501\n\n\n :return: The end_time of this AuditProcess. # noqa: E501\n :rtype: datetime\n '
return self._end_time | 8,328,583,657,415,390,000 | Gets the end_time of this AuditProcess. # noqa: E501
:return: The end_time of this AuditProcess. # noqa: E501
:rtype: datetime | sdk/finbourne_insights/models/audit_process.py | end_time | finbourne/finbourne-insights-sdk-python | python | @property
def end_time(self):
'Gets the end_time of this AuditProcess. # noqa: E501\n\n\n :return: The end_time of this AuditProcess. # noqa: E501\n :rtype: datetime\n '
return self._end_time |
@end_time.setter
def end_time(self, end_time):
'Sets the end_time of this AuditProcess.\n\n\n :param end_time: The end_time of this AuditProcess. # noqa: E501\n :type end_time: datetime\n '
self._end_time = end_time | -809,817,271,008,857,500 | Sets the end_time of this AuditProcess.
:param end_time: The end_time of this AuditProcess. # noqa: E501
:type end_time: datetime | sdk/finbourne_insights/models/audit_process.py | end_time | finbourne/finbourne-insights-sdk-python | python | @end_time.setter
def end_time(self, end_time):
'Sets the end_time of this AuditProcess.\n\n\n :param end_time: The end_time of this AuditProcess. # noqa: E501\n :type end_time: datetime\n '
self._end_time = end_time |
@property
def succeeded(self):
'Gets the succeeded of this AuditProcess. # noqa: E501\n\n\n :return: The succeeded of this AuditProcess. # noqa: E501\n :rtype: bool\n '
return self._succeeded | 5,928,420,784,814,148,000 | Gets the succeeded of this AuditProcess. # noqa: E501
:return: The succeeded of this AuditProcess. # noqa: E501
:rtype: bool | sdk/finbourne_insights/models/audit_process.py | succeeded | finbourne/finbourne-insights-sdk-python | python | @property
def succeeded(self):
'Gets the succeeded of this AuditProcess. # noqa: E501\n\n\n :return: The succeeded of this AuditProcess. # noqa: E501\n :rtype: bool\n '
return self._succeeded |
@succeeded.setter
def succeeded(self, succeeded):
'Sets the succeeded of this AuditProcess.\n\n\n :param succeeded: The succeeded of this AuditProcess. # noqa: E501\n :type succeeded: bool\n '
self._succeeded = succeeded | 8,987,236,863,677,922,000 | Sets the succeeded of this AuditProcess.
:param succeeded: The succeeded of this AuditProcess. # noqa: E501
:type succeeded: bool | sdk/finbourne_insights/models/audit_process.py | succeeded | finbourne/finbourne-insights-sdk-python | python | @succeeded.setter
def succeeded(self, succeeded):
'Sets the succeeded of this AuditProcess.\n\n\n :param succeeded: The succeeded of this AuditProcess. # noqa: E501\n :type succeeded: bool\n '
self._succeeded = succeeded |
def to_dict(self, serialize=False):
'Returns the model properties as a dict'
result = {}
def convert(x):
if hasattr(x, 'to_dict'):
args = getfullargspec(x.to_dict).args
if (len(args) == 1):
return x.to_dict()
else:
return x.to_dict... | -1,664,115,404,714,547,500 | Returns the model properties as a dict | sdk/finbourne_insights/models/audit_process.py | to_dict | finbourne/finbourne-insights-sdk-python | python | def to_dict(self, serialize=False):
result = {}
def convert(x):
if hasattr(x, 'to_dict'):
args = getfullargspec(x.to_dict).args
if (len(args) == 1):
return x.to_dict()
else:
return x.to_dict(serialize)
else:
re... |
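The `to_dict` record dispatches on the arity of a nested object's own `to_dict`, forwarding `serialize` only when it is accepted. A self-contained sketch of that dispatch; the `Leaf`/`Node` classes are invented stand-ins for generated models:

```python
from inspect import getfullargspec


class Leaf:
    def to_dict(self):  # only self: called without serialize
        return {"leaf": True}


class Node:
    def to_dict(self, serialize=False):  # extra arg: serialize is forwarded
        return {"serialized": serialize}


def convert(x, serialize=True):
    """Dispatch used by the generated to_dict: pass serialize only if accepted.

    getfullargspec on a bound method still includes 'self', so a
    one-element args list means to_dict takes no extra parameters.
    """
    if hasattr(x, "to_dict"):
        args = getfullargspec(x.to_dict).args
        if len(args) == 1:
            return x.to_dict()
        return x.to_dict(serialize)
    return x  # plain values pass through unchanged
```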
def to_str(self):
'Returns the string representation of the model'
return pprint.pformat(self.to_dict()) | 5,849,158,643,760,736,000 | Returns the string representation of the model | sdk/finbourne_insights/models/audit_process.py | to_str | finbourne/finbourne-insights-sdk-python | python | def to_str(self):
return pprint.pformat(self.to_dict()) |
def __repr__(self):
'For `print` and `pprint`'
return self.to_str() | -8,960,031,694,814,905,000 | For `print` and `pprint` | sdk/finbourne_insights/models/audit_process.py | __repr__ | finbourne/finbourne-insights-sdk-python | python | def __repr__(self):
return self.to_str() |
def __eq__(self, other):
'Returns true if both objects are equal'
if (not isinstance(other, AuditProcess)):
return False
return (self.to_dict() == other.to_dict()) | -5,086,045,539,816,114,000 | Returns true if both objects are equal | sdk/finbourne_insights/models/audit_process.py | __eq__ | finbourne/finbourne-insights-sdk-python | python | def __eq__(self, other):
if (not isinstance(other, AuditProcess)):
return False
return (self.to_dict() == other.to_dict()) |
def __ne__(self, other):
'Returns true if both objects are not equal'
if (not isinstance(other, AuditProcess)):
return True
return (self.to_dict() != other.to_dict()) | 2,383,757,369,715,803,600 | Returns true if both objects are not equal | sdk/finbourne_insights/models/audit_process.py | __ne__ | finbourne/finbourne-insights-sdk-python | python | def __ne__(self, other):
if (not isinstance(other, AuditProcess)):
return True
return (self.to_dict() != other.to_dict()) |
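The `__eq__`/`__ne__` records compare models through their `to_dict()` output after an `isinstance` check. A minimal class showing the same shape; `Point` is an invented stand-in, and `__ne__` is written here as the negation of `__eq__` rather than the generated direct `!=` comparison:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def to_dict(self):
        return {"x": self.x, "y": self.y}

    def __eq__(self, other):
        if not isinstance(other, Point):
            return False  # mirrors the generated type guard
        return self.to_dict() == other.to_dict()

    def __ne__(self, other):
        return not self.__eq__(other)
```

Comparing via `to_dict()` keeps equality in sync with serialization, which is why the generated SDK models use it.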
def _get_sparse_matrixes(X):
'Create csc, csr and coo sparse matrix from any of the above\n\n Arguments:\n X {array-like, csc, csr or coo sparse matrix}\n\n Returns:\n csc, csr, coo\n '
X_coo = X_csc = X_csr = None
if scipy.sparse.isspmatrix_coo(X):
X_coo = X
X_csr = X... | -8,066,853,925,303,291,000 | Create csc, csr and coo sparse matrix from any of the above
Arguments:
X {array-like, csc, csr or coo sparse matrix}
Returns:
csc, csr, coo | src/interface_py/h2o4gpu/solvers/factorization.py | _get_sparse_matrixes | aaron8tang/h2o4gpu | python | def _get_sparse_matrixes(X):
'Create csc, csr and coo sparse matrix from any of the above\n\n Arguments:\n X {array-like, csc, csr or coo sparse matrix}\n\n Returns:\n csc, csr, coo\n '
X_coo = X_csc = X_csr = None
if scipy.sparse.isspmatrix_coo(X):
X_coo = X
X_csr = X... |
def fit(self, X, y=None, X_test=None, X_BATCHES=1, THETA_BATCHES=1, early_stopping_rounds=None, verbose=False, scores=None):
'Learn model from rating matrix X.\n\n Parameters\n ----------\n X {array-like, sparse matrix}, shape (m, n)\n Data matrix to be decomposed.\n y None\n ... | 1,335,698,133,788,022,000 | Learn model from rating matrix X.
Parameters
----------
X {array-like, sparse matrix}, shape (m, n)
Data matrix to be decomposed.
y None
Ignored
X_test {array-like, coo sparse matrix}, shape (m, n)
Data matrix for cross validation.
X_BATCHES int, default: 1
Batches to split XT, increase this parameter ... | src/interface_py/h2o4gpu/solvers/factorization.py | fit | aaron8tang/h2o4gpu | python | def fit(self, X, y=None, X_test=None, X_BATCHES=1, THETA_BATCHES=1, early_stopping_rounds=None, verbose=False, scores=None):
'Learn model from rating matrix X.\n\n Parameters\n ----------\n X {array-like, sparse matrix}, shape (m, n)\n Data matrix to be decomposed.\n y None\n ... |
def predict(self, X):
'Predict none zero elements of coo sparse matrix X according to the fitted model.\n\n Parameters\n ----------\n X {array-like, sparse coo matrix} shape (m, n)\n Data matrix in coo format. Values are ignored.\n\n Returns\n -------\n ... | 1,088,738,667,691,376,400 | Predict none zero elements of coo sparse matrix X according to the fitted model.
Parameters
----------
X {array-like, sparse coo matrix} shape (m, n)
Data matrix in coo format. Values are ignored.
Returns
-------
{array-like, sparse coo matrix} shape (m, n)
Predicted values. | src/interface_py/h2o4gpu/solvers/factorization.py | predict | aaron8tang/h2o4gpu | python | def predict(self, X):
'Predict none zero elements of coo sparse matrix X according to the fitted model.\n\n Parameters\n ----------\n X {array-like, sparse coo matrix} shape (m, n)\n Data matrix in coo format. Values are ignored.\n\n Returns\n -------\n ... |
def testAppsV1beta1DeploymentList(self):
'\n Test AppsV1beta1DeploymentList\n '
pass | -6,776,409,037,605,836,000 | Test AppsV1beta1DeploymentList | kubernetes/test/test_apps_v1beta1_deployment_list.py | testAppsV1beta1DeploymentList | dix000p/kubernetes-client | python | def testAppsV1beta1DeploymentList(self):
'\n \n '
pass |
def iscoroutinefunction(func):
'\n Return True if func is a coroutine function (a function defined with async\n def syntax, and doesn\'t contain yield), or a function decorated with\n @asyncio.coroutine.\n\n Note: copied and modified from Python 3.5\'s builtin couroutines.py to avoid\n importing asyn... | 6,053,619,501,236,065,000 | Return True if func is a coroutine function (a function defined with async
def syntax, and doesn't contain yield), or a function decorated with
@asyncio.coroutine.
Note: copied and modified from Python 3.5's builtin couroutines.py to avoid
importing asyncio directly, which in turns also initializes the "logging"
modul... | src/_pytest/compat.py | iscoroutinefunction | robholt/pytest | python | def iscoroutinefunction(func):
'\n Return True if func is a coroutine function (a function defined with async\n def syntax, and doesn\'t contain yield), or a function decorated with\n @asyncio.coroutine.\n\n Note: copied and modified from Python 3.5\'s builtin couroutines.py to avoid\n importing asyn... |
def num_mock_patch_args(function):
' return number of arguments used up by mock arguments (if any) '
patchings = getattr(function, 'patchings', None)
if (not patchings):
return 0
mock_sentinel = getattr(sys.modules.get('mock'), 'DEFAULT', object())
ut_mock_sentinel = getattr(sys.modules.get(... | -4,451,766,268,141,289,000 | return number of arguments used up by mock arguments (if any) | src/_pytest/compat.py | num_mock_patch_args | robholt/pytest | python | def num_mock_patch_args(function):
' '
patchings = getattr(function, 'patchings', None)
if (not patchings):
return 0
mock_sentinel = getattr(sys.modules.get('mock'), 'DEFAULT', object())
ut_mock_sentinel = getattr(sys.modules.get('unittest.mock'), 'DEFAULT', object())
return len([p for ... |
def getfuncargnames(function, *, name: str='', is_method=False, cls=None):
"Returns the names of a function's mandatory arguments.\n\n This should return the names of all function arguments that:\n * Aren't bound to an instance or type as in instance or class methods.\n * Don't have default values.... | 7,358,649,675,907,133,000 | Returns the names of a function's mandatory arguments.
This should return the names of all function arguments that:
* Aren't bound to an instance or type as in instance or class methods.
* Don't have default values.
* Aren't bound with functools.partial.
* Aren't replaced with mocks.
The is_method and... | src/_pytest/compat.py | getfuncargnames | robholt/pytest | python | def getfuncargnames(function, *, name: str=, is_method=False, cls=None):
"Returns the names of a function's mandatory arguments.\n\n This should return the names of all function arguments that:\n * Aren't bound to an instance or type as in instance or class methods.\n * Don't have default values.\n... |
def ascii_escaped(val):
'If val is pure ascii, returns it as a str(). Otherwise, escapes\n bytes objects into a sequence of escaped bytes:\n\n b\'ôÅÖ\' -> \'\\xc3\\xb4\\xc5\\xd6\'\n\n and escapes unicode objects into a sequence of escaped unicode\n ids, e.g.:\n\n \'4\\nV\\U00043efa\\x0eMXWB\\x1e\\u... | -5,234,808,399,818,805,000 | If val is pure ascii, returns it as a str(). Otherwise, escapes
bytes objects into a sequence of escaped bytes:
b'ôÅÖ' -> '\xc3\xb4\xc5\xd6'
and escapes unicode objects into a sequence of escaped unicode
ids, e.g.:
'4\nV\U00043efa\x0eMXWB\x1e\u3028\u15fd\xcd\U0007d944'
note:
the obvious "v.decode('unicode-esca... | src/_pytest/compat.py | ascii_escaped | robholt/pytest | python | def ascii_escaped(val):
'If val is pure ascii, returns it as a str(). Otherwise, escapes\n bytes objects into a sequence of escaped bytes:\n\n b\'ôÅÖ\' -> \'\\xc3\\xb4\\xc5\\xd6\'\n\n and escapes unicode objects into a sequence of escaped unicode\n ids, e.g.:\n\n \'4\\nV\\U00043efa\\x0eMXWB\\x1e\\u... |
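For the bytes branch of `ascii_escaped`, the transformation the docstring describes (``b'ôÅÖ' -> '\xc3\xb4\xc5\xd6'``) matches Python's ``backslashreplace`` decode error handler. A sketch of that branch only, not necessarily the exact upstream implementation:

```python
def escape_bytes(val: bytes) -> str:
    """Escape non-ASCII bytes as \\xNN sequences; pure ASCII passes through.

    One way to get the behaviour the ascii_escaped docstring describes
    for bytes input (an assumption about the upstream code, which is
    truncated in this record).
    """
    return val.decode("ascii", "backslashreplace")
```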
def get_real_func(obj):
' gets the real function object of the (possibly) wrapped object by\n functools.wraps or functools.partial.\n '
start_obj = obj
for i in range(100):
new_obj = getattr(obj, '__pytest_wrapped__', None)
if isinstance(new_obj, _PytestWrapper):
obj = new_... | 7,137,254,283,143,825,000 | gets the real function object of the (possibly) wrapped object by
functools.wraps or functools.partial. | src/_pytest/compat.py | get_real_func | robholt/pytest | python | def get_real_func(obj):
' gets the real function object of the (possibly) wrapped object by\n functools.wraps or functools.partial.\n '
start_obj = obj
for i in range(100):
new_obj = getattr(obj, '__pytest_wrapped__', None)
if isinstance(new_obj, _PytestWrapper):
obj = new_... |
def get_real_method(obj, holder):
'\n Attempts to obtain the real function object that might be wrapping ``obj``, while at the same time\n returning a bound method to ``holder`` if the original object was a bound method.\n '
try:
is_method = hasattr(obj, '__func__')
obj = get_real_func(... | 3,975,037,658,141,571,600 | Attempts to obtain the real function object that might be wrapping ``obj``, while at the same time
returning a bound method to ``holder`` if the original object was a bound method. | src/_pytest/compat.py | get_real_method | robholt/pytest | python | def get_real_method(obj, holder):
'\n Attempts to obtain the real function object that might be wrapping ``obj``, while at the same time\n returning a bound method to ``holder`` if the original object was a bound method.\n '
try:
is_method = hasattr(obj, '__func__')
obj = get_real_func(... |
def safe_getattr(object, name, default):
" Like getattr but return default upon any Exception or any OutcomeException.\n\n Attribute access can potentially fail for 'evil' Python objects.\n See issue #214.\n It catches OutcomeException because of #2490 (issue #580), new outcomes are derived from BaseExcept... | -6,607,776,573,387,889,000 | Like getattr but return default upon any Exception or any OutcomeException.
Attribute access can potentially fail for 'evil' Python objects.
See issue #214.
It catches OutcomeException because of #2490 (issue #580), new outcomes are derived from BaseException
instead of Exception (for more details check #2707) | src/_pytest/compat.py | safe_getattr | robholt/pytest | python | def safe_getattr(object, name, default):
" Like getattr but return default upon any Exception or any OutcomeException.\n\n Attribute access can potentially fail for 'evil' Python objects.\n See issue #214.\n It catches OutcomeException because of #2490 (issue #580), new outcomes are derived from BaseExcept... |
def safe_isclass(obj):
'Ignore any exception via isinstance on Python 3.'
try:
return inspect.isclass(obj)
except Exception:
return False | 886,407,819,187,287,400 | Ignore any exception via isinstance on Python 3. | src/_pytest/compat.py | safe_isclass | robholt/pytest | python | def safe_isclass(obj):
try:
return inspect.isclass(obj)
except Exception:
return False |
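`safe_getattr` and `safe_isclass` above wrap lookups that can raise on "evil" objects. A simplified `safe_getattr` sketch; the real helper also catches pytest outcome exceptions (which derive from `BaseException`), and `Evil` here is an invented demonstration class:

```python
class Evil:
    @property
    def boom(self):  # attribute access itself raises
        raise RuntimeError("attribute access fails")


def safe_getattr_sketch(obj, name, default):
    """getattr that swallows any exception raised during attribute access.

    getattr's own default only covers AttributeError, so a property
    that raises something else still needs the try/except.
    """
    try:
        return getattr(obj, name, default)
    except Exception:
        return default
```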
@property
def funcargnames(self):
' alias attribute for ``fixturenames`` for pre-2.3 compatibility'
import warnings
from _pytest.deprecated import FUNCARGNAMES
warnings.warn(FUNCARGNAMES, stacklevel=2)
return self.fixturenames | -2,417,199,118,803,160,000 | alias attribute for ``fixturenames`` for pre-2.3 compatibility | src/_pytest/compat.py | funcargnames | robholt/pytest | python | @property
def funcargnames(self):
' '
import warnings
from _pytest.deprecated import FUNCARGNAMES
warnings.warn(FUNCARGNAMES, stacklevel=2)
return self.fixturenames |
def main(argv=None):
'Run the operators-filter with the specified command line arguments.\n '
return OperatorsFilter().main(argv) | 2,750,703,961,454,874,600 | Run the operators-filter with the specified command line arguments. | src/cosmic_ray/tools/filters/operators_filter.py | main | Smirenost/cosmic-ray | python | def main(argv=None):
'\n '
return OperatorsFilter().main(argv) |
def filter(self, work_db: WorkDB, args: Namespace):
'Mark as skipped all work item with filtered operator\n '
if (args.config is None):
config = work_db.get_config()
else:
config = load_config(args.config)
exclude_operators = config.sub('filters', 'operators-filter').get('exclude-... | 7,362,884,219,123,820,000 | Mark as skipped all work item with filtered operator | src/cosmic_ray/tools/filters/operators_filter.py | filter | Smirenost/cosmic-ray | python | def filter(self, work_db: WorkDB, args: Namespace):
'\n '
if (args.config is None):
config = work_db.get_config()
else:
config = load_config(args.config)
exclude_operators = config.sub('filters', 'operators-filter').get('exclude-operators', ())
self._skip_filtered(work_db, exc... |
def get_package_author_name() -> str:
'Return the package author name to be used.'
return userinput(name='python_package_author_name', label='Enter the python package author name to use.', default=load_repository_author_name(), validator='non_empty', sanitizer=['strip'], cache=False) | 3,065,983,685,440,111,000 | Return the package author name to be used. | setup_python_package/queries/get_package_author_name.py | get_package_author_name | LucaCappelletti94/setup_python_package | python | def get_package_author_name() -> str:
return userinput(name='python_package_author_name', label='Enter the python package author name to use.', default=load_repository_author_name(), validator='non_empty', sanitizer=['strip'], cache=False) |
def conv3x3(in_planes, out_planes, stride=1, padding=1, dilation=1):
'3x3 convolution with padding'
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=padding, dilation=dilation, bias=False) | -7,125,971,413,056,351,000 | 3x3 convolution with padding | libs/networks/resnet_dilation.py | conv3x3 | Kinpzz/RCRNet-Pytorch | python | def conv3x3(in_planes, out_planes, stride=1, padding=1, dilation=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=padding, dilation=dilation, bias=False) |
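The `conv3x3` record parameterizes stride, padding, and dilation; its output spatial size follows the standard convolution arithmetic ``out = floor((n + 2p - d*(k-1) - 1) / s) + 1``. A quick helper (the standard formula, not code from the record) showing that the defaults `k=3, p=1, d=1, s=1` preserve size:

```python
def conv_out_size(n, k=3, stride=1, padding=1, dilation=1):
    """Spatial output size of a convolution (standard arithmetic).

    With the conv3x3 defaults the effective kernel extent d*(k-1)+1
    equals 3 and padding 1 cancels it, so the input size is preserved.
    """
    return (n + 2 * padding - dilation * (k - 1) - 1) // stride + 1
```

With `padding=dilation` (as the dilated ResNet variants pair them), a 3x3 convolution keeps the feature-map size at any dilation rate.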
def conv1x1(in_planes, out_planes, stride=1):
'1x1 convolution'
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) | 2,748,212,586,768,409,000 | 1x1 convolution | libs/networks/resnet_dilation.py | conv1x1 | Kinpzz/RCRNet-Pytorch | python | def conv1x1(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) |
def resnet18(pretrained=False, **kwargs):
'Constructs a ResNet-18 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
... | 2,710,881,011,384,566,300 | Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | libs/networks/resnet_dilation.py | resnet18 | Kinpzz/RCRNet-Pytorch | python | def resnet18(pretrained=False, **kwargs):
'Constructs a ResNet-18 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
... |
def resnet34(pretrained=False, **kwargs):
'Constructs a ResNet-34 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
... | 321,186,425,817,952,300 | Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | libs/networks/resnet_dilation.py | resnet34 | Kinpzz/RCRNet-Pytorch | python | def resnet34(pretrained=False, **kwargs):
'Constructs a ResNet-34 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
... |
def resnet50(pretrained=False, **kwargs):
'Constructs a ResNet-50 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
... | -4,884,347,836,839,471,000 | Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | libs/networks/resnet_dilation.py | resnet50 | Kinpzz/RCRNet-Pytorch | python | def resnet50(pretrained=False, **kwargs):
'Constructs a ResNet-50 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
... |
def resnet101(pretrained=False, **kwargs):
'Constructs a ResNet-101 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet101'... | -5,899,972,026,593,623,000 | Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | libs/networks/resnet_dilation.py | resnet101 | Kinpzz/RCRNet-Pytorch | python | def resnet101(pretrained=False, **kwargs):
'Constructs a ResNet-101 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet101'... |
def resnet152(pretrained=False, **kwargs):
'Constructs a ResNet-152 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet152'... | 5,878,302,975,223,905,000 | Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | libs/networks/resnet_dilation.py | resnet152 | Kinpzz/RCRNet-Pytorch | python | def resnet152(pretrained=False, **kwargs):
'Constructs a ResNet-152 model.\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n '
model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet152'... |
def parse_domain_date(domain_date: Union[(List[str], str)], date_format: str='%Y-%m-%dT%H:%M:%S.000Z') -> Optional[str]:
    "Converts whois date format to an ISO8601 string\n\n Converts the HelloWorld domain WHOIS date (YYYY-mm-dd HH:MM:SS) format\n in a datetime. If a list is returned with multiple elements, ta... | 533,124,122,959,102,500 | Converts whois date format to an ISO8601 string
Converts the HelloWorld domain WHOIS date (YYYY-mm-dd HH:MM:SS) format
in a datetime. If a list is returned with multiple elements, takes only
the first one.
:type domain_date: ``Union[List[str],str]``
:param date_format:
    a string or list of strings with the format ... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | parse_domain_date | DeanArbel/content | python | def parse_domain_date(domain_date: Union[(List[str], str)], date_format: str='%Y-%m-%dT%H:%M:%S.000Z') -> Optional[str]:
    "Converts whois date format to an ISO8601 string\n\n Converts the HelloWorld domain WHOIS date (YYYY-mm-dd HH:MM:SS) format\n in a datetime. If a list is returned with multiple elements, ta...
def convert_to_demisto_severity(severity: str) -> int:
    "Maps HelloWorld severity to Cortex XSOAR severity\n\n Converts the HelloWorld alert severity level ('Low', 'Medium',\n 'High', 'Critical') to Cortex XSOAR incident severity (1 to 4)\n for mapping.\n\n :type severity: ``str``\n :param severity: s... | -3,912,506,415,638,290,400 | Maps HelloWorld severity to Cortex XSOAR severity
Converts the HelloWorld alert severity level ('Low', 'Medium',
'High', 'Critical') to Cortex XSOAR incident severity (1 to 4)
for mapping.
:type severity: ``str``
:param severity: severity as returned from the HelloWorld API (str)
:return: Cortex XSOAR Severity (1 to... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | convert_to_demisto_severity | DeanArbel/content | python | def convert_to_demisto_severity(severity: str) -> int:
    "Maps HelloWorld severity to Cortex XSOAR severity\n\n Converts the HelloWorld alert severity level ('Low', 'Medium',\n 'High', 'Critical') to Cortex XSOAR incident severity (1 to 4)\n for mapping.\n\n :type severity: ``str``\n :param severity: s...
def test_module(client: Client, first_fetch_time: int) -> str:
    "Tests API connectivity and authentication'\n\n Returning 'ok' indicates that the integration works like it is supposed to.\n Connection to the service is successful.\n Raises exceptions if something goes wrong.\n\n :type client: ``Client``\... | -6,083,236,003,950,006,000 | Tests API connectivity and authentication'
Returning 'ok' indicates that the integration works like it is supposed to.
Connection to the service is successful.
Raises exceptions if something goes wrong.
:type client: ``Client``
:param Client: HelloWorld client to use
:type name: ``str``
:param name: name to append t... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | test_module | DeanArbel/content | python | def test_module(client: Client, first_fetch_time: int) -> str:
    "Tests API connectivity and authentication'\n\n Returning 'ok' indicates that the integration works like it is supposed to.\n Connection to the service is successful.\n Raises exceptions if something goes wrong.\n\n :type client: ``Client``\...
def say_hello_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-say-hello command: Returns Hello {somename}\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``str``\n :param args:\n all command arguments, usually passed from ``dem... | -4,154,078,156,561,007,600 | helloworld-say-hello command: Returns Hello {somename}
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``str``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['name']`` is used as input name
:return:
    A ``CommandResults`` object that is then ... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | say_hello_command | DeanArbel/content | python | def say_hello_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-say-hello command: Returns Hello {somename}\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``str``\n :param args:\n all command arguments, usually passed from ``dem...
def fetch_incidents(client: Client, max_results: int, last_run: Dict[(str, int)], first_fetch_time: Optional[int], alert_status: Optional[str], min_severity: str, alert_type: Optional[str]) -> Tuple[(Dict[(str, int)], List[dict])]:
    'This function retrieves new alerts every interval (default is 1 minute).\n\n Thi... | 439,226,341,109,222,000 | This function retrieves new alerts every interval (default is 1 minute).
This function has to implement the logic of making sure that incidents are
fetched only onces and no incidents are missed. By default it's invoked by
XSOAR every minute. It will use last_run to save the timestamp of the last
incident it processed... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | fetch_incidents | DeanArbel/content | python | def fetch_incidents(client: Client, max_results: int, last_run: Dict[(str, int)], first_fetch_time: Optional[int], alert_status: Optional[str], min_severity: str, alert_type: Optional[str]) -> Tuple[(Dict[(str, int)], List[dict])]:
    'This function retrieves new alerts every interval (default is 1 minute).\n\n Thi...
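A hedged sketch of the fetch-only-once bookkeeping the docstring describes: `last_run` saves the timestamp of the newest alert already processed, and each run keeps only alerts strictly newer than it. The field names (`last_fetch`, `created`) and the overall shape are assumptions, since the row's body is truncated.

```python
from typing import Dict, List, Optional, Tuple

def fetch_incidents_sketch(alerts: List[dict],
                           last_run: Dict[str, int],
                           first_fetch_time: Optional[int]) -> Tuple[Dict[str, int], List[dict]]:
    """Return (next_run, new_incidents), skipping alerts already processed."""
    # On the first run there is no last_run; fall back to first_fetch_time.
    last_fetch = last_run.get('last_fetch', first_fetch_time or 0)
    new = [a for a in alerts if a['created'] > last_fetch]  # strictly newer only
    # Advance the cursor to the newest alert seen, so it is not fetched again.
    latest = max((a['created'] for a in new), default=last_fetch)
    return {'last_fetch': latest}, new

next_run, incidents = fetch_incidents_sketch(
    [{'id': 1, 'created': 100}, {'id': 2, 'created': 200}],
    last_run={'last_fetch': 100},
    first_fetch_time=0,
)
print(next_run, [a['id'] for a in incidents])  # {'last_fetch': 200} [2]
```
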
def ip_reputation_command(client: Client, args: Dict[(str, Any)], default_threshold: int) -> List[CommandResults]:
    "ip command: Returns IP reputation for a list of IPs\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all co... | 6,297,315,567,717,406,000 | ip command: Returns IP reputation for a list of IPs
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['ip']`` is a list of IPs or a single IP
    ``args['threshold']`` threshold to ... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | ip_reputation_command | DeanArbel/content | python | def ip_reputation_command(client: Client, args: Dict[(str, Any)], default_threshold: int) -> List[CommandResults]:
    "ip command: Returns IP reputation for a list of IPs\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all co...
def domain_reputation_command(client: Client, args: Dict[(str, Any)], default_threshold: int) -> List[CommandResults]:
    "domain command: Returns domain reputation for a list of domains\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:... | 4,025,089,852,044,071,000 | domain command: Returns domain reputation for a list of domains
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['domain']`` list of domains or a single domain
    ``args['threshol... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | domain_reputation_command | DeanArbel/content | python | def domain_reputation_command(client: Client, args: Dict[(str, Any)], default_threshold: int) -> List[CommandResults]:
    "domain command: Returns domain reputation for a list of domains\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:...
def search_alerts_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-search-alerts command: Search alerts in HelloWorld\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usu... | -8,565,637,857,912,481,000 | helloworld-search-alerts command: Search alerts in HelloWorld
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['status']`` alert status. Options are 'ACTIVE' or 'CLOSED'
    ``args... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | search_alerts_command | DeanArbel/content | python | def search_alerts_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-search-alerts command: Search alerts in HelloWorld\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usu...
def get_alert_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-get-alert command: Returns a HelloWorld alert\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usually pass... | -4,416,676,729,715,666,400 | helloworld-get-alert command: Returns a HelloWorld alert
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['alert_id']`` alert ID to return
:return:
    A ``CommandResults`` object... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | get_alert_command | DeanArbel/content | python | def get_alert_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-get-alert command: Returns a HelloWorld alert\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usually pass...
def update_alert_status_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-update-alert-status command: Changes the status of an alert\n\n Changes the status of a HelloWorld alert and returns the updated alert info\n\n :type client: ``Client``\n :param Client: HelloWorld client ... | 2,405,045,081,726,955,500 | helloworld-update-alert-status command: Changes the status of an alert
Changes the status of a HelloWorld alert and returns the updated alert info
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | update_alert_status_command | DeanArbel/content | python | def update_alert_status_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-update-alert-status command: Changes the status of an alert\n\n Changes the status of a HelloWorld alert and returns the updated alert info\n\n :type client: ``Client``\n :param Client: HelloWorld client ...
def scan_start_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-start-scan command: Starts a HelloWorld scan\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usually pass... | -6,920,022,741,749,217,000 | helloworld-start-scan command: Starts a HelloWorld scan
:type client: ``Client``
:param Client: HelloWorld client to use
:type args: ``Dict[str, Any]``
:param args:
    all command arguments, usually passed from ``demisto.args()``.
    ``args['hostname']`` hostname to run the scan on
:return:
    A ``CommandResults``... | Packs/HelloWorld/Integrations/HelloWorld/HelloWorld.py | scan_start_command | DeanArbel/content | python | def scan_start_command(client: Client, args: Dict[(str, Any)]) -> CommandResults:
    "helloworld-start-scan command: Starts a HelloWorld scan\n\n :type client: ``Client``\n :param Client: HelloWorld client to use\n\n :type args: ``Dict[str, Any]``\n :param args:\n all command arguments, usually pass...