| body<br>stringlengths 26–98.2k | body_hash<br>int64 −9,222,864,604,528,158,000 to 9,221,803,474B | docstring<br>stringlengths 1–16.8k | path<br>stringlengths 5–230 | name<br>stringlengths 1–96 | repository_name<br>stringlengths 7–89 | lang<br>stringclasses 1 value | body_without_docstring<br>stringlengths 20–98.2k |
|---|---|---|---|---|---|---|---|
| def refine_keypoints(regressed_keypoints, keypoint_candidates, keypoint_scores, num_keypoint_candidates, bboxes=None, unmatched_keypoint_score=0.1, box_scale=1.2, candidate_search_scale=0.3, candidate_ranking_mode='min_distance'): 'Refines regressed keypoints by snapping to the nearest candidate keypoints.\n\n The... | 100,583,164,911,949,800 | Refines regressed keypoints by snapping to the nearest candidate keypoints.<br>The initial regressed keypoints represent a full set of keypoints regressed from the centers of the objects. The keypoint candidates are estimated independently from heatmaps, and are not associated with any object instances. This function ref... | research/object_detection/meta_architectures/center_net_meta_arch.py | refine_keypoints | AvikantSrivastava/models | python | def refine_keypoints(regressed_keypoints, keypoint_candidates, keypoint_scores, num_keypoint_candidates, bboxes=None, unmatched_keypoint_score=0.1, box_scale=1.2, candidate_search_scale=0.3, candidate_ranking_mode='min_distance'): 'Refines regressed keypoints by snapping to the nearest candidate keypoints.\n\n The... |
| def _pad_to_full_keypoint_dim(keypoint_coords, keypoint_scores, keypoint_inds, num_total_keypoints): 'Scatter keypoint elements into tensors with full keypoints dimension.\n\n Args:\n keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32\n tensor.\n keypoint_scores: a [batch_size, num... | -119,598,477,833,223,570 | Scatter keypoint elements into tensors with full keypoints dimension.<br>Args:<br>keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32 tensor.<br>keypoint_scores: a [batch_size, num_instances, num_keypoints] float32 tensor.<br>keypoint_inds: a list of integers that indicate the keypoint indices f... | research/object_detection/meta_architectures/center_net_meta_arch.py | _pad_to_full_keypoint_dim | AvikantSrivastava/models | python | def _pad_to_full_keypoint_dim(keypoint_coords, keypoint_scores, keypoint_inds, num_total_keypoints): 'Scatter keypoint elements into tensors with full keypoints dimension.\n\n Args:\n keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32\n tensor.\n keypoint_scores: a [batch_size, num... |
| def _pad_to_full_instance_dim(keypoint_coords, keypoint_scores, instance_inds, max_instances): 'Scatter keypoint elements into tensors with full instance dimension.\n\n Args:\n keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32\n tensor.\n keypoint_scores: a [batch_size, num_instan... | -6,481,720,654,041,201,000 | Scatter keypoint elements into tensors with full instance dimension.<br>Args:<br>keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32 tensor.<br>keypoint_scores: a [batch_size, num_instances, num_keypoints] float32 tensor.<br>instance_inds: a list of integers that indicate the instance indices fo... | research/object_detection/meta_architectures/center_net_meta_arch.py | _pad_to_full_instance_dim | AvikantSrivastava/models | python | def _pad_to_full_instance_dim(keypoint_coords, keypoint_scores, instance_inds, max_instances): 'Scatter keypoint elements into tensors with full instance dimension.\n\n Args:\n keypoint_coords: a [batch_size, num_instances, num_keypoints, 2] float32\n tensor.\n keypoint_scores: a [batch_size, num_instan... |
| def _gather_candidates_at_indices(keypoint_candidates, keypoint_scores, indices): 'Gathers keypoint candidate coordinates and scores at indices.\n\n Args:\n keypoint_candidates: a float tensor of shape [batch_size, max_candidates,\n num_keypoints, 2] with candidate coordinates.\n keypoint_scores: a floa... | 1,550,501,514,456,468,500 | Gathers keypoint candidate coordinates and scores at indices.<br>Args:<br>keypoint_candidates: a float tensor of shape [batch_size, max_candidates, num_keypoints, 2] with candidate coordinates.<br>keypoint_scores: a float tensor of shape [batch_size, max_candidates, num_keypoints] with keypoint scores.<br>indices: a... | research/object_detection/meta_architectures/center_net_meta_arch.py | _gather_candidates_at_indices | AvikantSrivastava/models | python | def _gather_candidates_at_indices(keypoint_candidates, keypoint_scores, indices): 'Gathers keypoint candidate coordinates and scores at indices.\n\n Args:\n keypoint_candidates: a float tensor of shape [batch_size, max_candidates,\n num_keypoints, 2] with candidate coordinates.\n keypoint_scores: a floa... |
| def flattened_indices_from_row_col_indices(row_indices, col_indices, num_cols): 'Get the index in a flattened array given row and column indices.' return ((row_indices * num_cols) + col_indices) | -2,580,679,528,769,606,700 | Get the index in a flattened array given row and column indices. | research/object_detection/meta_architectures/center_net_meta_arch.py | flattened_indices_from_row_col_indices | AvikantSrivastava/models | python | def flattened_indices_from_row_col_indices(row_indices, col_indices, num_cols): return ((row_indices * num_cols) + col_indices) |
| def row_col_channel_indices_from_flattened_indices(indices, num_cols, num_channels): 'Computes row, column and channel indices from flattened indices.\n\n Args:\n indices: An integer tensor of any shape holding the indices in the flattened\n space.\n num_cols: Number of columns in the image (width).\n ... | 8,608,488,016,104,368,000 | Computes row, column and channel indices from flattened indices.<br>Args:<br>indices: An integer tensor of any shape holding the indices in the flattened space.<br>num_cols: Number of columns in the image (width).<br>num_channels: Number of channels in the image.<br>Returns:<br>row_indices: The row indices corresponding to... | research/object_detection/meta_architectures/center_net_meta_arch.py | row_col_channel_indices_from_flattened_indices | AvikantSrivastava/models | python | def row_col_channel_indices_from_flattened_indices(indices, num_cols, num_channels): 'Computes row, column and channel indices from flattened indices.\n\n Args:\n indices: An integer tensor of any shape holding the indices in the flattened\n space.\n num_cols: Number of columns in the image (width).\n ... |
| def get_valid_anchor_weights_in_flattened_image(true_image_shapes, height, width): 'Computes valid anchor weights for an image assuming pixels will be flattened.\n\n This function is useful when we only want to penalize valid areas in the\n image in the case when padding is used. The function assumes that the los... | 2,124,917,025,900,141,300 | Computes valid anchor weights for an image assuming pixels will be flattened.<br>This function is useful when we only want to penalize valid areas in the image in the case when padding is used. The function assumes that the loss function will be applied after flattening the spatial dimensions and returns anchor weights a... | research/object_detection/meta_architectures/center_net_meta_arch.py | get_valid_anchor_weights_in_flattened_image | AvikantSrivastava/models | python | def get_valid_anchor_weights_in_flattened_image(true_image_shapes, height, width): 'Computes valid anchor weights for an image assuming pixels will be flattened.\n\n This function is useful when we only want to penalize valid areas in the\n image in the case when padding is used. The function assumes that the los... |
| def convert_strided_predictions_to_normalized_boxes(boxes, stride, true_image_shapes): "Converts predictions in the output space to normalized boxes.\n\n Boxes falling outside the valid image boundary are clipped to be on the\n boundary.\n\n Args:\n boxes: A tensor of shape [batch_size, num_boxes, 4] holding ... | 8,227,977,681,416,438,000 | Converts predictions in the output space to normalized boxes.<br>Boxes falling outside the valid image boundary are clipped to be on the boundary.<br>Args:<br>boxes: A tensor of shape [batch_size, num_boxes, 4] holding the raw coordinates of boxes in the model's output space.<br>stride: The stride in the output space.<br>t... | research/object_detection/meta_architectures/center_net_meta_arch.py | convert_strided_predictions_to_normalized_boxes | AvikantSrivastava/models | python | def convert_strided_predictions_to_normalized_boxes(boxes, stride, true_image_shapes): "Converts predictions in the output space to normalized boxes.\n\n Boxes falling outside the valid image boundary are clipped to be on the\n boundary.\n\n Args:\n boxes: A tensor of shape [batch_size, num_boxes, 4] holding ... |
| def convert_strided_predictions_to_normalized_keypoints(keypoint_coords, keypoint_scores, stride, true_image_shapes, clip_out_of_frame_keypoints=False): "Converts predictions in the output space to normalized keypoints.\n\n If clip_out_of_frame_keypoints=False, keypoint coordinates falling outside\n the valid ima... | 1,464,973,746,934,306,600 | Converts predictions in the output space to normalized keypoints.<br>If clip_out_of_frame_keypoints=False, keypoint coordinates falling outside the valid image boundary are normalized but not clipped; if clip_out_of_frame_keypoints=True, keypoint coordinates falling outside the valid image boundary are clipped to the clo... | research/object_detection/meta_architectures/center_net_meta_arch.py | convert_strided_predictions_to_normalized_keypoints | AvikantSrivastava/models | python | def convert_strided_predictions_to_normalized_keypoints(keypoint_coords, keypoint_scores, stride, true_image_shapes, clip_out_of_frame_keypoints=False): "Converts predictions in the output space to normalized keypoints.\n\n If clip_out_of_frame_keypoints=False, keypoint coordinates falling outside\n the valid ima... |
| def convert_strided_predictions_to_instance_masks(boxes, classes, masks, true_image_shapes, densepose_part_heatmap=None, densepose_surface_coords=None, stride=4, mask_height=256, mask_width=256, score_threshold=0.5, densepose_class_index=(- 1)): 'Converts predicted full-image masks into instance masks.\n\n For eac... | 5,333,232,042,033,351,000 | Converts predicted full-image masks into instance masks.<br>For each predicted detection box:<br>* Crop and resize the predicted mask (and optionally DensePose coordinates) based on the detected bounding box coordinates and class prediction. Uses bilinear resampling.<br>* Binarize the mask using the provided score ... | research/object_detection/meta_architectures/center_net_meta_arch.py | convert_strided_predictions_to_instance_masks | AvikantSrivastava/models | python | def convert_strided_predictions_to_instance_masks(boxes, classes, masks, true_image_shapes, densepose_part_heatmap=None, densepose_surface_coords=None, stride=4, mask_height=256, mask_width=256, score_threshold=0.5, densepose_class_index=(- 1)): 'Converts predicted full-image masks into instance masks.\n\n For eac... |
| def crop_and_threshold_masks(elems, input_height, input_width, mask_height=256, mask_width=256, score_threshold=0.5, densepose_class_index=(- 1)): 'Crops and thresholds masks based on detection boxes.\n\n Args:\n elems: A tuple of\n boxes - float32 tensor of shape [max_detections, 4]\n classes - int32... | -162,861,961,586,320,220 | Crops and thresholds masks based on detection boxes.<br>Args:<br>elems: A tuple of<br>boxes - float32 tensor of shape [max_detections, 4]<br>classes - int32 tensor of shape [max_detections] (0-indexed)<br>masks - float32 tensor of shape [output_height, output_width, num_classes]<br>part_heatmap - float32 tensor of sha... | research/object_detection/meta_architectures/center_net_meta_arch.py | crop_and_threshold_masks | AvikantSrivastava/models | python | def crop_and_threshold_masks(elems, input_height, input_width, mask_height=256, mask_width=256, score_threshold=0.5, densepose_class_index=(- 1)): 'Crops and thresholds masks based on detection boxes.\n\n Args:\n elems: A tuple of\n boxes - float32 tensor of shape [max_detections, 4]\n classes - int32... |
| def gather_surface_coords_for_parts(surface_coords_cropped, highest_scoring_part): 'Gathers the (v, u) coordinates for the highest scoring DensePose parts.\n\n Args:\n surface_coords_cropped: A [max_detections, height, width, num_parts, 2]\n float32 tensor with (v, u) surface coordinates.\n highest_scor... | 5,027,404,084,321,661,000 | Gathers the (v, u) coordinates for the highest scoring DensePose parts.<br>Args:<br>surface_coords_cropped: A [max_detections, height, width, num_parts, 2] float32 tensor with (v, u) surface coordinates.<br>highest_scoring_part: A [max_detections, height, width] integer tensor with the highest scoring part (0-index... | research/object_detection/meta_architectures/center_net_meta_arch.py | gather_surface_coords_for_parts | AvikantSrivastava/models | python | def gather_surface_coords_for_parts(surface_coords_cropped, highest_scoring_part): 'Gathers the (v, u) coordinates for the highest scoring DensePose parts.\n\n Args:\n surface_coords_cropped: A [max_detections, height, width, num_parts, 2]\n float32 tensor with (v, u) surface coordinates.\n highest_scor... |
| def predicted_embeddings_at_object_centers(embedding_predictions, y_indices, x_indices): 'Returns the predicted embeddings at specified object centers.\n\n Args:\n embedding_predictions: A float tensor of shape [batch_size, height, width,\n reid_embed_size] holding predicted embeddings.\n y_indices: A [... | -4,203,698,444,624,568,000 | Returns the predicted embeddings at specified object centers.<br>Args:<br>embedding_predictions: A float tensor of shape [batch_size, height, width, reid_embed_size] holding predicted embeddings.<br>y_indices: A [batch, num_instances] int tensor holding y indices for object centers. These indices correspond to loca... | research/object_detection/meta_architectures/center_net_meta_arch.py | predicted_embeddings_at_object_centers | AvikantSrivastava/models | python | def predicted_embeddings_at_object_centers(embedding_predictions, y_indices, x_indices): 'Returns the predicted embeddings at specified object centers.\n\n Args:\n embedding_predictions: A float tensor of shape [batch_size, height, width,\n reid_embed_size] holding predicted embeddings.\n y_indices: A [... |
| def get_num_instances_from_weights(groundtruth_weights_list): 'Computes the number of instances/boxes from the weights in a batch.\n\n Args:\n groundtruth_weights_list: A list of float tensors with shape\n [max_num_instances] representing whether there is an actual instance in\n the image (with non-ze... | -1,074,146,280,008,215,800 | Computes the number of instances/boxes from the weights in a batch.<br>Args:<br>groundtruth_weights_list: A list of float tensors with shape [max_num_instances] representing whether there is an actual instance in the image (with non-zero value) or is padded to match the max_num_instances (with value 0.0). The ... | research/object_detection/meta_architectures/center_net_meta_arch.py | get_num_instances_from_weights | AvikantSrivastava/models | python | def get_num_instances_from_weights(groundtruth_weights_list): 'Computes the number of instances/boxes from the weights in a batch.\n\n Args:\n groundtruth_weights_list: A list of float tensors with shape\n [max_num_instances] representing whether there is an actual instance in\n the image (with non-ze... |
| def __init__(self, name=None, channel_means=(0.0, 0.0, 0.0), channel_stds=(1.0, 1.0, 1.0), bgr_ordering=False): 'Initializes a CenterNet feature extractor.\n\n Args:\n name: str, the name used for the underlying keras model.\n channel_means: A tuple of floats, denoting the mean of each channel\n ... | 3,949,862,726,365,284,000 | Initializes a CenterNet feature extractor.<br>Args:<br>name: str, the name used for the underlying keras model.<br>channel_means: A tuple of floats, denoting the mean of each channel, which will be subtracted from it. If None or empty, we use 0s.<br>channel_stds: A tuple of floats, denoting the standard deviation of each... | research/object_detection/meta_architectures/center_net_meta_arch.py | __init__ | AvikantSrivastava/models | python | def __init__(self, name=None, channel_means=(0.0, 0.0, 0.0), channel_stds=(1.0, 1.0, 1.0), bgr_ordering=False): 'Initializes a CenterNet feature extractor.\n\n Args:\n name: str, the name used for the underlying keras model.\n channel_means: A tuple of floats, denoting the mean of each channel\n ... |
| def preprocess(self, inputs): 'Converts a batch of unscaled images to a scale suitable for the model.\n\n This method normalizes the image using the given `channel_means` and\n `channels_stds` values at initialization time while optionally flipping\n the channel order if `bgr_ordering` is set.\n\n Args:... | -6,190,530,192,781,071,000 | Converts a batch of unscaled images to a scale suitable for the model.<br>This method normalizes the image using the given `channel_means` and `channel_stds` values at initialization time while optionally flipping the channel order if `bgr_ordering` is set.<br>Args:<br>inputs: a [batch, height, width, channels] float32 ten... | research/object_detection/meta_architectures/center_net_meta_arch.py | preprocess | AvikantSrivastava/models | python | def preprocess(self, inputs): 'Converts a batch of unscaled images to a scale suitable for the model.\n\n This method normalizes the image using the given `channel_means` and\n `channels_stds` values at initialization time while optionally flipping\n the channel order if `bgr_ordering` is set.\n\n Args:... |
| @property @abc.abstractmethod def out_stride(self): 'The stride in the output image of the network.' pass | -2,731,785,466,358,025,700 | The stride in the output image of the network. | research/object_detection/meta_architectures/center_net_meta_arch.py | out_stride | AvikantSrivastava/models | python | @property @abc.abstractmethod def out_stride(self): pass |
| @property @abc.abstractmethod def num_feature_outputs(self): 'The number of feature outputs returned by the feature extractor.' pass | -5,547,701,700,219,693,000 | The number of feature outputs returned by the feature extractor. | research/object_detection/meta_architectures/center_net_meta_arch.py | num_feature_outputs | AvikantSrivastava/models | python | @property @abc.abstractmethod def num_feature_outputs(self): pass |
| @property @abc.abstractmethod def supported_sub_model_types(self): 'Valid sub model types supported by the get_sub_model function.' pass | -2,112,042,718,320,665,900 | Valid sub model types supported by the get_sub_model function. | research/object_detection/meta_architectures/center_net_meta_arch.py | supported_sub_model_types | AvikantSrivastava/models | python | @property @abc.abstractmethod def supported_sub_model_types(self): pass |
| @abc.abstractmethod def get_sub_model(self, sub_model_type): "Returns the underlying keras model for the given sub_model_type.\n\n This function is useful when we only want to get a subset of weights to\n be restored from a checkpoint.\n\n Args:\n sub_model_type: string, the type of sub model. Current... | -1,582,457,041,655,619,000 | Returns the underlying keras model for the given sub_model_type.<br>This function is useful when we only want to get a subset of weights to be restored from a checkpoint.<br>Args:<br>sub_model_type: string, the type of sub model. Currently, CenterNet feature extractors support 'detection' and 'classification'. | research/object_detection/meta_architectures/center_net_meta_arch.py | get_sub_model | AvikantSrivastava/models | python | @abc.abstractmethod def get_sub_model(self, sub_model_type): "Returns the underlying keras model for the given sub_model_type.\n\n This function is useful when we only want to get a subset of weights to\n be restored from a checkpoint.\n\n Args:\n sub_model_type: string, the type of sub model. Current... |
| def __new__(cls, localization_loss, scale_loss_weight, offset_loss_weight, task_loss_weight=1.0): 'Constructor with default values for ObjectDetectionParams.\n\n Args:\n localization_loss: a object_detection.core.losses.Loss object to compute\n the loss for the center offset and height/width predicti... | 319,322,182,275,863,940 | Constructor with default values for ObjectDetectionParams.<br>Args:<br>localization_loss: an object_detection.core.losses.Loss object to compute the loss for the center offset and height/width predictions in CenterNet.<br>scale_loss_weight: float, The weight for localizing box size. Note that the scale loss is d... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, localization_loss, scale_loss_weight, offset_loss_weight, task_loss_weight=1.0): 'Constructor with default values for ObjectDetectionParams.\n\n Args:\n localization_loss: a object_detection.core.losses.Loss object to compute\n the loss for the center offset and height/width predicti... |
| def __new__(cls, task_name, class_id, keypoint_indices, classification_loss, localization_loss, keypoint_labels=None, keypoint_std_dev=None, keypoint_heatmap_loss_weight=1.0, keypoint_offset_loss_weight=1.0, keypoint_regression_loss_weight=1.0, keypoint_candidate_score_threshold=0.1, heatmap_bias_init=(- 2.19), num_can... | 1,224,667,856,985,538,300 | Constructor with default values for KeypointEstimationParams.<br>Args:<br>task_name: string, the name of the task this namedtuple corresponds to. Note that it should be a unique identifier of the task.<br>class_id: int, the ID of the class that contains the target keypoints to be considered in this task. For example,... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, task_name, class_id, keypoint_indices, classification_loss, localization_loss, keypoint_labels=None, keypoint_std_dev=None, keypoint_heatmap_loss_weight=1.0, keypoint_offset_loss_weight=1.0, keypoint_regression_loss_weight=1.0, keypoint_candidate_score_threshold=0.1, heatmap_bias_init=(- 2.19), num_can... |
| def __new__(cls, classification_loss, object_center_loss_weight, heatmap_bias_init=(- 2.19), min_box_overlap_iou=0.7, max_box_predictions=100, use_labeled_classes=False): 'Constructor with default values for ObjectCenterParams.\n\n Args:\n classification_loss: an object_detection.core.losses.Loss object to\... | -244,050,174,111,315,200 | Constructor with default values for ObjectCenterParams.<br>Args:<br>classification_loss: an object_detection.core.losses.Loss object to compute the loss for the class predictions in CenterNet.<br>object_center_loss_weight: float, The weight for the object center loss.<br>heatmap_bias_init: float, the initial value of bi... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, classification_loss, object_center_loss_weight, heatmap_bias_init=(- 2.19), min_box_overlap_iou=0.7, max_box_predictions=100, use_labeled_classes=False): 'Constructor with default values for ObjectCenterParams.\n\n Args:\n classification_loss: an object_detection.core.losses.Loss object to\... |
| def __new__(cls, classification_loss, task_loss_weight=1.0, mask_height=256, mask_width=256, score_threshold=0.5, heatmap_bias_init=(- 2.19)): 'Constructor with default values for MaskParams.\n\n Args:\n classification_loss: an object_detection.core.losses.Loss object to\n compute the loss for the se... | 7,861,250,382,031,103,000 | Constructor with default values for MaskParams.<br>Args:<br>classification_loss: an object_detection.core.losses.Loss object to compute the loss for the semantic segmentation predictions in CenterNet.<br>task_loss_weight: float, The loss weight for the segmentation task.<br>mask_height: The height of the resized instanc... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, classification_loss, task_loss_weight=1.0, mask_height=256, mask_width=256, score_threshold=0.5, heatmap_bias_init=(- 2.19)): 'Constructor with default values for MaskParams.\n\n Args:\n classification_loss: an object_detection.core.losses.Loss object to\n compute the loss for the se... |
| def __new__(cls, class_id, classification_loss, localization_loss, part_loss_weight=1.0, coordinate_loss_weight=1.0, num_parts=24, task_loss_weight=1.0, upsample_to_input_res=True, upsample_method='bilinear', heatmap_bias_init=(- 2.19)): 'Constructor with default values for DensePoseParams.\n\n Args:\n clas... | 5,725,670,139,513,713,000 | Constructor with default values for DensePoseParams.<br>Args:<br>class_id: the ID of the class that contains the DensePose groundtruth. This should typically correspond to the "person" class. Note that the ID is 0-based, meaning that class 0 corresponds to the first non-background object class.<br>classificatio... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, class_id, classification_loss, localization_loss, part_loss_weight=1.0, coordinate_loss_weight=1.0, num_parts=24, task_loss_weight=1.0, upsample_to_input_res=True, upsample_method='bilinear', heatmap_bias_init=(- 2.19)): 'Constructor with default values for DensePoseParams.\n\n Args:\n clas... |
| def __new__(cls, num_track_ids, reid_embed_size, num_fc_layers, classification_loss, task_loss_weight=1.0): 'Constructor with default values for TrackParams.\n\n Args:\n num_track_ids: int. The maximum track ID in the dataset. Used for ReID\n embedding classification task.\n reid_embed_size: int... | -5,018,960,529,862,917,000 | Constructor with default values for TrackParams.<br>Args:<br>num_track_ids: int. The maximum track ID in the dataset. Used for ReID embedding classification task.<br>reid_embed_size: int. The embedding size for ReID task.<br>num_fc_layers: int. The number of (fully-connected, batch-norm, relu) layers for track ID cl... | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, num_track_ids, reid_embed_size, num_fc_layers, classification_loss, task_loss_weight=1.0): 'Constructor with default values for TrackParams.\n\n Args:\n num_track_ids: int. The maximum track ID in the dataset. Used for ReID\n embedding classification task.\n reid_embed_size: int... |
| def __new__(cls, localization_loss, task_loss_weight=1.0): 'Constructor with default values for TrackParams.\n\n Args:\n localization_loss: an object_detection.core.losses.Loss object to\n compute the loss for the temporal offset in CenterNet.\n task_loss_weight: float, the loss weight for the t... | 4,264,954,637,863,041,500 | Constructor with default values for TemporalOffsetParams.<br>Args:<br>localization_loss: an object_detection.core.losses.Loss object to compute the loss for the temporal offset in CenterNet.<br>task_loss_weight: float, the loss weight for the temporal offset task.<br>Returns:<br>An initialized TemporalOffsetParams namedtuple. | research/object_detection/meta_architectures/center_net_meta_arch.py | __new__ | AvikantSrivastava/models | python | def __new__(cls, localization_loss, task_loss_weight=1.0): 'Constructor with default values for TrackParams.\n\n Args:\n localization_loss: an object_detection.core.losses.Loss object to\n compute the loss for the temporal offset in CenterNet.\n task_loss_weight: float, the loss weight for the t... |
| def __init__(self, is_training, add_summaries, num_classes, feature_extractor, image_resizer_fn, object_center_params, object_detection_params=None, keypoint_params_dict=None, mask_params=None, densepose_params=None, track_params=None, temporal_offset_params=None): 'Initializes a CenterNet model.\n\n Args:\n ... | 5,664,729,417,145,753,000 | Initializes a CenterNet model.<br>Args:<br>is_training: Set to True if this model is being built for training.<br>add_summaries: Whether to add tf summaries in the model.<br>num_classes: int, The number of classes that the model should predict.<br>feature_extractor: A CenterNetFeatureExtractor to use to extract features ... | research/object_detection/meta_architectures/center_net_meta_arch.py | __init__ | AvikantSrivastava/models | python | def __init__(self, is_training, add_summaries, num_classes, feature_extractor, image_resizer_fn, object_center_params, object_detection_params=None, keypoint_params_dict=None, mask_params=None, densepose_params=None, track_params=None, temporal_offset_params=None): 'Initializes a CenterNet model.\n\n Args:\n ... |
| def _construct_prediction_heads(self, num_classes, num_feature_outputs, class_prediction_bias_init): 'Constructs the prediction heads based on the specific parameters.\n\n Args:\n num_classes: An integer indicating how many classes in total to predict.\n num_feature_outputs: An integer indicating how m... | -6,639,868,782,437,514,000 | Constructs the prediction heads based on the specific parameters.<br>Args:<br>num_classes: An integer indicating how many classes in total to predict.<br>num_feature_outputs: An integer indicating how many feature outputs to use for calculating the loss. The Objects as Points paper attaches loss functions to multip... | research/object_detection/meta_architectures/center_net_meta_arch.py | _construct_prediction_heads | AvikantSrivastava/models | python | def _construct_prediction_heads(self, num_classes, num_feature_outputs, class_prediction_bias_init): 'Constructs the prediction heads based on the specific parameters.\n\n Args:\n num_classes: An integer indicating how many classes in total to predict.\n num_feature_outputs: An integer indicating how m... |
| def _initialize_target_assigners(self, stride, min_box_overlap_iou): 'Initializes the target assigners and puts them in a dictionary.\n\n Args:\n stride: An integer indicating the stride of the image.\n min_box_overlap_iou: float, the minimum IOU overlap that predicted boxes\n need have with gro... | -3,979,121,992,371,626,000 | Initializes the target assigners and puts them in a dictionary.<br>Args:<br>stride: An integer indicating the stride of the image.<br>min_box_overlap_iou: float, the minimum IOU overlap that predicted boxes need to have with groundtruth boxes to not be penalized. This is used for computing the class specific center he... | research/object_detection/meta_architectures/center_net_meta_arch.py | _initialize_target_assigners | AvikantSrivastava/models | python | def _initialize_target_assigners(self, stride, min_box_overlap_iou): 'Initializes the target assigners and puts them in a dictionary.\n\n Args:\n stride: An integer indicating the stride of the image.\n min_box_overlap_iou: float, the minimum IOU overlap that predicted boxes\n need have with gro... |
| def _compute_object_center_loss(self, input_height, input_width, object_center_predictions, per_pixel_weights): 'Computes the object center loss.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n... | -3,619,118,231,556,900,400 | Computes the object center loss.<br>Args:<br>input_height: An integer scalar tensor representing input image height.<br>input_width: An integer scalar tensor representing input image width.<br>object_center_predictions: A list of float tensors of shape [batch_size, out_height, out_width, num_classes] representing the ob... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_object_center_loss | AvikantSrivastava/models | python | def _compute_object_center_loss(self, input_height, input_width, object_center_predictions, per_pixel_weights): 'Computes the object center loss.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n... |
| def _compute_object_detection_losses(self, input_height, input_width, prediction_dict, per_pixel_weights): 'Computes the weighted object detection losses.\n\n This wrapper function calls the function which computes the losses for\n object detection task and applies corresponding weights to the losses.\n\n ... | 806,099,432,362,530,600 | Computes the weighted object detection losses.<br>This wrapper function calls the function which computes the losses for the object detection task and applies corresponding weights to the losses.<br>Args:<br>input_height: An integer scalar tensor representing input image height.<br>input_width: An integer scalar tensor represent... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_object_detection_losses | AvikantSrivastava/models | python | def _compute_object_detection_losses(self, input_height, input_width, prediction_dict, per_pixel_weights): 'Computes the weighted object detection losses.\n\n This wrapper function calls the function which computes the losses for\n object detection task and applies corresponding weights to the losses.\n\n ... |
def _compute_box_scale_and_offset_loss(self, input_height, input_width, scale_predictions, offset_predictions):
'Computes the scale loss of the object detection task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing... | -696,938,473,885,633,900 | Computes the scale loss of the object detection task.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
scale_predictions: A list of float tensors of shape [batch_size,
out_height, out_width, 2] representing the... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_box_scale_and_offset_loss | AvikantSrivastava/models | python | def _compute_box_scale_and_offset_loss(self, input_height, input_width, scale_predictions, offset_predictions):
'Computes the scale loss of the object detection task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing... |
def _compute_keypoint_estimation_losses(self, task_name, input_height, input_width, prediction_dict, per_pixel_weights):
'Computes the weighted keypoint losses.'
kp_params = self._kp_params_dict[task_name]
heatmap_key = get_keypoint_name(task_name, KEYPOINT_HEATMAP)
offset_key = get_keypoint_name(task_n... | 389,698,818,048,579,700 | Computes the weighted keypoint losses. | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_keypoint_estimation_losses | AvikantSrivastava/models | python | def _compute_keypoint_estimation_losses(self, task_name, input_height, input_width, prediction_dict, per_pixel_weights):
kp_params = self._kp_params_dict[task_name]
heatmap_key = get_keypoint_name(task_name, KEYPOINT_HEATMAP)
offset_key = get_keypoint_name(task_name, KEYPOINT_OFFSET)
regression_key... |
def _compute_kp_heatmap_loss(self, input_height, input_width, task_name, heatmap_predictions, classification_loss_fn, per_pixel_weights):
'Computes the heatmap loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An int... | 4,771,942,718,064,745,000 | Computes the heatmap loss of the keypoint estimation task.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
task_name: A string representing the name of the keypoint task.
heatmap_predictions: A list of float ten... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_kp_heatmap_loss | AvikantSrivastava/models | python | def _compute_kp_heatmap_loss(self, input_height, input_width, task_name, heatmap_predictions, classification_loss_fn, per_pixel_weights):
'Computes the heatmap loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An int... |
def _compute_kp_offset_loss(self, input_height, input_width, task_name, offset_predictions, localization_loss_fn):
'Computes the offset loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor repre... | 3,327,474,415,315,478,000 | Computes the offset loss of the keypoint estimation task.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
task_name: A string representing the name of the keypoint task.
offset_predictions: A list of float tenso... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_kp_offset_loss | AvikantSrivastava/models | python | def _compute_kp_offset_loss(self, input_height, input_width, task_name, offset_predictions, localization_loss_fn):
'Computes the offset loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor repre... |
def _compute_kp_regression_loss(self, input_height, input_width, task_name, regression_predictions, localization_loss_fn):
'Computes the keypoint regression loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An intege... | 3,207,325,843,742,887,000 | Computes the keypoint regression loss of the keypoint estimation task.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
task_name: A string representing the name of the keypoint task.
regression_predictions: A li... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_kp_regression_loss | AvikantSrivastava/models | python | def _compute_kp_regression_loss(self, input_height, input_width, task_name, regression_predictions, localization_loss_fn):
'Computes the keypoint regression loss of the keypoint estimation task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An intege... |
def _compute_segmentation_losses(self, prediction_dict, per_pixel_weights):
'Computes all the losses associated with segmentation.\n\n Args:\n prediction_dict: The dictionary returned from the predict() method.\n per_pixel_weights: A float tensor of shape [batch_size,\n out_height * out_width, 1... | -7,498,127,224,504,642,000 | Computes all the losses associated with segmentation.
Args:
prediction_dict: The dictionary returned from the predict() method.
per_pixel_weights: A float tensor of shape [batch_size,
out_height * out_width, 1] with 1s in locations where the spatial
coordinates fall within the height and width in true_imag... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_segmentation_losses | AvikantSrivastava/models | python | def _compute_segmentation_losses(self, prediction_dict, per_pixel_weights):
'Computes all the losses associated with segmentation.\n\n Args:\n prediction_dict: The dictionary returned from the predict() method.\n per_pixel_weights: A float tensor of shape [batch_size,\n out_height * out_width, 1... |
def _compute_mask_loss(self, segmentation_predictions, per_pixel_weights):
'Computes the mask loss.\n\n Args:\n segmentation_predictions: A list of float32 tensors of shape [batch_size,\n out_height, out_width, num_classes].\n per_pixel_weights: A float tensor of shape [batch_size,\n out_... | -7,094,407,873,046,366,000 | Computes the mask loss.
Args:
segmentation_predictions: A list of float32 tensors of shape [batch_size,
out_height, out_width, num_classes].
per_pixel_weights: A float tensor of shape [batch_size,
out_height * out_width, 1] with 1s in locations where the spatial
coordinates fall within the height and w... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_mask_loss | AvikantSrivastava/models | python | def _compute_mask_loss(self, segmentation_predictions, per_pixel_weights):
'Computes the mask loss.\n\n Args:\n segmentation_predictions: A list of float32 tensors of shape [batch_size,\n out_height, out_width, num_classes].\n per_pixel_weights: A float tensor of shape [batch_size,\n out_... |
def _compute_densepose_losses(self, input_height, input_width, prediction_dict):
'Computes the weighted DensePose losses.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n prediction_dict: A... | -3,313,712,703,948,206,600 | Computes the weighted DensePose losses.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
prediction_dict: A dictionary holding predicted tensors output by the
"predict" function. See the "predict" function for ... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_densepose_losses | AvikantSrivastava/models | python | def _compute_densepose_losses(self, input_height, input_width, prediction_dict):
'Computes the weighted DensePose losses.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n prediction_dict: A... |
def _compute_densepose_part_and_coordinate_losses(self, input_height, input_width, part_predictions, surface_coord_predictions):
'Computes the individual losses for the DensePose task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar t... | 1,636,117,419,904,766,700 | Computes the individual losses for the DensePose task.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
part_predictions: A list of float tensors of shape [batch_size,
out_height, out_width, num_parts].
surfa... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_densepose_part_and_coordinate_losses | AvikantSrivastava/models | python | def _compute_densepose_part_and_coordinate_losses(self, input_height, input_width, part_predictions, surface_coord_predictions):
'Computes the individual losses for the DensePose task.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar t... |
def _compute_track_losses(self, input_height, input_width, prediction_dict):
'Computes all the losses associated with tracking.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n prediction_d... | 3,826,571,471,669,439,500 | Computes all the losses associated with tracking.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
prediction_dict: The dictionary returned from the predict() method.
Returns:
A dictionary with tracking losses. | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_track_losses | AvikantSrivastava/models | python | def _compute_track_losses(self, input_height, input_width, prediction_dict):
'Computes all the losses associated with tracking.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n prediction_d... |
def _compute_track_embedding_loss(self, input_height, input_width, object_reid_predictions):
'Computes the object ReID loss.\n\n The embedding is trained as a classification task where the target is the\n ID of each track among all tracks in the whole dataset.\n\n Args:\n input_height: An integer scal... | -975,937,577,233,702,400 | Computes the object ReID loss.
The embedding is trained as a classification task where the target is the
ID of each track among all tracks in the whole dataset.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
obj... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_track_embedding_loss | AvikantSrivastava/models | python | def _compute_track_embedding_loss(self, input_height, input_width, object_reid_predictions):
'Computes the object ReID loss.\n\n The embedding is trained as a classification task where the target is the\n ID of each track among all tracks in the whole dataset.\n\n Args:\n input_height: An integer scal... |
def _compute_temporal_offset_loss(self, input_height, input_width, prediction_dict):
'Computes the temporal offset loss for tracking.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n predic... | -6,298,571,109,006,518,000 | Computes the temporal offset loss for tracking.
Args:
input_height: An integer scalar tensor representing input image height.
input_width: An integer scalar tensor representing input image width.
prediction_dict: The dictionary returned from the predict() method.
Returns:
A dictionary with track/temporal_offs... | research/object_detection/meta_architectures/center_net_meta_arch.py | _compute_temporal_offset_loss | AvikantSrivastava/models | python | def _compute_temporal_offset_loss(self, input_height, input_width, prediction_dict):
'Computes the temporal offset loss for tracking.\n\n Args:\n input_height: An integer scalar tensor representing input image height.\n input_width: An integer scalar tensor representing input image width.\n predic... |
def predict(self, preprocessed_inputs, _):
"Predicts CenterNet prediction tensors given an input batch.\n\n Feature extractors are free to produce predictions from multiple feature\n maps and therefore we return a dictionary mapping strings to lists.\n E.g. the hourglass backbone produces two feature maps.... | -738,261,801,241,657,200 | Predicts CenterNet prediction tensors given an input batch.
Feature extractors are free to produce predictions from multiple feature
maps and therefore we return a dictionary mapping strings to lists.
E.g. the hourglass backbone produces two feature maps.
Args:
preprocessed_inputs: a [batch, height, width, channels... | research/object_detection/meta_architectures/center_net_meta_arch.py | predict | AvikantSrivastava/models | python | def predict(self, preprocessed_inputs, _):
"Predicts CenterNet prediction tensors given an input batch.\n\n Feature extractors are free to produce predictions from multiple feature\n maps and therefore we return a dictionary mapping strings to lists.\n E.g. the hourglass backbone produces two feature maps.... |
def loss(self, prediction_dict, true_image_shapes, scope=None):
'Computes scalar loss tensors with respect to provided groundtruth.\n\n This function implements the various CenterNet losses.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors returned by\n "predict" function.\n ... | 4,125,836,591,047,334,000 | Computes scalar loss tensors with respect to provided groundtruth.
This function implements the various CenterNet losses.
Args:
prediction_dict: a dictionary holding predicted tensors returned by
"predict" function.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is of
the form [height,... | research/object_detection/meta_architectures/center_net_meta_arch.py | loss | AvikantSrivastava/models | python | def loss(self, prediction_dict, true_image_shapes, scope=None):
'Computes scalar loss tensors with respect to provided groundtruth.\n\n This function implements the various CenterNet losses.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors returned by\n "predict" function.\n ... |
def postprocess(self, prediction_dict, true_image_shapes, **params):
'Produces boxes given a prediction dict returned by predict().\n\n Although predict returns a list of tensors, only the last tensor in\n each list is used for making box predictions.\n\n Args:\n prediction_dict: a dictionary holding ... | -427,511,309,887,318,850 | Produces boxes given a prediction dict returned by predict().
Although predict returns a list of tensors, only the last tensor in
each list is used for making box predictions.
Args:
prediction_dict: a dictionary holding predicted tensors from "predict"
function.
true_image_shapes: int32 tensor of shape [batch... | research/object_detection/meta_architectures/center_net_meta_arch.py | postprocess | AvikantSrivastava/models | python | def postprocess(self, prediction_dict, true_image_shapes, **params):
'Produces boxes given a prediction dict returned by predict().\n\n Although predict returns a list of tensors, only the last tensor in\n each list is used for making box predictions.\n\n Args:\n prediction_dict: a dictionary holding ... |
def _postprocess_embeddings(self, prediction_dict, y_indices, x_indices):
'Performs postprocessing on embedding predictions.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors, returned from the\n predict() method. This dictionary should contain embedding prediction\n feature ... | -3,924,071,412,387,423,000 | Performs postprocessing on embedding predictions.
Args:
prediction_dict: a dictionary holding predicted tensors, returned from the
predict() method. This dictionary should contain embedding prediction
feature maps for tracking task.
y_indices: A [batch_size, max_detections] int tensor with y indices for
... | research/object_detection/meta_architectures/center_net_meta_arch.py | _postprocess_embeddings | AvikantSrivastava/models | python | def _postprocess_embeddings(self, prediction_dict, y_indices, x_indices):
'Performs postprocessing on embedding predictions.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors, returned from the\n predict() method. This dictionary should contain embedding prediction\n feature ... |
def _postprocess_keypoints(self, prediction_dict, classes, y_indices, x_indices, boxes, num_detections):
'Performs postprocessing on keypoint predictions.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors, returned from the\n predict() method. This dictionary should contain keypoint... | -4,493,164,361,329,250,300 | Performs postprocessing on keypoint predictions.
Args:
prediction_dict: a dictionary holding predicted tensors, returned from the
predict() method. This dictionary should contain keypoint prediction
feature maps for each keypoint task.
classes: A [batch_size, max_detections] int tensor with class indices f... | research/object_detection/meta_architectures/center_net_meta_arch.py | _postprocess_keypoints | AvikantSrivastava/models | python | def _postprocess_keypoints(self, prediction_dict, classes, y_indices, x_indices, boxes, num_detections):
'Performs postprocessing on keypoint predictions.\n\n Args:\n prediction_dict: a dictionary holding predicted tensors, returned from the\n predict() method. This dictionary should contain keypoint... |
def _get_instance_indices(self, classes, num_detections, batch_index, class_id):
'Gets the instance indices that match the target class ID.\n\n Args:\n classes: A [batch_size, max_detections] int tensor with class indices for\n all detected objects.\n num_detections: A [batch_size] int tensor wi... | -8,815,530,122,229,937,000 | Gets the instance indices that match the target class ID.
Args:
classes: A [batch_size, max_detections] int tensor with class indices for
all detected objects.
num_detections: A [batch_size] int tensor with the number of valid
detections for each image.
batch_index: An integer specifying the index for an... | research/object_detection/meta_architectures/center_net_meta_arch.py | _get_instance_indices | AvikantSrivastava/models | python | def _get_instance_indices(self, classes, num_detections, batch_index, class_id):
'Gets the instance indices that match the target class ID.\n\n Args:\n classes: A [batch_size, max_detections] int tensor with class indices for\n all detected objects.\n num_detections: A [batch_size] int tensor wi... |
def _postprocess_keypoints_for_class_and_image(self, keypoint_heatmap, keypoint_offsets, keypoint_regression, classes, y_indices, x_indices, boxes, indices_with_kpt_class, batch_index, kp_params):
'Postprocess keypoints for a single image and class.\n\n This function performs the following postprocessing operati... | 4,418,362,711,257,832,400 | Postprocess keypoints for a single image and class.
This function performs the following postprocessing operations on a single
image and single keypoint class:
- Converts keypoints scores to range [0, 1] with sigmoid.
- Determines the detections that correspond to the specified keypoint class.
- Gathers the regressed ... | research/object_detection/meta_architectures/center_net_meta_arch.py | _postprocess_keypoints_for_class_and_image | AvikantSrivastava/models | python | def _postprocess_keypoints_for_class_and_image(self, keypoint_heatmap, keypoint_offsets, keypoint_regression, classes, y_indices, x_indices, boxes, indices_with_kpt_class, batch_index, kp_params):
'Postprocess keypoints for a single image and class.\n\n This function performs the following postprocessing operati... |
def restore_from_objects(self, fine_tune_checkpoint_type='detection'):
"Returns a map of Trackable objects to load from a foreign checkpoint.\n\n Returns a dictionary of Tensorflow 2 Trackable objects (e.g. tf.Module\n or Checkpoint). This enables the model to initialize based on weights from\n another tas... | 3,968,676,734,106,757,600 | Returns a map of Trackable objects to load from a foreign checkpoint.
Returns a dictionary of Tensorflow 2 Trackable objects (e.g. tf.Module
or Checkpoint). This enables the model to initialize based on weights from
another task. For example, the feature extractor variables from a
classification model can be used to b... | research/object_detection/meta_architectures/center_net_meta_arch.py | restore_from_objects | AvikantSrivastava/models | python | def restore_from_objects(self, fine_tune_checkpoint_type='detection'):
"Returns a map of Trackable objects to load from a foreign checkpoint.\n\n Returns a dictionary of Tensorflow 2 Trackable objects (e.g. tf.Module\n or Checkpoint). This enables the model to initialize based on weights from\n another tas... |
def true_fn(keypoint_heatmap, keypoint_offsets, keypoint_regression, classes, y_indices, x_indices, boxes, instance_inds, ex_ind, kp_params):
    'Logic to execute when instance_inds is not an empty set.'
(kpt_coords_for_class, kpt_scores_for_class) = self._postprocess_keypoints_for_class_and_image(keypoint_heatma... | 9,145,846,410,273,019,000 | Logics to execute when instance_inds is not an empty set. | research/object_detection/meta_architectures/center_net_meta_arch.py | true_fn | AvikantSrivastava/models | python | def true_fn(keypoint_heatmap, keypoint_offsets, keypoint_regression, classes, y_indices, x_indices, boxes, instance_inds, ex_ind, kp_params):
(kpt_coords_for_class, kpt_scores_for_class) = self._postprocess_keypoints_for_class_and_image(keypoint_heatmap, keypoint_offsets, keypoint_regression, classes, y_indice... |
def false_fn():
    'Logic to execute when instance_inds is an empty set.'
return (tf.zeros([1, 0, total_num_keypoints, 2], dtype=tf.float32), tf.zeros([1, 0, total_num_keypoints], dtype=tf.float32)) | -7,194,681,745,874,285,000 | Logics to execute when the instance_inds is an empty set. | research/object_detection/meta_architectures/center_net_meta_arch.py | false_fn | AvikantSrivastava/models | python | def false_fn():
return (tf.zeros([1, 0, total_num_keypoints, 2], dtype=tf.float32), tf.zeros([1, 0, total_num_keypoints], dtype=tf.float32)) |
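The `true_fn`/`false_fn` pair above is the usual guard for `tf.cond` when a keypoint class may have no detected instances: the empty branch must return zero-instance tensors with the same rank and dtype as the populated branch. A pure-Python analogue of that control flow (names and the `refine_fn` callback are illustrative, not the TensorFlow implementation):

```python
def postprocess_keypoints_for_class(instance_inds, num_keypoints, refine_fn):
    # Mirror of the tf.cond guard: with no instances of the target class,
    # return empty results whose structure matches the non-empty branch
    # (shapes [0, num_keypoints, 2] and [0, num_keypoints] in the original).
    if not instance_inds:          # false_fn: empty instance set
        return [], []
    coords, scores = [], []
    for ind in instance_inds:      # true_fn: refine each matched instance
        coords.append(refine_fn(ind))
        scores.append([1.0] * num_keypoints)
    return coords, scores
```

Keeping both branches structurally identical is what lets downstream code concatenate per-class results without shape checks.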
def _single_replace(self, to_replace, method, inplace, limit):
'\n Replaces values in a Series using the fill method specified when no\n replacement value is given in the replace method\n '
if (self.ndim != 1):
raise TypeError('cannot replace {0} with method {1} on a {2}'.format(to_replace, met... | 3,824,816,040,778,656,000 | Replaces values in a Series using the fill method specified when no
replacement value is given in the replace method | pandas/core/generic.py | _single_replace | kapilepatel/pandas | python | def _single_replace(self, to_replace, method, inplace, limit):
'\n Replaces values in a Series using the fill method specified when no\n replacement value is given in the replace method\n '
if (self.ndim != 1):
raise TypeError('cannot replace {0} with method {1} on a {2}'.format(to_replace, met... |
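`_single_replace` backs `Series.replace` when a fill method is given instead of a replacement value: matched entries are filled from their neighbors rather than overwritten with a constant. A minimal pure-Python sketch of forward-fill replacement (not the pandas implementation, which also handles `limit`, dtypes, and `inplace`):

```python
def replace_ffill(values, to_replace):
    # Each value equal to `to_replace` is substituted with the most recent
    # preceding value that was NOT a match (forward fill); leading matches
    # have nothing to fill from and are left unchanged.
    out, last_valid = [], None
    for v in values:
        if v == to_replace and last_valid is not None:
            out.append(last_valid)
        else:
            out.append(v)
            last_valid = v
    return out
```

For example, `replace_ffill([1, 2, 2, 3], 2)` yields `[1, 1, 1, 3]`: both 2s take the value of the preceding 1.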
def _doc_parms(cls):
'Return a tuple of the doc parms.'
axis_descr = ('{%s}' % ', '.join(('{0} ({1})'.format(a, i) for (i, a) in enumerate(cls._AXIS_ORDERS))))
name = (cls._constructor_sliced.__name__ if (cls._AXIS_LEN > 1) else 'scalar')
name2 = cls.__name__
return (axis_descr, name, name2) | -8,152,487,647,864,077,000 | Return a tuple of the doc parms. | pandas/core/generic.py | _doc_parms | kapilepatel/pandas | python | def _doc_parms(cls):
axis_descr = ('{%s}' % ', '.join(('{0} ({1})'.format(a, i) for (i, a) in enumerate(cls._AXIS_ORDERS))))
name = (cls._constructor_sliced.__name__ if (cls._AXIS_LEN > 1) else 'scalar')
name2 = cls.__name__
return (axis_descr, name, name2) |
def _init_mgr(self, mgr, axes=None, dtype=None, copy=False):
    ' passed a manager and an axes dict '
for (a, axe) in axes.items():
if (axe is not None):
mgr = mgr.reindex_axis(axe, axis=self._get_block_manager_axis(a), copy=False)
if copy:
mgr = mgr.copy()
if (dtype is not None)... | 5,469,990,824,350,398,000 | passed a manager and a axes dict | pandas/core/generic.py | _init_mgr | kapilepatel/pandas | python | def _init_mgr(self, mgr, axes=None, dtype=None, copy=False):
' '
for (a, axe) in axes.items():
if (axe is not None):
mgr = mgr.reindex_axis(axe, axis=self._get_block_manager_axis(a), copy=False)
if copy:
mgr = mgr.copy()
if (dtype is not None):
if ((len(mgr.blocks) >... |
@property
def is_copy(self):
'\n Return the copy.\n '
warnings.warn("Attribute 'is_copy' is deprecated and will be removed in a future version.", FutureWarning, stacklevel=2)
return self._is_copy | -5,138,356,737,473,574,000 | Return the copy. | pandas/core/generic.py | is_copy | kapilepatel/pandas | python | @property
def is_copy(self):
'\n \n '
warnings.warn("Attribute 'is_copy' is deprecated and will be removed in a future version.", FutureWarning, stacklevel=2)
return self._is_copy |
def _validate_dtype(self, dtype):
' validate the passed dtype '
if (dtype is not None):
dtype = pandas_dtype(dtype)
if (dtype.kind == 'V'):
raise NotImplementedError('compound dtypes are not implemented in the {0} constructor'.format(self.__class__.__name__))
return dtype | -2,802,526,631,258,644,000 | validate the passed dtype | pandas/core/generic.py | _validate_dtype | kapilepatel/pandas | python | def _validate_dtype(self, dtype):
' '
if (dtype is not None):
dtype = pandas_dtype(dtype)
if (dtype.kind == 'V'):
raise NotImplementedError('compound dtypes are not implemented in the {0} constructor'.format(self.__class__.__name__))
return dtype |
@property
def _constructor(self):
'Used when a manipulation result has the same dimensions as the\n original.\n '
raise AbstractMethodError(self) | 6,925,604,355,509,617,000 | Used when a manipulation result has the same dimensions as the
original. | pandas/core/generic.py | _constructor | kapilepatel/pandas | python | @property
def _constructor(self):
'Used when a manipulation result has the same dimensions as the\n original.\n '
raise AbstractMethodError(self) |
@property
def _constructor_sliced(self):
'Used when a manipulation result has one lower dimension(s) as the\n original, such as DataFrame single columns slicing.\n '
raise AbstractMethodError(self) | -4,845,604,620,012,364,000 | Used when a manipulation result has one lower dimension(s) as the
original, such as DataFrame single columns slicing. | pandas/core/generic.py | _constructor_sliced | kapilepatel/pandas | python | @property
def _constructor_sliced(self):
'Used when a manipulation result has one lower dimension(s) as the\n original, such as DataFrame single columns slicing.\n '
raise AbstractMethodError(self) |
@property
def _constructor_expanddim(self):
'Used when a manipulation result has one higher dimension as the\n original, such as Series.to_frame() and DataFrame.to_panel()\n '
raise NotImplementedError | 5,577,836,939,290,729,000 | Used when a manipulation result has one higher dimension as the
original, such as Series.to_frame() and DataFrame.to_panel() | pandas/core/generic.py | _constructor_expanddim | kapilepatel/pandas | python | @property
def _constructor_expanddim(self):
'Used when a manipulation result has one higher dimension as the\n original, such as Series.to_frame() and DataFrame.to_panel()\n '
raise NotImplementedError |
@classmethod
def _setup_axes(cls, axes, info_axis=None, stat_axis=None, aliases=None, slicers=None, axes_are_reversed=False, build_axes=True, ns=None, docs=None):
'Provide axes setup for the major PandasObjects.\n\n Parameters\n ----------\n axes : the names of the axes in order (lowest to high... | -2,622,064,521,481,804,000 | Provide axes setup for the major PandasObjects.
Parameters
----------
axes : the names of the axes in order (lowest to highest)
info_axis_num : the axis of the selector dimension (int)
stat_axis_num : the number of axis for the default stats (int)
aliases : other names for a single axis (dict)
slicers : how axes slice... | pandas/core/generic.py | _setup_axes | kapilepatel/pandas | python | @classmethod
def _setup_axes(cls, axes, info_axis=None, stat_axis=None, aliases=None, slicers=None, axes_are_reversed=False, build_axes=True, ns=None, docs=None):
'Provide axes setup for the major PandasObjects.\n\n Parameters\n ----------\n axes : the names of the axes in order (lowest to high... |
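`_setup_axes` wires the axis names, aliases, and numbering into the class. The alias resolution it enables (for example `'rows'` resolving to axis 0) can be sketched as a small lookup — the names and alias table below are illustrative stand-ins, not the actual pandas class attributes:

```python
AXIS_ORDERS = ["index", "columns"]   # lowest to highest, as in _setup_axes
AXIS_ALIASES = {"rows": 0}           # other names for a single axis

def get_axis_number(axis):
    # Resolve an int, a canonical axis name, or an alias to a positional
    # axis number, mirroring the lookup that _setup_axes installs.
    if isinstance(axis, int) and 0 <= axis < len(AXIS_ORDERS):
        return axis
    if axis in AXIS_ALIASES:
        return AXIS_ALIASES[axis]
    if axis in AXIS_ORDERS:
        return AXIS_ORDERS.index(axis)
    raise ValueError(f"No axis named {axis!r}")
```

Centralizing this lookup is why every axis-taking method in the class hierarchy accepts either numbers or names interchangeably.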
def _construct_axes_dict(self, axes=None, **kwargs):
'Return an axes dictionary for myself.'
d = {a: self._get_axis(a) for a in (axes or self._AXIS_ORDERS)}
d.update(kwargs)
return d | 9,032,129,010,733,145,000 | Return an axes dictionary for myself. | pandas/core/generic.py | _construct_axes_dict | kapilepatel/pandas | python | def _construct_axes_dict(self, axes=None, **kwargs):
d = {a: self._get_axis(a) for a in (axes or self._AXIS_ORDERS)}
d.update(kwargs)
return d |
@staticmethod
def _construct_axes_dict_from(self, axes, **kwargs):
'Return an axes dictionary for the passed axes.'
d = {a: ax for (a, ax) in zip(self._AXIS_ORDERS, axes)}
d.update(kwargs)
return d | -5,779,026,038,520,304,000 | Return an axes dictionary for the passed axes. | pandas/core/generic.py | _construct_axes_dict_from | kapilepatel/pandas | python | @staticmethod
def _construct_axes_dict_from(self, axes, **kwargs):
d = {a: ax for (a, ax) in zip(self._AXIS_ORDERS, axes)}
d.update(kwargs)
return d |
def _construct_axes_dict_for_slice(self, axes=None, **kwargs):
'Return an axes dictionary for myself.'
d = {self._AXIS_SLICEMAP[a]: self._get_axis(a) for a in (axes or self._AXIS_ORDERS)}
d.update(kwargs)
return d | 3,606,054,628,442,910,700 | Return an axes dictionary for myself. | pandas/core/generic.py | _construct_axes_dict_for_slice | kapilepatel/pandas | python | def _construct_axes_dict_for_slice(self, axes=None, **kwargs):
d = {self._AXIS_SLICEMAP[a]: self._get_axis(a) for a in (axes or self._AXIS_ORDERS)}
d.update(kwargs)
return d |
def _construct_axes_from_arguments(self, args, kwargs, require_all=False, sentinel=None):
'Construct and returns axes if supplied in args/kwargs.\n\n If require_all, raise if all axis arguments are not supplied\n return a tuple of (axes, kwargs).\n\n sentinel specifies the default parameter whe... | 3,963,837,012,820,377,000 | Construct and returns axes if supplied in args/kwargs.
If require_all, raise if all axis arguments are not supplied
return a tuple of (axes, kwargs).
sentinel specifies the default parameter when an axis is not
supplied; useful to distinguish when a user explicitly passes None
in scenarios where None has special mean... | pandas/core/generic.py | _construct_axes_from_arguments | kapilepatel/pandas | python | def _construct_axes_from_arguments(self, args, kwargs, require_all=False, sentinel=None):
'Construct and returns axes if supplied in args/kwargs.\n\n If require_all, raise if all axis arguments are not supplied\n return a tuple of (axes, kwargs).\n\n sentinel specifies the default parameter whe... |
@classmethod
def _get_block_manager_axis(cls, axis):
'Map the axis to the block_manager axis.'
axis = cls._get_axis_number(axis)
if cls._AXIS_REVERSED:
m = (cls._AXIS_LEN - 1)
return (m - axis)
return axis | -5,141,080,245,831,676,000 | Map the axis to the block_manager axis. | pandas/core/generic.py | _get_block_manager_axis | kapilepatel/pandas | python | @classmethod
def _get_block_manager_axis(cls, axis):
axis = cls._get_axis_number(axis)
if cls._AXIS_REVERSED:
m = (cls._AXIS_LEN - 1)
return (m - axis)
return axis |
@property
def shape(self):
'\n Return a tuple of axis dimensions\n '
return tuple((len(self._get_axis(a)) for a in self._AXIS_ORDERS)) | -2,033,092,480,248,761,000 | Return a tuple of axis dimensions | pandas/core/generic.py | shape | kapilepatel/pandas | python | @property
def shape(self):
'\n \n '
return tuple((len(self._get_axis(a)) for a in self._AXIS_ORDERS)) |
@property
def axes(self):
'\n Return index label(s) of the internal NDFrame\n '
return [self._get_axis(a) for a in self._AXIS_ORDERS] | -3,242,568,803,582,906,000 | Return index label(s) of the internal NDFrame | pandas/core/generic.py | axes | kapilepatel/pandas | python | @property
def axes(self):
'\n \n '
return [self._get_axis(a) for a in self._AXIS_ORDERS] |
@property
def ndim(self):
"\n Return an int representing the number of axes / array dimensions.\n\n Return 1 if Series. Otherwise return 2 if DataFrame.\n\n See Also\n --------\n ndarray.ndim : Number of array dimensions.\n\n Examples\n --------\n >>> s = pd.S... | 4,350,127,406,230,049,000 | Return an int representing the number of axes / array dimensions.
Return 1 if Series. Otherwise return 2 if DataFrame.
See Also
--------
ndarray.ndim : Number of array dimensions.
Examples
--------
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.ndim
1
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> d... | pandas/core/generic.py | ndim | kapilepatel/pandas | python | @property
def ndim(self):
"\n Return an int representing the number of axes / array dimensions.\n\n Return 1 if Series. Otherwise return 2 if DataFrame.\n\n See Also\n --------\n ndarray.ndim : Number of array dimensions.\n\n Examples\n --------\n >>> s = pd.S... |
@property
def size(self):
"\n Return an int representing the number of elements in this object.\n\n Return the number of rows if Series. Otherwise return the number of\n rows times number of columns if DataFrame.\n\n See Also\n --------\n ndarray.size : Number of elements i... | 4,814,416,826,443,750,000 | Return an int representing the number of elements in this object.
Return the number of rows if Series. Otherwise return the number of
rows times number of columns if DataFrame.
See Also
--------
ndarray.size : Number of elements in the array.
Examples
--------
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.size
3... | pandas/core/generic.py | size | kapilepatel/pandas | python | @property
def size(self):
"\n Return an int representing the number of elements in this object.\n\n Return the number of rows if Series. Otherwise return the number of\n rows times number of columns if DataFrame.\n\n See Also\n --------\n ndarray.size : Number of elements i... |
@property
def _selected_obj(self):
' internal compat with SelectionMixin '
return self | 223,117,681,623,033,600 | internal compat with SelectionMixin | pandas/core/generic.py | _selected_obj | kapilepatel/pandas | python | @property
def _selected_obj(self):
' '
return self |
@property
def _obj_with_exclusions(self):
' internal compat with SelectionMixin '
return self | 3,547,958,630,176,529,000 | internal compat with SelectionMixin | pandas/core/generic.py | _obj_with_exclusions | kapilepatel/pandas | python | @property
def _obj_with_exclusions(self):
' '
return self |
def set_axis(self, labels, axis=0, inplace=None):
'\n Assign desired index to given axis.\n\n Indexes for column or row labels can be changed by assigning\n a list-like or Index.\n\n .. versionchanged:: 0.21.0\n\n The signature is now `labels` and `axis`, consistent with\n ... | 8,482,339,352,067,965,000 | Assign desired index to given axis.
Indexes for column or row labels can be changed by assigning
a list-like or Index.
.. versionchanged:: 0.21.0
The signature is now `labels` and `axis`, consistent with
the rest of pandas API. Previously, the `axis` and `labels`
arguments were respectively the first and se... | pandas/core/generic.py | set_axis | kapilepatel/pandas | python | def set_axis(self, labels, axis=0, inplace=None):
'\n Assign desired index to given axis.\n\n Indexes for column or row labels can be changed by assigning\n a list-like or Index.\n\n .. versionchanged:: 0.21.0\n\n The signature is now `labels` and `axis`, consistent with\n ... |
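The `set_axis` record above describes assigning new labels to a given axis. A small sketch of the non-inplace form (in current pandas `set_axis` returns a new object by default; this example is not drawn from the dataset):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# Replace the row index; a new object is returned.
relabelled = df.set_axis(["x", "y", "z"], axis="index")

# Replace the column labels instead.
renamed_cols = df.set_axis(["col1", "col2"], axis="columns")
```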
def transpose(self, *args, **kwargs):
'\n Permute the dimensions of the %(klass)s\n\n Parameters\n ----------\n args : %(args_transpose)s\n copy : boolean, default False\n Make a copy of the underlying data. Mixed-dtype data will\n always result in a copy\n ... | 5,950,421,671,865,310,000 | Permute the dimensions of the %(klass)s
Parameters
----------
args : %(args_transpose)s
copy : boolean, default False
Make a copy of the underlying data. Mixed-dtype data will
always result in a copy
**kwargs
Additional keyword arguments will be passed to the function.
Returns
-------
y : same as input
E... | pandas/core/generic.py | transpose | kapilepatel/pandas | python | def transpose(self, *args, **kwargs):
'\n Permute the dimensions of the %(klass)s\n\n Parameters\n ----------\n args : %(args_transpose)s\n copy : boolean, default False\n Make a copy of the underlying data. Mixed-dtype data will\n always result in a copy\n ... |
def swapaxes(self, axis1, axis2, copy=True):
'\n Interchange axes and swap values axes appropriately.\n\n Returns\n -------\n y : same as input\n '
i = self._get_axis_number(axis1)
j = self._get_axis_number(axis2)
if (i == j):
if copy:
return self.c... | 1,932,957,769,574,267,100 | Interchange axes and swap values axes appropriately.
Returns
-------
y : same as input | pandas/core/generic.py | swapaxes | kapilepatel/pandas | python | def swapaxes(self, axis1, axis2, copy=True):
'\n Interchange axes and swap values axes appropriately.\n\n Returns\n -------\n y : same as input\n '
i = self._get_axis_number(axis1)
j = self._get_axis_number(axis2)
if (i == j):
if copy:
return self.c... |
def droplevel(self, level, axis=0):
"\n Return DataFrame with requested index / column level(s) removed.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n level : int, str, or list-like\n If a string is given, must be the name of a level\n If list-li... | 8,172,893,012,025,901,000 | Return DataFrame with requested index / column level(s) removed.
.. versionadded:: 0.24.0
Parameters
----------
level : int, str, or list-like
If a string is given, must be the name of a level
If list-like, elements must be names or positional indexes
of levels.
axis : {0 or 'index', 1 or 'columns'}, def... | pandas/core/generic.py | droplevel | kapilepatel/pandas | python | def droplevel(self, level, axis=0):
"\n Return DataFrame with requested index / column level(s) removed.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n level : int, str, or list-like\n If a string is given, must be the name of a level\n If list-li... |
def pop(self, item):
"\n Return item and drop from frame. Raise KeyError if not found.\n\n Parameters\n ----------\n item : str\n Label of column to be popped.\n\n Returns\n -------\n Series\n\n Examples\n --------\n >>> df = pd.DataFr... | -819,123,279,384,925,300 | Return item and drop from frame. Raise KeyError if not found.
Parameters
----------
item : str
Label of column to be popped.
Returns
-------
Series
Examples
--------
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),... | pandas/core/generic.py | pop | kapilepatel/pandas | python | def pop(self, item):
"\n Return item and drop from frame. Raise KeyError if not found.\n\n Parameters\n ----------\n item : str\n Label of column to be popped.\n\n Returns\n -------\n Series\n\n Examples\n --------\n >>> df = pd.DataFr... |
def squeeze(self, axis=None):
"\n Squeeze 1 dimensional axis objects into scalars.\n\n Series or DataFrames with a single element are squeezed to a scalar.\n DataFrames with a single column or a single row are squeezed to a\n Series. Otherwise the object is unchanged.\n\n This met... | 9,216,137,054,533,155,000 | Squeeze 1 dimensional axis objects into scalars.
Series or DataFrames with a single element are squeezed to a scalar.
DataFrames with a single column or a single row are squeezed to a
Series. Otherwise the object is unchanged.
This method is most useful when you don't know if your
object is a Series or DataFrame, but... | pandas/core/generic.py | squeeze | kapilepatel/pandas | python | def squeeze(self, axis=None):
"\n Squeeze 1 dimensional axis objects into scalars.\n\n Series or DataFrames with a single element are squeezed to a scalar.\n DataFrames with a single column or a single row are squeezed to a\n Series. Otherwise the object is unchanged.\n\n This met... |
def swaplevel(self, i=(- 2), j=(- 1), axis=0):
'\n Swap levels i and j in a MultiIndex on a particular axis\n\n Parameters\n ----------\n i, j : int, str (can be mixed)\n Level of index to be swapped. Can pass level name as string.\n\n Returns\n -------\n ... | -4,580,033,766,606,492,700 | Swap levels i and j in a MultiIndex on a particular axis
Parameters
----------
i, j : int, str (can be mixed)
Level of index to be swapped. Can pass level name as string.
Returns
-------
swapped : same type as caller (new object)
.. versionchanged:: 0.18.1
The indexes ``i`` and ``j`` are now optional, and de... | pandas/core/generic.py | swaplevel | kapilepatel/pandas | python | def swaplevel(self, i=(- 2), j=(- 1), axis=0):
'\n Swap levels i and j in a MultiIndex on a particular axis\n\n Parameters\n ----------\n i, j : int, str (can be mixed)\n Level of index to be swapped. Can pass level name as string.\n\n Returns\n -------\n ... |
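A minimal `swaplevel` sketch matching the record above; with the defaults `i=-2, j=-1` the two innermost levels are exchanged (example values are hypothetical):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples(
    [("a", 1), ("b", 2)], names=["outer", "inner"]
)
s = pd.Series([10, 20], index=mi)

# Swap the two index levels; the data order is unchanged,
# only the level order of the MultiIndex moves.
swapped = s.swaplevel()
```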
def rename(self, *args, **kwargs):
'\n Alter axes input function or functions. Function / dict values must be\n unique (1-to-1). Labels not contained in a dict / Series will be left\n as-is. Extra labels listed don\'t throw an error. Alternatively, change\n ``Series.name`` with a scalar ... | 5,026,194,608,817,388,000 | Alter axes input function or functions. Function / dict values must be
unique (1-to-1). Labels not contained in a dict / Series will be left
as-is. Extra labels listed don't throw an error. Alternatively, change
``Series.name`` with a scalar value (Series only).
Parameters
----------
%(axes)s : scalar, list-like, dict... | pandas/core/generic.py | rename | kapilepatel/pandas | python | def rename(self, *args, **kwargs):
'\n Alter axes input function or functions. Function / dict values must be\n unique (1-to-1). Labels not contained in a dict / Series will be left\n as-is. Extra labels listed don\'t throw an error. Alternatively, change\n ``Series.name`` with a scalar ... |
@rewrite_axis_style_signature('mapper', [('copy', True), ('inplace', False)])
def rename_axis(self, mapper=sentinel, **kwargs):
'\n Set the name of the axis for the index or columns.\n\n Parameters\n ----------\n mapper : scalar, list-like, optional\n Value to set the axis nam... | 4,914,523,485,724,767,000 | Set the name of the axis for the index or columns.
Parameters
----------
mapper : scalar, list-like, optional
Value to set the axis name attribute.
index, columns : scalar, list-like, dict-like or function, optional
A scalar, list-like, dict-like or functions transformations to
apply to that axis' values.
... | pandas/core/generic.py | rename_axis | kapilepatel/pandas | python | @rewrite_axis_style_signature('mapper', [('copy', True), ('inplace', False)])
def rename_axis(self, mapper=sentinel, **kwargs):
'\n Set the name of the axis for the index or columns.\n\n Parameters\n ----------\n mapper : scalar, list-like, optional\n Value to set the axis nam... |
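The `rename_axis` record above sets the *name of the axis itself*, not the labels on it. A short sketch (not from the dataset):

```python
import pandas as pd

df = pd.DataFrame({"speed": [350, 18]}, index=["falcon", "dog"])

# Name the row index; labels and data are unchanged.
named = df.rename_axis("animal")

# Name the column axis instead.
named_cols = df.rename_axis("attribute", axis="columns")
```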
def _set_axis_name(self, name, axis=0, inplace=False):
'\n Set the name(s) of the axis.\n\n Parameters\n ----------\n name : str or list of str\n Name(s) to set.\n axis : {0 or \'index\', 1 or \'columns\'}, default 0\n The axis to set the label. The value 0 o... | -3,503,825,494,571,687,400 | Set the name(s) of the axis.
Parameters
----------
name : str or list of str
Name(s) to set.
axis : {0 or 'index', 1 or 'columns'}, default 0
The axis to set the label. The value 0 or 'index' specifies index,
and the value 1 or 'columns' specifies columns.
inplace : bool, default False
If `True`, do op... | pandas/core/generic.py | _set_axis_name | kapilepatel/pandas | python | def _set_axis_name(self, name, axis=0, inplace=False):
'\n Set the name(s) of the axis.\n\n Parameters\n ----------\n name : str or list of str\n Name(s) to set.\n axis : {0 or \'index\', 1 or \'columns\'}, default 0\n The axis to set the label. The value 0 o... |
def equals(self, other):
'\n Test whether two objects contain the same elements.\n\n This function allows two Series or DataFrames to be compared against\n each other to see if they have the same shape and elements. NaNs in\n the same location are considered equal. The column headers do ... | -7,646,363,434,194,578,000 | Test whether two objects contain the same elements.
This function allows two Series or DataFrames to be compared against
each other to see if they have the same shape and elements. NaNs in
the same location are considered equal. The column headers do not
need to have the same type, but the elements within the columns ... | pandas/core/generic.py | equals | kapilepatel/pandas | python | def equals(self, other):
'\n Test whether two objects contain the same elements.\n\n This function allows two Series or DataFrames to be compared against\n each other to see if they have the same shape and elements. NaNs in\n the same location are considered equal. The column headers do ... |
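The `equals` docstring above notes that NaNs in the same location compare equal, which distinguishes it from element-wise `==`. A minimal sketch:

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({"x": [1.0, np.nan]})
b = pd.DataFrame({"x": [1.0, np.nan]})
c = pd.DataFrame({"x": [1.0, 2.0]})

# NaNs in matching positions are treated as equal by equals().
same = a.equals(b)        # True
different = a.equals(c)   # False
```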
def bool(self):
'\n Return the bool of a single element PandasObject.\n\n This must be a boolean scalar value, either True or False. Raise a\n ValueError if the PandasObject does not have exactly 1 element, or that\n element is not boolean\n '
v = self.squeeze()
if isinst... | -5,600,088,005,043,880,000 | Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a
ValueError if the PandasObject does not have exactly 1 element, or that
element is not boolean | pandas/core/generic.py | bool | kapilepatel/pandas | python | def bool(self):
'\n Return the bool of a single element PandasObject.\n\n This must be a boolean scalar value, either True or False. Raise a\n ValueError if the PandasObject does not have exactly 1 element, or that\n element is not boolean\n '
v = self.squeeze()
if isinst... |
def _is_level_reference(self, key, axis=0):
'\n Test whether a key is a level reference for a given axis.\n\n To be considered a level reference, `key` must be a string that:\n - (axis=0): Matches the name of an index level and does NOT match\n a column label.\n - (axis=1)... | 5,860,233,145,217,881,000 | Test whether a key is a level reference for a given axis.
To be considered a level reference, `key` must be a string that:
- (axis=0): Matches the name of an index level and does NOT match
a column label.
- (axis=1): Matches the name of a column level and does NOT match
an index label.
Parameters
--------... | pandas/core/generic.py | _is_level_reference | kapilepatel/pandas | python | def _is_level_reference(self, key, axis=0):
'\n Test whether a key is a level reference for a given axis.\n\n To be considered a level reference, `key` must be a string that:\n - (axis=0): Matches the name of an index level and does NOT match\n a column label.\n - (axis=1)... |
def _is_label_reference(self, key, axis=0):
'\n Test whether a key is a label reference for a given axis.\n\n To be considered a label reference, `key` must be a string that:\n - (axis=0): Matches a column label\n - (axis=1): Matches an index label\n\n Parameters\n ----... | 7,299,412,412,023,767,000 | Test whether a key is a label reference for a given axis.
To be considered a label reference, `key` must be a string that:
- (axis=0): Matches a column label
- (axis=1): Matches an index label
Parameters
----------
key: str
Potential label name
axis: int, default 0
Axis perpendicular to the axis that labe... | pandas/core/generic.py | _is_label_reference | kapilepatel/pandas | python | def _is_label_reference(self, key, axis=0):
'\n Test whether a key is a label reference for a given axis.\n\n To be considered a label reference, `key` must be a string that:\n - (axis=0): Matches a column label\n - (axis=1): Matches an index label\n\n Parameters\n ----... |
def _is_label_or_level_reference(self, key, axis=0):
'\n Test whether a key is a label or level reference for a given axis.\n\n To be considered either a label or a level reference, `key` must be a\n string that:\n - (axis=0): Matches a column label or an index level\n - (axis... | 1,784,305,677,259,383,000 | Test whether a key is a label or level reference for a given axis.
To be considered either a label or a level reference, `key` must be a
string that:
- (axis=0): Matches a column label or an index level
- (axis=1): Matches an index label or a column level
Parameters
----------
key: str
Potential label or leve... | pandas/core/generic.py | _is_label_or_level_reference | kapilepatel/pandas | python | def _is_label_or_level_reference(self, key, axis=0):
'\n Test whether a key is a label or level reference for a given axis.\n\n To be considered either a label or a level reference, `key` must be a\n string that:\n - (axis=0): Matches a column label or an index level\n - (axis... |
def _check_label_or_level_ambiguity(self, key, axis=0):
'\n Check whether `key` is ambiguous.\n\n By ambiguous, we mean that it matches both a level of the input\n `axis` and a label of the other axis.\n\n Parameters\n ----------\n key: str or object\n label or l... | -731,676,864,510,409,500 | Check whether `key` is ambiguous.
By ambiguous, we mean that it matches both a level of the input
`axis` and a label of the other axis.
Parameters
----------
key: str or object
label or level name
axis: int, default 0
Axis that levels are associated with (0 for index, 1 for columns)
Raises
------
ValueError:... | pandas/core/generic.py | _check_label_or_level_ambiguity | kapilepatel/pandas | python | def _check_label_or_level_ambiguity(self, key, axis=0):
'\n Check whether `key` is ambiguous.\n\n By ambiguous, we mean that it matches both a level of the input\n `axis` and a label of the other axis.\n\n Parameters\n ----------\n key: str or object\n label or l... |
def _get_label_or_level_values(self, key, axis=0):
"\n Return a 1-D array of values associated with `key`, a label or level\n from the given `axis`.\n\n Retrieval logic:\n - (axis=0): Return column values if `key` matches a column label.\n Otherwise return index level values... | -345,589,549,459,989,760 | Return a 1-D array of values associated with `key`, a label or level
from the given `axis`.
Retrieval logic:
- (axis=0): Return column values if `key` matches a column label.
Otherwise return index level values if `key` matches an index
level.
- (axis=1): Return row values if `key` matches an index label.
... | pandas/core/generic.py | _get_label_or_level_values | kapilepatel/pandas | python | def _get_label_or_level_values(self, key, axis=0):
"\n Return a 1-D array of values associated with `key`, a label or level\n from the given `axis`.\n\n Retrieval logic:\n - (axis=0): Return column values if `key` matches a column label.\n Otherwise return index level values... |
def _drop_labels_or_levels(self, keys, axis=0):
'\n Drop labels and/or levels for the given `axis`.\n\n For each key in `keys`:\n - (axis=0): If key matches a column label then drop the column.\n Otherwise if key matches an index level then drop the level.\n - (axis=1): If... | -2,654,759,804,545,341,400 | Drop labels and/or levels for the given `axis`.
For each key in `keys`:
- (axis=0): If key matches a column label then drop the column.
Otherwise if key matches an index level then drop the level.
- (axis=1): If key matches an index label then drop the row.
Otherwise if key matches a column level then drop... | pandas/core/generic.py | _drop_labels_or_levels | kapilepatel/pandas | python | def _drop_labels_or_levels(self, keys, axis=0):
'\n Drop labels and/or levels for the given `axis`.\n\n For each key in `keys`:\n - (axis=0): If key matches a column label then drop the column.\n Otherwise if key matches an index level then drop the level.\n - (axis=1): If... |
def __iter__(self):
'Iterate over info axis'
return iter(self._info_axis) | -2,233,122,514,471,977,500 | Iterate over info axis | pandas/core/generic.py | __iter__ | kapilepatel/pandas | python | def __iter__(self):
return iter(self._info_axis) |
def keys(self):
"Get the 'info axis' (see Indexing for more)\n\n This is index for Series, columns for DataFrame and major_axis for\n Panel.\n "
return self._info_axis | 5,350,577,501,278,983,000 | Get the 'info axis' (see Indexing for more)
This is index for Series, columns for DataFrame and major_axis for
Panel. | pandas/core/generic.py | keys | kapilepatel/pandas | python | def keys(self):
"Get the 'info axis' (see Indexing for more)\n\n This is index for Series, columns for DataFrame and major_axis for\n Panel.\n "
return self._info_axis |
def iteritems(self):
'Iterate over (label, values) on info axis\n\n This is index for Series, columns for DataFrame, major_axis for Panel,\n and so on.\n '
for h in self._info_axis:
(yield (h, self[h])) | -8,959,623,205,265,551,000 | Iterate over (label, values) on info axis
This is index for Series, columns for DataFrame, major_axis for Panel,
and so on. | pandas/core/generic.py | iteritems | kapilepatel/pandas | python | def iteritems(self):
'Iterate over (label, values) on info axis\n\n This is index for Series, columns for DataFrame, major_axis for Panel,\n and so on.\n '
for h in self._info_axis:
(yield (h, self[h])) |
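The `iteritems` record above yields `(label, values)` pairs along the info axis. In current pandas releases this method is spelled `items()` (`iteritems` was a deprecated alias and has since been removed), so the sketch below uses that name:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Iterate (column label, column Series) pairs along the info axis.
collected = {label: col.tolist() for label, col in df.items()}
```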
def __len__(self):
'Returns length of info axis'
return len(self._info_axis) | 2,516,517,228,926,761,500 | Returns length of info axis | pandas/core/generic.py | __len__ | kapilepatel/pandas | python | def __len__(self):
return len(self._info_axis) |
def __contains__(self, key):
'True if the key is in the info axis'
return (key in self._info_axis) | 3,148,144,458,199,314,000 | True if the key is in the info axis | pandas/core/generic.py | __contains__ | kapilepatel/pandas | python | def __contains__(self, key):
return (key in self._info_axis) |
@property
def empty(self):
"\n Indicator whether DataFrame is empty.\n\n True if DataFrame is entirely empty (no items), meaning any of the\n axes are of length 0.\n\n Returns\n -------\n bool\n If DataFrame is empty, return True, if not return False.\n\n ... | 7,124,979,622,633,238,000 | Indicator whether DataFrame is empty.
True if DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
Returns
-------
bool
If DataFrame is empty, return True, if not return False.
See Also
--------
Series.dropna
DataFrame.dropna
Notes
-----
If DataFrame contains only NaNs, it is still n... | pandas/core/generic.py | empty | kapilepatel/pandas | python | @property
def empty(self):
"\n Indicator whether DataFrame is empty.\n\n True if DataFrame is entirely empty (no items), meaning any of the\n axes are of length 0.\n\n Returns\n -------\n bool\n If DataFrame is empty, return True, if not return False.\n\n ... |
def to_dense(self):
'\n Return dense representation of NDFrame (as opposed to sparse).\n '
return self | -8,314,391,330,797,823,000 | Return dense representation of NDFrame (as opposed to sparse). | pandas/core/generic.py | to_dense | kapilepatel/pandas | python | def to_dense(self):
'\n \n '
return self |
def _repr_latex_(self):
'\n Returns a LaTeX representation for a particular object.\n Mainly for use with nbconvert (jupyter notebook conversion to pdf).\n '
if config.get_option('display.latex.repr'):
return self.to_latex()
else:
return None | 3,186,633,606,793,060,000 | Returns a LaTeX representation for a particular object.
Mainly for use with nbconvert (jupyter notebook conversion to pdf). | pandas/core/generic.py | _repr_latex_ | kapilepatel/pandas | python | def _repr_latex_(self):
'\n Returns a LaTeX representation for a particular object.\n Mainly for use with nbconvert (jupyter notebook conversion to pdf).\n '
if config.get_option('display.latex.repr'):
return self.to_latex()
else:
return None |