Columns: doc_content (string, 1 to 386k chars), doc_id (string, 5 to 188 chars)
sklearn.metrics.average_precision_score sklearn.metrics.average_precision_score(y_true, y_score, *, average='macro', pos_label=1, sample_weight=None) [source] Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, ...
sklearn.modules.generated.sklearn.metrics.average_precision_score
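A minimal usage sketch of the entry above (the input values are illustrative, not from the entry). AP sums precision at each increment in recall, without interpolation:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Binary ground truth and classifier scores (illustrative values)
y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

# AP = sum over thresholds of (R_n - R_{n-1}) * P_n
ap = average_precision_score(y_true, y_scores)
print(ap)  # 0.5 * 1 + 0.5 * (2/3) = 5/6
```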
sklearn.metrics.balanced_accuracy_score sklearn.metrics.balanced_accuracy_score(y_true, y_pred, *, sample_weight=None, adjusted=False) [source] Compute the balanced accuracy. The balanced accuracy is used in binary and multiclass classification problems to deal with imbalanced datasets. It is defined as the average of reca...
sklearn.modules.generated.sklearn.metrics.balanced_accuracy_score
sklearn.metrics.brier_score_loss sklearn.metrics.brier_score_loss(y_true, y_prob, *, sample_weight=None, pos_label=None) [source] Compute the Brier score loss. The smaller the Brier score loss, the better, hence the naming with “loss”. The Brier score measures the mean squared difference between the predicted proba...
sklearn.modules.generated.sklearn.metrics.brier_score_loss
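A short sketch of the entry above with illustrative probabilities; the loss is simply the mean squared gap between each predicted probability and the binary outcome:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Probabilistic forecasts for the positive class (illustrative values)
y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])

# Mean squared difference between predicted probability and outcome
bs = brier_score_loss(y_true, y_prob)
print(bs)  # (0.01 + 0.01 + 0.04 + 0.09) / 4 = 0.0375
```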
sklearn.metrics.calinski_harabasz_score sklearn.metrics.calinski_harabasz_score(X, labels) [source] Compute the Calinski and Harabasz score. It is also known as the Variance Ratio Criterion. The score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion. Read more in the User...
sklearn.modules.generated.sklearn.metrics.calinski_harabasz_score
sklearn.metrics.check_scoring sklearn.metrics.check_scoring(estimator, scoring=None, *, allow_none=False) [source] Determine scorer from user options. A TypeError will be thrown if the estimator cannot be scored. Parameters estimatorestimator object implementing ‘fit’ The object to use to fit the data. scor...
sklearn.modules.generated.sklearn.metrics.check_scoring
sklearn.metrics.classification_report sklearn.metrics.classification_report(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division='warn') [source] Build a text report showing the main classification metrics. Read more in the User Guide. Parameters y_tr...
sklearn.modules.generated.sklearn.metrics.classification_report
sklearn.metrics.cluster.contingency_matrix sklearn.metrics.cluster.contingency_matrix(labels_true, labels_pred, *, eps=None, sparse=False, dtype=<class 'numpy.int64'>) [source] Build a contingency matrix describing the relationship between labels. Parameters labels_trueint array, shape = [n_samples] Ground tr...
sklearn.modules.generated.sklearn.metrics.cluster.contingency_matrix
sklearn.metrics.cluster.pair_confusion_matrix sklearn.metrics.cluster.pair_confusion_matrix(labels_true, labels_pred) [source] Pair confusion matrix arising from two clusterings. The pair confusion matrix \(C\) computes a 2 by 2 similarity matrix between two clusterings by considering all pairs of samples and count...
sklearn.modules.generated.sklearn.metrics.cluster.pair_confusion_matrix
sklearn.metrics.cohen_kappa_score sklearn.metrics.cohen_kappa_score(y1, y2, *, labels=None, weights=None, sample_weight=None) [source] Cohen’s kappa: a statistic that measures inter-annotator agreement. This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on...
sklearn.modules.generated.sklearn.metrics.cohen_kappa_score
sklearn.metrics.completeness_score sklearn.metrics.completeness_score(labels_true, labels_pred) [source] Completeness metric of a cluster labeling given a ground truth. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. This metric i...
sklearn.modules.generated.sklearn.metrics.completeness_score
sklearn.metrics.confusion_matrix sklearn.metrics.confusion_matrix(y_true, y_pred, *, labels=None, sample_weight=None, normalize=None) [source] Compute confusion matrix to evaluate the accuracy of a classification. By definition a confusion matrix \(C\) is such that \(C_{i, j}\) is equal to the number of observation...
sklearn.modules.generated.sklearn.metrics.confusion_matrix
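A sketch of the entry above with illustrative multiclass labels, showing the \(C_{i, j}\) convention (row = true class, column = predicted class):

```python
from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]

# C[i, j] counts samples of true class i predicted as class j
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[2 0 0]
#  [0 0 1]
#  [1 0 2]]
```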
sklearn.metrics.consensus_score sklearn.metrics.consensus_score(a, b, *, similarity='jaccard') [source] The similarity of two sets of biclusters. Similarity between individual biclusters is computed. Then the best matching between sets is found using the Hungarian algorithm. The final score is the sum of similariti...
sklearn.modules.generated.sklearn.metrics.consensus_score
sklearn.metrics.coverage_error sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None) [source] Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_scores are ...
sklearn.modules.generated.sklearn.metrics.coverage_error
sklearn.metrics.davies_bouldin_score sklearn.metrics.davies_bouldin_score(X, labels) [source] Computes the Davies-Bouldin score. The score is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster distances to between-cluster distance...
sklearn.modules.generated.sklearn.metrics.davies_bouldin_score
sklearn.metrics.dcg_score sklearn.metrics.dcg_score(y_true, y_score, *, k=None, log_base=2, sample_weight=None, ignore_ties=False) [source] Compute Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. This ranking metric yields a...
sklearn.modules.generated.sklearn.metrics.dcg_score
sklearn.metrics.det_curve sklearn.metrics.det_curve(y_true, y_score, pos_label=None, sample_weight=None) [source] Compute error rates for different probability thresholds. Note This metric is used for evaluation of ranking and error tradeoffs of a binary classification task. Read more in the User Guide. New in v...
sklearn.modules.generated.sklearn.metrics.det_curve
sklearn.metrics.explained_variance_score sklearn.metrics.explained_variance_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Explained variance regression score function. Best possible score is 1.0, lower values are worse. Read more in the User Guide. Parameters y_truearray-l...
sklearn.modules.generated.sklearn.metrics.explained_variance_score
sklearn.metrics.f1_score sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the F1 score, also known as balanced F-score or F-measure. The F1 score can be interpreted as a weighted average of the precision and recall, wh...
sklearn.modules.generated.sklearn.metrics.f1_score
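A minimal sketch of the entry above with illustrative binary labels; with equal precision and recall the harmonic mean equals both:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 1]
y_pred = [1, 1, 1, 0]

# tp=2, fp=1, fn=1 -> precision = recall = 2/3 -> F1 = 2/3
f1 = f1_score(y_true, y_pred)
print(f1)
```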
sklearn.metrics.fbeta_score sklearn.metrics.fbeta_score(y_true, y_pred, *, beta, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the F-beta score. The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its wo...
sklearn.modules.generated.sklearn.metrics.fbeta_score
sklearn.metrics.fowlkes_mallows_score sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, *, sparse=False) [source] Measure the similarity of two clusterings of a set of points. New in version 0.18. The Fowlkes-Mallows index (FMI) is defined as the geometric mean of the precision and recall: F...
sklearn.modules.generated.sklearn.metrics.fowlkes_mallows_score
sklearn.metrics.get_scorer sklearn.metrics.get_scorer(scoring) [source] Get a scorer from string. Read more in the User Guide. Parameters scoringstr or callable Scoring method as string. If callable it is returned as is. Returns scorercallable The scorer.
sklearn.modules.generated.sklearn.metrics.get_scorer
sklearn.metrics.hamming_loss sklearn.metrics.hamming_loss(y_true, y_pred, *, sample_weight=None) [source] Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse ma...
sklearn.modules.generated.sklearn.metrics.hamming_loss
sklearn.metrics.hinge_loss sklearn.metrics.hinge_loss(y_true, pred_decision, *, labels=None, sample_weight=None) [source] Average hinge loss (non-regularized). In the binary case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always neg...
sklearn.modules.generated.sklearn.metrics.hinge_loss
sklearn.metrics.homogeneity_completeness_v_measure sklearn.metrics.homogeneity_completeness_v_measure(labels_true, labels_pred, *, beta=1.0) [source] Compute the homogeneity and completeness and V-Measure scores at once. Those metrics are based on normalized conditional entropy measures of the clustering labeling t...
sklearn.modules.generated.sklearn.metrics.homogeneity_completeness_v_measure
sklearn.metrics.homogeneity_score sklearn.metrics.homogeneity_score(labels_true, labels_pred) [source] Homogeneity metric of a cluster labeling given a ground truth. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. This metric is independ...
sklearn.modules.generated.sklearn.metrics.homogeneity_score
sklearn.metrics.jaccard_score sklearn.metrics.jaccard_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Jaccard similarity coefficient score. The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divide...
sklearn.modules.generated.sklearn.metrics.jaccard_score
sklearn.metrics.label_ranking_average_precision_score sklearn.metrics.label_ranking_average_precision_score(y_true, y_score, *, sample_weight=None) [source] Compute ranking-based average precision. Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the rat...
sklearn.modules.generated.sklearn.metrics.label_ranking_average_precision_score
sklearn.metrics.label_ranking_loss sklearn.metrics.label_ranking_loss(y_true, y_score, *, sample_weight=None) [source] Compute Ranking loss measure. Compute the average number of label pairs that are incorrectly ordered given y_score weighted by the size of the label set and the number of labels not in the label se...
sklearn.modules.generated.sklearn.metrics.label_ranking_loss
sklearn.metrics.log_loss sklearn.metrics.log_loss(y_true, y_pred, *, eps=1e-15, normalize=True, sample_weight=None, labels=None) [source] Log loss, aka logistic loss or cross-entropy loss. This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as th...
sklearn.modules.generated.sklearn.metrics.log_loss
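A sketch of the entry above with illustrative class probabilities; the loss is the mean negative log-probability assigned to the true class:

```python
from sklearn.metrics import log_loss

y_true = [0, 1]
# One row of class probabilities per sample (columns ordered by label)
y_pred = [[0.9, 0.1], [0.2, 0.8]]

# -(log 0.9 + log 0.8) / 2
ll = log_loss(y_true, y_pred)
print(ll)
```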
sklearn.metrics.make_scorer sklearn.metrics.make_scorer(score_func, *, greater_is_better=True, needs_proba=False, needs_threshold=False, **kwargs) [source] Make a scorer from a performance metric or loss function. This factory function wraps scoring functions for use in GridSearchCV and cross_val_score. It takes a ...
sklearn.modules.generated.sklearn.metrics.make_scorer
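A sketch of the entry above wrapping a custom loss (`max_abs_error` is a hypothetical helper defined here for illustration, not a sklearn function). With greater_is_better=False, the returned scorer negates the loss so "higher is better" holds for GridSearchCV and cross_val_score:

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import make_scorer

def max_abs_error(y_true, y_pred):
    """Custom loss for illustration: worst-case absolute residual."""
    return np.max(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# greater_is_better=False: the scorer flips the sign of the loss
scorer = make_scorer(max_abs_error, greater_is_better=False)

X = [[0], [1], [2], [3]]
y = [1, 2, 3, 4]
reg = DummyRegressor(strategy="constant", constant=3).fit(X, y)
score = scorer(reg, X, y)
print(score)  # predictions are all 3 -> max residual 2 -> scorer -2.0
```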
sklearn.metrics.matthews_corrcoef sklearn.metrics.matthews_corrcoef(y_true, y_pred, *, sample_weight=None) [source] Compute the Matthews correlation coefficient (MCC). The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary and multiclass classifications. It takes into...
sklearn.modules.generated.sklearn.metrics.matthews_corrcoef
sklearn.metrics.max_error sklearn.metrics.max_error(y_true, y_pred) [source] max_error metric calculates the maximum residual error. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) Estimated target ...
sklearn.modules.generated.sklearn.metrics.max_error
sklearn.metrics.mean_absolute_error sklearn.metrics.mean_absolute_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Mean absolute error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (cor...
sklearn.modules.generated.sklearn.metrics.mean_absolute_error
sklearn.metrics.mean_absolute_percentage_error sklearn.metrics.mean_absolute_percentage_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average') [source] Mean absolute percentage error regression loss. Note here that we do not represent the output as a percentage in range [0, 100]. Instead, we repre...
sklearn.modules.generated.sklearn.metrics.mean_absolute_percentage_error
sklearn.metrics.mean_gamma_deviance sklearn.metrics.mean_gamma_deviance(y_true, y_pred, *, sample_weight=None) [source] Mean Gamma deviance regression loss. Gamma deviance is equivalent to the Tweedie deviance with the power parameter power=2. It is invariant to scaling of the target variable, and measures relative...
sklearn.modules.generated.sklearn.metrics.mean_gamma_deviance
sklearn.metrics.mean_poisson_deviance sklearn.metrics.mean_poisson_deviance(y_true, y_pred, *, sample_weight=None) [source] Mean Poisson deviance regression loss. Poisson deviance is equivalent to the Tweedie deviance with the power parameter power=1. Read more in the User Guide. Parameters y_truearray-like of ...
sklearn.modules.generated.sklearn.metrics.mean_poisson_deviance
sklearn.metrics.mean_squared_error sklearn.metrics.mean_squared_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average', squared=True) [source] Mean squared error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground...
sklearn.modules.generated.sklearn.metrics.mean_squared_error
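A sketch of the entry above with illustrative targets. The `squared=False` option documented here returns the RMSE directly; taking the square root explicitly, as below, is equivalent:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

mse = mean_squared_error(y_true, y_pred)  # (0.25 + 0.25 + 0 + 1) / 4
rmse = np.sqrt(mse)                       # root mean squared error
print(mse, rmse)
```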
sklearn.metrics.mean_squared_log_error sklearn.metrics.mean_squared_log_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Mean squared logarithmic error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) ...
sklearn.modules.generated.sklearn.metrics.mean_squared_log_error
sklearn.metrics.mean_tweedie_deviance sklearn.metrics.mean_tweedie_deviance(y_true, y_pred, *, sample_weight=None, power=0) [source] Mean Tweedie deviance regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. y_predarray-like of...
sklearn.modules.generated.sklearn.metrics.mean_tweedie_deviance
sklearn.metrics.median_absolute_error sklearn.metrics.median_absolute_error(y_true, y_pred, *, multioutput='uniform_average', sample_weight=None) [source] Median absolute error regression loss. Median absolute error output is non-negative floating point. The best value is 0.0. Read more in the User Guide. Paramete...
sklearn.modules.generated.sklearn.metrics.median_absolute_error
sklearn.metrics.multilabel_confusion_matrix sklearn.metrics.multilabel_confusion_matrix(y_true, y_pred, *, sample_weight=None, labels=None, samplewise=False) [source] Compute a confusion matrix for each class or sample. New in version 0.21. Compute class-wise (default) or sample-wise (samplewise=True) multilabel ...
sklearn.modules.generated.sklearn.metrics.multilabel_confusion_matrix
sklearn.metrics.mutual_info_score sklearn.metrics.mutual_info_score(labels_true, labels_pred, *, contingency=None) [source] Mutual Information between two clusterings. The Mutual Information is a measure of the similarity between two labels of the same data. Where \(|U_i|\) is the number of the samples in cluster \...
sklearn.modules.generated.sklearn.metrics.mutual_info_score
sklearn.metrics.ndcg_score sklearn.metrics.ndcg_score(y_true, y_score, *, k=None, sample_weight=None, ignore_ties=False) [source] Compute Normalized Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. Then divide by the best pos...
sklearn.modules.generated.sklearn.metrics.ndcg_score
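A minimal sketch of the entry above for a single query with illustrative relevances; inputs are 2D (n_queries, n_documents):

```python
from sklearn.metrics import ndcg_score

# One query: true relevances and predicted scores
y_true = [[1, 0, 0]]
y_score = [[0.1, 0.9, 0.2]]

# The only relevant doc lands at rank 3: DCG = 1/log2(4) = 0.5, IDCG = 1
ndcg = ndcg_score(y_true, y_score)
print(ndcg)  # 0.5
```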
sklearn.metrics.normalized_mutual_info_score sklearn.metrics.normalized_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source] Normalized Mutual Information between two clusterings. Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score to scale the...
sklearn.modules.generated.sklearn.metrics.normalized_mutual_info_score
sklearn.metrics.pairwise.additive_chi2_kernel sklearn.metrics.pairwise.additive_chi2_kernel(X, Y=None) [source] Computes the additive chi-squared kernel between observations in X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most comm...
sklearn.modules.generated.sklearn.metrics.pairwise.additive_chi2_kernel
sklearn.metrics.pairwise.chi2_kernel sklearn.metrics.pairwise.chi2_kernel(X, Y=None, gamma=1.0) [source] Computes the exponential chi-squared kernel between X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. ...
sklearn.modules.generated.sklearn.metrics.pairwise.chi2_kernel
sklearn.metrics.pairwise.cosine_distances sklearn.metrics.pairwise.cosine_distances(X, Y=None) [source] Compute cosine distance between samples in X and Y. Cosine distance is defined as 1.0 minus the cosine similarity. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples_X, n...
sklearn.modules.generated.sklearn.metrics.pairwise.cosine_distances
sklearn.metrics.pairwise.cosine_similarity sklearn.metrics.pairwise.cosine_similarity(X, Y=None, dense_output=True) [source] Compute cosine similarity between samples in X and Y. Cosine similarity, or the cosine kernel, computes similarity as the normalized dot product of X and Y: K(X, Y) = <X, Y> / (||X||*||Y||) O...
sklearn.modules.generated.sklearn.metrics.pairwise.cosine_similarity
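A sketch of the entry above with illustrative vectors, showing the normalized dot product K(X, Y) = <X, Y> / (||X||*||Y||):

```python
from sklearn.metrics.pairwise import cosine_similarity

X = [[1, 0]]
Y = [[1, 1]]

# <X, Y> / (||X|| * ||Y||) = 1 / sqrt(2)
sim = cosine_similarity(X, Y)
print(sim)  # [[0.70710678]]
```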
sklearn.metrics.pairwise.distance_metrics sklearn.metrics.pairwise.distance_metrics() [source] Valid metrics for pairwise_distances. This function simply returns the valid pairwise distance metrics. It exists to allow for a description of the mapping for each of the valid strings. The valid distance metrics, and th...
sklearn.modules.generated.sklearn.metrics.pairwise.distance_metrics
sklearn.metrics.pairwise.euclidean_distances sklearn.metrics.pairwise.euclidean_distances(X, Y=None, *, Y_norm_squared=None, squared=False, X_norm_squared=None) [source] Considering the rows of X (and Y=X) as vectors, compute the distance matrix between each pair of vectors. For efficiency reasons, the euclidean di...
sklearn.modules.generated.sklearn.metrics.pairwise.euclidean_distances
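A minimal sketch of the entry above with illustrative rows; omitting Y computes distances between the rows of X itself:

```python
from sklearn.metrics.pairwise import euclidean_distances

X = [[0, 0], [3, 4]]

# With Y omitted, the distance matrix is computed over the rows of X
D = euclidean_distances(X)
print(D)
# [[0. 5.]
#  [5. 0.]]
```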
sklearn.metrics.pairwise.haversine_distances sklearn.metrics.pairwise.haversine_distances(X, Y=None) [source] Compute the Haversine distance between samples in X and Y. The Haversine (or great circle) distance is the angular distance between two points on the surface of a sphere. The first coordinate of each point ...
sklearn.modules.generated.sklearn.metrics.pairwise.haversine_distances
sklearn.metrics.pairwise.kernel_metrics sklearn.metrics.pairwise.kernel_metrics() [source] Valid metrics for pairwise_kernels. This function simply returns the valid pairwise distance metrics. It exists, however, to allow for a verbose description of the mapping for each of the valid strings. The valid distance me...
sklearn.modules.generated.sklearn.metrics.pairwise.kernel_metrics
sklearn.metrics.pairwise.laplacian_kernel sklearn.metrics.pairwise.laplacian_kernel(X, Y=None, gamma=None) [source] Compute the laplacian kernel between X and Y. The laplacian kernel is defined as: K(x, y) = exp(-gamma ||x-y||_1) for each pair of rows x in X and y in Y. Read more in the User Guide. New in version...
sklearn.modules.generated.sklearn.metrics.pairwise.laplacian_kernel
sklearn.metrics.pairwise.linear_kernel sklearn.metrics.pairwise.linear_kernel(X, Y=None, dense_output=True) [source] Compute the linear kernel between X and Y. Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None dense_...
sklearn.modules.generated.sklearn.metrics.pairwise.linear_kernel
sklearn.metrics.pairwise.manhattan_distances sklearn.metrics.pairwise.manhattan_distances(X, Y=None, *, sum_over_features=True) [source] Compute the L1 distances between the vectors in X and Y. With sum_over_features equal to False it returns the componentwise distances. Read more in the User Guide. Parameters ...
sklearn.modules.generated.sklearn.metrics.pairwise.manhattan_distances
sklearn.metrics.pairwise.nan_euclidean_distances sklearn.metrics.pairwise.nan_euclidean_distances(X, Y=None, *, squared=False, missing_values=nan, copy=True) [source] Calculate the euclidean distances in the presence of missing values. Compute the euclidean distance between each pair of samples in X and Y, where Y=...
sklearn.modules.generated.sklearn.metrics.pairwise.nan_euclidean_distances
sklearn.metrics.pairwise.paired_cosine_distances sklearn.metrics.pairwise.paired_cosine_distances(X, Y) [source] Computes the paired cosine distances between X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) Returns ...
sklearn.modules.generated.sklearn.metrics.pairwise.paired_cosine_distances
sklearn.metrics.pairwise.paired_distances sklearn.metrics.pairwise.paired_distances(X, Y, *, metric='euclidean', **kwds) [source] Computes the paired distances between X and Y. Computes the distances between (X[0], Y[0]), (X[1], Y[1]), etc… Read more in the User Guide. Parameters Xndarray of shape (n_samples, n...
sklearn.modules.generated.sklearn.metrics.pairwise.paired_distances
sklearn.metrics.pairwise.paired_euclidean_distances sklearn.metrics.pairwise.paired_euclidean_distances(X, Y) [source] Computes the paired euclidean distances between X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) R...
sklearn.modules.generated.sklearn.metrics.pairwise.paired_euclidean_distances
sklearn.metrics.pairwise.paired_manhattan_distances sklearn.metrics.pairwise.paired_manhattan_distances(X, Y) [source] Compute the L1 distances between the vectors in X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) R...
sklearn.modules.generated.sklearn.metrics.pairwise.paired_manhattan_distances
sklearn.metrics.pairwise.pairwise_kernels sklearn.metrics.pairwise.pairwise_kernels(X, Y=None, metric='linear', *, filter_params=False, n_jobs=None, **kwds) [source] Compute the kernel between arrays X and optional array Y. This method takes either a vector array or a kernel matrix, and returns a kernel matrix. If ...
sklearn.modules.generated.sklearn.metrics.pairwise.pairwise_kernels
sklearn.metrics.pairwise.polynomial_kernel sklearn.metrics.pairwise.polynomial_kernel(X, Y=None, degree=3, gamma=None, coef0=1) [source] Compute the polynomial kernel between X and Y: K(X, Y) = (gamma <X, Y> + coef0)^degree Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Y...
sklearn.modules.generated.sklearn.metrics.pairwise.polynomial_kernel
sklearn.metrics.pairwise.rbf_kernel sklearn.metrics.pairwise.rbf_kernel(X, Y=None, gamma=None) [source] Compute the rbf (gaussian) kernel between X and Y: K(x, y) = exp(-gamma ||x-y||^2) for each pair of rows x in X and y in Y. Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features)...
sklearn.modules.generated.sklearn.metrics.pairwise.rbf_kernel
sklearn.metrics.pairwise.sigmoid_kernel sklearn.metrics.pairwise.sigmoid_kernel(X, Y=None, gamma=None, coef0=1) [source] Compute the sigmoid kernel between X and Y: K(X, Y) = tanh(gamma <X, Y> + coef0) Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_sa...
sklearn.modules.generated.sklearn.metrics.pairwise.sigmoid_kernel
sklearn.metrics.pairwise_distances sklearn.metrics.pairwise_distances(X, Y=None, metric='euclidean', *, n_jobs=None, force_all_finite=True, **kwds) [source] Compute the distance matrix from a vector array X and optional Y. This method takes either a vector array or a distance matrix, and returns a distance matrix. ...
sklearn.modules.generated.sklearn.metrics.pairwise_distances
sklearn.metrics.pairwise_distances_argmin sklearn.metrics.pairwise_distances_argmin(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source] Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to th...
sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin
sklearn.metrics.pairwise_distances_argmin_min sklearn.metrics.pairwise_distances_argmin_min(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source] Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (accordi...
sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin_min
sklearn.metrics.pairwise_distances_chunked sklearn.metrics.pairwise_distances_chunked(X, Y=None, *, reduce_func=None, metric='euclidean', n_jobs=None, working_memory=None, **kwds) [source] Generate a distance matrix chunk by chunk with optional reduction. In cases where not all of a pairwise distance matrix needs t...
sklearn.modules.generated.sklearn.metrics.pairwise_distances_chunked
sklearn.metrics.plot_confusion_matrix sklearn.metrics.plot_confusion_matrix(estimator, X, y_true, *, labels=None, sample_weight=None, normalize=None, display_labels=None, include_values=True, xticks_rotation='horizontal', values_format=None, cmap='viridis', ax=None, colorbar=True) [source] Plot Confusion Matrix. Re...
sklearn.modules.generated.sklearn.metrics.plot_confusion_matrix
sklearn.metrics.plot_det_curve sklearn.metrics.plot_det_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot detection error tradeoff (DET) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. N...
sklearn.modules.generated.sklearn.metrics.plot_det_curve
sklearn.metrics.plot_precision_recall_curve sklearn.metrics.plot_precision_recall_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot Precision Recall Curve for binary classifiers. Extra keyword arguments will be passed to matplotlib’s pl...
sklearn.modules.generated.sklearn.metrics.plot_precision_recall_curve
sklearn.metrics.plot_roc_curve sklearn.metrics.plot_roc_curve(estimator, X, y, *, sample_weight=None, drop_intermediate=True, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot Receiver operating characteristic (ROC) curve. Extra keyword arguments will be passed to matplotlib’s plot...
sklearn.modules.generated.sklearn.metrics.plot_roc_curve
sklearn.metrics.precision_recall_curve sklearn.metrics.precision_recall_curve(y_true, probas_pred, *, pos_label=None, sample_weight=None) [source] Compute precision-recall pairs for different probability thresholds. Note: this implementation is restricted to the binary classification task. The precision is the rati...
sklearn.modules.generated.sklearn.metrics.precision_recall_curve
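A sketch of the entry above with illustrative scores; note the returned arrays' shapes: one fewer threshold than (precision, recall) pairs, with the last pair pinned at precision=1, recall=0:

```python
from sklearn.metrics import precision_recall_curve

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
# The curve starts at recall=1 (lowest threshold) and ends at recall=0
print(precision, recall, thresholds)
```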
sklearn.metrics.precision_recall_fscore_support sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, *, beta=1.0, labels=None, pos_label=1, average=None, warn_for=('precision', 'recall', 'f-score'), sample_weight=None, zero_division='warn') [source] Compute precision, recall, F-measure and support for each...
sklearn.modules.generated.sklearn.metrics.precision_recall_fscore_support
sklearn.metrics.precision_score sklearn.metrics.precision_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the precision. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false posit...
sklearn.modules.generated.sklearn.metrics.precision_score
sklearn.metrics.r2_score sklearn.metrics.r2_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] R^2 (coefficient of determination) regression score function. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always ...
sklearn.modules.generated.sklearn.metrics.r2_score
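A minimal sketch of the entry above with illustrative targets, spelling out the 1 - SS_res / SS_tot definition:

```python
from sklearn.metrics import r2_score

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# 1 - SS_res / SS_tot = 1 - 1.5 / 29.1875
r2 = r2_score(y_true, y_pred)
print(r2)
```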
sklearn.metrics.rand_score sklearn.metrics.rand_score(labels_true, labels_pred) [source] Rand index. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. ...
sklearn.modules.generated.sklearn.metrics.rand_score
sklearn.metrics.recall_score sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the recall. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The re...
sklearn.modules.generated.sklearn.metrics.recall_score
sklearn.metrics.roc_auc_score sklearn.metrics.roc_auc_score(y_true, y_score, *, average='macro', sample_weight=None, max_fpr=None, multi_class='raise', labels=None) [source] Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. Note: this implementation can be used with bi...
sklearn.modules.generated.sklearn.metrics.roc_auc_score
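A sketch of the entry above with illustrative scores; ROC AUC equals the fraction of (positive, negative) pairs where the positive sample is ranked higher:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

# 3 of the 4 positive/negative pairs are correctly ordered
auc = roc_auc_score(y_true, y_scores)
print(auc)  # 0.75
```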
sklearn.metrics.roc_curve sklearn.metrics.roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=True) [source] Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the User Guide. Parameters y_truen...
sklearn.modules.generated.sklearn.metrics.roc_curve
sklearn.metrics.silhouette_samples sklearn.metrics.silhouette_samples(X, labels, *, metric='euclidean', **kwds) [source] Compute the Silhouette Coefficient for each sample. The Silhouette Coefficient is a measure of how well samples are clustered with samples that are similar to themselves. Clustering models with a...
sklearn.modules.generated.sklearn.metrics.silhouette_samples
sklearn.metrics.silhouette_score sklearn.metrics.silhouette_score(X, labels, *, metric='euclidean', sample_size=None, random_state=None, **kwds) [source] Compute the mean Silhouette Coefficient of all samples. The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cl...
sklearn.modules.generated.sklearn.metrics.silhouette_score
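A sketch of the entry above with two illustrative, well-separated blobs; with intra-cluster distances tiny relative to the gap, the coefficient approaches +1:

```python
import numpy as np
from sklearn.metrics import silhouette_score

# Two well-separated clusters (illustrative coordinates)
X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
labels = [0, 0, 1, 1]

# Mean of (b - a) / max(a, b) over samples, where a is the mean
# intra-cluster distance and b the mean nearest-cluster distance
score = silhouette_score(X, labels)
print(score)
```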
sklearn.metrics.top_k_accuracy_score sklearn.metrics.top_k_accuracy_score(y_true, y_score, *, k=2, normalize=True, sample_weight=None, labels=None) [source] Top-k Accuracy classification score. This metric computes the number of times where the correct label is among the top k labels predicted (ranked by predicted ...
sklearn.modules.generated.sklearn.metrics.top_k_accuracy_score
sklearn.metrics.v_measure_score sklearn.metrics.v_measure_score(labels_true, labels_pred, *, beta=1.0) [source] V-measure cluster labeling given a ground truth. This score is identical to normalized_mutual_info_score with the 'arithmetic' option for averaging. The V-measure is the harmonic mean between homogeneity ...
sklearn.modules.generated.sklearn.metrics.v_measure_score
sklearn.metrics.zero_one_loss sklearn.metrics.zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None) [source] Zero-one classification loss. If normalize is True, returns the fraction of misclassifications (float); otherwise it returns the number of misclassifications (int). The best performance is 0. Read m...
sklearn.modules.generated.sklearn.metrics.zero_one_loss
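A short sketch of both normalize modes of zero_one_loss (toy labels invented for illustration):

```python
from sklearn.metrics import zero_one_loss

y_true = [1, 2, 3, 4]
y_pred = [1, 2, 4, 4]  # one of four labels is wrong

frac = zero_one_loss(y_true, y_pred)                    # fraction: 0.25
count = zero_one_loss(y_true, y_pred, normalize=False)  # raw count: 1
```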
sklearn.model_selection.check_cv sklearn.model_selection.check_cv(cv=5, y=None, *, classifier=False) [source] Input checker utility for building a cross-validator. Parameters cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for c...
sklearn.modules.generated.sklearn.model_selection.check_cv
sklearn.model_selection.cross_validate sklearn.model_selection.cross_validate(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score=False, return_estimator=False, error_score=nan) [source] Evaluate metric(s) by cross-validat...
sklearn.modules.generated.sklearn.model_selection.cross_validate
sklearn.model_selection.cross_val_predict sklearn.model_selection.cross_val_predict(estimator, X, y=None, *, groups=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', method='predict') [source] Generate cross-validated estimates for each input data point. The data is split according to ...
sklearn.modules.generated.sklearn.model_selection.cross_val_predict
sklearn.model_selection.cross_val_score sklearn.model_selection.cross_val_score(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', error_score=nan) [source] Evaluate a score by cross-validation. Read more in the User Guide. Parameters es...
sklearn.modules.generated.sklearn.model_selection.cross_val_score
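A minimal sketch of cross_val_score; the estimator and dataset here (LogisticRegression on iris) are one arbitrary choice for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# One accuracy score per fold; cv=5 gives five scores.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
```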
sklearn.model_selection.learning_curve sklearn.model_selection.learning_curve(estimator, X, y, *, groups=None, train_sizes=array([0.1, 0.33, 0.55, 0.78, 1.0]), cv=None, scoring=None, exploit_incremental_learning=False, n_jobs=None, pre_dispatch='all', verbose=0, shuffle=False, random_state=None, error_score=nan, retu...
sklearn.modules.generated.sklearn.model_selection.learning_curve
sklearn.model_selection.permutation_test_score sklearn.model_selection.permutation_test_score(estimator, X, y, *, groups=None, cv=None, n_permutations=100, n_jobs=None, random_state=0, verbose=0, scoring=None, fit_params=None) [source] Evaluate the significance of a cross-validated score with permutations. Permutes ...
sklearn.modules.generated.sklearn.model_selection.permutation_test_score
sklearn.model_selection.train_test_split sklearn.model_selection.train_test_split(*arrays, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None) [source] Split arrays or matrices into random train and test subsets. Quick utility that wraps input validation and next(ShuffleSplit().split(X, ...
sklearn.modules.generated.sklearn.model_selection.train_test_split
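A minimal sketch of train_test_split; with 10 samples and test_size=0.3, the test set receives 3 samples and the training set the remaining 7:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features (toy data)
y = np.arange(10)

# random_state fixes the shuffle so the split is reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
```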
sklearn.model_selection.validation_curve sklearn.model_selection.validation_curve(estimator, X, y, *, param_name, param_range, groups=None, cv=None, scoring=None, n_jobs=None, pre_dispatch='all', verbose=0, error_score=nan, fit_params=None) [source] Validation curve. Determine training and test scores for varying p...
sklearn.modules.generated.sklearn.model_selection.validation_curve
sklearn.neighbors.kneighbors_graph sklearn.neighbors.kneighbors_graph(X, n_neighbors, *, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False, n_jobs=None) [source] Computes the (weighted) graph of k-Neighbors for points in X. Read more in the User Guide. Parameters Xarray-like of...
sklearn.modules.generated.sklearn.neighbors.kneighbors_graph
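A small sketch of kneighbors_graph with n_neighbors=1 on three 1-D points (invented for illustration); with the default include_self=False, each row marks the sample's single nearest other point:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [3.0]])

# Sparse connectivity matrix: entry (i, j) is 1 if j is among
# the n_neighbors nearest points to i.
A = kneighbors_graph(X, n_neighbors=1, mode='connectivity')
dense = A.toarray()
# 0.0 -> 1.0, 1.0 -> 0.0, 3.0 -> 1.0
```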
sklearn.neighbors.radius_neighbors_graph sklearn.neighbors.radius_neighbors_graph(X, radius, *, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False, n_jobs=None) [source] Computes the (weighted) graph of Neighbors for points in X. Neighborhoods are restricted to the points at a distance...
sklearn.modules.generated.sklearn.neighbors.radius_neighbors_graph
sklearn.pipeline.make_pipeline sklearn.pipeline.make_pipeline(*steps, memory=None, verbose=False) [source] Construct a Pipeline from the given estimators. This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowerc...
sklearn.modules.generated.sklearn.pipeline.make_pipeline
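A minimal sketch of make_pipeline; the two steps chosen here (StandardScaler, LogisticRegression) are an arbitrary illustration, and the step names are derived automatically from the lowercased class names:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), LogisticRegression())

# Names are auto-generated, not user-supplied.
names = [name for name, _ in pipe.steps]
# ['standardscaler', 'logisticregression']
```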
sklearn.pipeline.make_union sklearn.pipeline.make_union(*transformers, n_jobs=None, verbose=False) [source] Construct a FeatureUnion from the given transformers. This is a shorthand for the FeatureUnion constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names...
sklearn.modules.generated.sklearn.pipeline.make_union
sklearn.preprocessing.add_dummy_feature sklearn.preprocessing.add_dummy_feature(X, value=1.0) [source] Augment dataset with an additional dummy feature. This is useful for fitting an intercept term with implementations which cannot otherwise fit it directly. Parameters X{array-like, sparse matrix} of shape (n_s...
sklearn.modules.generated.sklearn.preprocessing.add_dummy_feature
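A one-step sketch of add_dummy_feature; with the default value=1.0, a constant column is prepended so a linear model without an intercept term can still fit one:

```python
import numpy as np
from sklearn.preprocessing import add_dummy_feature

X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Prepends a column of ones.
X_aug = add_dummy_feature(X)
# [[1., 0., 1.], [1., 1., 0.]]
```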
sklearn.preprocessing.binarize sklearn.preprocessing.binarize(X, *, threshold=0.0, copy=True) [source] Boolean thresholding of array-like or scipy.sparse matrix. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data to binarize, element by element. scip...
sklearn.modules.generated.sklearn.preprocessing.binarize
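A short sketch of binarize; values strictly greater than the threshold map to 1 and the rest (including values equal to the threshold) to 0:

```python
import numpy as np
from sklearn.preprocessing import binarize

X = np.array([[1.0, -1.0, 2.0],
              [2.0,  0.0, 0.5]])

# 0.5 itself maps to 0 because the comparison is strict.
B = binarize(X, threshold=0.5)
# [[1., 0., 1.], [1., 0., 0.]]
```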
sklearn.preprocessing.label_binarize sklearn.preprocessing.label_binarize(y, *, classes, neg_label=0, pos_label=1, sparse_output=False) [source] Binarize labels in a one-vs-all fashion. Several regression and binary classification algorithms are available in scikit-learn. A simple way to extend these algorithms to ...
sklearn.modules.generated.sklearn.preprocessing.label_binarize
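A minimal sketch of label_binarize in the one-vs-all fashion the entry describes; the column order follows the classes argument:

```python
from sklearn.preprocessing import label_binarize

# Columns correspond to classes [1, 3, 6], in that order.
Y = label_binarize([1, 6, 3, 1], classes=[1, 3, 6])
# [[1, 0, 0],
#  [0, 0, 1],
#  [0, 1, 0],
#  [1, 0, 0]]
```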