| doc_content | doc_id |
|---|---|
sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, *, sparse=False) [source]
Measure the similarity of two clusterings of a set of points. New in version 0.18. The Fowlkes-Mallows index (FMI) is defined as the geometric mean of precision and recall: FMI = TP / sqrt((TP + FP) * (TP + FN))
W... | sklearn.modules.generated.sklearn.metrics.fowlkes_mallows_score#sklearn.metrics.fowlkes_mallows_score |
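A minimal sketch of the entry above: the FMI is invariant to permutations of the cluster ids, so two labelings that agree up to relabeling score a perfect 1.0.

```python
from sklearn.metrics import fowlkes_mallows_score

# The two labelings define the same partition, only the cluster ids differ.
score = fowlkes_mallows_score([0, 0, 1, 1], [1, 1, 0, 0])
print(score)  # 1.0
```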
sklearn.metrics.get_scorer(scoring) [source]
Get a scorer from string. Read more in the User Guide. Parameters
scoringstr or callable
Scoring method as string. If callable it is returned as is. Returns
scorercallable
The scorer. | sklearn.modules.generated.sklearn.metrics.get_scorer#sklearn.metrics.get_scorer |
sklearn.metrics.hamming_loss(y_true, y_pred, *, sample_weight=None) [source]
Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide. Parameters
y_true1d array-like, or label indicator array / sparse matrix
Ground truth (correct) l... | sklearn.modules.generated.sklearn.metrics.hamming_loss#sklearn.metrics.hamming_loss |
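To illustrate the definition above: the Hamming loss is simply the fraction of label positions that disagree.

```python
from sklearn.metrics import hamming_loss

# One of four labels is wrong, so the loss is 1/4.
loss = hamming_loss([1, 1, 0, 0], [1, 0, 0, 0])
print(loss)  # 0.25
```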
sklearn.metrics.hinge_loss(y_true, pred_decision, *, labels=None, sample_weight=None) [source]
Average hinge loss (non-regularized). In binary class case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always negative (since the signs disagr... | sklearn.modules.generated.sklearn.metrics.hinge_loss#sklearn.metrics.hinge_loss |
sklearn.metrics.homogeneity_completeness_v_measure(labels_true, labels_pred, *, beta=1.0) [source]
Compute the homogeneity, completeness, and V-Measure scores at once. Those metrics are based on normalized conditional entropy measures of the clustering labeling to evaluate given the knowledge of a Ground Truth clas... | sklearn.modules.generated.sklearn.metrics.homogeneity_completeness_v_measure#sklearn.metrics.homogeneity_completeness_v_measure |
sklearn.metrics.homogeneity_score(labels_true, labels_pred) [source]
Homogeneity metric of a cluster labeling given a ground truth. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. This metric is independent of the absolute values of the la... | sklearn.modules.generated.sklearn.metrics.homogeneity_score#sklearn.metrics.homogeneity_score |
sklearn.metrics.jaccard_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Jaccard similarity coefficient score. The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of tw... | sklearn.modules.generated.sklearn.metrics.jaccard_score#sklearn.metrics.jaccard_score |
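A small worked example of the intersection-over-union definition above, for the default binary setting (`average='binary'`, `pos_label=1`):

```python
from sklearn.metrics import jaccard_score

y_true = [0, 1, 1, 1]
y_pred = [1, 1, 1, 0]
# Positive class: |intersection| = 2 (indices 1, 2); |union| = 4 (indices 0-3).
score = jaccard_score(y_true, y_pred)
print(score)  # 0.5
```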
sklearn.metrics.label_ranking_average_precision_score(y_true, y_score, *, sample_weight=None) [source]
Compute ranking-based average precision. Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score. This metri... | sklearn.modules.generated.sklearn.metrics.label_ranking_average_precision_score#sklearn.metrics.label_ranking_average_precision_score |
sklearn.metrics.label_ranking_loss(y_true, y_score, *, sample_weight=None) [source]
Compute Ranking loss measure. Compute the average number of label pairs that are incorrectly ordered given y_score weighted by the size of the label set and the number of labels not in the label set. This is similar to the error set s... | sklearn.modules.generated.sklearn.metrics.label_ranking_loss#sklearn.metrics.label_ranking_loss |
sklearn.metrics.log_loss(y_true, y_pred, *, eps=1e-15, normalize=True, sample_weight=None, labels=None) [source]
Log loss, aka logistic loss or cross-entropy loss. This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood o... | sklearn.modules.generated.sklearn.metrics.log_loss#sklearn.metrics.log_loss |
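A quick sketch of the negative log-likelihood definition above: `y_pred` holds per-class probabilities, and the loss averages `-log p` of the true class over samples.

```python
import numpy as np
from sklearn.metrics import log_loss

# Columns are probabilities for classes [0, 1]; the true-class
# probabilities here are 0.9 and 0.8.
loss = log_loss([0, 1], [[0.9, 0.1], [0.2, 0.8]])
```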
sklearn.metrics.make_scorer(score_func, *, greater_is_better=True, needs_proba=False, needs_threshold=False, **kwargs) [source]
Make a scorer from a performance metric or loss function. This factory function wraps scoring functions for use in GridSearchCV and cross_val_score. It takes a score function, such as accura... | sklearn.modules.generated.sklearn.metrics.make_scorer#sklearn.metrics.make_scorer |
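A sketch of the factory described above, assuming a toy exactly-linear dataset: `greater_is_better=False` negates a loss so that model-selection utilities, which maximize, still work. The name `neg_mse` is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import cross_val_score

# Wrap a loss; the sign is flipped so "larger is better" holds.
neg_mse = make_scorer(mean_squared_error, greater_is_better=False)

X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1  # exactly linear, so a linear model fits perfectly
scores = cross_val_score(LinearRegression(), X, y, scoring=neg_mse, cv=3)
```
Each fold's score is the negated MSE, here essentially zero because the data is noiseless.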
sklearn.metrics.matthews_corrcoef(y_true, y_pred, *, sample_weight=None) [source]
Compute the Matthews correlation coefficient (MCC). The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary and multiclass classifications. It takes into account true and false positives an... | sklearn.modules.generated.sklearn.metrics.matthews_corrcoef#sklearn.metrics.matthews_corrcoef |
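To illustrate the range of the MCC described above: a perfectly inverted binary prediction (TP = TN = 0) attains the minimum value of -1.

```python
from sklearn.metrics import matthews_corrcoef

# Every prediction has the wrong sign.
mcc = matthews_corrcoef([+1, +1, -1, -1], [-1, -1, +1, +1])
print(mcc)  # -1.0
```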
sklearn.metrics.max_error(y_true, y_pred) [source]
max_error metric calculates the maximum residual error. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,)
Ground truth (correct) target values.
y_predarray-like of shape (n_samples,)
Estimated target values. Returns
max_er... | sklearn.modules.generated.sklearn.metrics.max_error#sklearn.metrics.max_error |
sklearn.metrics.mean_absolute_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source]
Mean absolute error regression loss. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_predarray-lik... | sklearn.modules.generated.sklearn.metrics.mean_absolute_error#sklearn.metrics.mean_absolute_error |
sklearn.metrics.mean_absolute_percentage_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average') [source]
Mean absolute percentage error regression loss. Note here that we do not represent the output as a percentage in range [0, 100]. Instead, we represent it in range [0, 1/eps]. Read more in the Use... | sklearn.modules.generated.sklearn.metrics.mean_absolute_percentage_error#sklearn.metrics.mean_absolute_percentage_error |
sklearn.metrics.mean_gamma_deviance(y_true, y_pred, *, sample_weight=None) [source]
Mean Gamma deviance regression loss. Gamma deviance is equivalent to the Tweedie deviance with the power parameter power=2. It is invariant to scaling of the target variable, and measures relative errors. Read more in the User Guide. ... | sklearn.modules.generated.sklearn.metrics.mean_gamma_deviance#sklearn.metrics.mean_gamma_deviance |
sklearn.metrics.mean_poisson_deviance(y_true, y_pred, *, sample_weight=None) [source]
Mean Poisson deviance regression loss. Poisson deviance is equivalent to the Tweedie deviance with the power parameter power=1. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,)
Ground truth (corre... | sklearn.modules.generated.sklearn.metrics.mean_poisson_deviance#sklearn.metrics.mean_poisson_deviance |
sklearn.metrics.mean_squared_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average', squared=True) [source]
Mean squared error regression loss. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_p... | sklearn.modules.generated.sklearn.metrics.mean_squared_error#sklearn.metrics.mean_squared_error |
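A small worked example for the two regression losses above:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
mae = mean_absolute_error(y_true, y_pred)  # mean of |errors| = 0.5
mse = mean_squared_error(y_true, y_pred)   # mean of squared errors = 0.375
```
Per the signature above, passing `squared=False` to `mean_squared_error` returns the root mean squared error instead.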
sklearn.metrics.mean_squared_log_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source]
Mean squared logarithmic error regression loss. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y... | sklearn.modules.generated.sklearn.metrics.mean_squared_log_error#sklearn.metrics.mean_squared_log_error |
sklearn.metrics.mean_tweedie_deviance(y_true, y_pred, *, sample_weight=None, power=0) [source]
Mean Tweedie deviance regression loss. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,)
Ground truth (correct) target values.
y_predarray-like of shape (n_samples,)
Estimated target v... | sklearn.modules.generated.sklearn.metrics.mean_tweedie_deviance#sklearn.metrics.mean_tweedie_deviance |
sklearn.metrics.median_absolute_error(y_true, y_pred, *, multioutput='uniform_average', sample_weight=None) [source]
Median absolute error regression loss. Median absolute error output is non-negative floating point. The best value is 0.0. Read more in the User Guide. Parameters
y_truearray-like of shape = (n_sam... | sklearn.modules.generated.sklearn.metrics.median_absolute_error#sklearn.metrics.median_absolute_error |
sklearn.metrics.multilabel_confusion_matrix(y_true, y_pred, *, sample_weight=None, labels=None, samplewise=False) [source]
Compute a confusion matrix for each class or sample. New in version 0.21. Compute class-wise (default) or sample-wise (samplewise=True) multilabel confusion matrix to evaluate the accuracy of a... | sklearn.modules.generated.sklearn.metrics.multilabel_confusion_matrix#sklearn.metrics.multilabel_confusion_matrix |
sklearn.metrics.mutual_info_score(labels_true, labels_pred, *, contingency=None) [source]
Mutual Information between two clusterings. The Mutual Information is a measure of the similarity between two labels of the same data. Where \(|U_i|\) is the number of the samples in cluster \(U_i\) and \(|V_j|\) is the number o... | sklearn.modules.generated.sklearn.metrics.mutual_info_score#sklearn.metrics.mutual_info_score |
sklearn.metrics.ndcg_score(y_true, y_score, *, k=None, sample_weight=None, ignore_ties=False) [source]
Compute Normalized Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. Then divide by the best possible score (Ideal DCG, obtai... | sklearn.modules.generated.sklearn.metrics.ndcg_score#sklearn.metrics.ndcg_score |
sklearn.metrics.normalized_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source]
Normalized Mutual Information between two clusterings. Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score to scale the results between 0 (no mutual information) and ... | sklearn.modules.generated.sklearn.metrics.normalized_mutual_info_score#sklearn.metrics.normalized_mutual_info_score |
sklearn.metrics.pairwise.additive_chi2_kernel(X, Y=None) [source]
Computes the additive chi-squared kernel between observations in X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. The chi-squared kern... | sklearn.modules.generated.sklearn.metrics.pairwise.additive_chi2_kernel#sklearn.metrics.pairwise.additive_chi2_kernel |
sklearn.metrics.pairwise.chi2_kernel(X, Y=None, gamma=1.0) [source]
Computes the exponential chi-squared kernel between X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. The chi-squared kernel is given by: k(x... | sklearn.modules.generated.sklearn.metrics.pairwise.chi2_kernel#sklearn.metrics.pairwise.chi2_kernel |
sklearn.metrics.pairwise.cosine_distances(X, Y=None) [source]
Compute cosine distance between samples in X and Y. Cosine distance is defined as 1.0 minus the cosine similarity. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples_X, n_features)
Matrix X.
Y{array-like, spars... | sklearn.modules.generated.sklearn.metrics.pairwise.cosine_distances#sklearn.metrics.pairwise.cosine_distances |
sklearn.metrics.pairwise.cosine_similarity(X, Y=None, dense_output=True) [source]
Compute cosine similarity between samples in X and Y. Cosine similarity, or the cosine kernel, computes similarity as the normalized dot product of X and Y: K(X, Y) = <X, Y> / (||X||*||Y||) On L2-normalized data, this function is equiva... | sklearn.modules.generated.sklearn.metrics.pairwise.cosine_similarity#sklearn.metrics.pairwise.cosine_similarity |
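A minimal sketch of the normalized-dot-product definition above: orthogonal vectors have similarity 0, and every vector has similarity 1 with itself.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

X = np.array([[1.0, 0.0], [0.0, 1.0]])
K = cosine_similarity(X)  # (2, 2) matrix: identity for orthogonal unit vectors
```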
sklearn.metrics.pairwise.distance_metrics() [source]
Valid metrics for pairwise_distances. This function simply returns the valid pairwise distance metrics. It exists to allow for a description of the mapping for each of the valid strings. The valid distance metrics, and the function they map to, are:
metric Funct... | sklearn.modules.generated.sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics |
sklearn.metrics.pairwise.euclidean_distances(X, Y=None, *, Y_norm_squared=None, squared=False, X_norm_squared=None) [source]
Considering the rows of X (and Y=X) as vectors, compute the distance matrix between each pair of vectors. For efficiency reasons, the euclidean distance between a pair of row vectors x and y is ... | sklearn.modules.generated.sklearn.metrics.pairwise.euclidean_distances#sklearn.metrics.pairwise.euclidean_distances |
sklearn.metrics.pairwise.haversine_distances(X, Y=None) [source]
Compute the Haversine distance between samples in X and Y. The Haversine (or great circle) distance is the angular distance between two points on the surface of a sphere. The first coordinate of each point is assumed to be the latitude, the second is th... | sklearn.modules.generated.sklearn.metrics.pairwise.haversine_distances#sklearn.metrics.pairwise.haversine_distances |
sklearn.metrics.pairwise.kernel_metrics() [source]
Valid metrics for pairwise_kernels. This function simply returns the valid pairwise kernel metrics. It exists, however, to allow for a verbose description of the mapping for each of the valid strings. The valid kernel metrics, and the function they map to, are:
... | sklearn.modules.generated.sklearn.metrics.pairwise.kernel_metrics#sklearn.metrics.pairwise.kernel_metrics |
sklearn.metrics.pairwise.laplacian_kernel(X, Y=None, gamma=None) [source]
Compute the laplacian kernel between X and Y. The laplacian kernel is defined as: K(x, y) = exp(-gamma ||x-y||_1)
for each pair of rows x in X and y in Y. Read more in the User Guide. New in version 0.17. Parameters
Xndarray of shape (n_... | sklearn.modules.generated.sklearn.metrics.pairwise.laplacian_kernel#sklearn.metrics.pairwise.laplacian_kernel |
sklearn.metrics.pairwise.linear_kernel(X, Y=None, dense_output=True) [source]
Compute the linear kernel between X and Y. Read more in the User Guide. Parameters
Xndarray of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_features), default=None
dense_outputbool, default=True
Whether to ret... | sklearn.modules.generated.sklearn.metrics.pairwise.linear_kernel#sklearn.metrics.pairwise.linear_kernel |
sklearn.metrics.pairwise.manhattan_distances(X, Y=None, *, sum_over_features=True) [source]
Compute the L1 distances between the vectors in X and Y. With sum_over_features equal to False it returns the componentwise distances. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples_X, n_features) ... | sklearn.modules.generated.sklearn.metrics.pairwise.manhattan_distances#sklearn.metrics.pairwise.manhattan_distances |
sklearn.metrics.pairwise.nan_euclidean_distances(X, Y=None, *, squared=False, missing_values=nan, copy=True) [source]
Calculate the euclidean distances in the presence of missing values. Compute the euclidean distance between each pair of samples in X and Y, where Y=X is assumed if Y=None. When calculating the distan... | sklearn.modules.generated.sklearn.metrics.pairwise.nan_euclidean_distances#sklearn.metrics.pairwise.nan_euclidean_distances |
sklearn.metrics.pairwise.paired_cosine_distances(X, Y) [source]
Computes the paired cosine distances between X and Y. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Yarray-like of shape (n_samples, n_features)
Returns
distancesndarray of shape (n_samples,)
Notes T... | sklearn.modules.generated.sklearn.metrics.pairwise.paired_cosine_distances#sklearn.metrics.pairwise.paired_cosine_distances |
sklearn.metrics.pairwise.paired_distances(X, Y, *, metric='euclidean', **kwds) [source]
Computes the paired distances between X and Y. Computes the distances between (X[0], Y[0]), (X[1], Y[1]), etc… Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Array 1 for distance computatio... | sklearn.modules.generated.sklearn.metrics.pairwise.paired_distances#sklearn.metrics.pairwise.paired_distances |
sklearn.metrics.pairwise.paired_euclidean_distances(X, Y) [source]
Computes the paired euclidean distances between X and Y. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Yarray-like of shape (n_samples, n_features)
Returns
distancesndarray of shape (n_samples,) | sklearn.modules.generated.sklearn.metrics.pairwise.paired_euclidean_distances#sklearn.metrics.pairwise.paired_euclidean_distances |
sklearn.metrics.pairwise.paired_manhattan_distances(X, Y) [source]
Compute the L1 distances between the vectors in X and Y. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Yarray-like of shape (n_samples, n_features)
Returns
distancesndarray of shape (n_samples,) | sklearn.modules.generated.sklearn.metrics.pairwise.paired_manhattan_distances#sklearn.metrics.pairwise.paired_manhattan_distances |
sklearn.metrics.pairwise.pairwise_kernels(X, Y=None, metric='linear', *, filter_params=False, n_jobs=None, **kwds) [source]
Compute the kernel between arrays X and optional array Y. This method takes either a vector array or a kernel matrix, and returns a kernel matrix. If the input is a vector array, the kernels are... | sklearn.modules.generated.sklearn.metrics.pairwise.pairwise_kernels#sklearn.metrics.pairwise.pairwise_kernels |
sklearn.metrics.pairwise.polynomial_kernel(X, Y=None, degree=3, gamma=None, coef0=1) [source]
Compute the polynomial kernel between X and Y: K(X, Y) = (gamma <X, Y> + coef0)^degree
Read more in the User Guide. Parameters
Xndarray of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_features), d... | sklearn.modules.generated.sklearn.metrics.pairwise.polynomial_kernel#sklearn.metrics.pairwise.polynomial_kernel |
sklearn.metrics.pairwise.rbf_kernel(X, Y=None, gamma=None) [source]
Compute the rbf (gaussian) kernel between X and Y: K(x, y) = exp(-gamma ||x-y||^2)
for each pair of rows x in X and y in Y. Read more in the User Guide. Parameters
Xndarray of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_f... | sklearn.modules.generated.sklearn.metrics.pairwise.rbf_kernel#sklearn.metrics.pairwise.rbf_kernel |
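To make the formula above concrete: with `gamma=1.0` and two points at squared distance 1, the off-diagonal kernel value is exp(-1).

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, gamma=1.0)
# ||x0 - x1||^2 = 1, so K[0, 1] = exp(-1); the diagonal is exp(0) = 1.
```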
sklearn.metrics.pairwise.sigmoid_kernel(X, Y=None, gamma=None, coef0=1) [source]
Compute the sigmoid kernel between X and Y: K(X, Y) = tanh(gamma <X, Y> + coef0)
Read more in the User Guide. Parameters
Xndarray of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_features), default=None
gamma... | sklearn.modules.generated.sklearn.metrics.pairwise.sigmoid_kernel#sklearn.metrics.pairwise.sigmoid_kernel |
sklearn.metrics.pairwise_distances(X, Y=None, metric='euclidean', *, n_jobs=None, force_all_finite=True, **kwds) [source]
Compute the distance matrix from a vector array X and optional Y. This method takes either a vector array or a distance matrix, and returns a distance matrix. If the input is a vector array, the d... | sklearn.modules.generated.sklearn.metrics.pairwise_distances#sklearn.metrics.pairwise_distances |
sklearn.metrics.pairwise_distances_argmin(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source]
Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance). This is mostly equiva... | sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin#sklearn.metrics.pairwise_distances_argmin |
sklearn.metrics.pairwise_distances_argmin_min(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source]
Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance). The minimal dista... | sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin_min#sklearn.metrics.pairwise_distances_argmin_min |
sklearn.metrics.pairwise_distances_chunked(X, Y=None, *, reduce_func=None, metric='euclidean', n_jobs=None, working_memory=None, **kwds) [source]
Generate a distance matrix chunk by chunk with optional reduction. In cases where not all of a pairwise distance matrix needs to be stored at once, this is used to calculat... | sklearn.modules.generated.sklearn.metrics.pairwise_distances_chunked#sklearn.metrics.pairwise_distances_chunked |
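A sketch of the chunked generator above, assuming a toy dataset: with no `reduce_func`, each yielded item is a horizontal slice of the full distance matrix, so stacking the chunks reconstructs it.

```python
import numpy as np
from sklearn.metrics import pairwise_distances_chunked

X = np.random.RandomState(0).rand(100, 3)
# working_memory is a MiB budget; a tiny value forces several small chunks.
chunks = list(pairwise_distances_chunked(X, working_memory=0))
D = np.vstack(chunks)  # full (100, 100) distance matrix
```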
sklearn.metrics.plot_confusion_matrix(estimator, X, y_true, *, labels=None, sample_weight=None, normalize=None, display_labels=None, include_values=True, xticks_rotation='horizontal', values_format=None, cmap='viridis', ax=None, colorbar=True) [source]
Plot Confusion Matrix. Read more in the User Guide. Parameters
... | sklearn.modules.generated.sklearn.metrics.plot_confusion_matrix#sklearn.metrics.plot_confusion_matrix |
sklearn.metrics.plot_det_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source]
Plot detection error tradeoff (DET) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. New in version 0.24. Parameters ... | sklearn.modules.generated.sklearn.metrics.plot_det_curve#sklearn.metrics.plot_det_curve |
sklearn.metrics.plot_precision_recall_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source]
Plot Precision Recall Curve for binary classifiers. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. Parameters
... | sklearn.modules.generated.sklearn.metrics.plot_precision_recall_curve#sklearn.metrics.plot_precision_recall_curve |
sklearn.metrics.plot_roc_curve(estimator, X, y, *, sample_weight=None, drop_intermediate=True, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source]
Plot Receiver operating characteristic (ROC) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. P... | sklearn.modules.generated.sklearn.metrics.plot_roc_curve#sklearn.metrics.plot_roc_curve |
class sklearn.metrics.PrecisionRecallDisplay(precision, recall, *, average_precision=None, estimator_name=None, pos_label=None) [source]
Precision Recall visualization. It is recommended to use plot_precision_recall_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Par... | sklearn.modules.generated.sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay |
sklearn.metrics.PrecisionRecallDisplay
class sklearn.metrics.PrecisionRecallDisplay(precision, recall, *, average_precision=None, estimator_name=None, pos_label=None) [source]
Precision Recall visualization. It is recommended to use plot_precision_recall_curve to create a visualizer. All parameters are stored as attr... | sklearn.modules.generated.sklearn.metrics.precisionrecalldisplay |
plot(ax=None, *, name=None, **kwargs) [source]
Plot visualization. Extra keyword arguments will be passed to matplotlib’s plot. Parameters
axMatplotlib Axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
namestr, default=None
Name of precision recall curve for labeling. If ... | sklearn.modules.generated.sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.plot |
sklearn.metrics.precision_recall_curve(y_true, probas_pred, *, pos_label=None, sample_weight=None) [source]
Compute precision-recall pairs for different probability thresholds. Note: this implementation is restricted to the binary classification task. The precision is the ratio tp / (tp + fp) where tp is the number o... | sklearn.modules.generated.sklearn.metrics.precision_recall_curve#sklearn.metrics.precision_recall_curve |
sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, *, beta=1.0, labels=None, pos_label=1, average=None, warn_for=('precision', 'recall', 'f-score'), sample_weight=None, zero_division='warn') [source]
Compute precision, recall, F-measure and support for each class. The precision is the ratio tp / (tp + fp) ... | sklearn.modules.generated.sklearn.metrics.precision_recall_fscore_support#sklearn.metrics.precision_recall_fscore_support |
sklearn.metrics.precision_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Compute the precision. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively... | sklearn.modules.generated.sklearn.metrics.precision_score#sklearn.metrics.precision_score |
sklearn.metrics.r2_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source]
R^2 (coefficient of determination) regression score function. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value... | sklearn.modules.generated.sklearn.metrics.r2_score#sklearn.metrics.r2_score |
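To illustrate the score range described above: a perfect prediction scores 1.0, and a constant model predicting the mean of y_true scores exactly 0.0.

```python
from sklearn.metrics import r2_score

perfect = r2_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # 1.0
baseline = r2_score([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # predicting the mean: 0.0
```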
sklearn.metrics.rand_score(labels_true, labels_pred) [source]
Rand index. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. The raw RI score is: RI = (nu... | sklearn.modules.generated.sklearn.metrics.rand_score#sklearn.metrics.rand_score |
sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Compute the recall. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability... | sklearn.modules.generated.sklearn.metrics.recall_score#sklearn.metrics.recall_score |
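A small worked example of the tp / (tp + fp) and tp / (tp + fn) ratios defined in the two entries above:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 1, 1]
y_pred = [1, 1, 1, 0]
p = precision_score(y_true, y_pred)  # tp=2, fp=1 -> 2/3
r = recall_score(y_true, y_pred)     # tp=2, fn=1 -> 2/3
```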
class sklearn.metrics.RocCurveDisplay(*, fpr, tpr, roc_auc=None, estimator_name=None, pos_label=None) [source]
ROC Curve visualization. It is recommended to use plot_roc_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Parameters
fprndarray
False positive rate. ... | sklearn.modules.generated.sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay |
sklearn.metrics.RocCurveDisplay
class sklearn.metrics.RocCurveDisplay(*, fpr, tpr, roc_auc=None, estimator_name=None, pos_label=None) [source]
ROC Curve visualization. It is recommended to use plot_roc_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Parameters
f... | sklearn.modules.generated.sklearn.metrics.roccurvedisplay |
plot(ax=None, *, name=None, **kwargs) [source]
Plot visualization. Extra keyword arguments will be passed to matplotlib’s plot. Parameters
axmatplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
namestr, default=None
Name of ROC Curve for labeling. If None, use the ... | sklearn.modules.generated.sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay.plot |
sklearn.metrics.roc_auc_score(y_true, y_score, *, average='macro', sample_weight=None, max_fpr=None, multi_class='raise', labels=None) [source]
Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. Note: this implementation can be used with binary, multiclass and multilabel ... | sklearn.modules.generated.sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score |
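To make the entry above concrete: binary ROC AUC equals the fraction of (positive, negative) pairs where the positive example gets the higher score.

```python
from sklearn.metrics import roc_auc_score

# 3 of the 4 (positive, negative) pairs are ranked correctly.
auc = roc_auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc)  # 0.75
```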
sklearn.metrics.roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=True) [source]
Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the User Guide. Parameters
y_truendarray of shape (n_samples,)... | sklearn.modules.generated.sklearn.metrics.roc_curve#sklearn.metrics.roc_curve |
sklearn.metrics.silhouette_samples(X, labels, *, metric='euclidean', **kwds) [source]
Compute the Silhouette Coefficient for each sample. The Silhouette Coefficient is a measure of how well samples are clustered with samples that are similar to themselves. Clustering models with a high Silhouette Coefficient are said... | sklearn.modules.generated.sklearn.metrics.silhouette_samples#sklearn.metrics.silhouette_samples |
sklearn.metrics.silhouette_score(X, labels, *, metric='euclidean', sample_size=None, random_state=None, **kwds) [source]
Compute the mean Silhouette Coefficient of all samples. The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample.... | sklearn.modules.generated.sklearn.metrics.silhouette_score#sklearn.metrics.silhouette_score |
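A minimal sketch of the (b - a) / max(a, b) definition above, on a toy 1-D dataset: when the intra-cluster distance a is tiny relative to the nearest-cluster distance b, the score approaches 1.

```python
import numpy as np
from sklearn.metrics import silhouette_score

X = np.array([[0.0], [0.1], [10.0], [10.1]])
labels = [0, 0, 1, 1]
s = silhouette_score(X, labels)  # well-separated clusters -> close to 1
```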
sklearn.metrics.top_k_accuracy_score(y_true, y_score, *, k=2, normalize=True, sample_weight=None, labels=None) [source]
Top-k Accuracy classification score. This metric computes the number of times where the correct label is among the top k labels predicted (ranked by predicted scores). Note that the multilabel case ... | sklearn.modules.generated.sklearn.metrics.top_k_accuracy_score#sklearn.metrics.top_k_accuracy_score |
sklearn.metrics.v_measure_score(labels_true, labels_pred, *, beta=1.0) [source]
V-measure cluster labeling given a ground truth. This score is identical to normalized_mutual_info_score with the 'arithmetic' option for averaging. The V-measure is the harmonic mean between homogeneity and completeness: v = (1 + beta) *... | sklearn.modules.generated.sklearn.metrics.v_measure_score#sklearn.metrics.v_measure_score |
sklearn.metrics.zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None) [source]
Zero-one classification loss. If normalize is True, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). The best performance is 0. Read more in the User Guide. Paramete... | sklearn.modules.generated.sklearn.metrics.zero_one_loss#sklearn.metrics.zero_one_loss |
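A short example of the `normalize` switch described above:

```python
from sklearn.metrics import zero_one_loss

y_true = [1, 2, 3, 4]
y_pred = [2, 2, 3, 4]
frac = zero_one_loss(y_true, y_pred)                    # fraction: 0.25
count = zero_one_loss(y_true, y_pred, normalize=False)  # count: 1
```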
class sklearn.mixture.BayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=No... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture |
sklearn.mixture.BayesianGaussianMixture
class sklearn.mixture.BayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, me... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture |
fit(X, y=None) [source]
Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or ... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.fit |
fit_predict(X, y=None) [source]
Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the c... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.fit_predict |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.get_params |
predict(X) [source]
Predict the labels for the data samples in X using the trained model. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
labelsarray, shape (n_samples,)
Component labels. | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.predict |
predict_proba(X) [source]
Predict posterior probability of each component given the data. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
resparray, shape (n_samples, n_components)
Returns the probab... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.predict_proba |
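A quick sketch contrasting predict (hard labels) with predict_proba (per-component posteriors); the data is an assumed toy example:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(40, 2), rng.randn(40, 2) + 6.0])

bgm = BayesianGaussianMixture(n_components=2, random_state=0).fit(X)
labels = bgm.predict(X)        # hard assignments, shape (n_samples,)
proba = bgm.predict_proba(X)   # soft assignments, shape (n_samples, n_components)
```

Each row of predict_proba sums to 1, and its argmax agrees with predict.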
sample(n_samples=1) [source]
Generate random samples from the fitted Gaussian distribution. Parameters
n_samplesint, default=1
Number of samples to generate. Returns
Xarray, shape (n_samples, n_features)
Randomly generated sample.
yarray, shape (n_samples,)
Component labels.
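A short sketch of sample on a fitted model; note it returns both the draws and the index of the component each draw came from (toy data assumed):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X_train = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 4.0])

bgm = BayesianGaussianMixture(n_components=2, random_state=0).fit(X_train)
X_new, y_new = bgm.sample(n_samples=10)  # draws plus their component labels
```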
score(X, y=None) [source]
Compute the per-sample average log-likelihood of the given data X. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
log_likelihoodfloat
Log likelihood of the Gaussian mixtu... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.score |
score_samples(X) [source]
Compute the weighted log probabilities for each sample. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
log_probarray, shape (n_samples,)
Log probabilities of each data poin... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.score_samples |
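The relationship between score and score_samples can be sketched as follows (the single-component fit on random data is purely illustrative):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = rng.randn(100, 3)

bgm = BayesianGaussianMixture(n_components=1, random_state=0).fit(X)
per_sample = bgm.score_samples(X)  # log-density of each point, shape (100,)
avg = bgm.score(X)                 # the mean of those per-sample values
```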
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.set_params |
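A sketch of set_params on a plain estimator and on a nested Pipeline via the <component>__<parameter> form; the step names ("scale", "gm") are illustrative choices:

```python
from sklearn.mixture import BayesianGaussianMixture
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

bgm = BayesianGaussianMixture(n_components=2)
bgm.set_params(n_components=3)  # plain parameter on a simple estimator

# Nested form: <component>__<parameter> reaches inside a Pipeline
pipe = Pipeline([("scale", StandardScaler()), ("gm", BayesianGaussianMixture())])
pipe.set_params(gm__n_components=4)
```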
class sklearn.mixture.GaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [source]
Gaussian Mixture. Represent... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture |
sklearn.mixture.GaussianMixture
class sklearn.mixture.GaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [sou... | sklearn.modules.generated.sklearn.mixture.gaussianmixture |
aic(X) [source]
Akaike information criterion for the current model on the input X. Parameters
Xarray of shape (n_samples, n_dimensions)
Returns
aicfloat
The lower the better. | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.aic |
bic(X) [source]
Bayesian information criterion for the current model on the input X. Parameters
Xarray of shape (n_samples, n_dimensions)
Returns
bicfloat
The lower the better. | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.bic |
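A minimal sketch of using bic for model selection over the number of components; the two-blob data and candidate set (1, 2, 3) are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5.0])

# Fit candidate models and compare their BIC; the penalty term discourages
# components that do not improve the likelihood enough
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)  # lower BIC is better
```

The same pattern works with aic, which applies a weaker complexity penalty.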
fit(X, y=None) [source]
Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or ... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.fit |
fit_predict(X, y=None) [source]
Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the c... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.fit_predict |
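fit_predict combines fitting and labeling in one call, as sketched below on assumed toy data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(60, 2), rng.randn(60, 2) + 6.0])

gm = GaussianMixture(n_components=2, random_state=0)
labels = gm.fit_predict(X)  # fit the model and label X in one call
```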
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.get_params |
predict(X) [source]
Predict the labels for the data samples in X using the trained model. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
labelsarray, shape (n_samples,)
Component labels. | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.predict |
predict_proba(X) [source]
Predict posterior probability of each component given the data. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
resparray, shape (n_samples, n_components)
Returns the probab... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.predict_proba |
sample(n_samples=1) [source]
Generate random samples from the fitted Gaussian distribution. Parameters
n_samplesint, default=1
Number of samples to generate. Returns
Xarray, shape (n_samples, n_features)
Randomly generated sample.
yarray, shape (n_samples,)
Component labels.
score(X, y=None) [source]
Compute the per-sample average log-likelihood of the given data X. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
log_likelihoodfloat
Log likelihood of the Gaussian mixtu... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.score |
score_samples(X) [source]
Compute the weighted log probabilities for each sample. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
log_probarray, shape (n_samples,)
Log probabilities of each data poin... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.score_samples |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.set_params |
sklearn.model_selection.check_cv(cv=5, y=None, *, classifier=False) [source]
Input checker utility for building a cross-validator. Parameters
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 5... | sklearn.modules.generated.sklearn.model_selection.check_cv#sklearn.model_selection.check_cv |
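A short sketch of check_cv, which resolves an integer into a concrete splitter; the classifier flag decides whether splitting is stratified (toy labels assumed):

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold, check_cv

y = np.array([0, 0, 0, 1, 1, 1])
cv_clf = check_cv(3, y, classifier=True)   # stratified splitting for classifiers
cv_reg = check_cv(3, y, classifier=False)  # plain KFold otherwise
```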
sklearn.model_selection.cross_validate(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score=False, return_estimator=False, error_score=nan) [source]
Evaluate metric(s) by cross-validation and also record fit/score times. Read... | sklearn.modules.generated.sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate |
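A minimal cross_validate sketch; the choice of iris data and LogisticRegression is illustrative, not prescribed by the source:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
res = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=5,
                     scoring=["accuracy"], return_train_score=True)
# res maps keys like 'fit_time', 'score_time', 'test_accuracy',
# 'train_accuracy' to arrays with one entry per fold
```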
sklearn.model_selection.cross_val_predict(estimator, X, y=None, *, groups=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', method='predict') [source]
Generate cross-validated estimates for each input data point. The data is split according to the cv parameter. Each sample belongs to exa... | sklearn.modules.generated.sklearn.model_selection.cross_val_predict#sklearn.model_selection.cross_val_predict
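A sketch of cross_val_predict, which yields one out-of-fold prediction per sample (dataset and estimator are assumed for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
# Each prediction comes from a model that never saw that sample during fitting
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
```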
sklearn.model_selection.cross_val_score(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', error_score=nan) [source]
Evaluate a score by cross-validation. Read more in the User Guide. Parameters
estimatorestimator object implementing ‘fit’... | sklearn.modules.generated.sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score |
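A minimal cross_val_score sketch returning one score per fold; the dataset and estimator are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
```

Unlike cross_validate, this returns only the test scores, using the estimator's default scorer unless scoring is given.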