| doc_content | doc_id |
|---|---|
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.densify |
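The sparsify()/densify() round trip described above can be exercised in a few lines; a minimal sketch with toy data (the arrays below are hypothetical):

```python
import numpy as np
from scipy import sparse
from sklearn.linear_model import SGDClassifier

# Toy binary problem (hypothetical data).
X = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
y = np.array([0, 0, 1, 1])

clf = SGDClassifier(penalty="l1", random_state=0).fit(X, y)

clf.sparsify()                    # coef_ becomes a scipy.sparse matrix
assert sparse.issparse(clf.coef_)

clf.densify()                     # back to the default ndarray format
clf.densify()                     # calling it again is a no-op
assert isinstance(clf.coef_, np.ndarray)
```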
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source]
Fit linear model with Stochastic Gradient Descent. Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Training data.
y : ndarray of shape (n_samples,)
Target values.
coef_init : ndarray of shape (n_classes, n_feature... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.get_params |
partial_fit(X, y, classes=None, sample_weight=None) [source]
Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and ear... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.partial_fit |
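Because partial_fit runs a single epoch, the caller drives convergence; a minimal sketch on synthetic data (the data and epoch count below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = SGDClassifier(random_state=0)
for epoch in range(5):
    # classes must be given on the first call, since a later
    # mini-batch is not guaranteed to contain every label
    clf.partial_fit(X, y, classes=np.array([0, 1]))

assert set(clf.predict(X)) <= {0, 1}
```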
predict(X) [source]
Predict class labels for samples in X. Parameters
X : array-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
C : array, shape [n_samples]
Predicted class label per sample. | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict |
property predict_log_proba
Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss=”modified_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See predict_proba for details. Parameters
X : {array-like, sparse matrix}... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict_log_proba |
property predict_proba
Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss=”modified_huber” are ... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict_proba |
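The one-vs-rest normalization can be seen directly with loss="modified_huber"; a sketch on synthetic data (the make_classification parameters are chosen arbitrarily):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=300, n_classes=3, n_informative=4,
                           random_state=0)
clf = SGDClassifier(loss="modified_huber", random_state=0).fit(X, y)

proba = clf.predict_proba(X[:5])
assert proba.shape == (5, 3)
# the binary one-vs-rest estimates are normalized so each row sums to 1
assert np.allclose(proba.sum(axis=1), 1.0)
```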
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
X : array-like of shape (n_samples, n_featur... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.score |
set_params(**kwargs) [source]
Set and validate the parameters of estimator. Parameters
**kwargs : dict
Estimator parameters. Returns
self : object
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.set_params |
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. ... | sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.sparsify |
class sklearn.linear_model.SGDRegressor(loss='squared_loss', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor |
sklearn.linear_model.SGDRegressor
class sklearn.linear_model.SGDRegressor(loss='squared_loss', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False,... | sklearn.modules.generated.sklearn.linear_model.sgdregressor |
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.densify |
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source]
Fit linear model with Stochastic Gradient Descent. Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Training data.
y : ndarray of shape (n_samples,)
Target values.
coef_init : ndarray of shape (n_features,), default=... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.get_params |
partial_fit(X, y, sample_weight=None) [source]
Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping sh... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.partial_fit |
predict(X) [source]
Predict using the linear model. Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Returns
ndarray of shape (n_samples,)
Predicted target values per element in X. | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.score |
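The R^2 returned by score can be checked against this definition directly; a minimal sketch on synthetic data (the data-generating model is hypothetical):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(100)

reg = SGDRegressor(random_state=0).fit(X, y)

y_pred = reg.predict(X)
u = ((y - y_pred) ** 2).sum()      # residual sum of squares
v = ((y - y.mean()) ** 2).sum()    # total sum of squares
assert np.isclose(reg.score(X, y), 1 - u / v)
```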
set_params(**kwargs) [source]
Set and validate the parameters of estimator. Parameters
**kwargs : dict
Estimator parameters. Returns
self : object
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.set_params |
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. ... | sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.sparsify |
class sklearn.linear_model.TheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False) [source]
Theil-Sen Estimator: robust multivariate regression model. The algorithm calculates least square solutions on ... | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor |
sklearn.linear_model.TheilSenRegressor
class sklearn.linear_model.TheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False) [source]
Theil-Sen Estimator: robust multivariate regression model. The algori... | sklearn.modules.generated.sklearn.linear_model.theilsenregressor |
fit(X, y) [source]
Fit linear model. Parameters
X : ndarray of shape (n_samples, n_features)
Training data.
y : ndarray of shape (n_samples,)
Target values. Returns
self : returns an instance of self. | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.get_params |
predict(X) [source]
Predict using the linear model. Parameters
X : array-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
C : array, shape (n_samples,)
Returns predicted values. | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.set_params |
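The <component>__<parameter> convention is easiest to see on a nested estimator; a minimal sketch (the step names "scale" and "reg" are arbitrary):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import TheilSenRegressor

pipe = Pipeline([("scale", StandardScaler()),
                 ("reg", TheilSenRegressor())])

# nested parameters are addressed as <component>__<parameter>
pipe.set_params(reg__max_iter=50, reg__tol=1e-2)
assert pipe.get_params()["reg__max_iter"] == 50
```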
class sklearn.linear_model.TweedieRegressor(*, power=0.0, alpha=1.0, fit_intercept=True, link='auto', max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source]
Generalized Linear Model with a Tweedie distribution. This estimator can be used to model different GLMs depending on the power parameter, which determi... | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor |
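As a sketch of how the power parameter selects the distribution (0 = Normal, 1 = Poisson, 2 = Gamma), fitted on tiny hypothetical count-like data:

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 3.5, 5.0, 5.5])

# power=1 with a log link gives a Poisson-style GLM
glm = TweedieRegressor(power=1, link="log").fit(X, y)

y_pred = glm.predict(X)
assert y_pred.shape == (4,)
assert (y_pred > 0).all()          # log link keeps predictions positive
```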
sklearn.linear_model.TweedieRegressor
class sklearn.linear_model.TweedieRegressor(*, power=0.0, alpha=1.0, fit_intercept=True, link='auto', max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source]
Generalized Linear Model with a Tweedie distribution. This estimator can be used to model different GLMs dependi... | sklearn.modules.generated.sklearn.linear_model.tweedieregressor |
fit(X, y, sample_weight=None) [source]
Fit a Generalized Linear Model. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
self : re... | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.get_params |
predict(X) [source]
Predict using GLM with feature matrix X. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Samples. Returns
y_pred : array of shape (n_samples,)
Returns predicted values. | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.predict |
score(X, y, sample_weight=None) [source]
Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 uses deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), ... | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.set_params |
class sklearn.manifold.Isomap(*, n_neighbors=5, n_components=2, eigen_solver='auto', tol=0, max_iter=None, path_method='auto', neighbors_algorithm='auto', n_jobs=None, metric='minkowski', p=2, metric_params=None) [source]
Isomap Embedding. Non-linear dimensionality reduction through Isometric Mapping. Read more in the ... | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap |
sklearn.manifold.Isomap
class sklearn.manifold.Isomap(*, n_neighbors=5, n_components=2, eigen_solver='auto', tol=0, max_iter=None, path_method='auto', neighbors_algorithm='auto', n_jobs=None, metric='minkowski', p=2, metric_params=None) [source]
Isomap Embedding. Non-linear dimensionality reduction through Isometric... | sklearn.modules.generated.sklearn.manifold.isomap |
fit(X, y=None) [source]
Compute the embedding vectors for data X. Parameters
X : {array-like, sparse graph, BallTree, KDTree, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse graph, precomputed tree, or NearestNeighbors object.
y : Ignored
Returns
self : returns... | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.fit |
fit_transform(X, y=None) [source]
Fit the model from data in X and transform X. Parameters
X : {array-like, sparse graph, BallTree, KDTree}
Training vector, where n_samples is the number of samples and n_features is the number of features.
y : Ignored
Returns
X_new : array-like, shape (n_samples, n_components) | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.fit_transform |
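A minimal sketch of fit_transform on the bundled digits data (the subset size of 200 samples is chosen arbitrarily):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)
embedding = Isomap(n_neighbors=5, n_components=2).fit_transform(X[:200])
assert embedding.shape == (200, 2)
```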
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.get_params |
reconstruction_error() [source]
Compute the reconstruction error for the embedding. Returns
reconstruction_error : float
Notes The cost function of an isomap embedding is E = frobenius_norm[K(D) - K(D_fit)] / n_samples, where D is the matrix of distances for ... | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.reconstruction_error |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.set_params |
transform(X) [source]
Transform X. This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data ... | sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.transform |
class sklearn.manifold.LocallyLinearEmbedding(*, n_neighbors=5, n_components=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto', random_state=None, n_jobs=None) [source]
Locally Linear Embedding Read more in the User Guide.... | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding |
sklearn.manifold.LocallyLinearEmbedding
class sklearn.manifold.LocallyLinearEmbedding(*, n_neighbors=5, n_components=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto', random_state=None, n_jobs=None) [source]
Locally Lin... | sklearn.modules.generated.sklearn.manifold.locallylinearembedding |
fit(X, y=None) [source]
Compute the embedding vectors for data X Parameters
X : array-like of shape [n_samples, n_features]
Training set.
y : Ignored
Returns
self : returns an instance of self. | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.fit |
fit_transform(X, y=None) [source]
Compute the embedding vectors for data X and transform X. Parameters
X : array-like of shape [n_samples, n_features]
Training set.
y : Ignored
Returns
X_new : array-like, shape (n_samples, n_components) | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.set_params |
transform(X) [source]
Transform new points into embedding space. Parameters
X : array-like of shape (n_samples, n_features)
Returns
X_new : array, shape = [n_samples, n_components]
Notes Because of scaling performed by this method, it is discouraged to use it together with methods that are not scale-invariant... | sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.transform |
sklearn.manifold.locally_linear_embedding(X, *, n_neighbors, n_components, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, random_state=None, n_jobs=None) [source]
Perform a Locally Linear Embedding analysis on the data. Read more in the User Guide. ... | sklearn.modules.generated.sklearn.manifold.locally_linear_embedding#sklearn.manifold.locally_linear_embedding |
class sklearn.manifold.MDS(n_components=2, *, metric=True, n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=None, random_state=None, dissimilarity='euclidean') [source]
Multidimensional scaling. Read more in the User Guide. Parameters
n_components : int, default=2
Number of dimensions in which to immerse the d... | sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS |
sklearn.manifold.MDS
class sklearn.manifold.MDS(n_components=2, *, metric=True, n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=None, random_state=None, dissimilarity='euclidean') [source]
Multidimensional scaling. Read more in the User Guide. Parameters
n_components : int, default=2
Number of dimensions in... | sklearn.modules.generated.sklearn.manifold.mds |
fit(X, y=None, init=None) [source]
Computes the position of the points in the embedding space. Parameters
X : array-like of shape (n_samples, n_features) or (n_samples, n_samples)
Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix.
y : Ignored
init : ndarray of shape (n_samples... | sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.fit |
fit_transform(X, y=None, init=None) [source]
Fit the data from X, and returns the embedded coordinates. Parameters
X : array-like of shape (n_samples, n_features) or (n_samples, n_samples)
Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix.
y : Ignored
init : ndarray of shape (... | sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.set_params |
sklearn.manifold.smacof(dissimilarities, *, metric=True, n_components=2, init=None, n_init=8, n_jobs=None, max_iter=300, verbose=0, eps=0.001, random_state=None, return_n_iter=False) [source]
Computes multidimensional scaling using the SMACOF algorithm. The SMACOF (Scaling by MAjorizing a COmplicated Function) algori... | sklearn.modules.generated.sklearn.manifold.smacof#sklearn.manifold.smacof |
class sklearn.manifold.SpectralEmbedding(n_components=2, *, affinity='nearest_neighbors', gamma=None, random_state=None, eigen_solver=None, n_neighbors=None, n_jobs=None) [source]
Spectral embedding for non-linear dimensionality reduction. Forms an affinity matrix given by the specified function and applies spectral ... | sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding |
sklearn.manifold.SpectralEmbedding
class sklearn.manifold.SpectralEmbedding(n_components=2, *, affinity='nearest_neighbors', gamma=None, random_state=None, eigen_solver=None, n_neighbors=None, n_jobs=None) [source]
Spectral embedding for non-linear dimensionality reduction. Forms an affinity matrix given by the spe... | sklearn.modules.generated.sklearn.manifold.spectralembedding |
fit(X, y=None) [source]
Fit the model from data in X. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix}, shape (n_samples, n_sam... | sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.fit |
fit_transform(X, y=None) [source]
Fit the model from data in X and transform X. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix... | sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.set_params |
sklearn.manifold.spectral_embedding(adjacency, *, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source]
Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a normalized graph Laplacian whose spectrum... | sklearn.modules.generated.sklearn.manifold.spectral_embedding#sklearn.manifold.spectral_embedding |
sklearn.manifold.trustworthiness(X, X_embedded, *, n_neighbors=5, metric='euclidean') [source]
Expresses to what extent the local structure is retained. The trustworthiness is within [0, 1]. It is defined as \[T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1} \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))\... | sklearn.modules.generated.sklearn.manifold.trustworthiness#sklearn.manifold.trustworthiness |
class sklearn.manifold.TSNE(n_components=2, *, perplexity=30.0, early_exaggeration=12.0, learning_rate=200.0, n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', init='random', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None, square_distances='legacy') [source]
... | sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE |
sklearn.manifold.TSNE
class sklearn.manifold.TSNE(n_components=2, *, perplexity=30.0, early_exaggeration=12.0, learning_rate=200.0, n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', init='random', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None, square_distan... | sklearn.modules.generated.sklearn.manifold.tsne |
fit(X, y=None) [source]
Fit X into an embedded space. Parameters
X : ndarray of shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ ... | sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.fit |
fit_transform(X, y=None) [source]
Fit X into an embedded space and return that transformed output. Parameters
X : ndarray of shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X... | sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.fit_transform |
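TSNE exposes no separate transform; embedding data means calling fit_transform on the whole set. A minimal sketch on synthetic data (parameters arbitrary; init="random" is pinned for reproducibility across versions):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
X = rng.randn(50, 10)

X_embedded = TSNE(n_components=2, perplexity=10, init="random",
                  random_state=0).fit_transform(X)
assert X_embedded.shape == (50, 2)
```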
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.set_params |
sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None) [source]
Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. Read more in the Us... | sklearn.modules.generated.sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score |
sklearn.metrics.adjusted_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source]
Adjusted Mutual Information between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information (MI) score to account for chance. It accounts for the fact that the MI is gene... | sklearn.modules.generated.sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score |
sklearn.metrics.adjusted_rand_score(labels_true, labels_pred) [source]
Rand index adjusted for chance. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. ... | sklearn.modules.generated.sklearn.metrics.adjusted_rand_score#sklearn.metrics.adjusted_rand_score |
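Because the adjusted Rand index compares pair assignments rather than label values, relabeling a partition does not change the score; a quick sketch with toy labels:

```python
from sklearn.metrics import adjusted_rand_score

# ARI ignores the actual label values: a relabeled but otherwise
# identical partition still scores a perfect 1.0
assert adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0

# ARI is symmetric in its two arguments
assert adjusted_rand_score([0, 0, 1, 1], [0, 1, 1, 1]) == \
       adjusted_rand_score([0, 1, 1, 1], [0, 0, 1, 1])
```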
sklearn.metrics.auc(x, y) [source]
Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC-curve, see roc_auc_score. For an alternative way to summarize a precision-recall curve, see average_precision_score. Parameters
... | sklearn.modules.generated.sklearn.metrics.auc#sklearn.metrics.auc |
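Since auc applies the trapezoidal rule to arbitrary (x, y) points, it can be checked on a curve with a known area; a minimal sketch:

```python
import numpy as np
from sklearn.metrics import auc

# auc() is just the trapezoidal rule over the given (x, y) points
x = np.array([0.0, 0.5, 1.0])
y = np.array([0.0, 0.5, 1.0])
assert np.isclose(auc(x, y), 0.5)    # area under y = x on [0, 1]
```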
sklearn.metrics.average_precision_score(y_true, y_score, *, average='macro', pos_label=1, sample_weight=None) [source]
Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previ... | sklearn.modules.generated.sklearn.metrics.average_precision_score#sklearn.metrics.average_precision_score |
sklearn.metrics.balanced_accuracy_score(y_true, y_pred, *, sample_weight=None, adjusted=False) [source]
Compute the balanced accuracy. The balanced accuracy in binary and multiclass classification problems is used to deal with imbalanced datasets. It is defined as the average of recall obtained on each class. The best value ... | sklearn.modules.generated.sklearn.metrics.balanced_accuracy_score#sklearn.metrics.balanced_accuracy_score |
sklearn.metrics.brier_score_loss(y_true, y_prob, *, sample_weight=None, pos_label=None) [source]
Compute the Brier score loss. The smaller the Brier score loss, the better, hence the naming with “loss”. The Brier score measures the mean squared difference between the predicted probability and the actual outcome. The ... | sklearn.modules.generated.sklearn.metrics.brier_score_loss#sklearn.metrics.brier_score_loss |
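The "mean squared difference between the predicted probability and the actual outcome" can be verified against the function directly; a sketch with toy probabilities:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])

# mean squared difference between predicted probability and outcome
expected = np.mean((y_prob - y_true) ** 2)
assert np.isclose(brier_score_loss(y_true, y_prob), expected)
```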
sklearn.metrics.calinski_harabasz_score(X, labels) [source]
Compute the Calinski and Harabasz score. It is also known as the Variance Ratio Criterion. The score is defined as ratio between the within-cluster dispersion and the between-cluster dispersion. Read more in the User Guide. Parameters
X : array-like of shap... | sklearn.modules.generated.sklearn.metrics.calinski_harabasz_score#sklearn.metrics.calinski_harabasz_score |
sklearn.metrics.check_scoring(estimator, scoring=None, *, allow_none=False) [source]
Determine scorer from user options. A TypeError will be thrown if the estimator cannot be scored. Parameters
estimator : estimator object implementing ‘fit’
The object to use to fit the data.
scoring : str or callable, default=None... | sklearn.modules.generated.sklearn.metrics.check_scoring#sklearn.metrics.check_scoring |
sklearn.metrics.classification_report(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division='warn') [source]
Build a text report showing the main classification metrics. Read more in the User Guide. Parameters
y_true1d array-like, or label indicator arra... | sklearn.modules.generated.sklearn.metrics.classification_report#sklearn.metrics.classification_report |
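A minimal sketch of `classification_report`, both as text and as a dict (toy labels for illustration):

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
names = ["class 0", "class 1", "class 2"]

# Text table of precision/recall/F1/support per class.
report = classification_report(y_true, y_pred, target_names=names)

# output_dict=True returns the same numbers as a nested dict.
as_dict = classification_report(y_true, y_pred, target_names=names,
                                output_dict=True)
```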
sklearn.metrics.cluster.contingency_matrix(labels_true, labels_pred, *, eps=None, sparse=False, dtype=<class 'numpy.int64'>) [source]
Build a contingency matrix describing the relationship between labels. Parameters
labels_trueint array, shape = [n_samples]
Ground truth class labels to be used as a reference. ... | sklearn.modules.generated.sklearn.metrics.cluster.contingency_matrix#sklearn.metrics.cluster.contingency_matrix |
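A minimal sketch of `contingency_matrix` (toy clusterings for illustration):

```python
from sklearn.metrics.cluster import contingency_matrix

labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]

# Rows index true classes, columns index predicted clusters;
# each entry counts samples falling in that (class, cluster) pair.
C = contingency_matrix(labels_true, labels_pred)
```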
sklearn.metrics.cluster.pair_confusion_matrix(labels_true, labels_pred) [source]
Pair confusion matrix arising from two clusterings. The pair confusion matrix \(C\) computes a 2 by 2 similarity matrix between two clusterings by considering all pairs of samples and counting pairs that are assigned into the same or int... | sklearn.modules.generated.sklearn.metrics.cluster.pair_confusion_matrix#sklearn.metrics.cluster.pair_confusion_matrix |
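A minimal sketch of `pair_confusion_matrix` for two identical clusterings (toy labels for illustration):

```python
from sklearn.metrics.cluster import pair_confusion_matrix

# Identical clusterings: every pair of samples agrees.
# C[0, 0] counts ordered pairs split in both clusterings,
# C[1, 1] counts ordered pairs grouped together in both.
C = pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
```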
sklearn.metrics.cohen_kappa_score(y1, y2, *, labels=None, weights=None, sample_weight=None) [source]
Cohen’s kappa: a statistic that measures inter-annotator agreement. This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on a classification problem. It is def... | sklearn.modules.generated.sklearn.metrics.cohen_kappa_score#sklearn.metrics.cohen_kappa_score |
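A minimal sketch of `cohen_kappa_score`, with the chance-corrected value worked out by hand (toy annotations for illustration):

```python
from sklearn.metrics import cohen_kappa_score

y1 = [0, 0, 1, 1]
y2 = [0, 0, 1, 0]

# Observed agreement p_o = 3/4; chance agreement
# p_e = (2/4)(3/4) + (2/4)(1/4) = 1/2,
# so kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5.
kappa = cohen_kappa_score(y1, y2)
```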
sklearn.metrics.completeness_score(labels_true, labels_pred) [source]
Completeness metric of a cluster labeling given a ground truth. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. This metric is independent of the absolute values ... | sklearn.modules.generated.sklearn.metrics.completeness_score#sklearn.metrics.completeness_score |
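A minimal sketch of `completeness_score` (toy labelings for illustration):

```python
from sklearn.metrics import completeness_score

# Label permutations do not matter: the score is 1.0.
perfect = completeness_score([0, 0, 1, 1], [1, 1, 0, 0])

# Assigning everything to one cluster is also complete:
# every class sits entirely inside a single cluster.
single = completeness_score([0, 0, 1, 1], [0, 0, 0, 0])
```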
class sklearn.metrics.ConfusionMatrixDisplay(confusion_matrix, *, display_labels=None) [source]
Confusion Matrix visualization. It is recommended to use plot_confusion_matrix to create a ConfusionMatrixDisplay. All parameters are stored as attributes. Read more in the User Guide. Parameters
confusion_matrixndarray ... | sklearn.modules.generated.sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay |
sklearn.metrics.ConfusionMatrixDisplay
class sklearn.metrics.ConfusionMatrixDisplay(confusion_matrix, *, display_labels=None) [source]
Confusion Matrix visualization. It is recommended to use plot_confusion_matrix to create a ConfusionMatrixDisplay. All parameters are stored as attributes. Read more in the User Guide... | sklearn.modules.generated.sklearn.metrics.confusionmatrixdisplay
plot(*, include_values=True, cmap='viridis', xticks_rotation='horizontal', values_format=None, ax=None, colorbar=True) [source]
Plot visualization. Parameters
include_valuesbool, default=True
Includes values in confusion matrix.
cmapstr or matplotlib Colormap, default=’viridis’
Colormap recognized by matplo... | sklearn.modules.generated.sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay.plot |
sklearn.metrics.confusion_matrix(y_true, y_pred, *, labels=None, sample_weight=None, normalize=None) [source]
Compute confusion matrix to evaluate the accuracy of a classification. By definition a confusion matrix \(C\) is such that \(C_{i, j}\) is equal to the number of observations known to be in group \(i\) and pr... | sklearn.modules.generated.sklearn.metrics.confusion_matrix#sklearn.metrics.confusion_matrix |
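A minimal sketch of `confusion_matrix` (toy multiclass labels for illustration):

```python
from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]

# C[i, j] counts samples known to be in class i
# and predicted as class j.
C = confusion_matrix(y_true, y_pred)
```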
sklearn.metrics.consensus_score(a, b, *, similarity='jaccard') [source]
The similarity of two sets of biclusters. Similarity between individual biclusters is computed. Then the best matching between sets is found using the Hungarian algorithm. The final score is the sum of similarities divided by the size of the larg... | sklearn.modules.generated.sklearn.metrics.consensus_score#sklearn.metrics.consensus_score |
sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None) [source]
Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_scores are broken by giving maximal rank tha... | sklearn.modules.generated.sklearn.metrics.coverage_error#sklearn.metrics.coverage_error |
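A minimal sketch of `coverage_error` on a multilabel toy problem (values chosen for illustration):

```python
from sklearn.metrics import coverage_error

# Multilabel indicator targets with per-label scores.
y_true = [[1, 0, 0], [0, 1, 1]]
y_score = [[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]]

# Sample 1 needs the top 2 ranked labels to cover its true label,
# sample 2 needs the top 3; the average is 2.5.
err = coverage_error(y_true, y_score)
```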
sklearn.metrics.davies_bouldin_score(X, labels) [source]
Compute the Davies-Bouldin score. The score is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster distances to between-cluster distances. Thus, clusters which are farther apa... | sklearn.modules.generated.sklearn.metrics.davies_bouldin_score#sklearn.metrics.davies_bouldin_score
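A minimal sketch of `davies_bouldin_score` on two tight, well-separated toy clusters (data chosen for illustration; zero is the best possible score):

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

# Two clusters of two points each, far apart.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = [0, 0, 1, 1]

# Each cluster's mean distance to its centroid is 0.5 and the
# centroids are sqrt(200) apart, so the score is 1/sqrt(200).
score = davies_bouldin_score(X, labels)
```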
sklearn.metrics.dcg_score(y_true, y_score, *, k=None, log_base=2, sample_weight=None, ignore_ties=False) [source]
Compute Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. This ranking metric yields a high value if true labels a... | sklearn.modules.generated.sklearn.metrics.dcg_score#sklearn.metrics.dcg_score |
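A minimal sketch of `dcg_score`, with the discounted sum written out (toy relevances and scores for illustration):

```python
import numpy as np
from sklearn.metrics import dcg_score

# True relevance of each document and predicted scores.
y_true = np.asarray([[10, 0, 0, 1, 5]])
y_score = np.asarray([[0.1, 0.2, 0.3, 4.0, 70.0]])

# Relevances in predicted order are (5, 1, 0, 0, 10); each is
# discounted by log2(rank + 1):
# 5/log2(2) + 1/log2(3) + 10/log2(6) ~= 9.50
dcg = dcg_score(y_true, y_score)
```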
class sklearn.metrics.DetCurveDisplay(*, fpr, fnr, estimator_name=None, pos_label=None) [source]
DET curve visualization. It is recommended to use plot_det_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. New in version 0.24. Parameters
fprndarray
False positiv... | sklearn.modules.generated.sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay |
sklearn.metrics.DetCurveDisplay
class sklearn.metrics.DetCurveDisplay(*, fpr, fnr, estimator_name=None, pos_label=None) [source]
DET curve visualization. It is recommended to use plot_det_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. New in version 0.24. Parame... | sklearn.modules.generated.sklearn.metrics.detcurvedisplay
plot(ax=None, *, name=None, **kwargs) [source]
Plot visualization. Parameters
axmatplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
namestr, default=None
Name of DET curve for labeling. If None, use the name of the estimator. Returns
displayDetCurveDisplay ... | sklearn.modules.generated.sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay.plot |
sklearn.metrics.det_curve(y_true, y_score, pos_label=None, sample_weight=None) [source]
Compute error rates for different probability thresholds. Note This metric is used for evaluation of ranking and error tradeoffs of a binary classification task. Read more in the User Guide. New in version 0.24. Parameters
... | sklearn.modules.generated.sklearn.metrics.det_curve#sklearn.metrics.det_curve |
sklearn.metrics.explained_variance_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source]
Explained variance regression score function. Best possible score is 1.0, lower values are worse. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,) or (n_samples, n_... | sklearn.modules.generated.sklearn.metrics.explained_variance_score#sklearn.metrics.explained_variance_score |
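A minimal sketch of `explained_variance_score` (toy regression targets for illustration):

```python
from sklearn.metrics import explained_variance_score

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# 1 - Var(y_true - y_pred) / Var(y_true)
# = 1 - 0.3125 / 7.296875 = 447/467 ~= 0.957
evs = explained_variance_score(y_true, y_pred)
```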
sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Compute the F1 score, also known as balanced F-score or F-measure. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its... | sklearn.modules.generated.sklearn.metrics.f1_score#sklearn.metrics.f1_score |
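A minimal sketch of `f1_score` on a binary toy problem (labels chosen for illustration):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 0, 1]

# Precision 3/3 = 1.0, recall 3/4 = 0.75:
# F1 = 2 * 1.0 * 0.75 / (1.0 + 0.75) = 6/7.
f1 = f1_score(y_true, y_pred)
```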
sklearn.metrics.fbeta_score(y_true, y_pred, *, beta, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Compute the F-beta score. The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0. The beta param... | sklearn.modules.generated.sklearn.metrics.fbeta_score#sklearn.metrics.fbeta_score |
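A minimal sketch of `fbeta_score`, showing how beta shifts the weight between precision and recall (same toy labels as the illustration style above, precision 1.0 and recall 0.75):

```python
from sklearn.metrics import fbeta_score

y_true = [0, 1, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 0, 1]

# beta > 1 favors recall, beta < 1 favors precision.
f2 = fbeta_score(y_true, y_pred, beta=2)        # 15/19, pulled toward recall
f_half = fbeta_score(y_true, y_pred, beta=0.5)  # 0.9375, pulled toward precision
```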