| doc_content | doc_id |
|---|---|
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.set_params |
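As a minimal sketch of the `<component>__<parameter>` convention described above (the pipeline and its step names are invented for illustration):

```python
from sklearn.covariance import OAS
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Step names ("scale", "oas") become the prefixes for nested parameters.
pipe = Pipeline([("scale", StandardScaler()), ("oas", OAS())])

# Update a parameter of the nested "oas" step via <component>__<parameter>.
pipe.set_params(oas__assume_centered=True)
```

After the call, `pipe.get_params()["oas__assume_centered"]` reflects the new value.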
class sklearn.covariance.ShrunkCovariance(*, store_precision=True, assume_centered=False, shrinkage=0.1) [source]
Covariance estimator with shrinkage. Read more in the User Guide. Parameters
store_precision : bool, default=True
Specify if the estimated precision is stored.
assume_centered : bool, default=False
If T... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance |
sklearn.covariance.ShrunkCovariance
class sklearn.covariance.ShrunkCovariance(*, store_precision=True, assume_centered=False, shrinkage=0.1) [source]
Covariance estimator with shrinkage. Read more in the User Guide. Parameters
store_precision : bool, default=True
Specify if the estimated precision is stored.
ass... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance |
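A minimal usage sketch on synthetic data (the data and shrinkage value are arbitrary):

```python
import numpy as np
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
X = rng.randn(500, 3)  # synthetic, roughly centered Gaussian data

est = ShrunkCovariance(shrinkage=0.1).fit(X)
# covariance_ holds the shrunk estimate; precision_ holds its inverse,
# stored because store_precision=True by default.
print(est.covariance_.shape)  # (3, 3)
```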
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_cov : array-like of shape (n_features, n_features)
The covariance to compare with.
norm : {“frobenius”, “spectral”}, de... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.error_norm |
fit(X, y=None) [source]
Fit the shrunk covariance model according to the given training data and parameters. Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features. y : Ignored
Not used, present for API consistenc... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.get_params |
get_precision() [source]
Getter for the precision matrix. Returns
precision_ : array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object. | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.get_precision |
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
X : array-like of shape (n_samples, n_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Ret... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.mahalanobis |
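The squared distances can be cross-checked against the definition on synthetic data; this sketch recomputes them from the fitted `location_` and the precision matrix:

```python
import numpy as np
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
est = ShrunkCovariance().fit(X)

# Squared Mahalanobis distance of each observation under the fitted model.
d2 = est.mahalanobis(X)

# Cross-check against the definition (x - mu)^T P (x - mu), with P the
# precision matrix and mu the fitted location.
diff = X - est.location_
manual = np.einsum("ij,jk,ik->i", diff, est.get_precision(), diff)
```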
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_test : array-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is ... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.set_params |
sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1) [source]
Calculates a covariance matrix shrunk on the diagonal. Read more in the User Guide. Parameters
emp_cov : array-like of shape (n_features, n_features)
Covariance matrix to be shrunk.
shrinkage : float, default=0.1
Coefficient in the convex combina... | sklearn.modules.generated.sklearn.covariance.shrunk_covariance#sklearn.covariance.shrunk_covariance |
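The convex combination on the diagonal can be verified directly (synthetic data; `empirical_covariance` is used here only to build an input matrix):

```python
import numpy as np
from sklearn.covariance import empirical_covariance, shrunk_covariance

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
emp_cov = empirical_covariance(X)

# Convex combination with the scaled identity:
# (1 - shrinkage) * emp_cov + shrinkage * mu * I,
# where mu = trace(emp_cov) / n_features.
shrunk = shrunk_covariance(emp_cov, shrinkage=0.3)
mu = np.trace(emp_cov) / emp_cov.shape[0]
expected = (1 - 0.3) * emp_cov + 0.3 * mu * np.eye(4)
```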
class sklearn.cross_decomposition.CCA(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
Canonical Correlation Analysis, also known as “Mode B” PLS. Read more in the User Guide. Parameters
n_components : int, default=2
Number of components to keep. Should be in [1, min(n_samples,
n_feature... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA |
sklearn.cross_decomposition.CCA
class sklearn.cross_decomposition.CCA(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
Canonical Correlation Analysis, also known as “Mode B” PLS. Read more in the User Guide. Parameters
n_components : int, default=2
Number of components to keep. Should ... | sklearn.modules.generated.sklearn.cross_decomposition.cca |
fit(X, Y) [source]
Fit model to data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of sa... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
y : array-like of shape (n_samples, n_targets), default=None ... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
X : array-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructed : array-like of shape (n_samples, n_features)
Not... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
X : array-like of shape (n_samples, n_features)
Samples.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
X : array-like of shape (n_samples, n_features)
Samples to transform.
Y : array-like of shape (n_samples, n_targets), default=None
Target vectors.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalizatio... | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.transform |
class sklearn.cross_decomposition.PLSCanonical(n_components=2, *, scale=True, algorithm='nipals', max_iter=500, tol=1e-06, copy=True) [source]
Partial Least Squares transformer and regressor. Read more in the User Guide. New in version 0.8. Parameters
n_components : int, default=2
Number of components to keep. S... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical |
sklearn.cross_decomposition.PLSCanonical
class sklearn.cross_decomposition.PLSCanonical(n_components=2, *, scale=True, algorithm='nipals', max_iter=500, tol=1e-06, copy=True) [source]
Partial Least Squares transformer and regressor. Read more in the User Guide. New in version 0.8. Parameters
n_components : int, ... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical |
fit(X, Y) [source]
Fit model to data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of sa... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
y : array-like of shape (n_samples, n_targets), default=None ... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
X : array-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructed : array-like of shape (n_samples, n_features)
Not... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
X : array-like of shape (n_samples, n_features)
Samples.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
X : array-like of shape (n_samples, n_features)
Samples to transform.
Y : array-like of shape (n_samples, n_targets), default=None
Target vectors.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalizatio... | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.transform |
class sklearn.cross_decomposition.PLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
PLS regression. PLSRegression is also known as PLS2 or PLS1, depending on the number of targets. Read more in the User Guide. New in version 0.8. Parameters
n_components : int, default=2
Num... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression |
sklearn.cross_decomposition.PLSRegression
class sklearn.cross_decomposition.PLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
PLS regression. PLSRegression is also known as PLS2 or PLS1, depending on the number of targets. Read more in the User Guide. New in version 0.8. Pa... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression |
fit(X, Y) [source]
Fit model to data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of sa... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
y : array-like of shape (n_samples, n_targets), default=None ... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
X : array-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructed : array-like of shape (n_samples, n_features)
Not... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
X : array-like of shape (n_samples, n_features)
Samples.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
X : array-like of shape (n_samples, n_features)
Samples to transform.
Y : array-like of shape (n_samples, n_targets), default=None
Target vectors.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalizatio... | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.transform |
class sklearn.cross_decomposition.PLSSVD(n_components=2, *, scale=True, copy=True) [source]
Partial Least Square SVD. This transformer simply performs an SVD on the cross-covariance matrix X’Y. It is able to project both the training data X and the targets Y. The training data X is projected on the left singular vector... | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD |
sklearn.cross_decomposition.PLSSVD
class sklearn.cross_decomposition.PLSSVD(n_components=2, *, scale=True, copy=True) [source]
Partial Least Square SVD. This transformer simply performs an SVD on the cross-covariance matrix X’Y. It is able to project both the training data X and the targets Y. The training data X is ... | sklearn.modules.generated.sklearn.cross_decomposition.plssvd |
fit(X, Y) [source]
Fit model to data. Parameters
X : array-like of shape (n_samples, n_features)
Training samples.
Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Targets. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimensionality reduction. Parameters
X : array-like of shape (n_samples, n_features)
Training samples.
y : array-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
out : array-like or tuple of array-like
The transformed da... | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.set_params |
transform(X, Y=None) [source]
Apply the dimensionality reduction. Parameters
X : array-like of shape (n_samples, n_features)
Samples to be transformed.
Y : array-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
out : array-like or tuple of array-like
The transformed data X_... | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.transform |
sklearn.datasets.clear_data_home(data_home=None) [source]
Delete all the content of the data home cache. Parameters
data_home : str, default=None
The path to the scikit-learn data directory. If None, the default path is ~/scikit_learn_data. | sklearn.modules.generated.sklearn.datasets.clear_data_home#sklearn.datasets.clear_data_home |
sklearn.datasets.dump_svmlight_file(X, y, f, *, zero_based=True, comment=None, query_id=None, multilabel=False) [source]
Dump the dataset in svmlight / libsvm file format. This format is a text-based format, with one sample per line. It does not store zero-valued features and hence is suitable for sparse datasets. The fir... | sklearn.modules.generated.sklearn.datasets.dump_svmlight_file#sklearn.datasets.dump_svmlight_file |
sklearn.datasets.fetch_20newsgroups(*, data_home=None, subset='train', categories=None, shuffle=True, random_state=42, remove=(), download_if_missing=True, return_X_y=False) [source]
Load the filenames and data from the 20 newsgroups dataset (classification). Download it if necessary.
Classes 20
Samples total 188... | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups#sklearn.datasets.fetch_20newsgroups |
sklearn.datasets.fetch_20newsgroups_vectorized(*, subset='train', remove=(), data_home=None, download_if_missing=True, return_X_y=False, normalize=True, as_frame=False) [source]
Load and vectorize the 20 newsgroups dataset (classification). Download it if necessary. This is a convenience function; the transformation ... | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups_vectorized#sklearn.datasets.fetch_20newsgroups_vectorized |
sklearn.datasets.fetch_california_housing(*, data_home=None, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the California housing dataset (regression).
Samples total 20640
Dimensionality 8
Features real
Target real 0.15 - 5. Read more in the User Guide. Parameters
data_homest... | sklearn.modules.generated.sklearn.datasets.fetch_california_housing#sklearn.datasets.fetch_california_housing |
sklearn.datasets.fetch_covtype(*, data_home=None, download_if_missing=True, random_state=None, shuffle=False, return_X_y=False, as_frame=False) [source]
Load the covertype dataset (classification). Download it if necessary.
Classes 7
Samples total 581012
Dimensionality 54
Features int Read more in the User ... | sklearn.modules.generated.sklearn.datasets.fetch_covtype#sklearn.datasets.fetch_covtype |
sklearn.datasets.fetch_kddcup99(*, subset=None, data_home=None, shuffle=False, random_state=None, percent10=True, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the kddcup99 dataset (classification). Download it if necessary.
Classes 23
Samples total 4898431
Dimensionality 41
Featur... | sklearn.modules.generated.sklearn.datasets.fetch_kddcup99#sklearn.datasets.fetch_kddcup99 |
sklearn.datasets.fetch_lfw_pairs(*, subset='train', data_home=None, funneled=True, resize=0.5, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True) [source]
Load the Labeled Faces in the Wild (LFW) pairs dataset (classification). Download it if necessary.
Classes 2
Samples tot... | sklearn.modules.generated.sklearn.datasets.fetch_lfw_pairs#sklearn.datasets.fetch_lfw_pairs |
sklearn.datasets.fetch_lfw_people(*, data_home=None, funneled=True, resize=0.5, min_faces_per_person=0, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True, return_X_y=False) [source]
Load the Labeled Faces in the Wild (LFW) people dataset (classification). Download it if necessar... | sklearn.modules.generated.sklearn.datasets.fetch_lfw_people#sklearn.datasets.fetch_lfw_people |
sklearn.datasets.fetch_olivetti_faces(*, data_home=None, shuffle=False, random_state=0, download_if_missing=True, return_X_y=False) [source]
Load the Olivetti faces data-set from AT&T (classification). Download it if necessary.
Classes 40
Samples total 400
Dimensionality 4096
Features real, between 0 and 1 ... | sklearn.modules.generated.sklearn.datasets.fetch_olivetti_faces#sklearn.datasets.fetch_olivetti_faces |
sklearn.datasets.fetch_openml(name: Optional[str] = None, *, version: Union[str, int] = 'active', data_id: Optional[int] = None, data_home: Optional[str] = None, target_column: Optional[Union[str, List]] = 'default-target', cache: bool = True, return_X_y: bool = False, as_frame: Union[str, bool] = 'auto') [source]
Fe... | sklearn.modules.generated.sklearn.datasets.fetch_openml#sklearn.datasets.fetch_openml |
sklearn.datasets.fetch_rcv1(*, data_home=None, subset='all', download_if_missing=True, random_state=None, shuffle=False, return_X_y=False) [source]
Load the RCV1 multilabel dataset (classification). Download it if necessary. Version: RCV1-v2, vectors, full sets, topics multilabels.
Classes 103
Samples total 80441... | sklearn.modules.generated.sklearn.datasets.fetch_rcv1#sklearn.datasets.fetch_rcv1 |
sklearn.datasets.fetch_species_distributions(*, data_home=None, download_if_missing=True) [source]
Loader for the species distribution dataset from Phillips et al. (2006). Read more in the User Guide. Parameters
data_home : str, default=None
Specify another download and cache folder for the datasets. By default all sc... | sklearn.modules.generated.sklearn.datasets.fetch_species_distributions#sklearn.datasets.fetch_species_distributions |
sklearn.datasets.get_data_home(data_home=None) → str[source]
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user home folder. Alternatively, it can ... | sklearn.modules.generated.sklearn.datasets.get_data_home#sklearn.datasets.get_data_home |
sklearn.datasets.load_boston(*, return_X_y=False) [source]
Load and return the Boston house-prices dataset (regression).
Samples total 506
Dimensionality 13
Features real, positive
Targets real 5. - 50. Read more in the User Guide. Parameters
return_X_y : bool, default=False
If True, returns (data, targe... | sklearn.modules.generated.sklearn.datasets.load_boston#sklearn.datasets.load_boston |
sklearn.datasets.load_breast_cancer(*, return_X_y=False, as_frame=False) [source]
Load and return the breast cancer wisconsin dataset (classification). The breast cancer dataset is a classic and very easy binary classification dataset.
Classes 2
Samples per class 212(M),357(B)
Samples total 569
Dimensionality... | sklearn.modules.generated.sklearn.datasets.load_breast_cancer#sklearn.datasets.load_breast_cancer |
sklearn.datasets.load_diabetes(*, return_X_y=False, as_frame=False) [source]
Load and return the diabetes dataset (regression).
Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346 Read more in the User Guide. Parameters
return_X_y : bool, default=False
If True, retur... | sklearn.modules.generated.sklearn.datasets.load_diabetes#sklearn.datasets.load_diabetes |
sklearn.datasets.load_digits(*, n_class=10, return_X_y=False, as_frame=False) [source]
Load and return the digits dataset (classification). Each datapoint is an 8x8 image of a digit.
Classes 10
Samples per class ~180
Samples total 1797
Dimensionality 64
Features integers 0-16 Read more in the User Guide. ... | sklearn.modules.generated.sklearn.datasets.load_digits#sklearn.datasets.load_digits |
sklearn.datasets.load_files(container_path, *, description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0) [source]
Load text files with categories as subfolder names. Individual samples are assumed to be files stored in a two-level folder structure such as ... | sklearn.modules.generated.sklearn.datasets.load_files#sklearn.datasets.load_files |
sklearn.datasets.load_iris(*, return_X_y=False, as_frame=False) [source]
Load and return the iris dataset (classification). The iris dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class 50
Samples total 150
Dimensionality 4
Features real, positive Read more in... | sklearn.modules.generated.sklearn.datasets.load_iris#sklearn.datasets.load_iris |
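A minimal loading sketch showing both return styles (Bunch attributes and the `(data, target)` tuple):

```python
from sklearn.datasets import load_iris

iris = load_iris()        # Bunch with data, target, feature_names, ...
print(iris.data.shape)    # (150, 4)

X, y = load_iris(return_X_y=True)  # or the (data, target) tuple directly
```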
sklearn.datasets.load_linnerud(*, return_X_y=False, as_frame=False) [source]
Load and return the physical exercise Linnerud dataset. This dataset is suitable for multi-output regression tasks.
Samples total 20
Dimensionality 3 (for both data and target)
Features integer
Targets integer Read more in the User... | sklearn.modules.generated.sklearn.datasets.load_linnerud#sklearn.datasets.load_linnerud |
sklearn.datasets.load_sample_image(image_name) [source]
Load the numpy array of a single sample image. Read more in the User Guide. Parameters
image_name : {“china.jpg”, “flower.jpg”}
The name of the sample image loaded. Returns
img : 3D array
The image as a numpy array: height x width x color Examples >>> from... | sklearn.modules.generated.sklearn.datasets.load_sample_image#sklearn.datasets.load_sample_image |
sklearn.datasets.load_sample_images() [source]
Load sample images for image manipulation. Loads both china and flower. Read more in the User Guide. Returns
data : Bunch
Dictionary-like object, with the following attributes.
images : list of ndarray of shape (427, 640, 3)
The two sample images.
filenames : list
T... | sklearn.modules.generated.sklearn.datasets.load_sample_images#sklearn.datasets.load_sample_images |
sklearn.datasets.load_svmlight_file(f, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load datasets in the svmlight / libsvm format into a sparse CSR matrix. This format is a text-based format, with one sample per line. It does not st... | sklearn.modules.generated.sklearn.datasets.load_svmlight_file#sklearn.datasets.load_svmlight_file |
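A round-trip sketch with dump_svmlight_file and load_svmlight_file (the file name and temporary directory are arbitrary):

```python
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = np.array([[0.0, 1.5], [2.0, 0.0]])
y = np.array([0.0, 1.0])

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.svmlight")
    dump_svmlight_file(X, y, path, zero_based=True)
    # Returned X is a sparse CSR matrix; zeros were never written to disk.
    X2, y2 = load_svmlight_file(path, zero_based=True)
```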
sklearn.datasets.load_svmlight_files(files, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load dataset from multiple files in SVMlight format. This function is equivalent to mapping load_svmlight_file over a list of files, except t... | sklearn.modules.generated.sklearn.datasets.load_svmlight_files#sklearn.datasets.load_svmlight_files |
sklearn.datasets.load_wine(*, return_X_y=False, as_frame=False) [source]
Load and return the wine dataset (classification). New in version 0.18. The wine dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class [59,71,48]
Samples total 178
Dimensionality 13
Featur... | sklearn.modules.generated.sklearn.datasets.load_wine#sklearn.datasets.load_wine |
sklearn.datasets.make_biclusters(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with constant block diagonal structure for biclustering. Read more in the User Guide. Parameters
shape : iterable of shape (n_rows, n_cols)
The shape of the result. ... | sklearn.modules.generated.sklearn.datasets.make_biclusters#sklearn.datasets.make_biclusters |
sklearn.datasets.make_blobs(n_samples=100, n_features=2, *, centers=None, cluster_std=1.0, center_box=(-10.0, 10.0), shuffle=True, random_state=None, return_centers=False) [source]
Generate isotropic Gaussian blobs for clustering. Read more in the User Guide. Parameters
n_samples : int or array-like, default=100
If... | sklearn.modules.generated.sklearn.datasets.make_blobs#sklearn.datasets.make_blobs |
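A minimal generation sketch (the sample count, center count, and spread are arbitrary):

```python
from sklearn.datasets import make_blobs

# Three isotropic Gaussian blobs in two dimensions.
X, y = make_blobs(n_samples=150, n_features=2, centers=3,
                  cluster_std=0.5, random_state=0)
```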
sklearn.datasets.make_checkerboard(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with block checkerboard structure for biclustering. Read more in the User Guide. Parameters
shape : tuple of shape (n_rows, n_cols)
The shape of the result.
n_clu... | sklearn.modules.generated.sklearn.datasets.make_checkerboard#sklearn.datasets.make_checkerboard |
sklearn.datasets.make_circles(n_samples=100, *, shuffle=True, noise=None, random_state=None, factor=0.8) [source]
Make a large circle containing a smaller circle in 2d. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samples : int or tuple of shap... | sklearn.modules.generated.sklearn.datasets.make_circles#sklearn.datasets.make_circles |
sklearn.datasets.make_classification(n_samples=100, n_features=20, *, n_informative=2, n_redundant=2, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None) [source]
Generate a random n-class classification pr... | sklearn.modules.generated.sklearn.datasets.make_classification#sklearn.datasets.make_classification |
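A minimal sketch of a binary problem with a mix of informative, redundant, and noise features:

```python
from sklearn.datasets import make_classification

# 5 features total: 2 informative, 1 redundant (a linear combination
# of the informative ones), and 2 pure noise.
X, y = make_classification(n_samples=200, n_features=5,
                           n_informative=2, n_redundant=1,
                           n_classes=2, random_state=0)
```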
sklearn.datasets.make_friedman1(n_samples=100, n_features=10, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0, 1]. The output y is created acco... | sklearn.modules.generated.sklearn.datasets.make_friedman1#sklearn.datasets.make_friedman1 |
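With `noise=0.0` the target follows the Friedman #1 formula exactly, which makes the generator easy to sanity-check:

```python
import numpy as np
from sklearn.datasets import make_friedman1

# y = 10*sin(pi*x0*x1) + 20*(x2 - 0.5)**2 + 10*x3 + 5*x4  (noise=0)
X, y = make_friedman1(n_samples=50, n_features=10, noise=0.0,
                      random_state=0)
expected = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
            + 20 * (X[:, 2] - 0.5) ** 2
            + 10 * X[:, 3] + 5 * X[:, 4])
```

Only the first five features enter the formula; the remaining `n_features - 5` columns are independent noise inputs.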
sklearn.datasets.make_friedman2(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #2” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 ... | sklearn.modules.generated.sklearn.datasets.make_friedman2#sklearn.datasets.make_friedman2 |
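As with Friedman #1, setting `noise=0.0` makes the target reproducible from the stated formula:

```python
import numpy as np
from sklearn.datasets import make_friedman2

# y = sqrt(x0**2 + (x1*x2 - 1/(x1*x3))**2)  (noise=0)
X, y = make_friedman2(n_samples=50, noise=0.0, random_state=0)
expected = np.sqrt(X[:, 0] ** 2
                   + (X[:, 1] * X[:, 2]
                      - 1 / (X[:, 1] * X[:, 3])) ** 2)
```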
sklearn.datasets.make_friedman3(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #3” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 ... | sklearn.modules.generated.sklearn.datasets.make_friedman3#sklearn.datasets.make_friedman3 |
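Friedman #3 uses the same four input features as Friedman #2 but an arctangent target, again exactly recoverable when `noise=0.0`:

```python
import numpy as np
from sklearn.datasets import make_friedman3

# y = arctan((x1*x2 - 1/(x1*x3)) / x0)  (noise=0)
X, y = make_friedman3(n_samples=50, noise=0.0, random_state=0)
expected = np.arctan((X[:, 1] * X[:, 2]
                      - 1 / (X[:, 1] * X[:, 3])) / X[:, 0])
```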
sklearn.datasets.make_gaussian_quantiles(*, mean=None, cov=1.0, n_samples=100, n_features=2, n_classes=3, shuffle=True, random_state=None) [source]
Generate isotropic Gaussian and label samples by quantile. This classification dataset is constructed by taking a multi-dimensional standard normal distribution and defin... | sklearn.modules.generated.sklearn.datasets.make_gaussian_quantiles#sklearn.datasets.make_gaussian_quantiles |
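A minimal sketch; because classes are cut at quantiles of the radial distance, each class gets an equal share of the samples when `n_samples` divides evenly:

```python
import numpy as np
from sklearn.datasets import make_gaussian_quantiles

# One 2-D Gaussian cloud split into 3 nested shells of 50 points each.
X, y = make_gaussian_quantiles(n_samples=150, n_features=2,
                               n_classes=3, random_state=0)
```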
sklearn.datasets.make_hastie_10_2(n_samples=12000, *, random_state=None) [source]
Generates data for binary classification used in Hastie et al. 2009, Example 10.2. The ten features are standard independent Gaussian and the target y is defined by: y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
Read more in the User Gu... | sklearn.modules.generated.sklearn.datasets.make_hastie_10_2#sklearn.datasets.make_hastie_10_2 |
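The decision rule quoted above can be verified directly on the generated data:

```python
import numpy as np
from sklearn.datasets import make_hastie_10_2

# Ten standard-normal features; y is +1 outside the sphere of squared
# radius 9.34 (roughly the chi-squared_10 median) and -1 inside it.
X, y = make_hastie_10_2(n_samples=1000, random_state=0)
```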
sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, *, effective_rank=10, tail_strength=0.5, random_state=None) [source]
Generate a mostly low rank matrix with bell-shaped singular values. Most of the variance can be explained by a bell-shaped curve of width effective_rank: the low rank part of the s... | sklearn.modules.generated.sklearn.datasets.make_low_rank_matrix#sklearn.datasets.make_low_rank_matrix |
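A sketch showing the intended spectral profile: with `tail_strength=0.0` the singular values follow the bell-shaped curve alone, so nearly all of the energy sits in the first ~`effective_rank` directions:

```python
import numpy as np
from sklearn.datasets import make_low_rank_matrix

X = make_low_rank_matrix(n_samples=100, n_features=50,
                         effective_rank=10, tail_strength=0.0,
                         random_state=0)
# Inspect the singular-value profile of the generated matrix.
s = np.linalg.svd(X, compute_uv=False)
```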
sklearn.datasets.make_moons(n_samples=100, *, shuffle=True, noise=None, random_state=None) [source]
Make two interleaving half circles. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (2,), dtype=int, default=100
... | sklearn.modules.generated.sklearn.datasets.make_moons#sklearn.datasets.make_moons |
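A minimal sketch of the classic two-moons toy set:

```python
from sklearn.datasets import make_moons

# Two interleaving half circles; a standard non-linearly-separable
# toy problem for visualizing classifiers and clustering.
X, y = make_moons(n_samples=100, noise=0.1, random_state=0)
```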
sklearn.datasets.make_multilabel_classification(n_samples=100, n_features=20, *, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None) [source]
Generate a random multilabel classification problem. For each sample, the generati... | sklearn.modules.generated.sklearn.datasets.make_multilabel_classification#sklearn.datasets.make_multilabel_classification |
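With the default `return_indicator='dense'`, the target is a binary indicator matrix with one column per class and, on average, `n_labels` active classes per sample:

```python
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=50, n_features=10,
                                      n_classes=5, n_labels=2,
                                      random_state=0)
```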
sklearn.datasets.make_regression(n_samples=100, n_features=100, *, n_informative=10, n_targets=1, bias=0.0, effective_rank=None, tail_strength=0.5, noise=0.0, shuffle=True, coef=False, random_state=None) [source]
Generate a random regression problem. The input set can either be well conditioned (by default) or have a... | sklearn.modules.generated.sklearn.datasets.make_regression#sklearn.datasets.make_regression |
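With `coef=True` the ground-truth coefficients are returned as a third value, and with `noise=0.0` the target is exactly the linear model `X @ coef + bias`:

```python
import numpy as np
from sklearn.datasets import make_regression

# 3 of the 5 features carry signal (nonzero ground-truth coefficients).
X, y, coef = make_regression(n_samples=100, n_features=5,
                             n_informative=3, noise=0.0, bias=2.0,
                             coef=True, random_state=0)
```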
sklearn.datasets.make_sparse_coded_signal(n_samples, *, n_components, n_features, n_nonzero_coefs, random_state=None) [source]
Generate a signal as a sparse combination of dictionary elements. Returns a matrix Y = DX, such that D is (n_features, n_components), X is (n_components, n_samples) and each column of X has exa... | sklearn.modules.generated.sklearn.datasets.make_sparse_coded_signal#sklearn.datasets.make_sparse_coded_signal
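A sketch of the sparse-coding generator. Note that the orientation of the returned arrays changed across scikit-learn versions, so the example only checks the one property that is version-independent: each signal combines exactly `n_nonzero_coefs` dictionary atoms:

```python
import numpy as np
from sklearn.datasets import make_sparse_coded_signal

# 20 signals of length 10, each built from exactly 4 of the 15 atoms.
data, dictionary, code = make_sparse_coded_signal(
    n_samples=20, n_components=15, n_features=10,
    n_nonzero_coefs=4, random_state=0)
```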
sklearn.datasets.make_sparse_spd_matrix(dim=1, *, alpha=0.95, norm_diag=False, smallest_coef=0.1, largest_coef=0.9, random_state=None) [source]
Generate a sparse symmetric definite positive matrix. Read more in the User Guide. Parameters
dimint, default=1
The size of the random matrix to generate.
alphafloat,... | sklearn.modules.generated.sklearn.datasets.make_sparse_spd_matrix#sklearn.datasets.make_sparse_spd_matrix |
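A sketch of the sparse SPD generator; higher `alpha` yields more zeros. The size argument is passed positionally here because its keyword name changed across scikit-learn versions (an assumption about version portability, not part of the documented signature above):

```python
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix

# A sparse symmetric positive-definite 10 x 10 matrix.
A = make_sparse_spd_matrix(10, alpha=0.95, random_state=0)
```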
sklearn.datasets.make_sparse_uncorrelated(n_samples=100, n_features=10, *, random_state=None) [source]
Generate a random regression problem with sparse uncorrelated design. This dataset is described in Celeux et al. [1] as: X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
Only the first 4 featu... | sklearn.modules.generated.sklearn.datasets.make_sparse_uncorrelated#sklearn.datasets.make_sparse_uncorrelated |
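A minimal sketch; only the first 4 of the standard-normal features enter the target formula above, the rest are uncorrelated noise:

```python
from sklearn.datasets import make_sparse_uncorrelated

X, y = make_sparse_uncorrelated(n_samples=100, n_features=10,
                                random_state=0)
```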
sklearn.datasets.make_spd_matrix(n_dim, *, random_state=None) [source]
Generate a random symmetric, positive-definite matrix. Read more in the User Guide. Parameters
n_dimint
The matrix dimension.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset cre... | sklearn.modules.generated.sklearn.datasets.make_spd_matrix#sklearn.datasets.make_spd_matrix |
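A minimal sketch, checking the two defining properties of the output (symmetry and positive definiteness):

```python
import numpy as np
from sklearn.datasets import make_spd_matrix

# A dense random symmetric positive-definite 5 x 5 matrix.
A = make_spd_matrix(5, random_state=0)
```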
sklearn.datasets.make_swiss_roll(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate a swiss roll dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the swiss roll.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
rando... | sklearn.modules.generated.sklearn.datasets.make_swiss_roll#sklearn.datasets.make_swiss_roll |
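A minimal sketch; besides the 3-D points, the generator returns the univariate position `t` of each sample along the main dimension of the roll, which manifold-learning examples use for coloring:

```python
from sklearn.datasets import make_swiss_roll

X, t = make_swiss_roll(n_samples=100, noise=0.0, random_state=0)
```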
sklearn.datasets.make_s_curve(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate an S curve dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the S curve.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_sta... | sklearn.modules.generated.sklearn.datasets.make_s_curve#sklearn.datasets.make_s_curve |
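The S-curve generator has the same interface as `make_swiss_roll`: 3-D points plus a univariate parameter `t` along the curve:

```python
from sklearn.datasets import make_s_curve

X, t = make_s_curve(n_samples=100, noise=0.0, random_state=0)
```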
class sklearn.decomposition.DictionaryLearning(n_components=None, *, alpha=1, max_iter=1000, tol=1e-08, fit_algorithm='lars', transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, n_jobs=None, code_init=None, dict_init=None, verbose=False, split_sign=False, random_state=None, positive_code=Fa... | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning |
sklearn.decomposition.DictionaryLearning
class sklearn.decomposition.DictionaryLearning(n_components=None, *, alpha=1, max_iter=1000, tol=1e-08, fit_algorithm='lars', transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, n_jobs=None, code_init=None, dict_init=None, verbose=False, split_sign... | sklearn.modules.generated.sklearn.decomposition.dictionarylearning |
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the object itself. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.fit |
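A small sketch of the fit/transform cycle on synthetic data (the data sizes and parameter values are illustrative, not prescribed by the docs): `fit` learns the atoms stored in `components_`, and `transform` then sparse-codes data against them using the configured transform algorithm:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 8)

# Learn a 5-atom dictionary; each transformed sample may use at most
# 2 atoms (transform_n_nonzero_coefs with the default OMP transform).
dico = DictionaryLearning(n_components=5, transform_n_nonzero_coefs=2,
                          max_iter=20, random_state=0)
code = dico.fit(X).transform(X)
```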
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outp... | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.set_params |
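The nested-parameter syntax described above can be sketched with a Pipeline: parameters of a contained step are addressed as `<component>__<parameter>`:

```python
from sklearn.decomposition import DictionaryLearning
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()),
                 ("dict", DictionaryLearning(n_components=5))])

# Update parameters of the nested "dict" step via set_params.
pipe.set_params(dict__n_components=3, dict__alpha=0.5)
```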