| doc_content | doc_id |
|---|---|
sklearn.cluster.cluster_optics_dbscan
sklearn.cluster.cluster_optics_dbscan(*, reachability, core_distances, ordering, eps) [source]
Performs DBSCAN extraction for an arbitrary epsilon. Extracting the clusters runs in linear time. Note that this results in labels_ which are close to a DBSCAN with similar settings a... | sklearn.modules.generated.sklearn.cluster.cluster_optics_dbscan |
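The entry above is truncated; as a minimal sketch (not part of the source table, toy data only), `cluster_optics_dbscan` is typically fed the attributes of a fitted `OPTICS` estimator, so one OPTICS run can be re-cut at any `eps`:

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

# Two well-separated Gaussian blobs (illustrative data)
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 6])

# Fit OPTICS once, then extract DBSCAN-style labels for a chosen eps
opt = OPTICS(min_samples=5).fit(X)
labels = cluster_optics_dbscan(
    reachability=opt.reachability_,
    core_distances=opt.core_distances_,
    ordering=opt.ordering_,
    eps=2.0,
)
```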
sklearn.cluster.cluster_optics_xi
sklearn.cluster.cluster_optics_xi(*, reachability, predecessor, ordering, min_samples, min_cluster_size=None, xi=0.05, predecessor_correction=True) [source]
Automatically extract clusters according to the Xi-steep method. Parameters
reachabilityndarray of shape (n_samples,)
R... | sklearn.modules.generated.sklearn.cluster.cluster_optics_xi |
sklearn.cluster.compute_optics_graph
sklearn.cluster.compute_optics_graph(X, *, min_samples, max_eps, metric, p, metric_params, algorithm, leaf_size, n_jobs) [source]
Computes the OPTICS reachability graph. Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features), or (n_samples, n_sampl... | sklearn.modules.generated.sklearn.cluster.compute_optics_graph |
sklearn.cluster.dbscan
sklearn.cluster.dbscan(X, eps=0.5, *, min_samples=5, metric='minkowski', metric_params=None, algorithm='auto', leaf_size=30, p=2, sample_weight=None, n_jobs=None) [source]
Perform DBSCAN clustering from vector array or distance matrix. Read more in the User Guide. Parameters
X{array-like,... | sklearn.modules.generated.dbscan-function |
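A minimal usage sketch for the functional `dbscan` interface (not from the source; toy data only). It returns the indices of core samples plus one label per sample, with `-1` marking noise:

```python
import numpy as np
from sklearn.cluster import dbscan

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 10])

# core_indices: rows of X that are core samples; labels: -1 means noise
core_indices, labels = dbscan(X, eps=1.5, min_samples=5)
```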
sklearn.cluster.estimate_bandwidth
sklearn.cluster.estimate_bandwidth(X, *, quantile=0.3, n_samples=None, random_state=0, n_jobs=None) [source]
Estimate the bandwidth to use with the mean-shift algorithm. That this function takes time at least quadratic in n_samples. For large datasets, it’s wise to set that parame... | sklearn.modules.generated.sklearn.cluster.estimate_bandwidth |
sklearn.cluster.kmeans_plusplus
sklearn.cluster.kmeans_plusplus(X, n_clusters, *, x_squared_norms=None, random_state=None, n_local_trials=None) [source]
Init n_clusters seeds according to k-means++ New in version 0.24. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to pick... | sklearn.modules.generated.sklearn.cluster.kmeans_plusplus |
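A short sketch of `kmeans_plusplus` (assumed toy data, not from the source): it returns the chosen seed points and their row indices in `X`, which can then be passed as `init=` to `KMeans`:

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 8])

# Pick 2 well-spread seeds; indices are their row positions in X
centers, indices = kmeans_plusplus(X, n_clusters=2, random_state=0)
```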
sklearn.cluster.k_means
sklearn.cluster.k_means(X, n_clusters, *, sample_weight=None, init='k-means++', precompute_distances='deprecated', n_init=10, max_iter=300, verbose=False, tol=0.0001, random_state=None, copy_x=True, n_jobs='deprecated', algorithm='auto', return_n_iter=False) [source]
K-means clustering algor... | sklearn.modules.generated.sklearn.cluster.k_means |
sklearn.cluster.mean_shift
sklearn.cluster.mean_shift(X, *, bandwidth=None, seeds=None, bin_seeding=False, min_bin_freq=1, cluster_all=True, max_iter=300, n_jobs=None) [source]
Perform mean shift clustering of data using a flat kernel. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_f... | sklearn.modules.generated.sklearn.cluster.mean_shift |
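The two entries above are commonly used together; a sketch under toy data (not from the source): `estimate_bandwidth` supplies the kernel width, and `mean_shift` returns the modes and a label per sample:

```python
import numpy as np
from sklearn.cluster import estimate_bandwidth, mean_shift

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2), rng.randn(30, 2) + 8])

# Bandwidth estimation is at least quadratic in n_samples
bandwidth = estimate_bandwidth(X, quantile=0.3)
centers, labels = mean_shift(X, bandwidth=bandwidth)
```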
sklearn.cluster.spectral_clustering
sklearn.cluster.spectral_clustering(affinity, *, n_clusters=8, n_components=None, eigen_solver=None, random_state=None, n_init=10, eigen_tol=0.0, assign_labels='kmeans', verbose=False) [source]
Apply clustering to a projection of the normalized Laplacian. In practice Spectral Clu... | sklearn.modules.generated.sklearn.cluster.spectral_clustering |
sklearn.cluster.ward_tree
sklearn.cluster.ward_tree(X, *, connectivity=None, n_clusters=None, return_distance=False) [source]
Ward clustering based on a Feature matrix. Recursively merges the pair of clusters that minimally increases within-cluster variance. The inertia matrix uses a Heapq-based representation. Thi... | sklearn.modules.generated.sklearn.cluster.ward_tree |
sklearn.compose.make_column_selector
sklearn.compose.make_column_selector(pattern=None, *, dtype_include=None, dtype_exclude=None) [source]
Create a callable to select columns to be used with ColumnTransformer. make_column_selector can select columns based on datatype or the columns name with a regex. When using mu... | sklearn.modules.generated.sklearn.compose.make_column_selector |
sklearn.compose.make_column_transformer
sklearn.compose.make_column_transformer(*transformers, remainder='drop', sparse_threshold=0.3, n_jobs=None, verbose=False) [source]
Construct a ColumnTransformer from the given transformers. This is a shorthand for the ColumnTransformer constructor; it does not require, and d... | sklearn.modules.generated.sklearn.compose.make_column_transformer |
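A sketch combining the two `sklearn.compose` helpers above (the DataFrame and transformer choices are illustrative assumptions): `make_column_selector` routes columns by dtype, and `make_column_transformer` builds the `ColumnTransformer` without explicit names:

```python
import pandas as pd
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Numeric columns -> scaling; string columns -> one-hot encoding
ct = make_column_transformer(
    (StandardScaler(), make_column_selector(dtype_include="number")),
    (OneHotEncoder(), make_column_selector(dtype_include=object)),
)

df = pd.DataFrame({"age": [20.0, 30.0, 40.0], "city": ["NY", "SF", "NY"]})
out = ct.fit_transform(df)  # 1 scaled column + 2 one-hot columns
```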
sklearn.config_context
sklearn.config_context(**new_config) [source]
Context manager for global scikit-learn configuration. Parameters
assume_finitebool, default=False
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be ... | sklearn.modules.generated.sklearn.config_context |
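A minimal sketch of `config_context` (not from the source entry): the override holds only inside the `with` block, after which the previous configuration is restored:

```python
import sklearn

# Temporarily skip finiteness validation inside the with-block only
with sklearn.config_context(assume_finite=True):
    inside = sklearn.get_config()["assume_finite"]
outside = sklearn.get_config()["assume_finite"]  # back to the default
```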
sklearn.covariance.empirical_covariance
sklearn.covariance.empirical_covariance(X, *, assume_centered=False) [source]
Computes the maximum likelihood covariance estimator. Parameters
Xndarray of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centeredbool, default=Fals... | sklearn.modules.generated.sklearn.covariance.empirical_covariance |
sklearn.covariance.graphical_lasso
sklearn.covariance.graphical_lasso(emp_cov, alpha, *, cov_init=None, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.220446049250313e-16, return_n_iter=False) [source]
L1-penalized covariance estimator. Read more in the User Guide. Ch... | sklearn.modules.generated.sklearn.covariance.graphical_lasso |
sklearn.covariance.ledoit_wolf
sklearn.covariance.ledoit_wolf(X, *, assume_centered=False, block_size=1000) [source]
Estimates the shrunk Ledoit-Wolf covariance matrix. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate
ass... | sklearn.modules.generated.sklearn.covariance.ledoit_wolf |
sklearn.covariance.oas
sklearn.covariance.oas(X, *, assume_centered=False) [source]
Estimate covariance with the Oracle Approximating Shrinkage algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
assume_centeredbool, default=False
If True,... | sklearn.modules.generated.oas-function |
sklearn.covariance.shrunk_covariance
sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1) [source]
Calculates a covariance matrix shrunk on the diagonal. Read more in the User Guide. Parameters
emp_covarray-like of shape (n_features, n_features)
Covariance matrix to be shrunk
shrinkagefloat, default=... | sklearn.modules.generated.sklearn.covariance.shrunk_covariance |
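A sketch verifying the shrinkage formula behind `shrunk_covariance` (toy data; the closed form is the documented convex combination with the scaled identity):

```python
import numpy as np
from sklearn.covariance import empirical_covariance, shrunk_covariance

rng = np.random.RandomState(0)
X = rng.randn(50, 3)

emp = empirical_covariance(X)
shrunk = shrunk_covariance(emp, shrinkage=0.1)

# (1 - shrinkage) * emp + shrinkage * (trace(emp) / n_features) * I
expected = 0.9 * emp + 0.1 * (np.trace(emp) / 3) * np.eye(3)
```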
sklearn.datasets.clear_data_home
sklearn.datasets.clear_data_home(data_home=None) [source]
Delete all the content of the data home cache. Parameters
data_homestr, default=None
The path to scikit-learn data directory. If None, the default path is ~/scikit_learn_data. | sklearn.modules.generated.sklearn.datasets.clear_data_home |
sklearn.datasets.dump_svmlight_file
sklearn.datasets.dump_svmlight_file(X, y, f, *, zero_based=True, comment=None, query_id=None, multilabel=False) [source]
Dump the dataset in svmlight / libsvm file format. This format is a text-based format, with one sample per line. It does not store zero valued features hence i... | sklearn.modules.generated.sklearn.datasets.dump_svmlight_file |
sklearn.datasets.fetch_20newsgroups
sklearn.datasets.fetch_20newsgroups(*, data_home=None, subset='train', categories=None, shuffle=True, random_state=42, remove=(), download_if_missing=True, return_X_y=False) [source]
Load the filenames and data from the 20 newsgroups dataset (classification). Download it if neces... | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups |
sklearn.datasets.fetch_20newsgroups_vectorized
sklearn.datasets.fetch_20newsgroups_vectorized(*, subset='train', remove=(), data_home=None, download_if_missing=True, return_X_y=False, normalize=True, as_frame=False) [source]
Load and vectorize the 20 newsgroups dataset (classification). Download it if necessary. Th... | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups_vectorized |
sklearn.datasets.fetch_california_housing
sklearn.datasets.fetch_california_housing(*, data_home=None, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the California housing dataset (regression).
Samples total 20640
Dimensionality 8
Features real
Target real 0.15 - 5. Read more i... | sklearn.modules.generated.sklearn.datasets.fetch_california_housing |
sklearn.datasets.fetch_covtype
sklearn.datasets.fetch_covtype(*, data_home=None, download_if_missing=True, random_state=None, shuffle=False, return_X_y=False, as_frame=False) [source]
Load the covertype dataset (classification). Download it if necessary.
Classes 7
Samples total 581012
Dimensionality 54
Feat... | sklearn.modules.generated.sklearn.datasets.fetch_covtype |
sklearn.datasets.fetch_kddcup99
sklearn.datasets.fetch_kddcup99(*, subset=None, data_home=None, shuffle=False, random_state=None, percent10=True, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the kddcup99 dataset (classification). Download it if necessary.
Classes 23
Samples total 48... | sklearn.modules.generated.sklearn.datasets.fetch_kddcup99 |
sklearn.datasets.fetch_lfw_pairs
sklearn.datasets.fetch_lfw_pairs(*, subset='train', data_home=None, funneled=True, resize=0.5, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True) [source]
Load the Labeled Faces in the Wild (LFW) pairs dataset (classification). Download it if n... | sklearn.modules.generated.sklearn.datasets.fetch_lfw_pairs |
sklearn.datasets.fetch_lfw_people
sklearn.datasets.fetch_lfw_people(*, data_home=None, funneled=True, resize=0.5, min_faces_per_person=0, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True, return_X_y=False) [source]
Load the Labeled Faces in the Wild (LFW) people dataset (clas... | sklearn.modules.generated.sklearn.datasets.fetch_lfw_people |
sklearn.datasets.fetch_olivetti_faces
sklearn.datasets.fetch_olivetti_faces(*, data_home=None, shuffle=False, random_state=0, download_if_missing=True, return_X_y=False) [source]
Load the Olivetti faces data-set from AT&T (classification). Download it if necessary.
Classes 40
Samples total 400
Dimensionality ... | sklearn.modules.generated.sklearn.datasets.fetch_olivetti_faces |
sklearn.datasets.fetch_openml
sklearn.datasets.fetch_openml(name: Optional[str] = None, *, version: Union[str, int] = 'active', data_id: Optional[int] = None, data_home: Optional[str] = None, target_column: Optional[Union[str, List]] = 'default-target', cache: bool = True, return_X_y: bool = False, as_frame: Union[st... | sklearn.modules.generated.sklearn.datasets.fetch_openml |
sklearn.datasets.fetch_rcv1
sklearn.datasets.fetch_rcv1(*, data_home=None, subset='all', download_if_missing=True, random_state=None, shuffle=False, return_X_y=False) [source]
Load the RCV1 multilabel dataset (classification). Download it if necessary. Version: RCV1-v2, vectors, full sets, topics multilabels.
Cla... | sklearn.modules.generated.sklearn.datasets.fetch_rcv1 |
sklearn.datasets.fetch_species_distributions
sklearn.datasets.fetch_species_distributions(*, data_home=None, download_if_missing=True) [source]
Loader for species distribution dataset from Phillips et al. (2006). Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and c... | sklearn.modules.generated.sklearn.datasets.fetch_species_distributions |
sklearn.datasets.get_data_home
sklearn.datasets.get_data_home(data_home=None) → str[source]
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user ho... | sklearn.modules.generated.sklearn.datasets.get_data_home |
sklearn.datasets.load_boston
sklearn.datasets.load_boston(*, return_X_y=False) [source]
Load and return the boston house-prices dataset (regression).
Samples total 506
Dimensionality 13
Features real, positive
Targets real 5. - 50. Read more in the User Guide. Parameters
return_X_ybool, default=False ... | sklearn.modules.generated.sklearn.datasets.load_boston |
sklearn.datasets.load_breast_cancer
sklearn.datasets.load_breast_cancer(*, return_X_y=False, as_frame=False) [source]
Load and return the breast cancer wisconsin dataset (classification). The breast cancer dataset is a classic and very easy binary classification dataset.
Classes 2
Samples per class 212(M),357(B... | sklearn.modules.generated.sklearn.datasets.load_breast_cancer |
sklearn.datasets.load_diabetes
sklearn.datasets.load_diabetes(*, return_X_y=False, as_frame=False) [source]
Load and return the diabetes dataset (regression).
Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346 Read more in the User Guide. Parameters
return_X_ybool... | sklearn.modules.generated.sklearn.datasets.load_diabetes |
sklearn.datasets.load_digits
sklearn.datasets.load_digits(*, n_class=10, return_X_y=False, as_frame=False) [source]
Load and return the digits dataset (classification). Each datapoint is an 8x8 image of a digit.
Classes 10
Samples per class ~180
Samples total 1797
Dimensionality 64
Features integers 0-16 ... | sklearn.modules.generated.sklearn.datasets.load_digits |
sklearn.datasets.load_files
sklearn.datasets.load_files(container_path, *, description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0) [source]
Load text files with categories as subfolder names. Individual samples are assumed to be files stored a two le... | sklearn.modules.generated.sklearn.datasets.load_files |
sklearn.datasets.load_iris
sklearn.datasets.load_iris(*, return_X_y=False, as_frame=False) [source]
Load and return the iris dataset (classification). The iris dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class 50
Samples total 150
Dimensionality 4
Features ... | sklearn.modules.generated.sklearn.datasets.load_iris |
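A one-liner sketch of the `return_X_y` convention used by the `load_*` entries above (shown here with `load_iris`): it bypasses the Bunch wrapper and returns the arrays directly:

```python
from sklearn.datasets import load_iris

# return_X_y=True returns (data, target) instead of a Bunch object
X, y = load_iris(return_X_y=True)
```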
sklearn.datasets.load_linnerud
sklearn.datasets.load_linnerud(*, return_X_y=False, as_frame=False) [source]
Load and return the physical exercise Linnerud dataset. This dataset is suitable for multi-output regression tasks.
Samples total 20
Dimensionality 3 (for both data and target)
Features integer
Target... | sklearn.modules.generated.sklearn.datasets.load_linnerud |
sklearn.datasets.load_sample_image
sklearn.datasets.load_sample_image(image_name) [source]
Load the numpy array of a single sample image Read more in the User Guide. Parameters
image_name{china.jpg, flower.jpg}
The name of the sample image loaded Returns
img3D array
The image as a numpy array: height ... | sklearn.modules.generated.sklearn.datasets.load_sample_image |
sklearn.datasets.load_sample_images
sklearn.datasets.load_sample_images() [source]
Load sample images for image manipulation. Loads both, china and flower. Read more in the User Guide. Returns
dataBunch
Dictionary-like object, with the following attributes.
imageslist of ndarray of shape (427, 640, 3)
The... | sklearn.modules.generated.sklearn.datasets.load_sample_images |
sklearn.datasets.load_svmlight_file
sklearn.datasets.load_svmlight_file(f, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load datasets in the svmlight / libsvm format into sparse CSR matrix. This format is a text-based format, wi... | sklearn.modules.generated.sklearn.datasets.load_svmlight_file |
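A round-trip sketch tying `dump_svmlight_file` and `load_svmlight_file` together (illustrative toy arrays; the loader returns a sparse CSR matrix):

```python
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = np.array([[1.0, 0.0, 3.0], [0.0, 2.0, 0.0]])
y = np.array([0.0, 1.0])

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.svmlight")
    dump_svmlight_file(X, y, path, zero_based=True)
    X2, y2 = load_svmlight_file(path)  # X2 is a sparse CSR matrix

round_trip_ok = np.allclose(X2.toarray(), X) and np.allclose(y2, y)
```

Note that zero-valued features are not stored, so the file is compact but `n_features` may need to be passed explicitly if trailing columns are all zero.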
sklearn.datasets.load_svmlight_files
sklearn.datasets.load_svmlight_files(files, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load dataset from multiple files in SVMlight format This function is equivalent to mapping load_svmli... | sklearn.modules.generated.sklearn.datasets.load_svmlight_files |
sklearn.datasets.load_wine
sklearn.datasets.load_wine(*, return_X_y=False, as_frame=False) [source]
Load and return the wine dataset (classification). New in version 0.18. The wine dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class [59,71,48]
Samples total 178... | sklearn.modules.generated.sklearn.datasets.load_wine |
sklearn.datasets.make_biclusters
sklearn.datasets.make_biclusters(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with constant block diagonal structure for biclustering. Read more in the User Guide. Parameters
shapeiterable of shape (n_rows, n... | sklearn.modules.generated.sklearn.datasets.make_biclusters |
sklearn.datasets.make_blobs
sklearn.datasets.make_blobs(n_samples=100, n_features=2, *, centers=None, cluster_std=1.0, center_box=(-10.0, 10.0), shuffle=True, random_state=None, return_centers=False) [source]
Generate isotropic Gaussian blobs for clustering. Read more in the User Guide. Parameters
n_samplesint o... | sklearn.modules.generated.sklearn.datasets.make_blobs |
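A minimal sketch of `make_blobs` (parameter values are illustrative):

```python
from sklearn.datasets import make_blobs

# 3 isotropic Gaussian clusters in 2 dimensions, with per-sample labels
X, y = make_blobs(n_samples=100, n_features=2, centers=3, random_state=0)
```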
sklearn.datasets.make_checkerboard
sklearn.datasets.make_checkerboard(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with block checkerboard structure for biclustering. Read more in the User Guide. Parameters
shapetuple of shape (n_rows, n_col... | sklearn.modules.generated.sklearn.datasets.make_checkerboard |
sklearn.datasets.make_circles
sklearn.datasets.make_circles(n_samples=100, *, shuffle=True, noise=None, random_state=None, factor=0.8) [source]
Make a large circle containing a smaller circle in 2d. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters ... | sklearn.modules.generated.sklearn.datasets.make_circles |
sklearn.datasets.make_classification
sklearn.datasets.make_classification(n_samples=100, n_features=20, *, n_informative=2, n_redundant=2, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None) [source]
Gene... | sklearn.modules.generated.sklearn.datasets.make_classification |
sklearn.datasets.make_friedman1
sklearn.datasets.make_friedman1(n_samples=100, n_features=10, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0... | sklearn.modules.generated.sklearn.datasets.make_friedman1 |
sklearn.datasets.make_friedman2
sklearn.datasets.make_friedman2(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #2” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] ... | sklearn.modules.generated.sklearn.datasets.make_friedman2 |
sklearn.datasets.make_friedman3
sklearn.datasets.make_friedman3(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #3” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] ... | sklearn.modules.generated.sklearn.datasets.make_friedman3 |
sklearn.datasets.make_gaussian_quantiles
sklearn.datasets.make_gaussian_quantiles(*, mean=None, cov=1.0, n_samples=100, n_features=2, n_classes=3, shuffle=True, random_state=None) [source]
Generate isotropic Gaussian and label samples by quantile. This classification dataset is constructed by taking a multi-dimensi... | sklearn.modules.generated.sklearn.datasets.make_gaussian_quantiles |
sklearn.datasets.make_hastie_10_2
sklearn.datasets.make_hastie_10_2(n_samples=12000, *, random_state=None) [source]
Generates data for binary classification used in Hastie et al. 2009, Example 10.2. The ten features are standard independent Gaussian and the target y is defined by: y[i] = 1 if np.sum(X[i] ** 2) > 9.... | sklearn.modules.generated.sklearn.datasets.make_hastie_10_2 |
sklearn.datasets.make_low_rank_matrix
sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, *, effective_rank=10, tail_strength=0.5, random_state=None) [source]
Generate a mostly low rank matrix with bell-shaped singular values. Most of the variance can be explained by a bell-shaped curve of width ef... | sklearn.modules.generated.sklearn.datasets.make_low_rank_matrix |
sklearn.datasets.make_moons
sklearn.datasets.make_moons(n_samples=100, *, shuffle=True, noise=None, random_state=None) [source]
Make two interleaving half circles. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (... | sklearn.modules.generated.sklearn.datasets.make_moons |
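A minimal sketch of `make_moons` (noise level chosen for illustration):

```python
from sklearn.datasets import make_moons

# Two interleaving half circles; noise adds Gaussian jitter to each point
X, y = make_moons(n_samples=100, noise=0.05, random_state=0)
```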
sklearn.datasets.make_multilabel_classification
sklearn.datasets.make_multilabel_classification(n_samples=100, n_features=20, *, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None) [source]
Generate a random multilabel clas... | sklearn.modules.generated.sklearn.datasets.make_multilabel_classification |
sklearn.datasets.make_regression
sklearn.datasets.make_regression(n_samples=100, n_features=100, *, n_informative=10, n_targets=1, bias=0.0, effective_rank=None, tail_strength=0.5, noise=0.0, shuffle=True, coef=False, random_state=None) [source]
Generate a random regression problem. The input set can either be well... | sklearn.modules.generated.sklearn.datasets.make_regression |
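A sketch of `make_regression` with `coef=True` (toy sizes): the ground-truth coefficients come back as a third return value, and with the default `noise=0.0` and `bias=0.0` the target is the exact linear combination:

```python
import numpy as np
from sklearn.datasets import make_regression

# coef=True also returns the ground-truth coefficient vector
X, y, coef = make_regression(n_samples=100, n_features=5, n_informative=2,
                             coef=True, random_state=0)
# With the default noise=0.0 and bias=0.0, y equals X @ coef
```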
sklearn.datasets.make_sparse_coded_signal
sklearn.datasets.make_sparse_coded_signal(n_samples, *, n_components, n_features, n_nonzero_coefs, random_state=None) [source]
Generate a signal as a sparse combination of dictionary elements. Returns a matrix Y = DX, such as D is (n_features, n_components), X is (n_compone... | sklearn.modules.generated.sklearn.datasets.make_sparse_coded_signal |
sklearn.datasets.make_sparse_spd_matrix
sklearn.datasets.make_sparse_spd_matrix(dim=1, *, alpha=0.95, norm_diag=False, smallest_coef=0.1, largest_coef=0.9, random_state=None) [source]
Generate a sparse symmetric definite positive matrix. Read more in the User Guide. Parameters
dimint, default=1
The size of th... | sklearn.modules.generated.sklearn.datasets.make_sparse_spd_matrix |
sklearn.datasets.make_sparse_uncorrelated
sklearn.datasets.make_sparse_uncorrelated(n_samples=100, n_features=10, *, random_state=None) [source]
Generate a random regression problem with sparse uncorrelated design. This dataset is described in Celeux et al [1]. as: X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:... | sklearn.modules.generated.sklearn.datasets.make_sparse_uncorrelated |
sklearn.datasets.make_spd_matrix
sklearn.datasets.make_spd_matrix(n_dim, *, random_state=None) [source]
Generate a random symmetric, positive-definite matrix. Read more in the User Guide. Parameters
n_dimint
The matrix dimension.
random_stateint, RandomState instance or None, default=None
Determines rando... | sklearn.modules.generated.sklearn.datasets.make_spd_matrix |
sklearn.datasets.make_swiss_roll
sklearn.datasets.make_swiss_roll(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate a swiss roll dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the S curve.
noisefloat, default=0.0
The standard devia... | sklearn.modules.generated.sklearn.datasets.make_swiss_roll |
sklearn.datasets.make_s_curve
sklearn.datasets.make_s_curve(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate an S curve dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the S curve.
noisefloat, default=0.0
The standard deviation of ... | sklearn.modules.generated.sklearn.datasets.make_s_curve |
sklearn.decomposition.dict_learning
sklearn.decomposition.dict_learning(X, n_components, *, alpha, max_iter=100, tol=1e-08, method='lars', n_jobs=None, dict_init=None, code_init=None, callback=None, verbose=False, random_state=None, return_n_iter=False, positive_dict=False, positive_code=False, method_max_iter=1000) ... | sklearn.modules.generated.sklearn.decomposition.dict_learning |
sklearn.decomposition.dict_learning_online
sklearn.decomposition.dict_learning_online(X, n_components=2, *, alpha=1, n_iter=100, return_code=True, dict_init=None, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=None, method='lars', iter_offset=0, random_state=None, return_inner_stats=False, inner_sta... | sklearn.modules.generated.sklearn.decomposition.dict_learning_online |
sklearn.decomposition.fastica
sklearn.decomposition.fastica(X, n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False) [source]
Perform Fast Independent Component ... | sklearn.modules.generated.fastica-function |
sklearn.decomposition.non_negative_factorization
sklearn.decomposition.non_negative_factorization(X, W=None, H=None, n_components=None, *, init='warn', update_H=True, solver='cd', beta_loss='frobenius', tol=0.0001, max_iter=200, alpha=0.0, l1_ratio=0.0, regularization=None, random_state=None, verbose=0, shuffle=False... | sklearn.modules.generated.sklearn.decomposition.non_negative_factorization |
sklearn.decomposition.sparse_encode
sklearn.decomposition.sparse_encode(X, dictionary, *, gram=None, cov=None, algorithm='lasso_lars', n_nonzero_coefs=None, alpha=None, copy_cov=True, init=None, max_iter=1000, n_jobs=None, check_input=True, verbose=0, positive=False) [source]
Sparse coding Each row of the result is... | sklearn.modules.generated.sklearn.decomposition.sparse_encode |
sklearn.feature_extraction.image.extract_patches_2d
sklearn.feature_extraction.image.extract_patches_2d(image, patch_size, *, max_patches=None, random_state=None) [source]
Reshape a 2D image into a collection of patches The resulting patches are allocated in a dedicated array. Read more in the User Guide. Paramete... | sklearn.modules.generated.sklearn.feature_extraction.image.extract_patches_2d |
sklearn.feature_extraction.image.grid_to_graph
sklearn.feature_extraction.image.grid_to_graph(n_x, n_y, n_z=1, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=<class 'int'>) [source]
Graph of the pixel-to-pixel connections Edges exist if 2 voxels are connected. Parameters
n_xint
Dimension... | sklearn.modules.generated.sklearn.feature_extraction.image.grid_to_graph |
sklearn.feature_extraction.image.img_to_graph
sklearn.feature_extraction.image.img_to_graph(img, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=None) [source]
Graph of the pixel-to-pixel gradient connections Edges are weighted with the gradient values. Read more in the User Guide. Parameters ... | sklearn.modules.generated.sklearn.feature_extraction.image.img_to_graph |
sklearn.feature_extraction.image.reconstruct_from_patches_2d
sklearn.feature_extraction.image.reconstruct_from_patches_2d(patches, image_size) [source]
Reconstruct the image from all of its patches. Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bott... | sklearn.modules.generated.sklearn.feature_extraction.image.reconstruct_from_patches_2d |
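A round-trip sketch for the two image-patch helpers above (tiny synthetic image): extraction yields every overlapping patch, and reconstruction averages overlaps, which recovers the original exactly when the patches are unmodified:

```python
import numpy as np
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

image = np.arange(16, dtype=float).reshape(4, 4)

# All (4 - 2 + 1)**2 == 9 overlapping 2x2 patches
patches = extract_patches_2d(image, (2, 2))

# Averaging the unmodified overlapping patches recovers the original
rebuilt = reconstruct_from_patches_2d(patches, (4, 4))
```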
sklearn.feature_selection.chi2
sklearn.feature_selection.chi2(X, y) [source]
Compute chi-squared stats between each non-negative feature and class. This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative feature... | sklearn.modules.generated.sklearn.feature_selection.chi2 |
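A sketch of `chi2` on a built-in non-negative dataset (the digits pixel intensities), plus the typical pairing with `SelectKBest`:

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_digits(return_X_y=True)

# One chi-squared statistic and p-value per (non-negative) feature
scores, pvalues = chi2(X, y)

# Typical use: keep the k highest-scoring features
X_new = SelectKBest(chi2, k=20).fit_transform(X, y)
```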
sklearn.feature_selection.f_classif
sklearn.feature_selection.f_classif(X, y) [source]
Compute the ANOVA F-value for the provided sample. Read more in the User Guide. Parameters
X{array-like, sparse matrix} shape = [n_samples, n_features]
The set of regressors that will be tested sequentially.
yarray of sha... | sklearn.modules.generated.sklearn.feature_selection.f_classif |
sklearn.feature_selection.f_regression
sklearn.feature_selection.f_regression(X, y, *, center=True) [source]
Univariate linear regression tests. Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free standing feat... | sklearn.modules.generated.sklearn.feature_selection.f_regression |
sklearn.feature_selection.mutual_info_classif
sklearn.feature_selection.mutual_info_classif(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]
Estimate mutual information for a discrete target variable. Mutual information (MI) [1] between two random variables is a non-negative ... | sklearn.modules.generated.sklearn.feature_selection.mutual_info_classif |
sklearn.feature_selection.mutual_info_regression
sklearn.feature_selection.mutual_info_regression(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]
Estimate mutual information for a continuous target variable. Mutual information (MI) [1] between two random variables is a non-n... | sklearn.modules.generated.sklearn.feature_selection.mutual_info_regression |
sklearn.get_config
sklearn.get_config() [source]
Retrieve current values for configuration set by set_config Returns
configdict
Keys are parameter names that can be passed to set_config. See also
config_context
Context manager for global scikit-learn configuration.
set_config
Set global scikit-le... | sklearn.modules.generated.sklearn.get_config |
sklearn.inspection.partial_dependence
sklearn.inspection.partial_dependence(estimator, X, features, *, response_method='auto', percentiles=(0.05, 0.95), grid_resolution=100, method='auto', kind='legacy') [source]
Partial dependence of features. Partial dependence of a feature (or a set of features) corresponds to the... | sklearn.modules.generated.sklearn.inspection.partial_dependence |
sklearn.inspection.permutation_importance
sklearn.inspection.permutation_importance(estimator, X, y, *, scoring=None, n_repeats=5, n_jobs=None, random_state=None, sample_weight=None) [source]
Permutation importance for feature evaluation [BRE]. The estimator is required to be a fitted estimator. X can be the data s... | sklearn.modules.generated.sklearn.inspection.permutation_importance |
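A sketch of `permutation_importance` on a fitted estimator (model and dataset chosen for illustration): it measures the score drop when each feature is shuffled, averaged over `n_repeats`:

```python
from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)  # must be fitted before calling

# importances has shape (n_features, n_repeats)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
```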
sklearn.inspection.plot_partial_dependence
sklearn.inspection.plot_partial_dependence(estimator, X, features, *, feature_names=None, target=None, response_method='auto', n_cols=3, grid_resolution=100, percentiles=(0.05, 0.95), method='auto', n_jobs=None, verbose=0, line_kw=None, contour_kw=None, ax=None, kind='average'... | sklearn.modules.generated.sklearn.inspection.plot_partial_dependence |
sklearn.isotonic.check_increasing
sklearn.isotonic.check_increasing(x, y) [source]
Determine whether y is monotonically correlated with x. y is found increasing or decreasing with respect to x based on a Spearman correlation test. Parameters
xarray-like of shape (n_samples,)
Training data.
yarray-like of sh... | sklearn.modules.generated.sklearn.isotonic.check_increasing |
sklearn.isotonic.isotonic_regression
sklearn.isotonic.isotonic_regression(y, *, sample_weight=None, y_min=None, y_max=None, increasing=True) [source]
Solve the isotonic regression model. Read more in the User Guide. Parameters
yarray-like of shape (n_samples,)
The data.
sample_weightarray-like of shape (n_s... | sklearn.modules.generated.sklearn.isotonic.isotonic_regression |
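A worked sketch of `isotonic_regression` on a tiny hand-checkable input (pool-adjacent-violators): violating neighbors are pooled into their mean, yielding the nearest non-decreasing fit in least squares:

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

y = np.array([3.0, 1.0, 2.0, 5.0, 4.0])
# Pooling (3, 1, 2) -> 2 and (5, 4) -> 4.5 gives [2, 2, 2, 4.5, 4.5]
y_iso = isotonic_regression(y)
```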
sklearn.linear_model.enet_path
sklearn.linear_model.enet_path(X, y, *, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, check_input=True, **params) [source]
Compute elastic net path with coordinate descen... | sklearn.modules.generated.sklearn.linear_model.enet_path |
sklearn.linear_model.lars_path
sklearn.linear_model.lars_path(X, y, Xy=None, *, Gram=None, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
Compute Least Angle Regression or Lasso path using LA... | sklearn.modules.generated.sklearn.linear_model.lars_path |
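A hedged sketch of the path computation: with a target that depends only on column 0, that column should be the first (and here, only) feature to enter the active set, and the end of the path recovers the least-squares solution:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([5.0, 0.0, 0.0])  # noiseless, only column 0 matters

alphas, active, coefs = lars_path(X, y, method="lasso")
# active lists features in order of entry; coefs is (n_features, n_steps)
```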
sklearn.linear_model.lars_path_gram
sklearn.linear_model.lars_path_gram(Xy, Gram, *, n_samples, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
lars_path in the sufficient stats mode [1] The o... | sklearn.modules.generated.sklearn.linear_model.lars_path_gram |
sklearn.linear_model.lasso_path
sklearn.linear_model.lasso_path(X, y, *, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, **params) [source]
Compute Lasso path with coordinate descent. The Lasso optimization function va...
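A small sketch of the returned path, assuming the default `eps=1e-3` grid: alphas come back in decreasing order, and at the smallest alpha the single true coefficient is nearly recovered (synthetic data, for illustration):

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = X @ np.array([4.0, 0.0, 0.0, 0.0])

# alphas are decreasing; coefs has shape (n_features, n_alphas)
alphas, coefs, dual_gaps = lasso_path(X, y, n_alphas=20)
```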
sklearn.linear_model.orthogonal_mp
sklearn.linear_model.orthogonal_mp(X, y, *, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False) [source]
Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the ... | sklearn.modules.generated.sklearn.linear_model.orthogonal_mp |
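A minimal sketch under the usual OMP setup of a dictionary with unit-norm atoms: when the signal is an exact multiple of one atom, a single greedy step selects it and least-squares recovers its coefficient (the dictionary here is random, for illustration):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Dictionary with unit-norm columns; y is 2.5 times atom 3
rng = np.random.RandomState(0)
X = rng.randn(30, 10)
X /= np.linalg.norm(X, axis=0)
y = 2.5 * X[:, 3]

coef = orthogonal_mp(X, y, n_nonzero_coefs=1)
```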
sklearn.linear_model.orthogonal_mp_gram
sklearn.linear_model.orthogonal_mp_gram(Gram, Xy, *, n_nonzero_coefs=None, tol=None, norms_squared=None, copy_Gram=True, copy_Xy=True, return_path=False, return_n_iter=False) [source]
Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problem... | sklearn.modules.generated.sklearn.linear_model.orthogonal_mp_gram |
sklearn.linear_model.PassiveAggressiveRegressor
sklearn.linear_model.PassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=Fal... | sklearn.modules.generated.sklearn.linear_model.passiveaggressiveregressor |
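A hedged usage sketch: on a noiseless linear target the passive-aggressive updates (with the default epsilon-insensitive loss) should drive residuals below `epsilon`, giving a high R² (the data is synthetic):

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = 4 * X[:, 0] + 1.0  # noiseless linear target

reg = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3,
                                 random_state=0).fit(X, y)
r2 = reg.score(X, y)
```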
sklearn.linear_model.ridge_regression
sklearn.linear_model.ridge_regression(X, y, alpha, *, sample_weight=None, solver='auto', max_iter=None, tol=0.001, verbose=0, random_state=None, return_n_iter=False, return_intercept=False, check_input=True) [source]
Solve the ridge equation by the method of normal equations. R... | sklearn.modules.generated.sklearn.linear_model.ridge_regression |
sklearn.manifold.locally_linear_embedding
sklearn.manifold.locally_linear_embedding(X, *, n_neighbors, n_components, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, random_state=None, n_jobs=None) [source]
Perform a Locally Linear Embedding analysi... | sklearn.modules.generated.sklearn.manifold.locally_linear_embedding |
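A minimal call sketch (note that `n_neighbors` and `n_components` are keyword-only per the signature above); the function returns the embedded coordinates and a reconstruction error:

```python
import numpy as np
from sklearn.manifold import locally_linear_embedding

rng = np.random.RandomState(0)
X = rng.rand(40, 3)

# Embed 3-D points into 2-D; err is the reconstruction error
Y, err = locally_linear_embedding(X, n_neighbors=8, n_components=2,
                                  random_state=0)
```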
sklearn.manifold.smacof
sklearn.manifold.smacof(dissimilarities, *, metric=True, n_components=2, init=None, n_init=8, n_jobs=None, max_iter=300, verbose=0, eps=0.001, random_state=None, return_n_iter=False) [source]
Computes multidimensional scaling using the SMACOF algorithm. The SMACOF (Scaling by MAjorizing a CO... | sklearn.modules.generated.sklearn.manifold.smacof |
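A hedged sketch of metric MDS via SMACOF on a hand-built symmetric dissimilarity matrix (four points spaced like a line); the function returns the fitted coordinates and the final stress:

```python
import numpy as np
from sklearn.manifold import smacof

# Symmetric dissimilarities for four collinear points
D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [3.0, 2.0, 1.0, 0.0]])

coords, stress = smacof(D, n_components=2, random_state=0)
```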
sklearn.manifold.spectral_embedding
sklearn.manifold.spectral_embedding(adjacency, *, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source]
Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a nor... | sklearn.modules.generated.sklearn.manifold.spectral_embedding |
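A minimal sketch: any symmetric affinity matrix can serve as the adjacency input; here an RBF kernel matrix (an illustrative choice) over random points is embedded into two components:

```python
import numpy as np
from sklearn.manifold import spectral_embedding
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(30, 3)
affinity = rbf_kernel(X)  # dense, symmetric, fully connected

embedding = spectral_embedding(affinity, n_components=2, random_state=0)
```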
sklearn.manifold.trustworthiness
sklearn.manifold.trustworthiness(X, X_embedded, *, n_neighbors=5, metric='euclidean') [source]
Expresses to what extent the local structure is retained. The trustworthiness is within [0, 1]. It is defined as \[T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1} \sum_{j \in \mathcal{... | sklearn.modules.generated.sklearn.manifold.trustworthiness |
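A quick sanity-check sketch of the definition: an "embedding" identical to the input preserves every k-nearest-neighbor set, so the trustworthiness is 1.0:

```python
import numpy as np
from sklearn.manifold import trustworthiness

rng = np.random.RandomState(0)
X = rng.rand(20, 5)

# Identity embedding: no neighborhood inversions, so T(k) = 1
t = trustworthiness(X, X, n_neighbors=5)
```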
sklearn.metrics.accuracy_score
sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None) [source]
Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labe... | sklearn.modules.generated.sklearn.metrics.accuracy_score |
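A tiny example of both return modes: the default fraction of correct predictions, and the raw count with `normalize=False`:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)                         # fraction correct
n_correct = accuracy_score(y_true, y_pred, normalize=False)  # raw count
```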
sklearn.metrics.adjusted_mutual_info_score
sklearn.metrics.adjusted_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source]
Adjusted Mutual Information between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information (MI) score to account for chance.... | sklearn.modules.generated.sklearn.metrics.adjusted_mutual_info_score |
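A small sketch of the key property: AMI is invariant to label permutation, so two identical partitions with swapped label names score 1.0:

```python
from sklearn.metrics import adjusted_mutual_info_score

# Same partition, labels swapped: a perfect score
ami = adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])
```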
sklearn.metrics.adjusted_rand_score
sklearn.metrics.adjusted_rand_score(labels_true, labels_pred) [source]
Rand index adjusted for chance. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters i... | sklearn.modules.generated.sklearn.metrics.adjusted_rand_score |
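Like AMI above, the adjusted Rand index depends only on the partition, not the label names; perfect agreement up to relabeling scores 1.0:

```python
from sklearn.metrics import adjusted_rand_score

ari = adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])  # identical partitions
```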
sklearn.metrics.auc
sklearn.metrics.auc(x, y) [source]
Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC-curve, see roc_auc_score. For an alternative way to summarize a precision-recall curve, see average_precisio... | sklearn.modules.generated.sklearn.metrics.auc |
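A one-line worked example of the trapezoidal rule: the area under y = x on [0, 1] is 1/2:

```python
from sklearn.metrics import auc

area = auc([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])  # triangle under y = x
```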