| doc_content | doc_id |
|---|---|
sklearn.preprocessing.maxabs_scale
sklearn.preprocessing.maxabs_scale(X, *, axis=0, copy=True) [source]
Scale each feature to the [-1, 1] range without breaking the sparsity. This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. This sc... | sklearn.modules.generated.sklearn.preprocessing.maxabs_scale |
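As a minimal sketch of the behavior described above (the data values are made up for the example), each column is divided by its maximum absolute value:

```python
import numpy as np
from sklearn.preprocessing import maxabs_scale

# Each column is divided by its maximum absolute value, so the
# largest-magnitude entry per feature becomes +/-1 and zeros stay zero.
X = np.array([[1.0, -2.0],
              [2.0,  4.0]])
X_scaled = maxabs_scale(X)
# Column maxima are 2.0 and 4.0 -> [[0.5, -0.5], [1.0, 1.0]]
print(X_scaled)
```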
sklearn.preprocessing.minmax_scale
sklearn.preprocessing.minmax_scale(X, feature_range=(0, 1), *, axis=0, copy=True) [source]
Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, i.e. between... | sklearn.modules.generated.sklearn.preprocessing.minmax_scale |
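A quick sketch on a toy column, showing the default (0, 1) range and an explicit feature_range:

```python
import numpy as np
from sklearn.preprocessing import minmax_scale

X = np.array([[1.0], [2.0], [3.0]])
# Default feature_range is (0, 1).
X01 = minmax_scale(X)
# An explicit range shifts and stretches the result accordingly.
X11 = minmax_scale(X, feature_range=(-1, 1))
print(X01.ravel())  # [0.  0.5 1. ]
print(X11.ravel())  # [-1.  0.  1.]
```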
sklearn.preprocessing.normalize
sklearn.preprocessing.normalize(X, norm='l2', *, axis=1, copy=True, return_norm=False) [source]
Scale input vectors individually to unit norm (vector length). Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to norma... | sklearn.modules.generated.sklearn.preprocessing.normalize |
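A small illustration of row-wise normalization (the default axis=1 treats each row as a sample):

```python
import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[3.0, 4.0],
              [1.0, 0.0]])
# axis=1 (the default) divides each row by its L2 norm,
# so every sample ends up with unit length.
X_unit = normalize(X, norm='l2')
print(X_unit)  # [[0.6 0.8] [1.  0. ]]
```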
sklearn.preprocessing.power_transform
sklearn.preprocessing.power_transform(X, method='yeo-johnson', *, standardize=True, copy=True) [source]
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteros... | sklearn.modules.generated.sklearn.preprocessing.power_transform |
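A sketch on synthetic skewed data (the log-normal sample is made up for the example): Yeo-Johnson pulls the distribution toward a Gaussian shape, and standardize=True additionally centers and rescales it.

```python
import numpy as np
from sklearn.preprocessing import power_transform

rng = np.random.RandomState(0)
# Log-normal data is strongly right-skewed.
X = rng.lognormal(size=(200, 1))
X_t = power_transform(X, method='yeo-johnson')
# With standardize=True (the default), the output has
# zero mean and unit variance.
print(X_t.mean(), X_t.std())
```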
sklearn.preprocessing.quantile_transform
sklearn.preprocessing.quantile_transform(X, *, axis=0, n_quantiles=1000, output_distribution='uniform', ignore_implicit_zeros=False, subsample=100000, random_state=None, copy=True) [source]
Transform features using quantiles information. This method transforms the features t... | sklearn.modules.generated.sklearn.preprocessing.quantile_transform |
sklearn.preprocessing.robust_scale
sklearn.preprocessing.robust_scale(X, *, axis=0, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]
Standardize a dataset along any axis. Center to the median and component-wise scale according to the interquartile range. Rea... | sklearn.modules.generated.sklearn.preprocessing.robust_scale |
sklearn.preprocessing.scale
sklearn.preprocessing.scale(X, *, axis=0, with_mean=True, with_std=True, copy=True) [source]
Standardize a dataset along any axis. Center to the mean and component wise scale to unit variance. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n... | sklearn.modules.generated.sklearn.preprocessing.scale |
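A minimal sketch of column-wise standardization on toy data:

```python
import numpy as np
from sklearn.preprocessing import scale

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
# Per column: subtract the mean, divide by the (population) std,
# so each feature ends up with mean 0 and variance 1.
X_std = scale(X)
print(X_std.mean(axis=0), X_std.std(axis=0))
```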
sklearn.random_projection.johnson_lindenstrauss_min_dim
sklearn.random_projection.johnson_lindenstrauss_min_dim(n_samples, *, eps=0.1) [source]
Find a ‘safe’ number of components to randomly project to. The distortion introduced by a random projection p only changes the distance between two points by a factor (1 +-... | sklearn.modules.generated.sklearn.random_projection.johnson_lindenstrauss_min_dim |
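A quick sketch of how the bound reacts to the distortion tolerance eps (sample counts chosen arbitrarily for the example):

```python
from sklearn.random_projection import johnson_lindenstrauss_min_dim

# Conservative number of components needed so a random projection
# distorts pairwise distances by at most a factor (1 +/- eps).
n_tight = johnson_lindenstrauss_min_dim(n_samples=1000, eps=0.1)
# A looser eps needs far fewer components.
n_loose = johnson_lindenstrauss_min_dim(n_samples=1000, eps=0.5)
print(n_tight, n_loose)
```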
sklearn.set_config
sklearn.set_config(assume_finite=None, working_memory=None, print_changed_only=None, display=None) [source]
Set global scikit-learn configuration. New in version 0.19. Parameters
assume_finitebool, default=None
If True, validation for finiteness will be skipped, saving time, but leading to... | sklearn.modules.generated.sklearn.set_config |
sklearn.show_versions
sklearn.show_versions() [source]
Print useful debugging information. New in version 0.20. | sklearn.modules.generated.sklearn.show_versions |
sklearn.svm.l1_min_c
sklearn.svm.l1_min_c(X, y, *, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0) [source]
Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty=’l1’ an... | sklearn.modules.generated.sklearn.svm.l1_min_c |
sklearn.tree.export_graphviz
sklearn.tree.export_graphviz(decision_tree, out_file=None, *, max_depth=None, feature_names=None, class_names=None, label='all', filled=False, leaves_parallel=False, impurity=True, node_ids=False, proportion=False, rotate=False, rounded=False, special_characters=False, precision=3) [sourc... | sklearn.modules.generated.sklearn.tree.export_graphviz |
sklearn.tree.export_text
sklearn.tree.export_text(decision_tree, *, feature_names=None, max_depth=10, spacing=3, decimals=2, show_weights=False) [source]
Build a text report showing the rules of a decision tree. Note that backwards compatibility may not be supported. Parameters
decision_treeobject
The decisio... | sklearn.modules.generated.sklearn.tree.export_text |
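A minimal sketch on the iris dataset (depth and random_state chosen arbitrarily for the example), printing the fitted tree as plain-text if/else rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    iris.data, iris.target)
# Plain-text view of the tree's decision rules, one indented
# line per split, with leaf class labels.
report = export_text(clf, feature_names=list(iris.feature_names))
print(report)
```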
sklearn.tree.plot_tree
sklearn.tree.plot_tree(decision_tree, *, max_depth=None, feature_names=None, class_names=None, label='all', filled=False, impurity=True, node_ids=False, proportion=False, rotate='deprecated', rounded=False, precision=3, ax=None, fontsize=None) [source]
Plot a decision tree. The sample counts ... | sklearn.modules.generated.sklearn.tree.plot_tree |
sklearn.utils.all_estimators
sklearn.utils.all_estimators(type_filter=None) [source]
Get a list of all estimators from sklearn. This function crawls the module and gets all classes that inherit from BaseEstimator. Classes that are defined in test-modules are not included. Parameters
type_filter{“classifier”, “r... | sklearn.modules.generated.sklearn.utils.all_estimators |
sklearn.utils.arrayfuncs.min_pos
sklearn.utils.arrayfuncs.min_pos()
Find the minimum value of an array over positive values. Returns a huge value if none of the values are positive. | sklearn.modules.generated.sklearn.utils.arrayfuncs.min_pos |
sklearn.utils.assert_all_finite
sklearn.utils.assert_all_finite(X, *, allow_nan=False) [source]
Throw a ValueError if X contains NaN or infinity. Parameters
X{ndarray, sparse matrix}
allow_nanbool, default=False | sklearn.modules.generated.sklearn.utils.assert_all_finite |
sklearn.utils.as_float_array
sklearn.utils.as_float_array(X, *, copy=True, force_all_finite=True) [source]
Converts an array-like to an array of floats. The new dtype will be np.float32 or np.float64, depending on the original type. The function can create a copy or modify the argument depending on the argument cop... | sklearn.modules.generated.sklearn.utils.as_float_array |
sklearn.utils.Bunch
sklearn.utils.Bunch(**kwargs) [source]
Container object exposing keys as attributes. Bunch objects are sometimes used as an output for functions and methods. They extend dictionaries by enabling values to be accessed by key, bunch["value_key"], or by an attribute, bunch.value_key. Examples >>> b... | sklearn.modules.generated.sklearn.utils.bunch |
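A minimal sketch of the dual access pattern (keys chosen arbitrarily for the example):

```python
from sklearn.utils import Bunch

b = Bunch(data=[1, 2, 3], target_name='y')
# Dict-style and attribute-style access are interchangeable.
print(b['data'] == b.data)   # True
# Attribute assignment also updates the underlying dict.
b.extra = 'note'
print(b['extra'])            # note
```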
sklearn.utils.check_array
sklearn.utils.check_array(array, accept_sparse=False, *, accept_large_sparse=True, dtype='numeric', order=None, copy=False, force_all_finite=True, ensure_2d=True, allow_nd=False, ensure_min_samples=1, ensure_min_features=1, estimator=None) [source]
Input validation on an array, list, spars... | sklearn.modules.generated.sklearn.utils.check_array |
sklearn.utils.check_consistent_length
sklearn.utils.check_consistent_length(*arrays) [source]
Check that all arrays have consistent first dimensions. Checks whether all objects in arrays have the same shape or length. Parameters
*arrayslist or tuple of input objects.
Objects that will be checked for consisten... | sklearn.modules.generated.sklearn.utils.check_consistent_length |
sklearn.utils.check_random_state
sklearn.utils.check_random_state(seed) [source]
Turn seed into an np.random.RandomState instance. Parameters
seedNone, int or instance of RandomState
If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded ... | sklearn.modules.generated.sklearn.utils.check_random_state |
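A sketch of the reproducibility this enables: the same int seed always yields an identically seeded RandomState, which is how estimators make random_state=int deterministic.

```python
import numpy as np
from sklearn.utils import check_random_state

rng_a = check_random_state(42)
rng_b = check_random_state(42)
# Both generators were seeded identically, so their draws match.
draw_a = rng_a.randint(1000)
draw_b = rng_b.randint(1000)
print(draw_a == draw_b)  # True
```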
sklearn.utils.check_scalar
sklearn.utils.check_scalar(x, name, target_type, *, min_val=None, max_val=None) [source]
Validate scalar parameters type and value. Parameters
xobject
The scalar parameter to validate.
namestr
The name of the parameter to be printed in error messages.
target_typetype or tuple ... | sklearn.modules.generated.sklearn.utils.check_scalar |
sklearn.utils.check_X_y
sklearn.utils.check_X_y(X, y, accept_sparse=False, *, accept_large_sparse=True, dtype='numeric', order=None, copy=False, force_all_finite=True, ensure_2d=True, allow_nd=False, multi_output=False, ensure_min_samples=1, ensure_min_features=1, y_numeric=False, estimator=None) [source]
Input val... | sklearn.modules.generated.sklearn.utils.check_x_y |
sklearn.utils.class_weight.compute_class_weight
sklearn.utils.class_weight.compute_class_weight(class_weight, *, classes, y) [source]
Estimate class weights for unbalanced datasets. Parameters
class_weightdict, ‘balanced’ or None
If ‘balanced’, class weights will be given by n_samples / (n_classes * np.bincou... | sklearn.modules.generated.sklearn.utils.class_weight.compute_class_weight |
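A worked sketch of the 'balanced' heuristic on a made-up imbalanced label vector:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])
# 'balanced' gives each class weight n_samples / (n_classes * bincount):
# class 0 -> 4 / (2 * 3) ~ 0.667, class 1 -> 4 / (2 * 1) = 2.0,
# so the minority class is upweighted.
weights = compute_class_weight('balanced', classes=np.array([0, 1]), y=y)
print(weights)
```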
sklearn.utils.class_weight.compute_sample_weight
sklearn.utils.class_weight.compute_sample_weight(class_weight, y, *, indices=None) [source]
Estimate sample weights by class for unbalanced datasets. Parameters
class_weightdict, list of dicts, “balanced”, or None
Weights associated with classes in the form {cl... | sklearn.modules.generated.sklearn.utils.class_weight.compute_sample_weight |
sklearn.utils.deprecated
sklearn.utils.deprecated(extra='') [source]
Decorator to mark a function or class as deprecated. Issue a warning when the function is called/the class is instantiated and adds a warning to the docstring. The optional extra argument will be appended to the deprecation message and the docstri... | sklearn.modules.generated.sklearn.utils.deprecated |
sklearn.utils.estimator_checks.check_estimator
sklearn.utils.estimator_checks.check_estimator(Estimator, generate_only=False) [source]
Check if estimator adheres to scikit-learn conventions. This function will run an extensive test-suite for input validation, shapes, etc, making sure that the estimator complies wi... | sklearn.modules.generated.sklearn.utils.estimator_checks.check_estimator |
sklearn.utils.estimator_checks.parametrize_with_checks
sklearn.utils.estimator_checks.parametrize_with_checks(estimators) [source]
Pytest specific decorator for parametrizing estimator checks. The id of each check is set to be a pprint version of the estimator and the name of the check with its keyword arguments. T... | sklearn.modules.generated.sklearn.utils.estimator_checks.parametrize_with_checks |
sklearn.utils.estimator_html_repr
sklearn.utils.estimator_html_repr(estimator) [source]
Build a HTML representation of an estimator. Read more in the User Guide. Parameters
estimatorestimator object
The estimator to visualize. Returns
html: str
HTML representation of estimator. | sklearn.modules.generated.sklearn.utils.estimator_html_repr |
sklearn.utils.extmath.density
sklearn.utils.extmath.density(w, **kwargs) [source]
Compute density of a sparse vector. Parameters
warray-like
The sparse vector. Returns
float
The density of w, between 0 and 1.
Examples using sklearn.utils.extmath.density
Classification of text documents using s... | sklearn.modules.generated.sklearn.utils.extmath.density |
sklearn.utils.extmath.fast_logdet
sklearn.utils.extmath.fast_logdet(A) [source]
Compute log(det(A)) for A symmetric. Equivalent to np.log(np.linalg.det(A)) but more robust. It returns -Inf if det(A) is non-positive or is not defined. Parameters
Aarray-like
The matrix. | sklearn.modules.generated.sklearn.utils.extmath.fast_logdet |
sklearn.utils.extmath.randomized_range_finder
sklearn.utils.extmath.randomized_range_finder(A, *, size, n_iter, power_iteration_normalizer='auto', random_state=None) [source]
Computes an orthonormal matrix whose range approximates the range of A. Parameters
A2D array
The input data matrix.
sizeint
Size of... | sklearn.modules.generated.sklearn.utils.extmath.randomized_range_finder |
sklearn.utils.extmath.randomized_svd
sklearn.utils.extmath.randomized_svd(M, n_components, *, n_oversamples=10, n_iter='auto', power_iteration_normalizer='auto', transpose='auto', flip_sign=True, random_state=0) [source]
Computes a truncated randomized SVD. Parameters
M{ndarray, sparse matrix}
Matrix to decom... | sklearn.modules.generated.sklearn.utils.extmath.randomized_svd |
sklearn.utils.extmath.safe_sparse_dot
sklearn.utils.extmath.safe_sparse_dot(a, b, *, dense_output=False) [source]
Dot product that handles the sparse matrix case correctly. Parameters
a{ndarray, sparse matrix}
b{ndarray, sparse matrix}
dense_outputbool, default=False
When False, a and b both being sparse w... | sklearn.modules.generated.sklearn.utils.extmath.safe_sparse_dot |
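A minimal sketch mixing a sparse and a dense operand (the matrices are made up for the example):

```python
import numpy as np
from scipy import sparse
from sklearn.utils.extmath import safe_sparse_dot

A = sparse.csr_matrix(np.array([[1.0, 0.0],
                                [0.0, 2.0]]))
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# The same call works for any sparse/dense mix; dense_output=True
# guarantees an ndarray result even when both inputs are sparse.
C = safe_sparse_dot(A, B, dense_output=True)
print(C)  # [[1. 2.] [6. 8.]]
```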
sklearn.utils.extmath.weighted_mode
sklearn.utils.extmath.weighted_mode(a, w, *, axis=0) [source]
Returns an array of the weighted modal (most common) value in a. If there is more than one such value, only the first is returned. The bin-count for the modal bins is also returned. This is an extension of the algorith... | sklearn.modules.generated.sklearn.utils.extmath.weighted_mode |
sklearn.utils.gen_even_slices
sklearn.utils.gen_even_slices(n, n_packs, *, n_samples=None) [source]
Generator to create n_packs slices going up to n. Parameters
nint
n_packsint
Number of slices to generate.
n_samplesint, default=None
Number of samples. Pass n_samples when the slices are to be used for s... | sklearn.modules.generated.sklearn.utils.gen_even_slices |
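A quick sketch of splitting a range into nearly even contiguous slices:

```python
from sklearn.utils import gen_even_slices

# Split range(10) into 3 contiguous slices, as evenly as possible:
# the first slice absorbs the remainder.
slices = list(gen_even_slices(10, 3))
print(slices)  # [slice(0, 4, None), slice(4, 7, None), slice(7, 10, None)]
```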
sklearn.utils.graph.single_source_shortest_path_length
sklearn.utils.graph.single_source_shortest_path_length(graph, source, *, cutoff=None) [source]
Return the shortest path length from source to all reachable nodes. Returns a dictionary of shortest path lengths keyed by target. Parameters
graph{sparse matrix,... | sklearn.modules.generated.sklearn.utils.graph.single_source_shortest_path_length |
sklearn.utils.graph_shortest_path.graph_shortest_path
sklearn.utils.graph_shortest_path.graph_shortest_path()
Perform a shortest-path graph search on a positive directed or undirected graph. Parameters
dist_matrixarraylike or sparse matrix, shape = (N,N)
Array of positive distances. If vertex i is connected t... | sklearn.modules.generated.sklearn.utils.graph_shortest_path.graph_shortest_path |
sklearn.utils.indexable
sklearn.utils.indexable(*iterables) [source]
Make arrays indexable for cross-validation. Checks consistent length, passes through None, and ensures that everything can be indexed by converting sparse matrices to CSR and converting non-iterable objects to arrays. Parameters
*iterables{li... | sklearn.modules.generated.sklearn.utils.indexable |
sklearn.utils.metaestimators.if_delegate_has_method
sklearn.utils.metaestimators.if_delegate_has_method(delegate) [source]
Create a decorator for methods that are delegated to a sub-estimator. This enables duck typing by hasattr returning True according to the sub-estimator. Parameters
delegatestring, list of str... | sklearn.modules.generated.sklearn.utils.metaestimators.if_delegate_has_method |
sklearn.utils.multiclass.is_multilabel
sklearn.utils.multiclass.is_multilabel(y) [source]
Check if y is in a multilabel format. Parameters
yndarray of shape (n_samples,)
Target values. Returns
outbool
Return True if y is in a multilabel format, else False. Examples >>> import numpy as np
>>> fro... | sklearn.modules.generated.sklearn.utils.multiclass.is_multilabel |
sklearn.utils.multiclass.type_of_target
sklearn.utils.multiclass.type_of_target(y) [source]
Determine the type of data indicated by the target. Note that this type is the most specific type that can be inferred. For example:
binary is more specific but compatible with multiclass.
multiclass of integers is more s... | sklearn.modules.generated.sklearn.utils.multiclass.type_of_target |
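A minimal sketch of the target-type inference on a few made-up label arrays:

```python
from sklearn.utils.multiclass import type_of_target

kinds = [
    type_of_target([0, 1, 1, 0]),      # two discrete labels
    type_of_target([0.1, 0.6, 1.5]),   # real-valued targets
    type_of_target([[1, 1], [0, 1]]),  # binary indicator matrix
]
print(kinds)  # ['binary', 'continuous', 'multilabel-indicator']
```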
sklearn.utils.multiclass.unique_labels
sklearn.utils.multiclass.unique_labels(*ys) [source]
Extract an ordered array of unique labels. We don’t allow:
mix of multilabel and multiclass (single label) targets; mix of label indicator matrix and anything else, because there are no explicit labels; mix of label indica... | sklearn.modules.generated.sklearn.utils.multiclass.unique_labels |
sklearn.utils.murmurhash3_32
sklearn.utils.murmurhash3_32()
Compute the 32-bit murmurhash3 of key at seed. The underlying implementation is MurmurHash3_x86_32, generating a low-latency 32-bit hash suitable for implementing lookup tables, Bloom filters, count-min sketch or feature hashing. Parameters
keynp.int32, by... | sklearn.modules.generated.sklearn.utils.murmurhash3_32 |
sklearn.utils.parallel_backend
sklearn.utils.parallel_backend(backend, n_jobs=-1, inner_max_num_threads=None, **backend_params) [source]
Change the default backend used by Parallel inside a with block. If backend is a string it must match a previously registered implementation using the register_parallel_backend f... | sklearn.modules.generated.sklearn.utils.parallel_backend |
sklearn.utils.random.sample_without_replacement
sklearn.utils.random.sample_without_replacement()
Sample integers without replacement. Select n_samples integers from the set [0, n_population) without replacement. Parameters
n_populationint
The size of the set to sample from.
n_samplesint
The number of int... | sklearn.modules.generated.sklearn.utils.random.sample_without_replacement |
sklearn.utils.register_parallel_backend
sklearn.utils.register_parallel_backend(name, factory, make_default=False) [source]
Register a new Parallel backend factory. The new backend can then be selected by passing its name as the backend argument to the Parallel class. Moreover, the default backend can be overwritte... | sklearn.modules.generated.sklearn.utils.register_parallel_backend |
sklearn.utils.resample
sklearn.utils.resample(*arrays, replace=True, n_samples=None, random_state=None, stratify=None) [source]
Resample arrays or sparse matrices in a consistent way. The default strategy implements one step of the bootstrapping procedure. Parameters
*arrayssequence of array-like of shape (n_sa... | sklearn.modules.generated.sklearn.utils.resample |
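A minimal bootstrap sketch on made-up aligned arrays: the same indices are drawn for every array passed in, so X and y stay paired.

```python
import numpy as np
from sklearn.utils import resample

X = np.arange(5)                       # X[i] equals its own index
y = np.array([10, 11, 12, 13, 14])     # y[i] = 10 + i
# One bootstrap draw: sampling with replacement, X and y stay aligned.
Xb, yb = resample(X, y, replace=True, n_samples=5, random_state=0)
print(Xb, yb)
```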
sklearn.utils.safe_mask
sklearn.utils.safe_mask(X, mask) [source]
Return a mask which is safe to use on X. Parameters
X{array-like, sparse matrix}
Data on which to apply mask.
maskndarray
Mask to be used on X. Returns
mask | sklearn.modules.generated.sklearn.utils.safe_mask |
sklearn.utils.safe_sqr
sklearn.utils.safe_sqr(X, *, copy=True) [source]
Element wise squaring of array-likes and sparse matrices. Parameters
X{array-like, ndarray, sparse matrix}
copybool, default=True
Whether to create a copy of X and operate on it or to perform inplace computation (default behaviour). ... | sklearn.modules.generated.sklearn.utils.safe_sqr |
sklearn.utils.shuffle
sklearn.utils.shuffle(*arrays, random_state=None, n_samples=None) [source]
Shuffle arrays or sparse matrices in a consistent way. This is a convenience alias to resample(*arrays, replace=False) to do random permutations of the collections. Parameters
*arrayssequence of indexable data-struc... | sklearn.modules.generated.sklearn.utils.shuffle |
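A minimal sketch showing that one permutation is applied consistently to all arrays (toy data made up for the example):

```python
import numpy as np
from sklearn.utils import shuffle

X = np.array([[1, 2], [3, 4], [5, 6]])
y = np.array([0, 1, 2])  # y holds each row's original index
# The same random permutation is applied to every array passed in.
Xs, ys = shuffle(X, y, random_state=0)
print(Xs, ys)
```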
sklearn.utils.sparsefuncs.incr_mean_variance_axis
sklearn.utils.sparsefuncs.incr_mean_variance_axis(X, *, axis, last_mean, last_var, last_n, weights=None) [source]
Compute incremental mean and variance along an axis on a CSR or CSC matrix. last_mean, last_var are the statistics computed at the last step by this fun... | sklearn.modules.generated.sklearn.utils.sparsefuncs.incr_mean_variance_axis |
sklearn.utils.sparsefuncs.inplace_column_scale
sklearn.utils.sparsefuncs.inplace_column_scale(X, scale) [source]
Inplace column scaling of a CSC/CSR matrix. Scale each feature of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xspars... | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_column_scale |
sklearn.utils.sparsefuncs.inplace_csr_column_scale
sklearn.utils.sparsefuncs.inplace_csr_column_scale(X, scale) [source]
Inplace column scaling of a CSR matrix. Scale each feature of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xs... | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_csr_column_scale |
sklearn.utils.sparsefuncs.inplace_row_scale
sklearn.utils.sparsefuncs.inplace_row_scale(X, scale) [source]
Inplace row scaling of a CSR or CSC matrix. Scale each row of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xsparse matrix o... | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_row_scale |
sklearn.utils.sparsefuncs.inplace_swap_column
sklearn.utils.sparsefuncs.inplace_swap_column(X, m, n) [source]
Swaps two columns of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two columns are to be swapped. It should be of CSR or CSC format.
mint
Index... | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_swap_column |
sklearn.utils.sparsefuncs.inplace_swap_row
sklearn.utils.sparsefuncs.inplace_swap_row(X, m, n) [source]
Swaps two rows of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two rows are to be swapped. It should be of CSR or CSC format.
mint
Index of the row ... | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_swap_row |
sklearn.utils.sparsefuncs.mean_variance_axis
sklearn.utils.sparsefuncs.mean_variance_axis(X, axis, weights=None, return_sum_weights=False) [source]
Compute mean and variance along an axis on a CSR or CSC matrix. Parameters
Xsparse matrix of shape (n_samples, n_features)
Input data. It can be of CSR or CSC for... | sklearn.modules.generated.sklearn.utils.sparsefuncs.mean_variance_axis |
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1()
Inplace row normalize using the l1 norm | sklearn.modules.generated.sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1 |
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2()
Inplace row normalize using the l2 norm | sklearn.modules.generated.sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2 |
sklearn.utils.validation.check_is_fitted
sklearn.utils.validation.check_is_fitted(estimator, attributes=None, *, msg=None, all_or_any=<built-in function all>) [source]
Perform is_fitted validation for estimator. Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing... | sklearn.modules.generated.sklearn.utils.validation.check_is_fitted |
sklearn.utils.validation.check_memory
sklearn.utils.validation.check_memory(memory) [source]
Check that memory is joblib.Memory-like. joblib.Memory-like means that memory can be converted into a joblib.Memory instance (typically a str denoting the location) or has the same interface (has a cache method). Parameter... | sklearn.modules.generated.sklearn.utils.validation.check_memory |
sklearn.utils.validation.check_symmetric
sklearn.utils.validation.check_symmetric(array, *, tol=1e-10, raise_warning=True, raise_exception=False) [source]
Make sure that array is 2D, square and symmetric. If the array is not symmetric, then a symmetrized version is returned. Optionally, a warning or exception is ra... | sklearn.modules.generated.sklearn.utils.validation.check_symmetric |
sklearn.utils.validation.column_or_1d
sklearn.utils.validation.column_or_1d(y, *, warn=False) [source]
Ravel column or 1d numpy array, else raises an error. Parameters
yarray-like
warnbool, default=False
To control display of warnings. Returns
yndarray | sklearn.modules.generated.sklearn.utils.validation.column_or_1d |
sklearn.utils.validation.has_fit_parameter
sklearn.utils.validation.has_fit_parameter(estimator, parameter) [source]
Checks whether the estimator’s fit method supports the given parameter. Parameters
estimatorobject
An estimator to inspect.
parameterstr
The searched parameter. Returns
is_parameter: b... | sklearn.modules.generated.sklearn.utils.validation.has_fit_parameter |
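A one-line sketch of the typical use: meta-estimators call this to decide whether they can forward sample_weight to a sub-estimator's fit.

```python
from sklearn.svm import LinearSVC
from sklearn.utils.validation import has_fit_parameter

# LinearSVC.fit accepts sample_weight, so this returns True.
supports_sw = has_fit_parameter(LinearSVC(), 'sample_weight')
print(supports_sw)  # True
```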
sklearn.utils._safe_indexing
sklearn.utils._safe_indexing(X, indices, *, axis=0) [source]
Return rows, items or columns of X using indices. Warning This utility is documented, but private. This means that backward compatibility might be broken without any deprecation cycle. Parameters
Xarray-like, sparse-matr... | sklearn.modules.generated.sklearn.utils._safe_indexing |
sklearn.svm.l1_min_c(X, y, *, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0) [source]
Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty=’l1’ and linear_model.Logistic... | sklearn.modules.generated.sklearn.svm.l1_min_c#sklearn.svm.l1_min_c |
class sklearn.svm.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC |
sklearn.svm.LinearSVC
class sklearn.svm.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Classification. Similar to SVC with parame... | sklearn.modules.generated.sklearn.svm.linearsvc |
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.decision_function |
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.densify |
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vec... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.get_params |
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.predict |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_featur... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.set_params |
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. ... | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.sparsify |
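Tying the LinearSVC methods above together, a minimal sketch on trivially separable 1-D toy data (values made up for the example):

```python
import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
# predict returns hard labels; decision_function returns the
# signed distance to the separating hyperplane.
pred = clf.predict([[3.0], [-3.0]])
margin = clf.decision_function([[3.0]])
print(pred)    # [1 0]
print(margin)  # positive score for class 1
```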
class sklearn.svm.LinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Regression. Similar to SVR with parameter kernel=’linear’, but implemented in terms of liblinear ... | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR |
sklearn.svm.LinearSVR
class sklearn.svm.LinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Regression. Similar to SVR with parameter kernel=’linear’, but implemente... | sklearn.modules.generated.sklearn.svm.linearsvr |
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vec... | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR.get_params |
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values. | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()... | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Es... | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR.set_params |
class sklearn.svm.NuSVC(*, nu=0.5, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None) [source]
Nu-Support Vector Classification. Similar to S... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC |
sklearn.svm.NuSVC
class sklearn.svm.NuSVC(*, nu=0.5, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None) [source]
Nu-Support Vector Classifi... | sklearn.modules.generated.sklearn.svm.nusvc |
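As a quick usage sketch (toy two-class data, default rbf kernel; the points are illustrative):

```python
import numpy as np
from sklearn.svm import NuSVC

# nu=0.5 upper-bounds the fraction of margin errors and
# lower-bounds the fraction of support vectors.
X = np.array([[-2.0, -2.0], [-1.0, -1.5], [1.5, 1.0], [2.0, 2.0]])
y = np.array([0, 0, 1, 1])

clf = NuSVC(nu=0.5).fit(X, y)
preds = clf.predict(X)
```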
decision_function(X) [source]
Evaluates the decision function for the samples in X. Parameters
X : array-like of shape (n_samples, n_features)
Returns
X : ndarray of shape (n_samples, n_classes * (n_classes-1) / 2)
Returns the decision function of the sample for each class in the model. If decision_function_sha... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.decision_function |
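With more than two classes, the returned shape depends on decision_function_shape; a sketch with four synthetic, well-separated clusters (assumed data, not from the docs):

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
# Five points per class, tightly clustered around four separated centers.
X = np.vstack([c + 0.1 * rng.randn(5, 2) for c in centers])
y = np.repeat([0, 1, 2, 3], 5)

# "ovo": one column per class pair -> n_classes * (n_classes - 1) / 2 = 6
ovo = NuSVC(decision_function_shape="ovo").fit(X, y).decision_function(X)
# "ovr" (default): one column per class -> 4
ovr = NuSVC(decision_function_shape="ovr").fit(X, y).decision_function(X)
```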
fit(X, y, sample_weight=None) [source]
Fit the SVM model according to the given training data. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”preco... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.fit |
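For kernel="precomputed", the X passed to fit is the (n_samples, n_samples) Gram matrix rather than the raw features; a minimal sketch using a hand-computed linear kernel on toy data:

```python
import numpy as np
from sklearn.svm import NuSVC

X_train = np.array([[-2.0, -1.0], [-1.0, -2.0], [1.0, 2.0], [2.0, 1.0]])
y_train = np.array([0, 0, 1, 1])

# Gram matrix of the training set: linear kernel computed by hand.
K_train = X_train @ X_train.T             # shape (n_samples, n_samples)
clf = NuSVC(kernel="precomputed").fit(K_train, y_train)

# At predict time, the kernel between test and training samples is expected.
X_test = np.array([[-1.5, -1.5], [1.5, 1.5]])
K_test = X_test @ X_train.T               # shape (n_samples_test, n_samples_train)
preds = clf.predict(K_test)
```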
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.get_params |
predict(X) [source]
Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Retur... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict |
property predict_log_proba
Compute log probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
X : array-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kerne... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict_log_proba |
property predict_proba
Compute probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
X : array-like of shape (n_samples, n_features)
For kernel=”precomputed”, the expected shape of X is (... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict_proba |
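probability=True must be set before fitting for predict_proba to be usable; each row of the result sums to one. A sketch on synthetic two-class data:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
# Two Gaussian blobs, ten points per class.
X = np.vstack([rng.randn(10, 2) - 3, rng.randn(10, 2) + 3])
y = np.repeat([0, 1], 10)

# Without probability=True, calling predict_proba raises an AttributeError.
clf = NuSVC(probability=True, random_state=0).fit(X, y)
proba = clf.predict_proba(X)  # shape (n_samples, n_classes), rows sum to 1
```

Note that these probabilities come from an internal cross-validated Platt scaling, so on small datasets they may be inconsistent with the labels returned by predict.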
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
X : array-like of shape (n_samples, n_featur...
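For a plain (single-label) classifier, score is simply the sample-mean of exact matches between predict(X) and y; a sketch verifying that identity on synthetic data:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 2) - 3, rng.randn(10, 2) + 3])
y = np.repeat([0, 1], 10)

clf = NuSVC().fit(X, y)
# Mean accuracy computed by hand equals the value returned by score.
acc = (clf.predict(X) == y).mean()
```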
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Es... | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.set_params |
class sklearn.svm.NuSVR(*, nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1) [source]
Nu Support Vector Regression. Similar to NuSVC, for regression, uses a parameter nu to control the number of support vectors. However, unlike NuS... | sklearn.modules.generated.sklearn.svm.nusvr#sklearn.svm.NuSVR |
sklearn.svm.NuSVR
class sklearn.svm.NuSVR(*, nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1) [source]
Nu Support Vector Regression. Similar to NuSVC, for regression, uses a parameter nu to control the number of support vectors.... | sklearn.modules.generated.sklearn.svm.nusvr |
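A minimal regression sketch (synthetic noisy 1-D sine data, chosen only for illustration):

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, size=(40, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# Here nu controls the number of support vectors: it upper-bounds the fraction
# of training errors and lower-bounds the fraction of support vectors.
reg = NuSVR(nu=0.5, C=1.0).fit(X, y)
y_pred = reg.predict(X)  # shape (n_samples,)
```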
fit(X, y, sample_weight=None) [source]
Fit the SVM model according to the given training data. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”preco... | sklearn.modules.generated.sklearn.svm.nusvr#sklearn.svm.NuSVR.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.nusvr#sklearn.svm.NuSVR.get_params |
predict(X) [source]
Perform regression on samples in X. For a one-class model, +1 (inlier) or -1 (outlier) is returned. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_pred : ndarray of...