tf.raw_ops.ScatterNdUpdate Applies sparse updates to individual values or slices within a given variable according to indices. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdUpdate tf.raw_ops.ScatterNdUpdate( ref, indices, updates, use_locking=True, name=None ) ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape \([d_0, ..., d_{Q-2}, K]\) where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: $$[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].$$ For example, say we want to update 4 scattered elements of a rank-1 tensor with 8 elements. In Python, that update would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) update = tf.scatter_nd_update(ref, indices, updates) with tf.Session() as sess: print(sess.run(update)) The resulting update to ref would look like this: [1, 11, 3, 10, 9, 6, 7, 12] See tf.scatter_nd for more details about how to make updates to slices. See also tf.scatter_update and tf.batch_scatter_update. Args ref A mutable Tensor. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to store in ref. use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
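The rank-1 example above can be sketched in pure Python (illustrative only; the real op mutates a tf.Variable in place), assuming the innermost dimension of indices selects individual elements:

```python
# Pure-Python sketch of the ScatterNdUpdate semantics for the rank-1 case.
def scatter_nd_update(ref, indices, updates):
    out = list(ref)
    for idx, val in zip(indices, updates):
        out[idx[0]] = val  # innermost dimension of `indices` picks an element
    return out

ref = [1, 2, 3, 4, 5, 6, 7, 8]
result = scatter_nd_update(ref, [[4], [3], [1], [7]], [9, 10, 11, 12])
print(result)  # [1, 11, 3, 10, 9, 6, 7, 12]
```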
tf.raw_ops.ScatterSub Subtracts sparse updates from a variable reference. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ScatterSub tf.raw_ops.ScatterSub( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] -= updates[...] # Vector indices (for each i) ref[indices[i], ...] -= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
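The duplicate-index behavior above can be sketched in pure Python (an illustration of the documented semantics, not the real op, which works on a Variable):

```python
# Pure-Python sketch of ScatterSub: contributions at duplicate indices add.
def scatter_sub(ref, indices, updates):
    out = list(ref)
    for i, u in zip(indices, updates):
        out[i] -= u  # each update subtracts from the referenced location
    return out

# Index 1 appears twice, so both updates accumulate: 20 - 1 - 2 = 17.
print(scatter_sub([10, 20, 30], [1, 1, 2], [1, 2, 3]))  # [10, 17, 27]
```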
tf.raw_ops.ScatterUpdate Applies sparse updates to a variable reference. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ScatterUpdate tf.raw_ops.ScatterUpdate( ref, indices, updates, use_locking=True, name=None ) This operation computes # Scalar indices ref[indices, ...] = updates[...] # Vector indices (for each i) ref[indices[i], ...] = updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. See also tf.batch_scatter_update and tf.scatter_nd_update. Args ref A mutable Tensor. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to store in ref. use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
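The vector-indices case above can be sketched in pure Python (duplicate indices are avoided here, since the op leaves the order of duplicate updates undefined):

```python
# Pure-Python sketch of ScatterUpdate with vector indices and no duplicates.
def scatter_update(ref, indices, updates):
    out = list(ref)
    for i, u in zip(indices, updates):
        out[i] = u  # assignment, unlike the accumulating ScatterAdd/ScatterSub
    return out

print(scatter_update([1, 2, 3, 4], [3, 0], [40, 10]))  # [10, 2, 3, 40]
```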
tf.raw_ops.SdcaFprint Computes fingerprints of the input strings. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SdcaFprint tf.raw_ops.SdcaFprint( input, name=None ) Args input A Tensor of type string. vector of strings to compute fingerprints on. name A name for the operation (optional). Returns A Tensor of type int64.
tf.raw_ops.SdcaOptimizer Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SdcaOptimizer tf.raw_ops.SdcaOptimizer( sparse_example_indices, sparse_feature_indices, sparse_feature_values, dense_features, example_weights, example_labels, sparse_indices, sparse_weights, dense_weights, example_state_data, loss_type, l1, l2, num_loss_partitions, num_inner_iterations, adaptative=True, name=None ) As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate. Proximal Stochastic Dual Coordinate Ascent. Shai Shalev-Shwartz, Tong Zhang. 2012 $$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2} \|w\|^2 + l_1 \|w\|$$ Adding vs. Averaging in Distributed Primal-Dual Optimization. Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015 Stochastic Dual Coordinate Ascent with Adaptive Probabilities. Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 Args sparse_example_indices A list of Tensor objects with type int64. a list of vectors which contain example indices. sparse_feature_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors which contain feature indices. sparse_feature_values A list of Tensor objects with type float32. a list of vectors which contain the feature values associated with each feature group. dense_features A list of Tensor objects with type float32. a list of matrices which contain the dense feature values. example_weights A Tensor of type float32. a vector which contains the weight associated with each example. example_labels A Tensor of type float32. a vector which contains the label/target associated with each example. sparse_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors where each value is an index that has a corresponding weight in sparse_weights. This field may be omitted for the dense approach. sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32. a list of vectors where each value is the weight associated with a sparse feature group. dense_weights A list with the same length as dense_features of Tensor objects with type float32. a list of vectors where the values are the weights associated with a dense feature group. example_state_data A Tensor of type float32. a list of vectors containing the example state data. loss_type A string from: "logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss". Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses. l1 A float. Symmetric l1 regularization strength. l2 A float. Symmetric l2 regularization strength. num_loss_partitions An int that is >= 1. Number of partitions of the global loss function. num_inner_iterations An int that is >= 1. Number of iterations per mini-batch. adaptative An optional bool. Defaults to True. Whether to use Adaptive SDCA for the inner loop. name A name for the operation (optional). Returns A tuple of Tensor objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). out_example_state_data A Tensor of type float32. out_delta_sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32. out_delta_dense_weights A list with the same length as dense_features of Tensor objects with type float32.
tf.raw_ops.SdcaOptimizerV2 Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SdcaOptimizerV2 tf.raw_ops.SdcaOptimizerV2( sparse_example_indices, sparse_feature_indices, sparse_feature_values, dense_features, example_weights, example_labels, sparse_indices, sparse_weights, dense_weights, example_state_data, loss_type, l1, l2, num_loss_partitions, num_inner_iterations, adaptive=True, name=None ) As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate. Proximal Stochastic Dual Coordinate Ascent. Shai Shalev-Shwartz, Tong Zhang. 2012 $$\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2} \|w\|^2 + l_1 \|w\|$$ Adding vs. Averaging in Distributed Primal-Dual Optimization. Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015 Stochastic Dual Coordinate Ascent with Adaptive Probabilities. Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 Args sparse_example_indices A list of Tensor objects with type int64. a list of vectors which contain example indices. sparse_feature_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors which contain feature indices. sparse_feature_values A list of Tensor objects with type float32. a list of vectors which contain the feature values associated with each feature group. dense_features A list of Tensor objects with type float32. a list of matrices which contain the dense feature values. example_weights A Tensor of type float32. a vector which contains the weight associated with each example. example_labels A Tensor of type float32. a vector which contains the label/target associated with each example. sparse_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors where each value is an index that has a corresponding weight in sparse_weights. This field may be omitted for the dense approach. sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32. a list of vectors where each value is the weight associated with a sparse feature group. dense_weights A list with the same length as dense_features of Tensor objects with type float32. a list of vectors where the values are the weights associated with a dense feature group. example_state_data A Tensor of type float32. a list of vectors containing the example state data. loss_type A string from: "logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss". Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses. l1 A float. Symmetric l1 regularization strength. l2 A float. Symmetric l2 regularization strength. num_loss_partitions An int that is >= 1. Number of partitions of the global loss function. num_inner_iterations An int that is >= 1. Number of iterations per mini-batch. adaptive An optional bool. Defaults to True. Whether to use Adaptive SDCA for the inner loop. name A name for the operation (optional). Returns A tuple of Tensor objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). out_example_state_data A Tensor of type float32. out_delta_sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32. out_delta_dense_weights A list with the same length as dense_features of Tensor objects with type float32.
tf.raw_ops.SdcaShrinkL1 Applies L1 regularization shrink step on the parameters. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SdcaShrinkL1 tf.raw_ops.SdcaShrinkL1( weights, l1, l2, name=None ) Args weights A list of Tensor objects with type mutable float32. a list of vectors where each value is the weight associated with a feature group. l1 A float. Symmetric l1 regularization strength. l2 A float. Symmetric l2 regularization strength. Should be a positive float. name A name for the operation (optional). Returns The created Operation.
tf.raw_ops.SegmentMax Computes the maximum along segments of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SegmentMax tf.raw_ops.SegmentMax( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i. If the max is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_max(c, tf.constant([0, 0, 1])) # ==> [[4, 3, 3, 4], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
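The example above can be sketched in pure Python (illustrative only; assumes segment_ids are sorted and start at 0, as the docs require):

```python
# Pure-Python sketch of SegmentMax: elementwise max over rows sharing an id.
def segment_max(data, segment_ids):
    out = []
    for row, sid in zip(data, segment_ids):
        if sid == len(out):              # first row of a new segment
            out.append(list(row))
        else:                            # fold this row into its segment
            out[sid] = [max(a, b) for a, b in zip(out[sid], row)]
    return out

c = [[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]]
print(segment_max(c, [0, 0, 1]))  # [[4, 3, 3, 4], [5, 6, 7, 8]]
```

The same fold with `min`, `sum`, or a product gives the SegmentMin, SegmentSum, and SegmentProd semantics.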
tf.raw_ops.SegmentMean Computes the mean along segments of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SegmentMean tf.raw_ops.SegmentMean( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where mean is over j such that segment_ids[j] == i and N is the total number of values summed. If the mean is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_mean(c, tf.constant([0, 0, 1])) # ==> [[2.5, 2.5, 2.5, 2.5], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
tf.raw_ops.SegmentMin Computes the minimum along segments of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SegmentMin tf.raw_ops.SegmentMin( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i. If the min is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_min(c, tf.constant([0, 0, 1])) # ==> [[1, 2, 2, 1], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
tf.raw_ops.SegmentProd Computes the product along segments of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SegmentProd tf.raw_ops.SegmentProd( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i. If the product is empty for a given segment ID i, output[i] = 1. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_prod(c, tf.constant([0, 0, 1])) # ==> [[4, 6, 6, 4], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
tf.raw_ops.SegmentSum Computes the sum along segments of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SegmentSum tf.raw_ops.SegmentSum( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i. If the sum is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_sum(c, tf.constant([0, 0, 1])) # ==> [[5, 5, 5, 5], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
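The SegmentSum example above can be sketched in pure Python (illustrative only; assumes sorted segment_ids starting at 0):

```python
# Pure-Python sketch of SegmentSum: elementwise sum over rows sharing an id.
def segment_sum(data, segment_ids):
    out = []
    for row, sid in zip(data, segment_ids):
        if sid == len(out):              # first row of a new segment
            out.append(list(row))
        else:                            # accumulate into the segment
            out[sid] = [a + b for a, b in zip(out[sid], row)]
    return out

c = [[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]]
print(segment_sum(c, [0, 0, 1]))  # [[5, 5, 5, 5], [5, 6, 7, 8]]
```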
tf.raw_ops.Select Selects elements from x or y, depending on condition. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Select tf.raw_ops.Select( condition, x, y, name=None ) The x and y tensors must have the same shape, and the output will also have that shape. The condition tensor must be a scalar if x and y are scalars. If x and y are vectors or higher rank, then condition must be either a scalar, a vector with size matching the first dimension of x, or must have the same shape as x. The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from x (if true) or y (if false). If condition is a vector and x and y are higher rank matrices, then it chooses which row (outer dimension) to copy from x and y. If condition has the same shape as x and y, then it chooses which element to copy from x and y. For example: # 'condition' tensor is [[True, False] # [False, True]] # 't' is [[1, 2], # [3, 4]] # 'e' is [[5, 6], # [7, 8]] select(condition, t, e) # => [[1, 6], [7, 4]] # 'condition' tensor is [True, False] # 't' is [[1, 2], # [3, 4]] # 'e' is [[5, 6], # [7, 8]] select(condition, t, e) ==> [[1, 2], [7, 8]] Args condition A Tensor of type bool. x A Tensor which may have the same shape as condition. If condition is rank 1, x may have higher rank, but its first dimension must match the size of condition. y A Tensor with the same type and shape as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
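Both modes from the example above can be sketched in pure Python (illustrative only, for rank-2 inputs):

```python
# Pure-Python sketch of Select's elementwise and row-selection modes.
def select(condition, t, e):
    if isinstance(condition[0], list):   # same shape as t/e: pick per element
        return [[a if c else b for c, a, b in zip(cr, tr, er)]
                for cr, tr, er in zip(condition, t, e)]
    return [tr if c else er              # vector condition: pick whole rows
            for c, tr, er in zip(condition, t, e)]

t = [[1, 2], [3, 4]]
e = [[5, 6], [7, 8]]
print(select([[True, False], [False, True]], t, e))  # [[1, 6], [7, 4]]
print(select([True, False], t, e))                   # [[1, 2], [7, 8]]
```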
tf.raw_ops.SelectV2 View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SelectV2 tf.raw_ops.SelectV2( condition, t, e, name=None ) Args condition A Tensor of type bool. t A Tensor. e A Tensor. Must have the same type as t. name A name for the operation (optional). Returns A Tensor. Has the same type as t.
tf.raw_ops.SelfAdjointEig Computes the Eigen Decomposition of a batch of square self-adjoint matrices. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SelfAdjointEig tf.raw_ops.SelfAdjointEig( input, name=None ) The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix SelfAdjointEig. The result is a [..., M+1, M] matrix with [..., 0,:] containing the eigenvalues, and subsequent [...,1:, :] containing the eigenvectors. The eigenvalues are sorted in non-decreasing order. Args input A Tensor. Must be one of the following types: float64, float32, half. Shape is [..., M, M]. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tf.raw_ops.SelfAdjointEigV2 Computes the eigen decomposition of one or more square self-adjoint matrices. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SelfAdjointEigV2 tf.raw_ops.SelfAdjointEigV2( input, compute_v=True, name=None ) Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in input such that input[..., :, :] = v[..., :, :] * diag(e[..., :]). The eigenvalues are sorted in non-decreasing order. # a is a tensor. # e is a tensor of eigenvalues. # v is a tensor of eigenvectors. e, v = self_adjoint_eig(a) e = self_adjoint_eig(a, compute_v=False) Args input A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Tensor input of shape [N, N]. compute_v An optional bool. Defaults to True. If True then eigenvectors will be computed and returned in v. Otherwise, only the eigenvalues will be computed. name A name for the operation (optional). Returns A tuple of Tensor objects (e, v). e A Tensor. Has the same type as input. v A Tensor. Has the same type as input.
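The contract above (ascending eigenvalues, eigenvectors as columns of v) matches NumPy's `np.linalg.eigh`, which can serve as a sketch of the semantics:

```python
import numpy as np

# Sketch of the SelfAdjointEigV2 contract using np.linalg.eigh: eigenvalues
# come back in ascending order and a @ v == v @ diag(e) for self-adjoint a.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
e, v = np.linalg.eigh(a)
print(e)                          # [1. 3.] -- sorted in non-decreasing order
print(np.allclose(a @ v, v * e))  # True: v * e scales each eigenvector column
```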
tf.raw_ops.Selu Computes scaled exponential linear: scale * alpha * (exp(features) - 1) if features < 0, scale * features otherwise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Selu tf.raw_ops.Selu( features, name=None ) To be used together with initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN'). For correct dropout, use tf.contrib.nn.alpha_dropout. See Self-Normalizing Neural Networks. Args features A Tensor. Must be one of the following types: half, bfloat16, float32, float64. name A name for the operation (optional). Returns A Tensor. Has the same type as features.
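The formula can be sketched in pure Python. The scale and alpha constants below are the standard values from the Self-Normalizing Neural Networks paper; they are assumed here, as this page does not state them:

```python
import math

# SELU constants from the Self-Normalizing Neural Networks paper (assumed).
SCALE = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    # scale * x for x > 0; scale * alpha * (exp(x) - 1) otherwise
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)

print(selu(1.0))   # ~1.0507
print(selu(0.0))   # 0.0
print(selu(-1e9))  # saturates near -SCALE * ALPHA ~ -1.7581
```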
tf.raw_ops.SeluGrad Computes gradients for the scaled exponential linear (Selu) operation. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SeluGrad tf.raw_ops.SeluGrad( gradients, outputs, name=None ) Args gradients A Tensor. Must be one of the following types: half, bfloat16, float32, float64. The backpropagated gradients to the corresponding Selu operation. outputs A Tensor. Must have the same type as gradients. The outputs of the corresponding Selu operation. name A name for the operation (optional). Returns A Tensor. Has the same type as gradients.
tf.raw_ops.Send Sends the named tensor from send_device to recv_device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Send tf.raw_ops.Send( tensor, tensor_name, send_device, send_device_incarnation, recv_device, client_terminated=False, name=None ) Args tensor A Tensor. The tensor to send. tensor_name A string. The name of the tensor to send. send_device A string. The name of the device sending the tensor. send_device_incarnation An int. The current incarnation of send_device. recv_device A string. The name of the device receiving the tensor. client_terminated An optional bool. Defaults to False. If set to true, this indicates that the node was added to the graph as a result of a client-side feed or fetch of Tensor data, in which case the corresponding send or recv is expected to be managed locally by the caller. name A name for the operation (optional). Returns The created Operation.
tf.raw_ops.SendTPUEmbeddingGradients Performs gradient updates of embedding tables. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SendTPUEmbeddingGradients tf.raw_ops.SendTPUEmbeddingGradients( inputs, learning_rates, config, name=None ) Args inputs A list of at least 1 Tensor objects with type float32. A TensorList of gradients with which to update embedding tables. This argument has the same length and shapes as the return value of RecvTPUEmbeddingActivations, but contains gradients of the model's loss with respect to the embedding activations. The embedding tables are updated from these gradients via the optimizer specified in the TPU embedding configuration given to tpu.initialize_system. learning_rates A list of Tensor objects with type float32. A TensorList of float32 scalars, one for each dynamic learning rate tag: see the comments in //third_party/tensorflow/core/protobuf/tpu/optimization_parameters.proto. Multiple tables can share the same dynamic learning rate tag as specified in the configuration. If the learning rates for all tables are constant, this list should be empty. config A string. Serialized TPUEmbeddingConfiguration proto. name A name for the operation (optional). Returns The created Operation.
tf.raw_ops.SerializeIterator Converts the given resource_handle representing an iterator to a variant tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SerializeIterator tf.raw_ops.SerializeIterator( resource_handle, external_state_policy=0, name=None ) Args resource_handle A Tensor of type resource. A handle to an iterator resource. external_state_policy An optional int. Defaults to 0. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SerializeManySparse Serialize an N-minibatch SparseTensor into an [N, 3] Tensor object. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SerializeManySparse tf.raw_ops.SerializeManySparse( sparse_indices, sparse_values, sparse_shape, out_type=tf.dtypes.string, name=None ) The SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The serialized SparseTensor objects going into each row of serialized_sparse will have rank R-1. The minibatch size N is extracted from sparse_shape[0]. Args sparse_indices A Tensor of type int64. 2-D. The indices of the minibatch SparseTensor. sparse_values A Tensor. 1-D. The values of the minibatch SparseTensor. sparse_shape A Tensor of type int64. 1-D. The shape of the minibatch SparseTensor. out_type An optional tf.DType from: tf.string, tf.variant. Defaults to tf.string. The dtype to use for serialization; the supported types are string (default) and variant. name A name for the operation (optional). Returns A Tensor of type out_type.
tf.raw_ops.SerializeSparse Serialize a SparseTensor into a [3] Tensor object. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SerializeSparse tf.raw_ops.SerializeSparse( sparse_indices, sparse_values, sparse_shape, out_type=tf.dtypes.string, name=None ) Args sparse_indices A Tensor of type int64. 2-D. The indices of the SparseTensor. sparse_values A Tensor. 1-D. The values of the SparseTensor. sparse_shape A Tensor of type int64. 1-D. The shape of the SparseTensor. out_type An optional tf.DType from: tf.string, tf.variant. Defaults to tf.string. The dtype to use for serialization; the supported types are string (default) and variant. name A name for the operation (optional). Returns A Tensor of type out_type.
tf.raw_ops.SerializeTensor Transforms a Tensor into a serialized TensorProto proto. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SerializeTensor tf.raw_ops.SerializeTensor( tensor, name=None ) Args tensor A Tensor. A Tensor of type T. name A name for the operation (optional). Returns A Tensor of type string.
tf.raw_ops.SetSize Number of unique elements along last dimension of input set. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SetSize tf.raw_ops.SetSize( set_indices, set_values, set_shape, validate_indices=True, name=None ) Input set is a SparseTensor represented by set_indices, set_values, and set_shape. The last dimension contains values in a set, duplicates are allowed but ignored. If validate_indices is True, this op validates the order and range of set indices. Args set_indices A Tensor of type int64. 2D Tensor, indices of a SparseTensor. set_values A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, string. 1D Tensor, values of a SparseTensor. set_shape A Tensor of type int64. 1D Tensor, shape of a SparseTensor. validate_indices An optional bool. Defaults to True. name A name for the operation (optional). Returns A Tensor of type int32.
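The counting semantics can be sketched in pure Python on dense rows (illustrative only; the real op takes a SparseTensor in its indices/values/shape components):

```python
# Pure-Python sketch of SetSize: count unique values along the last
# dimension, with duplicates allowed but ignored.
def set_size(rows):
    return [len(set(row)) for row in rows]

print(set_size([[1, 1, 2], [3, 4, 5]]))  # [2, 3]
```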
tf.raw_ops.SetStatsAggregatorDataset View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SetStatsAggregatorDataset tf.raw_ops.SetStatsAggregatorDataset( input_dataset, stats_aggregator, tag, counter_prefix, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. stats_aggregator A Tensor of type resource. tag A Tensor of type string. counter_prefix A Tensor of type string. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.Shape Returns the shape of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Shape tf.raw_ops.Shape( input, out_type=tf.dtypes.int32, name=None ) This operation returns a 1-D integer tensor representing the shape of input. For example: # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] shape(t) ==> [2, 2, 3] Args input A Tensor. out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name A name for the operation (optional). Returns A Tensor of type out_type.
tf.raw_ops.ShapeN Returns shape of tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShapeN tf.raw_ops.ShapeN( input, out_type=tf.dtypes.int32, name=None ) This operation returns N 1-D integer tensors representing shape of input[i]s. Args input A list of at least 1 Tensor objects with the same type. out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name A name for the operation (optional). Returns A list with the same length as input of Tensor objects with type out_type.
tf.raw_ops.ShardDataset Creates a Dataset that includes only 1/num_shards of this dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShardDataset tf.raw_ops.ShardDataset( input_dataset, num_shards, index, output_types, output_shapes, require_non_empty=False, name=None ) Args input_dataset A Tensor of type variant. num_shards A Tensor of type int64. An integer representing the number of shards operating in parallel. index A Tensor of type int64. An integer representing the current worker index. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. require_non_empty An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor of type variant.
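The sharding rule can be sketched in plain Python over a list (the `shard` helper is illustrative; the op operates on dataset variants):

```python
# Illustrative sketch of the sharding rule: worker `index` keeps every
# element whose position satisfies pos % num_shards == index.
def shard(elements, num_shards, index):
    return [e for pos, e in enumerate(elements) if pos % num_shards == index]

print(shard(list(range(10)), 3, 0))  # [0, 3, 6, 9]
```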
tf.raw_ops.ShardedFilename Generate a sharded filename. The filename is printf formatted as View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShardedFilename tf.raw_ops.ShardedFilename( basename, shard, num_shards, name=None ) %s-%05d-of-%05d, basename, shard, num_shards. Args basename A Tensor of type string. shard A Tensor of type int32. num_shards A Tensor of type int32. name A name for the operation (optional). Returns A Tensor of type string.
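A minimal Python sketch of the documented format string (the `sharded_filename` helper name is mine, not part of the API):

```python
# Apply the documented printf format "%s-%05d-of-%05d" to
# (basename, shard, num_shards).
def sharded_filename(basename, shard, num_shards):
    return "%s-%05d-of-%05d" % (basename, shard, num_shards)

print(sharded_filename("train", 2, 10))  # train-00002-of-00010
```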
tf.raw_ops.ShardedFilespec Generate a glob pattern matching all sharded file names. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShardedFilespec tf.raw_ops.ShardedFilespec( basename, num_shards, name=None ) Args basename A Tensor of type string. num_shards A Tensor of type int32. name A name for the operation (optional). Returns A Tensor of type string.
tf.raw_ops.ShuffleAndRepeatDataset Creates a dataset that shuffles and repeats elements from input_dataset View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleAndRepeatDataset tf.raw_ops.ShuffleAndRepeatDataset( input_dataset, buffer_size, seed, seed2, count, output_types, output_shapes, reshuffle_each_iteration=True, name=None ) pseudorandomly. Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. The number of output elements to buffer in an iterator over this dataset. Compare with the min_after_dequeue attr when creating a RandomShuffleQueue. seed A Tensor of type int64. A scalar seed for the random number generator. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used. seed2 A Tensor of type int64. A second scalar seed to avoid seed collision. count A Tensor of type int64. A scalar representing the number of times the underlying dataset should be repeated. The default is -1, which results in infinite repetition. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. reshuffle_each_iteration An optional bool. Defaults to True. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.ShuffleAndRepeatDatasetV2 View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleAndRepeatDatasetV2 tf.raw_ops.ShuffleAndRepeatDatasetV2( input_dataset, buffer_size, seed, seed2, count, seed_generator, output_types, output_shapes, reshuffle_each_iteration=True, name=None ) Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. seed A Tensor of type int64. seed2 A Tensor of type int64. count A Tensor of type int64. seed_generator A Tensor of type resource. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. reshuffle_each_iteration An optional bool. Defaults to True. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.ShuffleDataset Creates a dataset that shuffles elements from input_dataset pseudorandomly. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleDataset tf.raw_ops.ShuffleDataset( input_dataset, buffer_size, seed, seed2, output_types, output_shapes, reshuffle_each_iteration=True, name=None ) Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. The number of output elements to buffer in an iterator over this dataset. Compare with the min_after_dequeue attr when creating a RandomShuffleQueue. seed A Tensor of type int64. A scalar seed for the random number generator. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used. seed2 A Tensor of type int64. A second scalar seed to avoid seed collision. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. reshuffle_each_iteration An optional bool. Defaults to True. If true, each iterator over this dataset will be given a different pseudorandomly generated seed, based on a sequence seeded by the seed and seed2 inputs. If false, each iterator will be given the same seed, and repeated iteration over this dataset will yield the exact same sequence of results. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.ShuffleDatasetV2 View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleDatasetV2 tf.raw_ops.ShuffleDatasetV2( input_dataset, buffer_size, seed_generator, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. seed_generator A Tensor of type resource. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.ShuffleDatasetV3 View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleDatasetV3 tf.raw_ops.ShuffleDatasetV3( input_dataset, buffer_size, seed, seed2, seed_generator, output_types, output_shapes, reshuffle_each_iteration=True, name=None ) Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. seed A Tensor of type int64. seed2 A Tensor of type int64. seed_generator A Tensor of type resource. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. reshuffle_each_iteration An optional bool. Defaults to True. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.ShutdownDistributedTPU Shuts down a running distributed TPU system. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ShutdownDistributedTPU tf.raw_ops.ShutdownDistributedTPU( name=None ) The op returns an error if no system is running. Args name A name for the operation (optional). Returns The created Operation.
tf.raw_ops.Sigmoid Computes sigmoid of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Sigmoid tf.raw_ops.Sigmoid( x, name=None ) Specifically, y = 1 / (1 + exp(-x)). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
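A scalar Python sketch of the documented formula (illustrative only; the op applies it element-wise over tensors):

```python
import math

# Scalar sketch of y = 1 / (1 + exp(-x)).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))  # 0.5
```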
tf.raw_ops.SigmoidGrad Computes the gradient of the sigmoid of x wrt its input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SigmoidGrad tf.raw_ops.SigmoidGrad( y, dy, name=None ) Specifically, grad = dy * y * (1 - y), where y = sigmoid(x), and dy is the corresponding input gradient. Args y A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. dy A Tensor. Must have the same type as y. name A name for the operation (optional). Returns A Tensor. Has the same type as y.
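The gradient formula can be checked against a finite difference in plain Python (helper names are mine, not part of the API):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The documented gradient: grad = dy * y * (1 - y), where y = sigmoid(x).
def sigmoid_grad(y, dy):
    return dy * y * (1.0 - y)

# Compare the analytic gradient with a central finite difference at x = 0.3.
x, eps = 0.3, 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic = sigmoid_grad(sigmoid(x), 1.0)
print(abs(numeric - analytic) < 1e-6)  # True
```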
tf.raw_ops.Sign Returns an element-wise indication of the sign of a number. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Sign tf.raw_ops.Sign( x, name=None ) y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0. For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0. Example usage: tf.math.sign([0., 2., -3.]) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([ 0., 1., -1.], dtype=float32)> Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
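A plain-Python sketch of the documented rule, including the complex case (the `sign` helper is illustrative):

```python
# y = -1 if x < 0; 0 if x == 0; 1 if x > 0.
# For complex x, y = x / |x| if x != 0, otherwise 0.
def sign(x):
    if isinstance(x, complex):
        return x / abs(x) if x != 0 else 0
    return (x > 0) - (x < 0)

print([sign(v) for v in [0.0, 2.0, -3.0]])  # [0, 1, -1]
```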
tf.raw_ops.Sin Computes sine of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Sin tf.raw_ops.Sin( x, name=None ) Given an input tensor, this function computes sine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1]. x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")]) tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tf.raw_ops.Sinh Computes hyperbolic sine of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Sinh tf.raw_ops.Sinh( x, name=None ) Given an input tensor, this function computes hyperbolic sine of every element in the tensor. Input range is [-inf,inf] and output range is [-inf,inf]. x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tf.raw_ops.Size Returns the size of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Size tf.raw_ops.Size( input, out_type=tf.dtypes.int32, name=None ) This operation returns an integer representing the number of elements in input. For example: # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] size(t) ==> 12 Args input A Tensor. out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name A name for the operation (optional). Returns A Tensor of type out_type.
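The returned size is simply the product of the tensor's dimensions; a plain-Python sketch over a shape list (helper name is mine):

```python
from functools import reduce

# The number of elements is the product of the dimensions;
# an empty shape (a scalar) has size 1.
def size(shape):
    return reduce(lambda a, b: a * b, shape, 1)

print(size([2, 2, 3]))  # 12
```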
tf.raw_ops.SkipDataset Creates a dataset that skips count elements from the input_dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SkipDataset tf.raw_ops.SkipDataset( input_dataset, count, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. count A Tensor of type int64. A scalar representing the number of elements from the input_dataset that should be skipped. If count is -1, skips everything. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SleepDataset View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SleepDataset tf.raw_ops.SleepDataset( input_dataset, sleep_microseconds, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. sleep_microseconds A Tensor of type int64. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.Slice Return a slice from 'input'. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Slice tf.raw_ops.Slice( input, begin, size, name=None ) The output tensor is a tensor with dimensions described by 'size' whose values are extracted from 'input' starting at the offsets in 'begin'. Requirements: 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n) Args input A Tensor. begin A Tensor. Must be one of the following types: int32, int64. begin[i] specifies the offset into the 'i'th dimension of 'input' to slice from. size A Tensor. Must have the same type as begin. size[i] specifies the number of elements of the 'i'th dimension of 'input' to slice. If size[i] is -1, all remaining elements in dimension i are included in the slice (i.e. this is equivalent to setting size[i] = input.dim_size(i) - begin[i]). name A name for the operation (optional). Returns A Tensor. Has the same type as input.
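The 1-D case, including the size[i] = -1 convention, can be sketched in plain Python (the `slice_1d` helper is illustrative):

```python
# 1-D sketch of Slice: size == -1 means "all remaining elements
# from begin", i.e. size = len(values) - begin.
def slice_1d(values, begin, size):
    end = len(values) if size == -1 else begin + size
    return values[begin:end]

print(slice_1d([1, 2, 3, 4, 5], 1, 3))   # [2, 3, 4]
print(slice_1d([1, 2, 3, 4, 5], 2, -1))  # [3, 4, 5]
```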
tf.raw_ops.SlidingWindowDataset Creates a dataset that passes a sliding window over input_dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SlidingWindowDataset tf.raw_ops.SlidingWindowDataset( input_dataset, window_size, window_shift, window_stride, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. window_size A Tensor of type int64. A scalar representing the number of elements in the sliding window. window_shift A Tensor of type int64. A scalar representing the steps moving the sliding window forward in one iteration. It must be positive. window_stride A Tensor of type int64. A scalar representing the stride of the input elements of the sliding window. It must be positive. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.Snapshot Returns a copy of the input tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Snapshot tf.raw_ops.Snapshot( input, name=None ) Args input A Tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tf.raw_ops.SnapshotDataset Creates a dataset that will write to / read from a snapshot. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SnapshotDataset tf.raw_ops.SnapshotDataset( input_dataset, path, output_types, output_shapes, compression='', reader_path_prefix='', writer_path_prefix='', shard_size_bytes=10737418240, pending_snapshot_expiry_seconds=86400, num_reader_threads=1, reader_buffer_size=1, num_writer_threads=1, writer_buffer_size=1, shuffle_on_read=False, seed=0, seed2=0, mode='auto', snapshot_name='', name=None ) This dataset attempts to determine whether a valid snapshot exists at the snapshot_path, and reads from the snapshot in lieu of using input_dataset. If not, it will run the preprocessing pipeline as usual, and write out a snapshot of the data processed for future use. Args input_dataset A Tensor of type variant. A variant tensor representing the input dataset. path A Tensor of type string. The path we should write snapshots to / read snapshots from. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. compression An optional string. Defaults to "". reader_path_prefix An optional string. Defaults to "". writer_path_prefix An optional string. Defaults to "". shard_size_bytes An optional int. Defaults to 10737418240. pending_snapshot_expiry_seconds An optional int. Defaults to 86400. num_reader_threads An optional int. Defaults to 1. reader_buffer_size An optional int. Defaults to 1. num_writer_threads An optional int. Defaults to 1. writer_buffer_size An optional int. Defaults to 1. shuffle_on_read An optional bool. Defaults to False. seed An optional int. Defaults to 0. seed2 An optional int. Defaults to 0. mode An optional string. Defaults to "auto". snapshot_name An optional string. Defaults to "". name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SnapshotDatasetV2 Creates a dataset that will write to / read from a snapshot. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SnapshotDatasetV2 tf.raw_ops.SnapshotDatasetV2( input_dataset, path, reader_func_other_args, shard_func_other_args, output_types, output_shapes, reader_func, shard_func, compression='', reader_prefix='', writer_prefix='', name=None ) This dataset attempts to determine whether a valid snapshot exists at the snapshot_path, and reads from the snapshot in lieu of using input_dataset. If not, it will run the preprocessing pipeline as usual, and write out a snapshot of the data processed for future use. Args input_dataset A Tensor of type variant. A variant tensor representing the input dataset. path A Tensor of type string. The path we should write snapshots to / read snapshots from. reader_func_other_args A list of Tensor objects. shard_func_other_args A list of Tensor objects. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. reader_func A function decorated with @Defun. Optional. A function to control how to read data from snapshot shards. shard_func A function decorated with @Defun. Optional. A function to control how to shard data when writing a snapshot. compression An optional string. Defaults to "". The type of compression to be applied to the saved snapshot files. reader_prefix An optional string. Defaults to "". writer_prefix An optional string. Defaults to "". name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SobolSample Generates points from the Sobol sequence. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SobolSample tf.raw_ops.SobolSample( dim, num_results, skip, dtype=tf.dtypes.float32, name=None ) Creates a Sobol sequence with num_results samples. Each sample has dimension dim. Skips the first skip samples. Args dim A Tensor of type int32. Positive scalar Tensor representing each sample's dimension. num_results A Tensor of type int32. Positive scalar Tensor of dtype int32. The number of Sobol points to return in the output. skip A Tensor of type int32. Positive scalar Tensor of dtype int32. The number of initial points of the Sobol sequence to skip. dtype An optional tf.DType from: tf.float32, tf.float64. Defaults to tf.float32. The type of the sample. One of: float32 or float64. name A name for the operation (optional). Returns A Tensor of type dtype.
tf.raw_ops.Softmax Computes softmax activations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Softmax tf.raw_ops.Softmax( logits, name=None ) For each batch i and class j we have $$softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))$$ Args logits A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 2-D with shape [batch_size, num_classes]. name A name for the operation (optional). Returns A Tensor. Has the same type as logits.
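A row-wise Python sketch of the documented formula (illustrative; subtracting the row max is a standard numerical-stability trick that leaves the result unchanged):

```python
import math

# softmax[j] = exp(row[j]) / sum_j exp(row[j]), computed for one row
# with the row max subtracted first for numerical stability.
def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

print(round(sum(softmax([1.0, 2.0, 3.0])), 6))  # 1.0
```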
tf.raw_ops.SoftmaxCrossEntropyWithLogits Computes softmax cross entropy cost and gradients to backpropagate. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SoftmaxCrossEntropyWithLogits tf.raw_ops.SoftmaxCrossEntropyWithLogits( features, labels, name=None ) Inputs are the logits, not probabilities. Args features A Tensor. Must be one of the following types: half, bfloat16, float32, float64. batch_size x num_classes matrix labels A Tensor. Must have the same type as features. batch_size x num_classes matrix The caller must ensure that each batch of labels represents a valid probability distribution. name A name for the operation (optional). Returns A tuple of Tensor objects (loss, backprop). loss A Tensor. Has the same type as features. backprop A Tensor. Has the same type as features.
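A single-example Python sketch, assuming the well-known closed forms loss = -sum_j labels[j] * log(softmax[j]) and backprop = softmax - labels (the helper names are mine):

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

# For one example: the cross-entropy loss of the softmaxed logits
# against the label distribution, and its gradient w.r.t. the logits.
def xent_with_logits(logits, labels):
    p = softmax(logits)
    loss = -sum(l * math.log(q) for l, q in zip(labels, p))
    backprop = [q - l for q, l in zip(p, labels)]
    return loss, backprop

loss, grad = xent_with_logits([2.0, 1.0, 0.1], [1.0, 0.0, 0.0])
print(round(loss, 3))  # 0.417
```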
tf.raw_ops.Softplus Computes softplus: log(exp(features) + 1). View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Softplus tf.raw_ops.Softplus( features, name=None ) Args features A Tensor. Must be one of the following types: half, bfloat16, float32, float64. name A name for the operation (optional). Returns A Tensor. Has the same type as features.
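A scalar sketch of the documented formula (illustrative only; the op is element-wise):

```python
import math

# softplus(x) = log(exp(x) + 1): approaches x for large x and 0 for
# very negative x, giving a smooth approximation of relu.
def softplus(x):
    return math.log(math.exp(x) + 1.0)

print(softplus(0.0))  # 0.6931471805599453 (log 2)
```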
tf.raw_ops.SoftplusGrad Computes softplus gradients for a softplus operation. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SoftplusGrad tf.raw_ops.SoftplusGrad( gradients, features, name=None ) Args gradients A Tensor. Must be one of the following types: half, bfloat16, float32, float64. The backpropagated gradients to the corresponding softplus operation. features A Tensor. Must have the same type as gradients. The features passed as input to the corresponding softplus operation. name A name for the operation (optional). Returns A Tensor. Has the same type as gradients.
tf.raw_ops.Softsign Computes softsign: features / (abs(features) + 1). View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Softsign tf.raw_ops.Softsign( features, name=None ) Args features A Tensor. Must be one of the following types: half, bfloat16, float32, float64. name A name for the operation (optional). Returns A Tensor. Has the same type as features.
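A scalar sketch of the documented formula (illustrative only; the op is element-wise):

```python
# softsign(x) = x / (|x| + 1), a smooth squashing function with
# outputs in (-1, 1).
def softsign(x):
    return x / (abs(x) + 1.0)

print(softsign(3.0))  # 0.75
```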
tf.raw_ops.SoftsignGrad Computes softsign gradients for a softsign operation. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SoftsignGrad tf.raw_ops.SoftsignGrad( gradients, features, name=None ) Args gradients A Tensor. Must be one of the following types: half, bfloat16, float32, float64. The backpropagated gradients to the corresponding softsign operation. features A Tensor. Must have the same type as gradients. The features passed as input to the corresponding softsign operation. name A name for the operation (optional). Returns A Tensor. Has the same type as gradients.
tf.raw_ops.SpaceToBatch SpaceToBatch for 4-D tensors of type T. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SpaceToBatch tf.raw_ops.SpaceToBatch( input, paddings, block_size, name=None ) This is a legacy version of the more general SpaceToBatchND. Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by the block size. Args input A Tensor. 4-D with shape [batch, height, width, depth]. paddings A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows: paddings = [[pad_top, pad_bottom], [pad_left, pad_right]] The effective spatial dimensions of the zero-padded input tensor will be: height_pad = pad_top + height + pad_bottom width_pad = pad_left + width + pad_right The attr block_size must be greater than one. It indicates the block size. Non-overlapping blocks of size block_size x block_size in the height and width dimensions are rearranged into the batch dimension at each location. The batch of the output tensor is batch * block_size * block_size. Both height_pad and width_pad must be divisible by block_size. 
The shape of the output will be: [batch * block_size * block_size, height_pad/block_size, width_pad/block_size, depth] Some examples: (1) For the following input of shape [1, 2, 2, 1] and block_size of 2: x = [[[[1], [2]], [[3], [4]]]] The output tensor has shape [4, 1, 1, 1] and value: [[[[1]]], [[[2]]], [[[3]]], [[[4]]]] (2) For the following input of shape [1, 2, 2, 3] and block_size of 2: x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]] The output tensor has shape [4, 1, 1, 3] and value: [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] (3) For the following input of shape [1, 4, 4, 1] and block_size of 2: x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]] The output tensor has shape [4, 2, 2, 1] and value: x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]] (4) For the following input of shape [2, 2, 4, 1] and block_size of 2: x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]] The output tensor has shape [8, 1, 2, 1] and value: x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]] Among others, this operation is useful for reducing atrous convolution into regular convolution. block_size An int that is >= 2. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
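The pad/reshape/transpose steps above can be sketched with NumPy (an illustrative re-implementation, not the op itself); it reproduces example (1):

```python
import numpy as np

# NumPy sketch of SpaceToBatch for NHWC input: zero-pad the spatial
# dimensions, split H and W into blocks, and move the within-block
# coordinates (bY, bX) into the batch dimension.
def space_to_batch(x, block_size, paddings=((0, 0), (0, 0))):
    x = np.pad(x, ((0, 0), paddings[0], paddings[1], (0, 0)))
    n, h, w, c = x.shape
    b = block_size
    y = x.reshape(n, h // b, b, w // b, b, c)
    y = y.transpose(2, 4, 0, 1, 3, 5)       # (bY, bX, n, oY, oX, c)
    return y.reshape(n * b * b, h // b, w // b, c)

x = np.array([[[[1], [2]], [[3], [4]]]])    # shape [1, 2, 2, 1]
print(space_to_batch(x, 2).reshape(-1))     # [1 2 3 4]
```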
tf.raw_ops.SpaceToBatchND SpaceToBatch for N-D tensors of type T. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SpaceToBatchND tf.raw_ops.SpaceToBatchND( input, block_shape, paddings, name=None ) This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description. Args input A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions. block_shape A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1. paddings A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. paddings[i] = [pad_start, pad_end] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that block_shape[i] divides input_shape[i + 1] + pad_start + pad_end. This operation is equivalent to the following steps: Zero-pad the start and end of dimensions [1, ..., M] of the input according to paddings to produce padded of shape padded_shape. 
Reshape padded to reshaped_padded of shape: [batch] + [padded_shape[1] / block_shape[0], block_shape[0], ..., padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape Permute dimensions of reshaped_padded to produce permuted_reshaped_padded of shape: block_shape + [batch] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape Reshape permuted_reshaped_padded to flatten block_shape into the batch dimension, producing an output tensor of shape: [batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape Some examples: (1) For the following input of shape [1, 2, 2, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]: x = [[[[1], [2]], [[3], [4]]]] The output tensor has shape [4, 1, 1, 1] and value: [[[[1]]], [[[2]]], [[[3]]], [[[4]]]] (2) For the following input of shape [1, 2, 2, 3], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]: x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]] The output tensor has shape [4, 1, 1, 3] and value: [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] (3) For the following input of shape [1, 4, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]: x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]] The output tensor has shape [4, 2, 2, 1] and value: x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]] (4) For the following input of shape [2, 2, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [2, 0]]: x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]] The output tensor has shape [8, 1, 3, 1] and value: x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]] Among others, this operation is useful for 
reducing atrous convolution into regular convolution. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tf.raw_ops.SpaceToDepth SpaceToDepth for tensors of type T. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SpaceToDepth tf.raw_ops.SpaceToDepth( input, block_size, data_format='NHWC', name=None ) Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The attr block_size indicates the input block size. Non-overlapping blocks of size block_size x block_size are rearranged into depth at each location. The depth of the output tensor is block_size * block_size * input_depth. The Y, X coordinates within each block of the input become the high order component of the output channel index. The input tensor's height and width must be divisible by block_size. The data_format attr specifies the layout of the input and output tensors with the following options: "NHWC": [ batch, height, width, channels ] "NCHW": [ batch, channels, height, width ] "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ] It is useful to consider the operation as transforming a 6-D Tensor. e.g., for data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates within the output image, bX, bY means coordinates within the input block, iC means input channels). The output would be a transpose to the following layout: n,oY,oX,bY,bX,iC This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models. 
For example, given an input of shape [1, 2, 2, 1], data_format = "NHWC" and block_size = 2: x = [[[[1], [2]], [[3], [4]]]] This operation will output a tensor of shape [1, 1, 1, 4]: [[[[1, 2, 3, 4]]]] Here, the input has a batch of 1 and each batch element has shape [2, 2, 1]; the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 * block_size * block_size). The output element shape is [1, 1, 4]. For an input tensor with larger depth, here of shape [1, 2, 2, 3], e.g. x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]] This operation, for block_size of 2, will return the following tensor of shape [1, 1, 1, 12] [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] Similarly, for the following input of shape [1, 4, 4, 1], and a block size of 2: x = [[[[1], [2], [5], [6]], [[3], [4], [7], [8]], [[9], [10], [13], [14]], [[11], [12], [15], [16]]]] the operator will return the following tensor of shape [1, 2, 2, 4]: x = [[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]] Args input A Tensor. block_size An int that is >= 2. The size of the spatial block. data_format An optional string from: "NHWC", "NCHW", "NCHW_VECT_C". Defaults to "NHWC". name A name for the operation (optional). Returns A Tensor. Has the same type as input.
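The 6-D transpose view described above (n,oY,bY,oX,bX,iC → n,oY,oX,bY,bX,iC) can be written directly in NumPy for the NHWC case. `space_to_depth` is a hypothetical helper name, not the TF kernel:

```python
import numpy as np

def space_to_depth(x, block_size):
    # NHWC sketch of the 6-D view: split H and W into (outer, block),
    # swap the block dims next to channels, then fold bY, bX, iC into depth.
    n, h, w, c = x.shape
    b = block_size
    y = x.reshape(n, h // b, b, w // b, b, c)   # n, oY, bY, oX, bX, iC
    y = y.transpose(0, 1, 3, 2, 4, 5)           # n, oY, oX, bY, bX, iC
    return y.reshape(n, h // b, w // b, b * b * c)

x = np.array([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]
print(space_to_depth(x, 2))  # [[[[1 2 3 4]]]]
```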
tensorflow.raw_ops.spacetodepth
tf.raw_ops.SparseAccumulatorApplyGradient Applies a sparse gradient to a given accumulator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseAccumulatorApplyGradient tf.raw_ops.SparseAccumulatorApplyGradient( handle, local_step, gradient_indices, gradient_values, gradient_shape, has_known_shape, name=None ) Does not add if local_step is smaller than the accumulator's global_step. Args handle A Tensor of type mutable string. The handle to an accumulator. local_step A Tensor of type int64. The local_step value at which the sparse gradient was computed. gradient_indices A Tensor of type int64. Indices of the sparse gradient to be accumulated. Must be a vector. gradient_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Values are the non-zero slices of the gradient, and must have the same first dimension as indices, i.e., the nnz represented by indices and values must be consistent. gradient_shape A Tensor of type int64. Shape of the sparse gradient to be accumulated. has_known_shape A bool. Boolean indicating whether gradient_shape is unknown, in which case the input is ignored during validation. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.sparseaccumulatorapplygradient
tf.raw_ops.SparseAccumulatorTakeGradient Extracts the average sparse gradient in a SparseConditionalAccumulator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseAccumulatorTakeGradient tf.raw_ops.SparseAccumulatorTakeGradient( handle, num_required, dtype, name=None ) The op blocks until sufficient (i.e., more than num_required) gradients have been accumulated. If the accumulator has already aggregated more than num_required gradients, it returns the average of the accumulated gradients. It also automatically increments the recorded global_step in the accumulator by 1, and resets the aggregate to 0. Args handle A Tensor of type mutable string. The handle to a SparseConditionalAccumulator. num_required A Tensor of type int32. Number of gradients required before we return an aggregate. dtype A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64. The data type of accumulated gradients. Needs to correspond to the type of the accumulator. name A name for the operation (optional). Returns A tuple of Tensor objects (indices, values, shape). indices A Tensor of type int64. values A Tensor of type dtype. shape A Tensor of type int64.
tensorflow.raw_ops.sparseaccumulatortakegradient
tf.raw_ops.SparseAdd Adds two SparseTensor objects to produce another SparseTensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseAdd tf.raw_ops.SparseAdd( a_indices, a_values, a_shape, b_indices, b_values, b_shape, thresh, name=None ) The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, before this step run SparseReorder to restore index ordering. By default, if two values sum to zero at some index, the output SparseTensor would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify thresh, indicating that if the sum has a magnitude strictly smaller than thresh, its corresponding value and index would then not be included. In particular, thresh == 0 (default) means everything is kept and actual thresholding happens only for a positive value. In the following shapes, nnz is the count after taking thresh into account. Args a_indices A Tensor of type int64. 2-D. The indices of the first SparseTensor, size [nnz, ndims] Matrix. a_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. The values of the first SparseTensor, size [nnz] Vector. a_shape A Tensor of type int64. 1-D. The shape of the first SparseTensor, size [ndims] Vector. b_indices A Tensor of type int64. 2-D. The indices of the second SparseTensor, size [nnz, ndims] Matrix. b_values A Tensor. Must have the same type as a_values. 1-D. The values of the second SparseTensor, size [nnz] Vector. b_shape A Tensor of type int64. 1-D. The shape of the second SparseTensor, size [ndims] Vector. thresh A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 0-D. 
The magnitude threshold that determines if an output value/index pair takes space. name A name for the operation (optional). Returns A tuple of Tensor objects (sum_indices, sum_values, sum_shape). sum_indices A Tensor of type int64. sum_values A Tensor. Has the same type as a_values. sum_shape A Tensor of type int64.
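The merge-and-threshold semantics above can be sketched in pure Python. `sparse_add` is a hypothetical helper that sums two COO inputs and applies the documented thresh rule (an entry is dropped only if its magnitude is strictly smaller than thresh, so thresh == 0 keeps exact zeros):

```python
def sparse_add(a_indices, a_values, b_indices, b_values, thresh=0):
    # Accumulate values by index, then keep entries whose magnitude is
    # not strictly below thresh, in lexicographic index order.
    acc = {}
    for idx, v in list(zip(a_indices, a_values)) + list(zip(b_indices, b_values)):
        acc[tuple(idx)] = acc.get(tuple(idx), 0) + v
    kept = sorted((k, v) for k, v in acc.items() if not abs(v) < thresh)
    return [list(k) for k, _ in kept], [v for _, v in kept]

idx, vals = sparse_add([[0, 1], [1, 0]], [3.0, -2.0],
                       [[0, 1], [1, 1]], [1.0, 5.0])
print(idx, vals)  # [[0, 1], [1, 0], [1, 1]] [4.0, -2.0, 5.0]
```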
tensorflow.raw_ops.sparseadd
tf.raw_ops.SparseAddGrad The gradient operator for the SparseAdd op. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseAddGrad tf.raw_ops.SparseAddGrad( backprop_val_grad, a_indices, b_indices, sum_indices, name=None ) The SparseAdd op calculates A + B, where A, B, and the sum are all represented as SparseTensor objects. This op takes in the upstream gradient w.r.t. non-empty values of the sum, and outputs the gradients w.r.t. the non-empty values of A and B. Args backprop_val_grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D with shape [nnz(sum)]. The gradient with respect to the non-empty values of the sum. a_indices A Tensor of type int64. 2-D. The indices of the SparseTensor A, size [nnz(A), ndims]. b_indices A Tensor of type int64. 2-D. The indices of the SparseTensor B, size [nnz(B), ndims]. sum_indices A Tensor of type int64. 2-D. The indices of the sum SparseTensor, size [nnz(sum), ndims]. name A name for the operation (optional). Returns A tuple of Tensor objects (a_val_grad, b_val_grad). a_val_grad A Tensor. Has the same type as backprop_val_grad. b_val_grad A Tensor. Has the same type as backprop_val_grad.
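The gradient routing described above can be sketched in pure Python: each input value receives the upstream gradient at the matching index of the sum, or 0 if that index does not appear in sum_indices (e.g. it was thresholded away). `sparse_add_grad` is a hypothetical helper name:

```python
def sparse_add_grad(backprop_val_grad, a_indices, b_indices, sum_indices):
    # Map each sum index to its position, then gather the upstream gradient
    # for every index of A and B (0.0 where the index is absent from the sum).
    pos = {tuple(i): k for k, i in enumerate(sum_indices)}
    route = lambda idxs: [backprop_val_grad[pos[tuple(i)]] if tuple(i) in pos else 0.0
                          for i in idxs]
    return route(a_indices), route(b_indices)

a_grad, b_grad = sparse_add_grad([0.1, 0.2, 0.3],
                                 [[0, 1], [1, 0]], [[0, 1], [1, 1]],
                                 [[0, 1], [1, 0], [1, 1]])
print(a_grad, b_grad)  # [0.1, 0.2] [0.1, 0.3]
```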
tensorflow.raw_ops.sparseaddgrad
tf.raw_ops.SparseApplyAdadelta var: Should be from a Variable(). View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyAdadelta tf.raw_ops.SparseApplyAdadelta( var, accum, accum_update, lr, rho, epsilon, grad, indices, use_locking=False, name=None ) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). accum_update A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. rho A Tensor. Must have the same type as var. Decay factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Constant factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
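The page above lists the inputs but no update equations, so the following is a NumPy sketch of the standard Adadelta rule applied only to the gathered rows — an assumption about what the op computes, not taken from this page, and `sparse_apply_adadelta` is a hypothetical helper name:

```python
import numpy as np

def sparse_apply_adadelta(var, accum, accum_update, lr, rho, eps, grad, indices):
    # Assumed standard Adadelta, per gathered row i: decay the squared-grad
    # accumulator, scale the step by the RMS of past updates, then decay the
    # squared-update accumulator. Rows not in `indices` are untouched.
    for g, i in zip(grad, indices):
        accum[i] = rho * accum[i] + (1 - rho) * g * g
        update = np.sqrt(accum_update[i] + eps) / np.sqrt(accum[i] + eps) * g
        var[i] -= lr * update
        accum_update[i] = rho * accum_update[i] + (1 - rho) * update * update
    return var

var = np.array([1.0, 1.0])
accum, accum_update = np.array([0.0, 0.0]), np.array([0.0, 0.0])
sparse_apply_adadelta(var, accum, accum_update, lr=1.0, rho=0.9, eps=1e-6,
                      grad=np.array([1.0]), indices=[0])  # only row 0 moves
```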
tensorflow.raw_ops.sparseapplyadadelta
tf.raw_ops.SparseApplyAdagrad Update relevant entries in 'var' and 'accum' according to the adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyAdagrad tf.raw_ops.SparseApplyAdagrad( var, accum, lr, grad, indices, use_locking=False, update_slots=True, name=None ) That is for rows we have grad for, we update var and accum as follows: $$accum += grad * grad$$ $$var -= lr * grad * (1 / sqrt(accum))$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. update_slots An optional bool. Defaults to True. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
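The two update lines above can be transcribed into a NumPy sketch that touches only the rows named in indices (the sparse part of the op); `sparse_apply_adagrad` is a hypothetical helper name:

```python
import numpy as np

def sparse_apply_adagrad(var, accum, lr, grad, indices):
    # accum += grad * grad; var -= lr * grad / sqrt(accum), per gathered row.
    for g, i in zip(grad, indices):
        accum[i] += g * g
        var[i] -= lr * g / np.sqrt(accum[i])
    return var, accum

var = np.array([1.0, 1.0, 1.0])
accum = np.array([0.1, 0.1, 0.1])
sparse_apply_adagrad(var, accum, lr=0.1, grad=np.array([1.0, 2.0]), indices=[0, 2])
print(accum)  # row 1 is untouched
```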
tensorflow.raw_ops.sparseapplyadagrad
tf.raw_ops.SparseApplyAdagradDA Update entries in 'var' and 'accum' according to the proximal adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyAdagradDA tf.raw_ops.SparseApplyAdagradDA( var, gradient_accumulator, gradient_squared_accumulator, grad, indices, lr, l1, l2, global_step, use_locking=False, name=None ) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). gradient_accumulator A mutable Tensor. Must have the same type as var. Should be from a Variable(). gradient_squared_accumulator A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. global_step A Tensor of type int64. Training step number. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.sparseapplyadagradda
tf.raw_ops.SparseApplyAdagradV2 Update relevant entries in 'var' and 'accum' according to the adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyAdagradV2 tf.raw_ops.SparseApplyAdagradV2( var, accum, lr, epsilon, grad, indices, use_locking=False, update_slots=True, name=None ) That is for rows we have grad for, we update var and accum as follows: $$accum += grad * grad$$ $$var -= lr * grad * (1 / sqrt(accum))$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. epsilon A Tensor. Must have the same type as var. Constant factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. update_slots An optional bool. Defaults to True. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.sparseapplyadagradv2
tf.raw_ops.SparseApplyCenteredRMSProp Update '*var' according to the centered RMSProp algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyCenteredRMSProp tf.raw_ops.SparseApplyCenteredRMSProp( var, mg, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False, name=None ) The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 mean_grad = decay * mean_grad + (1-decay) * gradient Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) $$ms \leftarrow rho * ms_{t-1} + (1-rho) * grad * grad$$ $$mom \leftarrow momentum * mom_{t-1} + lr * grad / \sqrt{ms + epsilon - mg^2}$$ $$var \leftarrow var - mom$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). mg A mutable Tensor. Must have the same type as var. Should be from a Variable(). ms A mutable Tensor. Must have the same type as var. Should be from a Variable(). mom A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar. momentum A Tensor. Must have the same type as var. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. 
A vector of indices into the first dimension of var, ms and mom. use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license. Last updated 2021-02-18 UTC. Stay connected Blog GitHub Twitter YouTube Support Issue tracker Release notes Stack Overflow Brand guidelines Cite TensorFlow Terms Privacy Sign up for the TensorFlow monthly newsletter Subscribe Language English 中文 – 简体
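The mean_square/mean_grad/Delta recurrences above can be combined into a single-row NumPy sketch of the centered update; `centered_rmsprop_row` is a hypothetical helper (the variance estimate ms - mg**2 is what distinguishes it from plain RMSProp):

```python
import numpy as np

def centered_rmsprop_row(var, mg, ms, mom, lr, rho, momentum, eps, g):
    # Decay the second moment and the first moment, step by the gradient
    # normalized by the centered variance estimate, then apply momentum.
    ms = rho * ms + (1 - rho) * g * g
    mg = rho * mg + (1 - rho) * g
    mom = momentum * mom + lr * g / np.sqrt(ms - mg * mg + eps)
    var = var - mom
    return var, mg, ms, mom

var, mg, ms, mom = centered_rmsprop_row(1.0, 0.0, 0.0, 0.0,
                                        lr=0.1, rho=0.9, momentum=0.0,
                                        eps=1e-6, g=1.0)
```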
tensorflow.raw_ops.sparseapplycenteredrmsprop
tf.raw_ops.SparseApplyFtrl Update relevant entries in '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyFtrl tf.raw_ops.SparseApplyFtrl( var, accum, linear, grad, indices, lr, l1, l2, lr_power, use_locking=False, multiply_linear_by_lr=False, name=None ) That is, for rows we have grad for, we update var, accum and linear as follows: $$accum_{new} = accum + grad * grad$$ $$linear += grad - (accum_{new}^{-lr_{power}} - accum^{-lr_{power}}) / lr * var$$ $$quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2$$ $$var = (sign(linear) * l1 - linear) / quadratic\ if\ |linear| > l1\ else\ 0.0$$ $$accum = accum_{new}$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). linear A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. lr_power A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. multiply_linear_by_lr An optional bool. Defaults to False. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.sparseapplyftrl
tf.raw_ops.SparseApplyFtrlV2 Update relevant entries in '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyFtrlV2 tf.raw_ops.SparseApplyFtrlV2( var, accum, linear, grad, indices, lr, l1, l2, l2_shrinkage, lr_power, use_locking=False, multiply_linear_by_lr=False, name=None ) That is for rows we have grad for, we update var, accum and linear as follows: grad_with_shrinkage = grad + 2 * l2_shrinkage * var accum_new = accum + grad * grad linear += grad_with_shrinkage - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 accum = accum_new Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). linear A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 shrinkage regularization. Must be a scalar. l2_shrinkage A Tensor. Must have the same type as var. lr_power A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. multiply_linear_by_lr An optional bool. 
Defaults to False. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
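The FtrlV2 equations above can be transcribed for a single gathered row in NumPy; `ftrl_v2_row` is a hypothetical helper (lr_power = -0.5 gives the usual sqrt learning-rate decay):

```python
import numpy as np

def ftrl_v2_row(var, accum, linear, g, lr, l1, l2, l2_shrinkage, lr_power):
    # Direct transcription of the documented update for one row: shrinkage
    # gradient, squared-grad accumulator, linear term, then the closed-form
    # soft-thresholded solve for var.
    grad_shrink = g + 2 * l2_shrinkage * var
    accum_new = accum + g * g
    linear += grad_shrink - (accum_new ** -lr_power - accum ** -lr_power) / lr * var
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
    var = (np.sign(linear) * l1 - linear) / quadratic if abs(linear) > l1 else 0.0
    return var, accum_new, linear

var, accum_new, linear = ftrl_v2_row(0.0, 0.1, 0.0, g=1.0, lr=0.5, l1=0.1,
                                     l2=0.0, l2_shrinkage=0.0, lr_power=-0.5)
```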
tensorflow.raw_ops.sparseapplyftrlv2
tf.raw_ops.SparseApplyMomentum Update relevant entries in 'var' and 'accum' according to the momentum scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyMomentum tf.raw_ops.SparseApplyMomentum( var, accum, lr, grad, indices, momentum, use_locking=False, use_nesterov=False, name=None ) Set use_nesterov = True if you want to use Nesterov momentum. That is for rows we have grad for, we update var and accum as follows: $$accum = accum * momentum + grad$$ $$var -= lr * accum$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. momentum A Tensor. Must have the same type as var. Momentum. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
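The two update lines above can be sketched in NumPy for the gathered rows; `sparse_apply_momentum` is a hypothetical helper (with use_nesterov the gradient would instead be evaluated at var - lr * momentum * accum, not shown here):

```python
import numpy as np

def sparse_apply_momentum(var, accum, lr, grad, indices, momentum):
    # accum = accum * momentum + grad; var -= lr * accum, per gathered row.
    for g, i in zip(grad, indices):
        accum[i] = accum[i] * momentum + g
        var[i] -= lr * accum[i]
    return var, accum

var = np.array([1.0, 1.0])
accum = np.array([0.5, 0.0])
sparse_apply_momentum(var, accum, lr=0.1, grad=np.array([1.0]),
                      indices=[0], momentum=0.9)  # row 1 is untouched
```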
tensorflow.raw_ops.sparseapplymomentum
tf.raw_ops.SparseApplyProximalAdagrad Sparse update entries in 'var' and 'accum' according to FOBOS algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyProximalAdagrad tf.raw_ops.SparseApplyProximalAdagrad( var, accum, lr, l1, l2, grad, indices, use_locking=False, name=None ) That is for rows we have grad for, we update var and accum as follows: $$accum += grad * grad$$ $$prox_v = var$$ $$prox_v -= lr * grad * (1 / sqrt(accum))$$ $$var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
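The four update lines above — an Adagrad step followed by the proximal (soft-thresholding) projection — can be transcribed for one gathered row; `proximal_adagrad_row` is a hypothetical helper name:

```python
import numpy as np

def proximal_adagrad_row(var, accum, lr, l1, l2, g):
    # accum += g*g; take an Adagrad step to prox_v; then shrink toward zero
    # by lr*l1 and scale by 1/(1 + lr*l2).
    accum += g * g
    prox_v = var - lr * g / np.sqrt(accum)
    var = np.sign(prox_v) / (1 + lr * l2) * max(abs(prox_v) - lr * l1, 0.0)
    return var, accum

var, accum = proximal_adagrad_row(1.0, 1.0, lr=0.5, l1=0.1, l2=0.0, g=1.0)
```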
tensorflow.raw_ops.sparseapplyproximaladagrad
tf.raw_ops.SparseApplyProximalGradientDescent Sparse update '*var' as FOBOS algorithm with fixed learning rate. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyProximalGradientDescent tf.raw_ops.SparseApplyProximalGradientDescent( var, alpha, l1, l2, grad, indices, use_locking=False, name=None ) That is for rows we have grad for, we update var as follows: $$prox_v = var - alpha * grad$$ $$var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). alpha A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.sparseapplyproximalgradientdescent
tf.raw_ops.SparseApplyRMSProp Update '*var' according to the RMSProp algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyRMSProp tf.raw_ops.SparseApplyRMSProp( var, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False, name=None ) Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 Delta = learning_rate * gradient / sqrt(mean_square + epsilon) $$ms \leftarrow rho * ms_{t-1} + (1-rho) * grad * grad$$ $$mom \leftarrow momentum * mom_{t-1} + lr * grad / \sqrt{ms + epsilon}$$ $$var \leftarrow var - mom$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). ms A mutable Tensor. Must have the same type as var. Should be from a Variable(). mom A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar. momentum A Tensor. Must have the same type as var. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms and mom. use_locking An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var. 
tensorflow.raw_ops.sparseapplyrmsprop
tf.raw_ops.SparseBincount Counts the number of occurrences of each value in an integer array. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseBincount tf.raw_ops.SparseBincount( indices, values, dense_shape, size, weights, binary_output=False, name=None ) Outputs a vector with length size and the same dtype as weights. If weights are empty, then index i stores the number of times the value i is counted in arr. If weights are non-empty, then index i stores the sum of the value in weights at each index where the corresponding value in arr is i. Values in arr outside of the range [0, size) are ignored. Args indices A Tensor of type int64. 2D int64 Tensor. values A Tensor. Must be one of the following types: int32, int64. 1D int Tensor. dense_shape A Tensor of type int64. 1D int64 Tensor. size A Tensor. Must have the same type as values. non-negative int scalar Tensor. weights A Tensor. Must be one of the following types: int32, int64, float32, float64. is an int32, int64, float32, or float64 Tensor with the same shape as input, or a length-0 Tensor, in which case it acts as all weights equal to 1. binary_output An optional bool. Defaults to False. bool; Whether the kernel should count the appearance or number of occurrences. name A name for the operation (optional). Returns A Tensor. Has the same type as weights.
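The counting rule above can be sketched for a flat list of values; `sparse_bincount` is a hypothetical helper: entries outside [0, size) are ignored, empty weights mean "count occurrences", and binary_output records presence instead of counts:

```python
import numpy as np

def sparse_bincount(values, size, weights=(), binary_output=False):
    # Length-`size` output; weight each occurrence, or just count it when
    # no weights are supplied; clamp nothing, simply skip out-of-range values.
    out = np.zeros(size)
    for j, v in enumerate(values):
        if 0 <= v < size:
            if binary_output:
                out[v] = 1
            else:
                out[v] += weights[j] if len(weights) else 1
    return out

print(sparse_bincount([1, 1, 3, 7], 5))  # [0. 2. 0. 1. 0.]  (7 is ignored)
```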
tensorflow.raw_ops.sparsebincount
tf.raw_ops.SparseConcat Concatenates a list of SparseTensor along the specified dimension. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseConcat tf.raw_ops.SparseConcat( indices, values, shapes, concat_dim, name=None ) Concatenation is with respect to the dense versions of these sparse tensors. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number. All inputs' shapes must match, except for the concat dimension. The indices, values, and shapes lists must have the same length. The output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension. The output elements will be resorted to preserve the sort order along increasing dimension number. This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension. For example, if concat_dim = 1 and the inputs are sp_inputs[0]: shape = [2, 3] [0, 2]: "a" [1, 0]: "b" [1, 1]: "c" sp_inputs[1]: shape = [2, 4] [0, 1]: "d" [0, 2]: "e" then the output will be shape = [2, 7] [0, 2]: "a" [0, 4]: "d" [0, 5]: "e" [1, 0]: "b" [1, 1]: "c" Graphically this is equivalent to doing [ a] concat [ d e ] = [ a d e ] [b c ] [ ] [b c ] Args indices A list of at least 2 Tensor objects with type int64. 2-D. Indices of each input SparseTensor. values A list with the same length as indices of Tensor objects with the same type. 1-D. Non-empty values of each SparseTensor. shapes A list with the same length as indices of Tensor objects with type int64. 1-D. Shapes of each SparseTensor. concat_dim An int. Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor. name A name for the operation (optional). 
Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64. output_values A Tensor. Has the same type as values. output_shape A Tensor of type int64.
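The concat_dim = 1 example above can be reproduced with a pure-Python sketch: offset each input's indices along the concat dimension, merge, and re-sort lexicographically (`sparse_concat` is a hypothetical helper, not the kernel, which does this in O(M log M)):

```python
def sparse_concat(indices_list, values_list, shapes_list, concat_dim):
    # Shift each successive input's indices by the running size along
    # concat_dim, then re-sort to restore increasing-dimension order.
    out, offset = [], 0
    for inds, vals, shp in zip(indices_list, values_list, shapes_list):
        for idx, v in zip(inds, vals):
            idx = list(idx)
            idx[concat_dim] += offset
            out.append((idx, v))
        offset += shp[concat_dim]
    out.sort(key=lambda p: p[0])
    shape = list(shapes_list[0])
    shape[concat_dim] = offset
    return [i for i, _ in out], [v for _, v in out], shape

inds, vals, shp = sparse_concat(
    [[[0, 2], [1, 0], [1, 1]], [[0, 1], [0, 2]]],
    [["a", "b", "c"], ["d", "e"]],
    [[2, 3], [2, 4]], concat_dim=1)
print(shp)         # [2, 7]
print(inds, vals)  # [[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]] ['a', 'd', 'e', 'b', 'c']
```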
tf.raw_ops.SparseConditionalAccumulator A conditional accumulator for aggregating sparse gradients. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseConditionalAccumulator tf.raw_ops.SparseConditionalAccumulator( dtype, shape, container='', shared_name='', reduction_type='MEAN', name=None ) The accumulator accepts gradients marked with local_step greater or equal to the most recent global_step known to the accumulator. The average can be extracted from the accumulator, provided sufficient gradients have been accumulated. Extracting the average automatically resets the aggregate to 0, and increments the global_step recorded by the accumulator. Args dtype A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64. The type of the value being accumulated. shape A tf.TensorShape or list of ints. The shape of the values. container An optional string. Defaults to "". If non-empty, this accumulator is placed in the given container. Otherwise, a default container is used. shared_name An optional string. Defaults to "". If non-empty, this accumulator will be shared under the given name across multiple sessions. reduction_type An optional string from: "MEAN", "SUM". Defaults to "MEAN". name A name for the operation (optional). Returns A Tensor of type mutable string.
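The accumulator's bookkeeping (stale-gradient rejection, MEAN extraction, and the reset/increment on extraction) can be sketched with a minimal pure-Python class. This is a hypothetical illustration of the semantics, not the TF kernel, and it aggregates a scalar rather than a sparse gradient:

```python
class SparseGradAccumulatorSketch:
    """Illustrative model of the accumulator's control flow: gradients
    stamped with a stale local_step are dropped, and taking the MEAN
    resets the aggregate and increments global_step."""
    def __init__(self):
        self.global_step = 0
        self.total = 0.0
        self.count = 0

    def apply_grad(self, grad, local_step):
        if local_step < self.global_step:  # stale gradient: rejected
            return False
        self.total += grad
        self.count += 1
        return True

    def take_grad(self, num_required):
        if self.count < num_required:
            raise ValueError("not enough gradients accumulated")
        mean = self.total / self.count
        self.total, self.count = 0.0, 0    # extraction resets aggregate
        self.global_step += 1              # and increments global_step
        return mean

acc = SparseGradAccumulatorSketch()
acc.apply_grad(2.0, local_step=0)
acc.apply_grad(4.0, local_step=0)
mean = acc.take_grad(num_required=2)                # 3.0; aggregate reset
stale_accepted = acc.apply_grad(1.0, local_step=0)  # now stale: rejected
```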
tf.raw_ops.SparseCountSparseOutput Performs sparse-output bin counting for a sparse tensor input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseCountSparseOutput tf.raw_ops.SparseCountSparseOutput( indices, values, dense_shape, weights, binary_output, minlength=-1, maxlength=-1, name=None ) Counts the number of times each value occurs in the input. Args indices A Tensor of type int64. Tensor containing the indices of the sparse tensor to count. values A Tensor. Must be one of the following types: int32, int64. Tensor containing values of the sparse tensor to count. dense_shape A Tensor of type int64. Tensor containing the dense shape of the sparse tensor to count. weights A Tensor. Must be one of the following types: int32, int64, float32, float64. A Tensor of the same shape as indices containing per-index weight values. May also be the empty tensor if no weights are used. binary_output A bool. Whether to output the number of occurrences of each value or 1. minlength An optional int that is >= -1. Defaults to -1. Minimum value to count. Can be set to -1 for no minimum. maxlength An optional int that is >= -1. Defaults to -1. Maximum value to count. Can be set to -1 for no maximum. name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, output_dense_shape). output_indices A Tensor of type int64. output_values A Tensor. Has the same type as weights. output_dense_shape A Tensor of type int64.
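The counting semantics can be sketched in pure Python over the values vector alone. The helper name and the exact treatment of `minlength`/`maxlength` here are our reading of the parameter descriptions above, shown for illustration:

```python
def sparse_count_sketch(values, weights=None, binary_output=False,
                        minlength=-1, maxlength=-1):
    """Sketch of sparse-output bin counting: per-value occurrence counts
    (or weighted sums), emitted as an (indices, values, dense_shape)
    triple over the value domain."""
    counts = {}
    for i, v in enumerate(values):
        if maxlength != -1 and v >= maxlength:
            continue                       # value beyond the max bin
        w = weights[i] if weights is not None else 1
        if binary_output:
            counts[v] = 1                  # presence only
        else:
            counts[v] = counts.get(v, 0) + w
    size = max(counts, default=-1) + 1
    if minlength != -1:
        size = max(size, minlength)
    keys = sorted(counts)
    return [[k] for k in keys], [counts[k] for k in keys], [size]

idx, vals, shape = sparse_count_sketch([1, 2, 2, 4])
```

For the input `[1, 2, 2, 4]` this yields sparse indices `[[1], [2], [4]]`, counts `[1, 2, 1]`, and dense shape `[5]`.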
tf.raw_ops.SparseCross Generates sparse cross from a list of sparse and dense tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseCross tf.raw_ops.SparseCross( indices, values, shapes, dense_inputs, hashed_output, num_buckets, hash_key, out_type, internal_type, name=None ) The op takes two lists, one of 2D SparseTensor and one of 2D Tensor, each representing features of one feature column. It outputs a 2D SparseTensor with the batchwise crosses of these features. For example, if the inputs are inputs[0]: SparseTensor with shape = [2, 2] [0, 0]: "a" [1, 0]: "b" [1, 1]: "c" inputs[1]: SparseTensor with shape = [2, 1] [0, 0]: "d" [1, 0]: "e" inputs[2]: Tensor [["f"], ["g"]] then the output will be shape = [2, 2] [0, 0]: "a_X_d_X_f" [1, 0]: "b_X_e_X_g" [1, 1]: "c_X_e_X_g" if hashed_output=true then the output will be shape = [2, 2] [0, 0]: FingerprintCat64( Fingerprint64("f"), FingerprintCat64( Fingerprint64("d"), Fingerprint64("a"))) [1, 0]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("b"))) [1, 1]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("c"))) Args indices A list of Tensor objects with type int64. 2-D. Indices of each input SparseTensor. values A list of Tensor objects with types from: int64, string. 1-D. values of each SparseTensor. shapes A list with the same length as indices of Tensor objects with type int64. 1-D. Shapes of each SparseTensor. dense_inputs A list of Tensor objects with types from: int64, string. 2-D. Columns represented by dense Tensor. hashed_output A bool. If true, returns the hash of the cross instead of the string. This avoids string manipulations. num_buckets An int that is >= 0. It is used if hashed_output is true. output = hashed_value % num_buckets if num_buckets > 0 else hashed_value. hash_key An int.
Specify the hash_key that will be used by the FingerprintCat64 function to combine the crosses fingerprints. out_type A tf.DType from: tf.int64, tf.string. internal_type A tf.DType from: tf.int64, tf.string. name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64. output_values A Tensor of type out_type. output_shape A Tensor of type int64.
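The string form of the cross shown in the example can be sketched in pure Python. The helper name `sparse_cross_str`, the `sep` parameter, and the row-wise input layout are illustrative, not the op's API; the raw op joins with `"_X_"`:

```python
from itertools import product

def sparse_cross_str(batch_features, sep="_X_"):
    """Sketch of the string cross: for each batch row, take the
    cartesian product of that row's feature values across all columns
    and join with a separator."""
    out = []
    for row in batch_features:            # row: one value-list per column
        out.append([sep.join(combo) for combo in product(*row)])
    return out

# Batch rows assembled from the example above.
crossed = sparse_cross_str([
    [["a"], ["d"], ["f"]],                # batch 0
    [["b", "c"], ["e"], ["g"]],           # batch 1
])
```

This reproduces the documented output rows `["a_X_d_X_f"]` and `["b_X_e_X_g", "c_X_e_X_g"]`.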
tf.raw_ops.SparseCrossHashed Generates sparse cross from a list of sparse and dense tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseCrossHashed tf.raw_ops.SparseCrossHashed( indices, values, shapes, dense_inputs, num_buckets, strong_hash, salt, name=None ) The op takes two lists, one of 2D SparseTensor and one of 2D Tensor, each representing features of one feature column. It outputs a 2D SparseTensor with the batchwise crosses of these features. For example, if the inputs are inputs[0]: SparseTensor with shape = [2, 2] [0, 0]: "a" [1, 0]: "b" [1, 1]: "c" inputs[1]: SparseTensor with shape = [2, 1] [0, 0]: "d" [1, 0]: "e" inputs[2]: Tensor [["f"], ["g"]] then the output will be shape = [2, 2] [0, 0]: "a_X_d_X_f" [1, 0]: "b_X_e_X_g" [1, 1]: "c_X_e_X_g" if hashed_output=true then the output will be shape = [2, 2] [0, 0]: FingerprintCat64( Fingerprint64("f"), FingerprintCat64( Fingerprint64("d"), Fingerprint64("a"))) [1, 0]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("b"))) [1, 1]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("c"))) Args indices A list of Tensor objects with type int64. 2-D. Indices of each input SparseTensor. values A list of Tensor objects with types from: int64, string. 1-D. values of each SparseTensor. shapes A list with the same length as indices of Tensor objects with type int64. 1-D. Shapes of each SparseTensor. dense_inputs A list of Tensor objects with types from: int64, string. 2-D. Columns represented by dense Tensor. num_buckets A Tensor of type int64. It is used if hashed_output is true. output = hashed_value%num_buckets if num_buckets > 0 else hashed_value. strong_hash A Tensor of type bool. boolean, if true, siphash with salt will be used instead of farmhash. salt A Tensor of type int64. Specify the salt that will be used by the siphash function. 
name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64. output_values A Tensor of type int64. output_shape A Tensor of type int64.
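The nesting of the fingerprint combination shown in the example (innermost value hashed first, then folded outward) can be sketched as follows. Both the combiner constant and the stand-in hash function are illustrative only; the real op uses FarmHash64 fingerprints (or SipHash when strong_hash is true):

```python
def fingerprint_cat(a, b):
    """Stand-in for FingerprintCat64 (illustrative combiner, not the
    real FarmHash-based one)."""
    return (a * 0x9DDFEA08EB382D69 + b) % (1 << 64)

def sparse_cross_hashed_sketch(row_values, hash_fn, num_buckets):
    """Sketch of one output cell of the hashed cross: fold the
    per-column fingerprints in the documented nesting order, then
    bucketize with num_buckets."""
    h = hash_fn(row_values[0])
    for v in row_values[1:]:
        h = fingerprint_cat(hash_fn(v), h)
    return h % num_buckets if num_buckets > 0 else h

# The cell crossing "a", "d", "f" from the example, with a stand-in hash.
bucket = sparse_cross_hashed_sketch(
    ["a", "d", "f"],
    hash_fn=lambda s: hash(s) % (1 << 64),
    num_buckets=100)
```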
tf.raw_ops.SparseCrossV2 Generates sparse cross from a list of sparse and dense tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseCrossV2 tf.raw_ops.SparseCrossV2( indices, values, shapes, dense_inputs, sep, name=None ) The op takes two lists, one of 2D SparseTensor and one of 2D Tensor, each representing features of one feature column. It outputs a 2D SparseTensor with the batchwise crosses of these features. For example, if the inputs are inputs[0]: SparseTensor with shape = [2, 2] [0, 0]: "a" [1, 0]: "b" [1, 1]: "c" inputs[1]: SparseTensor with shape = [2, 1] [0, 0]: "d" [1, 0]: "e" inputs[2]: Tensor [["f"], ["g"]] then the output will be shape = [2, 2] [0, 0]: "a_X_d_X_f" [1, 0]: "b_X_e_X_g" [1, 1]: "c_X_e_X_g" if hashed_output=true then the output will be shape = [2, 2] [0, 0]: FingerprintCat64( Fingerprint64("f"), FingerprintCat64( Fingerprint64("d"), Fingerprint64("a"))) [1, 0]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("b"))) [1, 1]: FingerprintCat64( Fingerprint64("g"), FingerprintCat64( Fingerprint64("e"), Fingerprint64("c"))) Args indices A list of Tensor objects with type int64. 2-D. Indices of each input SparseTensor. values A list of Tensor objects with types from: int64, string. 1-D. values of each SparseTensor. shapes A list with the same length as indices of Tensor objects with type int64. 1-D. Shapes of each SparseTensor. dense_inputs A list of Tensor objects with types from: int64, string. 2-D. Columns represented by dense Tensor. sep A Tensor of type string. string used when joining a list of string inputs, can be used as separator later. name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64. output_values A Tensor of type string. output_shape A Tensor of type int64.
tf.raw_ops.SparseDenseCwiseAdd Adds up a SparseTensor and a dense Tensor, using these special rules: View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseDenseCwiseAdd tf.raw_ops.SparseDenseCwiseAdd( sp_indices, sp_values, sp_shape, dense, name=None ) (1) Broadcasts the dense side to have the same shape as the sparse side, if eligible; (2) Then, only the dense values pointed to by the indices of the SparseTensor participate in the cwise addition. By these rules, the result is a logical SparseTensor with exactly the same indices and shape, but possibly with different non-zero values. The output of this Op is the resultant non-zero values. Args sp_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. sp_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. N non-empty values corresponding to sp_indices. sp_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor. dense A Tensor. Must have the same type as sp_values. R-D. The dense Tensor operand. name A name for the operation (optional). Returns A Tensor. Has the same type as sp_values.
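The two rules above (broadcast the dense side, then add only at the sparse tensor's stored positions) can be sketched in NumPy; the helper name is illustrative:

```python
import numpy as np

def sparse_dense_cwise_add(sp_indices, sp_values, sp_shape, dense):
    """Sketch of the cwise add: broadcast the dense operand to the
    sparse shape, then add only at the stored index positions."""
    dense_b = np.broadcast_to(dense, sp_shape)
    gathered = dense_b[tuple(np.array(sp_indices).T)]
    return sp_values + gathered            # the output non-zero values

out = sparse_dense_cwise_add(
    sp_indices=[[0, 0], [1, 2]],
    sp_values=np.array([1.0, 2.0]),
    sp_shape=(2, 3),
    dense=np.array([10.0, 20.0, 30.0]))    # broadcast over rows
```

Only the two stored positions participate, giving `[11.0, 32.0]`; the result keeps the input's indices and shape.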
tf.raw_ops.SparseDenseCwiseDiv Component-wise divides a SparseTensor by a dense Tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseDenseCwiseDiv tf.raw_ops.SparseDenseCwiseDiv( sp_indices, sp_values, sp_shape, dense, name=None ) Limitation: this Op only broadcasts the dense side to the sparse side, but not the other direction. Args sp_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. sp_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. N non-empty values corresponding to sp_indices. sp_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor. dense A Tensor. Must have the same type as sp_values. R-D. The dense Tensor operand. name A name for the operation (optional). Returns A Tensor. Has the same type as sp_values.
tf.raw_ops.SparseDenseCwiseMul Component-wise multiplies a SparseTensor by a dense Tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseDenseCwiseMul tf.raw_ops.SparseDenseCwiseMul( sp_indices, sp_values, sp_shape, dense, name=None ) The output locations corresponding to the implicitly zero elements in the sparse tensor will be zero (i.e., will not take up storage space), regardless of the contents of the dense tensor (even if the dense tensor contains +/-Inf, and despite the fact that Inf * 0 == NaN). Limitation: this Op only broadcasts the dense side to the sparse side, but not the other direction. Args sp_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. sp_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. N non-empty values corresponding to sp_indices. sp_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor. dense A Tensor. Must have the same type as sp_values. R-D. The dense Tensor operand. name A name for the operation (optional). Returns A Tensor. Has the same type as sp_values.
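The implicit-zero behavior described above can be sketched in NumPy: because the multiply happens only at stored positions, a position that would have been `Inf * 0` is never materialized. The helper name is illustrative:

```python
import numpy as np

def sparse_dense_cwise_mul(sp_indices, sp_values, sp_shape, dense):
    """Sketch of the cwise multiply: gather the dense operand at the
    stored positions only, so implicit zeros stay implicit."""
    dense_b = np.broadcast_to(dense, sp_shape)
    gathered = dense_b[tuple(np.array(sp_indices).T)]
    return sp_values * gathered

dense = np.array([[np.inf, 2.0, 3.0],
                  [4.0, 5.0, np.inf]])
out = sparse_dense_cwise_mul(
    sp_indices=[[0, 1], [1, 0]],
    sp_values=np.array([10.0, 10.0]),
    sp_shape=(2, 3),
    dense=dense)
# Only the two stored values are produced ([20.0, 40.0]); the Inf
# entries of the dense tensor never touch an implicit zero.
```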
tf.raw_ops.SparseFillEmptyRows Fills empty rows in the input 2-D SparseTensor with a default value. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseFillEmptyRows tf.raw_ops.SparseFillEmptyRows( indices, values, dense_shape, default_value, name=None ) The input SparseTensor is represented via the tuple of inputs (indices, values, dense_shape). The output SparseTensor has the same dense_shape but with indices output_indices and values output_values. This op inserts a single entry for every row that doesn't have any values. The index is created as [row, 0, ..., 0] and the inserted value is default_value. For example, suppose sp_input has shape [5, 6] and non-empty values: [0, 1]: a [0, 3]: b [2, 0]: c [3, 1]: d Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values: [0, 1]: a [0, 3]: b [1, 0]: default_value [2, 0]: c [3, 1]: d [4, 0]: default_value The output SparseTensor will be in row-major order and will have the same shape as the input. This op also returns an indicator vector shaped [dense_shape[0]] such that empty_row_indicator[i] = True iff row i was an empty row. It also returns a reverse index map, shaped [indices.shape[0]], used during backpropagation: reverse_index_map[j] = out_j s.t. indices[j, :] == output_indices[out_j, :] Args indices A Tensor of type int64. 2-D. The indices of the sparse tensor. values A Tensor. 1-D. The values of the sparse tensor. dense_shape A Tensor of type int64. 1-D. The shape of the sparse tensor. default_value A Tensor. Must have the same type as values. 0-D. Default value to insert into location [row, 0, ..., 0] for rows missing from the input sparse tensor. name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, empty_row_indicator, reverse_index_map). output_indices A Tensor of type int64. output_values A Tensor.
Has the same type as values. empty_row_indicator A Tensor of type bool. reverse_index_map A Tensor of type int64.
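All four outputs, including the indicator vector and the reverse index map, can be sketched in NumPy for the `[5, 6]` example above. The helper name is illustrative:

```python
import numpy as np

def sparse_fill_empty_rows(indices, values, dense_shape, default_value):
    """Sketch of the op: insert [row, 0, ..., 0] -> default_value for
    every empty row, re-sort row-major, and build both auxiliary
    outputs."""
    indices = np.array(indices, dtype=np.int64)
    nrows, rank = dense_shape[0], indices.shape[1]
    present = set(indices[:, 0].tolist())
    empty_row_indicator = [r not in present for r in range(nrows)]
    fill_idx = np.array([[r] + [0] * (rank - 1)
                         for r in range(nrows) if r not in present],
                        dtype=np.int64).reshape(-1, rank)
    all_idx = np.concatenate([indices, fill_idx])
    all_val = list(values) + [default_value] * len(fill_idx)
    order = np.lexsort(all_idx.T[::-1])    # row-major sort
    # reverse_index_map[j] = output slot of input entry j.
    reverse_index_map = np.argsort(order)[: len(values)]
    return (all_idx[order], np.array(all_val)[order],
            empty_row_indicator, reverse_index_map)

# The [5, 6] example above, with default_value "dv".
out_idx, out_val, empty_rows, rev_map = sparse_fill_empty_rows(
    [[0, 1], [0, 3], [2, 0], [3, 1]], ["a", "b", "c", "d"],
    [5, 6], "dv")
```

Rows 1 and 4 are reported empty, the filled entries land at `[1, 0]` and `[4, 0]`, and the reverse map records where each original entry ended up.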
tf.raw_ops.SparseFillEmptyRowsGrad The gradient of SparseFillEmptyRows. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseFillEmptyRowsGrad tf.raw_ops.SparseFillEmptyRowsGrad( reverse_index_map, grad_values, name=None ) Takes vectors reverse_index_map, shaped [N], and grad_values, shaped [N_full], where N_full >= N and copies data into either d_values or d_default_value. Here d_values is shaped [N] and d_default_value is a scalar. d_values[j] = grad_values[reverse_index_map[j]] d_default_value = sum_{k : 0 .. N_full - 1} ( grad_values[k] * 1{k not in reverse_index_map}) Args reverse_index_map A Tensor of type int64. 1-D. The reverse index map from SparseFillEmptyRows. grad_values A Tensor. 1-D. The gradients from backprop. name A name for the operation (optional). Returns A tuple of Tensor objects (d_values, d_default_value). d_values A Tensor. Has the same type as grad_values. d_default_value A Tensor. Has the same type as grad_values.
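The two gradient rules above translate directly to NumPy: `d_values` gathers through the reverse index map, and `d_default_value` sums the gradient entries that correspond to inserted default-value slots. The helper name is illustrative:

```python
import numpy as np

def sparse_fill_empty_rows_grad(reverse_index_map, grad_values):
    """Sketch of the gradient: gather d_values, and sum the remaining
    entries (slots not referenced by the reverse map) into
    d_default_value."""
    reverse_index_map = np.asarray(reverse_index_map)
    grad_values = np.asarray(grad_values)
    d_values = grad_values[reverse_index_map]
    mask = np.ones(len(grad_values), dtype=bool)
    mask[reverse_index_map] = False        # 1{k not in reverse_index_map}
    d_default_value = grad_values[mask].sum()
    return d_values, d_default_value

# Continuing the fill example: inputs 0..3 landed at output slots
# [0, 1, 3, 4], so slots 2 and 5 held inserted default values.
d_values, d_default = sparse_fill_empty_rows_grad(
    [0, 1, 3, 4], [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```

Here `d_values` gathers `[1, 2, 4, 5]` and `d_default_value` sums slots 2 and 5 to 9.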
tf.raw_ops.SparseMatMul Multiply matrix "a" by matrix "b". View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatMul tf.raw_ops.SparseMatMul( a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None ) The inputs must be two-dimensional matrices and the inner dimension of "a" must match the outer dimension of "b". Both "a" and "b" must be Tensors not SparseTensors. This op is optimized for the case where at least one of "a" or "b" is sparse, in the sense that they have a large proportion of zero values. The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix. The gradient computation of this operation will only take advantage of sparsity in the input gradient when that gradient comes from a Relu. Args a A Tensor. Must be one of the following types: float32, bfloat16. b A Tensor. Must be one of the following types: float32, bfloat16. transpose_a An optional bool. Defaults to False. transpose_b An optional bool. Defaults to False. a_is_sparse An optional bool. Defaults to False. b_is_sparse An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor of type float32.
tf.raw_ops.SparseMatrixAdd Sparse addition of two CSR matrices, C = alpha * A + beta * B. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixAdd tf.raw_ops.SparseMatrixAdd( a, b, alpha, beta, name=None ) The gradients of SparseMatrixAdd outputs with respect to alpha and beta are not currently defined (TensorFlow will return zeros for these entries). Args a A Tensor of type variant. A CSRSparseMatrix. b A Tensor of type variant. A CSRSparseMatrix. alpha A Tensor. Must be one of the following types: float32, float64, complex64, complex128. A constant scalar. beta A Tensor. Must have the same type as alpha. A constant scalar. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SparseMatrixMatMul Matrix-multiplies a sparse matrix with a dense matrix. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixMatMul tf.raw_ops.SparseMatrixMatMul( a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, transpose_output=False, conjugate_output=False, name=None ) Returns a dense matrix. For inputs A and B, where A is CSR and B is dense, this op returns a dense C. If transpose_output is false, returns: C = A . B If transpose_output is true, returns: C = transpose(A . B) = transpose(B) . transpose(A) where the transposition is performed along the two innermost (matrix) dimensions. If conjugate_output is true, returns: C = conjugate(A . B) = conjugate(A) . conjugate(B) If both conjugate_output and transpose_output are true, returns: C = conjugate(transpose(A . B)) = conjugate(transpose(B)) . conjugate(transpose(A)) Args a A Tensor of type variant. A CSRSparseMatrix. b A Tensor. A dense tensor. transpose_a An optional bool. Defaults to False. Indicates whether a should be transposed. transpose_b An optional bool. Defaults to False. Indicates whether b should be transposed. adjoint_a An optional bool. Defaults to False. Indicates whether a should be conjugate-transposed. adjoint_b An optional bool. Defaults to False. Indicates whether b should be conjugate-transposed. transpose_output An optional bool. Defaults to False. Transposes the product of a and b. conjugate_output An optional bool. Defaults to False. Conjugates the product of a and b. name A name for the operation (optional). Returns A Tensor. Has the same type as b.
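The transpose and conjugate identities quoted above can be checked numerically with plain dense NumPy matrices (a sketch only; the raw op keeps `a` in CSR form):

```python
import numpy as np

# Random complex matrices stand in for A (CSR in the real op) and B.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

C = A @ B
transpose_output = B.T @ A.T                   # == transpose(A . B)
conjugate_output = np.conj(A) @ np.conj(B)     # == conjugate(A . B)
```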
tf.raw_ops.SparseMatrixMul Element-wise multiplication of a sparse matrix with a dense tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixMul tf.raw_ops.SparseMatrixMul( a, b, name=None ) Returns a sparse matrix. The dense tensor b may be a scalar; otherwise a must be a rank-3 SparseMatrix, in which case b must be shaped [batch_size, 1, 1] and the multiply operation broadcasts. Note: even if b is zero, the sparsity structure of the output does not change. Args a A Tensor of type variant. A CSRSparseMatrix. b A Tensor. A dense tensor. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SparseMatrixNNZ Returns the number of nonzeroes of sparse_matrix. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixNNZ tf.raw_ops.SparseMatrixNNZ( sparse_matrix, name=None ) Args sparse_matrix A Tensor of type variant. A CSRSparseMatrix. name A name for the operation (optional). Returns A Tensor of type int32.
tf.raw_ops.SparseMatrixOrderingAMD Computes the Approximate Minimum Degree (AMD) ordering of input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixOrderingAMD tf.raw_ops.SparseMatrixOrderingAMD( input, name=None ) Computes the Approximate Minimum Degree (AMD) ordering for a sparse matrix. The returned permutation may be used to permute the rows and columns of the given sparse matrix. Permuting the matrix this way typically causes its sparse Cholesky (or other) decomposition to have fewer structural zero fill-ins than a decomposition of the original matrix. The input sparse matrix may have rank 2 or rank 3. The output Tensor would then have rank 1 or 2, respectively, with the same batch shape as the input. Each component of the input sparse matrix must represent a square symmetric matrix; only the lower triangular part of the matrix is read. The values of the sparse matrix do not affect the returned permutation; only the sparsity pattern of the sparse matrix is used. Hence, a single AMD ordering may be reused for the Cholesky decompositions of sparse matrices with the same sparsity pattern but with possibly different values. Each batch component of the output permutation represents a permutation of N elements, where the input sparse matrix components each have N rows. That is, the component contains each of the integers {0, .. N-1} exactly once. The ith element represents the row index that the ith row maps to. Usage example: from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]]) a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32) a_dense_shape = [4, 4] with tf.Session() as sess: # Define (COO format) SparseTensor over Numpy array. a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape) # Convert SparseTensors to CSR SparseMatrix. a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( a_st.indices, a_st.values, a_st.dense_shape) # Obtain the AMD Ordering for the CSR SparseMatrix. ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(a_sm) ordering_amd_value = sess.run(ordering_amd) ordering_amd_value stores the AMD ordering: [1 2 3 0]. input: A CSRSparseMatrix. Args input A Tensor of type variant. A CSRSparseMatrix. name A name for the operation (optional). Returns A Tensor of type int32.
tf.raw_ops.SparseMatrixSoftmax Calculates the softmax of a CSRSparseMatrix. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixSoftmax tf.raw_ops.SparseMatrixSoftmax( logits, type, name=None ) Calculate the softmax of the innermost dimensions of a SparseMatrix. Missing values are treated as -inf (i.e., logits of zero probability); and the output has the same sparsity structure as the input (though missing values in the output may now be treated as having probability zero). Args logits A Tensor of type variant. A CSRSparseMatrix. type A tf.DType from: tf.float32, tf.float64. name A name for the operation (optional). Returns A Tensor of type variant.
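The per-row semantics above can be sketched in NumPy for a single innermost row: only stored values participate, exactly as if the missing entries held -inf logits. The helper name is illustrative:

```python
import numpy as np

def sparse_row_softmax(row_values):
    """Sketch of the softmax over one innermost row of a sparse matrix:
    softmax over the stored logits only; implicit entries behave as
    -inf and get zero probability (and remain unstored)."""
    v = np.asarray(row_values, dtype=np.float64)
    e = np.exp(v - v.max())   # max-shifted for numerical stability
    return e / e.sum()

# A row with stored logits [1, 2]; implicit entries contribute nothing.
probs = sparse_row_softmax([1.0, 2.0])
```

The result sums to 1 over the stored entries alone, preserving the input's sparsity structure.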
tf.raw_ops.SparseMatrixSoftmaxGrad Calculates the gradient of the SparseMatrixSoftmax op. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixSoftmaxGrad tf.raw_ops.SparseMatrixSoftmaxGrad( softmax, grad_softmax, type, name=None ) Args softmax A Tensor of type variant. A CSRSparseMatrix. grad_softmax A Tensor of type variant. The gradient of softmax. type A tf.DType from: tf.float32, tf.float64. name A name for the operation (optional). Returns A Tensor of type variant.
tf.raw_ops.SparseMatrixSparseCholesky Computes the sparse Cholesky decomposition of input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixSparseCholesky tf.raw_ops.SparseMatrixSparseCholesky( input, permutation, type, name=None ) Computes the sparse Cholesky decomposition of a sparse matrix, with the given fill-in reducing permutation. The input sparse matrix and the fill-in reducing permutation permutation must have compatible shapes. If the sparse matrix has rank 3, with batch dimension B, then the permutation must have rank 2, with the same batch dimension B. There is no support for broadcasting. Furthermore, each component vector of permutation must be of length N, containing each of the integers {0, 1, ..., N - 1} exactly once, where N is the number of rows of each component of the sparse matrix. Each component of the input sparse matrix must represent a symmetric positive definite (SPD) matrix; only the lower triangular part of the matrix is read. If any individual component is not SPD, an InvalidArgument error is thrown. The returned sparse matrix has the same dense shape as the input sparse matrix. For each component A of the input sparse matrix, the corresponding output sparse matrix represents L, the lower triangular Cholesky factor satisfying the following identity: A = L * Lt where Lt denotes the transpose of L (or its conjugate transpose, if type is complex64 or complex128). The type parameter denotes the type of the matrix elements. The supported types are: float32, float64, complex64 and complex128. Usage example: from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]]) a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32) a_dense_shape = [4, 4] with tf.Session() as sess: # Define (COO format) SparseTensor over Numpy array. a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape) # Convert SparseTensors to CSR SparseMatrix. a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( a_st.indices, a_st.values, a_st.dense_shape) # Obtain the Sparse Cholesky factor using AMD Ordering for reducing zero # fill-in (number of structural non-zeros in the sparse Cholesky factor). ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(a_sm) cholesky_sparse_matrices = ( sparse_csr_matrix_ops.sparse_matrix_sparse_cholesky( a_sm, ordering_amd, type=tf.float32)) # Convert the CSRSparseMatrix Cholesky factor to a dense Tensor dense_cholesky = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense( cholesky_sparse_matrices, tf.float32) # Evaluate the dense Tensor value. dense_cholesky_value = sess.run(dense_cholesky) dense_cholesky_value stores the dense Cholesky factor: [[ 1. 0. 0. 0.] [ 0. 1.41 0. 0.] [ 0. 0.70 1.58 0.] [ 0. 0. 0. 2.]] input: A CSRSparseMatrix. permutation: A Tensor. type: The type of input. Args input A Tensor of type variant. A CSRSparseMatrix. permutation A Tensor of type int32. A fill-in reducing permutation matrix. type A tf.DType from: tf.float32, tf.float64, tf.complex64, tf.complex128. name A name for the operation (optional). Returns A Tensor of type variant.
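The dense factor printed in the usage example can be cross-checked with NumPy: reading the example's sparse entries as the lower triangle of a symmetric matrix and factoring it reproduces the same L (to the two decimals shown).

```python
import numpy as np

# The example matrix, symmetrized from its lower-triangle entries
# ([0,0]=1, [1,1]=2, [2,1]=1, [2,2]=3, [3,3]=4).
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 0.0, 4.0]])
L = np.linalg.cholesky(A)   # lower triangular factor, A = L @ L.T
```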
tf.raw_ops.SparseMatrixSparseMatMul Sparse-matrix-multiplies two CSR matrices a and b. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixSparseMatMul tf.raw_ops.SparseMatrixSparseMatMul( a, b, type, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, name=None ) Performs a matrix multiplication of a sparse matrix a with a sparse matrix b; returns a sparse matrix a * b, unless either a or b is transposed or adjointed. Each matrix may be transposed or adjointed (conjugated and transposed) according to the Boolean parameters transpose_a, adjoint_a, transpose_b and adjoint_b. At most one of transpose_a or adjoint_a may be True. Similarly, at most one of transpose_b or adjoint_b may be True. The inputs must have compatible shapes. That is, the inner dimension of a must be equal to the outer dimension of b. This requirement is adjusted according to whether either a or b is transposed or adjointed. The type parameter denotes the type of the matrix elements. Both a and b must have the same type. The supported types are: float32, float64, complex64 and complex128. Both a and b must have the same rank. Broadcasting is not supported. If they have rank 3, each batch of 2D CSRSparseMatrices within a and b must have the same dense shape. The sparse matrix product may have numeric (non-structural) zeros.
Usage example: from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops a_indices = np.array([[0, 0], [2, 3], [2, 4], [3, 0]]) a_values = np.array([1.0, 5.0, -1.0, -2.0], np.float32) a_dense_shape = [4, 5] b_indices = np.array([[0, 0], [3, 0], [3, 1]]) b_values = np.array([2.0, 7.0, 8.0], np.float32) b_dense_shape = [5, 3] with tf.Session() as sess: # Define (COO format) Sparse Tensors over Numpy arrays a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape) b_st = tf.sparse.SparseTensor(b_indices, b_values, b_dense_shape) # Convert SparseTensors to CSR SparseMatrix a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( a_st.indices, a_st.values, a_st.dense_shape) b_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( b_st.indices, b_st.values, b_st.dense_shape) # Compute the CSR SparseMatrix matrix multiplication c_sm = sparse_csr_matrix_ops.sparse_matrix_sparse_mat_mul( a=a_sm, b=b_sm, type=tf.float32) # Convert the CSR SparseMatrix product to a dense Tensor c_sm_dense = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense( c_sm, tf.float32) # Evaluate the dense Tensor value c_sm_dense_value = sess.run(c_sm_dense) c_sm_dense_value stores the dense matrix product: [[ 2. 0. 0.] [ 0. 0. 0.] [ 35. 40. 0.] [ -4. 0. 0.]] a: A CSRSparseMatrix. b: A CSRSparseMatrix with the same type and rank as a. type: The type of both a and b. transpose_a: If True, a transposed before multiplication. transpose_b: If True, b transposed before multiplication. adjoint_a: If True, a adjointed before multiplication. adjoint_b: If True, b adjointed before multiplication. Args a A Tensor of type variant. A CSRSparseMatrix. b A Tensor of type variant. A CSRSparseMatrix. type A tf.DType from: tf.float32, tf.float64, tf.complex64, tf.complex128. transpose_a An optional bool. Defaults to False. Indicates whether a should be transposed. transpose_b An optional bool. Defaults to False. Indicates whether b should be transposed. adjoint_a An optional bool. 
Defaults to False. Indicates whether a should be conjugate-transposed. adjoint_b An optional bool. Defaults to False. Indicates whether b should be conjugate-transposed. name A name for the operation (optional). Returns A Tensor of type variant.
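The usage example above can be sanity-checked against a dense NumPy equivalent. This is a sketch of the semantics only (a plain dense matmul over the same COO data), not the CSR kernel itself:

```python
import numpy as np

# Densify the COO data from the usage example above.
a = np.zeros((4, 5), np.float32)
for (i, j), v in zip([(0, 0), (2, 3), (2, 4), (3, 0)],
                     [1.0, 5.0, -1.0, -2.0]):
    a[i, j] = v

b = np.zeros((5, 3), np.float32)
for (i, j), v in zip([(0, 0), (3, 0), (3, 1)], [2.0, 7.0, 8.0]):
    b[i, j] = v

# The inner dimension of a (5) matches the outer dimension of b (5),
# so the product is defined and has shape (4, 3).
c = a @ b
print(c)  # matches the dense matrix product listed above
```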
tf.raw_ops.SparseMatrixTranspose Transposes the inner (matrix) dimensions of a CSRSparseMatrix. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixTranspose tf.raw_ops.SparseMatrixTranspose( input, type, conjugate=False, name=None ) Transposes the inner (matrix) dimensions of a SparseMatrix and optionally conjugates its values. Args input A Tensor of type variant. A CSRSparseMatrix. type A tf.DType from: tf.float32, tf.float64, tf.complex64, tf.complex128. conjugate An optional bool. Defaults to False. Indicates whether input should be conjugated. name A name for the operation (optional). Returns A Tensor of type variant.
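For complex types, conjugate=True makes the op behave like a conjugate transpose (adjoint) of the equivalent dense matrix. A NumPy sketch of that semantics (dense stand-in, not the CSR code path):

```python
import numpy as np

m = np.array([[1 + 2j, 0],
              [3 - 1j, 4j]])

# conjugate=False: plain transpose of the inner (matrix) dimensions.
t = m.T

# conjugate=True: values are conjugated as well (conjugate transpose).
ct = m.conj().T

print(t[0, 1])   # (3-1j), m[1, 0] unchanged
print(ct[0, 1])  # (3+1j), conjugate of m[1, 0]
```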
tf.raw_ops.SparseMatrixZeros Creates an all-zeros CSRSparseMatrix with shape dense_shape. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixZeros tf.raw_ops.SparseMatrixZeros( dense_shape, type, name=None ) Args dense_shape A Tensor of type int64. The desired matrix shape. type A tf.DType from: tf.float32, tf.float64, tf.complex64, tf.complex128. name A name for the operation (optional). Returns A Tensor of type variant.
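In CSR form, an all-zeros matrix of shape [rows, cols] stores no values at all: its row-pointer array is all zeros and its column-index and value arrays are empty. A NumPy sketch of that invariant (illustrative of the CSR layout in general, not this op's internal representation):

```python
import numpy as np

def zeros_csr(rows, cols):
    # row_ptr has rows + 1 entries; all zero because every row is empty.
    row_ptr = np.zeros(rows + 1, dtype=np.int64)
    col_ind = np.zeros(0, dtype=np.int64)   # no stored column indices
    values = np.zeros(0, dtype=np.float32)  # no stored values
    return row_ptr, col_ind, values

row_ptr, col_ind, values = zeros_csr(4, 5)
print(len(row_ptr), len(col_ind), len(values))  # 5 0 0
```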
tf.raw_ops.SparseReduceMax Computes the max of elements across dimensions of a SparseTensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.SparseReduceMax tf.raw_ops.SparseReduceMax( input_indices, input_values, input_shape, reduction_axes, keep_dims=False, name=None ) This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_max(). In particular, this Op also returns a dense Tensor instead of a sparse one. Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1. If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python. Args input_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. input_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 1-D. N non-empty values corresponding to input_indices. input_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor. reduction_axes A Tensor of type int32. 1-D. Length-K vector containing the reduction axes. keep_dims An optional bool. Defaults to False. If true, retain reduced dimensions with length 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input_values.
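A dense-equivalent NumPy sketch of the reduction. Strictly positive values are used so that implicit zeros cannot affect any per-row maximum; behavior for rows with no explicit entries is not covered here:

```python
import numpy as np

# SparseTensor: shape [2, 3], explicit entries [0,0]=1, [0,2]=3, [1,1]=2.
input_indices = np.array([[0, 0], [0, 2], [1, 1]], np.int64)
input_values = np.array([1.0, 3.0, 2.0], np.float32)
input_shape = np.array([2, 3], np.int64)

# Densify, then reduce like tf.reduce_max would on the dense tensor.
dense = np.zeros(input_shape, np.float32)
dense[tuple(input_indices.T)] = input_values

# reduction_axes=[1], keep_dims=False: max over each row; rank drops by 1.
result = dense.max(axis=1)
print(result)  # [3. 2.]
```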