Columns: desc (string, length 3 to 26.7k) · decl (string, length 11 to 7.89k) · bodies (string, length 8 to 553k)
'Connects the module into the graph, with input Tensor `inputs`. Args: inputs: A 4D Tensor of shape: [batch_size, input_height, input_width, input_channels]. Returns: A 4D Tensor of shape: [batch_size, output_height, output_width, output_channels]. Raises: ValueError: If connecting the module into the graph any time af...
def _build(self, inputs):
  self._input_shape = tuple(inputs.get_shape().as_list())
  if len(self._input_shape) != 4:
    raise base.IncompatibleShapeError(
        'Input Tensor must have shape (batch_size, input_height, '
        'input_width, input_channels)')
  if self._input_shape[3] is None:
    raise base.Incompatib...
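A minimal connection sketch, assuming the Sonnet v1 `snt.SeparableConv2D` module this `_build` belongs to (the constructor arguments mirror the properties listed below):

import tensorflow as tf
import sonnet as snt

images = tf.placeholder(tf.float32, [None, 28, 28, 3])  # 4D NHWC input, as required
sep_conv = snt.SeparableConv2D(output_channels=16, channel_multiplier=1,
                               kernel_shape=3)
features = sep_conv(images)  # 4D output: [batch, out_height, out_width, 16]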
'Returns the number of input channels.'
@property
def input_channels(self):
  self._ensure_is_connected()
  return self._input_channels
'Returns the number of output channels.'
@property
def output_channels(self):
  return self._output_channels
'Returns the channel multiplier.'
@property
def channel_multiplier(self):
  return self._channel_multiplier
'Returns the input shape.'
@property
def input_shape(self):
  self._ensure_is_connected()
  return self._input_shape
'Returns the kernel shape.'
@property
def kernel_shape(self):
  return self._kernel_shape
'Returns the stride.'
@property
def stride(self):
  return self._stride
'Returns the padding algorithm.'
@property
def padding(self):
  return self._padding
'Returns the Variable containing the depthwise weight matrix.'
@property
def w_dw(self):
  self._ensure_is_connected()
  return self._w_dw
'Returns the Variable containing the pointwise weight matrix.'
@property
def w_pw(self):
  self._ensure_is_connected()
  return self._w_pw
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property
def b(self):
  self._ensure_is_connected()
  if not self._use_bias:
    raise AttributeError(
        'No bias Variable in SeparableConv2D Module when `use_bias=False`.')
  return self._b
'Returns `True` if bias Variable is present in the module.'
@property
def has_bias(self):
  return self._use_bias
'Returns the initializers dictionary.'
@property
def initializers(self):
  return self._initializers
'Returns the partitioners dictionary.'
@property
def partitioners(self):
  return self._partitioners
'Returns the regularizers dictionary.'
@property
def regularizers(self):
  return self._regularizers
'Constructs a Conv3D module. See the following documentation for an explanation of VALID versus SAME padding modes: https://www.tensorflow.org/api_guides/python/nn#Convolution Args: output_channels: Number of output channels. `output_channels` can be either a number or a callable. In the latter case, since the function...
def __init__(self, output_channels, kernel_shape, stride=1, rate=1, padding=SAME, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='conv_3d'):
  super(Conv3D, self).__init__(custom_getter=custom_getter, name=name)
  self._output_channels = output_channels
  self._input_shape = None
  self._kernel_shape = _fill_and_verify_parameter_shape(kernel_shape, 3, 'kernel')
  if isinstance(stride, collections.Iterable) and len(stride) == 5:
    self._s...
'Connects the Conv3D module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the input Tensor provided here must have the same final dimension (i.e. `input_channels`), in order for the existing variables to be the correct size for the multiplication. ...
def _build(self, inputs):
  self._input_shape = tuple(inputs.get_shape().as_list())
  if len(self._input_shape) != 5:
    raise base.IncompatibleShapeError(
        'Input Tensor must have shape (batch_size, input_depth, '
        'input_height, input_width, input_channels)')
  if self._input_shape[4] is None:
    raise...
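A hedged usage sketch, assuming the `snt.Conv3D` module constructed above:

volumes = tf.placeholder(tf.float32, [None, 16, 32, 32, 1])  # 5D NDHWC input
conv3d = snt.Conv3D(output_channels=8, kernel_shape=3)
features = conv3d(volumes)  # [batch, out_depth, out_height, out_width, 8]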
'Returns the number of output channels.'
@property
def output_channels(self):
  if callable(self._output_channels):
    self._output_channels = self._output_channels()
  return self._output_channels
'Returns the input shape.'
@property
def input_shape(self):
  self._ensure_is_connected()
  return self._input_shape
'Returns the kernel shape.'
@property
def kernel_shape(self):
  return self._kernel_shape
'Returns the stride.'
@property
def stride(self):
  return (1,) + self._stride + (1,)
'Returns the padding algorithm.'
@property
def padding(self):
  return self._padding
'Returns the Variable containing the weight matrix.'
@property
def w(self):
  self._ensure_is_connected()
  return self._w
'Returns the Variable containing the bias.'
@property
def b(self):
  self._ensure_is_connected()
  if not self._use_bias:
    raise AttributeError(
        'No bias Variable in Conv3D Module when `use_bias=False`.')
  return self._b
'Returns `True` if bias Variable is present in the module.'
@property
def has_bias(self):
  return self._use_bias
'Returns the initializers dictionary.'
@property
def initializers(self):
  return self._initializers
'Returns the partitioners dictionary.'
@property
def partitioners(self):
  return self._partitioners
'Returns the regularizers dictionary.'
@property
def regularizers(self):
  return self._regularizers
'Returns matching `Conv3DTranspose` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.name`. Returns: `Conv3DTranspose` module. Raises: base.NotSupportedError: If `rate` in any dimension > 1.'
def transpose(self, name=None):
  if any(x > 1 for x in self._rate):
    raise base.NotSupportedError('Cannot transpose a dilated convolution module.')
  if name is None:
    name = self.module_name + '_transpose'
  return Conv3DTranspose(output_channels=lambda: self.input_shape[-1],
                         output_shape=lambda: se...
'Constructs a `Conv3DTranspose` module. See the following documentation for an explanation of VALID versus SAME padding modes: https://www.tensorflow.org/api_guides/python/nn#Convolution Args: output_channels: Number of output channels. `output_channels` can be either a number or a callable. In the latter case, since t...
def __init__(self, output_channels, output_shape=None, kernel_shape=None, stride=1, padding=SAME, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='conv_3d_transpose'):
  super(Conv3DTranspose, self).__init__(custom_getter=custom_getter, name=name)
  self._output_channels = output_channels
  if output_shape is None:
    self._output_shape = None
    self._use_default_output_shape = True
  else:
    self._use_default_output_shape = False
    if callable(output...
'Connects the Conv3DTranspose module into the graph. If this is not the first time the module has been connected to the graph, the input Tensor provided here must have the same final dimension (i.e. `input_channels`), in order for the existing variables to be the correct size for the multiplication. The batch size may ...
def _build(self, inputs):
  self._input_shape = tuple(inputs.get_shape().as_list())
  if len(self._input_shape) != 5:
    raise base.IncompatibleShapeError(
        'Input Tensor must have shape (batch_size, input_depth, '
        'input_height, input_width, input_channels)')
  if self._input_shape[4] is None:
    raise...
'Returns the number of output channels.'
@property
def output_channels(self):
  if callable(self._output_channels):
    self._output_channels = self._output_channels()
  return self._output_channels
'Returns the kernel shape.'
@property
def kernel_shape(self):
  return self._kernel_shape
'Returns the stride.'
@property
def stride(self):
  return self._stride
'Returns the output shape.'
@property
def output_shape(self):
  if self._output_shape is None:
    self._ensure_is_connected()
  if callable(self._output_shape):
    self._output_shape = tuple(self._output_shape())
  return self._output_shape
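Note the lazy resolution: a callable `output_shape` (such as the lambdas produced by the `transpose` methods) is only invoked, and its result cached as a tuple, the first time the property is read; if no `output_shape` was supplied, the module must be connected first so a default can be computed. A minimal sketch, assuming the constructor signature above:

conv_t = snt.Conv3DTranspose(output_channels=4, kernel_shape=3,
                             output_shape=lambda: (16, 32, 32))
print(conv_t.output_shape)  # calls the lambda once, then caches (16, 32, 32)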
'Returns the padding algorithm.'
@property
def padding(self):
  return self._padding
'Returns the Variable containing the weight matrix.'
@property
def w(self):
  self._ensure_is_connected()
  return self._w
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: module.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property
def b(self):
  self._ensure_is_connected()
  if not self._use_bias:
    raise AttributeError(
        'No bias Variable in Conv3DTranspose Module when `use_bias=False`.')
  return self._b
'Returns `True` if bias Variable is present in the module.'
@property
def has_bias(self):
  return self._use_bias
'Returns the initializers dictionary.'
@property
def initializers(self):
  return self._initializers
'Returns the partitioners dictionary.'
@property
def partitioners(self):
  return self._partitioners
'Returns the regularizers dictionary.'
@property
def regularizers(self):
  return self._regularizers
'Returns the input shape.'
@property
def input_shape(self):
  self._ensure_is_connected()
  return self._input_shape
'Returns transposed Conv3DTranspose module, i.e. a Conv3D module.'
def transpose(self, name=None):
  if name is None:
    name = self.module_name + '_transpose'
  return Conv3D(output_channels=lambda: self.input_shape[-1],
                kernel_shape=self.kernel_shape,
                stride=self.stride[1:-1],
                padding=self.padding,
                use_bias=self._use_bias,
                initializers=self.initializers,
                partitioners=self.partitioners,
                regulari...
'Tests that the op can be instantiated twice with appropriate results. Implementations with inappropriate global registration of gradients will fail this test.'
def testTwoOps(self):
  x = tf.placeholder(tf.float32, [1])
  y = x * x
  y = snt.scale_gradient(y, 0.1)
  y = snt.scale_gradient(y, 0.1)
  dydx = tf.gradients([y], [x])[0]
  with self.test_session() as sess:
    dydx_, y_ = sess.run([dydx, y], feed_dict={x: [3.0]})
    self.assertAlmostEqual(dydx_[0], (2 * (0.1 ** ...
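For reference, the asserted value follows from the chain rule: d(x*x)/dx = 2x, and each scale_gradient(_, 0.1) multiplies the backward signal by 0.1, so the expected gradient at x = 3.0 is 2 * 0.1**2 * 3.0 = 0.06.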
'Constructs a BatchNorm module. By default reduces over all input tensor dimensions apart from the final dimension. This has the effect of treating pixels in 1D/2D/3D images as additional elements of the minibatch. If this is not the desired behaviour, the user can specify the tensor indices to reduce over with `axis`....
def __init__(self, axis=None, offset=True, scale=False, decay_rate=0.999, eps=0.001, initializers=None, partitioners=None, regularizers=None, update_ops_collection='update_ops', fused=False, name='batch_norm'):
  super(BatchNorm, self).__init__(name=name)
  self._axis = axis
  self._offset = offset
  self._scale = scale
  self._decay_rate = decay_rate
  self._eps = eps
  self._update_ops_collection = update_ops_collection
  self._fused = fused
  self._initializers = util.check_initializers(initializers, self...
'Builds the statistics part of the graph when using moving variance. Args: input_batch: Input batch Tensor. axis: Indices of `input_batch` to reduce over. use_batch_stats: Boolean to indicate if batch statistics should be calculated, otherwise moving averages are returned. dtype: TensorFlow datatype to use for the movi...
def _build_statistics(self, input_batch, axis, use_batch_stats, dtype):
  if self.MOVING_MEAN not in self._initializers:
    self._initializers[self.MOVING_MEAN] = create_mean_initializer()
  self._moving_mean = tf.get_variable(
      'moving_mean',
      dtype=dtype,
      shape=self._mean_shape,
      collections=[tf.GraphKeys.MOVING_AVERAGE_VARIABLES,
                   tf.GraphKeys.GLOBAL_VARIABLES],
      initializer=self._...
'Builds the moving average update ops when using moving variance. Args: mean: The mean value to update with. variance: The variance value to update with. is_training: Boolean Tensor to indicate if we\'re currently in training mode. Returns: Tuple of `(update_mean_op, update_variance_op)` when `is_training` is or could ...
def _build_update_ops(self, mean, variance, is_training):
  def build_update_ops():
    'Builds the exponential moving average update ops.'
    update_mean_op = moving_averages.assign_moving_average(
        variable=self._moving_mean,
        value=mean,
        decay=self._decay_rate,
        zero_debias=False,
        name='update_moving_mean').op
    update_variance_op = moving_av...
'Infers the data format for the fused batch norm. It uses the axis option to infer this information. Specifically, the axis value (0, 1, 2) corresponds to data format NHWC and the axis value (0, 2, 3) to data format NCHW. Args: input_batch: A Tensor of arbitrary dimension. Returns: A string description of the data form...
def _infer_fused_data_format(self, input_batch):
  input_shape = input_batch.get_shape().as_list()
  input_shape_len = len(input_shape)
  if input_shape_len != 4:
    raise NotImplementedError(
        'fused batch norm supports only input with 4 dimensions, it received '
        'input of dimensionality {:d}'.format(input_shape_len)...
'Creates a fused batch normalization op.'
def _fused_batch_norm_op(self, input_batch, mean, variance, use_batch_stats):
  gamma_flatten = tf.reshape(self._gamma, shape=(-1,))
  beta_flatten = tf.reshape(self._beta, shape=(-1,))
  flatten_mean = tf.reshape(mean, shape=(-1,))
  flatten_variance = tf.reshape(variance, shape=(-1,))
  use_batch_stats = tf.convert_to_tensor(use_batch_stats)
  common_args = {'scale': gamma_...
'Creates a batch normalization op. It uses the tf.nn.batch_normalization op by default and the tf.nn.fused_batch_norm op to support fused batch normalization. Args: input_batch: An input Tensor of arbitrary dimension. mean: A mean tensor. variance: A variance tensor. use_batch_stats: A bool value that indicates whether ...
def _batch_norm_op(self, input_batch, mean, variance, use_batch_stats):
  if self._fused:
    mean_shape = mean.get_shape()
    variance_shape = variance.get_shape()
    (batch_norm_op, mean, variance) = self._fused_batch_norm_op(
        input_batch, mean, variance, use_batch_stats)
    mean = tf.reshape(mean, mean_shape)
    variance = tf.reshape(variance, variance_shape)
    ...
'Sets up optional scale and offset factors.'
def _build_scale_offset(self, dtype):
  self._beta = None
  if self._offset or self._fused:
    if self.BETA not in self._initializers:
      self._initializers[self.BETA] = create_beta_initializer()
    self._beta = tf.get_variable(
        self.BETA,
        dtype=dtype,
        shape=self._mean_shape,
        initializer=self._initializers[self.BETA],
        partitioner=...
'Connects the BatchNorm module into the graph. Args: input_batch: A Tensor of arbitrary dimension. By default, the final dimension is not reduced over when computing the minibatch statistics. is_training: A boolean to indicate if the module should be connected in training mode, meaning the moving averages are updated. ...
def _build(self, input_batch, is_training, test_local_stats=True):
  input_shape = input_batch.get_shape()
  if self._axis is not None:
    if len(self._axis) > len(input_shape):
      raise base.IncompatibleShapeError(
          'Too many indices specified in axis: len({}) > len({}).'.format(
              self._axis, input_shape))
    if max(self._axis) >= len(i...
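A hedged usage sketch, assuming the Sonnet v1 `snt.BatchNorm` API in TF1 graph mode (the 'update_ops' collection name matches the constructor default shown above):

images = tf.placeholder(tf.float32, [None, 28, 28, 3])
bn = snt.BatchNorm()                       # reduces over all but the final dim
normalized = bn(images, is_training=True)  # also builds moving-average updates
update_ops = tf.get_collection('update_ops')  # run these alongside the train op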
'Constructs an Embed module. Args: vocab_size: int. Number of unique tokens to embed. If not provided, an existing vocabulary matrix from which vocab_size can be inferred must be provided as existing_vocab. embed_dim: int or None. Number of dimensions to assign to each embedding. If not specified, a sensible default is...
def __init__(self, vocab_size=None, embed_dim=None, existing_vocab=None, initializers=None, partitioners=None, regularizers=None, trainable=True, name='embed'):
  if vocab_size is None and existing_vocab is None:
    raise ValueError('Must provide one of vocab_size or existing_vocab.')
  if (existing_vocab is not None and
      not all(x is None
              for x in [vocab_size, embed_dim, initializers, partitioners])):
    raise ValueError('If exis...
'Lookup embeddings. Looks up an embedding vector for each value in `ids`. All ids must be within [0, vocab_size), else an `InvalidArgumentError` is raised at runtime. Args: ids: Tensor of dtype int64. Returns: Tensor of tf.shape(ids) + [embedding_dim] and dtype float32.'
def _build(self, ids):
  if self._existing_vocab is None:
    if self.EMBEDDINGS not in self._initializers:
      self._initializers[self.EMBEDDINGS] = basic.create_linear_initializer(
          self._vocab_size)
    self._embeddings = tf.get_variable(
        'embeddings',
        shape=[self._vocab_size, self._embed_dim],
        dtype=tf.float32,
        initial...
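A hedged lookup sketch, assuming the `snt.Embed` module built here:

ids = tf.constant([[0, 2], [3, 1]], dtype=tf.int64)
embed = snt.Embed(vocab_size=100, embed_dim=16)
vectors = embed(ids)  # shape [2, 2, 16], dtype float32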
'Size of input vocabulary.'
@property
def vocab_size(self):
  return self._vocab_size
'Size of embedding vectors.'
@property
def embed_dim(self):
  return self._embed_dim
'Returns the Variable containing embeddings. Returns: A 2D Variable containing one embedding vector per row, constructed in the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property
def embeddings(self):
  self._ensure_is_connected()
  return self._embeddings
'Construct a SkipConnectionCore. Args: base_core: Base RNNCore to wrap. input_shape: Shape of the input as tuple, excluding the batch size. name: Name of the module.'
def __init__(self, base_core, input_shape=None, name='skip_connection_core'):
  super(SkipConnectionCore, self).__init__(name=name)
  self._base_core = base_core
  self._input_shape = input_shape
'Check that custom getters work appropriately.'
def testCustomGetter(self):
  def custom_getter(getter, *args, **kwargs):
    kwargs['trainable'] = False
    return getter(*args, **kwargs)

  inputs = tf.placeholder(tf.float32, shape=[self.batch_size, self.in_size])
  lin1 = snt.Linear(output_size=self.out_size, custom_getter=custom_getter)
  lin1(inputs)
  self.assertEqual(0,...
'Tests a particular device (e.g. gpu, cpu) placement. This test ensures that the following device placement is possible: * The Linear module is on the gpu, * the optimizer is declared to be on the cpu, * but when calling minimize on the optimizer, we pass True to colocate_gradients_with_ops. The test exists because whi...
def testGradientColocation(self):
  if not any(x.device_type == 'GPU' for x in device_lib.list_local_devices()):
    tf.logging.warn('Skipping the gradient colocation test as there is no gpu '
                    'available to tensorflow.')
    return
  n_outputs = 5
  n_inputs = 3
  batch_size = 7
  linear = snt.Li...
'Test where idx is an integer.'
def testBasicSelect(self):
  shape0 = [2, 3]
  shape1 = [2, 3, 4]
  input0 = tf.random_uniform(shape=shape0)
  input1 = tf.random_uniform(shape=shape1)
  mod = snt.SelectInput(idx=0)
  output = mod(input0, input1)
  output0 = tf.identity(input0)
  with self.test_session() as sess:
    out = sess.run([output, output0])
    ...
'Test where idx is a tuple.'
def testTupleSelect(self):
  shape0 = [1, 2]
  shape1 = [1, 2, 3]
  shape2 = [1, 2, 3, 4]
  input0 = tf.random_uniform(shape=shape0)
  input1 = tf.random_uniform(shape=shape1)
  input2 = tf.random_uniform(shape=shape2)
  mod = snt.SelectInput(idx=(0, 2))
  output = mod(input0, input1, input2)
  output0 = tf.identity(input0)
  ...
'Test where idx is a nested list.'
def testNestedListSelect(self):
  shape0 = [1, 2]
  shape1 = [1, 2, 3]
  shape2 = [1, 2, 3, 4]
  input0 = tf.random_uniform(shape=shape0)
  input1 = tf.random_uniform(shape=shape1)
  input2 = tf.random_uniform(shape=shape2)
  mod = snt.SelectInput(idx=[2, [1, 0, 1]])
  output = mod(input0, input1, input2)
  output0 = tf.identity(inp...
'Checks error on invalid idx value.'
def testInvalidIdxValue(self):
  input1 = tf.placeholder(tf.float32, shape=[2, 3, 4, 5, 6])
  input2 = tf.placeholder(tf.float32, shape=[7, 8])
  invalid_idx = 2
  mod = snt.SelectInput(idx=[invalid_idx])
  err = ('`idx` contains out of bound entries \\(they should be in the range '
         '\\[0, 2\\)\\)')
  wi...
'Checks error on invalid idx type.'
def testInvalidIdxType(self):
  invalid_idx = 0.5
  err = '`idx` should be a \\(nested\\) array/tuple, or an integer.'
  with self.assertRaisesRegexp(TypeError, err):
    snt.SelectInput(idx=invalid_idx)
'Constructs a Linear module. Args: output_size: Output dimensionality. `output_size` can be either an integer or a callable. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that output_size can be called, returning an integer, when build is called. use...
def __init__(self, output_size, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='linear'):
  super(Linear, self).__init__(custom_getter=custom_getter, name=name)
  self._output_size = output_size
  self._use_bias = use_bias
  self._input_shape = None
  self._w = None
  self._b = None
  self.possible_keys = self.get_possible_initializer_keys(use_bias=use_bias)
  self._initializers = util.check...
'Connects the Linear module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the Tensor provided here must have the same final dimension, in order for the existing variables to be the correct size for the multiplication. The batch size may differ for ...
def _build(self, inputs):
  input_shape = tuple(inputs.get_shape().as_list())
  if len(input_shape) != 2:
    raise base.IncompatibleShapeError(
        '{}: rank of shape must be 2 not: {}'.format(
            self.scope_name, len(input_shape)))
  if input_shape[1] is None:
    raise base.IncompatibleShapeError('{}: Inpu...
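A minimal connection sketch, assuming the Sonnet v1 `snt.Linear` module defined here:

inputs = tf.placeholder(tf.float32, shape=[None, 64])  # rank-2 [batch, features]
linear = snt.Linear(output_size=10)
outputs = linear(inputs)  # [None, 10]; creates w (and b when use_bias=True)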
'Returns the Variable containing the weight matrix. Returns: Variable object containing the weights, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property
def w(self):
  self._ensure_is_connected()
  return self._w
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property
def b(self):
  self._ensure_is_connected()
  if not self._use_bias:
    raise AttributeError('No bias Variable in Linear Module when `use_bias=False`.')
  return self._b
'Returns the module output size.'
@property
def output_size(self):
  if callable(self._output_size):
    self._output_size = self._output_size()
  return self._output_size
'Returns `True` if bias Variable is present in the module.'
@property
def has_bias(self):
  return self._use_bias
'Returns the initializers dictionary.'
@property
def initializers(self):
  return self._initializers
'Returns the partitioners dictionary.'
@property
def partitioners(self):
  return self._partitioners
'Returns the regularizers dictionary.'
@property
def regularizers(self):
  return self._regularizers
'Returns a cloned `Linear` module. Args: name: Optional string assigning name of cloned module. The default name is constructed by appending "_clone" to `self.module_name`. Returns: Cloned `Linear` module.'
def clone(self, name=None):
  if name is None:
    name = self.module_name + '_clone'
  return Linear(output_size=self.output_size,
                use_bias=self._use_bias,
                initializers=self._initializers,
                partitioners=self._partitioners,
                regularizers=self._regularizers,
                name=name)
'Returns shape of input `Tensor` passed at last call to `build`.'
@property
def input_shape(self):
  self._ensure_is_connected()
  return self._input_shape
'Returns transposed `Linear` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `Linear` module.'
def transpose(self, name=None):
  if name is None:
    name = self.module_name + '_transpose'
  return Linear(output_size=lambda: self.input_shape[1],
                use_bias=self._use_bias,
                initializers=self._initializers,
                partitioners=self._partitioners,
                regularizers=self._regularizers,
                name=name)
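Because `output_size` is passed as a lambda, the transposed module defers reading `self.input_shape` until it is itself connected; the original module therefore only needs to be connected first. A hedged sketch:

inputs = tf.placeholder(tf.float32, shape=[None, 64])
linear = snt.Linear(output_size=10)
hidden = linear(inputs)        # connect first so input_shape is known
linear_t = linear.transpose()
recon = linear_t(hidden)       # [None, 64], matching the original feature size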
'Constructs an AddBias module that supports broadcasting. Args: output_shape: Output dimensionality. `output_shape` can be either `None`, a `tuple`, or a `callable`. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that `output_shape` can be called, ret...
def __init__(self, output_shape=None, bias_dims=None, initializers=None, partitioners=None, regularizers=None, name='add'):
  super(AddBias, self).__init__(name=name)
  self._output_shape = output_shape
  self._input_shape = None
  self._bias_dims = bias_dims
  self._b = None
  self._initializers = util.check_initializers(
      initializers, self.POSSIBLE_INITIALIZER_KEYS)
  self._partitioners = util.check_partitioners(
      partitioners,...
'Connects the Add module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of size `[batch_size, input_size1, ...]`. multiplier: A scalar or Tensor which the bias term is multiplied by before adding it to `inputs`. Anything which works in the expression `bias * multiplier` is acceptable here. This may ...
def _build(self, inputs, multiplier=1):
  input_shape = tuple(inputs.get_shape().as_list())
  bias_shape = calculate_bias_shape(input_shape, self._bias_dims)
  if len(input_shape) < 2:
    raise base.IncompatibleShapeError(
        'Rank of input shape must be >=2 not: {}.'.format(len(input_shape)))
  if (self._input_shape is no...
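The `multiplier` argument makes the same bias reusable with a sign flip, as the docstring suggests for subtraction. A hedged sketch, assuming the `snt.AddBias` module built here:

inputs = tf.placeholder(tf.float32, [None, 32])
add_bias = snt.AddBias()
shifted = add_bias(inputs)                    # inputs + b
recovered = add_bias(shifted, multiplier=-1)  # shifted - b == inputs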
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property
def b(self):
  self._ensure_is_connected()
  return self._b
'Returns shape of input `Tensor` passed at last call to `build`.'
@property
def input_shape(self):
  self._ensure_is_connected()
  return self._input_shape
'Returns transposed `AddBias` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `AddBias` module.'
def transpose(self, name=None):
  if name is None:
    name = self.module_name + '_transpose'
  return AddBias(output_shape=lambda: self._input_shape,
                 bias_dims=self._bias_dims,
                 initializers=self._initializers,
                 regularizers=self._regularizers,
                 name=name)
'Constructs a BatchReshape module. Args: shape: Shape to reshape the input Tensor to while preserving its first `preserve_dims` dimensions; `shape` can be either a tuple/list, or a callable that returns the actual shape. The callable does not need to be ready to return something meaningful at construction time, but it ...
def __init__(self, shape, preserve_dims=1, name='batch_reshape'):
  super(BatchReshape, self).__init__(name=name)
  self._input_shape = None
  self._shape = shape
  self._preserve_dims = preserve_dims
  if preserve_dims <= 0:
    raise ValueError('Argument preserve_dims should be >= 1.')
  if not callable(self._shape):
    self._shape = tuple(se...
'Replaces the -1 wildcard in the output shape vector. This function infers the correct output shape given the input dimensions. Args: dimensions: List of input non-batch dimensions. Returns: Tuple of non-batch output dimensions.'
def _infer_shape(self, dimensions):
  n = np.prod(dimensions)
  m = np.prod(abs(np.array(self._shape)))
  v = np.array(self._shape)
  v[v == -1] = n // m
  return tuple(v)
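A concrete instance of the inference: with non-batch input dimensions [2, 3, 4] (n = 24) and target shape (-1, 6), the product of absolute shape entries is m = 6, so the wildcard resolves to 24 // 6 = 4. A standalone check of the same arithmetic:

import numpy as np
shape = np.array((-1, 6))
n = np.prod([2, 3, 4])           # 24 elements to distribute
m = np.prod(np.abs(shape))      # 6; the -1 contributes a factor of 1
shape[shape == -1] = n // m     # wildcard becomes 4
print(tuple(shape))             # (4, 6)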
'Connects the module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_preserve_dims+1, ...]. Returns: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_reshape_1, b_reshape_2, ...], with reshaping defined by the constructor `shape` parameter. Raises: Valu...
def _build(self, inputs):
  full_input_shape = inputs.get_shape().as_list()
  if len(full_input_shape) < self._preserve_dims:
    raise ValueError(
        'Input tensor has {} dimensions, should have at least as many as '
        'preserve_dims={}'.format(len(full_input_shape), self._preserve_dims))
  self._input_sh...
'Returns transpose batch reshape.'
def transpose(self, name=None):
  if name is None:
    name = self.module_name + '_transpose'
  return BatchReshape(shape=lambda: self.input_shape,
                      preserve_dims=self._preserve_dims,
                      name=name)
'Constructs a BatchFlatten module. Args: preserve_dims: Number of leading dimensions that will not be reshaped. For example, given an input Tensor with shape `[B, H, W, C]`: * `preserve_dims=1` will return a Tensor with shape `[B, H*W*C]`. * `preserve_dims=2` will return a Tensor with shape `[B, H, W*C]`. * `preserve_d...
def __init__(self, preserve_dims=1, name='batch_flatten'):
  super(BatchFlatten, self).__init__(
      shape=(-1,), preserve_dims=preserve_dims, name=name)
'Constructs a FlattenTrailingDimensions module. For example, given an input Tensor with shape `[B, H, W, C]`: * `dim_from=1` will return a Tensor with shape `[B, H*W*C]`. * `dim_from=2` will return a Tensor with shape `[B, H, W*C]`. * `dim_from=3` will return the input itself. * `dim_from=4` will return a Tensor with s...
def __init__(self, dim_from, name='batch_dim_from'):
  if dim_from <= 0:
    raise ValueError('Argument dim_from should be >= 1.')
  super(FlattenTrailingDimensions, self).__init__(
      shape=(-1,), preserve_dims=dim_from, name=name)
'Constructs a TrainableVariable module. Args: shape: Tensor shape. dtype: Tensor data type. initializers: Optional dictionary containing ops to initialize the weight Tensor, with key \'w\'. partitioners: Optional dict containing a partitioner to partition the weight (with key \'w\'). As a default, no partitioner is use...
def __init__(self, shape, dtype=tf.float32, initializers=None, partitioners=None, regularizers=None, name='trainable_variable'):
  super(TrainableVariable, self).__init__(name=name)
  self._shape = tuple(shape)
  self._dtype = dtype
  self._initializers = util.check_initializers(
      initializers, self.POSSIBLE_INITIALIZER_KEYS)
  self._partitioners = util.check_partitioners(
      partitioners, self.POSSIBLE_INITIALIZER_KEYS)
  self._regularize...
'Connects the TrainableTensor module into the graph. Returns: A Tensor of shape as determined in the constructor.'
def _build(self):
  if 'w' not in self._initializers:
    stddev = 1 / math.sqrt(np.prod(self._shape))
    self._initializers['w'] = tf.truncated_normal_initializer(stddev=stddev)
  self._w = tf.get_variable(
      'w',
      shape=self._shape,
      dtype=self._dtype,
      initializer=self._initializers['w'],
      partitioner=self._partitioners.get...
'Returns the Variable containing the weights Tensor. Returns: Variable object containing the weights, from the most recent __call__. Raises: base.Error: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property
def w(self):
  self._ensure_is_connected()
  return self._w
'Constructor of the module. Args: module_or_op: Module or tensorflow op to apply to an input tensor. n_dims: Number of dimensions to merge before using module on the input of BatchApply. input_example_index: Index of input that has same shape for the first `n_dims` dimensions as `module_or_op` output(s). This is used f...
def __init__(self, module_or_op, n_dims=2, input_example_index=0, name='batch_apply'):
  super(BatchApply, self).__init__(name=name)
  if not isinstance(n_dims, int):
    raise TypeError('n_dims should be an integer, it is a %s instead.'
                    % type(n_dims))
  if n_dims <= 0:
    raise ValueError('n_dims should be greater than zero.')
  self._mod...
'Connects the BatchApply module into the graph. Args: *args: a Tensor or a nested list or dictionary of Tensors. The input tensors will have their first dimensions merged, then an op or a module will be called on the input. The first dimension of the output tensor(s) will be split again based on the leading dimensions ...
def _build(self, *args, **kwargs):
  flattened = nest.flatten_iterable([args, kwargs])
  merged_flattened = [
      merge_leading_dims(inp, self._n_dims) if inp is not None else None
      for inp in flattened]
  merged_args, merged_kwargs = nest.pack_iterable_as(
      [args, kwargs], merged_flattened)
  results = self._module(*merged_args, **merged_kwargs)
  ...
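A typical use is applying a per-timestep module across a [time, batch, ...] tensor by merging the two leading dimensions. A hedged sketch, assuming the `snt.BatchApply` wrapper built here:

seq = tf.placeholder(tf.float32, [20, 32, 128])       # [time, batch, features]
per_step = snt.BatchApply(snt.Linear(output_size=64), n_dims=2)
out = per_step(seq)                                   # [20, 32, 64]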
'Constructs the `SliceByDim` module. Args: dims: The dimensions to slice along, as a list of unique integers. Negative integers index from the final dimension backwards, as in python arrays. begin: The beginning indices of the slicing, as a list of integers. Must be the same length as the `dims` list. size: The size of...
def __init__(self, dims, begin, size, name='slice_by_dim'):
  super(SliceByDim, self).__init__(name=name)
  self._dims = dims
  self._begin = begin
  self._size = size
  if np.unique(dims).size != len(dims):
    raise ValueError('dims must not have any repeated integers.')
  if len(begin) != len(dims):
    raise ValueError('begin mus...
'Connects the SliceByDim module into the graph. Args: inputs: `Tensor` to slice. Its rank must be greater than the maximum dimension specified in `dims` (plus one as python is 0 indexed). Returns: The sliced tensor. Raises: ValueError: If `inputs` tensor has insufficient rank.'
def _build(self, inputs):
  shape_inputs = inputs.get_shape().as_list()
  rank = len(shape_inputs)
  max_dim = np.max(self._dims) + 1
  if rank < max_dim:
    raise ValueError('Rank of inputs must be at least {}.'.format(max_dim))
  full_begin = [0] * rank
  full_size = [-1] * rank
  for (dim, be...
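A hedged usage sketch, assuming the `snt.SliceByDim` module built here (dimensions not listed in `dims` are kept whole via begin 0 and size -1):

x = tf.placeholder(tf.float32, [None, 4, 6])
slicer = snt.SliceByDim(dims=[1, 2], begin=[0, 2], size=[2, 3])
y = slicer(x)  # [None, 2, 3]: rows 0-1 along dim 1, columns 2-4 along dim 2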
'Constructs the `TileByDim` module. Args: dims: The dimensions to tile along, as a list of unique integers. multiples: The multiple of the tiling, as a list of integers. Must be the same length as the `dims` list. name: The name of the module. Raises: ValueError: If `dims` has non-unique integers, or if the size of `mu...
def __init__(self, dims, multiples, name='tile_by_dim'):
  super(TileByDim, self).__init__(name=name)
  self._dims = dims
  self._multiples = multiples
  if np.unique(dims).size != len(dims):
    raise ValueError('dims must not have any repeated integers.')
  if len(multiples) != len(dims):
    raise ValueError('multiples must ha...
'Connects the `TileByDim` module into the graph. Args: inputs: `Tensor` to tile. Returns: The tiled tensor.'
def _build(self, inputs):
  shape_inputs = inputs.get_shape().as_list()
  rank = len(shape_inputs)
  full_multiples = [1] * rank
  for dim, multiple in zip(self._dims, self._multiples):
    full_multiples[dim] = multiple
  return tf.tile(inputs, multiples=full_multiples)
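A hedged usage sketch, assuming the `snt.TileByDim` module built here (dimensions not listed in `dims` keep a multiple of 1):

x = tf.placeholder(tf.float32, [None, 4, 6])
tiler = snt.TileByDim(dims=[1], multiples=[2])
y = tiler(x)  # [None, 8, 6]: dim 1 repeated twice, other dims untouched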
'Constructs the MergeDims module. Args: start: Start of the range of dimensions to merge. size: Size of the range of dimensions to merge. name: The name of the module. Raises: ValueError: If `size` is not strictly greater than 1.'
def __init__(self, start, size, name='merge_dims'):
  super(MergeDims, self).__init__(name=name)
  self._start = start
  self._size = size
  if size <= 1:
    raise ValueError('`size` should be strictly greater than 1.')
'Connects the MergeDims module into the graph. Args: inputs: Tensor or a nested list of Tensors to merge. Its rank must be greater than or equal to `start` + `size`. Returns: The merged Tensor or a nested list of merged Tensors. Raises: ValueError: If any of the `inputs` tensors has insufficient rank.'
def _build(self, inputs):
  if nest.is_sequence(inputs):
    merged_tensors = [self._merge(tensor) for tensor in nest.flatten(inputs)]
    return nest.pack_sequence_as(inputs, merged_tensors)
  return self._merge(inputs)
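A hedged sketch of the merge, assuming the `snt.MergeDims` module built here:

x = tf.placeholder(tf.float32, [20, 32, 128])
merge = snt.MergeDims(start=0, size=2)
y = merge(x)  # [640, 128]: dims 0 and 1 merged into one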