The argument `tensors` can be a list or a dictionary of tensors.
The value returned by the function will be of the same type
as `tensors`.
The tensors entering this function are put into the bucket given by
`which_bucket`. Each bucket has its own queue. When a bucket contains
`batch_size` elements, this minibatch is pushed onto a top queue. The
tensors returned from this function are the result of dequeueing the
next minibatch from this top queue.
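As an illustrative sketch (the element tensors, bucket boundaries, and batch size here are hypothetical, not part of the API), variable-length sequences might be bucketed by length like this:

```python
import tensorflow as tf

# Hypothetical single element: a variable-length int sequence and its
# length. In practice these would come from an input reader/parser.
length = tf.random_uniform([], minval=1, maxval=50, dtype=tf.int32)
sequence = tf.random_uniform([length], maxval=100, dtype=tf.int32)

# Map the length to one of num_buckets buckets of width 10.
num_buckets = 5
which_bucket = tf.minimum(length // 10, num_buckets - 1)

# Accumulate 32 same-bucket elements before a minibatch is emitted.
bucket, outputs = tf.contrib.training.bucket(
    tensors=[sequence, length],
    which_bucket=which_bucket,
    batch_size=32,
    num_buckets=num_buckets,
    dynamic_pad=True)  # sequences vary in length within a bucket
```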
This function is implemented using several queues. A `QueueRunner` for the
queues is added to the current `Graph`'s `QUEUE_RUNNER` collection.
As the returned tensors are the result of a dequeue operation, evaluating
them will throw a `tf.errors.OutOfRangeError` when the input queue is
exhausted. If these tensors are feeding another input queue, its queue runner
will catch this exception; however, if they are used in your main thread
you are responsible for catching it yourself.
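For the main-thread case, a typical pattern (a sketch, reusing `bucket` and `outputs` from the example above) is:

```python
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            bucket_id, batch = sess.run([bucket, outputs])
            # ... consume the minibatch ...
    except tf.errors.OutOfRangeError:
        # Input queue exhausted; all minibatches have been dequeued.
        pass
    finally:
        coord.request_stop()
        coord.join(threads)
```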
Note: If `dynamic_pad` is `False`, you must ensure that either (i) the
`shapes` argument is passed, or (ii) all of the tensors in `tensors` have
fully-defined shapes. `ValueError` will be raised if neither of these
conditions holds.

If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
tensors is known, but individual dimensions may have shape `None`.
In this case, for each enqueue the dimensions with value `None`
may have a variable length; upon dequeue, the output tensors will be padded
on the right to the maximum shape of the tensors in the current minibatch.
For numbers, this padding takes value 0. For strings, this padding is
the empty string. See `PaddingFIFOQueue` for more info.
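The padding behaviour can be seen directly on a `PaddingFIFOQueue` (a minimal sketch; the values are arbitrary):

```python
q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
enq_a = q.enqueue([tf.constant([1, 2, 3])])
enq_b = q.enqueue([tf.constant([4, 5])])
batch = q.dequeue_many(2)  # both elements padded to the longest, length 3

with tf.Session() as sess:
    sess.run([enq_a, enq_b])
    print(sess.run(batch))  # [[1 2 3] [4 5 0]]
```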
If `allow_smaller_final_batch` is `True`, a smaller batch value than
`batch_size` is returned when the queues are closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`get_shape()` method, will have a 0th `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
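As a sketch of that consequence (assuming the call above were made with `allow_smaller_final_batch=True`), the batch dimension has to be read dynamically:

```python
padded_sequences = outputs[0]           # batched sequences from bucket()
print(padded_sequences.get_shape()[0])  # prints "?" (statically unknown)
actual_batch_size = tf.shape(padded_sequences)[0]  # known only at run time
```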
Args

- `tensors`: The list or dictionary of tensors, representing a single element,
  to bucket. Nested lists are not supported.
- `which_bucket`: An `int32` scalar `Tensor` taking a value in
  `[0, num_buckets)`.
- `batch_size`: The new batch size pulled from the queue (all queues will have
  the same size). If a list is passed in, then each bucket will have a
  different `batch_size`. (A python int, `int32` scalar, or iterable of
  integers of length `num_buckets`.)
- `num_buckets`: A python integer, the number of buckets.
- `num_threads`: An integer. The number of threads enqueuing `tensors`.
- `capacity`: An integer. The maximum number of minibatches in the top queue,
  and also (by default) the maximum number of elements within each bucket.
- `bucket_capacities`: (Optional) None or a list of integers, the capacities
  of each bucket. If None, `capacity` is used (default). If specified, it must
  be a list of integers of length `num_buckets`: the i-th element is used as
  the capacity for the i-th bucket queue.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred
  shapes for `tensors`.
- `dynamic_pad`: Boolean. Allow variable dimensions in input shapes.
  The given dimensions are padded upon dequeue so that tensors within a
  batch have the same shapes.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final
  batches to be smaller if there are insufficient items left in the queues.
- `keep_input`: A `bool` scalar `Tensor`. If provided, this tensor controls
  whether the input is added to the queue or not. If it evaluates `True`,
  then `tensors` are added to the bucket; otherwise they are dropped. This
  tensor essentially acts as a filtering mechanism (see the sketch after
  this list).
- `shared_name`: (Optional) If set, the queues will be shared under the given
  name across multiple sessions.
- `name`: (Optional) A name for the operations.
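For example, `keep_input` can be used to drop over-long elements instead of bucketing them (a sketch, reusing the hypothetical names from the first example):

```python
keep = tf.less_equal(length, 40)  # filter out sequences longer than 40

bucket, outputs = tf.contrib.training.bucket(
    tensors=[sequence, length],
    which_bucket=which_bucket,
    batch_size=32,
    num_buckets=num_buckets,
    dynamic_pad=True,
    keep_input=keep)
```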
Returns

A tuple `(bucket, outputs)` where `bucket` is an `int32` scalar tensor and
`outputs` is a list or dictionary of batched outputs corresponding to
elements of `tensors`. Every step will receive a new bucket of outputs.
Raises

- `ValueError`: If the `shapes` are not specified and cannot be inferred from
  the elements of `tensors`; if `batch_size` is a sequence but its
  length != `num_buckets`; or if `bucket_capacities` is not `None` but its
  length != `num_buckets`.
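Finally, since `batch_size` and `bucket_capacities` may be given per bucket, here is a sketch that would raise `ValueError` if either list's length differed from `num_buckets`:

```python
bucket, outputs = tf.contrib.training.bucket(
    tensors=[sequence, length],
    which_bucket=which_bucket,
    batch_size=[16, 16, 32, 32, 64],            # one batch size per bucket
    num_buckets=5,
    bucket_capacities=[64, 64, 128, 128, 256],  # one capacity per bucket
    dynamic_pad=True)
```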
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2020-10-01 UTC."],[],[],null,["# tf.contrib.training.bucket\n\n\u003cbr /\u003e\n\n|---------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/training/python/training/bucket_ops.py#L63-L300) |\n\nLazy bucketing of input tensors according to `which_bucket`. \n\n tf.contrib.training.bucket(\n tensors, which_bucket, batch_size, num_buckets, num_threads=1, capacity=32,\n bucket_capacities=None, shapes=None, dynamic_pad=False,\n allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None\n )\n\nThe argument `tensors` can be a list or a dictionary of tensors.\nThe value returned by the function will be of the same type\nas `tensors`.\n\nThe tensors entering this function are put into the bucket given by\n`which_bucket`. Each bucket has its own queue. When a bucket contains\n`batch_size` elements, this minibatch is pushed onto a top queue. The\ntensors returned from this function are a the result of dequeueing the\nnext minibatch from this top queue.\n\nThis function is implemented using several queues. A `QueueRunner` for the\nqueues is added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\nAs the returned tensors are the result of a dequeue operation, evaluating\nthem will throw a [`tf.errors.OutOfRangeError`](../../../tf/errors/OutOfRangeError) when the input queue is\nexhausted. If these tensors are feeding another input queue, its queue runner\nwill catch this exception, however, if they are used in your main thread\nyou are responsible for catching this yourself.\n| **Note:** If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` must have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.\n\nIf `dynamic_pad` is `True`, it is sufficient that the *rank* of the\ntensors is known, but individual dimensions may have shape `None`.\nIn this case, for each enqueue the dimensions with value `None`\nmay have a variable length; upon dequeue, the output tensors will be padded\non the right to the maximum shape of the tensors in the current minibatch.\nFor numbers, this padding takes value 0. For strings, this padding is\nthe empty string. 
See `PaddingFIFOQueue` for more info.\n\nIf `allow_smaller_final_batch` is `True`, a smaller batch value than\n`batch_size` is returned when the queues are closed and there are not enough\nelements to fill the batch, otherwise the pending elements are discarded.\nIn addition, all output tensors' static shapes, as accessed via the\n`get_shape()` method will have a 0th `Dimension` value of `None`, and\noperations that depend on fixed batch_size would fail.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `tensors` | The list or dictionary of tensors, representing a single element, to bucket. Nested lists are not supported. |\n| `which_bucket` | An `int32` scalar Tensor taking a value in `[0, num_buckets)`. |\n| `batch_size` | The new batch size pulled from the queue (all queues will have the same size). If a list is passed in then each bucket will have a different batch_size. (python int, int32 scalar or iterable of integers of length num_buckets). |\n| `num_buckets` | A python integer, the number of buckets. |\n| `num_threads` | An integer. The number of threads enqueuing `tensors`. |\n| `capacity` | An integer. The maximum number of minibatches in the top queue, and also (by default) the maximum number of elements within each bucket. |\n| `bucket_capacities` | (Optional) None or a list of integers, the capacities of each bucket. If None, capacity is used (default). If specified, it must be a list of integers of length num_buckets: the i-th element is used as capacity for the i-th bucket queue. |\n| `shapes` | (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`. |\n| `dynamic_pad` | Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. |\n| `allow_smaller_final_batch` | (Optional) Boolean. If `True`, allow the final batches to be smaller if there are insufficient items left in the queues. |\n| `keep_input` | A `bool` scalar Tensor. If provided, this tensor controls whether the input is added to the queue or not. If it evaluates `True`, then `tensors` are added to the bucket; otherwise they are dropped. This tensor essentially acts as a filtering mechanism. |\n| `shared_name` | (Optional). If set, the queues will be shared under the given name across multiple sessions. |\n| `name` | (Optional) A name for the operations. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A tuple `(bucket, outputs)` where `bucket` is a `int32` scalar tensor and `outputs` is a list or dictionary of batched outputs corresponding to elements of `tensors`. Every step will receive a new bucket of outputs. 
||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `ValueError` | If the `shapes` are not specified, and cannot be inferred from the elements of `tensors` or if batch_size is a sequence but its length != num_buckets. Also if bucket_capacities is not None but its length != num_buckets. |\n\n\u003cbr /\u003e"]]