tf.keras.preprocessing.sequence.TimeseriesGenerator
Utility class for generating batches of temporal data.
Inherits From: Sequence
tf.keras.preprocessing.sequence.TimeseriesGenerator(
data, targets, length, sampling_rate=1, stride=1, start_index=0, end_index=None,
shuffle=False, reverse=False, batch_size=128
)
This class takes in a sequence of data-points gathered at
equal intervals, along with time series parameters such as
stride, length of history, etc., to produce batches for
training/validation.
Arguments:
data: Indexable generator (such as a list or Numpy array) containing
consecutive data points (timesteps). The data should be 2D, and axis 0 is
expected to be the time dimension.
targets: Targets corresponding to timesteps in data. It should have the same
length as data.
length: Length of the output sequences (in number of timesteps).
sampling_rate: Period between successive individual timesteps within
sequences. For rate r, timesteps data[i], data[i-r], ... data[i - length]
are used to create a sample sequence.
stride: Period between successive output sequences. For stride s, consecutive
output samples would be centered around data[i], data[i+s], data[i+2*s], etc.
start_index: Data points earlier than start_index will not be used in the
output sequences. This is useful to reserve part of the data for test or
validation.
end_index: Data points later than end_index will not be used in the output
sequences. This is useful to reserve part of the data for test or validation.
shuffle: Whether to shuffle output samples, or instead draw them in
chronological order.
reverse: Boolean: if True, timesteps in each output sample will be in reverse
chronological order.
batch_size: Number of timeseries samples in each batch (except maybe the
last one).
Returns:
A Sequence instance.
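Because the returned object is a Keras Sequence, it can be passed directly to Model.fit to drive training. The following is a minimal end-to-end sketch; the random arrays, the model architecture, and the variable names (train_gen, model) are illustrative assumptions, not part of this API:
import numpy as np
import tensorflow as tf

# Illustrative random data: 100 timesteps with 3 features each, and a
# scalar target per timestep (assumed values, not from this page).
data = np.random.random((100, 3)).astype("float32")
targets = np.random.random((100, 1)).astype("float32")

train_gen = tf.keras.preprocessing.sequence.TimeseriesGenerator(
    data, targets, length=10, batch_size=16)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(10, 3)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Sequence instances can be passed directly to Model.fit.
model.fit(train_gen, epochs=2)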
Examples:
from keras.preprocessing.sequence import TimeseriesGenerator
import numpy as np
data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])
data_gen = TimeseriesGenerator(data, targets,
length=10, sampling_rate=2,
batch_size=2)
assert len(data_gen) == 20
batch_0 = data_gen[0]
x, y = batch_0
assert np.array_equal(x,
np.array([[[0], [2], [4], [6], [8]],
[[1], [3], [5], [7], [9]]]))
assert np.array_equal(y,
np.array([[10], [11]]))
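The stride and reverse arguments change which windows are drawn and how each window is ordered. The sketch below reuses the data, targets, and imports from the example above; the name data_gen_strided and the expected values are illustrative, worked out from the argument descriptions rather than taken from this page:
data_gen_strided = TimeseriesGenerator(data, targets,
                                       length=10, sampling_rate=2,
                                       stride=3, reverse=True,
                                       batch_size=2)
x, y = data_gen_strided[0]
# With stride=3, consecutive samples start 3 timesteps apart; with
# reverse=True, each window is returned in reverse chronological order.
assert np.array_equal(x,
                      np.array([[[8], [6], [4], [2], [0]],
                                [[11], [9], [7], [5], [3]]]))
assert np.array_equal(y,
                      np.array([[10], [13]]))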
Methods
get_config
View source
get_config()
Returns the TimeseriesGenerator configuration as Python dictionary.
Returns:
A Python dictionary with the TimeseriesGenerator configuration.
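A quick sketch, reusing data_gen from the example above and assuming the dictionary keys mirror the constructor argument names:
config = data_gen.get_config()
# Inspect individual settings from the configuration dictionary.
assert config['length'] == 10
assert config['sampling_rate'] == 2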
on_epoch_end
View source
on_epoch_end()
Method called at the end of every epoch.
to_json
View source
to_json(
**kwargs
)
Returns a JSON string containing the timeseries generator
configuration. To load a generator from a JSON string, use
keras.preprocessing.sequence.timeseries_generator_from_json(json_string).
Arguments:
**kwargs: Additional keyword arguments to be passed to json.dumps().
Returns:
A JSON string containing the timeseries generator configuration.
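A minimal round-trip sketch, reusing data_gen from the example above (restored_gen is an illustrative name):
from keras.preprocessing.sequence import timeseries_generator_from_json

json_string = data_gen.to_json()
restored_gen = timeseries_generator_from_json(json_string)
# The restored generator yields the same number of batches as the original.
assert len(restored_gen) == len(data_gen)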
__getitem__
View source
__getitem__(
index
)
__iter__
View source
__iter__()
Creates a generator that iterates over the Sequence.
__len__
View source
__len__()
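Together, these methods make the generator behave like an indexable, iterable sequence of batches. A minimal sketch, reusing data_gen from the example above (the shapes in the comment follow from that example's batch_size=2, length=10, and sampling_rate=2):
assert len(data_gen) == 20         # __len__: number of batches
x, y = data_gen[3]                 # __getitem__: batch at position 3
for x, y in data_gen:              # __iter__: iterate over batches in order
    print(x.shape, y.shape)        # (2, 5, 1) (2, 1)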