tf.raw_ops.GRUBlockCell
Computes the GRU cell forward propagation for 1 time step.
tf.raw_ops.GRUBlockCell(
x, h_prev, w_ru, w_c, b_ru, b_c, name=None
)
Args
x: Input to the GRU cell.
h_prev: State input from the previous GRU cell.
w_ru: Weight matrix for the reset and update gate.
w_c: Weight matrix for the cell connection gate.
b_ru: Bias vector for the reset and update gate.
b_c: Bias vector for the cell connection gate.
Returns
r: Output of the reset gate.
u: Output of the update gate.
c: Output of the cell connection gate.
h: Current state of the GRU cell.
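
The parameter shapes are implied by the equations below: with x of shape [batch, input_size] and h_prev of shape [batch, cell_size], w_ru has shape [input_size + cell_size, 2 * cell_size], w_c has shape [input_size + cell_size, cell_size], and each bias matches the output width of its weight matrix. The following minimal sketch calls the op directly; the sizes and random values are illustrative only.

import tensorflow as tf

batch_size, input_size, cell_size = 2, 3, 4  # illustrative sizes

x = tf.random.normal([batch_size, input_size])   # float32 is the only supported dtype
h_prev = tf.zeros([batch_size, cell_size])

# Gate parameters act on the concatenation [x, h_prev]; the biases follow the
# documented initializers (ones for b_ru, zeros for b_c).
w_ru = tf.random.normal([input_size + cell_size, 2 * cell_size])
w_c = tf.random.normal([input_size + cell_size, cell_size])
b_ru = tf.ones([2 * cell_size])   # constant_initializer(1.0)
b_c = tf.zeros([cell_size])       # constant_initializer(0.0)

r, u, c, h = tf.raw_ops.GRUBlockCell(
    x=x, h_prev=h_prev, w_ru=w_ru, w_c=w_c, b_ru=b_ru, b_c=b_c
)
print(h.shape)  # (2, 4)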
Note on notation of the variables:
Concatenation of a and b is represented by a_b.
The element-wise (Hadamard) product of a and b is written ab in variable names and a \circ b in the equations.
Matrix multiplication is represented by *.
Biases are initialized with:
b_ru - constant_initializer(1.0)
b_c - constant_initializer(0.0)
This kernel op implements the following mathematical equations:
x_h_prev = [x, h_prev]
[r_bar, u_bar] = x_h_prev * w_ru + b_ru
r = sigmoid(r_bar)
u = sigmoid(u_bar)
h_prevr = h_prev \circ r
x_h_prevr = [x, h_prevr]
c_bar = x_h_prevr * w_c + b_c
c = tanh(c_bar)
h = (1-u) \circ c + u \circ h_prev
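
The equations translate directly into array code. The sketch below is a plain NumPy transcription of the math above, not the registered kernel; it assumes the [r_bar, u_bar] block stores the reset-gate columns first, matching the r/u ordering in the names w_ru and b_ru.

import numpy as np

def gru_block_cell_reference(x, h_prev, w_ru, w_c, b_ru, b_c):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x_h_prev = np.concatenate([x, h_prev], axis=1)      # [x, h_prev]
    r_bar, u_bar = np.split(x_h_prev @ w_ru + b_ru, 2, axis=1)
    r = sigmoid(r_bar)                                  # reset gate
    u = sigmoid(u_bar)                                  # update gate
    x_h_prevr = np.concatenate([x, h_prev * r], axis=1) # [x, h_prev \circ r]
    c = np.tanh(x_h_prevr @ w_c + b_c)                  # candidate state
    h = (1 - u) * c + u * h_prev                        # new state
    return r, u, c, h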
Args

x: A Tensor. Must be one of the following types: float32.
h_prev: A Tensor. Must have the same type as x.
w_ru: A Tensor. Must have the same type as x.
w_c: A Tensor. Must have the same type as x.
b_ru: A Tensor. Must have the same type as x.
b_c: A Tensor. Must have the same type as x.
name: A name for the operation (optional).

Returns

A tuple of Tensor objects (r, u, c, h).

r: A Tensor. Has the same type as x.
u: A Tensor. Has the same type as x.
c: A Tensor. Has the same type as x.
h: A Tensor. Has the same type as x.