The following structures are available globally.
-
A concatenation of two sequences with the same element type.
Declaration
public struct Concatenation<Base1: Sequence, Base2: Sequence>: Sequence where Base1.Element == Base2.Element
extension Concatenation: Collection where Base1: Collection, Base2: Collection
extension Concatenation: BidirectionalCollection where Base1: BidirectionalCollection, Base2: BidirectionalCollection
extension Concatenation: RandomAccessCollection where Base1: RandomAccessCollection, Base2: RandomAccessCollection
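A brief usage sketch. The two-argument initializer shown here is an assumption for illustration; the library may instead expose a helper such as a concatenate function:

```swift
// Lazily chain two integer sequences without copying either base.
// NOTE: the `Concatenation(_:_:)` initializer is an assumption.
let joined = Concatenation([1, 2, 3], [4, 5, 6])
for element in joined {
    print(element)  // 1, 2, 3, 4, 5, 6
}
```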
-
A rotated view onto a collection.
Declaration
public struct RotatedCollection<Base> : Collection where Base : Collection
extension RotatedCollection: BidirectionalCollection where Base: BidirectionalCollection
extension RotatedCollection: RandomAccessCollection where Base: RandomAccessCollection
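As a sketch, assuming the library exposes a `rotated(shiftingToStart:)` convenience on Collection (the helper name is an assumption):

```swift
// View [1, 2, 3, 4, 5] starting from index 2, wrapping around.
// NOTE: `rotated(shiftingToStart:)` is an assumed convenience method.
let numbers = [1, 2, 3, 4, 5]
let start = numbers.index(numbers.startIndex, offsetBy: 2)
let rotated = numbers.rotated(shiftingToStart: start)
// Iterates 3, 4, 5, 1, 2 without copying the base array.
```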
-
A type-erased differentiable value.
Declaration
public struct AnyDifferentiable : Differentiable
-
A type-erased derivative value.
The AnyDerivative type forwards its operations to an arbitrary underlying base derivative value conforming to Differentiable and AdditiveArithmetic, hiding the specifics of the underlying value.
Declaration
@frozen public struct AnyDerivative : Differentiable & AdditiveArithmetic
-
A multidimensional array of elements that is a generalization of vectors and matrices to potentially higher dimensions.
The generic parameter Scalar describes the type of scalars in the tensor (such as Int32, Float, etc.).
Declaration
@frozen public struct Tensor<Scalar> where Scalar : TensorFlowScalar
extension Tensor: Collatable
extension Tensor: CopyableToDevice
extension Tensor: AnyTensor
extension Tensor: ExpressibleByArrayLiteral
extension Tensor: CustomStringConvertible
extension Tensor: CustomPlaygroundDisplayConvertible
extension Tensor: CustomReflectable
extension Tensor: TensorProtocol
extension Tensor: TensorGroup
extension Tensor: ElementaryFunctions where Scalar: TensorFlowFloatingPoint
extension Tensor: VectorProtocol where Scalar: TensorFlowFloatingPoint
extension Tensor: Mergeable where Scalar: TensorFlowFloatingPoint
extension Tensor: Equatable where Scalar: Equatable
extension Tensor: Codable where Scalar: Codable
extension Tensor: AdditiveArithmetic where Scalar: Numeric
extension Tensor: PointwiseMultiplicative where Scalar: Numeric
extension Tensor: Differentiable & EuclideanDifferentiable where Scalar: TensorFlowFloatingPoint
extension Tensor: DifferentiableTensorProtocol where Scalar: TensorFlowFloatingPoint
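A minimal sketch of common Tensor operations, assuming the TensorFlow module is importable:

```swift
import TensorFlow

let t = Tensor<Float>([[1, 2], [3, 4]])  // shape [2, 2]
let doubled = t * 2                      // elementwise: [[2, 4], [6, 8]]
let total = t.sum()                      // 0-d tensor holding 10.0
print(t.shape)                           // [2, 2]
```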
-
A pullback function that performs the transpose of broadcasting two Tensors.
Declaration
public struct BroadcastingPullback
-
A context that stores thread-local contextual information used by deep learning APIs such as layers.
Use Context.local to retrieve the current thread-local context.
Examples:
- Set the current learning phase to training so that layers like BatchNorm will compute mean and variance when applied to inputs.
Context.local.learningPhase = .training
- Set the current learning phase to inference so that layers like Dropout will not drop out units when applied to inputs.
Context.local.learningPhase = .inference
Declaration
public struct Context
-
A 1-D convolution layer (e.g. temporal convolution over a time-series).
This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct Conv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 2-D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct Conv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
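A minimal sketch of constructing and applying a Conv2D layer; parameter labels follow the Swift for TensorFlow API, but treat the exact defaults as assumptions:

```swift
import TensorFlow

// A 3x3 convolution mapping 3 input channels to 16 output channels.
let conv = Conv2D<Float>(filterShape: (3, 3, 3, 16), padding: .same, activation: relu)
let images = Tensor<Float>(zeros: [8, 32, 32, 3])  // [batch, height, width, channels]
let features = conv(images)                        // shape [8, 32, 32, 16] with .same padding
```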
-
A 3-D convolution layer for spatial/spatio-temporal convolution over images.
This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct Conv3D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 1-D transposed convolution layer (e.g. temporal transposed convolution over a time-series).
This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct TransposedConv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 2-D transposed convolution layer (e.g. spatial transposed convolution over images).
This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct TransposedConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 3-D transposed convolution layer (e.g. spatial/spatio-temporal transposed convolution over images).
This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct TransposedConv3D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 2-D depthwise convolution layer.
This layer creates separable convolution filters that are convolved with the layer input to produce a tensor of outputs.
Declaration
@frozen public struct DepthwiseConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A layer for adding zero-padding in the temporal dimension.
Declaration
public struct ZeroPadding1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A layer for adding zero-padding in the spatial dimensions.
Declaration
public struct ZeroPadding2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A layer for adding zero-padding in the spatial/spatio-temporal dimensions.
Declaration
public struct ZeroPadding3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A 1-D separable convolution layer.
This layer performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels.
Declaration
@frozen public struct SeparableConv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 2-D separable convolution layer.
This layer performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels.
Declaration
@frozen public struct SeparableConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A flatten layer.
A flatten layer flattens the input when applied without affecting the batch size.
Declaration
@frozen public struct Flatten<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A reshape layer.
Declaration
@frozen public struct Reshape<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A layer that encloses a custom differentiable function.
Declaration
public struct Function<Input, Output> : ParameterlessLayer where Input : Differentiable, Output : Differentiable
-
A TensorFlow dynamic type value that can be created from types that conform to TensorFlowScalar.
Declaration
public struct TensorDataType : Equatable
-
A bfloat16 (brain floating-point) scalar type.
Declaration
@frozen public struct BFloat16
extension BFloat16: TensorFlowScalar
extension BFloat16: XLAScalarType
-
Represents a potentially large set of elements.
A Dataset can be used to represent an input pipeline as a collection of element tensors.
Declaration
@available(*, deprecated, message: "Datasets will be removed in S4TF v0.10. Please use the new Batches API instead.") @frozen public struct Dataset<Element> where Element : TensorGroup
extension Dataset: Sequence
-
The type that allows iteration over a dataset’s elements.
Declaration
@available(*, deprecated) @frozen public struct DatasetIterator<Element> where Element : TensorGroup
extension DatasetIterator: IteratorProtocol
-
A 2-tuple-like struct that conforms to TensorGroup and represents a tuple of 2 types conforming to TensorGroup.
Declaration
@frozen public struct Zip2TensorGroup<T, U> : TensorGroup where T : TensorGroup, U : TensorGroup
-
A densely-connected neural network layer.
Dense implements the operation activation(matmul(input, weight) + bias), where weight is a weight matrix, bias is a bias vector, and activation is an element-wise activation function.
This layer also supports 3-D weight tensors with 2-D bias matrices. In this case, the first dimension of both is treated as the batch size that is aligned with the first dimension of input, and the batch variant of the matmul(_:_:) operation is used, thus using a different weight and bias for each element in the input batch.
Declaration
@frozen public struct Dense<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
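A minimal sketch of the operation described above, assuming the TensorFlow module is importable:

```swift
import TensorFlow

// Computes activation(matmul(input, weight) + bias) with a ReLU activation.
let layer = Dense<Float>(inputSize: 4, outputSize: 2, activation: relu)
let batch = Tensor<Float>(ones: [3, 4])  // 3 examples, 4 features each
let output = layer(batch)                // shape [3, 2]
```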
-
A device on which Tensors can be allocated.
Declaration
public struct Device
extension Device: Equatable
extension Device: CustomStringConvertible
-
A dropout layer.
Dropout consists of randomly setting a fraction of input units to 0 at each update during training time, which helps prevent overfitting.
Declaration
@frozen public struct Dropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
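A sketch of how the layer interacts with the thread-local learning phase (see Context above); the exact rescaling behavior during training is an implementation detail:

```swift
import TensorFlow

let dropout = Dropout<Float>(probability: 0.5)
let x = Tensor<Float>(ones: [2, 4])

Context.local.learningPhase = .training
let noisy = dropout(x)      // a random fraction of units is zeroed

Context.local.learningPhase = .inference
let unchanged = dropout(x)  // the input passes through unmodified
```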
-
GaussianNoise adds noise sampled from a normal distribution.
The noise added always has mean zero, but has a configurable standard deviation.
Declaration
public struct GaussianNoise<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
GaussianDropout multiplies the input with noise sampled from a normal distribution with mean 1.0.
Because this is a regularization layer, it is only active during training time. During inference, GaussianDropout passes through the input unmodified.
Declaration
public struct GaussianDropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An Alpha dropout layer.
Alpha Dropout is a Dropout variant that keeps the mean and variance of inputs at their original values, in order to ensure the self-normalizing property even after this dropout. Alpha Dropout fits well with Scaled Exponential Linear Units by randomly setting activations to the negative saturation value.
Source: Self-Normalizing Neural Networks: https://arxiv.org/abs/1706.02515
Declaration
@frozen public struct AlphaDropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An embedding layer.
Embedding is effectively a lookup table that maps indices from a fixed vocabulary to fixed-size (dense) vector representations, e.g. [[0], [3]] -> [[0.25, 0.1], [0.6, -0.2]].
Declaration
public struct Embedding<Scalar> : Module where Scalar : TensorFlowFloatingPoint
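A minimal sketch of the lookup described above, assuming the TensorFlow module is importable; the returned vectors are randomly initialized, so their values are illustrative only:

```swift
import TensorFlow

let embedding = Embedding<Float>(vocabularySize: 10, embeddingSize: 4)
let indices = Tensor<Int32>([[0], [3]])
let vectors = embedding(indices)  // shape [2, 1, 4]: one 4-vector per index
```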
-
An empty struct representing empty TangentVectors for parameterless layers.
Declaration
public struct EmptyTangentVector: EuclideanDifferentiable, VectorProtocol, ElementaryFunctions, PointwiseMultiplicative, KeyPathIterable
-
Pair of first and second moments (i.e., mean and variance).
Note
This is needed because tuple types are not differentiable.
Declaration
public struct Moments<Scalar> : Differentiable where Scalar : TensorFlowFloatingPoint
-
A 2-D morphological dilation layer.
This layer returns the morphological dilation of the input tensor with the provided filters.
Declaration
@frozen public struct Dilation2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A 2-D morphological erosion layer.
This layer returns the morphological erosion of the input tensor with the provided filters.
Declaration
@frozen public struct Erosion2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A lazy selection of elements, in a given order, from some base collection.
Declaration
public struct Sampling<Base: Collection, Selection: Collection> where Selection.Element == Base.Index
extension Sampling: SamplingProtocol
extension Sampling: Collection
extension Sampling: BidirectionalCollection where Selection: BidirectionalCollection
extension Sampling: RandomAccessCollection where Selection: RandomAccessCollection
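A brief sketch; the `base:selection:` initializer labels are an assumption for illustration:

```swift
// NOTE: the `Sampling(base:selection:)` initializer labels are assumed.
let letters = ["a", "b", "c", "d", "e"]
let picks = Sampling(base: letters, selection: [4, 0, 2])
// Lazily iterates "e", "a", "c" without copying `letters`.
```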
-
A collection of the longest non-overlapping contiguous slices of some Base collection, starting with its first element, and having some fixed maximum length.
The elements of this collection, except for the last, all have a count of batchSize, unless base.count % batchSize != 0, in which case the last batch's count is base.count % batchSize.
Declaration
public struct Slices<Base> where Base : Collection
extension Slices: Collection
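A sketch of the batching arithmetic described above; `inBatches(of:)` is assumed to be the Batches API helper that produces a `Slices` view:

```swift
// NOTE: `inBatches(of:)` is an assumed helper producing `Slices`.
let numbers = Array(0..<10)
for batch in numbers.inBatches(of: 3) {
    print(Array(batch))
}
// [0, 1, 2], [3, 4, 5], [6, 7, 8], then [9]:
// since 10 % 3 != 0, the last batch's count is 10 % 3 == 1.
```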
-
A batch normalization layer.
Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
Reference: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
Declaration
@frozen public struct BatchNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A layer that applies layer normalization over a mini-batch of inputs.
Reference: Layer Normalization.
Declaration
@frozen public struct LayerNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A layer that applies group normalization over a mini-batch of inputs.
Reference: Group Normalization.
Declaration
@frozen public struct GroupNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
A layer that applies instance normalization over a mini-batch of inputs.
Reference: Instance Normalization: The Missing Ingredient for Fast Stylization.
Declaration
@frozen public struct InstanceNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
-
State for a single step of a single weight inside an optimizer.
Declaration
public struct OptimizerWeightStepState
-
Global state accessed through StateAccessor.
Declaration
public struct OptimizerState
-
A [String: Float] dictionary whose elements can be accessed as though they were members.
Declaration
@dynamicMemberLookup public struct HyperparameterDictionary
-
An optimizer that works on a single parameter group.
Declaration
public struct ParameterGroupOptimizer
-
A type-safe wrapper around an Int index value for optimizer local values.
Declaration
public struct LocalAccessor
-
A type-safe wrapper around an Int index value for optimizer global values.
Declaration
public struct GlobalAccessor
-
A type-safe wrapper around an Int index value for optimizer state values.
Declaration
public struct StateAccessor
-
Builds a ParameterGroupOptimizer. This is used at essentially the level of a single weight in the model. A mapping from parameter groups (selected by [Bool]) to ParameterGroupOptimizer defines the final optimizer.
Declaration
public struct ParameterGroupOptimizerBuilder
-
A max pooling layer for temporal data.
Declaration
@frozen public struct MaxPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A max pooling layer for spatial data.
Declaration
@frozen public struct MaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A max pooling layer for spatial or spatio-temporal data.
Declaration
@frozen public struct MaxPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An average pooling layer for temporal data.
Declaration
@frozen public struct AvgPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An average pooling layer for spatial data.
Declaration
@frozen public struct AvgPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An average pooling layer for spatial or spatio-temporal data.
Declaration
@frozen public struct AvgPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global average pooling layer for temporal data.
Declaration
@frozen public struct GlobalAvgPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global average pooling layer for spatial data.
Declaration
@frozen public struct GlobalAvgPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global average pooling layer for spatial and spatio-temporal data.
Declaration
@frozen public struct GlobalAvgPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global max pooling layer for temporal data.
Declaration
@frozen public struct GlobalMaxPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global max pooling layer for spatial data.
Declaration
@frozen public struct GlobalMaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A global max pooling layer for spatial and spatio-temporal data.
Declaration
@frozen public struct GlobalMaxPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
A fractional max pooling layer for spatial data.
Note: FractionalMaxPool does not have an XLA implementation, and thus may have performance implications.
Declaration
@frozen public struct FractionalMaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
PythonObject represents an object in Python and supports dynamic member lookup. Any member access like object.foo will dynamically request the Python runtime for a member with the specified name in this object.
PythonObject is passed to and returned from all Python function calls and member references. It supports standard Python arithmetic and comparison operators.
Internally, PythonObject is implemented as a reference-counted pointer to a Python C API PyObject.
Declaration
@dynamicCallable @dynamicMemberLookup public struct PythonObject
extension PythonObject : CustomStringConvertible
extension PythonObject : CustomPlaygroundDisplayConvertible
extension PythonObject : CustomReflectable
extension PythonObject : PythonConvertible, ConvertibleFromPython
extension PythonObject : SignedNumeric
extension PythonObject : Strideable
extension PythonObject : Equatable, Comparable
extension PythonObject : Hashable
extension PythonObject : MutableCollection
extension PythonObject : Sequence
extension PythonObject : ExpressibleByBooleanLiteral, ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral, ExpressibleByStringLiteral
extension PythonObject : ExpressibleByArrayLiteral, ExpressibleByDictionaryLiteral
-
A PythonObject wrapper that enables throwing method calls. Exceptions produced by Python functions are reflected as Swift errors and thrown.
Note
It is intentional that ThrowingPythonObject does not have the @dynamicCallable attribute because the call syntax is unintuitive: x.throwing(arg1, arg2, ...). The methods will still be named dynamicallyCall until further discussion/design.
Declaration
public struct ThrowingPythonObject
-
A PythonObject wrapper that enables member accesses. Member access operations return an Optional result. When member access fails, nil is returned.
Declaration
@dynamicMemberLookup public struct CheckingPythonObject
-
An interface for Python.
PythonInterface allows interaction with Python. It can be used to import modules and dynamically access Python builtin types and functions.
Note
It is not intended for PythonInterface to be initialized directly. Instead, please use the global instance of PythonInterface called Python.
Declaration
@dynamicMemberLookup public struct PythonInterface
-
Declaration
public struct PythonLibrary
-
A type-erased random number generator.
The AnyRandomNumberGenerator type forwards random number generating operations to an underlying random number generator, hiding its specific underlying type.
Declaration
public struct AnyRandomNumberGenerator : RandomNumberGenerator
-
An implementation of SeedableRandomNumberGenerator using ARC4.
ARC4 is a stream cipher that generates a pseudo-random stream of bytes. This PRNG uses the seed as its key.
ARC4 is described in Schneier, B., “Applied Cryptography: Protocols, Algorithms, and Source Code in C”, 2nd Edition, 1996.
An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.
Declaration
@frozen public struct ARC4RandomNumberGenerator : SeedableRandomNumberGenerator
-
An implementation of SeedableRandomNumberGenerator using Threefry. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3. http://www.thesalmons.org/john/random123/papers/random123sc11.pdf
This struct implements a 20-round Threefry2x32 PRNG. It must be seeded with a 64-bit value.
An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.
Declaration
public struct ThreefryRandomNumberGenerator : SeedableRandomNumberGenerator
-
An implementation of SeedableRandomNumberGenerator using Philox. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3. http://www.thesalmons.org/john/random123/papers/random123sc11.pdf
This struct implements a 10-round Philox4x32 PRNG. It must be seeded with a 64-bit value.
An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.
Declaration
public struct PhiloxRandomNumberGenerator : SeedableRandomNumberGenerator
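A sketch of the reproducibility that seeding provides; the `uint64Seed:` initializer label is an assumption, so check the actual signature:

```swift
// Two generators seeded identically produce identical streams.
// NOTE: the `uint64Seed:` label is an assumption.
var a = PhiloxRandomNumberGenerator(uint64Seed: 42)
var b = PhiloxRandomNumberGenerator(uint64Seed: 42)
assert(a.next() == b.next())
assert(a.next() == b.next())
```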
-
Declaration
@frozen public struct UniformIntegerDistribution<T> : RandomDistribution where T : FixedWidthInteger
-
Declaration
@frozen public struct UniformFloatingPointDistribution<T: BinaryFloatingPoint>: RandomDistribution where T.RawSignificand: FixedWidthInteger
-
Declaration
@frozen public struct NormalDistribution<T: BinaryFloatingPoint>: RandomDistribution where T.RawSignificand: FixedWidthInteger
-
Declaration
@frozen public struct BetaDistribution : RandomDistribution
-
An input to a recurrent neural network.
Declaration
public struct RNNCellInput<Input, State> : Differentiable where Input : Differentiable, State : Differentiable
extension RNNCellInput: EuclideanDifferentiable where Input: EuclideanDifferentiable, State: EuclideanDifferentiable
-
An output from a recurrent neural network.
Declaration
public struct RNNCellOutput<Output, State> : Differentiable where Output : Differentiable, State : Differentiable
extension RNNCellOutput: EuclideanDifferentiable where Output: EuclideanDifferentiable, State: EuclideanDifferentiable
-
A basic RNN cell.
Declaration
public struct BasicRNNCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint
-
An LSTM cell.
Declaration
public struct LSTMCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint
-
A GRU cell.
Declaration
public struct GRUCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint
-
Declaration
public struct RecurrentLayer<Cell> : Layer where Cell : RecurrentLayerCell
extension RecurrentLayer: Equatable where Cell: Equatable
extension RecurrentLayer: AdditiveArithmetic where Cell: AdditiveArithmetic
-
Declaration
public struct BidirectionalRecurrentLayer<Cell: RecurrentLayerCell>: Layer where Cell.TimeStepOutput: Mergeable
-
A layer that sequentially composes two or more other layers.
Examples:
- Build a simple 2-layer perceptron model for MNIST:
let inputSize = 28 * 28
let hiddenSize = 300
var classifier = Sequential {
    Dense<Float>(inputSize: inputSize, outputSize: hiddenSize, activation: relu)
    Dense<Float>(inputSize: hiddenSize, outputSize: 3, activation: identity)
}
- Build an autoencoder for MNIST:
var autoencoder = Sequential {
    // The encoder.
    Dense<Float>(inputSize: 28 * 28, outputSize: 128, activation: relu)
    Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
    Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
    Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
    // The decoder.
    Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
    Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
    Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
    Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth, activation: tanh)
}
-
Declaration
@_functionBuilder public struct LayerBuilder
-
ShapedArray is a multi-dimensional array. It has a shape, which has type [Int] and defines the array dimensions, and uses a TensorBuffer internally as storage.
Declaration
@frozen public struct ShapedArray<Scalar> : _ShapedArrayProtocol
extension ShapedArray: RandomAccessCollection, MutableCollection
extension ShapedArray: CustomStringConvertible
extension ShapedArray: CustomPlaygroundDisplayConvertible
extension ShapedArray: CustomReflectable
extension ShapedArray: ExpressibleByArrayLiteral where Scalar: TensorFlowScalar
extension ShapedArray: Equatable where Scalar: Equatable
extension ShapedArray: Hashable where Scalar: Hashable
extension ShapedArray: Codable where Scalar: Codable
-
A contiguous slice of a ShapedArray or ShapedArraySlice instance.
ShapedArraySlice enables fast, efficient operations on contiguous slices of ShapedArray instances. ShapedArraySlice instances do not have their own storage. Instead, they provide a view onto the storage of their base ShapedArray. ShapedArraySlice can represent two different kinds of slices: element arrays and subarrays.
Element arrays are subdimensional elements of a ShapedArray: their rank is one less than that of their base. Element array slices are obtained by indexing a ShapedArray instance with a singular Int32 index.
For example:
var matrix = ShapedArray(shape: [2, 2], scalars: [0, 1, 2, 3])
// `matrix` represents [[0, 1], [2, 3]].

let element = matrix[0]
// `element` is a `ShapedArraySlice` with shape [2]. It is an element
// array, specifically the first element in `matrix`: [0, 1].

matrix[1] = ShapedArraySlice(shape: [2], scalars: [4, 8])
// The second element in `matrix` has been mutated.
// `matrix` now represents [[0, 1], [4, 8]].
Subarrays are a contiguous range of the elements in a ShapedArray. The rank of a subarray is the same as that of its base, but its leading dimension is the count of the slice range. Subarray slices are obtained by indexing a ShapedArray with a Range<Int32> that represents a range of elements (in the leading dimension). Methods like prefix(:) and suffix(:) that internally index with a range also produce subarrays.
For example:
let zeros = ShapedArray(repeating: 0, shape: [3, 2])
var matrix = ShapedArray(shape: [3, 2], scalars: Array(0..<6))
// `zeros` represents [[0, 0], [0, 0], [0, 0]].
// `matrix` represents [[0, 1], [2, 3], [4, 5]].

let subarray = matrix.prefix(2)
// `subarray` is a `ShapedArraySlice` with shape [2, 2]. It is a slice
// of the first 2 elements in `matrix` and represents [[0, 1], [2, 3]].

matrix[0..<2] = zeros.prefix(2)
// The first 2 elements in `matrix` have been mutated.
// `matrix` now represents [[0, 0], [0, 0], [4, 5]].
Declaration
@frozen public struct ShapedArraySlice<Scalar> : _ShapedArrayProtocol
extension ShapedArraySlice: RandomAccessCollection, MutableCollection
extension ShapedArraySlice: CustomStringConvertible
extension ShapedArraySlice: CustomPlaygroundDisplayConvertible
extension ShapedArraySlice: CustomReflectable
extension ShapedArraySlice: ExpressibleByArrayLiteral where Scalar: TensorFlowScalar
extension ShapedArraySlice: Equatable where Scalar: Equatable
extension ShapedArraySlice: Hashable where Scalar: Hashable
extension ShapedArraySlice: Codable where Scalar: Codable
-
StringTensor is a multi-dimensional array whose elements are Strings.
Declaration
@frozen public struct StringTensor
extension StringTensor: TensorGroup
-
TensorHandle is the type used by ops. It includes a Scalar type, which compiler internals can use to determine the datatypes of parameters when they are extracted into a tensor program.
Declaration
public struct TensorHandle<Scalar> where Scalar : _TensorFlowDataTypeCompatible
extension TensorHandle: TensorGroup
-
Declaration
public struct ResourceHandle
extension ResourceHandle: TensorGroup
-
Declaration
public struct VariantHandle
extension VariantHandle: TensorGroup
-
A struct representing the shape of a tensor.
TensorShape is a thin wrapper around an array of integers that represent shape dimensions. All tensor types use TensorShape to represent their shape.
Declaration
@frozen public struct TensorShape : ExpressibleByArrayLiteral
extension TensorShape: Collection, MutableCollection
extension TensorShape: RandomAccessCollection
extension TensorShape: RangeReplaceableCollection
extension TensorShape: Equatable
extension TensorShape: Codable
extension TensorShape: CustomStringConvertible
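A minimal sketch, assuming the TensorFlow module is importable; the `rank` and `contiguousSize` property names follow the Swift for TensorFlow API:

```swift
import TensorFlow

var shape: TensorShape = [2, 3, 4]  // ExpressibleByArrayLiteral
print(shape.rank)                   // 3
print(shape.contiguousSize)         // 24, the total number of scalars
shape[0] = 8                        // MutableCollection: dimensions can be edited in place
```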
-
TensorVisitorPlan approximates [WritableKeyPath<Base, Tensor<Float>>] but is more efficient. This is useful for writing generic optimizers that want to map over the gradients, the existing weights, and an index which can be used to find auxiliary stored weights. This is slightly more efficient (~2x), but it could be better because it trades off slightly higher overheads (an extra pointer dereference) against not having to do the O(depth_of_tree) work required with a plain list to track down each individual KeyPath.
Declaration
public struct TensorVisitorPlan<Base>
-
An upsampling layer for 1-D inputs.
Declaration
@frozen public struct UpSampling1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An upsampling layer for 2-D inputs.
Declaration
@frozen public struct UpSampling2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
An upsampling layer for 3-D inputs.
Declaration
@frozen public struct UpSampling3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint
-
Collects correct prediction counters and loss totals.
Declaration
public struct HostStatistics