Abstract class for DoFns that need the shared models.
tfma.utils.DoFnWithModels(
    model_loaders: Dict[str, tfma.types.ModelLoader]
)
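For orientation, a minimal sketch of how a subclass might look, assuming the base class loads each ModelLoader during setup() and exposes the results on an attribute such as self._loaded_models (the attribute name, the 'candidate' key, and the predict call below are illustrative, not confirmed by this page):

```python
import tensorflow_model_analysis as tfma


class PredictDoFn(tfma.utils.DoFnWithModels):
  """Hypothetical subclass that runs inference with a single shared model."""

  def process(self, element):
    # Assumption: the base class's setup() loads each ModelLoader and keeps
    # the results in self._loaded_models, keyed by the names passed to
    # __init__ (verify the attribute name against the TFMA source).
    model = self._loaded_models['candidate']
    yield model.predict(element)  # illustrative; depends on the loaded model type


# Possible wiring (illustrative): reuse the loader from an EvalSharedModel.
# shared_model = tfma.default_eval_shared_model(eval_saved_model_path='...')
# do_fn = PredictDoFn(model_loaders={'candidate': shared_model.model_loader})
```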
Child Classes
Methods
default_label
default_label()
default_type_hints
default_type_hints()
display_data
display_data()
Returns the display data associated with a pipeline component.
It should be reimplemented in pipeline components that wish to have static display data.
Returns | |
---|---|
Dict[str, Any]: A dictionary containing key:value pairs. The value might be an integer, float or string value; a DisplayDataItem for values that have more data (e.g. short value, label, url); or a HasDisplayData instance that has more display data that should be picked up. For example: { 'key1': 'string_value', 'key2': 1234, 'key3': 3.14159265, 'key4': DisplayDataItem('apache.org', url='http://apache.org'), 'key5': subComponent } |
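As a concrete illustration, a DoFn that wants static display data might override the method like this (the keys and values are illustrative; DisplayDataItem comes from apache_beam.transforms.display):

```python
import apache_beam as beam
from apache_beam.transforms.display import DisplayDataItem


class MyDoFn(beam.DoFn):
  def display_data(self):
    # Static display data surfaced by pipeline monitoring UIs.
    return {
        'model_name': 'candidate',
        'batch_size': 64,
        'docs': DisplayDataItem('apache.org', url='http://apache.org',
                                label='Documentation'),
    }
```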
finish_bundle
finish_bundle()
Called after a bundle of elements is processed on a worker.
from_callable
@staticmethod
from_callable( fn )
from_runner_api
@classmethod
from_runner_api( fn_proto: beam_runner_api_pb2.FunctionSpec, context: 'PipelineContext' ) -> RunnerApiFnT
Converts from a FunctionSpec to a Fn object.
Prefer registering a urn with its parameter type and constructor.
get_function_arguments
get_function_arguments(
func
)
get_input_batch_type
get_input_batch_type(
input_element_type
) -> typing.Optional[typing.Union[TypeConstraint, type]]
Determine the batch type expected as input to process_batch.
The default implementation of get_input_batch_type simply observes the input typehint for the first parameter of process_batch. A Batched DoFn may override this method if a dynamic approach is required.
Args | |
---|---|
input_element_type | The element type of the input PCollection this DoFn is being applied to. |

Returns | |
---|---|
None if this DoFn cannot accept batches, else a Beam typehint or a native Python typehint. |
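A hedged sketch of the default behaviour, using numpy batches for illustration: the annotation on the first parameter of process_batch is what get_input_batch_type reports.

```python
from typing import Iterator

import numpy as np
import apache_beam as beam


class MultiplyByTwo(beam.DoFn):
  def process_batch(self, batch: np.ndarray, *args,
                    **kwargs) -> Iterator[np.ndarray]:
    # Operates on a whole batch of elements at once.
    yield batch * 2


# The default implementation reports the np.ndarray annotation above; the
# argument is the element type of the input PCollection.
MultiplyByTwo().get_input_batch_type(np.int64)
```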
get_output_batch_type
get_output_batch_type(
input_element_type
) -> typing.Optional[typing.Union[TypeConstraint, type]]
Determine the batch type produced by this DoFn's process_batch implementation and/or its process implementation with @yields_batches.

The default implementation of this method observes the return type annotations on process_batch and/or process. A Batched DoFn may override this method if a dynamic approach is required.
Args | |
---|---|
input_element_type | The element type of the input PCollection this DoFn is being applied to. |

Returns | |
---|---|
None if this DoFn will never yield batches, else a Beam typehint or a native Python typehint. |
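Where a static return annotation is not enough, the dynamic approach mentioned above is to override the hook directly. A hedged sketch (types chosen purely for illustration):

```python
import numpy as np
import apache_beam as beam


class MirrorBatchType(beam.DoFn):
  def process_batch(self, batch, *args, **kwargs):
    yield batch

  def get_input_batch_type(self, input_element_type):
    # Accept numpy arrays regardless of the input element type.
    return np.ndarray

  def get_output_batch_type(self, input_element_type):
    # Dynamic approach: the produced batch type mirrors whatever batch type
    # this DoFn consumes, instead of relying on a static return annotation.
    return self.get_input_batch_type(input_element_type)
```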
get_type_hints
get_type_hints()
Gets and/or initializes type hints for this object.
If type hints have not been set, attempts to initialize type hints in this order:
- Using self.default_type_hints().
- Using self.__class__ type hints.
infer_output_type
infer_output_type(
input_type
)
process
process(
elem
)
Method to use for processing elements.
This is invoked by DoFnRunner for each element of an input PCollection.

The following parameters can be used as default values on process arguments to indicate that a DoFn accepts the corresponding parameters. For example, a DoFn might accept the element and its timestamp with the following signature:

def process(element=DoFn.ElementParam, timestamp=DoFn.TimestampParam): ...
The full set of parameters is:
- DoFn.ElementParam: element to be processed, should not be mutated.
- DoFn.SideInputParam: a side input that may be used when processing.
- DoFn.TimestampParam: timestamp of the input element.
- DoFn.WindowParam: Window the input element belongs to.
- DoFn.TimerParam: a userstate.RuntimeTimer object defined by the spec of the parameter.
- DoFn.StateParam: a userstate.RuntimeState object defined by the spec of the parameter.
- DoFn.KeyParam: key associated with the element.
- DoFn.RestrictionParam: an iobase.RestrictionTracker will be provided here to allow treatment as a SplittableDoFn. The restriction tracker will be derived from the restriction provider in the parameter.
- DoFn.WatermarkEstimatorParam: a function that can be used to track the output watermark of SplittableDoFn implementations.
- DoFn.BundleContextParam: allows a shared context manager to be used per bundle.
- DoFn.SetupContextParam: allows a shared context manager to be used per DoFn.
Args | |
---|---|
element | The element to be processed. |
*args | side inputs |
**kwargs | other keyword arguments. |

Returns | |
---|---|
An Iterable of output elements or None. |
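A small runnable sketch of the parameter defaults described above, injecting the element, its timestamp, and its window:

```python
import apache_beam as beam


class DescribeElement(beam.DoFn):
  def process(self,
              element=beam.DoFn.ElementParam,
              timestamp=beam.DoFn.TimestampParam,
              window=beam.DoFn.WindowParam):
    # The default values tell Beam which runtime values to inject.
    yield {'element': element, 'timestamp': timestamp, 'window': str(window)}


with beam.Pipeline() as p:
  _ = (p
       | beam.Create([1, 2, 3])
       | beam.ParDo(DescribeElement())
       | beam.Map(print))
```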
process_batch
process_batch(
batch, *args, **kwargs
)
register_pickle_urn
@classmethod
register_pickle_urn( pickle_urn )
Registers and implements the given urn via pickling.
register_urn
@classmethod
register_urn( urn, parameter_type, fn=None )
Registers a urn with a constructor.
For example, if 'beam:fn:foo' had parameter type FooPayload, one could write RunnerApiFn.register_urn('beam:fn:foo', FooPayload, foo_from_proto) where foo_from_proto took as arguments a FooPayload and a PipelineContext. This function can also be used as a decorator rather than passing the callable in as the final parameter.

A corresponding to_runner_api_parameter method would be expected that returns the tuple ('beam:fn:foo', FooPayload).
setup
setup()
Called to prepare an instance for processing bundles of elements.
This is a good place to initialize transient in-memory resources, such as network connections. The resources can then be disposed of in DoFn.teardown.
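A hedged lifecycle sketch using an in-memory SQLite connection as the transient resource (any per-worker client would follow the same pattern):

```python
import sqlite3

import apache_beam as beam


class RecordElements(beam.DoFn):
  def setup(self):
    # Runs once per DoFn instance before any bundle is processed: a good
    # place for transient resources such as connections.
    self._conn = sqlite3.connect(':memory:', check_same_thread=False)
    self._conn.execute('CREATE TABLE seen (value TEXT)')

  def process(self, element):
    self._conn.execute('INSERT INTO seen VALUES (?)', (str(element),))
    yield element

  def teardown(self):
    # Best-effort cleanup; may not be called after a crash (see teardown below).
    self._conn.close()
```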
start_bundle
start_bundle()
Called before a bundle of elements is processed on a worker.
Elements to be processed are split into bundles and distributed to workers. Before a worker calls process() on the first element of its bundle, it calls this method.
teardown
teardown()
Called to clean up this instance before it is discarded.
A runner will do its best to call this method on any given instance to prevent leaks of transient resources, however, there may be situations where this is impossible (e.g. process crash, hardware failure, etc.) or unnecessary (e.g. the pipeline is shutting down and the process is about to be killed anyway, so all transient resources will be released automatically by the OS). In these cases, the call may not happen. It will also not be retried, because in such situations the DoFn instance no longer exists, so there's no instance to retry it on.
Thus, all work that depends on input elements, and all externally important side effects, must be performed in DoFn.process or DoFn.finish_bundle.
to_runner_api
to_runner_api(
context: 'PipelineContext'
) -> beam_runner_api_pb2.FunctionSpec
Returns a FunctionSpec encoding this Fn.
Prefer overriding self.to_runner_api_parameter.
to_runner_api_parameter
to_runner_api_parameter(
context
)
unbounded_per_element
@staticmethod
unbounded_per_element()
A decorator on a process fn specifying that the fn performs an unbounded amount of work per input element.
with_input_types
with_input_types(
*arg_hints, **kwarg_hints
) -> WithTypeHintsT
with_output_types
with_output_types(
*arg_hints, **kwarg_hints
) -> WithTypeHintsT
yields_batches
@staticmethod
yields_batches( fn )
A decorator to apply to process indicating it yields batches.

By default process is assumed to both consume and produce individual elements, one at a time. This decorator indicates that process produces "batches", which are collections of multiple logical Beam elements.
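A hedged sketch of the decorator, with numpy arrays as the batch type; infer_output_type is overridden because the element type of the produced batches is not otherwise declared (all names here are illustrative):

```python
from typing import Iterator

import numpy as np
import apache_beam as beam


class RangeToBatch(beam.DoFn):
  @beam.DoFn.yields_batches
  def process(self, element: int, *args, **kwargs) -> Iterator[np.ndarray]:
    # Consumes one element at a time, but each yield is a whole batch.
    yield np.arange(element, dtype=np.int64)

  def infer_output_type(self, input_element_type):
    # Declare the logical (element-wise) type of the produced batches.
    return np.int64
```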
yields_elements
@staticmethod
yields_elements( fn )
A decorator to apply to process_batch indicating it yields elements.

By default process_batch is assumed to both consume and produce "batches", which are collections of multiple logical Beam elements. This decorator indicates that process_batch produces individual elements, one at a time. process_batch is always expected to consume batches.
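And the converse: a hedged sketch where process_batch consumes numpy batches but yields individual elements (types chosen for illustration):

```python
from typing import Iterator

import numpy as np
import apache_beam as beam


class PositiveValues(beam.DoFn):
  @beam.DoFn.yields_elements
  def process_batch(self, batch: np.ndarray, *args,
                    **kwargs) -> Iterator[int]:
    # Consumes a whole batch, but emits logical elements one at a time.
    for value in batch[batch > 0]:
      yield int(value)
```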