In this notebook-based tutorial, we will create and run a TFX pipeline which builds a simple classification model and analyzes its performance across multiple runs. This notebook is based on the TFX pipeline we built in the Simple TFX Pipeline Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.
As you tweak your model or train it with a new dataset, you need to check whether your model has improved or gotten worse. Just checking top-level metrics like accuracy might not be enough. Every trained model should be evaluated before it is pushed to production.
We will add an Evaluator component to the pipeline created in the previous tutorial. The Evaluator component performs deep analysis on your models and compares the new model against a baseline to determine whether it is "good enough". It is implemented using the TensorFlow Model Analysis library.
Please see Understanding TFX Pipelines to learn more about various concepts in TFX.
Setup
The setup process is the same as in the previous tutorial.
We first need to install the TFX Python package and download the dataset which we will use for our model.
Upgrade Pip
To avoid upgrading Pip in a system when running locally, make sure that we are running in Colab. Local systems can of course be upgraded separately.
try:
  import colab
  !pip install --upgrade pip
except:
  pass
Install TFX
pip install -U tfx
Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime by clicking the "RESTART RUNTIME" button above or using the "Runtime > Restart runtime ..." menu. This is because of the way that Colab loads packages.
Check the TensorFlow and TFX versions.
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
TensorFlow version: 2.6.2
TFX version: 1.4.0
Set up variables
There are some variables used to define a pipeline. You can customize these variables as you want. By default, all output from the pipeline will be generated under the current directory.
import os
PIPELINE_NAME = "penguin-tfma"
# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
Prepare example data
We will use the same Palmer Penguins dataset.
There are four numeric features in this dataset which were already normalized to have range [0,1]. We will build a classification model which predicts the species of penguins.
Because TFX ExampleGen reads inputs from a directory, we need to create a directory and copy the dataset into it.
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
('/tmp/tfx-datal5lxy_yw/data.csv', <http.client.HTTPMessage at 0x7fa18a9da150>)
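Before building the pipeline, it can be worth a quick sanity check that the file downloaded as expected. The following short snippet (an illustration added here, not part of the original notebook) prints the header and the first rows:
# Peek at the downloaded CSV to confirm the expected columns:
# the species label plus the four normalized numeric features.
with open(_data_filepath) as f:
  for _ in range(3):
    print(f.readline().strip())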
Create a pipeline
We will add an Evaluator component to the pipeline we created in the Simple TFX Pipeline Tutorial.
An Evaluator component requires input data from an ExampleGen component and a model from a Trainer component, along with a tfma.EvalConfig object. We can optionally supply a baseline model which can be used to compare metrics with the newly trained model.
An Evaluator creates two kinds of output artifacts, ModelEvaluation and ModelBlessing. ModelEvaluation contains the detailed evaluation result which can be investigated and visualized with the TFMA library. ModelBlessing contains a boolean result indicating whether the model passed the given criteria, and it can be used in later components like a Pusher as a signal.
Write model training code
We will use the same model code as in the Simple TFX Pipeline Tutorial.
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple

from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils

from tfx.components.trainer.executor import TrainerFnArgs
from tfx.components.trainer.fn_args_utils import DataAccessor
from tfx_bsl.tfxio import dataset_options
from tensorflow_metadata.proto.v0 import schema_pb2

_FEATURE_KEYS = [
    'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'

_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10

# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
    **{
        feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
        for feature in _FEATURE_KEYS
    },
    _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}


def _input_fn(file_pattern: List[str],
              data_accessor: DataAccessor,
              schema: schema_pb2.Schema,
              batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    schema: schema of the input data.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
  """
  return data_accessor.tf_dataset_factory(
      file_pattern,
      dataset_options.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key=_LABEL_KEY),
      schema=schema).repeat()


def _build_keras_model() -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
  # The model below is built with Functional API, please refer to
  # https://www.tensorflow.org/guide/keras/overview for all API options.
  inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
  d = keras.layers.concatenate(inputs)
  for _ in range(2):
    d = keras.layers.Dense(8, activation='relu')(d)
  outputs = keras.layers.Dense(3)(d)

  model = keras.Model(inputs=inputs, outputs=outputs)
  model.compile(
      optimizer=keras.optimizers.Adam(1e-2),
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      metrics=[keras.metrics.SparseCategoricalAccuracy()])

  model.summary(print_fn=logging.info)
  return model


# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """

  # This schema is usually either an output of SchemaGen or a manually-curated
  # version provided by pipeline author. A schema can also be derived from a
  # TFT graph if a Transform component is used. In the case when either is
  # missing, `schema_from_feature_spec` could be used to generate a schema
  # from a very simple feature_spec, but the schema returned would be very
  # primitive.
  schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)

  train_dataset = _input_fn(
      fn_args.train_files,
      fn_args.data_accessor,
      schema,
      batch_size=_TRAIN_BATCH_SIZE)
  eval_dataset = _input_fn(
      fn_args.eval_files,
      fn_args.data_accessor,
      schema,
      batch_size=_EVAL_BATCH_SIZE)

  model = _build_keras_model()
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps)

  # The result of the training should be saved in `fn_args.serving_model_dir`
  # directory.
  model.save(fn_args.serving_model_dir, save_format='tf')
Writing penguin_trainer.py
Write a pipeline definition
We define a function to create a TFX pipeline. In addition to the Evaluator component we mentioned above, we will add one more node called Resolver. To check that a new model is getting better than the previous one, we need to compare it against a previously published model, called a baseline. ML Metadata (MLMD) tracks all previous artifacts of the pipeline, and Resolver can find the latest blessed model (a model that passed the Evaluator successfully) in MLMD, using a strategy class called LatestBlessedModelStrategy.
import tensorflow_model_analysis as tfma

def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, serving_model_dir: str,
                     metadata_path: str) -> tfx.dsl.Pipeline:
  """Creates a three component penguin pipeline with TFX."""
  # Brings data into the pipeline.
  example_gen = tfx.components.CsvExampleGen(input_base=data_root)

  # Uses user-provided Python function that trains a model.
  trainer = tfx.components.Trainer(
      module_file=module_file,
      examples=example_gen.outputs['examples'],
      train_args=tfx.proto.TrainArgs(num_steps=100),
      eval_args=tfx.proto.EvalArgs(num_steps=5))

  # NEW: Get the latest blessed model for Evaluator.
  model_resolver = tfx.dsl.Resolver(
      strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
      model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
      model_blessing=tfx.dsl.Channel(
          type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
              'latest_blessed_model_resolver')

  # NEW: Uses TFMA to compute evaluation statistics over features of a model
  #   and perform quality validation of a candidate model (compared to a
  #   baseline).
  eval_config = tfma.EvalConfig(
      model_specs=[tfma.ModelSpec(label_key='species')],
      slicing_specs=[
          # An empty slice spec means the overall slice, i.e. the whole dataset.
          tfma.SlicingSpec(),
          # Calculate metrics for each penguin species.
          tfma.SlicingSpec(feature_keys=['species']),
      ],
      metrics_specs=[
          tfma.MetricsSpec(per_slice_thresholds={
              'sparse_categorical_accuracy':
                  tfma.PerSliceMetricThresholds(thresholds=[
                      tfma.PerSliceMetricThreshold(
                          slicing_specs=[tfma.SlicingSpec()],
                          threshold=tfma.MetricThreshold(
                              value_threshold=tfma.GenericValueThreshold(
                                  lower_bound={'value': 0.6}),
                              # Change threshold will be ignored if there is no
                              # baseline model resolved from MLMD (first run).
                              change_threshold=tfma.GenericChangeThreshold(
                                  direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                  absolute={'value': -1e-10}))
                      )]),
          })],
      )
  evaluator = tfx.components.Evaluator(
      examples=example_gen.outputs['examples'],
      model=trainer.outputs['model'],
      baseline_model=model_resolver.outputs['model'],
      eval_config=eval_config)

  # Checks whether the model passed the validation steps and pushes the model
  # to a file destination if check passed.
  pusher = tfx.components.Pusher(
      model=trainer.outputs['model'],
      model_blessing=evaluator.outputs['blessing'],  # Pass an evaluation result.
      push_destination=tfx.proto.PushDestination(
          filesystem=tfx.proto.PushDestination.Filesystem(
              base_directory=serving_model_dir)))

  components = [
      example_gen,
      trainer,
      # Following two components were added to the pipeline.
      model_resolver,
      evaluator,
      pusher,
  ]

  return tfx.dsl.Pipeline(
      pipeline_name=pipeline_name,
      pipeline_root=pipeline_root,
      metadata_connection_config=tfx.orchestration.metadata
      .sqlite_metadata_connection_config(metadata_path),
      components=components)
We need to supply the following information to the Evaluator via eval_config:
- Additional metrics to configure (if you want more metrics than the ones defined in the model).
- Slices to configure.
- Model validation thresholds to verify whether validation is to be included.
Because SparseCategoricalAccuracy was already included in the model.compile() call, it will be included in the analysis automatically, so we do not add any additional metrics here. SparseCategoricalAccuracy will also be used to decide whether the model is good enough.
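If you did need more metrics than those compiled into the model, they can be listed explicitly in a MetricsSpec. A minimal sketch (an illustration, not part of the pipeline above), using standard TFMA metric class names:
# Hypothetical variation: request extra metrics beyond those from model.compile().
# 'ExampleCount' is a TFMA metric; 'SparseCategoricalCrossentropy' is resolved
# by name from the Keras metric classes.
extra_metrics = tfma.MetricsSpec(metrics=[
    tfma.MetricConfig(class_name='ExampleCount'),
    tfma.MetricConfig(class_name='SparseCategoricalCrossentropy'),
])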
We compute the metrics for the whole dataset and for each penguin species. SlicingSpec specifies how we aggregate the declared metrics.
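Slices can also target specific feature values. A small illustrative sketch follows; the processed CSV used here has only the four numeric features plus species, so treat these variations as examples of the API rather than recommendations for this dataset:
# Illustrative slicing variations:
example_slices = [
    tfma.SlicingSpec(),                                 # overall dataset
    tfma.SlicingSpec(feature_keys=['species']),         # one slice per species
    tfma.SlicingSpec(feature_values={'species': '0'}),  # a single species only
]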
There are two thresholds that a new model has to pass: one is an absolute threshold of 0.6 and the other is a relative threshold requiring it to be higher than the baseline model. When you run the pipeline for the first time, the change_threshold will be ignored and only the value_threshold will be checked. If you run the pipeline more than once, the Resolver will find a model from the previous run and it will be used as the baseline model for the comparison.
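Conceptually, the two gates combine like the following sketch (plain Python for illustration; this is not TFMA's internal implementation):
def is_blessed(candidate_acc, baseline_acc=None):
  # value_threshold: accuracy must clear the 0.6 lower bound.
  value_ok = candidate_acc > 0.6
  if baseline_acc is None:
    # First run: no baseline resolved from MLMD, change_threshold is ignored.
    return value_ok
  # change_threshold with HIGHER_IS_BETTER and absolute={'value': -1e-10}:
  # the candidate may not be meaningfully worse than the baseline.
  change_ok = (candidate_acc - baseline_acc) > -1e-10
  return value_ok and change_ok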
Please see the Evaluator component guide for more information.
Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
tfx.orchestration.LocalDagRunner().run(
  _create_pipeline(
      pipeline_name=PIPELINE_NAME,
      pipeline_root=PIPELINE_ROOT,
      data_root=DATA_ROOT,
      module_file=_trainer_module_file,
      serving_model_dir=SERVING_MODEL_DIR,
      metadata_path=METADATA_PATH))
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/penguin_trainer.py' (including modules: ['penguin_trainer']).
INFO:absl:User module package has hash fingerprint version 1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmpr3anh67s/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmp6s2sw4dj', '--dist-dir', '/tmp/tmp6jr76e54']
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
listing git files failed - pretending there aren't any
INFO:absl:Successfully built user code wheel distribution at 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'; target user module is 'penguin_trainer'.
INFO:absl:Full user module path is 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'
INFO:absl:Using deployment config:
executor_specs {
key: "CsvExampleGen"
value {
beam_executable_spec {
python_executor_spec {
class_path: "tfx.components.example_gen.csv_example_gen.executor.Executor"
}
}
}
}
executor_specs {
key: "Evaluator"
value {
beam_executable_spec {
python_executor_spec {
class_path: "tfx.components.evaluator.executor.Executor"
}
}
}
}
executor_specs {
key: "Pusher"
value {
python_class_executable_spec {
class_path: "tfx.components.pusher.executor.Executor"
}
}
}
executor_specs {
key: "Trainer"
value {
python_class_executable_spec {
class_path: "tfx.components.trainer.executor.GenericExecutor"
}
}
}
custom_driver_specs {
key: "CsvExampleGen"
value {
python_class_executable_spec {
class_path: "tfx.components.example_gen.driver.FileBasedDriver"
}
}
}
metadata_connection_config {
sqlite {
filename_uri: "metadata/penguin-tfma/metadata.db"
connection_mode: READWRITE_OPENCREATE
}
}
INFO:absl:Using connection config:
sqlite {
filename_uri: "metadata/penguin-tfma/metadata.db"
connection_mode: READWRITE_OPENCREATE
}
INFO:absl:Component CsvExampleGen is running.
INFO:absl:Running launcher for node_info {
type {
name: "tfx.components.example_gen.csv_example_gen.component.CsvExampleGen"
}
id: "CsvExampleGen"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
}
outputs {
outputs {
key: "examples"
value {
artifact_spec {
type {
name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
}
}
}
}
}
parameters {
parameters {
key: "input_base"
value {
field_value {
string_value: "/tmp/tfx-datal5lxy_yw"
}
}
}
parameters {
key: "input_config"
value {
field_value {
string_value: "{\n \"splits\": [\n {\n \"name\": \"single_split\",\n \"pattern\": \"*\"\n }\n ]\n}"
}
}
}
parameters {
key: "output_config"
value {
field_value {
string_value: "{\n \"split_config\": {\n \"splits\": [\n {\n \"hash_buckets\": 2,\n \"name\": \"train\"\n },\n {\n \"hash_buckets\": 1,\n \"name\": \"eval\"\n }\n ]\n }\n}"
}
}
}
parameters {
key: "output_data_format"
value {
field_value {
int_value: 6
}
}
}
parameters {
key: "output_file_format"
value {
field_value {
int_value: 5
}
}
}
}
downstream_nodes: "Evaluator"
downstream_nodes: "Trainer"
execution_options {
caching_options {
}
}
INFO:absl:MetadataStore with DB connection initialized
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying penguin_trainer.py -> build/lib
installing to /tmp/tmp6s2sw4dj
running install
running install_lib
copying build/lib/penguin_trainer.py -> /tmp/tmp6s2sw4dj
running install_egg_info
running egg_info
creating tfx_user_code_Trainer.egg-info
writing tfx_user_code_Trainer.egg-info/PKG-INFO
writing dependency_links to tfx_user_code_Trainer.egg-info/dependency_links.txt
writing top-level names to tfx_user_code_Trainer.egg-info/top_level.txt
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
reading manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
Copying tfx_user_code_Trainer.egg-info to /tmp/tmp6s2sw4dj/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3.7.egg-info
running install_scripts
creating /tmp/tmp6s2sw4dj/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.dist-info/WHEEL
creating '/tmp/tmp6jr76e54/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl' and adding '/tmp/tmp6s2sw4dj' to it
adding 'penguin_trainer.py'
adding 'tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.dist-info/METADATA'
adding 'tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.dist-info/WHEEL'
adding 'tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.dist-info/top_level.txt'
adding 'tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.dist-info/RECORD'
removing /tmp/tmp6s2sw4dj
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1205 10:34:23.723806 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
I1205 10:34:23.730262 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
I1205 10:34:23.736788 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
I1205 10:34:23.744907 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:select span and version = (0, None)
INFO:absl:latest span and version = (0, None)
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:23.758380 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Going to run a new execution 1
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=1, input_dict={}, output_dict=defaultdict(<class 'list'>, {'examples': [Artifact(artifact: uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
custom_properties {
key: "input_fingerprint"
value {
string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1638700463,sum_checksum:1638700463"
}
}
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:CsvExampleGen:examples:0"
}
}
custom_properties {
key: "span"
value {
int_value: 0
}
}
, artifact_type: name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
)]}), exec_properties={'output_file_format': 5, 'output_config': '{\n "split_config": {\n "splits": [\n {\n "hash_buckets": 2,\n "name": "train"\n },\n {\n "hash_buckets": 1,\n "name": "eval"\n }\n ]\n }\n}', 'input_config': '{\n "splits": [\n {\n "name": "single_split",\n "pattern": "*"\n }\n ]\n}', 'output_data_format': 6, 'input_base': '/tmp/tfx-datal5lxy_yw', 'span': 0, 'version': None, 'input_fingerprint': 'split:single_split,num_files:1,total_bytes:25648,xor_checksum:1638700463,sum_checksum:1638700463'}, execution_output_uri='pipelines/penguin-tfma/CsvExampleGen/.system/executor_execution/1/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/CsvExampleGen/.system/stateful_working_dir/2021-12-05T10:34:23.517028', tmp_dir='pipelines/penguin-tfma/CsvExampleGen/.system/executor_execution/1/.temp/', pipeline_node=node_info {
type {
name: "tfx.components.example_gen.csv_example_gen.component.CsvExampleGen"
}
id: "CsvExampleGen"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
}
outputs {
outputs {
key: "examples"
value {
artifact_spec {
type {
name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
}
}
}
}
}
parameters {
parameters {
key: "input_base"
value {
field_value {
string_value: "/tmp/tfx-datal5lxy_yw"
}
}
}
parameters {
key: "input_config"
value {
field_value {
string_value: "{\n \"splits\": [\n {\n \"name\": \"single_split\",\n \"pattern\": \"*\"\n }\n ]\n}"
}
}
}
parameters {
key: "output_config"
value {
field_value {
string_value: "{\n \"split_config\": {\n \"splits\": [\n {\n \"hash_buckets\": 2,\n \"name\": \"train\"\n },\n {\n \"hash_buckets\": 1,\n \"name\": \"eval\"\n }\n ]\n }\n}"
}
}
}
parameters {
key: "output_data_format"
value {
field_value {
int_value: 6
}
}
}
parameters {
key: "output_file_format"
value {
field_value {
int_value: 5
}
}
}
}
downstream_nodes: "Evaluator"
downstream_nodes: "Trainer"
execution_options {
caching_options {
}
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-12-05T10:34:23.517028')
INFO:absl:Generating examples.
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
INFO:absl:Processing input csv data /tmp/tfx-datal5lxy_yw/* to TFExample.
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
INFO:absl:Examples generated.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 1 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'examples': [Artifact(artifact: uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
custom_properties {
key: "input_fingerprint"
value {
string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1638700463,sum_checksum:1638700463"
}
}
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:CsvExampleGen:examples:0"
}
}
custom_properties {
key: "span"
value {
int_value: 0
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
)]}) for execution 1
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Component CsvExampleGen is finished.
INFO:absl:Component latest_blessed_model_resolver is running.
INFO:absl:Running launcher for node_info {
type {
name: "tfx.dsl.components.common.resolver.Resolver"
}
id: "latest_blessed_model_resolver"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.latest_blessed_model_resolver"
}
}
}
}
inputs {
inputs {
key: "model"
value {
channels {
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
artifact_query {
type {
name: "Model"
}
}
}
}
}
inputs {
key: "model_blessing"
value {
channels {
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
artifact_query {
type {
name: "ModelBlessing"
}
}
}
}
}
resolver_config {
resolver_steps {
class_path: "tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy.LatestBlessedModelStrategy"
config_json: "{}"
input_keys: "model"
input_keys: "model_blessing"
}
}
}
downstream_nodes: "Evaluator"
execution_options {
caching_options {
}
}
INFO:absl:Running as an resolver node.
INFO:absl:MetadataStore with DB connection initialized
WARNING:absl:Artifact type Model is not found in MLMD.
WARNING:absl:Artifact type ModelBlessing is not found in MLMD.
I1205 10:34:24.899447 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Component latest_blessed_model_resolver is finished.
INFO:absl:Component Trainer is running.
INFO:absl:Running launcher for node_info {
type {
name: "tfx.components.trainer.component.Trainer"
}
id: "Trainer"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
}
inputs {
inputs {
key: "examples"
value {
channels {
producer_node_query {
id: "CsvExampleGen"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
artifact_query {
type {
name: "Examples"
}
}
output_key: "examples"
}
min_count: 1
}
}
}
outputs {
outputs {
key: "model"
value {
artifact_spec {
type {
name: "Model"
}
}
}
}
outputs {
key: "model_run"
value {
artifact_spec {
type {
name: "ModelRun"
}
}
}
}
}
parameters {
parameters {
key: "custom_config"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "eval_args"
value {
field_value {
string_value: "{\n \"num_steps\": 5\n}"
}
}
}
parameters {
key: "module_path"
value {
field_value {
string_value: "penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl"
}
}
}
parameters {
key: "train_args"
value {
field_value {
string_value: "{\n \"num_steps\": 100\n}"
}
}
}
}
upstream_nodes: "CsvExampleGen"
downstream_nodes: "Evaluator"
downstream_nodes: "Pusher"
execution_options {
caching_options {
}
}
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:24.924589 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Going to run a new execution 3
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=3, input_dict={'examples': [Artifact(artifact: id: 1
type_id: 15
uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
properties {
key: "split_names"
value {
string_value: "[\"train\", \"eval\"]"
}
}
custom_properties {
key: "file_format"
value {
string_value: "tfrecords_gzip"
}
}
custom_properties {
key: "input_fingerprint"
value {
string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1638700463,sum_checksum:1638700463"
}
}
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:CsvExampleGen:examples:0"
}
}
custom_properties {
key: "payload_format"
value {
string_value: "FORMAT_TF_EXAMPLE"
}
}
custom_properties {
key: "span"
value {
int_value: 0
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
state: LIVE
create_time_since_epoch: 1638700464882
last_update_time_since_epoch: 1638700464882
, artifact_type: id: 15
name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
)]}, output_dict=defaultdict(<class 'list'>, {'model_run': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model_run/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model_run:0"
}
}
, artifact_type: name: "ModelRun"
)], 'model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model:0"
}
}
, artifact_type: name: "Model"
)]}), exec_properties={'train_args': '{\n "num_steps": 100\n}', 'custom_config': 'null', 'eval_args': '{\n "num_steps": 5\n}', 'module_path': 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'}, execution_output_uri='pipelines/penguin-tfma/Trainer/.system/executor_execution/3/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Trainer/.system/stateful_working_dir/2021-12-05T10:34:23.517028', tmp_dir='pipelines/penguin-tfma/Trainer/.system/executor_execution/3/.temp/', pipeline_node=node_info {
type {
name: "tfx.components.trainer.component.Trainer"
}
id: "Trainer"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
}
inputs {
inputs {
key: "examples"
value {
channels {
producer_node_query {
id: "CsvExampleGen"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
artifact_query {
type {
name: "Examples"
}
}
output_key: "examples"
}
min_count: 1
}
}
}
outputs {
outputs {
key: "model"
value {
artifact_spec {
type {
name: "Model"
}
}
}
}
outputs {
key: "model_run"
value {
artifact_spec {
type {
name: "ModelRun"
}
}
}
}
}
parameters {
parameters {
key: "custom_config"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "eval_args"
value {
field_value {
string_value: "{\n \"num_steps\": 5\n}"
}
}
}
parameters {
key: "module_path"
value {
field_value {
string_value: "penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl"
}
}
}
parameters {
key: "train_args"
value {
field_value {
string_value: "{\n \"num_steps\": 100\n}"
}
}
}
}
upstream_nodes: "CsvExampleGen"
downstream_nodes: "Evaluator"
downstream_nodes: "Pusher"
execution_options {
caching_options {
}
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-12-05T10:34:23.517028')
INFO:absl:Train on the 'train' split when train_args.splits is not set.
INFO:absl:Evaluate on the 'eval' split when eval_args.splits is not set.
INFO:absl:udf_utils.get_fn {'train_args': '{\n "num_steps": 100\n}', 'custom_config': 'null', 'eval_args': '{\n "num_steps": 5\n}', 'module_path': 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'} 'run_fn'
INFO:absl:Installing 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmpc97ini82', 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl']
Processing ./pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl
INFO:absl:Successfully installed 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'.
INFO:absl:Training model.
INFO:absl:Feature body_mass_g has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
size: 1
}
. Setting to DenseTensor.
Installing collected packages: tfx-user-code-Trainer
Successfully installed tfx-user-code-Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703
INFO:absl:Feature body_mass_g has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature body_mass_g has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature body_mass_g has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
size: 1
}
. Setting to DenseTensor.
INFO:absl:Model: "model"
INFO:absl:__________________________________________________________________________________________________
INFO:absl:Layer (type) Output Shape Param # Connected to
INFO:absl:==================================================================================================
INFO:absl:culmen_length_mm (InputLayer) [(None, 1)] 0
INFO:absl:__________________________________________________________________________________________________
INFO:absl:culmen_depth_mm (InputLayer) [(None, 1)] 0
INFO:absl:__________________________________________________________________________________________________
INFO:absl:flipper_length_mm (InputLayer) [(None, 1)] 0
INFO:absl:__________________________________________________________________________________________________
INFO:absl:body_mass_g (InputLayer) [(None, 1)] 0
INFO:absl:__________________________________________________________________________________________________
INFO:absl:concatenate (Concatenate) (None, 4) 0 culmen_length_mm[0][0]
INFO:absl: culmen_depth_mm[0][0]
INFO:absl: flipper_length_mm[0][0]
INFO:absl: body_mass_g[0][0]
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense (Dense) (None, 8) 40 concatenate[0][0]
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense_1 (Dense) (None, 8) 72 dense[0][0]
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense_2 (Dense) (None, 3) 27 dense_1[0][0]
INFO:absl:==================================================================================================
INFO:absl:Total params: 139
INFO:absl:Trainable params: 139
INFO:absl:Non-trainable params: 0
INFO:absl:__________________________________________________________________________________________________
100/100 [==============================] - 1s 3ms/step - loss: 0.5273 - sparse_categorical_accuracy: 0.8175 - val_loss: 0.2412 - val_sparse_categorical_accuracy: 0.9600
2021-12-05 10:34:29.879208: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: pipelines/penguin-tfma/Trainer/model/3/Format-Serving/assets
INFO:tensorflow:Assets written to: pipelines/penguin-tfma/Trainer/model/3/Format-Serving/assets
INFO:absl:Training complete. Model written to pipelines/penguin-tfma/Trainer/model/3/Format-Serving. ModelRun written to pipelines/penguin-tfma/Trainer/model_run/3
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 3 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'model_run': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model_run/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model_run:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "ModelRun"
)], 'model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "Model"
)]}) for execution 3
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:30.399760 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
I1205 10:34:30.404250 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Component Trainer is finished.
INFO:absl:Component Evaluator is running.
INFO:absl:Running launcher for node_info {
type {
name: "tfx.components.evaluator.component.Evaluator"
}
id: "Evaluator"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Evaluator"
}
}
}
}
inputs {
inputs {
key: "baseline_model"
value {
channels {
producer_node_query {
id: "latest_blessed_model_resolver"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.latest_blessed_model_resolver"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
inputs {
key: "examples"
value {
channels {
producer_node_query {
id: "CsvExampleGen"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
artifact_query {
type {
name: "Examples"
}
}
output_key: "examples"
}
min_count: 1
}
}
inputs {
key: "model"
value {
channels {
producer_node_query {
id: "Trainer"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
}
outputs {
outputs {
key: "blessing"
value {
artifact_spec {
type {
name: "ModelBlessing"
}
}
}
}
outputs {
key: "evaluation"
value {
artifact_spec {
type {
name: "ModelEvaluation"
}
}
}
}
}
parameters {
parameters {
key: "eval_config"
value {
field_value {
string_value: "{\n \"metrics_specs\": [\n {\n \"per_slice_thresholds\": {\n \"sparse_categorical_accuracy\": {\n \"thresholds\": [\n {\n \"slicing_specs\": [\n {}\n ],\n \"threshold\": {\n \"change_threshold\": {\n \"absolute\": -1e-10,\n \"direction\": \"HIGHER_IS_BETTER\"\n },\n \"value_threshold\": {\n \"lower_bound\": 0.6\n }\n }\n }\n ]\n }\n }\n }\n ],\n \"model_specs\": [\n {\n \"label_key\": \"species\"\n }\n ],\n \"slicing_specs\": [\n {},\n {\n \"feature_keys\": [\n \"species\"\n ]\n }\n ]\n}"
}
}
}
parameters {
key: "example_splits"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "fairness_indicator_thresholds"
value {
field_value {
string_value: "null"
}
}
}
}
upstream_nodes: "CsvExampleGen"
upstream_nodes: "Trainer"
upstream_nodes: "latest_blessed_model_resolver"
downstream_nodes: "Pusher"
execution_options {
caching_options {
}
}
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:30.428037 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 4
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=4, input_dict={'examples': [Artifact(artifact: id: 1
type_id: 15
uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
properties {
key: "split_names"
value {
string_value: "[\"train\", \"eval\"]"
}
}
custom_properties {
key: "file_format"
value {
string_value: "tfrecords_gzip"
}
}
custom_properties {
key: "input_fingerprint"
value {
string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1638700463,sum_checksum:1638700463"
}
}
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:CsvExampleGen:examples:0"
}
}
custom_properties {
key: "payload_format"
value {
string_value: "FORMAT_TF_EXAMPLE"
}
}
custom_properties {
key: "span"
value {
int_value: 0
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
state: LIVE
create_time_since_epoch: 1638700464882
last_update_time_since_epoch: 1638700464882
, artifact_type: id: 15
name: "Examples"
properties {
key: "span"
value: INT
}
properties {
key: "split_names"
value: STRING
}
properties {
key: "version"
value: INT
}
)], 'model': [Artifact(artifact: id: 3
type_id: 19
uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
state: LIVE
create_time_since_epoch: 1638700470409
last_update_time_since_epoch: 1638700470409
, artifact_type: id: 19
name: "Model"
)], 'baseline_model': []}, output_dict=defaultdict(<class 'list'>, {'blessing': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Evaluator:blessing:0"
}
}
, artifact_type: name: "ModelBlessing"
)], 'evaluation': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/evaluation/4"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Evaluator:evaluation:0"
}
}
, artifact_type: name: "ModelEvaluation"
)]}), exec_properties={'example_splits': 'null', 'eval_config': '{\n "metrics_specs": [\n {\n "per_slice_thresholds": {\n "sparse_categorical_accuracy": {\n "thresholds": [\n {\n "slicing_specs": [\n {}\n ],\n "threshold": {\n "change_threshold": {\n "absolute": -1e-10,\n "direction": "HIGHER_IS_BETTER"\n },\n "value_threshold": {\n "lower_bound": 0.6\n }\n }\n }\n ]\n }\n }\n }\n ],\n "model_specs": [\n {\n "label_key": "species"\n }\n ],\n "slicing_specs": [\n {},\n {\n "feature_keys": [\n "species"\n ]\n }\n ]\n}', 'fairness_indicator_thresholds': 'null'}, execution_output_uri='pipelines/penguin-tfma/Evaluator/.system/executor_execution/4/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Evaluator/.system/stateful_working_dir/2021-12-05T10:34:23.517028', tmp_dir='pipelines/penguin-tfma/Evaluator/.system/executor_execution/4/.temp/', pipeline_node=node_info {
type {
name: "tfx.components.evaluator.component.Evaluator"
}
id: "Evaluator"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Evaluator"
}
}
}
}
inputs {
inputs {
key: "baseline_model"
value {
channels {
producer_node_query {
id: "latest_blessed_model_resolver"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.latest_blessed_model_resolver"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
inputs {
key: "examples"
value {
channels {
producer_node_query {
id: "CsvExampleGen"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.CsvExampleGen"
}
}
}
artifact_query {
type {
name: "Examples"
}
}
output_key: "examples"
}
min_count: 1
}
}
inputs {
key: "model"
value {
channels {
producer_node_query {
id: "Trainer"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
}
outputs {
outputs {
key: "blessing"
value {
artifact_spec {
type {
name: "ModelBlessing"
}
}
}
}
outputs {
key: "evaluation"
value {
artifact_spec {
type {
name: "ModelEvaluation"
}
}
}
}
}
parameters {
parameters {
key: "eval_config"
value {
field_value {
string_value: "{\n \"metrics_specs\": [\n {\n \"per_slice_thresholds\": {\n \"sparse_categorical_accuracy\": {\n \"thresholds\": [\n {\n \"slicing_specs\": [\n {}\n ],\n \"threshold\": {\n \"change_threshold\": {\n \"absolute\": -1e-10,\n \"direction\": \"HIGHER_IS_BETTER\"\n },\n \"value_threshold\": {\n \"lower_bound\": 0.6\n }\n }\n }\n ]\n }\n }\n }\n ],\n \"model_specs\": [\n {\n \"label_key\": \"species\"\n }\n ],\n \"slicing_specs\": [\n {},\n {\n \"feature_keys\": [\n \"species\"\n ]\n }\n ]\n}"
}
}
}
parameters {
key: "example_splits"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "fairness_indicator_thresholds"
value {
field_value {
string_value: "null"
}
}
}
}
upstream_nodes: "CsvExampleGen"
upstream_nodes: "Trainer"
upstream_nodes: "latest_blessed_model_resolver"
downstream_nodes: "Pusher"
execution_options {
caching_options {
}
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-12-05T10:34:23.517028')
INFO:absl:udf_utils.get_fn {'example_splits': 'null', 'eval_config': '{\n "metrics_specs": [\n {\n "per_slice_thresholds": {\n "sparse_categorical_accuracy": {\n "thresholds": [\n {\n "slicing_specs": [\n {}\n ],\n "threshold": {\n "change_threshold": {\n "absolute": -1e-10,\n "direction": "HIGHER_IS_BETTER"\n },\n "value_threshold": {\n "lower_bound": 0.6\n }\n }\n }\n ]\n }\n }\n }\n ],\n "model_specs": [\n {\n "label_key": "species"\n }\n ],\n "slicing_specs": [\n {},\n {\n "feature_keys": [\n "species"\n ]\n }\n ]\n}', 'fairness_indicator_thresholds': 'null'} 'custom_eval_shared_model'
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
label_key: "species"
}
slicing_specs {
}
slicing_specs {
feature_keys: "species"
}
metrics_specs {
per_slice_thresholds {
key: "sparse_categorical_accuracy"
value {
thresholds {
slicing_specs {
}
threshold {
value_threshold {
lower_bound {
value: 0.6
}
}
}
}
}
}
}
INFO:absl:Using pipelines/penguin-tfma/Trainer/model/3/Format-Serving as model.
INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.
INFO:absl:Evaluating model.
INFO:absl:udf_utils.get_fn {'example_splits': 'null', 'eval_config': '{\n "metrics_specs": [\n {\n "per_slice_thresholds": {\n "sparse_categorical_accuracy": {\n "thresholds": [\n {\n "slicing_specs": [\n {}\n ],\n "threshold": {\n "change_threshold": {\n "absolute": -1e-10,\n "direction": "HIGHER_IS_BETTER"\n },\n "value_threshold": {\n "lower_bound": 0.6\n }\n }\n }\n ]\n }\n }\n }\n ],\n "model_specs": [\n {\n "label_key": "species"\n }\n ],\n "slicing_specs": [\n {},\n {\n "feature_keys": [\n "species"\n ]\n }\n ]\n}', 'fairness_indicator_thresholds': 'null'} 'custom_extractors'
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
label_key: "species"
}
slicing_specs {
}
slicing_specs {
feature_keys: "species"
}
metrics_specs {
model_names: ""
per_slice_thresholds {
key: "sparse_categorical_accuracy"
value {
thresholds {
slicing_specs {
}
threshold {
value_threshold {
lower_bound {
value: 0.6
}
}
}
}
}
}
}
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
label_key: "species"
}
slicing_specs {
}
slicing_specs {
feature_keys: "species"
}
metrics_specs {
model_names: ""
per_slice_thresholds {
key: "sparse_categorical_accuracy"
value {
thresholds {
slicing_specs {
}
threshold {
value_threshold {
lower_bound {
value: 0.6
}
}
}
}
}
}
}
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
label_key: "species"
}
slicing_specs {
}
slicing_specs {
feature_keys: "species"
}
metrics_specs {
model_names: ""
per_slice_thresholds {
key: "sparse_categorical_accuracy"
value {
thresholds {
slicing_specs {
}
threshold {
value_threshold {
lower_bound {
value: 0.6
}
}
}
}
}
}
}
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:absl:Evaluation complete. Results written to pipelines/penguin-tfma/Evaluator/evaluation/4.
INFO:absl:Checking validation results.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:114: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:114: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
INFO:absl:Blessing result True written to pipelines/penguin-tfma/Evaluator/blessing/4.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 4 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'blessing': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Evaluator:blessing:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "ModelBlessing"
)], 'evaluation': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/evaluation/4"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Evaluator:evaluation:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "ModelEvaluation"
)]}) for execution 4
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:35.040588 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
I1205 10:34:35.045548 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Component Evaluator is finished.
INFO:absl:Component Pusher is running.
INFO:absl:Running launcher for node_info {
type {
name: "tfx.components.pusher.component.Pusher"
}
id: "Pusher"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Pusher"
}
}
}
}
inputs {
inputs {
key: "model"
value {
channels {
producer_node_query {
id: "Trainer"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
inputs {
key: "model_blessing"
value {
channels {
producer_node_query {
id: "Evaluator"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Evaluator"
}
}
}
artifact_query {
type {
name: "ModelBlessing"
}
}
output_key: "blessing"
}
}
}
}
outputs {
outputs {
key: "pushed_model"
value {
artifact_spec {
type {
name: "PushedModel"
}
}
}
}
}
parameters {
parameters {
key: "custom_config"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "push_destination"
value {
field_value {
string_value: "{\n \"filesystem\": {\n \"base_directory\": \"serving_model/penguin-tfma\"\n }\n}"
}
}
}
}
upstream_nodes: "Evaluator"
upstream_nodes: "Trainer"
execution_options {
caching_options {
}
}
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:35.068168 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 5
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=5, input_dict={'model': [Artifact(artifact: id: 3
type_id: 19
uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Trainer:model:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
state: LIVE
create_time_since_epoch: 1638700470409
last_update_time_since_epoch: 1638700470409
, artifact_type: id: 19
name: "Model"
)], 'model_blessing': [Artifact(artifact: id: 4
type_id: 21
uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
key: "blessed"
value {
int_value: 1
}
}
custom_properties {
key: "current_model"
value {
string_value: "pipelines/penguin-tfma/Trainer/model/3"
}
}
custom_properties {
key: "current_model_id"
value {
int_value: 3
}
}
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Evaluator:blessing:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
state: LIVE
create_time_since_epoch: 1638700475049
last_update_time_since_epoch: 1638700475049
, artifact_type: id: 21
name: "ModelBlessing"
)]}, output_dict=defaultdict(<class 'list'>, {'pushed_model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Pusher/pushed_model/5"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Pusher:pushed_model:0"
}
}
, artifact_type: name: "PushedModel"
)]}), exec_properties={'custom_config': 'null', 'push_destination': '{\n "filesystem": {\n "base_directory": "serving_model/penguin-tfma"\n }\n}'}, execution_output_uri='pipelines/penguin-tfma/Pusher/.system/executor_execution/5/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Pusher/.system/stateful_working_dir/2021-12-05T10:34:23.517028', tmp_dir='pipelines/penguin-tfma/Pusher/.system/executor_execution/5/.temp/', pipeline_node=node_info {
type {
name: "tfx.components.pusher.component.Pusher"
}
id: "Pusher"
}
contexts {
contexts {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
contexts {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
contexts {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Pusher"
}
}
}
}
inputs {
inputs {
key: "model"
value {
channels {
producer_node_query {
id: "Trainer"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Trainer"
}
}
}
artifact_query {
type {
name: "Model"
}
}
output_key: "model"
}
}
}
inputs {
key: "model_blessing"
value {
channels {
producer_node_query {
id: "Evaluator"
}
context_queries {
type {
name: "pipeline"
}
name {
field_value {
string_value: "penguin-tfma"
}
}
}
context_queries {
type {
name: "pipeline_run"
}
name {
field_value {
string_value: "2021-12-05T10:34:23.517028"
}
}
}
context_queries {
type {
name: "node"
}
name {
field_value {
string_value: "penguin-tfma.Evaluator"
}
}
}
artifact_query {
type {
name: "ModelBlessing"
}
}
output_key: "blessing"
}
}
}
}
outputs {
outputs {
key: "pushed_model"
value {
artifact_spec {
type {
name: "PushedModel"
}
}
}
}
}
parameters {
parameters {
key: "custom_config"
value {
field_value {
string_value: "null"
}
}
}
parameters {
key: "push_destination"
value {
field_value {
string_value: "{\n \"filesystem\": {\n \"base_directory\": \"serving_model/penguin-tfma\"\n }\n}"
}
}
}
}
upstream_nodes: "Evaluator"
upstream_nodes: "Trainer"
execution_options {
caching_options {
}
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-12-05T10:34:23.517028')
INFO:absl:Model version: 1638700475
INFO:absl:Model written to serving path serving_model/penguin-tfma/1638700475.
INFO:absl:Model pushed to pipelines/penguin-tfma/Pusher/pushed_model/5.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 5 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'pushed_model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Pusher/pushed_model/5"
custom_properties {
key: "name"
value {
string_value: "penguin-tfma:2021-12-05T10:34:23.517028:Pusher:pushed_model:0"
}
}
custom_properties {
key: "tfx_version"
value {
string_value: "1.4.0"
}
}
, artifact_type: name: "PushedModel"
)]}) for execution 5
INFO:absl:MetadataStore with DB connection initialized
I1205 10:34:35.098553 28099 rdbms_metadata_access_object.cc:686] No property is defined for the Type
INFO:absl:Component Pusher is finished.
When the pipeline completes, you should be able to see something like the following:
INFO:absl:Blessing result True written to pipelines/penguin-tfma/Evaluator/blessing/4.
You can also check the output directory where the generated artifacts are stored manually. If you visit pipelines/penguin-tfma/Evaluator/blessing/ with a file browser, you can see a file with the name BLESSED or NOT_BLESSED according to the evaluation result.
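The same check can be done from code. A minimal sketch, assuming the default PIPELINE_ROOT used above; each numbered run directory under the blessing output holds an empty marker file named BLESSED or NOT_BLESSED:
import os

blessing_root = os.path.join(PIPELINE_ROOT, 'Evaluator', 'blessing')
for run_dir in sorted(os.listdir(blessing_root)):
  # Prints e.g. "4 ['BLESSED']" for a blessed run.
  print(run_dir, os.listdir(os.path.join(blessing_root, run_dir)))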
If the blessing result is False, Pusher will refuse to push the model to serving_model_dir, because the model is not good enough to be used in production.
You can run the pipeline again, possibly with different evaluation configs. Even if you run the pipeline with the exact same config and dataset, the trained model might be slightly different due to the inherent randomness of model training, which can lead to a NOT_BLESSED model.
Examine outputs of the pipeline
You can use TFMA to investigate and visualize the evaluation result in the ModelEvaluation artifact.
Get analysis result from output artifacts
You can use MLMD APIs to locate these outputs programmatically. First, we will define some utility functions to search for the output artifacts that were just produced.
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib

# TODO(b/171447278): Move these functions into the TFX library.

def get_latest_artifacts(metadata, pipeline_name, component_id):
  """Output artifacts of the latest run of the component."""
  context = metadata.store.get_context_by_type_and_name(
      'node', f'{pipeline_name}.{component_id}')
  executions = metadata.store.get_executions_by_context(context.id)
  latest_execution = max(executions,
                         key=lambda e: e.last_update_time_since_epoch)
  return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
                                          [metadata_store_pb2.Event.OUTPUT])
We can find the latest execution of the Evaluator component and get its output artifacts.
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs

metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
    METADATA_PATH)

with Metadata(metadata_connection_config) as metadata_handler:
  # Find output artifacts from MLMD.
  evaluator_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
                                          'Evaluator')
  eval_artifact = evaluator_output[standard_component_specs.EVALUATION_KEY][0]
INFO:absl:MetadataStore with DB connection initialized
Evaluator always returns one evaluation artifact, and we can visualize it using the TensorFlow Model Analysis library. For example, the following code will render the accuracy metrics for each penguin species.
import tensorflow_model_analysis as tfma
eval_result = tfma.load_eval_result(eval_artifact.uri)
tfma.view.render_slicing_metrics(eval_result, slicing_column='species')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'species:0', 'metrics…
If you choose 'sparse_categorical_accuracy' in the Show drop-down list, you can see the accuracy values per species. You might want to add more slices and check whether your model is good for all distributions and whether there is any possible bias.
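Other views of the same result are available as well. As a sketch, reusing the eval_artifact loaded above, you can render the overall slice and print the validation verdict behind the blessing:
# Render metrics for the overall (empty) slice.
tfma.view.render_slicing_metrics(eval_result)

# Load the validation outcome that produced BLESSED/NOT_BLESSED.
validation_result = tfma.load_validation_result(eval_artifact.uri)
print(validation_result.validation_ok)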
Next steps
Learn more about model analysis in the TensorFlow Model Analysis library tutorial.
You can find more resources at https://www.tensorflow.org/tfx/tutorials
Please see Understanding TFX Pipelines to learn more about various concepts in TFX.