MLIR
Overview
MLIR, or Multi-Level Intermediate Representation, is a representation format
and library of compiler utilities that sits between the model representation
and low-level compilers/executors that generate hardware-specific code.
MLIR is, at its heart, a flexible infrastructure for modern optimizing
compilers. This means it consists of a specification for intermediate
representations (IR) and a code toolkit to perform transformations on that
representation. (In compiler parlance, as you move from higher-level
representations to lower-level representations, these transformations can be
called “lowerings”.)
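To make this concrete, here is a hypothetical sketch (not taken from this page) of what MLIR's textual IR looks like. Dialect prefixes and syntax have shifted across MLIR releases, so treat the exact spelling as illustrative:

```mlir
// Illustrative only: a function that multiplies two 4-element float
// tensors elementwise, written in MLIR's textual IR (recent syntax).
func.func @multiply(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.mulf %a, %b : tensor<4xf32>
  return %0 : tensor<4xf32>
}
```

A lowering pass could rewrite the tensor-level `arith.mulf` into an explicit loop of scalar multiplies, producing IR that is closer to what the hardware actually executes.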
MLIR is highly influenced by [LLVM](https://llvm.org/) and unabashedly reuses
many great ideas from it. It has a flexible type system and allows
representing, analyzing, and transforming graphs that combine multiple levels
of abstraction in the same compilation unit. These abstractions include
TensorFlow operations, nested polyhedral loop regions, and even LLVM
instructions and fixed hardware operations and types.
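As a hedged sketch of what "multiple levels of abstraction in the same compilation unit" can mean, a single MLIR function may hold operations from several dialects side by side. The snippet below assumes the TensorFlow and arith dialects are registered, and its syntax is illustrative rather than authoritative:

```mlir
// Hypothetical example mixing abstraction levels in one function:
// a TensorFlow dialect op on tensors (written in MLIR's generic op
// syntax) next to a low-level scalar op from the arith dialect.
func.func @mixed(%t: tensor<2x2xf32>, %x: f32) -> (tensor<2x2xf32>, f32) {
  %0 = "tf.AddV2"(%t, %t) : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
  %1 = arith.addf %x, %x : f32
  return %0, %1 : tensor<2x2xf32>, f32
}
```

Passes can then analyze and rewrite both levels together, for example replacing the TensorFlow operation with lower-level ops while leaving the scalar arithmetic untouched.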
We expect MLIR to be of interest to many groups, including:
- Compiler researchers and implementers looking to optimize performance and
memory consumption of machine learning models
- Hardware makers looking for a way to connect their hardware to TensorFlow,
such as TPUs, portable neural hardware in phones, and other custom ASICs
- People writing language bindings who want to take advantage of optimizing
compilers and hardware acceleration
The TensorFlow ecosystem contains a number of compilers and optimizers that
operate at multiple levels of the software and hardware stack. We expect the
gradual adoption of MLIR to simplify every aspect of this stack.
