Developers

Learn how to get your models up and running fast on Tenstorrent hardware. With two open source SDKs, you can get as close to the metal as possible, or let our AI compiler do the work.

Getting started on Tenstorrent

Looking for other documentation?
Upcoming Events
Jul 23
Building AI agents with Tenstorrent

Tenstorrent hardware is designed to optimize the operations that power AI. Learn how you can run models on Tenstorrent hardware to build multi-agent systems and workflows.

Aug 1
AI Meetup Malaysia

Join Tenstorrent and AWS to learn about AI with an emphasis on computer architecture, MLOps, and models.

Aug 9
COSCUP 2025

Stop by our booth at the Conference for Open Source Coders, Users & Promoters (COSCUP), the largest open source conference in Asia.

Educational Content

Tutorials

Intro to TT-Forge
An overview of TT-Forge, Tenstorrent's MLIR-based compiler.
TT-Metalium Programming Overview
A guide to Metalium, Tenstorrent's primary programming model. A minimal example follows this list.
How to Use the TTNN Visualizer
Learn how to install and run ttnn-visualizer, a tool that helps you gain a complete understanding of a model.
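
For a first taste of the Metalium stack, the sketch below uses TTNN, the op library built on top of TT-Metalium, to add two tensors on a device. It is a minimal sketch, assuming the ttnn and torch Python packages are installed and a Tenstorrent device is available as device 0; see the tutorials above for the authoritative APIs.

```python
# Minimal TTNN sketch: add two tensors on a Tenstorrent device.
# Assumes the ttnn and torch packages are installed and device 0 is available.
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# Move host tensors to the device in tile layout, the format TTNN ops expect.
a = ttnn.from_torch(torch.rand(32, 32), dtype=ttnn.bfloat16,
                    layout=ttnn.TILE_LAYOUT, device=device)
b = ttnn.from_torch(torch.rand(32, 32), dtype=ttnn.bfloat16,
                    layout=ttnn.TILE_LAYOUT, device=device)

# Run an elementwise add on the device and bring the result back to host.
result = ttnn.to_torch(ttnn.add(a, b))
print(result.shape)

ttnn.close_device(device)
```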

Written Tutorials

Bring up LLMs with TTNN
Get guidance on how to bring up high-performance multi-chip models on Tenstorrent hardware using the TT-Metalium stack.
Get started with TTNN-Visualizer
A quickstart guide to setting up ttnn-visualizer.
Op Writer's Guide to Dispatch Overhead
This tutorial covers different methods to reduce dispatch overhead: resource allocation, kernel initialization, and runtime arguments.

Join the Community

Get support with everything from setting up new hardware to running models and optimizing your setup, plus the latest news on Tenstorrent hardware and software.

Interested in contributing?

Tenstorrent's AI software stack is open source. Getting started is as easy as filing an issue.