
Who is Tenstorrent?

Tenstorrent is a team of capable and motivated people who came together to build the best computing platform for AI and Software 2.0.

Can I use your AI hardware for training or inference?

Yes, our processors are optimized for neural network inference and training. They can also execute other types of parallel computation.

What is Tenstorrent doing differently than GPUs?

Tenstorrent processors comprise a grid of cores known as Tensix cores, and can execute both small and large tensor computations efficiently. Each core includes its own network communication hardware, so cores exchange data with one another directly over the on-chip network instead of staging it through DRAM. Compared to GPUs, our processors are easier to program, scale better, and excel at handling run-time sparsity and conditional computation.
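The idea of cores talking directly rather than through shared DRAM can be illustrated with a toy sketch. This is plain Python, not Tenstorrent's actual programming model; the `Core` class and its methods are invented for illustration only.

```python
from queue import Queue

class Core:
    """Toy model of a processor core with its own inbox (standing in for
    the per-core network hardware described above)."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.inbox = Queue()

    def send(self, other, tile):
        # Direct core-to-core transfer: the tile goes straight into the
        # neighbour's inbox rather than being staged in shared DRAM.
        other.inbox.put((self.core_id, tile))

    def receive(self):
        sender, tile = self.inbox.get()
        return sender, tile

# A tiny 1x2 "grid": core 0 streams a partial result directly to core 1.
c0, c1 = Core(0), Core(1)
c0.send(c1, [1.0, 2.0, 3.0])
sender, tile = c1.receive()
print(sender, tile)  # 0 [1.0, 2.0, 3.0]
```

The point of the sketch is the data path: the producer pushes results straight to the consumer, avoiding a round trip through a shared memory pool.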

What is a 'Tensix' core?

Tenstorrent's processor cores are known as Tensix cores. Each Tensix core includes 5 RISC processors, an array math unit for tensor operations, a SIMD unit for vector operations, hardware for accelerating network packet operations and compression/decompression, and 1-2 megabytes of SRAM.
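The component list above can be captured as a simple record. This is an illustrative plain-Python summary of the stated spec, not a real programming interface:

```python
from dataclasses import dataclass

@dataclass
class TensixCoreSpec:
    """Components of one Tensix core, as described above (illustrative only)."""
    risc_processors: int = 5        # 5 RISC processors per core
    has_matrix_unit: bool = True    # array math unit for tensor operations
    has_simd_unit: bool = True      # SIMD unit for vector operations
    has_packet_engine: bool = True  # packet + compression/decompression hardware
    sram_mb_range: tuple = (1, 2)   # 1-2 megabytes of local SRAM

core = TensixCoreSpec()
print(core.risc_processors)  # 5
```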

What does Tenstorrent mean by conditional compute?

As model parameter counts grow, so does the volume of computation. Tenstorrent applies proprietary heuristics at run time to determine which parts of the computation actually need to be performed, pruning out the rest. This reduces inference latency and allows training to converge more quickly.
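Tenstorrent's heuristics are proprietary, but the general idea of run-time pruning can be sketched generically. The following magnitude-threshold example is an illustration of the concept, not Tenstorrent's method:

```python
def prune_small_activations(activations, threshold=0.1):
    """Generic run-time sparsification sketch: values below a magnitude
    threshold are treated as zero, so their downstream computation can
    be skipped entirely."""
    kept = [(i, a) for i, a in enumerate(activations) if abs(a) >= threshold]
    skipped = len(activations) - len(kept)
    return kept, skipped

acts = [0.9, 0.02, -0.5, 0.001, 0.3, -0.04]
kept, skipped = prune_small_activations(acts)
print(kept)     # [(0, 0.9), (2, -0.5), (4, 0.3)]
print(skipped)  # 3
```

Here half the activations are pruned, so downstream operators only process the surviving three values.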

What is the difference between your listed 'Silicon TOPS' vs 'Software Enabled TOPS'?

'Silicon TOPS' is the raw peak throughput of the hardware. 'Software Enabled TOPS' is the effective throughput once our programming interfaces apply features like dynamic sparsification in the software layer: by skipping work that does not need to be done, the same silicon delivers much higher useful throughput than a GPU running the full computation.
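As a back-of-the-envelope illustration (the numbers below are hypothetical, not published specs): if software prunes a fraction of the work, the remaining useful work finishes in proportionally less time, so effective throughput scales up accordingly.

```python
def software_enabled_tops(silicon_tops, skipped_fraction):
    """Effective throughput when a fraction of the computation is pruned:
    the useful work completes in (1 - skipped_fraction) of the time, so
    effective TOPS scale by 1 / (1 - skipped_fraction)."""
    return silicon_tops / (1.0 - skipped_fraction)

# Hypothetical example: 100 silicon TOPS, software prunes 50% of the work.
print(software_enabled_tops(100, 0.5))  # 200.0
```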

What ML frameworks are supported by your hardware?

We have focused our efforts on PyTorch for now, but can interface with other frameworks via ONNX. We also have a low-level C++-based programming framework called BUDA, and we can run computations on NumPy arrays.

Is my system compatible with a Tenstorrent Grayskull card?

You’ll need a Linux system running Ubuntu, one or more free PCIe x16 slots, and a system power supply that can provide adequate power. For detailed minimum specs, please refer to the product documentation.

How do I get started?

Documentation and reference designs are available on our website. Documentation includes a User Guide, a Quick Start Guide, datasheets, an API guide, reference designs, errata, and changelogs.

I'd like to evaluate a Tenstorrent board for my application, how do I do that?

Get in touch with us to discuss your needs.

Where can I buy a Tenstorrent card or system?

You can buy Grayskull products directly from us or through one of our distribution partners.

I host my ML applications through a Cloud Service Provider. What options do I have?

For starters, you can try our services offering through Tenstorrent AICloud. We are currently granting early access to a small number of beta customers. If you would like to be part of the beta trial, contact us for more information.

What is the installation process and what support do I get?

Install the card into your system, download our driver and install it by running the install script as root, then download and build our Docker image. Installation usually takes 30 minutes to an hour. If you have trouble with the process, contact us and we'll help you through it.

What is included in the software package?

The Software Package includes:

  • Tenstorrent Driver, our PCIe driver enabling communication between the host and Tenstorrent board.

  • Tenstorrent Compiler Stack, which builds and deploys your AI algorithm.

  • TT-SMI, a command-line utility to aid in the management and monitoring of your Tenstorrent board.

Where do I find support if I need help?

Support is available in a couple of ways. For questions specific to your use case, you can email us directly.