

Frequently Asked Questions

What is Tenstorrent?

Tenstorrent is a team of competent and motivated people who came together to build the best computing platform for AI and Software 2.0.

Are your processors suitable for both inference and training?

Yes, our processors are optimized for neural network inference and training. They can also execute other types of parallel computation.

How do your processors compare to GPUs?

Tenstorrent processors comprise a grid of cores known as Tensix cores, and can execute both small and large tensor computations efficiently. Network communication hardware is built into each processor, so processors talk with one another directly over the network instead of through DRAM. Compared to GPUs, our processors are easier to program, scale better, and excel at handling run-time sparsity and conditional computation.

What is a Tensix core?

Tenstorrent’s processor cores are known as Tensix cores. Each Tensix core includes five RISC processors, an array math unit for tensor operations, a SIMD unit for vector operations, hardware for accelerating network packet operations and compression/decompression, and 1–2 megabytes of SRAM.

How do you handle the growth in model sizes?

As model parameters grow, and computational volume grows with them, Tenstorrent applies proprietary heuristics to dynamically determine which parts of the model need to be computed, pruning out the rest. This improves inference latency and allows training to converge more quickly.
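The heuristics themselves are proprietary, but the general idea of run-time pruning can be sketched in a few lines. The following is an illustrative NumPy sketch, not Tenstorrent code; the magnitude threshold and masking scheme here are assumptions for demonstration only:

```python
import numpy as np

def sparsify(activations, threshold=0.1):
    """Zero out activations below a magnitude threshold so the
    corresponding downstream computation can be skipped.
    (Illustrative only; the real heuristics are proprietary.)"""
    mask = np.abs(activations) >= threshold
    return activations * mask, mask

x = np.array([0.02, 0.5, -0.03, 0.9, -0.7])
sparse_x, mask = sparsify(x, threshold=0.1)
kept = int(mask.sum())  # only these entries need computing downstream
```

In hardware that supports conditional computation, the zeroed entries translate into work that is never scheduled, rather than multiplications by zero.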

How does conditional processing improve throughput?

Tenstorrent hardware comes with programming interfaces that drive large increases in throughput through conditional processing. By implementing features like dynamic sparsification, enabled in the software layer, we can provide much higher throughput than a GPU.

Which machine learning frameworks do you support?

We have focused our efforts on PyTorch for now, but we can interface with other frameworks via ONNX. We also have a low-level C++-based programming framework called BUDA, and we can run computations on NumPy arrays.

What are the system requirements?

You’ll need a Linux system running Ubuntu, one or more free PCIe x16 slots, and a system power supply that can provide adequate power. For detailed minimum specifications, please refer to our documentation.

Where can I find documentation?

Documentation and reference designs are available on our website. Documentation includes a User Guide, a Quick Start Guide, datasheets, an API guide, reference designs, errata, and changelogs.

Get in touch with us to discuss your needs.

Where can I buy Grayskull products?

You can buy Grayskull products directly from us or through one of our distribution partners.

How can I try your hardware?

For starters, you can try our services offering through Tenstorrent AICloud. We are currently granting early access to a small number of beta customers. If you would like to be part of the beta trial, contact us for more information.

How do I install the hardware and software?

Install the card in your system, download our driver and install it by running the install script as root, then download and build our Docker file. Installation usually takes 30 minutes to an hour. If you have trouble with the process, contact us and we’ll help you through it.

What does the Software Package include?

The Software Package includes:

  • Tenstorrent Driver, our PCIe driver enabling communication between the host and Tenstorrent board.
  • Tenstorrent Compiler Stack, which builds and deploys your AI algorithm.
  • TT-SMI, a command-line utility to aid in the management and monitoring of your Tenstorrent board.

How do I get support?

You can email us with individual-specific questions.