

Frequently Asked Questions

Tenstorrent is a team of competent and motivated people that came together to build the best computing platform for AI and Software 2.0.

Our processors are optimized for neural network inference and training, and they can also execute other types of parallel computation.

Tenstorrent processors comprise a grid of cores known as Tensix Cores and can execute both small and large tensor computations efficiently. Each processor includes network communication hardware, so processors talk with one another directly over a network rather than through DRAM. Compared to GPUs, our processors are easier to program, scale better, and excel at handling run-time sparsity and conditional computation.
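
As a rough illustration of the direct core-to-core dataflow described above, here is a toy Python sketch. It is purely conceptual: the class and method names are ours, not a Tenstorrent API, and it only contrasts handing a tensor tile straight to a neighboring core with staging it in shared DRAM.

```python
# Conceptual toy model only -- not Tenstorrent code and not a real API.
# It illustrates a producer core handing a tensor tile directly to a
# consumer core over an on-chip network, rather than staging it in DRAM.
import numpy as np

class Core:
    def __init__(self, name: str):
        self.name = name
        self.sram = {}  # stands in for the small local memory inside each core

    def send(self, other: "Core", key: str, tile: np.ndarray) -> None:
        # Direct core-to-core transfer: the tile lands in the neighbor's
        # local memory without a round trip through shared DRAM.
        other.sram[key] = tile

producer, consumer = Core("core_0_0"), Core("core_0_1")
producer.send(consumer, "partial_sum", np.ones((32, 32), dtype=np.float32))
print(consumer.sram["partial_sum"].shape)  # (32, 32)
```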

Tenstorrent’s processor cores are called Tensix Cores. Each Tensix Core includes five RISC-V processors, an array math unit for tensor operations, an SIMD unit for vector operations, hardware for accelerating network packet operations and compression/decompression, and up to 1.5 MB of SRAM.
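
For readers who prefer to see those per-core resources in one place, the sketch below restates them as a plain Python dataclass. The field names are illustrative only and are not part of any Tenstorrent API; the figures come from the answer above.

```python
# Illustrative only: the per-core resources listed above, restated as a
# plain Python dataclass. Field names are ours, not a Tenstorrent API.
from dataclasses import dataclass

@dataclass
class TensixCoreSpec:
    riscv_processors: int = 5          # five RISC-V processors per core
    has_matrix_unit: bool = True       # array math unit for tensor operations
    has_simd_unit: bool = True         # SIMD unit for vector operations
    has_packet_engine: bool = True     # network packet and (de)compression acceleration
    sram_bytes: int = 1536 * 1024      # up to 1.5 MB of local SRAM

print(TensixCoreSpec())
```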

As model parameter counts grow, and the volume of computation grows with them, Tenstorrent applies proprietary heuristics to dynamically determine which parts of a model actually need to be computed, pruning out the rest. This improves inference latency and allows training to converge more quickly.

Tenstorrent hardware comes with programming interfaces that enable large increases in throughput through conditional processing. By implementing features such as dynamic sparsification in the software layer, we can deliver much higher throughput than a GPU.
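
The sketch below gives a generic, framework-level illustration of the idea in plain PyTorch: computation is skipped for input features whose activations are effectively zero at run time. It is not Tenstorrent's proprietary heuristic or software stack; the function name and threshold are illustrative assumptions.

```python
# Generic illustration in plain PyTorch, NOT Tenstorrent's proprietary
# heuristic or software stack: skip the matmul columns whose input
# activations are effectively zero at run time.
import torch

def sparse_linear(x: torch.Tensor, weight: torch.Tensor, threshold: float = 1e-3) -> torch.Tensor:
    """Compute x @ weight.T using only the input features that are active right now."""
    active = x.abs().amax(dim=0) > threshold   # which input features currently matter
    return x[:, active] @ weight[:, active].T  # smaller matmul over the active subset

x = torch.relu(torch.randn(1, 512))            # after ReLU, roughly half the features are zero
w = torch.randn(256, 512)
dense = x @ w.T
sparse = sparse_linear(x, w)
print(sparse.shape, torch.allclose(dense, sparse, atol=1e-2))  # same result, fewer multiply-adds
```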

We have focused our efforts on PyTorch for now, but can interface with other frameworks via ONNX. We also have a low-level C++-based programming framework called TT-Buda and are able to run computations on NumPy arrays.
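
As a concrete example of the ONNX path, the snippet below exports a small PyTorch model to an ONNX file using the standard torch.onnx.export API. How that file is subsequently compiled or executed on a particular backend is backend-specific and not shown here.

```python
# Export a PyTorch model to ONNX so it can be ingested by other
# frameworks or backends. Uses only the standard torch.onnx.export API.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
example_input = torch.randn(1, 128)

torch.onnx.export(
    model,
    example_input,
    "model.onnx",                 # portable graph another toolchain can load
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```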

You’ll need a Linux system running Ubuntu, with one or more free PCIe x16 slots and a power supply with adequate headroom for the card(s). For detailed minimum specs, please refer to docs.tenstorrent.com.

Documentation and reference material, including user guides, API guides, system specifications, and more, are available at docs.tenstorrent.com.

Contact us, or visit our community Discord.

You can buy Tenstorrent machine learning accelerators and systems by contacting us at support@tenstorrent.com.

You can evaluate our technology through Tenstorrent AICloud. We are currently granting early access to a small number of beta customers. If you would like to be part of the beta trial, contact us for more information.

Instructions for installing your card and setting up the drivers, utilities, and software are available at docs.tenstorrent.com. Installation typically takes between 30 minutes and an hour. If you have issues or require support, please visit our community Discord.
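
If you want a quick sanity check after installation, a sketch like the one below can confirm the card is visible on the PCIe bus. It assumes a Linux host with lspci available and that the card reports "Tenstorrent" in its device string; treat it as a convenience check and rely on the utilities documented at docs.tenstorrent.com for authoritative diagnostics.

```python
# Post-install sanity check: confirm the accelerator shows up on the PCIe bus.
# Assumes a Linux host with `lspci` installed and that the card's device
# string contains "Tenstorrent" (an assumption -- consult docs.tenstorrent.com).
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
tt_devices = [line for line in out.splitlines() if "Tenstorrent" in line]

if tt_devices:
    print(f"Found {len(tt_devices)} Tenstorrent device(s):")
    for line in tt_devices:
        print(" ", line)
else:
    print("No Tenstorrent device detected -- check the PCIe slot and power connections.")
```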

Please visit our community Discord or contact us directly at support@tenstorrent.com.