Llama-3.1 Announcement
We are happy to announce that we have brought up support for Llama-3.1-70B inference on Tenstorrent’s 8-chip systems, the TT-QuietBox and the TT-LoudBox.

The source code for Llama-3.1-70B and the other supported models is available on our GitHub. We have also merged support for Llama-3.1-8B, running on our single-chip n150 card.
Implementation highlights:
- Fractured with 8-way tensor parallelism
- Uses FlashAttention and FlashDecode
- Uses Mixed BF16, BFP8, and BFP4 precision
- Performance was measured in eager mode with tracing disabled
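The "fractured with 8-way tensor parallelism" point above can be illustrated with a minimal numpy sketch. This is not Tenstorrent's implementation; it only shows the general idea of fracturing a linear layer's weight matrix column-wise across 8 chips, with each chip computing a partial output that is gathered back into the full activation. All names and shapes here are illustrative assumptions.

```python
import numpy as np

NUM_CHIPS = 8  # 8-chip system (e.g. TT-QuietBox / TT-LoudBox)
rng = np.random.default_rng(0)

# Toy shapes: (batch, d_model) activations and a (d_model, d_ff) weight.
x = rng.standard_normal((4, 128))
W = rng.standard_normal((128, 512))

# Fracture the weight column-wise: chip i holds W[:, i*64:(i+1)*64].
shards = np.split(W, NUM_CHIPS, axis=1)

# Each chip computes its slice of the output independently, in parallel.
partial_outputs = [x @ w_i for w_i in shards]

# An all-gather along the feature dimension recovers the full result.
y_parallel = np.concatenate(partial_outputs, axis=1)

# The fractured computation matches the unsharded matmul.
assert np.allclose(y_parallel, x @ W)
```

Because each shard's matmul is independent, the chips only need to communicate when the partial outputs are gathered, which is what makes this style of parallelism attractive for multi-chip inference.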
We are working on optimizations that will take us to our target of 20 tokens/second/user. Buy our 8-chip systems (TT-QuietBox and TT-LoudBox) to try Llama-3.1-70B at home on Tenstorrent hardware!