Llama-3.1 Announcement
We are happy to announce that we have brought up support for Llama-3.1-70B inference on Tenstorrent’s 8-chip systems, the TT-QuietBox and the TT-LoudBox.

The source code for Llama-3.1-70B and our other supported models is available on our GitHub. We have also merged support for Llama-3.1-8B, which runs on our single-chip n150 card.
Implementation highlights:
- Model weights fractured across chips with 8-way tensor parallelism (see the sketch after this list)
- Uses FlashAttention and FlashDecode
- Uses mixed BF16, BFP8, and BFP4 precision
- Performance was measured in eager mode with tracing disabled
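
To make the tensor-parallelism bullet concrete, here is a minimal, purely conceptual sketch in plain NumPy. It is not Tenstorrent's TT-NN API and the variable names and the all-gather-via-concatenate step are illustrative assumptions; it only shows the idea of fracturing a weight matrix column-wise into eight shards, one per chip, with each chip computing its slice of the output.

```python
# Conceptual sketch of 8-way tensor parallelism (NOT Tenstorrent's API).
# A large projection weight is split column-wise into 8 shards, one per
# chip; each chip multiplies against its shard and the partial outputs
# are gathered back together.
import numpy as np

NUM_CHIPS = 8
HIDDEN = 8192    # Llama-3.1-70B hidden size
FFN = 28672      # Llama-3.1-70B feed-forward size

# Full projection weight, fractured into 8 column shards (one per chip).
w_full = np.random.randn(HIDDEN, FFN).astype(np.float32)
w_shards = np.split(w_full, NUM_CHIPS, axis=1)

def sharded_matmul(x: np.ndarray) -> np.ndarray:
    """Each 'chip' computes its slice of the output; concatenation here
    stands in for the all-gather that real multi-chip hardware performs."""
    partials = [x @ w for w in w_shards]
    return np.concatenate(partials, axis=-1)

x = np.random.randn(1, HIDDEN).astype(np.float32)  # one user's token
assert sharded_matmul(x).shape == (1, FFN)
```

On real hardware the shards live in separate device memory and the gather happens over the chip-to-chip fabric, but the math is the same as above.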
We are working on optimizations that will get us to our target of 20 tokens/second/user. Buy one of our 8-chip systems (TT-QuietBox or TT-LoudBox) to try Llama-3.1-70B at home on Tenstorrent hardware!