AI for programmers
Introducing Grayskull, our first-generation AI chip made to power Software 2.0
We make software, silicon and systems.
Grayskull is the first of a new generation of AI computers. Built to run AI: a math engine for matrix multiplication, convolution, and tensor manipulation, organized as an array of networked 4-TeraOp engines. Built to run graphs: compute and data layout are handled by the compiler, while data movement is handled by the hardware. Clean abstraction layers from top to bottom. An open software stack is coming soon, and expert programmers are invited to explore.
Try your model on our DevCloud.
We are building a small DevCloud to let anybody try out their models on Tenstorrent AI computers. Inference works now; training is coming soon. To start, contact us, and we’ll get you time and the keys to log in. A more general interface is coming soon. Our initial machines pair 4–8 Grayskull AI computers with dual-socket AMD servers. Later in the year, we’ll build bigger and bigger machines. By the end of the year, we’ll have our first big beast online. Stay tuned for details.
Thinking on AI
PyTorch naturally describes a graph of computation.
The objects in the graph can be anything from small computations, data movements, and data transformations to very large matrix multiplies. Our graph compiler splits every computation, both compute and data, into right-sized objects for our AI engines. It then lays the compute out across the engines in an optimized way. The compiler does the work so the programmer can focus on the AI problem.
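To make the idea concrete, here is a minimal sketch of how a large matrix multiply can be split into right-sized blocks, each of which could be assigned to one engine in a grid. This is a hypothetical illustration in NumPy, not Tenstorrent's actual partitioning scheme; the `tile` size and loop order are assumptions for the example.

```python
import numpy as np

def tiled_matmul(a, b, tile=128):
    """Compute a @ b by splitting the work into tile x tile blocks.

    Each (i, j, p) block below is an independent unit of work that a
    graph compiler could map onto one engine in an array; the partial
    sums over p would then be accumulated. Illustrative only.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=np.result_type(a, b))
    for i in range(0, m, tile):          # rows of the output
        for j in range(0, n, tile):      # columns of the output
            for p in range(0, k, tile):  # partial products to accumulate
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out
```

The point of the sketch is the decomposition itself: once the work is expressed as a set of fixed-size blocks, placement and scheduling across the engine array become a layout problem the compiler can optimize.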