Jeff Dean on X:

First, let's talk about TPU 8t, which is designed for large-scale training and inference throughput. The pod size is increased slightly to 9600 chips, and provides ~3X the FP4 performance per pod vs. Ironwood (8t has 121 exaflops/pod vs. 42.5 exaflops/pod for Ironwood). In addition, the ICI network bandwidth is 2X higher per chip and the scale-out datacenter networking is 4X higher per chip. Importantly, this system also offers 2X the performance/watt, continuing the trend of significant energy efficiency improvements that we've had for many generations of TPUs (8t offers ~60X the performance/watt of TPU v2).
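The per-pod figures above can be sanity-checked with a little arithmetic. This sketch uses only the numbers quoted in the post (pod size and exaflops per pod); the per-chip throughput is a derived value, not something stated in the post.

```python
# Sanity check of the per-pod speedup quoted in the post.
# Pod size and exaflops/pod are taken from the post; the per-chip
# figure below is derived from them, not stated in the original.

tpu_8t_exaflops_per_pod = 121.0    # FP4 exaflops per 9600-chip 8t pod (from post)
ironwood_exaflops_per_pod = 42.5   # FP4 exaflops per Ironwood pod (from post)

speedup = tpu_8t_exaflops_per_pod / ironwood_exaflops_per_pod
print(f"Per-pod FP4 speedup: {speedup:.2f}x")  # ~2.85x, i.e. the quoted "~3X"

# Derived per-chip figure, assuming throughput is uniform across the pod:
chips_per_pod = 9600
per_chip_petaflops = tpu_8t_exaflops_per_pod * 1000 / chips_per_pod
print(f"Derived per-chip FP4 throughput: {per_chip_petaflops:.1f} PFLOPS")
```

Note that 121 / 42.5 ≈ 2.85, which rounds to the "~3X" claim in the post; the remaining gains (ICI bandwidth, scale-out networking, performance/watt) are per-chip ratios and don't depend on pod size.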
