Training large deep learning models doesn't have to be complex. In this video, Yufeng Guo walks you through the Keras 3 Distribution API, showing you how it leverages JAX for efficient data and model parallelism. Whether you're scaling across multiple GPUs or a cluster of TPUs, Keras 3 has you covered.
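As a taste of how little code the API asks for, here is a minimal data-parallel sketch based on the keras.distribution module on the JAX backend — illustrative of the approach the video covers, not necessarily the exact code shown on screen:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # the Distribution API targets the JAX backend

import keras

# Replicate the model on every available accelerator and split each
# input batch across them.
devices = keras.distribution.list_devices()
keras.distribution.set_distribution(keras.distribution.DataParallel(devices=devices))

# Any model built and fit after this point trains data-parallel,
# with no other changes to the training code.
```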
Resources:
Distributed training with Keras 3 →
Multi-device distribution →
LayoutMap API →
Gemma get_layout_map →
Chapters:
0:00 - Intro
0:17 - The Keras 3 Distribution API
0:51 - The Global Programming Model (SPMD Expansion)
1:26 - Using the JAX Backend for Scalability
1:55 - Creating a Device Mesh & Tensor Layout
2:46 - Implementing Data Parallelism
3:45 - Understanding Model Parallelism
4:27 - Sharding with LayoutMap
5:43 - Tuning Your Device Mesh for Performance
6:14 - Conclusion & Next Steps
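To make the "Creating a Device Mesh & Tensor Layout" chapter concrete, here is a short sketch using keras.distribution.DeviceMesh and TensorLayout. The 2x4 shape and the axis names are illustrative assumptions, not values from the video:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # the Distribution API targets the JAX backend

import keras
from keras.distribution import DeviceMesh, TensorLayout, list_devices

# Arrange 8 accelerators as a 2x4 logical grid: a "batch" axis for sharding
# data and a "model" axis for sharding weights. (The shape is an assumption
# and must match the number of devices actually available.)
mesh = DeviceMesh(shape=(2, 4),
                  axis_names=("batch", "model"),
                  devices=list_devices())

# Describe how one 2-D tensor maps onto the mesh: first dimension sharded
# along the "model" axis, second dimension replicated (None).
layout = TensorLayout(axes=("model", None), device_mesh=mesh)
```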
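And for the "Understanding Model Parallelism" and "Sharding with LayoutMap" chapters, a sketch of mapping variable paths to shardings. The key "d1/kernel" refers to a hypothetical Dense layer, and the ModelParallel constructor signature has shifted slightly across Keras 3 releases, so check the docs linked above for your version:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras
from keras.distribution import (DeviceMesh, LayoutMap, ModelParallel,
                                list_devices)

mesh = DeviceMesh(shape=(2, 4), axis_names=("batch", "model"),
                  devices=list_devices())

# LayoutMap is dict-like: keys are regexes matched against variable paths,
# values are sharding specs over the mesh axis names.
layout_map = LayoutMap(mesh)
layout_map["d1/kernel"] = (None, "model")  # hypothetical Dense layer "d1"
layout_map["d1/bias"] = ("model",)
# Variables that match no key stay fully replicated.

# batch_dim_name tells Keras which mesh axis still shards the data.
model_parallel = ModelParallel(layout_map=layout_map, batch_dim_name="batch")
keras.distribution.set_distribution(model_parallel)
```

For real checkpoints you often don't need to write these regexes by hand: KerasNLP's Gemma backbone ships a prebuilt layout map (the get_layout_map resource linked above).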
Subscribe to Google for Developers →
Speaker: Yufeng Guo
Products Mentioned: Google AI