Akshay (@akshay_pachaar)
2025-11-23 | ❤️ 1628 | 🔁 135
You're in an ML Engineer interview at Google.
Interviewer: We need to train an LLM across 1,000 GPUs. How would you make sure all GPUs share what they learn?
You: Use a central parameter server to aggregate and redistribute the weights.
Interview over.
Here's what you missed:
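A central parameter server becomes a bandwidth and fault-tolerance bottleneck at 1,000 GPUs; the standard alternative is collective communication (all-reduce), where every GPU averages gradients directly with its peers and no single node holds the canonical weights. Below is a minimal sketch, assuming PyTorch's torch.distributed with the NCCL backend and a torchrun launch; the toy linear model, hyperparameters, and loop are illustrative, not from the original thread:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def train_step(model, batch, targets, loss_fn, world_size):
    # Each rank computes gradients on its own shard of the data.
    loss = loss_fn(model(batch), targets)
    loss.backward()
    # All-reduce: every rank ends up with the average gradient.
    # Communication is peer-to-peer (ring/tree) -- no central server.
    for p in model.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size

def main():
    # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    world_size = dist.get_world_size()
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    model = nn.Linear(512, 512).cuda()   # toy stand-in for an LLM block
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for _ in range(10):
        x = torch.randn(32, 512, device="cuda")  # this rank's data shard
        y = torch.randn(32, 512, device="cuda")
        opt.zero_grad()
        train_step(model, x, y, loss_fn, world_size)
        opt.step()                        # identical update on every rank

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=8 train.py`, every process applies the same averaged gradient, so the replicas stay in sync without ever funneling weights through one machine. This hand-rolled all-reduce is essentially what DistributedDataParallel automates (with gradient bucketing and comm/compute overlap on top).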