This page contains slides for a talk at the Workshop on Future Directions in Extreme-Scale Computing for Scientific Grand Challenges, held at the Texas Advanced Computing Center, January 9-10, 2020.
Your public critiques (and praise) are welcome: @hpcgarage
- [PDF slides (~22 MiB)] “Deep learning may be wasting our time, energy, and power,” by Rich Vuduc.
Tweet-ish abstract: Applications of deep learning are valuable, and they use today’s machines well. But are they truly “energy-efficient”? What about other workloads? And what might machines tuned for such workloads look like if we had fine-grained control over the distribution of performance-critical resources, like power and die area?