This page contains slides for Rich Vuduc's talk at the Salishan Conference on High-Speed Computing on Wednesday, April 26, 2023 in Gleneden Beach, Oregon, USA.

Your public critiques (and praise) are welcome: @hpcgarage

- [PDF slides (~ 25 MiB)] “Embracing communication.”

Links to HPC Garage papers referenced in this talk:

- C. Yin, D. Zhang, N. Israt, C. Faloutsos, G. Karypis, R. Vuduc. “Nimble GNN embedding with tensor-train decomposition.” In *KDD'22*. doi:10.1145/3534678.3539423.
- S. Karamati, et al. “‘Smarter’ NICs for molecular dynamics: a case study.” In *IPDPS'22*. doi:10.1109/IPDPS53621.2022.00063. **Finalist, Best Paper.**
- J. Young, R. Vuduc. “Finding balance in the post-Moore's Law era.” 2016 Workshop on Post-Moore's Law Era Supercomputing (PMES): [PDF link on GitHub]
- K. Czechowski, R. Vuduc. “A theoretical framework for algorithm-architecture co-design.” In *IPDPS'13*. doi:10.1109/IPDPS.2013.99.

Links to papers by others referenced in this talk:

- A. Chien et al. (2015). “The Zero-Carbon Cloud: High-value, dispatchable demand for renewable power generators.” doi:10.1016/j.tej.2015.09.010.
- G. Abowd (2016). “Beyond Weiser: From ubiquitous computing to collective computing.” doi:10.1109/MC.2016.22.
- T. Ben-Nun & T. Hoefler (2019). “Demystifying parallel and distributed deep learning: an in-depth concurrency analysis.” doi:10.1145/3320060.
- N. Thompson et al. (2020). “The computational limits of deep learning.” arXiv:2007.05558v1.
- P. Witte (2020). “Software and algorithms for large-scale seismic inverse problems.” Ph.D. Dissertation at GT. https://hdl.handle.net/1853/62754.
- G. Guidi et al. (2021). “Ten years later: cloud computing is closing the performance gap.” doi:10.1145/3447545.3451183.
- De Sensi et al. (2022). “Noise in the clouds: influence of network performance variability on application scalability.” arXiv:2210.15315.
- S. Matsuoka et al. (2023). “Myths and legends in high-performance computing.” arXiv:2301.02432.