Parallel, Distributed, and High Performance Computing
Research in parallel, distributed, and high-performance computing centers on several fundamental questions. How can we most effectively use novel and unique architectures? What is the programmer’s view of the machine? How can we develop programs that execute efficiently on a wide variety of high-performance architectures? How do we find bugs in high-performance programs?
At the University of Arizona, our research in this area is driven by one goal: enabling users of high-performance computing systems to write and visualize programs easily while still having those programs execute efficiently. Our work ranges from finding new paradigms for expressing high-performance computing applications in high-level languages (Strout), to understanding the behavior of an application as it executes, via visualization (Isaacs), to building models and systems that ensure the efficient execution of these programs (Lowenthal). While our research emphasizes fundamental concepts, we implement our techniques within new or existing high-performance computing systems so that they are useful to the broader scientific community.
Our group has many connections to real-world high-performance computing. We have collaborated with nearly every national laboratory, most extensively with Lawrence Livermore and Argonne. In addition, we partner with colleagues from across the UA campus to ensure that the tools and systems we build are useful to UA faculty working in other disciplines.