Invited Speakers 2012

Abstracts of Invited Talks and Speaker Bios

On the Road to Exaflops - Some Reality Checks

Bernd Winkelsträter, Fujitsu

After the PetaFlop barrier was broken, the next target for HPC systems was immediately set: ExaFlop performance. This talk will give an overview and background on the technical and economic boundaries, providing some reality checks for any plan to get there. Special attention is also paid to the needs of commercial HPC, which often differ from those of research-oriented HPC. Last but not least, the architecture of Fujitsu's 10 PFlop K computer will be presented.

Short Bio of Bernd Winkelsträter:
Bernd Winkelsträter works as senior technology analyst for the PRIMERGY server development within Fujitsu Technology Solutions. He tracks many different technologies to determine their potential to influence future server designs and hardware/software architectures. This covers many hardware technologies such as CPUs, memory, NVM, chipsets, networks, interconnects, storage, and coprocessor technologies, but also how software layers like virtualization and the operating system, and key applications like databases, could use such innovative hardware architectures. He joined Nixdorf Computers back in 1987, working as a system programmer on a fault-tolerant, single-system-image UNIX operating system for a high-speed clustered set of multiprocessors. Over the years, he has been involved in several leading-edge server technology projects at Siemens-Nixdorf, Siemens, Fujitsu Siemens Computers, and now Fujitsu Technology Solutions. In 2000, he switched from system software development to hardware development and took his current position as the senior technology analyst for PRIMERGY x86 servers. He represents Fujitsu in several industry standardization bodies on innovative technologies, e.g. the SSD Form Factor Working Group, the NVMe Working Group, and the SNIA TWG on NVM Programming.



Exascale Preparations using Miniapps

Richard Barrett, Sandia National Labs

Preparations for exascale computing for science and engineering applications are driven by the realization that exascale architectures will be significantly different in structure and design from petascale architectures. One of the greatest concerns facing programs such as the U.S. Department of Energy's Advanced Simulation and Computing (ASC) initiative is how best to port full applications that have been developed over nearly two decades so that they exploit the increased computational capabilities. These applications typically consist of millions of lines of source code, codifying significant bodies of knowledge that have developed over multiple generations of scientists, and must support ongoing programmatic-level work. The Mantevo project has developed a set of application proxies, which we call miniapps, that provide a tractable means for exploring key performance issues in these application codes. In this talk I will give an overview of the Mantevo project, introduce a methodology for understanding the miniapps' predictive capabilities relative to a set of mission-critical application codes, and illustrate how these miniapps have been used to understand the capabilities and characteristics of some current, emerging, and future architectures.

Short Bio of Richard Barrett:
Richard Barrett is a Principal Member of the Technical Staff in the Extreme-scale computing group in the Center for Computing Research at Sandia National Laboratories. He leads the Application Performance Modeling and Analysis Team (PMAT), whose goals are to understand and characterize application performance on key HPC platforms that are currently deployed, and to predict performance on future platforms using mathematical modeling methods and techniques, simulation, and an empirical knowledge base.

MPI-3.0: A Response to New Challenges in Hardware and Software


Torsten Höfler, ETH Zürich

The Message Passing Interface (MPI) is one of the most successful parallel programming frameworks in High Performance Computing (HPC). Implementations have evolved over the last 15 years and provide a rather stable software base. However, the challenges of programming systems that constantly grow in scale and exhibit more intelligent networks, multi- and manycore processors, and intra-node heterogeneity cannot be ignored. The MPI Forum is going to ratify a new version of MPI (3.0), which reacts to the changing environment by improving the support for topological communications, collective communications, intra- and inter-node direct memory access, and various other important concepts. The new programming interfaces pose challenges in various directions. First, programmers need to understand how to effectively utilize the new interfaces and how to restructure their codes. Second, implementers need to map the new functions onto novel hardware to achieve the highest performance. We will discuss both issues and their relation to multicore architectures in detail, and point out numerous research challenges and opportunities from theoretical as well as practical perspectives.



Short Bio of Torsten Höfler:

Torsten is an Assistant Professor of Computer Science at ETH Zürich, Switzerland. Before joining ETH, he led the performance modeling and simulation efforts of parallel petascale applications for the NSF-funded Blue Waters project. He is also a key member of the Message Passing Interface (MPI) Forum, where he chairs the "Collective Operations and Topologies" working group. Torsten received his Ph.D. in Computer Science from Indiana University. He won the best paper award at the ACM/IEEE Supercomputing Conference 2010 (SC10), has published over 40 peer-reviewed scientific conference and journal articles, and authored chapters of the MPI-2.2 and MPI-3.0 standards. Torsten received the SIAM SIAG/Supercomputing Junior Scientist Prize in 2012. His research interests revolve around the central topic of "Performance-centric Software Development" and deal with scalable networks, parallel programming techniques, and performance modeling. Additional information about Torsten can be found on his homepage at unixer.de.


DEEP and EXTOLL, the scalable interconnect for the DEEP Booster architecture


Mondrian Nüssle, EXTOLL

EXTOLL is a new, scalable, high-performance interconnection network originally developed at the University of Heidelberg. The DEEP project, one of the three Exascale projects funded by the EU's 7th Framework Programme, chose to employ the novel EXTOLL network for the booster part of the DEEP architecture. The booster concept of DEEP takes the concept of accelerators to a new level: instead of adding accelerator cards to cluster nodes, an accelerator cluster - the booster - will complement a conventional HPC system. This talk will give an overview of the architecture, including the booster nodes featuring Intel Xeon Phi (Knights Corner) processors. Special emphasis will be placed on the EXTOLL interconnection network and how it contributes to the overall goals of the DEEP project.

Short Bio of Mondrian Nüssle:
Mondrian Nüssle is one of the founders of EXTOLL GmbH and the CTO of EXTOLL. He worked on the EXTOLL project from day one at the Computer Architecture Group of the University of Heidelberg. He was also involved with the ATOLL project as well as multiple other computer-architecture-related projects, both at the University of Heidelberg and the University of Mannheim. His principal research interests are the hardware and software architecture of high-speed, high-performance interconnection networks. Dr. Nüssle received his doctorate degree from the University of Mannheim.