LENS2015 International Workshop

Invited Speakers

  • Abhinav Vishnu − Pacific Northwest National Laboratory
    "Towards Exascale with Global Arrays using Communication Runtime at Extreme Scale (ComEx)"
    Abstract
    Global Arrays is a Partitioned Global Address Space (PGAS) programming model that uses Communication Runtime at Extreme Scale (ComEx) as its communication backend on large-scale systems. In this talk, Dr. Vishnu will present the group's research on the performance and fault-tolerance aspects of Global Arrays and ComEx. He will present approaches for designing ComEx on upcoming systems using MPI as the backend, with both two-sided and one-sided semantics. A performance evaluation of this design using NWChem and several other kernels shows the effectiveness of the approach and performance comparable to the native ports.
    Biography
    Abhinav Vishnu is a senior research scientist at Pacific Northwest National Laboratory. Dr. Vishnu's primary interests are in designing scalable, fault-tolerant, and energy-efficient programming models, with specific applications to machine learning and data mining algorithms. He has served as a co-editor for several journals: Parallel Computing (ParCo), the International Journal of High Performance Computing Applications (IJHPCA), and the Journal of Supercomputing (JoS). He has served as a program co-chair for several workshops, including Programming Models and Systems Software (P2S2) and ParLearning. He has published over 50 journal and conference papers, and his research has been disseminated through several open-source software packages: MVAPICH2 (high-performance MPI over InfiniBand), Communication Runtime at Extreme Scale (ComEx), and the Machine Learning Toolkit for Extreme Scale (MaTEx). Dr. Vishnu received his PhD from The Ohio State University in 2007 under Dr. Dhabaleswar K. (DK) Panda.
  • Dhabaleswar K. Panda − Professor, The Ohio State University
    "Designing Hybrid MPI+PGAS Library for Exascale Systems: MVAPICH2-X Experience"
    Abstract
    This talk will focus on challenges in designing a hybrid MPI+PGAS library for exascale systems. Motivations, features, and design guidelines for supporting the hybrid MPI and PGAS (OpenSHMEM, UPC, and CAF) programming model with the MVAPICH2-X library will be presented. The role of a unified communication runtime in supporting hybrid programming models on InfiniBand, accelerators, and co-processors will be outlined. Unique capabilities of the hybrid MPI+PGAS model for re-designing HPC applications to harness performance and scalability will also be presented through a set of case studies.
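    To give a flavor of the hybrid model, the sketch below mixes MPI and OpenSHMEM calls in one program, which a unified runtime such as MVAPICH2-X permits. It is an illustrative sketch only, not material from the talk; the counter variable and the increment-then-reduce pattern are invented for the example.

    ```c
    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    /* Hypothetical hybrid MPI+OpenSHMEM kernel: each process bumps a
       counter on PE 0 with a one-sided atomic, then all processes join
       an MPI collective over the same set of ranks. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();                 /* unified runtime: both models usable */

        static int counter = 0;       /* symmetric: exists on every PE */
        int me = shmem_my_pe();

        shmem_int_inc(&counter, 0);   /* PGAS phase: atomic inc on PE 0 */
        shmem_barrier_all();

        int local = 1, total = 0;     /* MPI phase: collective reduction */
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (me == 0)
            printf("counter = %d, total = %d\n", counter, total);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }
    ```

    Launched with N processes, both `counter` (on PE 0) and `total` equal N; the point of the hybrid model is that the two phases share one runtime and one set of processes rather than two independently initialized libraries.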
    Biography
    DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at The Ohio State University. He has published over 350 papers in the area of high-end computing and networking. The MVAPICH2 libraries, with support for MPI and PGAS on IB, iWARP, RoCE, GPGPUs, Xeon Phis, and virtualization (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,425 organizations worldwide (in 75 countries). This software is empowering several InfiniBand clusters in the TOP500 list (including the 8th, 11th, and 22nd ranked ones). As of Jul '15, more than 279,000 downloads have taken place from the project's site. The software is also distributed by many InfiniBand, 10GigE/iWARP, and RoCE vendors in their software distributions. He is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.
  • Deepak Eachempati − Postdoctoral Researcher, University of Houston
    "Experiences in Supporting Fortran Coarrays for HPC"
    Abstract
    In the most recent version of the standard, Fortran 2008, new parallel-processing features based on coarrays were incorporated into the specification. The speaker will describe techniques developed and implemented to support these features. He will also discuss upcoming features expected to be adopted in the next revision of the standard, and present results based on an early implementation.
    Biography
    Deepak Eachempati is a postdoctoral researcher in the HPCTools research group in the University of Houston's Computer Science Department. As part of his work, he has been responsible for developing compiler and runtime-system technology to support parallel programming models, including PGAS languages such as Coarray Fortran and shared-memory models such as OpenMP. His research interests include compiler optimization for efficient parallelization, task-scheduling runtime systems, and tool support for performance analysis of parallel applications. He received his B.Sc. in Computer Engineering from the University of Illinois at Urbana-Champaign and his M.Sc. and Ph.D. in Computer Science from the University of Houston.
  • Norbert Eicker − Bergische Universität Wuppertal / Jülich Supercomputing Centre
    "Taming Heterogeneity by Segregation - The DEEP and DEEP-ER take on Heterogeneous Cluster Architectures"
    Abstract

    On the path towards Exascale, the DEEP/-ER projects take a radically different approach to heterogeneity. Instead of combining different computing elements within single nodes, DEEP/-ER's Cluster-Booster concept integrates multi-core processors in a standard Cluster while combining many-core processors in a separate cluster of accelerators, the so-called Booster. DEEP's Booster consists solely of Intel Xeon Phi processors interconnected by the EXTOLL network.

    The talk will not only share insights on the challenges of integrating the hardware in the most energy-efficient way, but also discuss the strong requirements the architecture places on the corresponding programming model. While MPI turns out to provide all the low-level semantics required to utilize the Cluster-Booster system, the project uses an OmpSs abstraction layer to help software developers adapt their applications to the heterogeneous hardware. The ultimate goal is to reduce the burden on application developers. To this end, DEEP/-ER provides a familiar programming environment that spares application developers some of the tedious and often costly code-modernisation work. Confining this work to code annotation, as proposed by DEEP/-ER, is a major advancement.

    The presentation concludes with final results of the DEEP project, which finishes at the end of August 2015.

    Biography

    Norbert Eicker is Professor for Parallel Hard- and Software Systems at Bergische Universität Wuppertal and head of the research group Cluster Computing at Jülich Supercomputing Centre (JSC). Before joining JSC in 2004, Norbert was with ParTec from 2001 on, working on the cluster middleware ParaStation. During his career he has been involved in several research and development projects, including the ALiCE cluster in Wuppertal, the JULI project at JSC, and JSC's general-purpose supercomputers JuRoPA and JURECA. Currently he acts as the chief architect for the DEEP and DEEP-ER projects.

    Norbert holds a PhD in Theoretical Particle Physics from Wuppertal University.

  • Manjunath Gorentla Venkata − Oak Ridge National Laboratory
    "OpenSHMEM: Introduction, Version 1.3, and Beyond"
    Abstract

    OpenSHMEM is a predominant PGAS library interface specification. It is a community effort to standardize the SHMEM programming model, driven by Oak Ridge National Laboratory (ORNL), the Department of Defense (DoD), and the University of Houston (UH). The community has released three versions of the OpenSHMEM specification and will release the latest, version 1.3, at SC15. In this talk, I will first introduce OpenSHMEM, present its history, and discuss the upcoming features. Then, I will discuss the efforts preparing OpenSHMEM for the exascale era and provide an overview of the OpenSHMEM activities, which include specification development, the reference implementation, and research. Lastly, I will provide an overview of the OpenSHMEM reference implementation and its network layer, UCX.
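    For readers new to the model, the sketch below shows the flavor of OpenSHMEM's one-sided interface: symmetric variables and a remote put. It is a generic illustration written against the OpenSHMEM 1.3 API, not material from the talk; the ring-exchange pattern is invented for the example.

    ```c
    #include <stdio.h>
    #include <shmem.h>

    int main(void) {
        shmem_init();
        int me   = shmem_my_pe();   /* this PE's rank */
        int npes = shmem_n_pes();   /* number of PEs in the job */

        /* Symmetric data objects exist at the same address on every PE,
           so a remote PE can be addressed without a matching receive. */
        static int src, dst;
        src = me;

        shmem_barrier_all();
        /* One-sided put: write src into dst on the next PE in a ring. */
        shmem_int_put(&dst, &src, 1, (me + 1) % npes);
        shmem_barrier_all();

        printf("PE %d received %d\n", me, dst);

        shmem_finalize();
        return 0;
    }
    ```

    Run with, e.g., `oshrun -np 4 ./ring`, each PE prints the rank of its left neighbor. The put completes without any action by the target PE, which is the defining property of the one-sided PGAS model.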

    Biography

    Manjunath Gorentla Venkata is a research scientist in Oak Ridge National Laboratory's Computer Science and Mathematics Division, pursuing research and development focused on abstractions and mechanisms that enable non-computer scientists to use supercomputers and clusters efficiently. He is primarily responsible for conceiving, designing, and leading the development of scalable communication interfaces, protocols, and implementations for extreme-scale systems. Dr. Gorentla has published several peer-reviewed research articles in this area and contributed to various international standards, and his research has influenced commercially available network interfaces. He contributes to many open-source software systems, particularly Open MPI, OpenSHMEM, and UCX. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE).

    Dr. Gorentla holds an Affiliate Professor appointment in the Department of Computer Science and Software Engineering at Auburn University.