CEED Third Annual Meeting
August 6-8, 2019
CEED will hold its third annual meeting August 6-8, 2019 at Virginia Tech in Goodwin Hall 155. The goal of the meeting is to report on progress in the center; deepen existing, and establish new, connections with ECP hardware vendors, ECP software technologies projects, and other collaborators; plan project activities; and brainstorm and work as a group to make technical progress. In addition to gathering many of the CEED researchers, the meeting will include representatives of ECP management, hardware vendors, software technology projects, and other interested projects.
If you plan to attend, please register no later than June 30. The registration fee for the meeting is $60, payable in cash or by check upon arrival at the meeting. Checks should be made payable to the Treasurer of Virginia Tech. The fee covers snacks and refreshments each day as well as a CEED T-shirt.
UPDATE 08/07/2019: Due to potentially inclement weather, the hike to the Cascades has been postponed to Thursday afternoon. The AMD HIP tutorial and hackathon have been moved into that Wednesday afternoon slot in its place.
Tuesday, August 6
|8:00-8:30||Coffee & Welcome
Tim Warburton (VT) and Tim Germann (LANL)
Tzanio Kolev (LLNL)
|9:00-9:30||Finite Element Thrust Overview
Veselin Dobrev (LLNL)
|9:30-10:00||High-Order Solver Developments at UIUC
Paul Fischer (UIUC)
|10:30-11:00||libParanumal Progress Update
Tim Warburton (VT)
libParanumal has proved capable of weak-scaling SEM incompressible flow calculations to at least half of the GPUs on Summit at 90% efficiency. In this talk we will discuss ongoing efforts to reduce the number of degrees of freedom required to stay in the weak-scaling regime.
David Medina (Occalytics)
|11:30-12:00||Supporting Complex Geometry RF Adaptive Simulations
Mark Shephard (RPI)
|12:00-12:30||Applications Thrust Overview
Misun Min (ANL)
|1:30-2:00||Accelerating Numerical Methods for a Next Generation Multi-Physics Code
Arturo Vargas (LLNL)
MARBL is LLNL's next-generation multi-physics code for simulating high energy density physics. Distinguishing features of the code are its modular CS infrastructure (Axom) and its choice of high-order numerical methods. High-order schemes deliver higher arithmetic intensity per data access, a trait favored by modern computing processors. In this talk, we provide an overview of recent developments within the Arbitrary Lagrangian-Eulerian package, Blast, focusing on our adoption of programming models and algorithmic tailoring to leverage next-generation supercomputers. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-774160.
Aleks Obabko (ANL)
|2:30-3:00||ExaConstit: Towards an Exascale Crystal Plasticity FEM Code
Robert Carson (LLNL)
An overview of ExaConstit, a new crystal plasticity code built upon the MFEM framework, will be given. ExaConstit is being designed as a component of the Exascale Additive Manufacturing Project (ExaAM) to couple microstructure development and local property analysis with process simulation. Initial results of applying ExaConstit to a microstructure generated from AM processing conditions will be discussed, along with a strong-scaling study of ExaConstit on Sierra/Summit-like machines.
Robert Knaus (SNL)
|3:30-4:00||Coffee Break & Group Photo|
|4:00-4:30||Software Thrust & libCEED Overview
Jed Brown (CU Boulder)
|4:30-5:00||libCEED under PETSc: Current and Future Efforts
Oana Marin (ANL)
We describe benefits of libCEED to PETSc and aspects to be pursued, from integration under complex unstructured meshes, to machine learning coupling and efficient support for research and implementation of matrix free preconditioners.
|5:00-5:30||Hardware Thrust & MAGMA
Stanimire Tomov (UTK)
|5:30-6:00||Aurora: Argonne's Exascale System
Scott Parker (ANL)
Presentation and discussion of the publicly available information on the Aurora system and approaches to preparing applications.
Wednesday, August 7
|8:30-9:00||VTK-m Plans for Higher-Order Elements
Hank Childs (Oregon)
|9:00-9:30||Devil Ray: Ray Tracing High Order Meshes for Visualization
Matt Larsen (LLNL)
This talk will outline our efforts to develop a portable ray tracing library for high-order meshes. Our technology stack includes RAJA and Umpire, which allows Devil Ray to run on NVIDIA and AMD GPUs as well as many-core CPUs. Initially, Devil Ray is focused on image-based visualization tasks such as iso-surfacing, volume rendering, and slicing, but our long-term goals include extracting high-order iso-surfaces. When released, our library will be deployed inside of Ascent, an in situ visualization library.
|9:30-10:00||Parallel I/O in MFEM Using ADIOS2
William Godoy (ORNL)
This talk provides a brief overview of the integration efforts and available features of ADIOS2 as the parallel I/O solution to MFEM for code coupling, in-situ visualization, and scalable file I/O.
|10:30-11:00||Low-Synch Gram-Schmidt Projection Schemes for GMRES-AMG and for Moving Mesh Solvers
Stephen Thomas (NREL)
In collaboration with Tim Warburton and Anthony Austin, we apply the low-synch classical Gram-Schmidt algorithms recently proposed by Swirydowicz et al. (2018) to the least-squares polynomial projection schemes introduced by Fischer (1998). We modify the original projection scheme to instead compute an improved initial guess for a preconditioned GMRES-AMG pressure solver for the incompressible Navier-Stokes equations. For the Nalu Wind model, the GMRES iteration count drops to zero with a relatively small window of previous solutions.
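As a rough illustration of the communication pattern behind such low-synch schemes (a sketch, not the authors' implementation), the following NumPy snippet performs classical Gram-Schmidt in which each new column needs only a single fused block inner product, recovering the diagonal entry of R via the Pythagorean identity:

```python
import numpy as np

def cgs_one_sync(A):
    """Classical Gram-Schmidt QR in which each column requires one fused
    (block) inner product -- the communication pattern that motivates
    low-synchronization variants.  Illustrative sketch only."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        a = A[:, j]
        if j == 0:
            R[0, 0] = np.linalg.norm(a)
            Q[:, 0] = a / R[0, 0]
            continue
        # One fused reduction computes both Q_j^T a and a^T a; in a
        # distributed code these share a single all-reduce per column.
        t = Q[:, :j].T @ a          # projection coefficients
        aa = a @ a                  # squared norm of the new column
        w = a - Q[:, :j] @ t        # orthogonalize against previous columns
        R[:j, j] = t
        # Pythagoras yields ||w|| without a second global reduction.
        R[j, j] = np.sqrt(max(aa - t @ t, 0.0))
        Q[:, j] = w / R[j, j]
    return Q, R
```

In a parallel setting, fusing the two inner products reduces the per-column synchronization count relative to standard CGS/MGS; the trade-off is weaker numerical stability, which the low-synch variants cited in the abstract address with lagged reorthogonalization.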
|11:00-11:30||Fast Direct Solvers in MFEM
Pieter Ghysels (LBNL)
We present recent improvements in the SuperLU and STRUMPACK sparse direct solvers and preconditioners, which are available from MFEM. Support for GPUs has been improved or added. The preconditioners in STRUMPACK have been improved and tested on an indefinite Maxwell problem using MFEM.
|11:30-12:00||Algebraic Multigrid Preconditioners for Matrix-Free Discretizations
Bruno Turcksin (ORNL)
Multigrid preconditioners are popular because of their convergence properties and scalability. Multigrid methods can be divided into geometric multigrid (GMG) and algebraic multigrid (AMG). GMG methods are built on a hierarchy of coarser grids and are therefore excellent matrix-free preconditioners; the challenge is that GMG cannot be easily applied when the mesh is unstructured. AMG methods, by contrast, use the entries of the system matrix to build the coarser levels. This makes AMG more flexible when the matrix is available, but it limits its use as a matrix-free preconditioner. We overcome the need for matrix entries by basing our AMG on the spectral AMGe method. Unlike other AMG methods, spectral AMGe does not use the system matrix; instead, it requires the eigenvectors of the operator evaluated on parts of the domain (agglomerates). This means that, similar to GMG, a mesh is required. However, the constraints on how the agglomerates are built are minimal, since we do not need to rediscretize the operator on the agglomerates. While no matrix associated with the operator is assembled at the fine level, we still assemble a matrix at the coarsest level in order to use a direct solver.
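A minimal sketch of the coarse-space construction described in the abstract, with a hypothetical `apply_local` callback standing in for the matrix-free evaluation of the operator on an agglomerate: each agglomerate contributes its lowest local eigenvectors, extended by zero, as columns of a tentative prolongation.

```python
import numpy as np

def spectral_amge_prolongation(apply_local, agglomerates, ndof, nev=2):
    """Tentative prolongation for a spectral-AMGe-style coarse space.

    apply_local(dofs) returns the dense operator block restricted to the
    given agglomerate dofs; in a real matrix-free code this would be
    evaluated via local element actions.  Hypothetical interface, for
    illustration only.
    """
    cols = []
    for dofs in agglomerates:
        A_loc = apply_local(dofs)            # small dense local block
        _, V = np.linalg.eigh(A_loc)         # local eigenpairs, ascending
        for k in range(nev):
            col = np.zeros(ndof)
            col[dofs] = V[:, k]              # extend eigenvector by zero
            cols.append(col)
    return np.column_stack(cols)             # P maps coarse -> fine
```

With non-overlapping agglomerates the columns have disjoint supports, so P has full column rank and the Galerkin coarse operator P^T A P remains SPD whenever A is.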
|12:00-12:30||Recent Progress on Accelerating Algebraic Multigrid Solvers with GPUs
Ruipeng Li (LLNL)
Modern many-core processors such as graphics processing units (GPUs) are becoming an integral part of many high-performance computing systems. These processors yield enormous raw processing power in the form of massive SIMD parallelism, and accelerating multigrid methods on GPUs has drawn considerable research attention in recent years. For instance, in recent releases of the HYPRE package, the structured multigrid solvers (SMG, PFMG) have full GPU support for both the setup and the solve phases, whereas the algebraic multigrid (AMG) solver, BoomerAMG, has had only its solve phase ported; its setup phase still runs on CPUs only. In this talk, we provide an overview of the available GPU acceleration in HYPRE and present our current work on performing the AMG setup on GPUs. In particular, we focus on the computation of distributed triple-matrix multiplications (i.e., the Galerkin product), which often represents a significant cost of the entire AMG setup phase. We discuss in detail the components of this computation, which include several fast GPU sparse matrix kernels and communication between GPUs. Recent results as well as future work will also be presented.
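In serial form, the Galerkin product mentioned in the abstract is just two sparse matrix-matrix multiplications. A small SciPy sketch (illustrative only; HYPRE performs this as a distributed triple product built on GPU SpGEMM kernels):

```python
import numpy as np
import scipy.sparse as sp

# Galerkin product A_c = P^T A P, the triple-matrix multiplication kernel.
n = 8
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian

# Piecewise-constant interpolation: aggregate pairs of fine points.
P = sp.csr_matrix((np.ones(n), (range(n), [i // 2 for i in range(n)])),
                  shape=(n, n // 2))

A_c = P.T @ A @ P   # two sparse SpGEMMs; the dominant cost of AMG setup
```

For this pairwise aggregation, A_c reproduces the tridiagonal (-1, 2, -1) stencil on the coarse grid, which illustrates why Galerkin coarsening is attractive: the coarse operator inherits the structure of the fine one.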
|2:00-3:00||AMD HIP Tutorial
Noel Chalmers and Damon McDougall (AMD)
|Dinner on your own.|
Thursday, August 8
|10:30-12:00||Meetings and discussions|
|2:00-6:00||Hike to the Cascades|
|Dinner on your own.|
The Inn at Virginia Tech is a 10-minute walk from the meeting. CEED attendees are eligible for a group rate of $139.00/night for the nights of August 5-8 while rooms are available. To reserve a room at that rate, click here.
Other lodging options adjacent to campus:
- Residence Inn Blacksburg-University by Marriott - 10-minute walk
- Hyatt Place Blacksburg/University - 10-minute walk
- Main Street Inn - 15-20-minute walk
The following hotels are further away (a 30-35-minute walk) but are still relatively close to campus. Attendees can drive (5 minutes; see below for information about parking) or take Blacksburg Transit (BT) to get to the meeting. The HWA and HWB bus routes (schedule PDF) connect the stop nearest these hotels (Plantation/Prices Fork) with the one nearest the meeting (Old Security Building/Stanger/Old Turner). Note that BT will be operating on reduced service schedules during the summer break.
Getting to Blacksburg
The nearest airport to Blacksburg served by major commercial airlines is the regional airport in Roanoke (IATA code ROA), which is about 45 minutes away by car. Other nearby airports include Lynchburg (LYH, 1 hour and 30 minutes), Greensboro (GSO, 2 hours and 30 minutes), and Charlotte (CLT, 3 hours).
Driving Directions from Roanoke Airport: From the airport's main entrance, turn right onto Aviation Dr. and exit immediately to westbound VA-101 / Hershberger Rd. Then, exit to I-581 N and follow signs to I-81 S toward Bristol. Proceed on I-81 S for about 25 mi. and then take exit 118-B to US-460 W toward Christiansburg and Blacksburg. After about 9 mi., exit to Prices Fork Rd., following signs to downtown. Campus is to the right following the intersection at University City Blvd.
Buses and Shuttles from Roanoke Airport: The Smart Way Bus runs between Roanoke Airport and the Squires Student Center on campus. The fare is $4.00 one way, payable to the driver in exact cash. The Roanoke Airport Transportation Service operates shuttles between Roanoke Airport and Blacksburg that can drop passengers off at the Inn at Virginia Tech, the Holiday Inn, or the Squires Student Center. Reservations (required) can be made by calling 1-800-288-1958.
Parking is available on campus at the North End Center Garage, from which it is about a 5-minute walk to the meeting venue. There is a daily fee. Alternatively, one can acquire (also for a fee) a visitor's campus parking permit at the Visitor Center. Goodwin Hall is immediately adjacent to Perry Street Lot 3. Further information about parking on campus may be found here.
For questions, please contact the meeting organizers at firstname.lastname@example.org.
We gratefully acknowledge support for this meeting from the Virginia Tech College of Science and Department of Mathematics.