CEED 5.0 Software Distribution
The CEED distribution is a collection of software packages that can be integrated together to enable efficient discretizations in a variety of high-order applications on unstructured grids.
CEED uses the Spack package manager for compatible building and installation of its software components.
In this version, CEED 5.0, the CEED software suite consists of the following 15 packages, plus the CEED meta-package:
- GSLIB-1.0.6
- Laghos-3.1
- libCEED-0.10
- MAGMA-2.6.2
- MFEM-4.4
- Nek5000-19.0
- NekRS-21.0
- Nekbone-17.0
- NekCEM-c8db04b
- OCCA-1.1.0
- Omega_h-10.1.0
- PETSc-3.17
- PUMI-2.2.7
- Ratel-0.1.0
- Remhos-1.0
If you are interested in the previous releases, see the CEED-4.0, CEED-3.0, CEED-2.0 and CEED-1.0 pages.
First-time users should read Simple Installation and Using the Installation below. (Quick summary: you can build and install all of the above packages with spack install ceed.)
If you are familiar with Spack, consider using the following machine-specific configurations for CEED (see also the spack-configs repository and the xSDK's config files).
Platform | Architecture | Spack Configuration
---|---|---
Mac | darwin-highsierra-x86_64 | packages
Linux (RHEL7) | linux-rhel7-x86_64 | packages
Linux (Ubuntu) | ubuntu19.10-x86_64 | packages
Lassen (LLNL) | linux-rhel7-ppc64le | packages, compilers
Summit (ORNL) | linux-rhel7-ppc64le | packages
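For example, one way to use a downloaded platform configuration is to copy it to the location where Spack reads user configuration (a sketch; the file name on the left is a placeholder for the packages file linked for your platform above):

# Sketch: install a downloaded platform configuration as your user packages.yaml
mkdir -p ~/.spack
cp <your-platform>-packages.yaml ~/.spack/packages.yaml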
For additional details, please consult the sections below.
The CEED team can be contacted by posting to our User Forum or via email at ceed-users@llnl.gov. For issues related to the CEED Spack packages, please start a discussion on the GitHub @spack/ceed page.
Simple Installation
If Spack is already available on your system and is visible in your PATH, you can install the CEED software simply with:
spack install -v ceed
To enable package testing during the build process, use instead:
spack install -v --test=all ceed
If you don't have Spack, you can download it and install CEED with the following commands:
git clone https://github.com/spack/spack.git
cd spack
./bin/spack install -v ceed
To avoid long compile times, we strongly recommend that you add a packages.yaml file for your platform; see above and the Tips and Troubleshooting section.
Using the Installation
Spack will install the CEED packages (and the libraries they depend on) in a subtree of ./opt/spack/<architecture>/<compiler>/ that is specific to the architecture and compiler used (multiple compiler and/or architecture builds can coexist in a single Spack directory).
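For example, you can ask Spack for the install prefix of a package; the architecture, compiler and hash in the output will differ on your system (the path shown in the comment is only illustrative):

spack find --paths mfem
# e.g. mfem@4.4  <spack-root>/opt/spack/<architecture>/<compiler>/mfem-4.4-<hash>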
Below are several examples of how the Spack installation can be linked with and used in user applications.
Building MFEM-based Applications
The simplest way to use the Spack installation is through the spack location command. For example, MFEM-based codes, such as the MFEM examples, can be built as follows:
git clone https://github.com/mfem/mfem.git
cd mfem; git checkout v4.4
cd examples
make CONFIG_MK=`spack location -i mfem`/share/mfem/config.mk
cd ../miniapps/electromagnetics
make CONFIG_MK=`spack location -i mfem`/share/mfem/config.mk
Note that if you already have multiple Spack installations of MFEM, the above spack location commands will fail and you will need to use a more concrete spec such as mfem~cuda instead of simply mfem.
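For instance, with a non-CUDA MFEM installation you could disambiguate as follows (the variant is only an example; adjust it to the spec you actually installed):

make CONFIG_MK=`spack location -i mfem~cuda`/share/mfem/config.mk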
Alternatively, the Spack installation can be exported to a local directory:
mkdir ceed
spack view --verbose symlink ceed/mfem mfem
Once again, if you already have multiple Spack installations of MFEM, then you will need to use a more concrete spec such as mfem~cuda instead of simply mfem.
The ceed/mfem directory now contains the Spack-built MFEM with all of its dependencies (technically, it contains links to all the build files inside the ./opt/spack/ subdirectory for MFEM). In particular, the MFEM library is in ceed/mfem/lib and the MFEM build configuration file is in ceed/mfem/share/mfem/config.mk.
This directory can be used to build the MFEM examples as follows:
# Clone MFEM next to the "ceed" directory created above
git clone https://github.com/mfem/mfem.git
cd mfem; git checkout v4.4
cd examples
make CONFIG_MK=$PWD/../../ceed/mfem/share/mfem/config.mk
The MFEM electromagnetics miniapps can further be built with:
# Continuing from the last directory above: .../mfem/examples
cd ../miniapps/electromagnetics
make CONFIG_MK=../../../ceed/mfem/share/mfem/config.mk
Building libCEED-based Applications
Below we illustrate how to use the Spack installation to build libCEED-based applications, by building the examples in the current libCEED distribution.
Using spack location, the libCEED examples can be built as follows:
git clone https://github.com/CEED/libCEED.git
cd libCEED/examples/ceed
make CEED_DIR=`spack location -i libceed`
./ex1-volume -ceed /cpu/self
If you have multiple builds of libceed or occa, you need to be more specific in the above spack location command. To list all libceed and occa versions, use spack find:
spack find -lv libceed occa
Then either use variants to choose a unique version, e.g. libceed~cuda, or specify the hashes printed in front of the libceed spec, e.g. libceed/yb3fvek or just /yb3fvek (and similarly for occa).
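For example, to build against one particular libCEED installation you can pass its hash directly to spack location (the hash below is the illustrative one from above; use the hash printed by spack find on your system):

make CEED_DIR=`spack location -i /yb3fvek`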
The serial, OpenMP, OpenCL and GPU OCCA backends can be used with:
./ex1-volume -ceed /cpu/occa
./ex1-volume -ceed /omp/occa
./ex1-volume -ceed /ocl/occa
./ex1-volume -ceed /gpu/occa
In order to use the OCCA GPU backend, one needs to install CEED with the cuda variant enabled, i.e. using the spec ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70:
spack install -v ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70
For more details, see the section GPU demo below.
With the MAGMA backend, the /gpu/magma resource descriptor can also be used.
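For example, assuming CEED was installed with the cuda variant so that the MAGMA backend is available:

./ex1-volume -ceed /gpu/magma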
The MFEM/libCEED and PETSc/libCEED examples can be further built with:
# Continue from the last dir above: .../libCEED/examples/ceed
cd ../mfem
make CEED_DIR=`spack location -i libceed` MFEM_DIR=`spack location -i mfem`
./bp1 -no-vis -o 2 -ceed /cpu/self -m `spack location -i mfem`/share/mfem/data/star.mesh
./bp3 -no-vis -o 2 -ceed /cpu/self -m `spack location -i mfem`/share/mfem/data/star.mesh
cd ../petsc
make CEED_DIR=`spack location -i libceed` PETSC_DIR=`spack location -i petsc`
./bps -degree 2 -ceed /cpu/self
Note that if PETSC_ARCH is set in your environment, you must either unset it or also pass PETSC_ARCH= in the above command.
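For example, to override a PETSC_ARCH value inherited from your environment without unsetting it:

make CEED_DIR=`spack location -i libceed` PETSC_DIR=`spack location -i petsc` PETSC_ARCH=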
Depending on the available backends, additional CEED resource descriptors can be provided, e.g. petsc/bps -problem bp1 -degree 2 -ceed /ocl/occa or mfem/bp1 -no-vis --order 2 -ceed /gpu/occa.
See libCEED Examples for more on available examples/miniapps and how to run them.
Finally, the Nek5000/libCEED examples can be built as follows:
cd ../nek5000
export CEED_DIR=`spack location -i libceed` NEK5K_DIR=`spack location -i nek5000`
./make-nek-examples.sh
Then you can run the Nek5000 examples as follows:
export MPIEXEC=`spack location -i openmpi`/bin/mpiexec
./run-nek-example.sh -e bp1 -c /cpu/self -n 2 -b 3
In the above example, replace openmpi with whatever MPI implementation you have installed with Spack. Also, you can run ./run-nek-example.sh -h to see the options supported by the run script.
options:
-h|-help Print this usage information and exit
-c|-ceed Ceed backend to be used for the run (optional, default: /cpu/self)
-e|-example Example name (optional, default: bp1)
-n|-np Specify number of MPI ranks for the run (optional, default: 4)
-b|-box Specify the box geometry to be found in ./boxes/ directory (Mandatory)
More information on running the Nek5000 examples can be found in the libCEED documentation.
Alternatively, one can export the Spack install to a local directory:
# In the directory that contains the cloned "libCEED" directory
spack view --verbose symlink ceed/libceed libceed
spack view --verbose symlink ceed/petsc petsc
spack view --verbose symlink ceed/mfem mfem
spack view --verbose symlink ceed/nek5000 nek5000
and use that to specify the CEED_DIR, MFEM_DIR and PETSC_DIR variables:
cd libCEED/examples/ceed
make CEED_DIR=../../../ceed/libceed
./ex1-volume -ceed /cpu/self
cd ../mfem
make CEED_DIR=../../../ceed/libceed MFEM_DIR=../../../ceed/mfem
./bp1 -no-vis -o 2 -ceed /cpu/self -m ../../../ceed/mfem/share/mfem/data/star.mesh
./bp3 -no-vis -o 2 -ceed /cpu/self -m ../../../ceed/mfem/share/mfem/data/star.mesh
cd ../petsc
make CEED_DIR=../../../ceed/libceed PETSC_DIR=../../../ceed/petsc
./bps -problem bp1 -degree 2 -ceed /cpu/self
Using Containers
Docker is a popular container system available on Linux, Mac, and Windows. After installing Docker, running the single command
docker run -it --rm -v `pwd`:/ceed jedbrown/ceed bash
gives you a development environment with CEED installed via Spack and the host's current working directory mounted at /ceed (the current working directory in the container).
For example,
host$ git clone https://github.com/ceed/libceed
host$ cd libceed/examples/petsc
host$ docker run -it --rm -v `pwd`:/ceed jedbrown/ceed bash
container$ make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed`
container$ mpiexec -n 2 ./bp1
Global dofs: 2541
Process decomposition: 2 1 1
Local elements: 1000 = 10 10 10
Owned dofs: 1210 = 10 11 11
KSP cg CONVERGED_RTOL iterations 34 rnorm 3.992091e-09
Pointwise error (max) 1.267540e-02
See the Dockerfile to understand how this image was prepared and/or create your own images.
NERSC's Shifter
Containers also work at NERSC using Shifter, a container system designed for HPC. To pull the latest CEED image, use
shifterimg pull docker:jedbrown/ceed:5.0
then build code using shifter commands in place of the docker commands above, e.g.,
host$ shifter --image=docker:jedbrown/ceed:5.0 bash
container$ make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed`
where we see that the shifter defaults behave similarly to the options we had to give manually for docker.
Batch jobs can be submitted via
sbatch --image docker:jedbrown/ceed:latest ...
with the following in your submission script:
srun -n 64 shifter ./your-petsc-app
Singularity
Singularity is another HPC container system with usage similar to Shifter above.
host$ git clone https://github.com/ceed/libceed
host$ cd libceed
host$ singularity shell docker://jedbrown/ceed:latest
Singularity> make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed` build/petsc-bps
One can run local jobs in the container:
Singularity> mpiexec -n 4 build/petsc-bps -problem bp3 -benchmark
or submit batch jobs from the host with the likes of:
srun -n 64 singularity exec docker://jedbrown/ceed:latest build/petsc-bps -problem bp3 -benchmark
(see instructions for your batch system).
GPU demo
Below is the full set of commands to install the CEED distribution on a GPU-capable machine and then use its libCEED GPU kernels to accelerate MFEM, PETSc and Nek examples. Note that these are very different codes (C++, C, F77) which can nevertheless, through libCEED, take advantage of a common set of GPU kernels.
The setenv commands below assume csh/tcsh. We strongly recommend adding a packages.yaml file in order to avoid long compile times; see Tips and Troubleshooting.
# Install CEED 5.0 distribution via Spack
git clone https://github.com/spack/spack.git
cd spack
# Put spack in your PATH (csh/tcsh; see Spack for Beginners)
setenv SPACK_ROOT `pwd`
source $SPACK_ROOT/share/spack/setup-env.csh
spack install ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70
# Setup CEED component directories
setenv CEED_DIR `spack location -i libceed`
setenv MFEM_DIR `spack location -i mfem`
setenv PETSC_DIR `spack location -i petsc`
setenv NEK5K_DIR `spack location -i nek5000`
# Clean OCCA cache
# rm -rf ~/.occa
# Clone libCEED examples directory as proxy for libCEED-based codes
git clone https://github.com/CEED/libCEED.git
mv libCEED/examples ceed-examples
rm -rf libCEED
# libCEED examples on CPU and GPU
cd ceed-examples/ceed
make
./ex1-volume -ceed /cpu/self/ref/blocked
./ex1-volume -ceed /gpu/cuda/gen
cd ../..
# MFEM+libCEED examples on CPU and GPU
cd ceed-examples/mfem
make OPT= CEED_LIBS=
./bp1 -ceed /cpu/self/ref/blocked -no-vis -m $MFEM_DIR/share/mfem/data/star.mesh
./bp1 -ceed /gpu/cuda/gen -no-vis -m $MFEM_DIR/share/mfem/data/star.mesh
cd ../..
# PETSc+libCEED examples on CPU and GPU
cd ceed-examples/petsc
make
./bps -problem bp1 -ceed /cpu/self/ref/blocked
./bps -problem bp1 -ceed /gpu/cuda/gen
cd ../..
# Nek+libCEED examples on CPU and GPU
cd ceed-examples/nek5000
./make-nek-examples.sh
./run-nek-example.sh -ceed /cpu/self/ref/blocked -b 3
./run-nek-example.sh -ceed /gpu/cuda/gen -b 3
cd ../..
Spack for Beginners
Spack is a package manager for scientific software that supports multiple versions, configurations, platforms, and compilers.
While Spack does not change the build system that already exists in each CEED component, it coordinates the dependencies between these components and enables them to be built with the same compilers and options.
If you are new to Spack, here are some Spack commands and options that you may find useful (a condensed first-time workflow is sketched after this list):
- Spack is a set of Python scripts, so there is nothing to install! Just download it with git clone https://github.com/spack/spack.git and add spack/bin to your path with the following commands: . share/spack/setup-env.sh for bash/zsh, or setenv SPACK_ROOT `pwd`; source $SPACK_ROOT/share/spack/setup-env.csh for csh/tcsh.
- Spack should automatically locate the standard compilers on your system. Use spack compilers to list the ones that have been found. If you need to configure additional compilers, you can do that through the config file ~/.spack/compilers.yaml, or the platform-specific config file ~/.spack/<platform>/compilers.yaml. Some examples of such files are provided below. Check the Spack documentation for additional details.
- Spack likes to build all of its packages. The file ~/.spack/packages.yaml, and similarly the platform-specific ~/.spack/<platform>/packages.yaml, allow you to list the packages already installed on your system for Spack to use instead of compiling them itself. Some examples are provided below.
- Skip the -v option of spack install to see only a summary for the building of each package (as opposed to the compilation of individual files): spack install ceed. You can still turn the detailed build output on and off by pressing the v key in the Spack terminal.
- To troubleshoot the spack install process: spack --debug --verbose install ceed.
- To do a dry run of the spack install process: spack install --fake ceed. Note that you will have to run spack uninstall --all to clean up after this.
- To see the specific packages that will be installed for a particular package, e.g. ceed, use: spack spec -I ceed.
- To see the list of all installed packages: spack find.
- To list the locations where all different versions of the ceed package were installed: spack find --long --paths ceed. Alternatively, for a specific version you can use spack location --install-dir ceed.
- To uninstall a package, e.g. mfem, including all packages that depend on it: spack uninstall --all --dependents mfem, or spack uninstall /qzn2u7t for a particular hash.
- To uninstall all packages that were ever installed by Spack: spack uninstall --all. In this case you may also want to clear the caches that Spack maintains with spack clean -a.
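Putting the commands above together, here is one possible first-time sequence (bash/zsh assumed; this is only a sketch, and you may want to add a packages.yaml first, as described in Tips and Troubleshooting):

git clone https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack compilers      # check which compilers Spack detected
spack spec -I ceed   # preview the packages that would be installed
spack install ceed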
Tips and Troubleshooting
Building on a Mac
The file ceed3-darwin-highsierra-x86_64-packages.yaml provides a sample packages.yaml file based on Homebrew that should work on most Macs. (You can use MacPorts instead of Homebrew if you prefer.)
packages:
  all:
    compiler: [clang]
    providers:
      blas: [veclibfort]
      lapack: [veclibfort]
      mpi: [openmpi]
  openmpi:
    paths:
      openmpi@3.0.0: ~/brew
    buildable: False
  cmake:
    paths:
      cmake@3.10.2: ~/brew
    buildable: False
  cuda:
    paths:
      cuda@9.1.85: /usr/local/cuda
    buildable: False
  libx11:
    paths:
      libx11@system: /opt/X11
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /opt/X11
    version: [system]
    buildable: False
  xproto:
    paths:
      # see /opt/X11/lib/pkgconfig/xproto.pc
      xproto@7.0.31: /opt/X11
    version: [7.0.31]
    buildable: False
  python:
    paths:
      python@2.7.10: /usr
    buildable: False
  zlib:
    paths:
      zlib@1.2.11: /usr
    buildable: False
The packages in ~/brew were installed with brew install package. If you don't have Homebrew, you can install it and the needed tools with:
git clone https://github.com/Homebrew/brew.git
cd brew
bin/brew install openmpi cmake python zlib
The packages in /usr are provided by Apple and come pre-built with Mac OS X. The cuda package is provided by NVIDIA and should be installed separately by downloading it from NVIDIA. We are using the Clang compiler, OpenMPI, and Apple's Accelerate framework for BLAS/LAPACK.
Building on a Linux Desktop
The file ceed3-linux-rhel7-x86_64-packages.yaml provides a sample packages.yaml file that can be adapted to work on a RHEL7 Linux desktop:
packages:
  all:
    compiler: [gcc]
    providers:
      mpi: [openmpi]
      blas: [netlib-lapack]
      lapack: [netlib-lapack]
  netlib-lapack:
    paths:
      netlib-lapack@system: /usr/lib64
    buildable: False
  openmpi:
    paths:
      openmpi@3.0.0: ~/local
    buildable: False
  cmake:
    paths:
      cmake@3.10.2: ~/local
    buildable: False
  cuda:
    paths:
      cuda@9.1.85: ~/local/cuda
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths:
      xproto@7.0.32: /usr
    version: [7.0.32]
    buildable: False
  python:
    paths:
      python@2.7.14: /usr
    buildable: False
  zlib:
    paths:
      zlib@1.2.11: /usr/lib64
    buildable: False
The above file uses user-installed OpenMPI, CMake and CUDA packages, with the rest of the CEED prerequisites installed via the yum package manager.
A very similar file, ceed3-ubuntu19.10-packages.yaml, provides a Spack configuration for the Ubuntu distribution:
packages:
  all:
    compiler: [gcc]
    providers:
      mpi: [mpich]
      blas: [blis]
      lapack: [netlib-lapack]
  blis:
    paths:
      blis@0.6.0: /usr/lib
    buildable: False
  netlib-lapack:
    paths:
      netlib-lapack@3.8.0: /usr/lib
    variants: +external-blas~lapacke
    buildable: False
  openblas:
    buildable: False
  libflame:
    buildable: False
  veclibfort:
    buildable: False
  intel-parallel-studio:
    buildable: False
  intel-mkl:
    buildable: False
  cray-libsci:
    buildable: False
  atlas:
    buildable: False
  mpich:
    paths:
      mpich@3.3.2: /usr/local
    buildable: False
  cmake:
    paths:
      cmake@3.13.4: /usr
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths: # See /usr/share/pkgconfig/xproto.pc for version
      xproto@7.0.32: /usr
    buildable: False
  python:
    paths:
      python@3.7.5: /usr
    buildable: False
  zlib:
    paths:
      zlib@1.2.11: /usr/lib
    buildable: False
In this case we use GCC and other development packages installed via apt install, with MPICH installed separately (as needed to use containerized HPC environments like Shifter and Singularity). You can use
docker pull jedbrown/ceed-base
to get a build environment with this packages.yaml and the prerequisites for spack install ceed.
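For example, a sketch that mirrors the Docker usage shown earlier (whether Spack itself is pre-installed in this image is not guaranteed here, so the clone step from Simple Installation is included):

docker pull jedbrown/ceed-base
docker run -it --rm -v `pwd`:/ceed jedbrown/ceed-base bash
# inside the container:
git clone https://github.com/spack/spack.git
./spack/bin/spack install ceed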
Building at LLNL's Computing Center
Lassen
The file ceed3-lassen-packages.yaml is an example of a packages.yaml file for the Lassen system at LLNL's Livermore Computing center, which is similar to the Sierra supercomputer.
packages:
  all:
    compiler: [xl_r, xl, gcc]
    providers:
      mpi: [spectrum-mpi]
      blas: [essl]
      lapack: [netlib-lapack]
  essl:
    paths:
      essl@6.1.0: /usr/tcetmp/packages/essl/essl-6.1.0
    variants: threads=none
    version: [6.1.0]
    buildable: False
  veclibfort:
    buildable: False
  intel-parallel-studio:
    buildable: False
  intel-mkl:
    buildable: False
  atlas:
    buildable: False
  openblas: # OpenBLAS can be built only with gcc
    buildable: False
  netlib-lapack: # prefer netlib-lapack with '+external-blas' and '~lapacke' variant
    variants: +external-blas~lapacke
  spectrum-mpi:
    paths:
      spectrum-mpi@2019-01-30%xl_r@16.1.1: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-xl-2019.02.07
      spectrum-mpi@2019-01-30%gcc@4.9.3: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-gcc-4.9.3
      spectrum-mpi@2019-01-30%gcc@7.3.1: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-gcc-7.3.1
    buildable: False
  cmake:
    paths:
      cmake@3.14.5: /usr/tce/packages/cmake/cmake-3.14.5
    version: [3.14.5]
    buildable: False
  cuda:
    paths:
      cuda@10.1.243: /usr/tce/packages/cuda/cuda-10.1.243
    version: [10.1.243]
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths:
      # see /usr/share/pkgconfig/xproto.pc
      xproto@7.0.31: /usr
    version: [7.0.31]
    buildable: False
  python:
    paths:
      python@2.7.14: /usr/tce/packages/python/python-2.7.14
    version: [2.7.14]
    buildable: False
The above file can be used to build CEED with different compilers (xl being the default), for example:
spack install ceed%gcc~petsc
A corresponding compilers.yaml file for Lassen can be found here: ceed3-lassen-compilers.yaml.
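One way to use these two files is to copy them to the names Spack expects in your user configuration directory (a sketch, assuming you have downloaded both files into the current directory):

cp ceed3-lassen-packages.yaml ~/.spack/packages.yaml
cp ceed3-lassen-compilers.yaml ~/.spack/compilers.yaml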
Building at OLCF
Summit
The file ceed3-summit-packages.yaml is an example of a packages.yaml file for the Summit system at OLCF.
The packages.yaml file gives updated locations for spectrum-mpi and other packages. Make sure that modules like xalt are not loaded, because xalt provides a conflicting version of "ld" which breaks the build. You may have to load a gcc compiler, e.g. version 6.4.0, and configure it in Spack as a compiler. Then you can compile ceed and its dependencies using netlib-lapack as the BLAS and LAPACK provider. Here are the commands for the full build:
git clone https://github.com/spack/spack
source spack/share/spack/setup-env.sh
cp packages.yaml spack/etc/spack/
module purge
module load gcc/6.4.0 spectrum-mpi/10.3.1.2-20200121 cmake netlib-lapack netlib-scalapack
spack compiler find
spack install ceed%gcc@6.4.0
Installing CUDA
- Several CEED packages can use CUDA: OCCA, MAGMA, libCEED, PETSc, and MFEM.
- To build these, add the cuda variant to the Spack build while specifying the desired CUDA architecture for MAGMA and MFEM:
spack install ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70
- You will need to have the NVIDIA CUDA SDK and driver installed on your system (see developer.nvidia.com) and specify it in the packages.yaml file. See, for example, the cuda section in Building on a Mac, or the ceed3-linux-rhel7-x86_64-packages.yaml file; a minimal example entry is sketched below.
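As an illustration, a cuda entry in packages.yaml looks like the following; the version and path are examples modeled on the configurations above, so adjust them to your local CUDA installation:

packages:
  cuda:
    paths:
      cuda@10.1.243: /usr/local/cuda
    buildable: False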