CEED 3.0 Software Distribution
The CEED distribution is a collection of software packages that can be integrated together to enable efficient discretizations in a variety of high-order applications on unstructured grids.
CEED is using the Spack package manager for compatible building and installation of its software components.
In this version, CEED 3.0, the CEED software suite consists of 12 packages, plus the CEED meta-package.
For additional details, please consult the following sections:
- Simple Installation
- Using the Installation
- Spack for Beginners
- Tips and Troubleshooting
The CEED team can be contacted by posting to our User Forum or via email at email@example.com. For issues related to the CEED Spack packages, please start a discussion on the GitHub @spack/ceed page.
Simple Installation
If Spack is already available on your system and is visible in your PATH, you can install the CEED software simply with:
spack install -v ceed
To enable package testing during the build process, use instead:
spack install -v --test=all ceed
If you don't have Spack, you can download it and install CEED with the following commands:
git clone https://github.com/spack/spack.git
cd spack
./bin/spack install -v ceed
To avoid long compile times, we strongly recommend that you add a packages.yaml file for your platform; for examples, see the Tips and Troubleshooting section below.
Using the Installation
Spack will install the CEED packages (and the libraries they depend on) in a subdirectory ./opt/spack/<architecture>/<compiler>/ that is specific to the architecture and compiler used (multiple compiler and/or architecture builds can coexist in a single Spack directory).
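For example, on a Linux desktop the layout might look like this (the architecture string, compiler version, and package names below are purely illustrative, not actual output):
# From the top-level spack directory (names vary by system)
ls ./opt/spack
# -> linux-rhel7-x86_64
ls ./opt/spack/linux-rhel7-x86_64
# -> gcc-7.3.0
ls ./opt/spack/linux-rhel7-x86_64/gcc-7.3.0
# -> libceed-0.6-...  mfem-4.1-...  petsc-3.13.0-...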
Below are several examples of how the Spack installation can be linked with and used in user applications.
Building MFEM-based Applications
The simplest way to use the Spack installation is through the
spack location command. For example, MFEM-based codes, such as
the MFEM examples, can be simply built as follows:
git clone https://github.com/mfem/mfem.git
cd mfem; git checkout v4.1
cd examples
make CONFIG_MK=`spack location -i mfem`/share/mfem/config.mk
cd ../miniapps/electromagnetics
make CONFIG_MK=`spack location -i mfem`/share/mfem/config.mk
Note that if you already have multiple Spack installations of MFEM, the above spack location command will fail and you will need to use a more concrete spec, such as mfem~cuda, instead of simply mfem.
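For example, you can list the installed MFEM instances and then select one by spec or by hash (the hash below is illustrative):
# List installed MFEM instances with hashes (-l) and variants (-v)
spack find -lv mfem
# Select one by a more concrete spec, or by hash:
spack location -i mfem~cuda
spack location -i /abc1234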
Alternatively, the Spack installation can be exported to a local directory:
mkdir ceed
spack view --verbose symlink ceed/mfem mfem
Once again, if you already have multiple Spack installations of MFEM, then you will need to use a more concrete spec, such as mfem~cuda, instead of simply mfem.
The ceed/mfem directory now contains the Spack-built MFEM with all of its dependencies (technically, it contains links to all the build files inside the ./opt/spack/ subdirectory for MFEM). In particular, the MFEM library is in ceed/mfem/lib and the MFEM build configuration file is in ceed/mfem/share/mfem/config.mk.
This directory can be used to build the MFEM examples as follows:
# Clone MFEM next to the "ceed" directory created above
git clone https://github.com/mfem/mfem.git
cd mfem; git checkout v4.1
cd examples
make CONFIG_MK=$PWD/../../ceed/mfem/share/mfem/config.mk
The MFEM electromagnetics miniapps can further be built with:
# Continuing from the last directory above: .../mfem/examples
cd ../miniapps/electromagnetics
make CONFIG_MK=../../../ceed/mfem/share/mfem/config.mk
Building libCEED-based Applications
Below we illustrate how to use the Spack installation to build libCEED-based applications, by building the examples in the current libCEED distribution.
Using spack location, the libCEED examples can be built as follows:
git clone https://github.com/CEED/libCEED.git
cd libCEED/examples/ceed
make CEED_DIR=`spack location -i libceed`
./ex1-volume -ceed /cpu/self
If libCEED was installed with OCCA support (which can be checked with spack find -lv libceed occa), the serial, OpenMP, OpenCL, and GPU OCCA backends can be used with:
./ex1-volume -ceed /cpu/occa
./ex1-volume -ceed /omp/occa
./ex1-volume -ceed /ocl/occa
./ex1-volume -ceed /gpu/occa
CUDA-capable backends require a CEED installation with the cuda variant enabled, e.g.:
spack install -v ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70
For more details, see the GPU Demo section below.
With the MAGMA backend, the /gpu/magma resource descriptor can also be used.
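For example, assuming libCEED was built with MAGMA support on a CUDA-capable machine:
./ex1-volume -ceed /gpu/magma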
The MFEM/libCEED and PETSc/libCEED examples can be further built with:
# Continue from the last dir above: .../libCEED/examples/ceed
cd ../mfem
make CEED_DIR=`spack location -i libceed` MFEM_DIR=`spack location -i mfem`
./bp1 -no-vis -o 2 -ceed /cpu/self -m `spack location -i mfem`/share/mfem/data/star.mesh
./bp3 -no-vis -o 2 -ceed /cpu/self -m `spack location -i mfem`/share/mfem/data/star.mesh
cd ../petsc
make CEED_DIR=`spack location -i libceed` PETSC_DIR=`spack location -i petsc`
./bps -degree 2 -ceed /cpu/self
Note that if PETSC_ARCH is set in your environment, you must either unset it or also pass PETSC_ARCH= in the above command.
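For example, passing an empty value on the make command line overrides whatever is set in your environment:
make CEED_DIR=`spack location -i libceed` PETSC_DIR=`spack location -i petsc` PETSC_ARCH=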
Depending on the available backends, additional CEED resource descriptors can be provided, e.g. petsc/bps -problem bp1 -degree 2 -ceed /ocl/occa or mfem/bp1 -no-vis --order 2 -ceed /gpu/occa.
See libCEED Examples for
more on available examples/miniapps and how to run them.
Finally, the Nek5000/libCEED examples can be built as follows:
cd ../nek5000
export CEED_DIR=`spack location -i libceed` NEK5K_DIR=`spack location -i nek5000`
./make-nek-examples.sh
Then you can run the Nek5000 examples as follows:
export MPIEXEC=`spack location -i openmpi`/bin/mpiexec
./run-nek-example.sh -e bp1 -c /cpu/self -n 2 -b 3
In the above example, replace openmpi with whatever MPI implementation you have installed with Spack. Also, you can run
./run-nek-example.sh -h
to find out the options supported by the run script:
options:
   -h|-help     Print this usage information and exit
   -c|-ceed     Ceed backend to be used for the run (optional, default: /cpu/self)
   -e|-example  Example name (optional, default: bp1)
   -n|-np       Specify number of MPI ranks for the run (optional, default: 4)
   -b|-box      Specify the box geometry to be found in ./boxes/ directory (Mandatory)
More information on running the Nek5000 examples can be found in the libCEED documentation.
Alternatively, one can export the Spack install to a local directory:
# In the directory that contains the cloned "libCEED" directory
spack view --verbose symlink ceed/libceed libceed
spack view --verbose symlink ceed/petsc petsc
spack view --verbose symlink ceed/mfem mfem
spack view --verbose symlink ceed/nek5000 nek5000
and use that to specify the CEED_DIR, MFEM_DIR, and PETSC_DIR variables when building the examples:
cd libCEED/examples/ceed
make CEED_DIR=../../../ceed/libceed
./ex1-volume -ceed /cpu/self
cd ../mfem
make CEED_DIR=../../../ceed/libceed MFEM_DIR=../../../ceed/mfem
./bp1 -no-vis -o 2 -ceed /cpu/self -m ../../../ceed/mfem/share/mfem/data/star.mesh
./bp3 -no-vis -o 2 -ceed /cpu/self -m ../../../ceed/mfem/share/mfem/data/star.mesh
cd ../petsc
make CEED_DIR=../../../ceed/libceed PETSC_DIR=../../../ceed/petsc
./bps -problem bp1 -degree 2 -ceed /cpu/self
Docker
Docker is a popular container system available on Linux, Mac, and Windows. After installing Docker, running one command
docker run -it --rm -v `pwd`:/ceed jedbrown/ceed bash
gives you a development environment with CEED installed via Spack and the host's current working directory mounted at
/ceed (the current working directory in the container).
host$ git clone https://github.com/ceed/libceed
host$ cd libceed/examples/petsc
host$ docker run -it --rm -v `pwd`:/ceed jedbrown/ceed bash
container$ make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed`
container$ mpiexec -n 2 ./bp1
Global dofs: 2541
Process decomposition: 2 1 1
Local elements: 1000 = 10 10 10
Owned dofs: 1210 = 10 11 11
KSP cg CONVERGED_RTOL iterations 34 rnorm 3.992091e-09
Pointwise error (max) 1.267540e-02
See the Dockerfile to understand how this image was prepared and/or create your own images.
Shifter
Containers also work at NERSC using Shifter, a container system designed for HPC. To pull the latest CEED image, use
shifterimg pull docker:jedbrown/ceed:3.0
then build code using
shifter commands in place of the
docker commands above, e.g.,
host$ shifter --image=docker:jedbrown/ceed:3.0 bash
container$ make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed`
where we see that the shifter defaults behave similarly to the options we had to give manually for docker.
Batch jobs can be submitted via
sbatch --image docker:jedbrown/ceed:latest ...
with the following in your submission script:
srun -n 64 shifter ./your-petsc-app
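For instance, a minimal submission script might look like the following sketch (node count, time limit, and application name are placeholders):
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --time=00:30:00
# Submit with: sbatch --image docker:jedbrown/ceed:latest job.sh
srun -n 64 shifter ./your-petsc-app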
Singularity
Singularity is another HPC container system, with usage similar to Shifter above. For example:
host$ git clone https://github.com/ceed/libceed
host$ cd libceed
host$ singularity shell docker://jedbrown/ceed:latest
Singularity> make PETSC_DIR=`spack location -i petsc` CEED_DIR=`spack location -i libceed` build/petsc-bps
One can run local jobs in the container:
Singularity> mpiexec -n 4 build/petsc-bps -problem bp3 -benchmark
or submit batch jobs from the host with the likes of:
srun -n 64 singularity exec docker://jedbrown/ceed:latest build/petsc-bps -problem bp3 -benchmark
(see instructions for your batch system).
GPU Demo
Below is the full set of commands to install the CEED distribution on a GPU-capable machine and then use its libCEED GPU kernels to accelerate MFEM, PETSc, and Nek examples. Note that these are very different codes (C++, C, F77) which can nevertheless take advantage, through libCEED, of a common set of GPU kernels.
The setenv commands below assume tcsh. We strongly recommend adding a packages.yaml file in order to avoid long compile times; see the Tips and Troubleshooting section.
# Install CEED 3.0 distribution via Spack
git clone https://github.com/spack/spack.git
cd spack
spack install ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70

# Setup CEED component directories
setenv CEED_DIR `spack location -i libceed`
setenv MFEM_DIR `spack location -i mfem`
setenv PETSC_DIR `spack location -i petsc`
setenv NEK5K_DIR `spack location -i nek5000`

# Clean OCCA cache
# rm -rf ~/.occa

# Clone libCEED examples directory as proxy for libCEED-based codes
git clone https://github.com/CEED/libCEED.git
mv libCEED/examples ceed-examples
rm -rf libCEED

# libCEED examples on CPU and GPU
cd ceed-examples/ceed
make
./ex1-volume -ceed /cpu/self/ref/blocked
./ex1-volume -ceed /gpu/cuda/gen
cd ../..

# MFEM+libCEED examples on CPU and GPU
cd ceed-examples/mfem
make OPT= CEED_LIBS=
./bp1 -ceed /cpu/self/ref/blocked -no-vis -m $MFEM_DIR/share/mfem/data/star.mesh
./bp1 -ceed /gpu/cuda/gen -no-vis -m $MFEM_DIR/share/mfem/data/star.mesh
cd ../..

# PETSc+libCEED examples on CPU and GPU
cd ceed-examples/petsc
make
./bps -problem bp1 -ceed /cpu/self/ref/blocked
./bps -problem bp1 -ceed /gpu/cuda/gen
cd ../..

# Nek+libCEED examples on CPU and GPU
cd ceed-examples/nek5000
./make-nek-examples.sh
./run-nek-example.sh -ceed /cpu/self/ref/blocked -b 3
./run-nek-example.sh -ceed /gpu/cuda/gen -b 3
cd ../..
Spack for Beginners
Spack is a package manager for scientific software that supports multiple versions, configurations, platforms, and compilers.
While Spack does not change the build system that already exists in each CEED component, it coordinates the dependencies between these components and enables them to be built with the same compilers and options.
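For example, a single Spack spec can pin the compiler and variants for the whole dependency tree (the compiler version below is illustrative):
# Build the entire CEED suite with one compiler, disabling CUDA in MFEM
spack install ceed %gcc@7.3.0 ^mfem~cuda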
If you are new to Spack, here are some Spack commands and options that you may find useful:
Spack is a set of Python scripts, so there is nothing to install! Just download it with git clone https://github.com/spack/spack.git and add spack/bin to your path with the following commands:
cd spack; setenv SPACK_ROOT `pwd`; source $SPACK_ROOT/share/spack/setup-env.csh
for csh/tcsh shells (a corresponding setup-env.sh script is provided for bash-compatible shells).
Spack should automatically locate the standard compilers on your system. Use spack compilers to list the ones that have been found. If you need to configure additional compilers, you can do that through the config file ~/.spack/compilers.yaml, or the platform-specific config file ~/.spack/<platform>/compilers.yaml. Some examples of such files are provided below. Check the Spack documentation for additional details.
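As an illustration, a minimal entry in compilers.yaml has the following shape (the paths and version here are placeholders for your system's compilers, not part of the CEED configuration):
compilers:
- compiler:
    spec: gcc@7.3.0
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    operating_system: rhel7
    target: x86_64
    modules: []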
Spack likes to build all of its packages. The file
~/.spack/packages.yaml, and similarly the platform-specific,
~/.spack/<platform>/packages.yaml, allow you to list the packages already installed on your system for Spack to use instead of compiling them itself. Some examples are provided below.
Omit the -v option of spack install to see only a summary for the building of each package (as opposed to the compilation of individual files): spack install ceed. You can still turn the detailed build output on and off by pressing the v key in the Spack terminal.
To troubleshoot the spack install process:
spack --debug --verbose install ceed.
To do a dry run of the spack install process: spack install --fake ceed. Note that you will have to run spack uninstall --all to clean up after this.
To see the specific packages that will be installed for a given spec: spack spec -I ceed.
To see the list of all installed packages: spack find.
To list the locations where all different versions of the ceed package were installed: spack find --long --paths ceed. Alternatively, for a specific version you can use spack location --install-dir ceed.
To uninstall a package, e.g. mfem, including all packages that depend on it: spack uninstall --all --dependents mfem, or spack uninstall /qzn2u7t for a particular hash.
To uninstall all packages that were ever installed by Spack:
spack uninstall --all. In this case you may also want to clear the caches that Spack maintains with:
spack clean -a.
Tips and Troubleshooting
Building on a Mac
The following sample packages.yaml file can be adapted to work on a Mac; replace each <version> placeholder with the version actually installed on your system:
packages:
  all:
    compiler: [clang]
    providers:
      blas: [veclibfort]
      lapack: [veclibfort]
      mpi: [openmpi]
  openmpi:
    paths:
      openmpi@<version>: ~/brew
    buildable: False
  cmake:
    paths:
      cmake@<version>: ~/brew
    buildable: False
  cuda:
    paths:
      cuda@<version>: /usr/local/cuda
    buildable: False
  libx11:
    paths:
      libx11@system: /opt/X11
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /opt/X11
    version: [system]
    buildable: False
  xproto:
    paths:
      # see /opt/X11/lib/pkgconfig/xproto.pc
      xproto@7.0.31: /opt/X11
    version: [7.0.31]
    buildable: False
  python:
    paths:
      python@<version>: /usr
    buildable: False
  zlib:
    paths:
      zlib@<version>: /usr
    buildable: False
The packages in ~/brew were installed with brew install <package>. If you don't have Homebrew, you can install it and the needed tools with:
git clone https://github.com/Homebrew/brew.git
cd brew
bin/brew install openmpi cmake python zlib
The packages in /usr are provided by Apple and come pre-built with Mac OS X. The cuda package is provided by NVIDIA and should be installed separately by downloading it from NVIDIA. We are using the Clang compiler, OpenMPI, and Apple's Accelerate framework for BLAS/LAPACK (via the veclibfort package).
Building on a Linux Desktop
The file ceed3-linux-rhel7-x86_64-packages.yaml provides a sample packages.yaml file that can be adapted to work on a RHEL7 Linux desktop (replace each <version> placeholder with the version installed on your system):
packages:
  all:
    compiler: [gcc]
    providers:
      mpi: [openmpi]
      blas: [netlib-lapack]
      lapack: [netlib-lapack]
  netlib-lapack:
    paths:
      netlib-lapack@system: /usr/lib64
    buildable: False
  openmpi:
    paths:
      openmpi@<version>: ~/local
    buildable: False
  cmake:
    paths:
      cmake@<version>: ~/local
    buildable: False
  cuda:
    paths:
      cuda@<version>: ~/local/cuda
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths:
      xproto@7.0.32: /usr
    version: [7.0.32]
    buildable: False
  python:
    paths:
      python@<version>: /usr
    buildable: False
  zlib:
    paths:
      zlib@<version>: /usr/lib64
    buildable: False
The above file uses user-installed OpenMPI, CMake and CUDA packages, with the rest of the CEED prerequisites installed via the yum package manager.
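For reference, the yum-provided prerequisites above could be installed with something like the following (package names are a best guess for a stock RHEL7 system):
sudo yum install gcc gcc-c++ gcc-gfortran python zlib-devel \
    lapack-devel libX11-devel libXt-devel xorg-x11-proto-devel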
A very similar file, ceed3-ubuntu19.10-packages.yaml, provides a Spack configuration for the Ubuntu distribution (again, replace each <version> placeholder with the installed version):
packages:
  all:
    compiler: [gcc]
    providers:
      mpi: [mpich]
      blas: [blis]
      lapack: [netlib-lapack]
  blis:
    paths:
      blis@<version>: /usr/lib
    buildable: False
  netlib-lapack:
    paths:
      netlib-lapack@<version>: /usr/lib
    variants: +external-blas~lapacke
    buildable: False
  openblas:
    buildable: False
  libflame:
    buildable: False
  veclibfort:
    buildable: False
  intel-parallel-studio:
    buildable: False
  intel-mkl:
    buildable: False
  cray-libsci:
    buildable: False
  atlas:
    buildable: False
  mpich:
    paths:
      mpich@<version>: /usr/local
    buildable: False
  cmake:
    paths:
      cmake@<version>: /usr
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths:
      # See /usr/share/pkgconfig/xproto.pc for version
      xproto@<version>: /usr
    buildable: False
  python:
    paths:
      python@<version>: /usr
    buildable: False
  zlib:
    paths:
      zlib@<version>: /usr/lib
    buildable: False
In this case we use GCC and other development packages installed via apt install, with MPICH installed separately (as needed to use containerized HPC environments like Shifter and Singularity). You can use
docker pull jedbrown/ceed-base
to get a build environment with this packages.yaml and the prerequisites for spack install ceed.
Building at LLNL's Computing Center
The following sample packages.yaml file can be used on LLNL's Lassen machine:
packages:
  all:
    compiler: [xl_r, xl, gcc]
    providers:
      mpi: [spectrum-mpi]
      blas: [essl]
      lapack: [netlib-lapack]
  essl:
    paths:
      essl@6.1.0: /usr/tcetmp/packages/essl/essl-6.1.0
    variants: threads=none
    version: [6.1.0]
    buildable: False
  veclibfort:
    buildable: False
  intel-parallel-studio:
    buildable: False
  intel-mkl:
    buildable: False
  atlas:
    buildable: False
  openblas:
    # OpenBLAS can be built only with gcc
    buildable: False
  netlib-lapack:
    # prefer netlib-lapack with '+external-blas' and '~lapacke' variant
    variants: +external-blas~lapacke
  spectrum-mpi:
    paths:
      spectrum-mpi@2019.01.30%xl@2019.02.07: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-xl-2019.02.07
      spectrum-mpi@2019.01.30%gcc@4.9.3: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-gcc-4.9.3
      spectrum-mpi@2019.01.30%gcc@7.3.1: /usr/tce/packages/spectrum-mpi/spectrum-mpi-2019.01.30-gcc-7.3.1
    buildable: False
  cmake:
    paths:
      cmake@3.14.5: /usr/tce/packages/cmake/cmake-3.14.5
    version: [3.14.5]
    buildable: False
  cuda:
    paths:
      cuda@10.1.243: /usr/tce/packages/cuda/cuda-10.1.243
    version: [10.1.243]
    buildable: False
  libx11:
    paths:
      libx11@system: /usr
    version: [system]
    buildable: False
  libxt:
    paths:
      libxt@system: /usr
    version: [system]
    buildable: False
  xproto:
    paths:
      # see /usr/share/pkgconfig/xproto.pc
      xproto@7.0.31: /usr
    version: [7.0.31]
    buildable: False
  python:
    paths:
      python@2.7.14: /usr/tce/packages/python/python-2.7.14
    version: [2.7.14]
    buildable: False
The above file can be used to build CEED with different compilers (xl being the default), for example:
spack install ceed%gcc~petsc
A corresponding compilers.yaml file for Lassen is also available.
Building at OLCF
The packages.yaml file gives updated locations for spectrum-mpi and other packages. Make sure that modules like xalt are not loaded, because xalt provides a conflicting version of ld that breaks the build. You may also have to load a gcc compiler, e.g., version 6.4.0, and configure it in Spack as a compiler. One can then compile ceed and its dependencies using netlib-lapack as the blas and lapack provider. Here are the commands for the full build:
git clone https://github.com/spack/spack
source spack/share/spack/setup-env.sh
cp packages.yaml spack/etc/spack/
module purge
module load gcc/6.4.0 spectrum-mpi/10.3.1.2-20200121 cmake netlib-lapack netlib-scalapack
spack compiler find
spack install ceed%gcc@6.4.0
Building with CUDA
Several CEED packages can use CUDA: OCCA, MAGMA, libCEED, PETSc, and MFEM. To build these, add the cuda variant to the Spack build, specifying the desired CUDA architecture for MAGMA and MFEM:
spack install ceed+cuda ^magma cuda_arch=70 ^mfem cuda_arch=sm_70
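To check which variants and CUDA architectures a given package supports before installing, you can query Spack directly:
spack info mfem
spack info magma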