Manual Installation: LLVM/MPICH
Minimum System Requirements
In general, the following is required for MOOSE-based development:
A C++11 compliant compiler (GCC 4.8.4, Clang 3.4.0, or Intel 2018, or greater)
(one is included in any of our redistributable packages if you choose to install one)
Memory: 16 GB (debug builds)
Processor: 64-bit x86
Disk: 30GB
Prerequisites
CMake 3.4 or greater is needed for building PETSc and LLVM. Unless your system is very old, you should be able to use your system's package manager (apt-get, yum, zypper, etc.) to install a compatible version of CMake. For older systems, you will need to obtain the CMake source from http://www.cmake.org and build it appropriately for your system.
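A quick way to check what your system already provides (the package manager commands below are typical examples only; exact package names may vary by distribution):
cmake --version
sudo apt-get install cmake    # Debian/Ubuntu
sudo yum install cmake        # RHEL/CentOS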
A sane environment. This means a clean running environment with nothing but the bare minimum of available libraries: no additional LD_LIBRARY_PATH or other extra PATH settings, no strange UMASK settings, no odd aliases. It may even be best to create a separate account strictly for use with these instructions. I have created an account called 'moose', and will assume you have done the same.
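A minimal sketch of creating such an account on a typical Linux system (the name 'moose' matches the assumption above; your distribution's user-management tooling may differ):
sudo useradd -m moose
sudo passwd moose
su - moose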
Environment
Let's make our environment as sane as possible while setting up all the locations we will need.
module purge #(may fail with command not found)
unset LD_LIBRARY_PATH
unset CPLUS_INCLUDE_PATH
unset C_INCLUDE_PATH
export PACKAGES_DIR=/opt/moose
export STACK_SRC=/tmp/moose_stack_src
umask 022
Whatever terminal window you used to perform the above export and umask commands, you _MUST_ use that same window for the remainder of these instructions. If this window is closed, or the machine is rebooted, you will need to perform the above commands again before continuing with any step, along with any exports from earlier steps you have already completed.
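A quick sanity check that the variables from this step are still set in your current terminal:
echo $PACKAGES_DIR $STACK_SRC
# expected output: /opt/moose /tmp/moose_stack_src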
Now we create our target installation location. We also chown the location to our own user id for now, which allows us to perform all the make install commands without needing sudo (which can complicate things).
mkdir -p $STACK_SRC
sudo mkdir -p $PACKAGES_DIR
sudo chown -R moose $PACKAGES_DIR
GCC
We need a modern, C++11-capable compiler. Our minimum versions are GCC 4.8.4, Clang 3.4.0, or Intel 2018. This section focuses on building a GCC 7.3.0 compiler stack.
What version of GCC do we have?
gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If your version is less than 4.8.4, you will need to build a newer version. If your version is at or greater than 4.8.4, you have the option of skipping the GCC section.
cd $STACK_SRC
curl -L -O http://mirrors.concertpass.com/gcc/releases/gcc-7.3.0/gcc-7.3.0.tar.gz
tar -xf gcc-7.3.0.tar.gz -C .
Obtain GCC pre-reqs:
cd $STACK_SRC/gcc-7.3.0
./contrib/download_prerequisites
Configure, build and install GCC:
mkdir $STACK_SRC/gcc-build
cd $STACK_SRC/gcc-build
../gcc-7.3.0/configure --prefix=$PACKAGES_DIR/gcc-7.3.0 \
--disable-multilib \
--enable-languages=c,c++,fortran,jit \
--enable-checking=release \
--enable-host-shared \
--with-pic
make -j # (where # is the number of cores available)
make install
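A note on make -j: the # above is a placeholder for the number of parallel build jobs. On Linux you can usually supply the core count automatically, for example:
make -j $(nproc)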
Any errors during configure/make will need to be investigated on your own. Every operating system I have come across has its own nuances when it comes to building software. Normally, such issues are solved by installing the necessary development libraries using your system package manager (apt-get, yum, zypper, etc.). Hint: search the internet for 'how to build GCC 7.3.0 on (the name/version of your operating system)'.
In order to utilize our newly built GCC 7.3.0 compiler, we need to set some variables:
export PATH=$PACKAGES_DIR/gcc-7.3.0/bin:$PATH
export LD_LIBRARY_PATH=$PACKAGES_DIR/gcc-7.3.0/lib64:$PACKAGES_DIR/gcc-7.3.0/lib:$PACKAGES_DIR/gcc-7.3.0/lib/gcc/x86_64-unknown-linux-gnu/7.3.0:$PACKAGES_DIR/gcc-7.3.0/libexec/gcc/x86_64-unknown-linux-gnu/7.3.0:$LD_LIBRARY_PATH
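With the PATH updated, the new compiler should now be the first one found; a quick check:
which gcc       # should point at $PACKAGES_DIR/gcc-7.3.0/bin/gcc
gcc --version   # should report gcc (GCC) 7.3.0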
LLVM/Clang
We will clone all the necessary repositories involved with building LLVM/Clang from source:
mkdir -p $STACK_SRC/llvm-src
cd $STACK_SRC/llvm-src
git clone https://github.com/llvm-mirror/llvm.git
git clone https://github.com/llvm-mirror/clang.git $STACK_SRC/llvm-src/llvm/tools/clang
git clone https://github.com/llvm-mirror/compiler-rt.git $STACK_SRC/llvm-src/llvm/projects/compiler-rt
git clone https://github.com/llvm-mirror/libcxx.git $STACK_SRC/llvm-src/llvm/projects/libcxx
git clone https://github.com/llvm-mirror/libcxxabi.git $STACK_SRC/llvm-src/llvm/projects/libcxxabi
git clone https://github.com/llvm-mirror/openmp.git $STACK_SRC/llvm-src/llvm/projects/openmp
git clone https://github.com/llvm-mirror/clang-tools-extra.git $STACK_SRC/llvm-src/llvm/tools/clang/tools/extra
cd $STACK_SRC/llvm-src/llvm
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/tools/clang
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/projects/compiler-rt
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/projects/libcxx
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/projects/libcxxabi
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/projects/openmp
git checkout release_50
cd $STACK_SRC/llvm-src/llvm/tools/clang/tools/extra
git checkout release_50
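If you prefer, the same checkouts can be scripted in a single loop (same repositories and branch as above):
for repo in llvm \
            llvm/tools/clang \
            llvm/tools/clang/tools/extra \
            llvm/projects/compiler-rt \
            llvm/projects/libcxx \
            llvm/projects/libcxxabi \
            llvm/projects/openmp; do
    (cd $STACK_SRC/llvm-src/$repo && git checkout release_50)
done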
And now we configure, build, and install Clang:
mkdir -p $STACK_SRC/llvm-src/build
cd $STACK_SRC/llvm-src/build
cmake ../llvm -G 'Unix Makefiles' \
-DCMAKE_INSTALL_PREFIX=$PACKAGES_DIR/llvm-5.0.1 \
-DCMAKE_INSTALL_RPATH:STRING=$PACKAGES_DIR/llvm-5.0.1/lib \
-DCMAKE_INSTALL_NAME_DIR:STRING=$PACKAGES_DIR/llvm-5.0.1/lib \
-DCMAKE_BUILD_WITH_INSTALL_RPATH=1 \
-DLLVM_TARGETS_TO_BUILD="X86" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_MACOSX_RPATH:BOOL=OFF \
-DPYTHON_EXECUTABLE=`which python2.7` \
-DCMAKE_CXX_LINK_FLAGS="-L$PACKAGES_DIR/gcc-7.3.0/lib64 -Wl,-rpath,$PACKAGES_DIR/gcc-7.3.0/lib64" \
-DGCC_INSTALL_PREFIX=$PACKAGES_DIR/gcc-7.3.0 \
-DCMAKE_CXX_COMPILER=$PACKAGES_DIR/gcc-7.3.0/bin/g++ \
-DCMAKE_C_COMPILER=$PACKAGES_DIR/gcc-7.3.0/bin/gcc
make -j # (where # is the number of cores available)
make install
The above configuration assumes you are using the custom GCC 7.3.0 built in the previous section (note the several gcc-7.3.0 paths). If that is not the case, you will need to provide the correct paths to your current toolchain. It is also possible that LLVM will build successfully if you omit the -D lines referencing gcc-7.3.0 entirely.
In order to utilize our newly built LLVM-Clang compiler, we need to export some variables:
export CC=clang
export CXX=clang++
export PATH=$PACKAGES_DIR/llvm-5.0.1/bin:$PATH
export LD_LIBRARY_PATH=$PACKAGES_DIR/llvm-5.0.1/lib:$LD_LIBRARY_PATH
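As a quick smoke test (the hello.cpp file name here is arbitrary), verify that clang++ is found and can compile a trivial C++11 program against the new toolchain:
which clang++
clang++ --version
echo 'int main() { return 0; }' > $STACK_SRC/hello.cpp
clang++ -std=c++11 $STACK_SRC/hello.cpp -o $STACK_SRC/hello && $STACK_SRC/hello && echo "clang OK"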
MPICH
Download MPICH 3.2
cd $STACK_SRC
curl -L -O http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
tar -xf mpich-3.2.tar.gz -C .
Now we create an out-of-tree build location, configure, build, and install it
mkdir $STACK_SRC/mpich-3.2/llvm-build
cd $STACK_SRC/mpich-3.2/llvm-build
../configure --prefix=$PACKAGES_DIR/mpich-3.2 \
--enable-shared \
--enable-sharedlibs=clang \
--enable-fast=O3 \
--enable-debuginfo \
--enable-totalview \
--enable-two-level-namespace \
FC=gfortran \
F77=gfortran \
F90='' \
CFLAGS='' \
CXXFLAGS='' \
FFLAGS='' \
FCFLAGS='' \
F90FLAGS='' \
F77FLAGS=''
make -j # (where # is the number of cores available)
make install
In order to utilize our newly built MPI wrapper, we need to set some variables:
export PATH=$PACKAGES_DIR/mpich-3.2/bin:$PATH
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F90=mpif90
export C_INCLUDE_PATH=$PACKAGES_DIR/mpich-3.2/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=$PACKAGES_DIR/mpich-3.2/include:$CPLUS_INCLUDE_PATH
export FPATH=$PACKAGES_DIR/mpich-3.2/include:$FPATH
export MANPATH=$PACKAGES_DIR/mpich-3.2/share/man:$MANPATH
export LD_LIBRARY_PATH=$PACKAGES_DIR/mpich-3.2/lib:$LD_LIBRARY_PATH
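As an optional sanity check (the hello_mpi.c file name is arbitrary), verify the wrappers point at the new MPICH and can build and run a trivial MPI program:
mpicc -show     # should reference clang and the mpich-3.2 include/lib paths
cat << 'EOF' > $STACK_SRC/hello_mpi.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello from rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
mpicc $STACK_SRC/hello_mpi.c -o $STACK_SRC/hello_mpi
mpiexec -n 2 $STACK_SRC/hello_mpi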
PETSc
Download PETSc 3.8.3
cd $STACK_SRC
curl -L -O http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.8.3.tar.gz
tar -xf petsc-3.8.3.tar.gz -C .
Now we configure, build, and install it
cd $STACK_SRC/petsc-3.8.3
./configure \
--prefix=$PACKAGES_DIR/petsc-3.8.3 \
--download-hypre=1 \
--with-ssl=0 \
--with-debugging=no \
--with-pic=1 \
--with-shared-libraries=1 \
--with-cc=mpicc \
--with-cxx=mpicxx \
--with-fc=mpif90 \
--download-fblaslapack=1 \
--download-metis=1 \
--download-parmetis=1 \
--download-superlu_dist=1 \
--download-mumps=1 \
--download-scalapack=1 \
--CC=mpicc --CXX=mpicxx --FC=mpif90 --F77=mpif77 --F90=mpif90 \
--CFLAGS='-fPIC -fopenmp' \
--CXXFLAGS='-fPIC -fopenmp' \
--FFLAGS='-fPIC -fopenmp' \
--FCFLAGS='-fPIC -fopenmp' \
--F90FLAGS='-fPIC -fopenmp' \
--F77FLAGS='-fPIC -fopenmp' \
PETSC_DIR=`pwd`
Once configure is done, we build PETSc
make PETSC_DIR=$STACK_SRC/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-opt all
Everything good so far? At the end of the build, PETSc should suggest the next make command to run:
make PETSC_DIR=$STACK_SRC/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-opt install
Now that the install is complete, we can run some of the built-in tests:
make PETSC_DIR=$PACKAGES_DIR/petsc-3.8.3 PETSC_ARCH="" test
Running the tests should produce some output like the following:
[moose@centos-7 petsc-3.8.3]$ make PETSC_DIR=$PACKAGES_DIR/petsc-3.8.3 PETSC_ARCH="" test
Running test examples to verify correct installation
Using PETSC_DIR=/opt/moose/petsc-3.8.3 and PETSC_ARCH=
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes
Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
Completed test examples
=========================================
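If you will continue building libMesh/MOOSE in this same terminal, you may also want to point PETSC_DIR at the installed location now (the same value the moose-environment.sh profile below sets):
export PETSC_DIR=$PACKAGES_DIR/petsc-3.8.3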
Miniconda
Peacock (an optional MOOSE GUI frontend) relies on many libraries. The easiest way to obtain them is to install Miniconda, along with several conda/pip packages.
cd $STACK_SRC
curl -L -O https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
sh Miniconda2-latest-Linux-x86_64.sh -b -p $PACKAGES_DIR/miniconda
PATH=$PACKAGES_DIR/miniconda/bin:$PATH conda config --set ssl_verify false
PATH=$PACKAGES_DIR/miniconda/bin:$PATH conda install -c idaholab python=2.7 coverage \
reportlab \
mako \
numpy \
scipy \
scikit-learn \
h5py \
hdf5 \
scikit-image \
requests \
vtk=7.1.0 \
pyyaml \
matplotlib \
pip \
lxml \
pyflakes \
pandas \
conda-build \
mock \
yaml \
pyqt \
swig --yes
Peacock (as well as the TestHarness system in MOOSE) does not work with Python 3. Please choose Miniconda2 for Python 2.7 instead.
Next, we need to use pip to install additional libraries not supplied by conda:
PATH=$PACKAGES_DIR/miniconda/bin:$PATH pip install --no-cache-dir pybtex livereload==2.5.1 daemonlite pylint==1.6.5 lxml pylatexenc anytree
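A rough sanity check that the key Python packages are importable (module names are taken from the package lists above; this is only a sketch and may fail on a headless machine for display-related modules):
PATH=$PACKAGES_DIR/miniconda/bin:$PATH python -c "import numpy, matplotlib, vtk, yaml; print('peacock dependencies import OK')"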
Change Ownership
We are done building libraries, so let's change ownership of the target directory appropriately:
sudo chown -R root:root $PACKAGES_DIR
This is largely a formality, so that other users of your newly built compiler stack do not see everything owned by a non-root user.
bash_profile
Now that everything has been installed, it's time to wrap all these environment variables up and place them in a bash shell profile somewhere.
Append the following contents into a new file called moose-environment.sh:
#!/bin/bash
### MOOSE Environment Profile
# GCC 7.3.0
# LLVM 5.0.1
# MPICH 3.2
# PETSc 3.8.3
export PACKAGES_DIR=<whatever you exported initially during the Environment setup>
export PATH=$PACKAGES_DIR/llvm-5.0.1/bin:$PACKAGES_DIR/gcc-7.3.0/bin:$PACKAGES_DIR/mpich-3.2/bin:$PACKAGES_DIR/miniconda/bin:$PATH
export LD_LIBRARY_PATH=$PACKAGES_DIR/llvm-5.0.1/lib:$PACKAGES_DIR/gcc-7.3.0/lib64:$PACKAGES_DIR/gcc-7.3.0/lib:$PACKAGES_DIR/gcc-7.3.0/lib/gcc/x86_64-unknown-linux-gnu/7.3.0:$PACKAGES_DIR/gcc-7.3.0/libexec/gcc/x86_64-unknown-linux-gnu/7.3.0:$PACKAGES_DIR/mpich-3.2/lib:$LD_LIBRARY_PATH
export C_INCLUDE_PATH=$PACKAGES_DIR/mpich-3.2/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=$PACKAGES_DIR/mpich-3.2/include:$CPLUS_INCLUDE_PATH
export FPATH=$PACKAGES_DIR/mpich-3.2/include:$FPATH
export MANPATH=$PACKAGES_DIR/mpich-3.2/share/man:$MANPATH
export PETSC_DIR=$PACKAGES_DIR/petsc-3.8.3
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F90=mpif90
That's it! Now you can either source this file manually each time you need to work on a MOOSE-based application:
source /path/to/moose-environment.sh
Or you can have it loaded permanently each time you open a terminal by adding the above source command to your ~/.bash_profile (or ~/.bashrc, whichever your system uses).
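For example (adjust the path to wherever you placed moose-environment.sh):
echo "source /path/to/moose-environment.sh" >> ~/.bash_profile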