DiFfRG
DiFfRG is a set of tools for the discretization of flow equations arising in the functional Renormalization Group (fRG). It supports the setup and calculation of large systems of flow equations allowing for complex combinations of vertex and derivative expansions.
For spatial discretizations, i.e. discretizations of field space mostly used for derivative expansions, DiFfRG makes several finite element (FE) methods available.
The FEM methods included in DiFfRG are built upon the deal.ii finite element library, which is highly parallelized and allows for great performance and flexibility. RG-time-dependent equations as well as stationary equations can be solved together during the flow, allowing for techniques like flowing fields in a very accessible way.
Both explicit and implicit timestepping methods are available, thus allowing for efficient RG-time integration in both the symmetric and the symmetry-broken regime.
We also include a set of tools for the evaluation of integrals and discretization of momentum dependencies.
For an overview, please see the accompanying paper, the tutorial page in the documentation, and the examples in Examples/.
This library has been developed within the fQCD Collaboration.
If you use DiFfRG in your scientific work, please cite the corresponding paper:
To compile and run this project, there are very few requirements, which you can easily install using your package manager on Linux or MacOS. A recent C++ compiler is required, e.g. AppleClang, but in principle, ICC or standard Clang should also work. The following requirements are optional: CUDA, i.e. nvcc, together with a compatible host compiler, e.g. g++ <= 13.2 for CUDA 12.5. All other requirements are bundled and automatically built with DiFfRG. The framework has been tested with the following systems:
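Before installing, a quick sanity check of the toolchain can save time. The snippet below is only an illustration: it reports compiler and CUDA versions and treats CUDA as optional; `c++` is used as a generic alias for your system compiler, and the versions to compare against are the ones stated above.

```shell
# Report the available toolchain; CUDA is optional, so its absence
# is not an error.
if command -v c++ >/dev/null 2>&1; then
  c++ --version | head -n 1
else
  echo "no C++ compiler on PATH"
fi
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | tail -n 1
else
  echo "nvcc not found (CUDA build disabled)"
fi
```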
For a CUDA-enabled build, additionally install
The second line is necessary to switch into a shell where g++-12 is available.
For a CUDA-enabled build, additionally
First, install Xcode and Homebrew, then run
If using Windows, instead of running the project directly, it is recommended to use WSL and then go through the installation as if on Linux (e.g. Arch or Ubuntu).
Although a native install should be unproblematic in most cases, the setup with CUDA functionality may be daunting. Especially on high-performance clusters, and depending on the packages available for the chosen distribution, it may be much easier to work with the framework inside a container.
The specific choice of runtime environment is up to the user; however, we provide a small build script to create a Docker container in which DiFfRG will be built. For this, you will need docker, docker-buildx, and, in case you wish to create a CUDA-compatible image, the NVIDIA container toolkit.
For a CUDA-enabled build, run
In the above, you may want to replace the version 12.5.1 with another version you can find on Docker Hub at nvidia/cuda. Alternatively, for a CUDA-less build, simply run
If using other environments, e.g. ENROOT, the preferred approach is simply to build an image on top of the CUDA images by NVIDIA. Optimal compatibility is given when using nvidia/cuda:12.5.1-devel-rockylinux. Proceed with the installation setup for Rocky Linux above.
For example, with ENROOT a DiFfRG image can be built by following these steps:
Afterwards, one proceeds with the above Rocky Linux setup.
If all requirements are met, you can clone the git repository to a directory of your choice and start the build after switching into that directory.
The build_DiFfRG.sh bash script will build and set up the DiFfRG project and all its requirements. This can take up to half an hour, as the deal.ii library is quite large. The script has the following options:

- `-c` : Use CUDA when building the DiFfRG library.
- `-i <directory>` : Set the installation directory for the library.
- `-j <threads>` : Set the number of threads passed to make and git fetch.
- `--help` : Display this information.

Depending on your number of CPU cores, you should adjust the `-j` parameter, which indicates the number of threads used in the build process. Note that choosing it too large may lead to extreme RAM usage, so tread carefully.
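As a rough heuristic for choosing `-j`, one job per CPU core is a good starting point, capped by available RAM. The 2 GB-per-job figure below is an assumption (heavy C++ template instantiation, as in deal.ii, can use this much per compile job), not a measured value, and /proc/meminfo makes this Linux-specific:

```shell
# Pick a thread count for build_DiFfRG.sh: one job per core, but no
# more than one job per (assumed) 2 GB of RAM, and at least one job.
cores=$(nproc)
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
mem_jobs=$(( mem_kb / (2 * 1024 * 1024) ))   # RAM in units of 2 GB
jobs=$(( mem_jobs < cores ? mem_jobs : cores ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "bash build_DiFfRG.sh -j $jobs"
```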
As soon as the build has finished, you can find a full install of the library in the DiFfRG_install subdirectory.
If you have made changes to the library code, you can update the library by running the update_DiFfRG.sh script, where once again the `-j` parameter should be adjusted to your number of CPU cores. The script takes the following optional arguments:
- `-c` : Use CUDA when building the DiFfRG library.
- `-i <directory>` : Set the installation directory for the library.
- `-j <threads>` : Set the number of threads passed to make and git fetch.
- `-m` : Install the Mathematica package locally.
- `--help` : Display this information.

For an overview, please see the tutorial page in the documentation. A local documentation is always built automatically when running the setup script, but can also be built manually by running
inside the DiFfRG_build directory. You can then find a code reference in the top directory.
All backend code is contained in the DiFfRG directory.
Several simulations are defined in the Applications directory, which can be used as a starting point for your own simulations.
During building and installing DiFfRG, logs are created at every step. You can find the logs for the setup of external dependencies in external/logs and the logs for the build of DiFfRG itself in logs/.
If DiFfRG fails to build on your machine, first check the appropriate logfile. If DiFfRG proves to be incompatible with your machine, please open an issue on GitHub, or alternatively send an email to the author (see the publication).
DiFfRG is a work in progress. If you find some feature missing, a bug, or some other kind of improvement, you can get involved in the further development of DiFfRG.
Thanks to the collaborative nature of GitHub, you can simply fork the project and work on a private copy in your own GitHub account. Feel encouraged to open an issue, or, if you already have a (partially) finished contribution, open a pull request.
A DiFfRG simulation requires you to provide a valid parameters.json file in the execution path, or alternatively to provide another JSON file using the -p flag (see below).
To generate a "stock" parameters.json in the current folder, you can call any DiFfRG application as
Before usage, don't forget to put in the parameters you defined in your own simulation!
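As an illustration, a pruned parameter file might look like the following. The section and key names here are hypothetical stand-ins for this sketch (apart from verbosity, which is discussed below); the actual keys are whatever your application generates:

```json
{
  "physical": {
    "Lambda": 1.0
  },
  "output": {
    "verbosity": 0
  }
}
```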
To monitor the progress of the simulation, one can set the verbosity parameter either in the parameter file,
or from the CLI,
Any DiFfRG simulation using the DiFfRG::ConfigurationHelper class can be asked to print the syntax pertaining to its configuration:
e.g.
In general, the IDA timestepper from the SUNDIALS suite has proven to be the optimal choice for any fRG flow with convexity restoration. Additionally, this solver allows for out-of-the-box solving of additional algebraic systems, which is handy for more complicated fRG setups.
If solving purely variable-dependent systems, consider one of the Boost time steppers, Boost_RK45, Boost_RK78 or Boost_ABM. The latter is especially excellent for extremely large systems without extremely fast dynamics, but lacks adaptive timestepping. In practice, choosing Boost_ABM over one of the RK steppers may speed up a Yang-Mills simulation with full momentum dependences by more than a factor of 10.
For systems with both spatial discretizations and variables, consider one of the implicit-explicit mixtures, SUNDIALS_IDA_Boost_RK45, SUNDIALS_IDA_Boost_RK78 or SUNDIALS_IDA_Boost_ABM.
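The timestepper is typically selected in the parameter file. The section and key names in this sketch are assumptions for illustration only, not the actual DiFfRG schema; consult your generated parameters.json for the real keys:

```json
{
  "timestepping": {
    "solver": "SUNDIALS_IDA"
  }
}
```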
The following third-party libraries are utilized by DiFfRG. They are automatically built and installed by DiFfRG during the build process.