Description of SUNDIALS
SUNDIALS was implemented with the goal of providing robust time integrators and nonlinear solvers that can easily be incorporated into existing simulation codes. The primary design goals were to require minimal information from the user, allow users to easily supply their own data structures underneath the solvers, and allow for easy incorporation of user-supplied linear solvers and preconditioners.
The main numerical operations performed in these codes are operations on data vectors, and the codes have been written in terms of interfaces to these vector operations. The result of this design is that users can relatively easily provide their own data structures to the solvers by telling the solver about their structures and providing the required operations on them. The codes also come with default vector structures with pre-defined operation implementations for both serial and distributed memory parallel environments in case a user prefers not to supply their own structures. In addition, all parallelism is contained within specific vector operations (norms, dot products, etc.). No other operations within the solvers require knowledge of parallelism. Thus, using a solver in parallel consists of using a parallel vector implementation, either the one provided with SUNDIALS or the user's own parallel vector structure, underneath the solver. Hence, we do not make a distinction between parallel and serial versions of the codes.
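As a small illustration of this vector abstraction, the following sketch uses the serial NVECTOR module shipped with SUNDIALS; it assumes the long-standing serial API (N_VNew_Serial, N_VLinearSum, N_VDotProd), and the vector length and values are arbitrary choices for the example. A user-supplied NVECTOR implementation would be exercised through exactly the same generic operations.

#include <stdio.h>
#include <nvector/nvector_serial.h>   /* serial N_Vector type and operations */

int main(void)
{
  long int N = 4;                     /* arbitrary length for this sketch */
  N_Vector x = N_VNew_Serial(N);
  N_Vector y = N_VNew_Serial(N);
  N_Vector z = N_VNew_Serial(N);

  N_VConst(1.0, x);                   /* x_i = 1 */
  N_VConst(2.0, y);                   /* y_i = 2 */

  /* z = 3*x + y, computed through the generic vector interface */
  N_VLinearSum(3.0, x, 1.0, y, z);

  /* reductions such as dot products are where all parallelism lives */
  printf("dot(x,z) = %g\n", (double) N_VDotProd(x, z));

  N_VDestroy_Serial(x);
  N_VDestroy_Serial(y);
  N_VDestroy_Serial(z);
  return 0;
}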
Description of CVODE
CVODE is a solver for stiff and nonstiff ordinary differential equation (ODE) systems (initial value problem) given in explicit form y' = f(t,y).
The methods used in CVODE are variable-order, variable-step multistep methods. For nonstiff problems, CVODE includes the Adams-Moulton formulas, with the order varying between 1 and 12. For stiff problems, CVODE includes the Backward Differentiation Formulas (BDFs) in so-called fixed-leading coefficient form, with order varying between 1 and 5. For either choice of formula, the resulting nonlinear system is solved (approximately) at each integration step. For this, CVODE offers the choice of either functional iteration, suitable only for nonstiff systems, or various versions of Newton iteration. In the case of a direct linear solver (dense or banded), the Newton iteration is a Modified Newton iteration, in that the Jacobian is fixed (and usually out of date). When using a Krylov method as the linear solver, the iteration is an Inexact Newton iteration, using the current Jacobian (through matrix-free products), in which the linear residual is nonzero but controlled.
When used in conjunction with the serial NVECTOR module, CVODE provides direct (dense and band) solvers, a sparse direct solver (KLU), a multi-threaded sparse solver (SuperLUMT), and three preconditioned Krylov (iterative) solvers (GMRES, Bi-CGStab, and TFQMR). In the parallel versions (CVODE used with a parallel NVECTOR module) only the Krylov linear solvers are available. An approximate diagonal Jacobian option is also available with both versions. For the serial version, there is a banded preconditioner module called CVBANDPRE for use with the Krylov solvers, while for the parallel version there is a preconditioner module called CVBBDPRE which provides a band-block-diagonal preconditioner.
For use with Fortran applications, a set of Fortran/C interface routines, called FCVODE, is also supplied. These are written in C, but assume that the user calling program and all user-supplied routines are in Fortran.
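To make the calling sequence concrete, here is a minimal sketch that integrates the scalar test problem y' = -y with BDF and a dense Modified Newton iteration. It assumes the SUNDIALS 2.x-era C interface (CVodeCreate taking the method and iteration flags, and the CVDense attachment routine); the problem, tolerances, and output time are arbitrary choices for the example.

#include <stdio.h>
#include <cvode/cvode.h>               /* CVODE main header */
#include <cvode/cvode_dense.h>         /* CVDense direct linear solver (2.x-era) */
#include <nvector/nvector_serial.h>    /* serial N_Vector */

/* Right-hand side for the hypothetical example problem y' = -y */
static int f(realtype t, N_Vector y, N_Vector ydot, void *user_data)
{
  NV_Ith_S(ydot, 0) = -NV_Ith_S(y, 0);
  return 0;
}

int main(void)
{
  realtype t0 = 0.0, tout = 1.0, t;
  N_Vector y = N_VNew_Serial(1);
  NV_Ith_S(y, 0) = 1.0;                         /* initial condition y(0) = 1 */

  /* BDF + Newton: the stiff configuration described above */
  void *cvode_mem = CVodeCreate(CV_BDF, CV_NEWTON);
  CVodeInit(cvode_mem, f, t0, y);
  CVodeSStolerances(cvode_mem, 1.0e-6, 1.0e-8); /* rtol, atol */
  CVDense(cvode_mem, 1);                        /* dense Modified Newton solve */

  CVode(cvode_mem, tout, y, &t, CV_NORMAL);     /* advance to tout */
  printf("y(%g) = %g\n", (double) t, (double) NV_Ith_S(y, 0));

  CVodeFree(&cvode_mem);
  N_VDestroy_Serial(y);
  return 0;
}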
Description of CVODES
CVODES is a solver for stiff and nonstiff ODE systems (initial value problem) given in explicit form y' = f(t,y,p), with sensitivity analysis capabilities (both forward and adjoint modes).
CVODES is a superset of CVODE and hence all options available to CVODE (with the exception of the FCVODE interface module) are also available for CVODES. Both integration methods (Adams-Moulton and BDF) and the corresponding nonlinear iteration methods, as well as all linear solver and preconditioner modules are available for the integration of the original ODEs, the sensitivity systems, or the adjoint system.
Depending on the number of model parameters and the number of functional outputs, one of two sensitivity methods is more appropriate. The forward sensitivity analysis (FSA) method is mostly suitable when the gradients of many outputs (for example the entire solution vector) with respect to relatively few parameters are needed. In this approach, the model is differentiated with respect to each parameter in turn to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The gradient of any output function depending on the solution can then be directly obtained from these sensitivities by applying the chain rule of differentiation. The adjoint sensitivity analysis (ASA) method is more practical than the forward approach when the number of parameters is large and the gradients of only a few output functionals are needed. In this approach, the solution sensitivities need not be computed explicitly. Instead, for each output functional of interest, an additional system, adjoint to the original one, is formed and solved. The solution of the adjoint system can then be used to evaluate the gradient of the output functional with respect to any set of model parameters.
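In symbols, for y' = f(t,y,p) the two approaches lead to the following auxiliary systems (a standard formulation, stated here for orientation rather than quoted from the CVODES documentation), written for a functional g of the terminal state:

% Forward sensitivity: one additional ODE system per parameter p_i,
% with s_i = \partial y / \partial p_i
s_i' = \frac{\partial f}{\partial y}\, s_i + \frac{\partial f}{\partial p_i},
\qquad s_i(t_0) = \frac{\partial y_0}{\partial p_i}.

% Adjoint sensitivity: one backward-in-time system per output functional g(y(T),p)
\lambda' = -\left(\frac{\partial f}{\partial y}\right)^{\!T}\lambda,
\qquad \lambda(T) = \left(\frac{\partial g}{\partial y}\right)^{\!T}\bigg|_{t=T},

\frac{dg}{dp} = \frac{\partial g}{\partial p}
  + \lambda(t_0)^T\,\frac{\partial y_0}{\partial p}
  + \int_{t_0}^{T} \lambda^T\,\frac{\partial f}{\partial p}\, dt.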
The FSA module in CVODES implements a simultaneous corrector method as well as two flavors of staggered corrector methods, for the cases in which the sensitivity right-hand sides are generated all at once or separately for each model parameter. The ASA module provides the infrastructure required for the backward integration in time of systems of differential equations dependent on the solution of the original ODEs. It employs a checkpointing scheme for efficient interpolation of forward solutions during the backward integration.
Description of ARKode
ARKode is a solver library that provides adaptive-step time integration of the initial value problem for stiff, nonstiff, and multi-rate systems of ordinary differential equations (ODEs) given in linearly implicit form M y' = fE(t,y) + fI(t,y), where M is a given nonsingular matrix (possibly time dependent).
The right-hand side function is partitioned into two components -- fE(t,y), containing the "slow" time scale components to be integrated explicitly, and fI(t,y), containing the "fast" time scale components to be integrated implicitly.
The methods used in ARKode are adaptive-step additive Runge-Kutta methods, defined by combining two complementary Runge-Kutta methods -- one explicit (ERK) and the other diagonally implicit (DIRK). Only the components in fI(t,y) must be solved implicitly, allowing for splittings tuned for use with optimal implicit solvers.
ARKode is packaged with a wide array of built-in methods, including adaptive explicit methods of orders 2-6, adaptive implicit methods of orders 2-5, and adaptive implicit-explicit (IMEX) methods of orders 3-5.
The implicit nonlinear systems are solved approximately at each integration step, using a modified Newton method, an Inexact Newton method, or an accelerated fixed-point solver. For the Newton-based methods and the serial NVECTOR module in SUNDIALS, ARKode provides both direct (dense and band) and preconditioned Krylov iterative (GMRES, BiCGStab, TFQMR, FGMRES, PCG) linear solvers. When used with one of the parallel NVECTOR modules or a user-provided vector data structure, only the Krylov solvers are available, although a user may supply their own linear solver for any data structures if desired.
For use with Fortran applications, a set of Fortran/C interface routines, called FARKode, is also supplied. These are written in C, but assume that the user calling program and all user-supplied routines are in Fortran.
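As an illustration of the explicit/implicit splitting, here is a minimal sketch for the hypothetical split problem y' = cos(t) - y, with fE(t,y) = cos(t) treated explicitly and fI(t,y) = -y treated implicitly. It assumes the legacy (SUNDIALS 2.x-era) ARKode calling sequence, in which ARKodeInit receives the two right-hand side functions separately; the function names, tolerances, and example problem are assumptions for illustration only.

#include <math.h>
#include <stdio.h>
#include <arkode/arkode.h>             /* legacy ARKode header (2.x-era) */
#include <arkode/arkode_dense.h>       /* ARKDense direct solver for the implicit part */
#include <nvector/nvector_serial.h>

/* Explicit ("slow") part of the hypothetical problem: fE(t,y) = cos(t) */
static int fe(realtype t, N_Vector y, N_Vector ydot, void *user_data)
{
  NV_Ith_S(ydot, 0) = cos(t);
  return 0;
}

/* Implicit ("fast"/stiff) part: fI(t,y) = -y */
static int fi(realtype t, N_Vector y, N_Vector ydot, void *user_data)
{
  NV_Ith_S(ydot, 0) = -NV_Ith_S(y, 0);
  return 0;
}

int main(void)
{
  realtype t0 = 0.0, tout = 1.0, t;
  N_Vector y = N_VNew_Serial(1);
  NV_Ith_S(y, 0) = 1.0;                         /* initial condition */

  void *ark_mem = ARKodeCreate();
  ARKodeInit(ark_mem, fe, fi, t0, y);           /* additive (IMEX) splitting */
  ARKodeSStolerances(ark_mem, 1.0e-6, 1.0e-8);
  ARKDense(ark_mem, 1);                         /* Newton solve for the implicit stages */

  ARKode(ark_mem, tout, y, &t, ARK_NORMAL);
  printf("y(%g) = %g\n", (double) t, (double) NV_Ith_S(y, 0));

  ARKodeFree(&ark_mem);
  N_VDestroy_Serial(y);
  return 0;
}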
Description of IDA
IDA is a package for the solution of differential-algebraic equation (DAE) systems in the form F(t,y,y')=0. It is written in C, but derived from the package DASPK, which is written in Fortran.
The integration method in IDA is variable-order, variable-coefficient BDF, in fixed-leading-coefficient form, with the method order varying between 1 and 5. The solution of the resulting nonlinear system is accomplished with some form of Newton iteration. In the case of a direct linear solver (dense or banded), the nonlinear iteration is a Modified Newton iteration, in that the Jacobian is fixed (and usually out of date). When using any of the Krylov methods as the linear solver, the iteration is an Inexact Newton iteration, using the current Jacobian (through matrix-free products), in which the linear residual is nonzero but controlled.
With the serial version of NVECTOR, IDA provides direct (dense and band) solvers, a sparse direct solver (KLU), a multi-threaded sparse solver (SuperLUMT), and three preconditioned Krylov (iterative) solvers (GMRES, Bi-CGStab, and TFQMR). In the parallel version (IDA used with a parallel NVECTOR module) only the Krylov solvers are available. In addition to the basic Krylov method modules, the IDA package also contains a preconditioner module called IDABBDPRE, which provides a band-block-diagonal preconditioner for use with the parallel version.
For use with Fortran applications, a set of Fortran/C interface routines, called FIDA, is also supplied. These are written in C, but assume that the user calling program and all user-supplied routines are in Fortran.
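For orientation, the following sketch sets up IDA for a tiny hypothetical DAE consisting of one differential equation and one algebraic constraint, using the SUNDIALS 2.x-era calling sequence (IDACreate/IDAInit and the IDADense attachment routine); the problem, initial conditions, and tolerances are arbitrary choices for the example.

#include <stdio.h>
#include <ida/ida.h>                   /* IDA main header */
#include <ida/ida_dense.h>             /* IDADense direct solver (2.x-era) */
#include <nvector/nvector_serial.h>

/* Residual for a hypothetical example DAE:
     F1 = y1' + y1          (differential equation)
     F2 = y1 + y2 - 1       (algebraic constraint)                        */
static int res(realtype t, N_Vector y, N_Vector yp, N_Vector rr, void *user_data)
{
  NV_Ith_S(rr, 0) = NV_Ith_S(yp, 0) + NV_Ith_S(y, 0);
  NV_Ith_S(rr, 1) = NV_Ith_S(y, 0) + NV_Ith_S(y, 1) - 1.0;
  return 0;
}

int main(void)
{
  realtype t0 = 0.0, tout = 1.0, tret;
  N_Vector y  = N_VNew_Serial(2);
  N_Vector yp = N_VNew_Serial(2);
  NV_Ith_S(y, 0)  = 1.0;  NV_Ith_S(y, 1)  = 0.0;  /* consistent initial values */
  NV_Ith_S(yp, 0) = -1.0; NV_Ith_S(yp, 1) = 1.0;  /* and initial derivatives   */

  void *ida_mem = IDACreate();
  IDAInit(ida_mem, res, t0, y, yp);
  IDASStolerances(ida_mem, 1.0e-6, 1.0e-8);
  IDADense(ida_mem, 2);                           /* dense Modified Newton solve */

  IDASolve(ida_mem, tout, &tret, y, yp, IDA_NORMAL);
  printf("y1 = %g, y2 = %g at t = %g\n",
         (double) NV_Ith_S(y, 0), (double) NV_Ith_S(y, 1), (double) tret);

  IDAFree(&ida_mem);
  N_VDestroy_Serial(y);
  N_VDestroy_Serial(yp);
  return 0;
}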
Description of IDAS
IDAS is a package for the solution of differential-algebraic equation (DAE) systems in the form F(t,y,y',p)=0, with sensitivity analysis capabilities (both forward and adjoint modes).
IDAS is a superset of IDA and hence all options available to IDA (with the exception of the FIDA interface module) are also available for IDAS.
Depending on the number of model parameters and the number of functional outputs, one of two sensitivity methods is more appropriate. The forward sensitivity analysis (FSA) method is mostly suitable when the gradients of many outputs (for example the entire solution vector) with respect to relatively few parameters are needed. In this approach, the model is differentiated with respect to each parameter in turn to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The gradient of any output function depending on the solution can then be directly obtained from these sensitivities by applying the chain rule of differentiation. The adjoint sensitivity analysis (ASA) method is more practical than the forward approach when the number of parameters is large and the gradients of only a few output functionals are needed. In this approach, the solution sensitivities need not be computed explicitly. Instead, for each output functional of interest, an additional system, adjoint to the original one, is formed and solved. The solution of the adjoint system can then be used to evaluate the gradient of the output functional with respect to any set of model parameters.
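In symbols, differentiating the residual F(t,y,y',p) = 0 with respect to each parameter yields the forward sensitivity systems integrated alongside the original DAE (a standard formulation, stated here for orientation):

% Forward sensitivity system for the DAE F(t, y, y', p) = 0,
% one system per parameter p_i, with s_i = \partial y / \partial p_i:
\frac{\partial F}{\partial y}\, s_i
  + \frac{\partial F}{\partial y'}\, s_i'
  + \frac{\partial F}{\partial p_i} = 0,
\qquad
s_i(t_0) = \frac{\partial y_0}{\partial p_i}, \quad
s_i'(t_0) = \frac{\partial y_0'}{\partial p_i}.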
The FSA module in IDAS offers the choice between a simultaneous corrector method and a staggered corrector method. The ASA module provides the infrastructure required for the backward integration in time of systems of differential-algebraic equations dependent on the solution of the original DAEs. It employs a checkpointing scheme for efficient interpolation of forward solutions during the backward integration.
Description of KINSOL
KINSOL is a solver for nonlinear algebraic systems based on Newton-Krylov solver technology. It is a rewrite in C of the earlier Fortran package NKSOL of Brown and Saad.
KINSOL employs the Inexact Newton method. As this solver is intended mainly for large systems, four iterative methods are provided to solve the resulting linear systems -- GMRES, Bi-CGStab, TFQMR, and FGMRES. These are Krylov methods, implemented with scaling and preconditioning, and can be used with both serial and parallel versions of the NVECTOR module.
For the sake of convenience to users with smaller systems, KINSOL (used with the serial NVECTOR module) also includes direct (dense and band) linear solvers for the linear systems. In this case the nonlinear iteration is a Modified Newton method.
In addition, KINSOL (used with the serial NVECTOR module) also includes a sparse direct solver (KLU) and a multi-threaded sparse solver (SuperLUMT).
In addition to the basic Krylov method modules, the KINSOL package includes a module called KINBBDPRE, which provides a band-block-diagonal preconditioner for the parallel version.
For use with Fortran applications, a set of Fortran/C interface routines, called FKINSOL, is also supplied. These are written in C, but assume that the user calling program and all user-supplied routines are in Fortran.
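To make the calling sequence concrete, here is a minimal sketch that solves a small hypothetical nonlinear system with the dense Modified Newton option; it assumes the SUNDIALS 2.x-era C interface (KINCreate/KINInit and the KINDense attachment routine), and the system, initial guess, and scaling are arbitrary choices for the example.

#include <stdio.h>
#include <kinsol/kinsol.h>             /* KINSOL main header */
#include <kinsol/kinsol_dense.h>       /* KINDense direct solver (2.x-era) */
#include <nvector/nvector_serial.h>

/* Hypothetical nonlinear system:
     F1 = u1^2 + u2^2 - 1
     F2 = u1 - u2
   whose positive root is u1 = u2 = 1/sqrt(2).                             */
static int func(N_Vector u, N_Vector fval, void *user_data)
{
  realtype u1 = NV_Ith_S(u, 0), u2 = NV_Ith_S(u, 1);
  NV_Ith_S(fval, 0) = u1*u1 + u2*u2 - 1.0;
  NV_Ith_S(fval, 1) = u1 - u2;
  return 0;
}

int main(void)
{
  N_Vector u     = N_VNew_Serial(2);   /* initial guess, overwritten by solution */
  N_Vector scale = N_VNew_Serial(2);   /* scaling vectors (all ones here)        */
  NV_Ith_S(u, 0) = 1.0;  NV_Ith_S(u, 1) = 0.5;
  N_VConst(1.0, scale);

  void *kin_mem = KINCreate();
  KINInit(kin_mem, func, u);           /* u also serves as the template vector */
  KINDense(kin_mem, 2);                /* dense Modified Newton iteration      */

  KINSol(kin_mem, u, KIN_LINESEARCH, scale, scale);
  printf("u = (%g, %g)\n", (double) NV_Ith_S(u, 0), (double) NV_Ith_S(u, 1));

  KINFree(&kin_mem);
  N_VDestroy_Serial(u);
  N_VDestroy_Serial(scale);
  return 0;
}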
Description of sundialsTB
sundialsTB is a collection of Matlab functions which provide interfaces to the SUNDIALS solvers CVODES, IDAS, and KINSOL.
The core of each Matlab interface in sundialsTB is a single mex file which interfaces to the various user-callable functions for that solver. However, this mex file should not be called directly, but rather through the user-callable functions provided for each Matlab interface.