Solvers

Most finite element discretizations of elliptic or parabolic equations lead to large sparse linear systems \( Ax = b \) to be solved. Many algorithms and implementations are available; they fall into two distinct classes: direct and iterative solvers.

Direct Solvers

Algorithms based on Gaussian elimination (\( LU\) factorization) for general matrices or Cholesky’s method (\(LL^T\) factorization) for symmetric positive definite matrices compute the solution vector \( x \) exactly (up to rounding errors) in a finite number of operations. For sparse systems, their computational effort and in particular their memory demand depend on the amount of fill-in, i.e. the number of entries that are zero in \(A\) but become nonzero in \(L\) and need to be stored in addition. The fill-in depends on the sparsity pattern of \(A\), which in turn depends on the spatial dimension of the problem and on the ordering of degrees of freedom. The latter is addressed by sparse solvers, which employ heuristics such as approximate minimum degree or reverse Cuthill-McKee to reduce fill-in, since finding a fill-minimizing ordering is computationally infeasible.
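How strongly the ordering affects fill-in can be seen on a small arrowhead matrix: eliminating the dense row and column first couples all remaining unknowns, so the Cholesky factor fills in completely, whereas the reverse ordering produces no fill-in at all. A minimal illustrative sketch (using NumPy's dense Cholesky purely for counting; sparse solvers determine this symbolically):

```python
import numpy as np

def cholesky_nnz(A, tol=1e-12):
    """Number of structural nonzeros in the Cholesky factor L of A."""
    L = np.linalg.cholesky(A)
    return int(np.sum(np.abs(L) > tol))

n = 6
# Arrowhead matrix with dense FIRST row/column: eliminating node 0 first
# couples all remaining nodes, so the factor fills in completely.
bad = n * np.eye(n)
bad[0, 1:] = 1.0
bad[1:, 0] = 1.0

# Same matrix with the dense row/column ordered LAST: no fill-in.
good = n * np.eye(n)
good[-1, :-1] = 1.0
good[:-1, -1] = 1.0

print(cholesky_nnz(bad))   # n*(n+1)/2 = 21 nonzeros (fully dense factor)
print(cholesky_nnz(good))  # 2*n - 1   = 11 nonzeros (no fill-in)
```

Both matrices describe the same connectivity; only the numbering of degrees of freedom differs, which is exactly what ordering heuristics exploit.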

For 2D problems, direct solvers are often the methods of choice, with a run time and memory demand of about \(\mathcal{O}(n^{1.2})\) or less. Larger 3D problems are more densely connected, and hence more difficult, with a memory demand of about \(\mathcal{O}(n^{1.4})\) often being the limiting factor.

Kaskade 7 provides interfaces to the direct solver libraries UMFPACK, CHOLMOD, and MUMPS, and can easily be extended to further solvers.

Iterative Solvers

Iterative solvers create sequences \( (x_i)_{i=1,2,\dots} \) of approximate solutions that converge to the solution, usually at a linear rate. Most prominent are Krylov and Richardson methods. The former compute best approximations (w.r.t. some criterion such as the energy or residual norm) in the low-dimensional Krylov spaces \( \mathcal{K}_i = \mathrm{span}\{ A^j b \mid j=0,\dots,i-1 \} \), while the latter are essentially gradient descent methods for the associated quadratic variational functional.
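The conjugate gradient method, the standard Krylov solver for symmetric positive definite systems, fits in a few lines; its iterate \(x_i\) is the energy-norm-best approximation in \(\mathcal{K}_i\). A minimal NumPy sketch (illustrative only, not the Kaskade 7 implementation):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    """Conjugate gradient method: x_i minimizes the energy norm
    ||x - x*||_A over the Krylov space K_i = span{b, Ab, ..., A^(i-1) b}."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p  # keeps search directions A-orthogonal
        rr = rr_new
    return x

# 1D Laplacian (tridiagonal, SPD), as arising from a P1 discretization.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
```

Note that only matrix-vector products with \(A\) are required, so the sparsity of \(A\) is exploited automatically and no factorization is stored.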

Symmetric indefinite problems, arising e.g. from incompressible flow or incompressible solid mechanics as well as from optimal control and inverse problems, are often addressed using special block-structured methods such as the Uzawa iteration.
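For a saddle point system \(\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}\), the basic Uzawa iteration alternates an inner solve with \(A\) and a gradient step on the Schur complement for \(p\). An illustrative toy sketch (the damping \(\omega\) is chosen ad hoc for this small problem):

```python
import numpy as np

def uzawa(A, B, f, g, omega=0.5, maxit=500):
    """Basic Uzawa iteration for [[A, B^T], [B, 0]] [u; p] = [f; g]
    with A s.p.d. and B of full row rank. Converges for
    0 < omega < 2 / lambda_max(B A^{-1} B^T)."""
    p = np.zeros(B.shape[0])
    for _ in range(maxit):
        u = np.linalg.solve(A, f - B.T @ p)  # inner solve with A
        p = p + omega * (B @ u - g)          # gradient step on the Schur complement
    return u, p

# Toy saddle point problem.
A = np.diag([2.0, 3.0, 4.0])
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
f = np.array([1.0, 0.0, 2.0])
g = np.array([1.0, -1.0])
u, p = uzawa(A, B, f, g)
```

In practice the inner solve is itself performed approximately, e.g. by a preconditioned Krylov method, and \(\omega\) is chosen based on estimates of the Schur complement spectrum.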

The memory demand is in general much lower than for direct methods, since no fill-in is created, but convergence may be slow, depending in particular on the condition number \(\kappa(A)\), which in general grows quickly under mesh refinement. Thus, both Richardson and Krylov methods benefit from transforming the system to \(B^{-1}Ax = B^{-1}b\) using an (in general positive definite) preconditioner \(B\), a cheaply solvable approximation of \(A\) chosen such that \( \kappa(B^{-1}A)\) is relatively small.
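Even a trivial preconditioner can reduce \(\kappa\) dramatically when the system is badly scaled. An illustrative sketch with the Jacobi preconditioner \(B = \mathrm{diag}(A)\):

```python
import numpy as np

# Badly scaled SPD matrix: diagonal entries spanning five orders of
# magnitude, plus a weak symmetric off-diagonal coupling.
n = 6
D = np.diag(10.0 ** np.arange(n))          # 1, 10, ..., 100000
C = 0.1 * (np.ones((n, n)) - np.eye(n))
A = D + C

kappa_A = np.linalg.cond(A)                # large: roughly 1e5

# Jacobi preconditioner B = diag(A); B^{-1} applied explicitly here only
# for illustration -- in practice B^{-1} is never formed as a matrix.
Binv = np.diag(1.0 / np.diag(A))
kappa_BA = np.linalg.cond(Binv @ A)        # close to 1

print(f"kappa(A)       = {kappa_A:.1e}")
print(f"kappa(B^-1 A)  = {kappa_BA:.1e}")
```

For mesh-refinement-induced ill-conditioning, diagonal scaling is not enough; that is where the multilevel and domain decomposition preconditioners discussed below come in.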

On top of the Dune solvers, Kaskade 7 provides an energy-controlled conjugate gradient method and an Uzawa solver.

Preconditioners

A huge variety of preconditioners has been proposed and analyzed, ranging from the simple Jacobi (\(D\)) or Gauss-Seidel (\(D+L\)) preconditioners, based on the matrix splitting \(A=L+D+R\) into strictly lower triangular, diagonal, and strictly upper triangular parts, to sophisticated ones like multigrid or domain decomposition methods such as the subspace correction preconditioner \[ B^{-1} = \sum_k P_k (P_k^T A P_k)^{-1} P_k^T \] formulated in terms of prolongations \( P_k \) from subspaces to the full space. The definition and implementation of the prolongations determine convergence rate and computational effort. Multilevel preconditioners can achieve optimal complexity, i.e. a condition number \(\kappa(B^{-1}A)\) bounded independently of the mesh width, at a computational effort of \(\mathcal{O}(n)\).
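The subspace correction formula above can be instantiated directly, e.g. as a one-level overlapping additive Schwarz method on a 1D Laplacian, with \(P_k\) selecting overlapping index blocks. An illustrative dense sketch (real codes never form \(B^{-1}\) explicitly, and a practical Schwarz method would add a coarse space):

```python
import numpy as np

# 1D Laplacian (SPD), n unknowns.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Overlapping index blocks of size 8 with overlap 4; P_k is the 0/1
# prolongation from subspace k into the full space.
blocks = [range(0, 8), range(4, 12), range(8, 16), range(12, 20)]

Binv = np.zeros((n, n))
for idx in blocks:
    P = np.zeros((n, len(idx)))
    P[list(idx), list(range(len(idx)))] = 1.0
    # B^{-1} = sum_k P_k (P_k^T A P_k)^{-1} P_k^T
    Binv += P @ np.linalg.inv(P.T @ A @ P) @ P.T

# B^{-1} A is similar to an SPD matrix, so its eigenvalues are real
# and positive; their ratio is the relevant condition number.
ev = np.real(np.linalg.eigvals(Binv @ A))
kappa_A = np.linalg.cond(A)
kappa_BA = ev.max() / ev.min()   # substantially smaller than kappa_A
```

Each application of \(B^{-1}\) only requires small local solves, which is what makes such preconditioners cheap; the choice of subspaces (and of an additional coarse level) governs how far \(\kappa(B^{-1}A)\) is reduced.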

On top of the Dune preconditioners, Kaskade 7 provides efficient geometric multigrid methods for grid and ansatz order hierarchies, semi-geometric multigrid, overlapping additive Schwarz domain decomposition preconditioners for higher-order finite elements, and a non-overlapping BDDC preconditioner.


Page last modified: 2022-01-11 14:53:26 +0100 CET