This lecture is devoted to various methods of a posteriori error control developed in the 20th century, namely the hypercircle method, the variational approach of Mikhlin, the iteration estimates of Ostrowski, the explicit residual method, indicators based on post-processing of approximate solutions, and the dual weighted residual method. We discuss areas of applicability, drawbacks, and advantages, and refer to the corresponding literature.
This lecture is concerned with estimates that provide computable bounds of the difference between the exact solution of a PDE and any function from the corresponding (energy) space. The estimates must be computable, consistent, and possess the necessary continuity properties. In the context of PDE theory, deriving estimates of this type presents one of the fundamental problems, which (unlike, e.g., regularity theory) is focused on studying neighborhoods of exact solutions. Applied to numerical approximations, these estimates yield a unified approach to a posteriori error estimation and result in the so-called a posteriori estimates of the functional type. The latter estimates do not involve generic (interpolation) constants and provide guaranteed and computable bounds for any conforming approximation. They can also be used for the analysis of modelling errors and of errors caused by incomplete knowledge of the problem data. The derivation of such an estimate is demonstrated using the paradigm of the Stokes problem. Modelling errors related to dimension reduction are discussed using the paradigm of diffusion and elasticity problems.
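For orientation, a minimal example of a functional-type estimate (stated here for the simpler diffusion model problem, not the Stokes case treated in the lecture): if $-\Delta u = f$ in $\Omega$ and $u = 0$ on $\partial\Omega$, then for any conforming approximation $v \in H^1_0(\Omega)$, any flux reconstruction $y \in H(\mathrm{div},\Omega)$, and any $\beta > 0$,
$$
\|\nabla(u - v)\|^2 \;\le\; (1+\beta)\,\|\nabla v - y\|^2 \;+\; \Bigl(1+\tfrac{1}{\beta}\Bigr) C_\Omega^2\,\|\operatorname{div} y + f\|^2 ,
$$
where $C_\Omega$ is the Friedrichs constant of the domain. The right-hand side is fully computable and contains no mesh-dependent interpolation constants.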
This lecture continues the subject of Lecture 2 in the context of nonlinear problems in continuum mechanics. It focuses mainly on two problems: the variational inequality for the obstacle problem and Hencky plasticity. At the end, an overview of other results and the corresponding literature is given.
We start with a short motivation of preconditioning methods. Next we define multiplicative and additive two-level preconditioners obtained from approximate two-level block factorization of a general sparse symmetric positive definite matrix. These preconditioners give rise to exact two-level methods, which, however, are not feasible in practice. One way to avoid their shortcomings is to approximate the solution of the arising coarse-grid problem in the exact two-level method using polynomial acceleration techniques. This results in inexact two-level methods.
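Schematically (our notational sketch of this standard construction, not taken verbatim from the lectures), with respect to a fine/coarse splitting the matrix and a multiplicative two-level preconditioner read
$$
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
\qquad
M = \begin{pmatrix} \tilde A_{11} & 0 \\ A_{21} & \tilde S \end{pmatrix}
\begin{pmatrix} I & \tilde A_{11}^{-1} A_{12} \\ 0 & I \end{pmatrix},
$$
where $\tilde A_{11} \approx A_{11}$ and $\tilde S \approx S = A_{22} - A_{21} A_{11}^{-1} A_{12}$. Taking the exact blocks gives the exact two-level method; replacing the exact coarse-grid solve with $S$ by a polynomial approximation yields the inexact variants.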
In this lecture we derive the error propagation relation for inexact two-level methods of this type, comment on different choices of the acceleration (stabilization) polynomial, and discuss classical algebraic multilevel iteration (AMLI) methods as well as some of their variants. We show how to prove condition number estimates for the linear AMLI preconditioners.
In this second lecture we focus on the so-called variable-step preconditioners (nonlinear AMLI and K-cycle methods). We give their recursive definition and compare them to the linear AMLI methods in terms of computational complexity (of one outer iteration). Further, we comment on the advantages and the disadvantages of both classes of methods. Finally, we present a convergence analysis of the nonlinear AMLI methods, which gives an idea of their efficiency (as compared to the linear AMLI methods).
In the third lecture we consider different examples of finite element methods for elliptic boundary value problems in order to demonstrate the wide range of applicability of AMLI methods. We start by proving the robustness of the AMLI preconditioners for anisotropic scalar elliptic problems when the discretization is performed using linear conforming elements. Next we show that similar results also hold for (linear) nonconforming elements and even for symmetric interior penalty discontinuous Galerkin approximations. Finally, we comment on recent developments in the area of elliptic problems with high-frequency high-contrast coefficients. The theoretical results presented in this third lecture are illustrated by numerical experiments.
Matrices of large size arise in particular from elliptic partial differential equations and integral equations. In the former case one makes use of sparsity; in the latter case a standard treatment of the matrices already leads to storage problems. The technique of hierarchical matrices makes it possible to organise the storage as well as all matrix operations (including inversion) with almost linear complexity. The hierarchical matrix operations yield only approximations, but the arising error can be made at least as small as the discretisation error. The lecture explains the matrix representation and the organisation of the operations, and underlines the black-box character of the method. Furthermore, applications are described which are usually considered to be impossible for large-scale matrices: computation of functions of matrices and the solution of matrix equations (Lyapunov etc.).
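For illustration, a minimal sketch (ours, not tied to any particular H-matrix library) of the low-rank block format on which hierarchical matrices are built: an admissible block is stored in factored form, so both storage and matrix-vector cost drop from $mn$ to $k(m+n)$.

```python
import numpy as np

class LowRankBlock:
    """Admissible H-matrix block stored in factored form U @ V.T
    (rank k): k*(m+n) numbers instead of m*n."""
    def __init__(self, U, V):
        self.U, self.V = U, V           # shapes (m, k) and (n, k)

    def matvec(self, x):
        return self.U @ (self.V.T @ x)  # cost O(k*(m+n)), not O(m*n)

def compress(block, k):
    """Truncated SVD: best rank-k approximation of a dense block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return LowRankBlock(U[:, :k] * s[:k], Vt[:k].T)
```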
We introduce tensors and discuss where they appear in numerical applications (discretisations of PDEs, approximation of multivariate functions). Although the data size may far exceed any computer storage, there may be data-sparse approximations which are rather accurate. The key to the numerical treatment is an appropriate tensor representation. We show how operations with tensors can be performed.
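As a minimal illustration of such a data-sparse representation (our sketch; the lecture covers a broader family of formats), the canonical format stores a d-way tensor as a sum of r rank-one terms, needing d*n*r numbers instead of n**d:

```python
import numpy as np

# Canonical (CP) format: T[i1,...,id] = sum_j prod_k factors[k][i_k, j]
def cp_entry(factors, index):
    prod = np.ones(factors[0].shape[1])
    for k, i in enumerate(index):
        prod *= factors[k][i, :]        # pick row i of the k-th factor
    return prod.sum()

d, n, r = 10, 100, 5                    # full tensor: 100**10 entries
factors = [np.random.rand(n, r) for _ in range(d)]
print(cp_entry(factors, (0,) * d))      # stored size: d*n*r = 5000 numbers
```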
We analyze a general multigrid method with aggressive coarsening and polynomial smoothing. We use a special polynomial smoother that originates in the context of the smoothed aggregation method. Assuming the degree of the smoothing polynomial is, on each level $k$, at least $C\,h_{k+1}/h_k$, we prove a convergence result independent of $h_{k+1}/h_k$. The suggested smoother is cheaper than the overlapping Schwarz method that allows one to prove the same result. Moreover, unlike in the case of the overlapping Schwarz method, the analysis of our smoother is completely algebraic and independent of the geometry of the problem and of the prolongators (the geometry of the coarse spaces).
While a huge number of papers deal with robust multilevel methods and algorithms for linear FEM elliptic systems, the related higher-order FEM problems are much less studied. It is well known that the standard hierarchical basis two-level splittings deteriorate for strongly anisotropic quadratic FEM problems. The first robust multilevel preconditioners for biquadratic anisotropic FEM elliptic problems were developed only recently. We study the behavior of the constant in the strengthened CBS inequality for semi-coarsening mesh refinement, which is a quality measure for hierarchical two-level splittings of the considered biquadratic FEM stiffness matrices. Some new results for the case of balanced semi-coarsening are of particular interest.
The presented theoretical estimates are supported by numerically computed CBS constants for a rich set of parameters (coarsening factor and anisotropy ratio). Combining the proven uniform estimates with the theory of the Algebraic MultiLevel Iteration (AMLI) methods, we obtain optimal-order multilevel algorithms whose total computational cost is proportional to the size of the discrete problem, with a proportionality constant independent of the involved small parameters. The provided comparative analysis of the pure and balanced semi-coarsening algorithms addresses both computational complexity and parallel implementation issues.
One of the most important and frequently used preconditioning techniques for solving symmetric positive definite systems is based on computing approximate inverse factorizations. It is also a well-known fact that such factors can be computed column by column by an orthogonalization process applied to the unit basis vectors, provided that we use a non-standard inner product induced by the positive definite system matrix A. In this contribution we consider the classical Gram-Schmidt algorithm (CGS), the modified Gram-Schmidt algorithm (MGS), and also another variant of sequential orthogonalization, which is motivated originally by the AINV preconditioner and which uses oblique projections.
The orthogonality between the computed vectors is crucial for the quality of the preconditioner constructed by the approximate inverse factorization. While for the case of the standard inner product there exists a complete rounding error analysis for all main orthogonalization schemes, the numerical properties of the schemes with a non-standard inner product are much less understood. We will formulate results on the loss of orthogonality and on the factorization error for all previously mentioned orthogonalization schemes. This contribution is joint work with Jiří Kopal (Technical University Liberec), Miroslav Tůma and Alicja Smoktunowicz (Warsaw University of Technology).
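For concreteness, a minimal dense sketch (ours, not the authors' implementation) of the MGS variant: orthogonalizing the unit basis vectors in the A-inner product yields Z with Z^T A Z = I, hence the inverse factorization A^{-1} = Z Z^T; a practical AINV-type preconditioner would in addition drop small entries to keep Z sparse.

```python
import numpy as np

def mgs_ainv(A):
    """Modified Gram-Schmidt applied to the unit basis vectors in the
    A-inner product <x, y>_A = x.T @ A @ y.  Returns Z with
    Z.T @ A @ Z = I, so inv(A) = Z @ Z.T in exact arithmetic."""
    n = A.shape[0]
    Z = np.eye(n)
    for i in range(n):
        zi = Z[:, i]
        zi /= np.sqrt(zi @ A @ zi)             # A-normalize column i
        for j in range(i + 1, n):              # A-orthogonalize the rest
            Z[:, j] -= (zi @ A @ Z[:, j]) * zi
    return Z

A = np.array([[4.0, 1.0], [1.0, 3.0]])
Z = mgs_ainv(A)
print(np.allclose(Z @ Z.T, np.linalg.inv(A)))  # True
```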
We analyze variational inequalities related to problems in the theory of elasticity that involve unilateral boundary conditions and problems with friction. We focus on deriving upper bounds of the difference between the exact solutions of such variational inequalities and any function lying in the admissible functional class. These estimates are obtained by a modification of a duality technique used earlier for variational problems with uniformly convex functionals. Several numerical tests are presented to demonstrate the quality of our estimates.
Effective implementation of some efficient FETI methods requires the application of a direct method for solving a system of linear equations with a symmetric positive semidefinite matrix. The latter usually comprises a triangular decomposition of a nonsingular diagonal block of the subdomain stiffness matrix and an effective evaluation of the action of a generalized inverse of the corresponding Schur complement. The goal is to review our results, which address both problems.
We report the results of our research on the development of algorithms with asymptotically linear complexity for the solution of large multibody contact problems with Tresca friction. The algorithm can be used to implement the fixed point iterations for the solution of contact problems with Coulomb friction. The time of computation can be reduced nearly proportionally to the number of available processors. Our talk covers 2D and 3D problems discretized by the finite element or boundary element method, possibly with "floating" bodies. A characteristic feature of the problems considered in our talk is a strong nonlinearity due to the interface conditions. Since even the algorithms for the solution of linear problems have at least linear complexity, it follows that a scalable algorithm for contact problems has to treat the nonlinearity, in a sense, for free.
After introducing the relations that describe the equilibrium of a system of elastic bodies in mutual contact, we briefly review the TFETI/TBETI (total finite/boundary element tearing and interconnecting) based domain decomposition methodology adapted to the solution of contact problems with friction. Recall that TFETI differs from the classical FETI or FETI2, as introduced by Farhat and Roux, by imposing the prescribed displacements by means of Lagrange multipliers and treating all subdomains as "floating".
Then we present our, in a sense optimal, algorithms for the solution of the resulting QPQC (quadratic programming with quadratic constraints) problems. A unique feature of these algorithms is their capability to solve the class of such problems with homogeneous equality constraints and separable quadratic inequality constraints in O(1) matrix-vector multiplications, provided the spectrum of the Hessian of the cost function is contained in a given positive interval.
Finally, we put the above results together to develop scalable algorithms for the solution of contact problems with friction. We illustrate the results by numerical experiments, including academic problems with millions of nodal variables and the analysis of a mine support comprising seven bodies.
Poroelastic models have many applications, for instance geo-applications such as reservoir modelling, nuclear waste deposition, and CO2 sequestration. Under certain assumptions, they involve a time-dependent coupled system consisting of a Navier-Lamé equation for the displacements, Darcy's flow equation for the fluid velocity, and a divergence constraint equation.
First, we touch on the question of time discretization and discuss methods to handle the lack of regularity at initial times. Second, after time and space discretization, a block matrix system in saddle point form arises at each time step. Mixed space discretization methods and a regularization method to stabilize the system and avoid locking in the pressure variable are presented. Then four types of block matrix preconditioners are considered. It is shown that the eigenvalues of the preconditioned matrix are favourably clustered, but inner iterations are needed for certain matrix blocks. The strong clustering leads to very few outer iterations. Various approaches to construct outer and inner iteration preconditioners are presented and compared. The sensitivity of the number of outer iterations to the stopping accuracy of the inner iterations is illustrated numerically.
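Schematically (a simplified two-field sketch in our notation; the actual system couples displacements, fluid velocity, and pressure), the discrete problem at each time step and a typical block triangular preconditioner have the form
$$
\mathcal{A} = \begin{pmatrix} A & B^T \\ B & -C \end{pmatrix},
\qquad
\mathcal{P} = \begin{pmatrix} \tilde A & 0 \\ B & -\tilde S \end{pmatrix},
\qquad
\tilde S \approx C + B \tilde A^{-1} B^T ,
$$
where the regularization mentioned above contributes to the stabilizing block $C$; the inner iterations are the approximate solves with $\tilde A$ and $\tilde S$.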
A fully coupled model of transient heat and moisture transport in a masonry structure will be presented. The nonlinear diffusion model proposed by Künzel is used. Because of the strong material heterogeneity, first-order homogenization in the spatial domain is used. The huge computational demands lead to parallelization of the problem.
In this talk we present generalized hierarchical bases for symmetric positive definite matrices arising from the interior penalty discontinuous Galerkin discretization of second-order partial differential equations. We will show that the CBS constant related to the decomposition of the DG space into a coarse space and its hierarchical complement is uniformly bounded with respect to the anisotropy ratio. The proposed approach is rather general and is not limited to a particular choice of elements.
This contribution deals with properties of discrete 3D contact problems with orthotropic Coulomb friction and coefficients of friction which may depend on the solution itself. We show that at least one solution exists for any coefficient represented by a continuous, non-negative, and bounded function. Moreover, this solution is unique provided that additional mesh-dependent assumptions on the coefficients are satisfied. This is a joint contribution with V. Janovský and T. Ligurský. This research was supported by the grant 201/07/0294 of GAČR.
The main goal of the IT4Innovations project is to build the first research infrastructure in the field of supercomputing and information technologies in the Czech Republic. As a part of this infrastructure, the biggest supercomputer in the Czech Republic will be installed and a competence center in the field of HPC will be established. In the presentation, the IT4Innovations project structure will be introduced and the main goals of the research programmes will be briefly described. Furthermore, the process of acquisition of the supercomputing facilities together with their preliminary configuration will be presented. Finally, the access scheme to the computing resources for users from external research institutions will be introduced.
We present a monolithic approach for solving the fluid-structure interaction problem with general constitutive laws for the fluid and solid parts. It is based on the ALE formulation of the balance equations for the fluid and the solid in the time-dependent domain. The discretization is done by the finite element method. Our treatment of the problem as one system suggests using the same finite elements on both the solid part and the fluid region. The discretized system of nonlinear algebraic equations is solved by an approximate Newton method with a line-search strategy as the basic iteration and geometric multigrid as the linear solver. Since we know the sparsity pattern of the Jacobian matrix in advance, its approximate computation can be done efficiently using finite differences, so that the linear solver remains the dominant part in terms of CPU time.
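To illustrate the last point, a minimal sketch (ours, a greedy Curtis-Powell-Reid-style column grouping, not the authors' implementation) of how a known sparsity pattern cuts the number of residual evaluations in a finite-difference Jacobian:

```python
import numpy as np

def fd_jacobian(F, x, pattern, eps=1e-7):
    """Finite-difference Jacobian of F: R^n -> R^n exploiting a known
    sparsity pattern.  pattern[j] is the set of row indices where
    column j may be nonzero.  Columns with disjoint row sets are
    perturbed together, so #evaluations of F = #groups, not n."""
    n = len(x)
    groups = []                      # greedy grouping of columns
    for j in range(n):
        for g in groups:
            if all(pattern[j].isdisjoint(pattern[k]) for k in g):
                g.append(j)
                break
        else:
            groups.append([j])
    J = np.zeros((n, n))
    F0 = F(x)
    for g in groups:
        d = np.zeros(n)
        d[g] = eps                   # perturb all columns of the group
        dF = (F(x + d) - F0) / eps
        for j in g:                  # rows are disjoint, so each row of
            rows = list(pattern[j])  # dF belongs to exactly one column
            J[rows, j] = dF[rows]
    return J
```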
We consider the flow of incompressible fluids in domains with corners. The mathematical model is based on the Navier-Stokes equations. The solution exhibits singularities caused either by nonconvex internal corners of the domain or by abrupt changes of the boundary conditions. We present a numerical algorithm based on the finite element method that provides a solution accurate up to a prescribed tolerance. The reliability of the solution is checked by a posteriori error estimates.