[1] Sandia National Laboratories, Livermore, California [2] Dept. of Mechanical Engineering, California State University, Chico [3] Lawrence Livermore National Laboratory, Livermore, California [4] Sandia National Laboratories, Albuquerque, New Mexico

Abstract

One of the main classes of large sparse matrix problems is that of systems constructed using finite-element discretizations arising from computational simulations in physics and engineering. Unfortunately, many existing scalable solver packages do not explicitly recognize the generic data types (such as square element stiffness matrices) that exist within most implicit finite-element (FE) applications. Furthermore, there is currently no standard interface that provides a simple means for FE users to switch between solver packages. This research project is concerned with the design and implementation of an abstraction layer that mediates the passing of data and control parameters between a finite-element client application and a solution services library. The finite-element/solver interface that we have designed and implemented is intended to be as general and reusable as possible. It recognizes a wide variety of finite-element data types, such as stiffness matrices and loads (including elemental contributions arising from non-nodal solution parameters, such as those used to enforce incompressibility constraints in some computational fluid dynamics settings); an extensible variety of linear constraint relations; general linear boundary conditions applied to nodes; and diverse sets of elements used in displacement-based and mixed finite-element formulations. In addition to this variety of standard finite-element data types, the interface is designed to encapsulate each of these base data structures into larger collections of generic data, in order to gain computational economies of scale from cache-resident aggregations of finite-element data. The interface is designed to work in either a serial or a scalable parallel setting, with the latter currently implemented on distributed-memory supercomputers using the MPI message-passing standard. The interface is callable from C++, C, or Fortran finite-element client applications.
An implementation of the interface has been developed using ISIS++. Current research and development efforts include extensions to support multilevel schemes for solving systems of finite-element equations, as well as extensions for eigenvalue problems and for nonlinear solution services.