Team for Advanced Flow Simulation and Modeling



AHPCRC Bulletin: Summer 1997 - Volume 7 Number 3

3D Simulation of Fluid-Object Interactions at a New Level-1000 Spheres

Andrew Johnson and Tayfun Tezduyar (AHPCRC-UM)

Fluid-object interactions constitute one of the "Targeted Challenges" in the HPC research activities of our Team for Advanced Flow Simulation and Modeling (T*AFSM) at the AHPCRC. Our goal is to build a set of advanced methods for fast and accurate 3D simulation of this class of flow problems. This serves two purposes: a) the methods developed are opening new doors to science and technology involving fluid-particle interactions; b) our computational research on this class of extremely challenging simulations provides fertile ground for new HPC methods for flow simulations in general, including other classes of challenging flow problems.

The complexity of this class of problems stems from its two-way coupling: the motion of the objects is determined by their interaction with the surrounding fluid, and the behavior of the surrounding fluid depends on the motion and position of the objects. Furthermore, the objects, in addition to interacting with the surrounding fluid, interact with each other by colliding and forming groups. They may also collide with solid surfaces present at the external boundaries of the fluid-object system. In our computational model, the fluid dynamics is governed by the time-dependent, 3D Navier-Stokes equations, while the equations of motion govern the 3D translational and rotational dynamics of the objects. Fluid forces acting on the objects are calculated from the computed flow field. These equations need to be solved in a coupled fashion. This class of problems is a subset of a larger class of flow problems with moving boundaries and interfaces. In that larger class, as far as the fluid dynamics is concerned, the spatial domain changes in time, and a formulation that can handle this change accurately is essential.
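The coupling described above can be illustrated with a minimal sketch of the object side of the loop: fluid forces computed from the flow field drive the translational equations of motion for a sphere. All names, the explicit (semi-implicit Euler) update, and the numbers are illustrative assumptions; the actual simulations solve the fluid and object equations together in a coupled fashion.

```python
import numpy as np

def advance_sphere(pos, vel, fluid_force, mass, gravity, dt):
    """One semi-implicit Euler step of the translational equations of
    motion for a rigid sphere, driven by the fluid force plus gravity.
    (Illustrative only; rotational dynamics are omitted here.)"""
    accel = fluid_force / mass + gravity
    vel_new = vel + dt * accel          # update velocity first
    pos_new = pos + dt * vel_new        # then position with the new velocity
    return pos_new, vel_new

# Hypothetical run: zero fluid force, so the sphere is in free fall.
pos, vel = np.zeros(3), np.zeros(3)
gravity = np.array([0.0, 0.0, -9.81])
for _ in range(10):
    fluid_force = np.zeros(3)           # placeholder for forces from the flow field
    pos, vel = advance_sphere(pos, vel, fluid_force,
                              mass=1.0, gravity=gravity, dt=0.01)
```

In the real problem, `fluid_force` would come from integrating the stresses of the computed flow field over each sphere's surface at every time step.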

Fluid-object interactions become more challenging as the number of objects or the Reynolds number increases, because both increases demand a larger number of mesh points for accurate computations. Beyond some point, the problem size becomes too large to carry out the simulation in a reasonable amount of time on single-processor or shared-memory computing platforms. This adds the challenge of implementing the entire method on a distributed-memory computing platform. This includes handling the changes in the spatial domain, as large numbers of objects move around, by updating the mesh efficiently. We are now at a level where we can carry out the simulation of 1000 spheres falling in a liquid-filled tube at Reynolds number 10.

Prior to this simulation with 1000 spheres, in September 1994, we completed four cases of 3D simulations with the number of spheres ranging from two to five. The Reynolds number for a single sphere was 100. This work was reported in [1]. The objective was to compare the computed results to those observed experimentally. In June 1996, we completed the simulation of two cases of 101 spheres: with the size of the spheres random in one case and uniform in the second. The Reynolds number for a single sphere was 100. The methods, which were more advanced than those used for the earlier simulations involving an order of magnitude fewer spheres, and the results were reported in [2]. The mesh sizes during those simulations reached 1.2 million tetrahedral elements, resulting in about 2.6 million coupled, nonlinear equations to be solved at every time step.

Figure 1. Distribution of the spheres at four instants during the simulation. The first picture shows the initial distribution. The colors are for identifying and tracking the individual spheres.
The core method for our simulation environment is the Deformable-Spatial-Domain/Stabilized Space-Time formulation [3], developed earlier by the T*AFSM for flow problems with moving boundaries and interfaces. The methods layered around it include: an efficient distributed-memory implementation of the formulation for unstructured grids; fast automatic mesh generation with structured layers of elements around the objects; a mesh update method based on automatic mesh moving, with remeshing only as needed; an efficient method for projecting the solution after each remesh; automatic surface mesh refinement to increase accuracy when two objects get close to each other or an object gets close to the wall; a multi-platform computing and simulation-control environment; and visualization and animation tools. In these simulations, mesh partitioning, flow computations, and mesh movement are performed on a 512-node Thinking Machines CM-5, while automatic mesh generation and projection of the solution are accomplished on a 2-processor (MIPS R10K) SGI ONYX2.
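The mesh update strategy above can be sketched in miniature: first try to accommodate the object motion by moving the nodes of the existing mesh, and fall back to full remeshing (which would be followed by projection of the solution onto the new mesh) only when element quality degrades past a threshold. This toy 1D version, with a made-up quality measure and threshold, is only an illustration of the decision logic, not the actual 3D algorithm.

```python
def min_element_quality(nodes):
    """Toy quality measure for a 1D mesh: smallest element length
    divided by the largest (1.0 means perfectly uniform)."""
    lengths = [b - a for a, b in zip(nodes, nodes[1:])]
    return min(lengths) / max(lengths)

def update_mesh(nodes, shift, quality_floor=0.25):
    """Move the first interior node by `shift` (standing in for
    object-driven mesh motion). If element quality stays above the
    floor, mesh moving suffices; otherwise regenerate a uniform mesh
    (standing in for automatic remeshing, after which the solution
    would be projected onto the new mesh)."""
    moved = [nodes[0], nodes[1] + shift] + list(nodes[2:])
    if min_element_quality(moved) >= quality_floor:
        return moved, False               # mesh moving was enough
    n = len(nodes)
    span = nodes[-1] - nodes[0]
    remeshed = [nodes[0] + span * i / (n - 1) for i in range(n)]
    return remeshed, True                 # remesh triggered

mesh = [0.0, 1.0, 2.0, 3.0, 4.0]
m1, remeshed1 = update_mesh(mesh, 0.3)    # small motion: no remesh
m2, remeshed2 = update_mesh(mesh, 0.9)    # large motion: remesh
```

Avoiding unnecessary remeshes matters because, as noted below, each remesh and solution projection carries a significant cost relative to the flow solve itself.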

In this simulation, we start with 1000 uniform-sized spheres dispersed randomly in a section of the tube (see Figure 1, first picture). The Reynolds number for a single sphere falling at terminal speed is 10. The mesh has approximately 2.5 million tetrahedral elements. At every time step, a nonlinear equation system with approximately 5.5 million equations needs to be solved simultaneously. At this problem size, the flow solver can compute 30 time steps in about 7 hours on the 512-node CM-5, generate a new mesh in about 4.5 minutes on a single MIPS R10K processor, and project the solution in about 18 minutes on two MIPS R10K processors.
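The sphere Reynolds number quoted throughout the article is the standard dimensionless group Re = rho * U * d / mu, built from the fluid density, the fall speed, the sphere diameter, and the fluid viscosity. A minimal sketch, with hypothetical property values chosen only so that the result matches the single-sphere value of 10 given in the text:

```python
def sphere_reynolds(density, speed, diameter, viscosity):
    """Sphere Reynolds number Re = rho * U * d / mu."""
    return density * speed * diameter / viscosity

# Hypothetical fluid properties and terminal fall speed (SI units);
# these particular numbers are illustrative, not from the article.
re = sphere_reynolds(density=1000.0, speed=0.01,
                     diameter=0.01, viscosity=1.0e-2)
```

At terminal speed the fluid force on a sphere balances its net weight, so the terminal velocity, and with it the Reynolds number, is an outcome of the coupled simulation rather than an input.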

We computed a total of 800 time steps with 56 remeshes. The average Reynolds number at terminal velocity is around 8.1. The spheres at four instants during the simulation are shown in Figure 1. This fluid-object system exhibits behavior patterns not seen in our earlier simulations with 101 spheres. For example, the spheres near the center of the tube fall with higher speeds compared to those near the sides, as can be seen in Figure 2. This behavior is also demonstrated in Figure 3. Related to this behavior, the spheres near the center exit the group at the bottom and then spread out to the sides to fill up the open spaces.

Figure 2 (Far Left). As shown here, the spheres near the center of the tube fall with higher speeds compared to those near the sides. The colors are for denoting the magnitude of the velocity for each sphere.

Figure 3 (Left and Above). Distribution of the spheres at an instant during the simulation and the iso-surface corresponding to the 0.5 value of the vertical component of the flow velocity.



  1. A. Johnson and T. Tezduyar, "Simulation of Multiple Spheres Falling in a Liquid-Filled Tube," Comp. Meth. in App. Mech. and Eng., 134 (1996) 351-373.
  2. A. Johnson and T. Tezduyar, "3D Simulation of Fluid-Particle Interactions with the Number of Particles Reaching 100," Comp. Meth. in App. Mech. and Eng., 145 (1997) 301-321.
  3. T. Tezduyar, M. Behr, and J. Liou, "A New Strategy for Finite Element Computations Involving Moving Boundaries and Interfaces-The DSD/ST Procedure: I," Comp. Meth. in App. Mech. and Eng., 94 (1992) 339-351.