Team for Advanced Flow Simulation and Modeling 
AHPCRC Bulletin: Summer 1997, Volume 7 Number 3

3D Simulation of Fluid-Object Interactions at a New Level: 1000 Spheres

Andrew Johnson and Tayfun Tezduyar (AHPCRC-UM)

Fluid-object interaction is one of the "Targeted Challenges" in the HPC research activities of our Team for Advanced Flow Simulation and Modeling (T*AFSM) at the AHPCRC. Our goal is to build a set of advanced methods for fast and accurate 3D simulation of this class of flow problems. This serves two purposes: (a) the methods developed are opening new doors to science and technology involving fluid-particle interactions; (b) our computational research on this class of extremely challenging simulations is a cultivating ground for new HPC methods for flow simulations in general, including other classes of challenging flow problems.

The complexity of this class of problems comes from the two-way coupling: the motion of the objects is determined by their interaction with the surrounding fluid, and the behavior of the surrounding fluid depends on the motion and position of the objects. Furthermore, the objects, in addition to interacting with the surrounding fluid, interact with each other by colliding and forming groups. They may also collide with solid surfaces that might be present at the external boundaries of the fluid-object system. In our computational model, the fluid dynamics is governed by the time-dependent, 3D Navier-Stokes equations. The equations of motion govern the 3D translational and rotational dynamics of the objects. Fluid forces acting on the objects are calculated from the computed flow field. These equations need to be solved in a coupled fashion. This class of problems is a subset of a larger class of flow problems with moving boundaries and interfaces. In this larger class of problems, as far as the fluid dynamics is concerned, the spatial domain changes in time, and a formulation that can handle this change accurately is essential.
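The translational and rotational equations of motion for a single sphere can be sketched as an explicit Newton-Euler update driven by the fluid force and torque. The function names, the explicit time integrator, and the quaternion orientation representation below are illustrative assumptions, not the scheme actually used in the coupled solver:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def advance_sphere(m, I, x, v, q, omega, f_fluid, t_fluid, g, dt):
    """One explicit Newton-Euler step for a rigid sphere (illustrative).
    m: mass; I: scalar moment of inertia (a sphere's inertia tensor is
    isotropic); x, v: position and velocity; q: orientation quaternion;
    omega: angular velocity; f_fluid, t_fluid: force and torque computed
    from the flow field; g: gravitational acceleration; dt: time step."""
    # Translational dynamics: m dv/dt = f_fluid + m g
    v = tuple(vi + dt * (fi / m + gi) for vi, fi, gi in zip(v, f_fluid, g))
    x = tuple(xi + dt * vi for xi, vi in zip(x, v))
    # Rotational dynamics: I domega/dt = t_fluid
    omega = tuple(wi + dt * ti / I for wi, ti in zip(omega, t_fluid))
    # Orientation update: dq/dt = (1/2) (0, omega) * q, then renormalize
    dq = quat_mul((0.0,) + omega, q)
    q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(qi * qi for qi in q))
    return x, v, tuple(qi / n for qi in q), omega
```

In the coupled problem, `f_fluid` and `t_fluid` come from integrating the computed stresses over each sphere's surface, and the flow field in turn sees the updated sphere positions, which is why the fluid and object equations must be solved together.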
Fluid-object interactions become more challenging as the number of objects or the Reynolds number increases, because both increases demand a larger number of mesh points for accurate computations. Beyond some point, the problem size becomes too large to carry out the simulation in a reasonable amount of time on single-processor or shared-memory computing platforms. This adds the challenge of implementing the entire method on a distributed-memory computing platform, including handling the changes in the spatial domain, as a large number of objects move around, by updating the mesh efficiently. We are now at a level where we can carry out the simulation of 1000 spheres falling in a liquid-filled tube at Reynolds number 10. Prior to this simulation with 1000 spheres, in September 1994, we completed four cases of 3D simulations with the number of spheres ranging from two to five. The Reynolds number for a single sphere was 100. This work was reported in [1]. The objective was to compare the computed results to those observed experimentally. In June 1996, we completed simulations of two cases with 101 spheres: with the size of the spheres random in one case and uniform in the other. The Reynolds number for a single sphere was 100. The methods, which were more advanced than those used for the earlier simulations involving an order of magnitude fewer spheres, and the results were reported in [2]. The mesh sizes during those simulations reached 1.2 million tetrahedral elements, resulting in about 2.6 million coupled, nonlinear equations to be solved at every time step.
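The mesh-update strategy described above (the mesh deforms with the moving spheres, and a new mesh is generated, with the solution projected onto it, only when element distortion becomes too severe) can be sketched as a simple time-stepping skeleton. The scalar quality measure and all numbers below are illustrative assumptions, not values from the actual solver:

```python
def run_with_remeshing(n_steps, distortion_per_step=0.1, quality_floor=0.25):
    """Time-stepping skeleton for a deforming-mesh simulation (illustrative).
    Each step the mesh moves with the spheres and a purely illustrative
    scalar element-quality measure degrades; when it drops below the floor,
    a new mesh is generated and the old solution is projected onto it."""
    quality = 1.0
    remeshes = 0
    for step in range(n_steps):
        # solve the coupled flow + object equations on the current mesh (omitted)
        # move the mesh with the objects; element quality degrades
        quality -= distortion_per_step
        if quality < quality_floor:
            # regenerate the mesh and project the solution onto the new mesh
            quality = 1.0
            remeshes += 1
    return remeshes
```

The point of this pattern is that remeshing and projection are far more expensive than a mesh-moving step, so the simulation regenerates the mesh only as often as the accumulated deformation forces it to.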
In this simulation, we start with the 1000 uniform-sized spheres dispersed randomly in a section of the tube (see Figure 1, first picture). The Reynolds number for a single sphere falling at terminal speed is 10. The mesh has approximately 2.5 million tetrahedral elements. At every time step, a nonlinear equation system with approximately 5.5 million equations needs to be solved simultaneously. At this problem size, the flow solver can compute 30 time steps in about 7 hours on the 512-node CM-5, generate a new mesh in about 4.5 minutes on a single MIPS R10K processor, and project the solution onto the new mesh in about 18 minutes on two MIPS R10K processors. We computed a total of 800 time steps with 56 remeshes. The average Reynolds number at terminal velocity is around 8.1. The spheres at four instants during the simulation are shown in Figure 1. This fluid-object system exhibits behavior patterns not seen in our earlier simulations with 101 spheres. One example, visible in Figure 2, is that the spheres near the center of the tube fall with higher speeds compared to those near the sides; this behavior is also demonstrated in Figure 3. Related to this behavior, the spheres near the center exit the group at the bottom and then spread out to the sides to fill up the open spaces.
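The timings quoted above imply a total wall-clock cost that can be checked with simple arithmetic; the breakdown below uses only the numbers stated in the text:

```python
# Flow solver: 800 steps at roughly 7 hours per 30 steps on the 512-node CM-5
flow_hours = 800 / 30 * 7          # about 186.7 hours

# Mesh generation: 56 remeshes at about 4.5 minutes each (one MIPS R10K)
mesh_hours = 56 * 4.5 / 60         # 4.2 hours

# Solution projection: 56 remeshes at about 18 minutes each (two MIPS R10Ks)
projection_hours = 56 * 18 / 60    # 16.8 hours

total_hours = flow_hours + mesh_hours + projection_hours
print(round(total_hours, 1), "hours, or about", round(total_hours / 24, 1), "days")
```

The flow solve dominates: remeshing and projection together account for roughly a tenth of the total, which is why regenerating the mesh only 56 times over 800 steps keeps the simulation tractable.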
References
