Team for Advanced Flow Simulation and Modeling


AHPCRC Bulletin: Fall 1996 - Volume 6 Number 4


Gary Hansen (AHPCRC-MSCI)

At the Supercomputing '96 Conference, the Army High Performance Computing Research Center again demonstrated, via many numerical simulations, the importance of high performance computing (HPC) in the solution of critical defense problems. Supercomputing '96 was held November 17-22 in Pittsburgh.

In keeping with the theme of the importance of HPC in solving critical defense problems, graphic still images depicting results from key Army and AHPCRC research projects were displayed on the walls of the exhibit. These included Numerical Simulation of Paratrooper-Aircraft Interaction (Natick RDEC), Theater Missile Defense Interceptor (AHPCRC), Parallel Finite Element Simulation Flare Maneuver of a Large Ram Air Parachute (Natick RDEC), Chemical Transport Through Porous Media (Corps of Engineers Waterways Experiment Station [CEWES]), METIS: Unstructured Graph Partitioning Software (AHPCRC), Sloshing in a Tanker Truck (AHPCRC), Simulation of Flows in Waterways (CEWES), 3D Parachute Flow Field Computation (Natick RDEC), and Convective Mixing During Growth of Single Crystal KTP from Solution (AHPCRC).

AHPCRC Supercomputing '96 Research Exhibit

One of the highlights for the AHPCRC at Supercomputing '96 was the Center's participation in the "High Performance Heterogeneous Computing Challenge," an annual contest in which participants demonstrate their ability to run HPC applications whose solution is spread across multiple HPC platforms. The Center's entry, the simulation of fluid flow surrounding a deformable parachute, demonstrated the feasibility of using large-scale heterogeneous parallel computations to solve timely fluid-structure interaction problems. As the AHPCRC team demonstrated, the benefit of such capacity-driven computations lies largely in the diverse analysis tools that this form of parallelism makes available.

To perform this simulation, the computational work was divided among four HPC systems, using three parallel programming models, working simultaneously on various parts of the problem. The four HPC systems (each with a different architecture) and the respective problem parts they addressed were the Thinking Machines CM-5, providing the fluid dynamics flow solver; the CRAY Y-MP, providing problem synchronization and I/O; the CRAY T3D, providing the finite element mesh update and movement solver; and the Silicon Graphics Onyx, providing the rendering, visualization, performance monitoring, and job control. The parallel programming models utilized included MPI (on the T3D) and PVM (T3D to Y-MP and Y-MP to Onyx) message passing; data-parallel (on the CM-5); and remote procedure calls (CM-5 to Y-MP). The networking used to connect the various platforms included HiPPI, ATM, and the direct I/O channel between the T3D and Y-MP.
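The original PVM, MPI, and RPC code is not reproduced in the Bulletin. As a rough, purely illustrative sketch of the division of labor just described (all names and numbers hypothetical), Python threads and queues can stand in for the machines and network links: a coordinator in the Y-MP role drives the cycle and routes messages between a flow-solver task (CM-5 role) and a mesh-update task (T3D role), forwarding each result for rendering (Onyx role).

```python
# Hypothetical message-passing sketch of the heterogeneous setup;
# threads and queues stand in for the four machines and the networks.
import queue
import threading

def flow_solver(inbox, outbox):
    # CM-5 role: compute a toy "flow field" for each mesh received.
    while (mesh := inbox.get()) is not None:
        outbox.put({"flow": [x * 2.0 for x in mesh]})

def mesh_solver(inbox, outbox):
    # T3D role: update the toy mesh for each flow field received.
    while (msg := inbox.get()) is not None:
        outbox.put([f + 0.1 for f in msg["flow"]])

def coordinator(n_steps, render_q):
    # Y-MP role: synchronize the tasks and route messages between them.
    to_flow, from_flow = queue.Queue(), queue.Queue()
    to_mesh, from_mesh = queue.Queue(), queue.Queue()
    workers = [
        threading.Thread(target=flow_solver, args=(to_flow, from_flow)),
        threading.Thread(target=mesh_solver, args=(to_mesh, from_mesh)),
    ]
    for w in workers:
        w.start()
    mesh = [0.0, 1.0, 2.0]
    for _ in range(n_steps):
        to_flow.put(mesh)             # send the current mesh to the flow solver
        flow = from_flow.get()        # receive the computed flow field
        to_mesh.put(flow)             # pass it to the mesh-update solver
        mesh = from_mesh.get()        # receive the moved mesh
        render_q.put((mesh, flow))    # forward a frame for rendering (Onyx role)
    to_flow.put(None)                 # shut the workers down
    to_mesh.put(None)
    for w in workers:
        w.join()
    return mesh

frames = queue.Queue()
final_mesh = coordinator(n_steps=3, render_q=frames)
```

The point of the sketch is only the communication pattern: each component runs independently and blocks on its inbox, so the coordinator's puts and gets play the role that PVM messages, RPC calls, and the T3D-to-Y-MP channel played in the actual demonstration.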

Heterogeneous computing diagram for the AHPCRC Challenge entry at
Supercomputing '96. The red lines indicate communications, both
between and within machines. The blue lines indicate computation.

This computational fluid-structure interaction problem was broken into four distinct steps, each working independently for short periods of time before communicating needed data to other segments of the simulation. Each of these four parts was modeled on a different hardware architecture, with the results of each passed between machines via high-speed networks.
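The four-step cycle described above is an example of loosely coupled, staggered coupling. As a minimal sketch (not the Center's solvers; all models and parameters are invented for illustration), a 1-D spring-mass "interface" in a toy flow shows the pattern: a flow solve produces a load, the load is transferred to the structure, the structure advances, and the mesh is moved to follow it.

```python
# Illustrative staggered fluid-structure coupling loop (toy 1-D model).

def fluid_load(free_stream, interface_vel, drag_coeff=0.8):
    # Toy "flow solve": aerodynamic load proportional to relative velocity.
    return drag_coeff * (free_stream - interface_vel)

def structure_step(pos, vel, load, stiffness=4.0, mass=1.0, dt=0.01):
    # Toy "structural solve": spring-mass interface driven by the fluid
    # load, advanced with a symplectic Euler update.
    acc = (load - stiffness * pos) / mass
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

def mesh_update(n_nodes, interface_pos):
    # Toy "mesh movement": redistribute nodes linearly between a fixed
    # wall at 0 and the moved interface.
    return [interface_pos * i / (n_nodes - 1) for i in range(n_nodes)]

def staggered_fsi(steps=2000, dt=0.01, n_nodes=11):
    pos, vel = 0.0, 0.0           # structural state at the interface
    nodes = [0.0] * n_nodes       # 1-D "mesh"
    for _ in range(steps):
        load = fluid_load(1.0, vel)                       # step 1: flow solve
        pos, vel = structure_step(pos, vel, load, dt=dt)  # steps 2-3: transfer + structure
        nodes = mesh_update(n_nodes, pos)                 # step 4: mesh movement
    return pos, nodes

interface_pos, mesh = staggered_fsi()
# The interface settles near the static balance drag_coeff / stiffness = 0.2.
```

Each stage here uses only the other stages' most recent output, which is what lets the real computation spread the stages across separate machines that synchronize only at the data exchanges.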

Many AHPCRC personnel were involved in the various aspects of preparation for and conducting of the Challenge demo, including Shahrouz Aliabadi, Marek Behr, Andrew Johnson, Vinay Kalro, and Tayfun Tezduyar, of the AHPCRC-UM, and Barry Bolding, Barbara Bryan, Wes Barris, and Paul Ewing of the AHPCRC-MSCI.

In addition to the Challenge application, several other AHPCRC research projects were demonstrated continuously via visualization on Onyx workstations at the AHPCRC exhibit. Among these was a simulation of the flow through a spillway of the Olmsted Dam on the Ohio River. The simulation demonstrated the applicability of finite element formulations developed at the Center to solve problems of interest to the CEWES. Using this numerical approach, new designs for dams and waterways can be tested efficiently, and modifications to existing structures can be evaluated.

Another demonstration depicted results of an AHPCRC research project aimed at developing methods for modeling free-surface flows using fixed unstructured finite element meshes. The example simulations included both the sloshing of fluid in a tanker truck subjected to external accelerations and the casting of an automotive brake part. Comparing such fixed-mesh methods with the moving-grid approach, such as the one employed in the simulation of flow over hydraulic structures, is one of the objectives of this research project.

In addition to the demonstration of AHPCRC research project results on Onyx workstations, a videotape of these and other AHPCRC research results was shown continuously in the Exhibit. The demonstrations were chosen to be representative of state-of-the-art HPC techniques and equipment.

Projects from the 1996 AHPCRC Summer Institute for Undergraduates were also featured in the form of graphic still images. They included Free-Surface Flow Past Bridge Supports, Mesh Refinement, Optimization of Dissolved Contaminant Capture and Treatment, Air Flow Visualization, Flow Around High Speed Trains, and Contaminant Dispersion in a City. Terrance Course of Jackson State University, a 1996 Summer Institute participant, attended Supercomputing '96 and participated in the research exhibit activities, hosting visitors and providing information on his experiences and activities in high performance computing.

In addition to the demonstration activities at the AHPCRC research exhibit, some of these research projects were also demonstrated by Center researchers on an SGI workstation at the DoD HPC Modernization Program (HPCMP) research exhibit. Space at that exhibit was time-shared among all HPCMP participants.