
I recall a recent conversation with a person working for a company performing finite element analysis on a large steam turbine for a military application.

A simulation of this assembly was performed in CFX, using more than 30 million nodes, each requiring calculations in X, Y, and Z, provided this is a structural or fluid-dynamics simulation. Fluent can also be used, but with some versions of the code, CFX can handle rotating machinery better.

Without having more information, a calculation run of this level of complexity may have been well justified. However, further conversation revealed that much of the simulation was a matter of common sense, which told me that the level of complexity could have been greatly reduced. This was a military application, and such design and analysis efforts are not limited by funding in the same manner as they would be in private enterprise.

To get a handle on what is required for executing a simulation with 30 million nodes, several computers and possibly as many as 30 microprocessors are necessary, along with heavy use of parallel processing and massive amounts of random-access memory, likely 100 GB or more.

Hard drives are mechanical and take significantly more time to access data than RAM. Accessing any particular piece of data from RAM takes about 10 nanoseconds (0.00000001 seconds), roughly 1.3 million times faster than a mechanical drive.
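
The speedup quoted above is easy to check with back-of-envelope arithmetic. The sketch below uses the 10-nanosecond RAM figure from the text; the 13-millisecond mechanical-drive figure is an assumed average seek-plus-latency time, chosen to be consistent with the 1.3-million-times ratio.

```python
# Back-of-envelope comparison of RAM vs. mechanical-drive access times.
ram_access_s = 1e-8    # 10 nanoseconds per access, as cited above
hdd_access_s = 13e-3   # assumed ~13 ms average seek + rotational latency

speedup = hdd_access_s / ram_access_s
print(f"RAM is roughly {speedup:,.0f}x faster per access")  # → 1,300,000x
```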

Scratch files

The situation to be avoided is running out of RAM, because the program then needs to store information on the hard drive and repeatedly go back to the hard drive to retrieve it. Telling whether your program has run out of RAM is quite easy, as ANSYS generates what are known as scratch files.

Solid-state drives can be used to supplement RAM. Their read-write speeds are approximately five times faster than mechanical drives. I designed my analysis computer with a solid-state drive and a large amount of RAM, both of which are expandable as the need arises.

Back in the “old days”, solid-state drive technology did not exist, and computers rarely had more than 8 GB of RAM. Long run times were the rule rather than the exception, even for moderately sized executions (size being in reference to the number of elements and/or nodes).

Runs of several days, even weeks, were not uncommon. In either case, any error that would force the analyst to discard the results after waiting all that time is going to be very difficult to tolerate, not to mention very wasteful.

Simplifying the Model Using Symmetry

In the case of a symmetrical model, use of the symmetry feature can greatly reduce the model volume and thus the number of elements and nodes. Axisymmetric models can oftentimes be reduced to 1/4 or 1/8 of the model’s full volume, or even smaller divisions.
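
To put numbers on that savings, here is a simple sketch using the 30-million-node turbine example from earlier. The symmetry fractions are illustrative; memory use scales at least proportionally with node count, and run time usually falls even faster.

```python
# Hypothetical illustration: node counts after applying symmetry.
full_model_nodes = 30_000_000  # node count from the turbine example above

for fraction in (4, 8):
    reduced = full_model_nodes // fraction
    print(f"1/{fraction} symmetry model: ~{reduced:,} nodes")
# → 1/4 symmetry model: ~7,500,000 nodes
# → 1/8 symmetry model: ~3,750,000 nodes
```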

Even if the model is not symmetric, 3-D models can usually be reduced by cutting down the number of individual parts and concentrating the analysis on the areas of interest.

Break up the Problem into Smaller Pieces

Even if the model cannot be reduced in size, oftentimes the execution time can be greatly reduced by running the analysis in parts, using the output of one portion of the simulation as input to another. This will reduce the memory requirements significantly, which in turn may greatly reduce the total run time involved.

Model Simplification – Reducing Number of Parts

Recently, a computer model came to me that contained over 7,000 parts. I would never attempt to solve an FEA problem of this level of complexity using this many individual parts. Fortunately, the customer and I isolated the portions of the design that required analysis, and a relatively simple analysis task was agreed upon and executed.

Supplement with Classical Methods

Recently, I solved a problem for a customer using this method. The assembly under analysis was a small industrial vehicle capable of transporting several thousand pounds. The problem could be summarized as follows: what will be the effect of the vehicle coming to a solid stop, as in encountering a brick wall or steel barrier? Initially, I looked at this and thought “impact analysis”.

Fortunately, the main concern was with the containment structure, whether a concrete wall, a steel barrier, or something else. If damage to the containment structure were likely, what would be the effect of installing bumpers on the vehicle? This simplified things quite a bit, but I could still see some long calculation times with an impact analysis, which is by nature nonlinear.

Instead, I used basic equations involving force, mass, acceleration, distance, and velocity to provide input for a linear analysis. Solution times were a few minutes in length, and the customer was happy with the results as well as my billing. This will be the subject of another blog article, where I can go into more detail.
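
The classical calculation behind this kind of substitution can be sketched in a few lines. This is not the customer's actual analysis; the mass, speed, and stopping distance below are made-up values, and constant deceleration is assumed so that the textbook kinematic relation v² = 2·a·d applies.

```python
# Hedged sketch: converting an impact event into an equivalent static load
# for a linear FEA run. All numbers are hypothetical.

mass_kg = 2000.0   # assumed vehicle-plus-payload mass
speed_m_s = 2.0    # assumed travel speed at impact
crush_m = 0.05     # assumed stopping distance (e.g., bumper deflection)

# Constant-deceleration kinematics: v^2 = 2*a*d  =>  a = v^2 / (2*d)
decel = speed_m_s**2 / (2 * crush_m)  # deceleration, m/s^2
force_n = mass_kg * decel             # F = m*a, equivalent static load

print(f"deceleration: {decel:.1f} m/s^2")            # → 40.0 m/s^2
print(f"equivalent static force: {force_n:,.0f} N")  # → 80,000 N
```

Note the leverage the stopping distance gives: a softer bumper (larger `crush_m`) lowers the deceleration, and with it the static load fed into the linear model.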

Supplement Analysis with Test Data

If this is a new design, use test data from an earlier version of the design: build an FEA model of the existing assembly for which test data is available, compare the model's output against that data, and calibrate the simulation accordingly. This can be used to verify the analysis methodology and to model the new design more accurately.
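
One simple way to apply that calibration is a single scale factor, measured-over-predicted, carried from the old design to the new one. The strain values below are made up purely for illustration; real calibration may need more than one factor, or a mesh or material-property adjustment instead.

```python
# Hedged sketch: calibrating FEA output against test data from an
# existing assembly. All strain values are hypothetical.

measured_strain = 1.05e-3   # from physical testing of the old design
predicted_strain = 1.20e-3  # from the FEA model of that same old design

# Calibration factor: how far the model is from reality on the known design.
calibration = measured_strain / predicted_strain  # 0.875

# Apply the same factor to the new design's raw FEA prediction.
new_design_raw = 0.90e-3
calibrated = new_design_raw * calibration

print(f"calibration factor: {calibration:.3f}")
print(f"calibrated prediction for new design: {calibrated:.2e}")
```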

Norman Neher
Analytical Engineering Services, Inc.
Elko New Market, MN
www.aesmn.org