
NASA | Space Shuttle

APU Test Cells - Complete Redesign

After decades of wear and tear, the Space Shuttle Auxiliary Power Unit (APU) Test Cells were in dire straits.  Archaic test equipment, chart recorders, holding relays, and command and control had to be replaced.  

The assignment was to rebuild and modernize the APU test cells from the ground up.

As the lead engineer for the entire system upgrade, I had a lot to do to meet NASA's standards for Manned Space Flight: Hydrazine protection, modern Data Acquisition, Command & Control, Data Analysis and Reporting, Safety Monitoring, Fire Suppression, and more.

See many images and read more on why NASA said we "set a new standard" for NASA test cells.


Shown: Space Shuttle APU Test Cell with APU Loaded - Inside Blast Proof Walls

How do you rebuild a 25-year-old complex electro-mechanical-hydraulic test stand, one moving 90 gallons per minute at 3000 PSI, without any of the original requirements?  The simple but painful answer is that you start over from scratch.

The APUs, which provide the hydraulic power to control the rocket's 5 nozzles, are both safety-critical for flight and, in themselves, safety risks.  They move a whopping 90 gallons a minute of hydraulic fluid at 3000 PSI!  That is well over 100 kW of hydraulic power, enough to cut through metal.  Add to that, the Hydrazine fuel used to power the APU will catch fire if it comes into contact with any oxidizer, even a simple rusty bolt.

The mechanical design was performed by my colleague.  Everything else, including Signal Conditioning, Data Acquisition, Command and Control, Sequencing, post-test Analysis and Reporting, FFT... too much to list, really, was performed by me and a few engineering co-ops supporting me over the course of two years.

Working this program was one of the top highlights of my professional career.


On a Sad Note:

On February 1st, 2003, I was working with a few NASA engineers inside one of my new test cells to certify them for use when the Space Shuttle Columbia disintegrated upon reentry into the atmosphere.  It was personal on so many levels.  I had watched the first Shuttle disaster, the Challenger, destroyed on live TV during liftoff in 1986.  The lessons of the Challenger had been a core part of my safety and mission control training.  Through my time at UTC, I had met NASA Commanders and Mission Specialists and even attended a dinner with a few.  When you get the chance to spend real time with these men and women, you realize just how smart and incredible they are.  They are why we all work so hard to keep them safe, and why the loss of the Columbia and its crew was so devastating.


My test cells would eventually be certified by NASA, and the Space Shuttle program would continue to fly all the way to July of 2011.  It was finally forced to retire because the molds for many of its massive gears, pumps, and systems had literally been broken, and fatigue on reusable vehicles flown many more years than planned had reduced their reliability.

The last I heard, my test stands had a continued life on new systems.

Please click on any of the Gallery images at the bottom of this page for more details about this project.

Automate Analysis

In the past, reams of chart recorder paper would take a week to work through and analyze.  There was no way to know immediately following a test whether we had a good run, so we would have to reassemble a team if retesting was required.

By adding a parallel computer and DAQ system to the Safety & Control system, we were able to grab high-fidelity snapshots at critical moments of the test and know how we were doing before a run was complete.  The finished products were already in the data folder.

React to Problems Faster

One man standing over a chart recorder, with the ink from 12 traces inches from his eyes and a kill switch in hand, is no way to run such a dangerous system! 

By creating automated signal monitoring - with upper and lower warning and shutdown thresholds - we could react in a few milliseconds to violations we would never have seen previously.
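To make the idea concrete, here is a minimal Python sketch of that kind of two-tier limit monitoring.  The channel names and threshold values are invented for illustration; they are not the stand's real configuration, and the real system was not written in Python.

```python
# Hypothetical sketch of per-channel limit monitoring with separate
# warning and shutdown bands. All names and numbers are illustrative.

OK, WARN, SHUTDOWN = "OK", "WARN", "SHUTDOWN"

# Each channel: (lower_shutdown, lower_warn, upper_warn, upper_shutdown)
LIMITS = {
    "hyd_pressure_psi": (2500.0, 2800.0, 3200.0, 3500.0),
    "turbine_speed_rpm": (0.0, 100.0, 78000.0, 83000.0),
}

def check(channel: str, value: float) -> str:
    """Classify one sample against its channel's limit bands."""
    lo_sd, lo_w, hi_w, hi_sd = LIMITS[channel]
    if value <= lo_sd or value >= hi_sd:
        return SHUTDOWN
    if value <= lo_w or value >= hi_w:
        return WARN
    return OK

def scan(sample: dict) -> str:
    """Return the worst status across all monitored channels."""
    severity = {OK: 0, WARN: 1, SHUTDOWN: 2}
    return max((check(ch, v) for ch, v in sample.items()),
               key=severity.get)
```

A scan like this runs on every acquisition cycle, so a violation on any channel can trip the shutdown path within milliseconds instead of waiting for a human to spot an ink trace drifting out of band.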

Allow More Monitoring

When I purchased two $12K 42" Plasma TVs and a 14" touch screen, management thought NASA would be upset.

On the contrary: giving NASA engineers, UTC engineers and union personnel, and visitors the ability to watch the APU signals in real time was a huge hit.

NASA said that our displays using LabVIEW charts and graphs, combined with the Touch Screens (another first for NASA), "set the standard for future test stands at NASA".

Increase Fidelity of Control

The original test stands used a bank of parallel valve-controlled load paths to create different discrete combinations of hydraulic load.  But the load the Shuttle's Thrust Vector Control (TVC) system applies in flight is fully dynamic.

We had to build a new continuously variable hydraulic load and control scheme to "test as you fly".
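As one toy illustration of what "continuously variable" means on the control side, here is a Python sketch of a proportional-integral (PI) loop driving a simple first-order valve model toward a commanded load profile.  The gains, time constant, and plant model are all assumptions made up for the sketch, not the stand's actual control law.

```python
# Toy closed-loop control of a continuously variable hydraulic load.
# Gains, time constant, and the first-order plant are illustrative only.

def pi_controller(kp, ki, dt):
    """Return a stateful PI step function: command = kp*err + ki*integral."""
    integral = 0.0
    def step(setpoint, measured):
        nonlocal integral
        err = setpoint - measured
        integral += err * dt
        return kp * err + ki * integral
    return step

def simulate(profile, kp=0.8, ki=2.0, dt=0.01, tau=0.05):
    """Track a load profile (e.g. gallons/min) through a first-order
    valve model with time constant tau. Returns achieved load per step."""
    ctrl = pi_controller(kp, ki, dt)
    load, achieved = 0.0, []
    for target in profile:
        cmd = ctrl(target, load)
        load += (cmd - load) * dt / tau  # valve lags toward the command
        achieved.append(load)
    return achieved
```

In flight the TVC demand changes continuously, so the real point of a scheme like this is that the setpoint can be any profile, not one of a handful of discrete valve combinations.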

Better Safety Across the Board

Hydrazine is nasty stuff.  It will ignite if it touches rust on a bolt!  With hundreds of data acquisition (DAQ) channels and both high and low critical values, a chart recorder wasn't going to give us the millisecond shutdown response we needed.

We had to build a Safety System that would handle a very dynamic array of measurements and protect the limited APU assets NASA still had.

Improve Testing Reliability

No more Auxiliary Power Units (APUs) would be built, and each unit had a finite amount of runtime before it had to be scrapped.  With live testing counting as runtime, ANY failed test effort meant NASA had just lost some use of the unit.

We had to build a test system that would run more safely and collect the data correctly the first time.

Real-Time Data Analysis

It took weeks in the old cells to pore over all the chart paper rolls, manually measure traces, and determine whether we had a good test, let alone a good unit under test.

We had to build active data analytics to evaluate signals live during the test.  Not only did this protect the unit by shutting down on an early bad result, it meant when the test was over, we knew the results.
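One example of a live check of this kind, sketched in Python rather than the LabVIEW we used for the displays: flag a run when vibration energy in a frequency band exceeds a limit, the sort of thing an FFT makes cheap to compute on the fly.  The sample rate, band edges, and limit here are invented for the example.

```python
import numpy as np

# Illustrative live spectral check: is the peak FFT amplitude inside a
# frequency band below a limit? Band and limit values are assumptions.

def band_amplitude(signal, fs, f_lo, f_hi):
    """Peak per-sample-normalized FFT magnitude within [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].max()

def vibration_ok(signal, fs, f_lo=500.0, f_hi=1023.0, limit=0.1):
    """True if band energy is under the limit; False should trip review
    or shutdown logic."""
    return band_amplitude(signal, fs, f_lo, f_hi) < limit
```

Checks like this, run block by block during the test, are what let the system shut down on an early bad result instead of discovering it on chart paper a week later.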

Sophisticated Data Acquisition Schemes

Data rates for Data Acquisition (DAQ) must be selected carefully so as not to overwhelm the PCs.  In some cases, we needed a continuous 200K sample log; in others, high-fidelity burst snapshots.

We had to build custom signal conditioning that would allow us to read the signals from multiple DAQ systems at different rates without interference.
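The "burst snapshot" idea can be sketched as a pre-trigger ring buffer: a slow continuous log runs all the time, while a small high-rate buffer keeps the most recent samples so that a trigger captures the moments before the event as well as after it.  This Python class is an illustration of the pattern, not our implementation; buffer sizes are arbitrary.

```python
from collections import deque

# Illustrative pre-trigger snapshot buffer. A ring of the newest samples
# is frozen on trigger, then post-trigger samples are appended until the
# burst is complete. Sizes here are arbitrary example values.

class SnapshotBuffer:
    def __init__(self, pretrigger=1000, posttrigger=1000):
        self.ring = deque(maxlen=pretrigger)  # newest pre-trigger samples
        self.post_needed = posttrigger
        self.capturing = False
        self.snapshot = []
        self.start_len = 0

    def trigger(self):
        """Freeze pre-trigger history and start collecting post-trigger."""
        self.snapshot = list(self.ring)
        self.start_len = len(self.snapshot)
        self.capturing = True

    def push(self, sample):
        """Feed one high-rate sample; returns the burst when complete."""
        if self.capturing:
            self.snapshot.append(sample)
            if len(self.snapshot) >= self.start_len + self.post_needed:
                self.capturing = False
                return self.snapshot  # completed snapshot
        else:
            self.ring.append(sample)
        return None
```

Decoupling the burst path from the continuous log is what keeps a 200K sample stream from flooding the PCs: the high-rate data only persists for the critical moments someone asked to capture.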

FieldPoint Remote I/O

Running hundreds of pairs of signal wires through the blast wall had been fraught with problems in the old cells.  A modern approach was required.

We had to build a distributed system for I/O that would minimize cable lengths and noise.  FieldPoint relays, temperature sense, digital I/O, and other modules put our control right at the required spots.

