
Remotely Monitoring a Satellite Instrument

Guy Beaver

Issue #65, September 1999

How a small aerospace company uses Linux to remotely monitor the performance of a satellite instrument.

G & A Technical Software (GATS) is a small aerospace company located in Newport News, Virginia. Our primary area of business is the analytical support of satellite-based atmospheric remote-sensing projects. We started using Linux in 1995 for software development and data processing workstations.

NASA is funding a two-year satellite mission called TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) to study the atmosphere. One of the experiments on the spacecraft is called SABER, which stands for Sounding of the Atmosphere using Broadband Emission Radiometry. SABER's mission is to measure temperature, ozone, carbon dioxide, water vapor and other trace gases to learn more about the complex energy transfer between the upper and lower atmosphere. GATS has been contracted by NASA to develop and operate the systems and software that process the data from the spacecraft.

A big part of the SABER experiment is calibrating and testing the instrument while it is on the ground. This task involves making measurements of known sources and analyzing the results so that data taken from orbit can be understood. Calibration of SABER is difficult, because measurements must be made under the conditions found in space. To accommodate this, SABER is calibrated in a large chamber capable of reproducing those cold temperatures and near-vacuum pressures. SABER and its calibration facility were built by the Space Dynamics Laboratory (SDL) in Logan, Utah. Figure 1 is a photograph of a calibration test in progress at SDL.

Figure 1. The high bay at SDL. The SABER test chamber is under a clean tent on the left. To the center and right are engineers operating the instrument during calibration tests. The computers mentioned in this article are along the right wall.

In this article, I will describe a Linux-based system that allows remote monitoring of the SABER calibration tests. I will discuss porting software from a Windows NT workstation to a Linux workstation to meet reliability requirements, as well as the use of several open-source products: the GNU C++ compiler, CVS for configuration management, Xmgr for diagnostic plots, the PostgreSQL database, VNC for remote terminal access and Perl. Many of the systems proven by this project can be used by other small businesses for powerful cross-country data processing while keeping total costs and development time low. The system is robust and provides real-time monitoring and analysis of the SABER instrument (located in Logan, Utah) from Newport News, Virginia. The same system will be used for post-launch data processing.

Requirements

Big projects go through several iterations of requirements/design/review, and SABER was no exception. When the time came to put together a final design, the requirements for the system were well-documented. Basically, the system needed to talk via a socket to a computer (a Sun workstation dubbed GSE, for ground system equipment). The GSE computer provided raw data from the SABER instrument as well as from all the test equipment connected to it. We needed to unpack the data into large staging files that could be read by analysis programs. On top of this, the system needed to access a PostgreSQL database on the GSE that was being populated by the SDL test operators with information on the various tests being run, such as times and readings from the various sensors. We had to provide software flexible enough to quickly analyze data with unknown quirks, which meant easy debugging and code-level diagnostic plotting capability. The system had to work 24 hours per day and provide remote access from Virginia to Utah so that our engineers could have timely access to the data for supporting the actual tests. Finally, the system we developed had to be easily reconfigurable for post-launch data processing.

Design

We began the design by using software that GATS developed for another project. These C routines provide an interface to raw spacecraft data in the standard spacecraft format developed by the Consultative Committee for Space Data Systems (CCSDS). SDL simplified the job for us by making the raw CCSDS data available over a TCP socket, which our system already supported. We call this stage of the processing Level 0. It takes the raw binary data in packet form and unpacks it into ASCII files, sorted by data type. We chose ASCII for the initial design because disk space was not an issue (9GB total), and troubleshooting is much simpler if files are human-readable. ASCII is also a friendly format for passing between different systems. We wanted this stage of processing to write across the network as well, so NFS was a must. Note that utilizing socket calls and NFS at this stage means the physical location of the Level 0 computer is irrelevant, as long as the Internet is available.
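
To make the Level 0 stage concrete, here is a minimal sketch of the kind of reader described above: it pulls one CCSDS space packet off the connected TCP socket and appends it, in ASCII, to a staging file chosen by data type. The actual GATS routines are not reproduced here; the function names, file-naming scheme and error handling below are illustrative assumptions. Only the 6-byte CCSDS primary header layout (an 11-bit application process ID and a 16-bit "data length minus one" field) is taken from the standard.

    /* Sketch of a Level 0 packet reader (hypothetical names). */
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read exactly n bytes from a connected socket; false on EOF or error. */
    static bool read_exact(int fd, uint8_t *buf, size_t n)
    {
        size_t got = 0;
        while (got < n) {
            ssize_t r = recv(fd, buf + got, n - got, 0);
            if (r <= 0)
                return false;
            got += static_cast<size_t>(r);
        }
        return true;
    }

    /* Unpack one CCSDS packet and append it to a per-APID ASCII file. */
    bool process_packet(int fd)
    {
        uint8_t hdr[6];                        /* CCSDS primary header */
        if (!read_exact(fd, hdr, sizeof hdr))
            return false;

        /* Low 3 bits of byte 0 plus byte 1 form the 11-bit APID;
           bytes 4-5 hold the packet data length minus one. */
        unsigned apid = ((hdr[0] & 0x07u) << 8) | hdr[1];
        size_t   len  = ((size_t(hdr[4]) << 8) | hdr[5]) + 1;

        std::vector<uint8_t> data(len);
        if (!read_exact(fd, data.data(), len))
            return false;

        /* "Sorted by data type": one staging file per APID
           (the naming scheme here is an assumption). */
        char name[64];
        std::snprintf(name, sizeof name, "level0_apid%04u.txt", apid);
        FILE *out = std::fopen(name, "a");
        if (!out)
            return false;
        for (size_t i = 0; i < len; ++i)
            std::fprintf(out, "%02X ", data[i]);
        std::fputc('\n', out);
        std::fclose(out);
        return true;
    }

Because a reader like this touches nothing but standard sockets and stdio, it runs unchanged wherever the Internet reaches, which is exactly the location independence noted above.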

The next stage of the processing is called Level 1, and its first job is to query the PostgreSQL database for test information. With this, the relevant data is extracted from the Level 0 files and broken out into a single file for each test. For this part of the design, Linux was a must. The Level 1 processing had to talk to an NT box and a Sun workstation. We chose Linux because Samba is easy to set up for NT file sharing, and data sharing with the Sun is automatic via NFS. The software design called for object-oriented programming (OOP) using C++, so that classes writing the Level 1 files could be reused downstream to read the data. Linux comes with GNU C++, which is extremely reliable and easy to debug. Another reason for choosing Linux was that the analysis tools for this project were developed in C++ on a Linux workstation using the powerful (and freely available) display package called Xmgr.
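
The payoff of this design is that the Level 1 file format lives in exactly one class, so the code that writes the files during calibration and the code that reads them downstream can never drift apart. A minimal sketch of the idea follows; the class, the record fields and the whitespace-delimited ASCII layout are invented for illustration, not the actual SABER classes.

    /* Sketch of a reusable reader/writer class (hypothetical names). */
    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    struct Level1Record {
        double time;      /* seconds into the test event */
        int    channel;   /* instrument channel number   */
        double counts;    /* raw detector counts         */
    };

    class Level1File {
    public:
        /* One class owns both directions of the I/O, so the file
           format is defined in a single place. */
        static void write(const std::string &path,
                          const std::vector<Level1Record> &recs)
        {
            std::ofstream out(path.c_str());
            for (std::size_t i = 0; i < recs.size(); ++i)
                out << recs[i].time << ' ' << recs[i].channel
                    << ' ' << recs[i].counts << '\n';
        }

        static std::vector<Level1Record> read(const std::string &path)
        {
            std::vector<Level1Record> recs;
            std::ifstream in(path.c_str());
            Level1Record r;
            while (in >> r.time >> r.channel >> r.counts)
                recs.push_back(r);
            return recs;
        }
    };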

This design is easily modified for post-launch processing, since the data coming from the calibration setup is in precisely the same format as that coming from the spacecraft.

Implementation

Our original design was based on two computers. One was an NT workstation running the Level 0 software and simultaneously providing some real-time strip charts of data. The second was a Linux workstation running the Level 1 software: a P-II with 128MB of RAM and a 24GB hard drive to catch six weeks' worth of calibration data. We built the Linux workstation for about $3500. We ran Samba on the Linux workstation so that the NT box could write its Level 0 files to the large hard drive on the Linux box across the network.
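
The Samba side of this arrangement amounts to little more than a single share definition in smb.conf. A minimal stanza of the sort we are describing might look like the following; the share name, path and user are placeholders, not our actual configuration.

    [level0]
        comment = SABER Level 0 staging files
        path = /data/level0
        read only = no
        valid users = saber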

We flew out to Utah with the computers to test the original design. It worked, but we had some reliability issues with the NT-based Level 0 software. One requirement was that the Level 0 files must be generated reliably and be read-accessible 24 hours per day. We had some unexplained hiccups when the files were accessed by programs such as Microsoft Notepad. After verifying that file read/write access was set correctly, we decided to change the design and port the code to the Linux box.

As mentioned above, the Level 0 software was reconfigured from another GATS project that required an NT workstation. After reviewing the code, we realized the only Windows-specific code was in the winsock calls for socket connections. It is easy to port Windows <winsock.h> calls to GNU C <socket.h> calls; in fact, it mostly entails deleting overhead needed in Windows but not required in GNU C. To keep the code backward-compatible, we simply wrapped the Linux code in #ifdef LINUX/#else preprocessor directives. This allowed us to keep a single body of code, working on both NT and Linux, under one configuration-management version. Some examples of converting Windows socket calls to Linux are shown in Listing 1.

Listing 1.
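
The flavor of the conversion is captured by the sketch below. It is a minimal illustration in the spirit of Listing 1, not the listing itself, and the function names are mine. The Windows-only overhead is the Winsock startup/cleanup pair and the separate closesocket() call, none of which BSD sockets under Linux need.

    /* Sketch of the #ifdef LINUX wrapping described above. */
    #ifdef LINUX
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #else
    #include <winsock.h>
    #endif

    int open_gse_socket(void)
    {
    #ifndef LINUX
        /* Windows-only: the Winsock DLL must be initialized first. */
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(1, 1), &wsa) != 0)
            return -1;
    #endif
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        /* ... connect() to the GSE exactly as on Windows ... */
        return sock;
    }

    void close_gse_socket(int sock)
    {
    #ifdef LINUX
        close(sock);          /* a socket is just a file descriptor   */
    #else
        closesocket(sock);    /* Winsock has its own close and cleanup */
        WSACleanup();
    #endif
    }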

With these modifications, we now had the Level 0 and Level 1 processing stages on one Linux workstation. We called this the Calibration Analysis Computer, and we left it and the (now spare) NT workstation at SDL connected to their network. During calibration tests, it generated Level 0 files 24 hours per day and never had to be rebooted over the six-week calibration period. As I mentioned before, the NT workstation had some strip-charting capabilities for viewing real-time data. This turned out to be a good use for the NT box, so we configured it to work with SABER data. Since the Linux workstation was on the Internet, we automatically had remote access, and we needed the same for the NT box. VNC filled the bill. This remarkable application (Virtual Network Computing) pipes the Windows desktop to a client running on a Linux X session. With VNC, we could set up and monitor the NT box remotely, configuring it for SDL engineers wishing to view real-time temperature output. We could also view the same real-time strip charts on our Linux workstations back in Virginia.

This system offers a great deal of flexibility. We chose to let the Linux workstation at SDL connect to the socket, then access the data through the Internet. We could also connect to the socket from Virginia and generate the Level 0 files in our office.

The Level 1 processing stage ran flawlessly. The PostgreSQL database on the GSE workstation was easily accessible through the front-end library (libpq-fe.h) that comes with this powerful SQL database. Each calibration test event was run by a script on the GSE workstation, which automatically populated an “event” in the database. The Level 1 stage queried this database for the beginning and ending times of the test event. With this information, the relevant piece of the Level 0 files could be pulled out and processed (even though those files were constantly being written to). The resulting files, called calibration analysis files, could then be accessed by the analysis routines, which we called Level 1b.
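
In code, that query is only a handful of libpq calls. The sketch below uses the standard libpq front-end API; the connection string, table and column names, and the event ID are invented for illustration.

    /* Sketch of the Level 1 event-time query (hypothetical schema). */
    #include <libpq-fe.h>
    #include <cstdio>

    int main()
    {
        /* Connect to the database on the GSE workstation. */
        PGconn *conn = PQconnectdb("host=gse dbname=saber");
        if (PQstatus(conn) != CONNECTION_OK) {
            std::fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Fetch the begin/end times for one calibration test event. */
        PGresult *res = PQexec(conn,
            "SELECT begin_time, end_time FROM events WHERE event_id = 42");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            std::printf("event runs %s .. %s\n",
                        PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }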

The Level 1b processing stage contained powerful tools for analyzing the calibration data. Many of the algorithms came from other GATS projects and were reconfigured as methods within classes developed for the SABER project. One valuable diagnostic tool proved to be the C-callable library that came with the Xmgr graphics analysis package. We wrapped these library calls in an easy-to-use plotting method in our Level 1b class. Using objects that have diagnostic plot methods shortened the debugging period that comes with looking at real data for the first time.
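
The shape of such a wrapper is sketched below. It assumes the xmgr named-pipe library (acegr_np, later renamed grace_np in Xmgr's successor, Grace); the function spellings, the class and the plotted fields should all be read as assumptions rather than the actual SABER plotting method.

    /* Sketch of a diagnostic-plot method wrapping the xmgr pipe
       library (hypothetical class; ACEgr* names are assumptions). */
    #include <cstddef>
    #include <vector>
    extern "C" {
    #include <acegr_np.h>   /* ACEgrOpen, ACEgrPrintf, ACEgrClose */
    }

    class Level1bData {
    public:
        std::vector<double> x, y;

        /* Pop up an xmgr window plotting this object's data. */
        void plot(const char *title) const
        {
            if (ACEgrOpen(2048) == -1)       /* launch xmgr on a pipe */
                return;
            ACEgrPrintf("title \"%s\"", title);
            for (std::size_t i = 0; i < x.size(); ++i)
                ACEgrPrintf("g0.s0 point %g, %g", x[i], y[i]);
            ACEgrPrintf("autoscale");
            ACEgrPrintf("redraw");
            ACEgrClose();
        }
    };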

Our development team was lean and mean: three people working on various modules with support from two others, all working under the CVS configuration management system. Since our computers moved back and forth across the country, they were set up to be easily configured for their current location. We did this with simple scripts, stored in /root and run after bootup, one for each location: “atGats” and “atSDL” (Listing 2). Each script set the local IP address and put in place an /etc/resolv.conf (containing the local name servers and their IP addresses) by making a symbolic link to the appropriate resolv.conf file for that location. An alternate solution would have been dynamically assigned IP addresses through DHCP, but we already had pre-assigned local addresses from SDL, and this method was simple and gave us easy control based on the computer's location.

Listing 2.

I attended the first two weeks of calibration testing in Utah to ensure everything was working well, while my support people stayed home in Virginia. During that time, we easily diagnosed problems and were able to make updates to the code using CVS (Concurrent Versions System, the freely available configuration management package). I described problems, they were fixed in Virginia, and I got instant updates with the cvs update command. This works because CVS can be set up with an NFS-mounted CVS root directory on a remote machine (in this case, at GATS in Virginia).

Once the testing started and Level 0 files were being generated, we monitored the data from Virginia as the tests were being run. Quick queries of the database with SQL told us when tests were completed. Making the Level 1 files was easy to automate at this point. Since the Level 1 software had command-line arguments typical of UNIX applications, we wrote Perl scripts to loop over the test event IDs (the database fields designating each test event) and generate the Level 1 files in batches. As we migrate the software to the post-launch processing system, we will automate the entire daily processing with similar Perl scripts.

Results/Benefits

This system provides us with the ability to remotely monitor the SABER calibration tests and the flexibility to remotely process the data, or even download and process it in Virginia on our local Linux workstations. We've used Linux since 1995, so we were not surprised at the ease of getting it to work in a networked environment. This exercise verified the validity and robustness of Linux as the OS of choice for systems requiring remote access and remote monitoring. Invaluable to our project has been the flexibility Linux offers for configuring a mobile computer to operate in different locations on different networks. Finally, the ease of troubleshooting provided by the open-source software available in standard Linux distributions makes it the clear winner in rapid-development environments.


Guy Beaver is a Software Engineer and Senior Associate at GATS Incorporated. He has worked with computers and satellite remote sensing since 1984. Although he looks young, he is old enough to remember card punches and 8-track tapes. He can be reached at beaver@gats-inc.com.
