Telescope ASPH 611 Term Project
A University of Calgary Department of Physics and Astronomy
Graduate Course in Radio Astronomy


Spectrometer Software

Christy Bredeson and Sukhpreet Guram, December 12, 2005

The PDA 14 is a powerful tool for high-speed, high-resolution data acquisition. It supports sampling rates of up to 100 MHz, stores up to 512 MB of data on the board itself and can digitize two separate channels of analog data with 14-bit resolution.

These features make it well equipped for our task of creating a software spectrometer. The acquisition card can comfortably sample at 12.5 MHz, which gives complete frequency coverage (Nyquist sampling) of our highest desired frequency of 5.5 MHz (the high end of our chosen band). The card also came with many pre-written software routines (in C) that proved useful to the project: routines were supplied to initialize card parameters, regulate data acquisition and perform Fourier transforms, which limited the number of routines that had to be written from scratch. Several example programs also assisted in the software development. This section discusses in detail the code used to acquire data from the radio telescope, digitize the signal, Fourier transform the data into the frequency domain and output the results. We also discuss how the data can be visualized, the issues and challenges faced when designing the code, and possible future improvements to the software.

Description of the Code

The board can be configured in a number of ways through various parameters; decisions had to be made about how we wanted the board to operate, and the code then had to reflect that configuration. The setup of the board is as follows. We use channel 1 of the two available channels. The peak-to-peak input voltage range is set to 3 V. The trigger mode is post-trigger mode, so the data stream grabbed by the board is the one that arrives after the trigger. The trigger slope is set to positive, and the trigger level to 0 V so that even a minimal signal can trigger the board into acquiring data. The DC offset level, which shifts the input up or down by a specified amount, is also 0.

The board has two internal clocks, one at 100 MHz and the other at 62.5 MHz, and a set of registers holds a value called the clock divider, which divides the clock frequency by a power of two anywhere from 2 to 1024. The effective sampling rate of the board is therefore decided by which clock is active and which clock divider is selected. For our purposes we need to sample at a rate greater than 11.0 MHz, which is twice the bandwidth of the whole system of amplifier and mixer circuits connected to the telescope (so that Nyquist sampling is achieved). We selected the 100.0 MHz clock and a clock divider of 8, giving an effective sampling rate of 12.5 MHz. This oversamples our data somewhat, but it is the closest we can get to the desired value.

There are also a number of modes in which the board acquires data. It has two kinds of buses: the standard PCI bus and a special high-speed SAB bus. Using the SAB requires extra cables and interfaces, and many SAB options are not yet supported by the board's software, so to keep things simple we use the PCI bus. The mode is set to PCI acquisition mode, in which the data is passed directly to the PC without first being stored in the board's RAM. This reduces latency by saving on transfers; instead of two transfers (first to the board's RAM and then to the PC) we do only one (from the board directly to the PC). The parameters above set up the board for normal operation.

The operation of the data acquisition is as follows. We grab a set of data of a size specified by the user (a number between 2 and 8192, a limit imposed by the Fourier transform routine), pass it to the Fourier transform routine and take the output in an array. We then continually average many data sets until we have grabbed an integration time's worth of data (the integration time is also specified by the user). This spectrum is then written to a file in reverse order, because the final mixer stage of our system uses the lower sideband, which inverts the frequencies. Doing the FFT and averaging in real time reduces the amount of data we have to save. If we were to save all the raw data we would very quickly run out of hard disk space (even on a 200 GB drive!), because we acquire 12.5 megasamples per second at 2 bytes per sample, totaling 25 MB/s. This is a huge amount of data, so by doing the Fourier transform and averaging right away we end up with less than 1 MB per minute, which allows us to acquire data for days at a time without worrying about disk space.

When running the software, the user enters the following parameters. The Fourier transform window is the size of each set of data to be grabbed. The total number of channels, the center channel, the integration time, and the altitude and azimuth of the observation can all be changed by the user. The altitude and azimuth, along with a time stamp, can then be converted into right ascension and declination for each observation. Please see the section on coordinate conversions for more information on this procedure.

Two files are generated for each observation: a .cfg file, which is a text file containing various parameters related to the observation, and a .dat file containing the data. The format of the data file is as follows.

A 64-byte header is created with the following fields: the number of channels, the bandwidth, the center frequency, the channel width, and 48 bytes of padding in case anything else needs to be added later.


The data output to the .dat file has the format (for every time-averaged sample): right ascension, declination, time stamp, total power and the Fourier transformed spectral points. For calculation of the total power, we sum the square of the voltage values in the time domain and then average over the number of points in each integration.

The Fast Fourier Transform routine

The FFT routine gives the user the option of selecting a smoothing window function, which smooths the data and removes undesired artifacts introduced by the FFT. We are currently using the RECTANGULAR window; however, there is a more effective window, the HANNING function, discussed in the theory behind Fourier transforms. We use RECTANGULAR because HANNING is computationally intensive while RECTANGULAR is not. Since we currently process 1 second of data in roughly 4 seconds, our code is already computationally demanding, and HANNING smoothing would make it even more so. Thus, we currently use RECTANGULAR to reduce data processing time.

The FFT routine uses the radix-2 decimation-in-frequency algorithm. The main idea behind this algorithm is that the Fourier transform of N points can always be written as the sum of two Fourier transforms, each formed from N/2 of the points (the Danielson-Lanczos Lemma)2. This can be applied recursively until we are left evaluating the Fourier transform of a single point, which is trivial. At that stage the values sit in bit-reversed order, so the routine computes the bit-reversed indices and reorders the values accordingly (the original numbers are used; the bit-reversed values only determine the ordering). The routine then works backwards, combining adjacent sets of numbers until the whole input array has been transformed.

Visualization of the data

Several avenues were explored for visualizing the spectrometer data; the most favourable proved to be writing our data as a FITS file and using Kvis3 to look at it. Kvis is a good program for visualizing large data sets and allows several axes of a data set to be viewed simultaneously, making it an ideal visualization tool for this project. A routine was written in C to read the data output by the spectrometer and convert it to a FITS file with axes of time versus frequency. Kvis then displays the axes specified in the FITS file, with the intensity represented by a colour scale.

Issues and Challenges

Many issues and challenges were encountered during software development. At a sampling rate of 12.5 MHz we obtain 25 MB/s of time-domain data, far too much to store and process offline. We therefore knew we needed to generate the spectra in real time, build up an average spectrum and output the result. For every iteration, 4096 points (the size of one FFT window) were obtained and Fourier transformed to produce a spectrum. We wanted to repeat this until we had built up an average spectrum integrated over 1 second, ideally without skipping any data points. Unfortunately, because of the long processing times of the FFT routine supplied with the card, this was not possible; it took about 4 seconds to gather and process 1 second's worth of data, so we could not collect and process 1 second of data in 1 second. This meant we had no choice but to skip data points while building up the 1-second average spectrum. This reduces our effective integration time on a source, but not to the point that the sky is undersampled.

As previously mentioned, the FFT routine provided by the manufacturer of the PDA 14 was slow and inefficient, and this was not our only issue with a routine that was crucial to the success of the software spectrometer. The FFT routine accepts a maximum of 8192 points, which yields at most 4096 real frequency channels (the redundant half of the transform of a real signal is discarded). We lose some channels on both sides to the roll-off of the band, leaving only ~3000 usable channels. For a spectrometer studying galactic HI this may be an insufficient number of channels; in our case we accepted the limitation, and our spectrometer output 4095 channels, including the roll-off. The FFT routine was also inconvenient in that it was only available for Windows; no routine was supplied for a Linux-based machine. Using Windows made our task much more difficult, since Windows does not come with a free C compiler (we had to borrow one) and lacks a good visualization program (Kvis does not run on Windows, and Microsoft Excel could not handle such a large quantity of data). Also, since our data is taken in Windows and then visualized in Kvis, we needed a way to transfer the data file from one platform to the other, which required a USB key and a Macintosh laptop. Had we known about these issues earlier in the project, we likely would have taken the time to write our own Linux-compatible FFT routine and acquired data under Linux, which would have alleviated many of these problems.

The card itself presented some major challenges. When we first began using it to acquire data, it worked less than 50% of the time; it was very frustrating trying to get it to respond when nothing seemed to work and it kept crashing the computer. Even more frustration arose when the card was sent back to the manufacturer and returned having been found free of faults. However, after a firmware upgrade and setting the DC offset on the inputs to 0 V, the board began to acquire data reliably. While the card was away at the manufacturer, we wrote a routine to generate synthetic data so that software development could continue. The routine let the user specify single-frequency sine waves and combined them into a time series containing the resulting frequencies. This time data was then fed into the FFT routine to check whether the Fourier-transformed spectra were correct. Overall, this code proved a useful check of the performance and effectiveness of our program, and allowed us to develop software even in the absence of the data acquisition card. Please see the code section for the routine that synthesizes time data.

Another concern is the accuracy of the time stamp acquired from the computer clock. The time stamp, together with the altitude and azimuth of the telescope, tells us the RA and Dec of the object we are observing. As computer clocks are notorious for keeping time unreliably, we needed to synchronize our clock with an atomic clock. This can be done over the internet with a free program called AtomTime1, which connects to the atomic clock server in Boulder, Colorado and updates the PC clock to match the atomic clock value. This ensures that we get accurate RA and Dec values for each observation.

Future Improvements

There are several future improvements that would be useful for the project. Abandoning Windows and porting the code to Linux would be a desirable change: it would require writing Fourier transform code for Linux, but it would cut out some of the intermediate steps we currently perform and allow easy visualization of the data. We would also like the convenience of looking at our data in real time, which would require writing the FITS file line by line, a capability we do not currently have; our code writes a FITS file only one plane at a time, and rewriting it to append data line by line was too difficult and not a necessity within the scope of this project. It would, however, be a welcome future improvement. Lastly, the code should be optimized as far as possible so that 1 second of data can be processed within 1 second. Then no data would have to be discarded, and the greater integration time would increase our signal-to-noise ratio.

  1. AtomTime
  2. Danielson-Lanczos Lemma
  3. Kvis (Karma Astronomical Visualization Software)

Last modified: 10:22 am July 17, 2014
