DSP Techniques

Digital Signal Processing (DSP) is increasingly being used in receivers on the market.  These receivers promise improvements that can only be achieved with DSP techniques, and the techniques themselves are not particularly difficult to implement.

In particular, I am interested in using DSP to implement narrow-band techniques for detecting signals well below the noise.  Elsewhere on this site you will see that I am interested in Pulsars - rotating neutron stars which produce periodic pulses of wideband noise.  Much of the software and hardware that I use for LowFer work was originally written and designed for detecting strong pulsars using modest antennas.  That endeavour was not successful: the antenna required was larger than I was prepared to build.  Consequently, my attention has turned to similar projects based on LowFer signals.

Basically, DSP involves converting a continuous analogue signal into a stream of numbers representing regular discrete samples of that signal.  Once in this form it is possible to use fast computers to do either real-time or off-line analysis of the signals.  The sampled signal is, by nature, an imperfect representation of the original, but this can easily be taken into account when interpreting the results of the analysis.

To implement ultra-narrowband reception two main approaches can be adopted:-

  1. Autocorrelation - this method involves taking a long record.  Each point of the record is first multiplied (with its sign preserved) by itself, and the results of all these multiplications are added together.  That sum is stored as the first point of the autocorrelation.  The process is then repeated, except this time each point of the record is multiplied by the point one position away from it and the results added; this gives the second point of the autocorrelation.  The process continues by selecting points two positions away, and so on.  The resultant points trace a graph of the correlation or "similarity" of the waveform as a function of the successive shifts (i.e., one point away, then two points away...).  As the shift gets greater, random noise becomes less and less similar to itself, so its correlation tends to zero.  Periodic signals, however, show correlations which are themselves periodic and do not diminish to zero but remain constant as the shift increases.  In summary, as the shift increases the noise contribution tends to zero, revealing any hidden periodic signals.  This method has the advantages that the period of the pulse need not be known and the sampling clock need only be accurate to about 0.1 ppm over the time equivalent to the maximum shift.  It has the distinct disadvantage for amateurs of requiring (N * S) floating point multiplications, where N is the number of samples and S is the number of shifts.  In the software I have written, N is about 40,000,000 and S is about 1000.  The time taken for these calculations is of the order of 10 days on a Pentium 200.  (A small code sketch of the method follows this list.)
  2. Fourier Analysis - once again, a long record of the received noise is taken.  Fourier analysis is done mathematically via the Fourier Transform and is equivalent to constructing a large analogue filter bank.  The power at each frequency in the record is calculated and placed into frequency bins.  Standard Fourier analysis basically involves multiplying the record by a sine wave at the frequency of each bin, so each point requires the calculation of a trigonometric value and a floating point multiplication.  This involves (N * N) floating point calculations, even worse than autocorrelation.  However, some clever mathematicians worked out that a large number of these calculations are the same and developed the Fast Fourier Transform, or FFT.  The number of calculations is then only (N * log2(N)) floating point calculations.  For large records this results in huge savings in computational time.  This is the method I am using.  (A sketch of the direct method, and why the FFT is needed, follows this list.)
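
The following is a minimal sketch in C of the autocorrelation described in item 1, assuming the record has already been read into memory as an array of floating point samples.  The array sizes and names are purely illustrative; the real record is of course vastly larger.

    /* Minimal autocorrelation sketch: for each shift s, sum x[i] * x[i+s]
       over the record.  Illustrative only - a real 40,000,000-sample
       record processed this way is what takes days on a Pentium 200. */
    #include <stdio.h>

    #define N_SAMPLES 8192   /* record length (tiny here for illustration) */
    #define N_SHIFTS  64     /* number of shifts to compute */

    void autocorrelate(const float *x, long n, float *ac, int shifts)
    {
        for (int s = 0; s < shifts; s++) {
            double sum = 0.0;
            for (long i = 0; i + s < n; i++)
                sum += (double)x[i] * (double)x[i + s];  /* signed product */
            ac[s] = (float)sum;  /* point s of the autocorrelation */
        }
    }

    int main(void)
    {
        static float record[N_SAMPLES];  /* would be filled from the ADC data */
        static float ac[N_SHIFTS];

        autocorrelate(record, N_SAMPLES, ac, N_SHIFTS);

        for (int s = 0; s < N_SHIFTS; s++)
            printf("shift %3d  correlation %g\n", s, ac[s]);
        return 0;
    }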
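
And here is a sketch of the direct (N * N) Fourier method described in item 2, again with an illustrative record held in memory.  In practice an FFT routine replaces the inner loops and gives the same frequency bins in roughly N * log2(N) operations; the direct form is shown only because it makes the "multiply by a sine wave at each bin frequency" idea explicit.

    /* Direct Fourier analysis sketch: the power in each frequency bin is
       found by multiplying the record by a cosine and a sine at that bin
       frequency and summing.  This is the N*N method; an FFT gives the
       same bins far faster and is what is actually used for long records. */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N_SAMPLES 1024   /* tiny record for illustration */

    void dft_power(const float *x, int n, double *power)
    {
        for (int k = 0; k < n / 2; k++) {          /* one bin per frequency */
            double re = 0.0, im = 0.0;
            for (int i = 0; i < n; i++) {
                double phase = 2.0 * M_PI * k * i / n;
                re += x[i] * cos(phase);
                im += x[i] * sin(phase);
            }
            power[k] = re * re + im * im;          /* power in bin k */
        }
    }

    int main(void)
    {
        static float record[N_SAMPLES];
        static double power[N_SAMPLES / 2];

        /* put a sine wave in bin 37 as a stand-in for a real signal */
        for (int i = 0; i < N_SAMPLES; i++)
            record[i] = (float)sin(2.0 * M_PI * 37.0 * i / N_SAMPLES);

        dft_power(record, N_SAMPLES, power);

        for (int k = 30; k < 45; k++)
            printf("bin %3d  power %g\n", k, power[k]);
        return 0;
    }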

Data Acquisition - To apply DSP to the signals to be analysed it is necessary to record the data in digital form.  The normal way of doing this is to use an analogue-to-digital converter (ADC) between the analogue signal from the receiver and a computer.  The ADC samples the analogue signal at regular intervals and produces a digital (binary) value which represents the level of the analogue signal at that instant.  For signals which might contain frequencies up to about 2400 Hz, the sample rate should therefore be at least twice that frequency.  Assuming a sampling rate of, say, 5000 samples a second and a 12-bit ADC, then for 4 hours of data acquisition you have a record of (5000 * 3600 * 4 * 2) = 144 Mbytes!
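
As a sanity check, that arithmetic can be spelled out in a few lines of C; the figures are those assumed above (5000 samples a second, two bytes per 12-bit sample, 4 hours).

    /* Storage arithmetic for a 12-bit ADC sampled at 5000 samples per
       second for 4 hours, each sample stored in two bytes. */
    #include <stdio.h>

    int main(void)
    {
        long sample_rate   = 5000;        /* samples per second          */
        long seconds       = 3600L * 4;   /* 4 hours                     */
        long bytes_per_smp = 2;           /* 12-bit sample in two bytes  */

        long bytes = sample_rate * seconds * bytes_per_smp;
        printf("record size = %ld bytes (about %ld Mbytes)\n",
               bytes, bytes / 1000000L);
        return 0;
    }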

The other problem when running Windows 95 is that it is difficult to get your system to sample the ADC reliably through a parallel (printer) port, because periodically the system software goes off to carry out some housekeeping operation.  The only remedy is to set up your software to use hardware interrupts so that samples are not missed.  Having been down that path before, I can say that the easiest way is to use the readily available interrupt-driven serial communications interface for acquiring data.  Many high-level languages have interrupt-driven serial port access built in, which allows a buffer to be set up to ensure no samples are lost.
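
A minimal sketch of such a buffering arrangement is shown below: a circular buffer which the interrupt handler (or the language's own serial routine) fills with incoming bytes, and which the main program drains when it gets the chance.  The actual serial port calls are platform specific and are not shown; the names here are illustrative only.

    /* Circular (ring) buffer sketch for interrupt-driven serial acquisition.
       The receive interrupt calls buffer_put() for every incoming byte;
       the main program calls buffer_get() whenever it is ready.  As long
       as the buffer is drained before it wraps, no samples are lost even
       if the operating system stalls the main program briefly. */
    #include <stdio.h>

    #define BUF_SIZE 4096   /* must comfortably cover the longest stall */

    static volatile unsigned char buffer[BUF_SIZE];
    static volatile unsigned int  head = 0;   /* written by the interrupt handler */
    static volatile unsigned int  tail = 0;   /* read by the main program         */

    /* Called from the serial receive interrupt with each incoming byte. */
    void buffer_put(unsigned char byte)
    {
        unsigned int next = (head + 1) % BUF_SIZE;
        if (next != tail) {               /* drop the byte if the buffer is full */
            buffer[head] = byte;
            head = next;
        }
    }

    /* Called from the main loop; returns 1 and stores a byte if one is waiting. */
    int buffer_get(unsigned char *byte)
    {
        if (tail == head)
            return 0;                     /* buffer empty */
        *byte = buffer[tail];
        tail = (tail + 1) % BUF_SIZE;
        return 1;
    }

    int main(void)
    {
        /* Simulate a burst of received bytes, then the main loop draining them. */
        for (unsigned char b = 0; b < 10; b++)
            buffer_put(b);

        unsigned char out;
        while (buffer_get(&out))
            printf("got byte %u\n", out);
        return 0;
    }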

Fortunately, it turns out that for periodic signals buried in wideband noise it is acceptable to sample the analogue data and carry out 1-bit conversion.  That is, it is only necessary to record the polarity of the signal, which can conveniently be stored as a single bit.  This allows the data to be recorded as 8 samples per byte instead of two bytes per sample as in the example above, a compression gain of 16:1.  The 144 Mbyte record becomes a more manageable 9 Mbytes.
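
A sketch of the packing is shown below, assuming 16-bit signed samples as input; the function and variable names are illustrative rather than those used in my own software.

    /* Pack the sign (polarity) of each sample into single bits, eight
       samples per byte, most significant bit first.  A positive sample
       becomes 1, a negative (or zero) sample becomes 0. */
    #include <stdio.h>

    /* Pack n samples into n/8 bytes (n assumed to be a multiple of 8). */
    void pack_polarity(const short *samples, long n, unsigned char *packed)
    {
        for (long i = 0; i < n; i += 8) {
            unsigned char byte = 0;
            for (int b = 0; b < 8; b++) {
                byte <<= 1;
                if (samples[i + b] > 0)
                    byte |= 1;            /* record only the polarity */
            }
            packed[i / 8] = byte;
        }
    }

    int main(void)
    {
        short samples[16] = { 120, -34, 5, -1, -99, 7, 3, -2,
                              -8, 14, -5, 6, 200, -150, 9, -3 };
        unsigned char packed[2];

        pack_polarity(samples, 16, packed);
        printf("packed bytes: 0x%02X 0x%02X\n", packed[0], packed[1]);
        return 0;
    }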

The hardware for this 1-bit ADC and serial interface has been built and tested and has been successfully used for acquiring signals for off-line analysis.