An Introduction To Compressive Sampling
This article surveys the theory of compressive sensing, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Introduction to Compressed Sensing
Chapter 1 of the book: "Compressed Sensing: Theory and Applications".
Introduction to Real-Time Digital Signal Processing
Chapter 1 of the book: Real-Time Digital Signal Processing: Fundamentals, Implementations and Applications, 3rd Edition
A Pragmatic Introduction to Signal Processing
An illustrated essay with software available for free download.
Introduction to Signal Processing
This book provides an applications-oriented introduction to digital signal processing written primarily for electrical engineering undergraduates. Practicing engineers and graduate students may also find it useful as a first text on the subject.
C++ Tutorial
This tutorial is for people who want to learn programming in C++ and do not necessarily have any previous knowledge of other programming languages. Of course, any knowledge of other programming languages or general computer skills is useful for better understanding this tutorial, although it is not essential. It is also suitable for those who need a small update on the new features the language has acquired from the latest standards. If you are familiar with the C language, you can take the first 3 parts of this tutorial as a review of concepts, since they mainly explain the C part of C++. There are slight differences in the C++ syntax for some C features, so reading them anyway is recommended. The 4th part describes object-oriented programming. The 5th part mostly describes the new features introduced by the ANSI C++ standard.
Computing FFT Twiddle Factors
In this document are two algorithms showing how to compute the individual twiddle factors of an N-point decimation-in-frequency (DIF) and an N-point decimation-in-time (DIT) FFT.
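As a quick reference for the underlying arithmetic (not a reproduction of the document's two algorithms), the Python sketch below simply evaluates the defining expression for the radix-2 twiddle factors, W_N^k = exp(-j2πk/N); the DIF and DIT procedures differ only in which k values each butterfly stage uses.

```python
# A minimal sketch (not the document's algorithms): the radix-2 FFT twiddle
# factors are the complex exponentials W_N^k = exp(-j*2*pi*k/N). Both the DIF
# and DIT procedures index into this same set; they differ in which k values
# are used at each butterfly stage.
import numpy as np

def twiddle_factors(N):
    """Return the N/2 distinct twiddle factors W_N^k, k = 0 .. N/2 - 1."""
    k = np.arange(N // 2)
    return np.exp(-2j * np.pi * k / N)

print(twiddle_factors(8))   # the 4 twiddle factors of an 8-point FFT
```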
Generating Complex Baseband and Analytic Bandpass Signals
There are so many different time- and frequency-domain methods for generating complex baseband and analytic bandpass signals that I had trouble keeping those techniques straight in my mind. Thus, for my own benefit, I created a kind of reference table showing those methods. I present that table for your viewing pleasure in this document.
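As a hedged illustration of one of the many approaches such a table catalogs, the Python sketch below forms an analytic bandpass signal with a Hilbert transformer and then mixes it down to complex baseband; the sample rate fs and carrier frequency fc are made-up values for the example.

```python
# One of the many generation methods: build the analytic (one-sided-spectrum)
# signal with a Hilbert transformer, then translate it to complex baseband by
# mixing with exp(-j*2*pi*fc*t). fs and fc are assumed example values.
import numpy as np
from scipy.signal import hilbert

fs, fc = 8000.0, 1000.0                  # assumed sample rate and carrier (Hz)
t = np.arange(0, 0.01, 1 / fs)
x = np.cos(2 * np.pi * fc * t)           # real bandpass input

analytic = hilbert(x)                    # x + j*Hilbert{x}: analytic bandpass signal
baseband = analytic * np.exp(-2j * np.pi * fc * t)   # complex baseband signal
```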
De-Noising Audio Signals Using MATLAB Wavelets Toolbox
Noise and distortion are the main factors limiting the capacity of data transmission in telecommunications and the accuracy of results in signal measurement systems, so modeling and removing them are at the core of theoretical and practical work in communications and signal processing. Noise reduction and distortion removal are also major problems in applications such as cellular mobile communication, speech recognition, image processing, medical signal processing, radar, sonar, and any other application where the desired signals cannot be isolated from noise and distortion. The use of wavelets for de-noising audio signals is relatively new, but it has been growing over the past 20 years. One way to think about wavelets is to compare them with how our eyes perceive the world at different distances. In the real world, a forest can be seen from many different perspectives, which are, in effect, different scales of resolution. From the window of an airplane, for instance, the forest cover appears as a solid green canopy. From the window of a car, that canopy resolves into individual trees, and if we leave the car and approach the forest, we gradually see details such as branches and leaves. With a magnifying glass, we could see a dew drop on the tip of a leaf. As we move to ever smaller scales, we discover details we had not seen before. If we tried to do the same thing with a photograph, however, we would be frustrated: enlarging the picture "closer" to a tree would only give a blurred image of the tree, with no visible branch or leaf, let alone a dew drop. Although our eyes can see on many scales of resolution, the camera can display only one at a time. In this chapter, we introduce the reader to a way to reduce noise in an audio signal using wavelet transforms, developed with the wavelet tool in MATLAB. Simulink is used to acquire an audio signal and convert it to a digital format so it can be processed, and the Graphical User Interface Development Environment (GUIDE) is used to create a graphical user interface. The reader can work through this chapter systematically, from the theory to the implementation of the noise reduction technique. We first introduce the basic theory of audio signals, the fundamentals of noise treatment, and the principles of wavelet theory. Then we present the development of noise reduction using wavelet functions in MATLAB. We first demonstrate the usefulness of wavelets for reducing noise in a model system in which Gaussian noise is added to an audio signal. In the following sections, we present a practical example of noise reduction on a sinusoidal signal generated in MATLAB, followed by an example with a real audio signal captured via Simulink. Finally, the graphical noise reduction model built with GUIDE is shown.
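The chapter's implementation uses the MATLAB Wavelets Toolbox; purely as a conceptual sketch of the same idea (soft-thresholding of wavelet detail coefficients), here is a minimal Python version using the PyWavelets package. The wavelet choice 'db4', the decomposition level and the universal threshold are illustrative assumptions, not the chapter's settings.

```python
# A minimal wavelet de-noising sketch using PyWavelets (the chapter itself
# uses the MATLAB Wavelets Toolbox). Wavelet, level and threshold rule are
# illustrative assumptions.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)        # decompose
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)                  # reconstruct

t = np.linspace(0, 1, 4096)
clean = np.sin(2 * np.pi * 440 * t)                       # 440 Hz tone
noisy = clean + 0.3 * np.random.randn(t.size)             # add Gaussian noise
denoised = wavelet_denoise(noisy)
```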
Biosignal processing challenges in emotion recognition for adaptive learning
User-centered computer-based learning is an emerging field of interdisciplinary research. Research in diverse areas such as psychology, computer science, neuroscience and signal processing is contributing to taking this field to the next level. Learning systems built using contributions from these fields could be used in actual training and education instead of just laboratory proofs of concept. One of the important advances in this research is the detection and assessment of the cognitive and emotional state of the learner using such systems. This capability moves development beyond the use of traditional user performance metrics to include system intelligence measures that are based on current theories in neuroscience. These advances are of paramount importance to the success and widespread use of learning systems that are automated and intelligent. Emotion is considered an important aspect of how learning occurs, and yet estimating it and making adaptive adjustments are not part of most learning systems. In this research we focus on one specific aspect of constructing an adaptive and intelligent learning system: estimation of the learner's emotion as he or she is using the automated training system. The challenge starts with the definition of emotion and its utility in human life. The next challenge is to measure the co-varying factors of the emotions in a non-invasive way, and to find consistent features from these measures that are valid across a wide population. In this research we use four physiological sensors that are non-invasive, and establish a methodology for utilizing the data from these sensors using different signal processing tools. A validated set of visual stimuli used worldwide in the research of emotion and attention, called the International Affective Picture System (IAPS), is used. A dataset is collected from the sensors in an experiment designed to elicit emotions from these validated visual stimuli. We describe a novel wavelet method to calculate a hemispheric asymmetry metric using electroencephalography data. This method is tested against the typically used power spectral density method. We show an overall improvement in accuracy in classifying specific emotions using the novel method. We also show distinctions between different discrete emotions from autonomic nervous system activity using electrocardiography, electrodermal activity and pupil diameter changes. Findings from the different features derived from these sensors are used to give guidelines for using each of the individual sensors in an adaptive learning environment.
Implementation of Uncoordinated Direct Sequence Spread Spectrum using Software Defined Radios
One of the major threats to wireless communications is jamming. Many anti-jamming techniques have been presented in the past, but most of them are based on the precondition that the communicating devices have a pre-shared secret that can be used to synchronize the anti-jamming scheme. For frequency hopping, for example, the secret could be used to derive the hopping sequence, and for direct sequence spread spectrum the secret is used to derive the spreading codes. But how can the devices bootstrap jamming-resistant communication without having a pre-shared secret? Christina Popper and Mario Strasser propose schemes for Uncoordinated Frequency Hopping (UFH) and Uncoordinated Direct Sequence Spread Spectrum (UDSSS) in their papers [1] and [2] respectively. The goal of my project was an implementation of Uncoordinated Direct Sequence Spread Spectrum (UDSSS) using Software Defined Radios. The first version should serve as an easy-to-use and extendable proof of concept for the proposed scheme.
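For readers unfamiliar with the coordinated scheme that UDSSS relaxes, the toy Python sketch below spreads BPSK symbols with a pseudo-noise code and despreads them by correlation; the code length and noise level are arbitrary example values, and none of this reflects the project's actual SDR implementation.

```python
# A toy illustration of ordinary (coordinated) direct sequence spread spectrum,
# i.e. the operation whose spreading code UDSSS no longer assumes to be
# pre-shared. Code length, data and noise level are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1, 1], size=31)          # pseudo-noise spreading code
bits = np.array([1, -1, 1, 1, -1])           # BPSK data symbols

tx = np.concatenate([b * code for b in bits])        # spread each bit over the code
rx = tx + 0.5 * rng.standard_normal(tx.size)         # noisy channel

despread = rx.reshape(len(bits), len(code)) @ code / len(code)   # correlate with the code
decoded = np.sign(despread)                                       # recovered bits
```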
EngD thesis: Reduced-Complexity Signal Detection in Digital Communications Receivers
The Author began this Engineering Doctorate (EngD) in Autumn 1999, whilst already in full-time employment as a DSP software engineer at Nortel Networks, Harlow. This EngD comprises a set of three projects. The first project was focused on the development of dual-tone multi-frequency (DTMF) signal detection software. DTMF signals are currently used for operating menu-driven services such as voice-mail, telephone banking and share-dealing. The need for detection software in a packet networking environment exists because such signals become degraded when they travel through speech compression circuits. In addition, if they appear as echoes on a telephone line, they can affect the operation of echo cancellation systems. In this thesis a number of DSP algorithms are discussed where fast detection and minimum complexity are key characteristics. A key contribution here was the development of a novel detection algorithm based on the extraction of parameters that model the DTMF signal. The thesis reports a method combining parameter extraction with the technique of maximum likelihood to perform DTMF detection, resulting in very short detection time-frames compared to standard approaches. Reducing the complexity of detection techniques is also important in today's communication systems. To this end, a key contribution here was the development of the 'split Goertzel algorithm', which permits an overlapping of analysis windows without the need to reprocess input data. Besides voice-band signals such as DTMF, the Author also had the opportunity during the EngD to apply the skills and knowledge acquired in signal processing to higher-rate data-streams. This involved work on the equalisation of channel distortion commonly found in wireless communication systems, covering two projects, the first of which was conducted at Verticalband Ltd. (now no longer operational) in the area of digital television receivers. In this part of the thesis a real-time DSP implementation is discussed for enhancing a simulation system developed for wireless channels. A number of channel equalisation approaches are studied. The work concludes that for high-rate signals, non-linear algorithms have the best error-rate performance. On the basis of the studies carried out, the thesis considers development and implementation issues of designs based on the decision feedback equaliser. The thesis reports on a software design which applies the method of least squares to carry out filter coefficient adaptation. The Verticalband studies led on to further research in channel equalisation, in the context of data detection in multiple-input multiple-output (MIMO) wireless local area network (WLAN) systems. This work was undertaken at Philips Research in Eindhoven, The Netherlands. The thesis discusses implementation scenarios for multi-element antenna arrays that aim to provide bit-rates orders of magnitude higher than today's WLAN offerings, such as those required by emerging standards like 802.11n. The complexity of optimal detection techniques, such as maximum likelihood, scales exponentially with the number of transmit antennas. This translates to high processing costs and power consumption, rendering such techniques unsuitable for use in battery-powered devices.
The initial main contribution here was the analysis of the complexity of algorithms whose performance had already been tested, such as the non-linear maximum likelihood approach and also less complex methods utilising linear filtering. This resulted in the development of new formulae to predict processing costs of algorithms based on the number of transmit and receive antennas. Other key contributions were the definition of a method to reduce the complexity of matrix inversion when using the Moore-Penrose pseudo-inverse, and the application of matrix decomposition to obviate the need for costly matrix inversion altogether. Some on-going research into sub-optimal detection is also discussed, describing methods to reduce the search space for the maximum likelihood algorithm.
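For context on the DTMF work, the sketch below shows the standard Goertzel recursion that single-tone detectors are commonly built on (the thesis's 'split Goertzel algorithm' is a modification of this idea and is not reproduced here); the 205-sample block at 8 kHz is a typical DTMF frame size used only for illustration.

```python
# The standard Goertzel recursion for single-tone (e.g. DTMF) energy detection,
# shown as background; the thesis's 'split Goertzel algorithm' modifies this idea.
import numpy as np

def goertzel_power(x, f_tone, fs):
    """Squared magnitude of the DFT bin nearest f_tone, computed via Goertzel."""
    N = len(x)
    k = int(round(N * f_tone / fs))
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs = 8000.0
t = np.arange(205) / fs                                           # ~26 ms block, a typical DTMF frame
x = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1209 * t)    # DTMF digit '4'
print(goertzel_power(x, 770.0, fs))                               # large: 770 Hz present
print(goertzel_power(x, 941.0, fs))                               # small: 941 Hz absent
```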
Fixed-Point Arithmetic: An Introduction
This document presents definitions of signed and unsigned fixed-point binary number representations and develops basic rules and guidelines for the manipulation of these number representations using the common arithmetic and logical operations found in fixed-point DSPs and hardware components.
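As a minimal illustration of the kind of rules the document develops, the sketch below models signed Q15 values as scaled integers and shows the rounded right shift needed to keep a Q15 × Q15 product in Q15; the Q15 format and the rounding choice are assumptions for the example.

```python
# A minimal sketch of signed Q15 fixed-point conventions: values are stored as
# integers scaled by 2**15, and a Q15 x Q15 multiply needs a (here rounded)
# right shift to bring the product back to Q15.
Q = 15

def to_q15(x):
    return int(round(x * (1 << Q)))           # quantize a real value to Q15

def q15_mul(a, b):
    return (a * b + (1 << (Q - 1))) >> Q      # rounded product, back in Q15

def from_q15(a):
    return a / (1 << Q)

a, b = to_q15(0.75), to_q15(-0.5)
print(from_q15(q15_mul(a, b)))                # approximately -0.375
```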
Algorithms, Architectures, and Applications for Compressive Video Sensing
The design of conventional sensors is based primarily on the Shannon-Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete-time samples provided the sampling rate exceeds 2W samples per second. For discrete-time signals, the Shannon-Nyquist theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete-time domain without any loss of information. However, in an increasing number of applications, the Shannon-Nyquist sampling theorem dictates an unnecessary and often prohibitively high sampling rate. (See Box 1 for a derivation of the Nyquist rate of a time-varying scene.) As a motivating example, the high resolution of the image sensor hardware in modern cameras reflects the large amount of data sensed to capture an image. A 10-megapixel camera, in effect, takes 10 million measurements of the scene. Yet, almost immediately after acquisition, redundancies in the image are exploited to compress the acquired data significantly, often at compression ratios of 100:1 for visualization and even higher for detection and classification tasks. This example suggests immense wastage in the overall design of conventional cameras.
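A minimal sketch of the compressive measurement model behind this argument may help: a length-N signal that is K-sparse is sensed with M << N random projections y = Φx instead of N Nyquist-rate samples (recovery would then use a sparse solver such as l1 minimization, which is not shown). The dimensions below are arbitrary example values.

```python
# A minimal sketch of the compressive measurement model: M << N random
# projections of a K-sparse signal. Recovery (e.g. via l1 minimization) is not
# shown. N, K and M are arbitrary example values.
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 1000, 10, 120                  # ambient dimension, sparsity, measurements

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # random sensing matrix
y = Phi @ x                                                    # M compressive samples
print(y.shape)    # (120,): far fewer numbers than the N = 1000 Nyquist-rate samples
```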
Peak-to-Average Power Ratio and CCDF
Peak to Average Power Ratio (PAPR) is often used to characterize digitally modulated signals. One example application is setting the level of the signal in a digital modulator. Knowing PAPR allows setting the average power to a level that is just low enough to minimize clipping.
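As a hedged illustration, the Python sketch below computes the PAPR of a crude multicarrier test signal and the CCDF of its instantaneous power relative to the average; the OFDM-like construction and its parameters are arbitrary assumptions, not taken from the article.

```python
# PAPR of a complex-envelope signal and the CCDF of its instantaneous power
# relative to the average power. The multicarrier test signal is an arbitrary example.
import numpy as np

rng = np.random.default_rng(2)
n_sym, n_sc = 2000, 64
syms = rng.choice([-1, 1], (n_sym, n_sc)) + 1j * rng.choice([-1, 1], (n_sym, n_sc))
x = np.fft.ifft(syms, axis=1).ravel()            # crude OFDM-like waveform

p_inst = np.abs(x) ** 2
p_avg = p_inst.mean()
papr_db = 10 * np.log10(p_inst.max() / p_avg)
print(f"PAPR = {papr_db:.1f} dB")

# CCDF: probability that the instantaneous power exceeds the average by z dB
z_db = np.arange(0, 12, 0.5)
ccdf = [(p_inst > p_avg * 10 ** (z / 10)).mean() for z in z_db]
```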
The World's Most Interesting FIR Filter Equation: Why FIR Filters Can Be Linear Phase
This article discusses a little-known filter characteristic that enables real- and complex-coefficient tapped-delay-line FIR filters to exhibit linear phase behavior. That is, this article answers the question: what is the constraint on real- and complex-coefficient FIR filters that guarantees linear phase behavior in the frequency domain?
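Without restating the article's general constraint, the sketch below checks the best-known special case numerically: a real FIR filter with symmetric coefficients has a constant group delay of (N-1)/2 samples, i.e. linear phase. The coefficient values are an arbitrary example.

```python
# Numerical check of the familiar special case: a real FIR filter with
# symmetric coefficients has a constant group delay of (N-1)/2 samples.
import numpy as np
from scipy.signal import group_delay

h = np.array([0.1, 0.3, 0.5, 0.3, 0.1])        # symmetric: h[n] == h[N-1-n]
w, gd = group_delay((h, [1.0]))                # group delay across frequency
print(np.allclose(gd, (len(h) - 1) / 2))       # True: constant 2-sample delay
```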
Complex Down-Conversion Amplitude Loss
This article illustrates the signal amplitude loss inherent in a traditional complex down-conversion system. (In the literature of signal processing, complex down-conversion is also called "quadrature demodulation.")
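A small numerical check of that loss, under the usual assumption of mixing with a complex exponential followed by lowpass filtering: the baseband component of a real cosine of amplitude A comes out with amplitude A/2. The sample rate, carrier frequency and amplitude below are arbitrary example values.

```python
# Mixing a real cosine of amplitude A with exp(-j*2*pi*f0*t) splits its energy,
# so the lowpass (baseband) component has amplitude A/2. Values are arbitrary.
import numpy as np

fs, f0, A = 100_000.0, 5000.0, 1.0
t = np.arange(0, 0.01, 1 / fs)                 # an integer number of carrier cycles
x = A * np.cos(2 * np.pi * f0 * t)             # real bandpass signal

y = x * np.exp(-2j * np.pi * f0 * t)           # complex down-conversion (mixing)
baseband = y.mean()                            # crude lowpass: average over the block
print(abs(baseband))                           # approximately A/2 = 0.5
```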
Using the DFT as a Filter: Correcting a Misconception
I have read, in some of the literature of DSP, that when the discrete Fourier transform (DFT) is used as a filter the process of performing a DFT causes an input signal's spectrum to be frequency translated down to zero Hz (DC). I can understand why someone might say that, but I challenge that statement as being incorrect. Here are my thoughts.
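One simple numerical way to examine the question (an illustration, not a summary of the article's argument) is to view DFT bin k as a length-N tapped-delay-line filter and look at where that filter's magnitude response peaks: it peaks at bin k's center frequency, not at 0 Hz.

```python
# The impulse response of the FIR filter equivalent to computing DFT bin k is,
# up to a constant phase factor, exp(j*2*pi*k*n/N). Its magnitude response
# peaks at bin k's center frequency (k*fs/N), not at DC.
import numpy as np

N, k = 64, 5
h = np.exp(2j * np.pi * k * np.arange(N) / N)   # bin-k "filter" coefficients (up to constant phase)
H = np.fft.fft(h, 4096)                         # zero-padded frequency response
peak_bin = np.argmax(np.abs(H))
print(peak_bin * N / 4096)                      # approximately 5, i.e. bin k, not 0 Hz
```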