How to Find a Fast Floating-Point atan2 Approximation
Context: Over a short period of time, I came across nearly identical approximations of the two-parameter arctangent function, atan2, developed by different companies, in different countries, and even in different decades. Fascinated with how the coefficients used in these approximations were derived, I set out to find them. This atan2 implementation is based on a rational approximation of arctangent on the domain −1 to 1: $$ \arctan(z) \approx \dfrac{z}{1.0 +...
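As a rough illustration of that form, here is a minimal Python sketch of a rational atan2 approximation. The coefficient 0.28 is a commonly quoted value for the z / (1 + a*z^2) shape, not necessarily the coefficient derived in the article, and the quadrant handling is the usual octant fold.

```python
import math

def atan2_approx(y, x, a=0.28):
    """Fast atan2 via atan(z) ~ z / (1 + a*z^2) on |z| <= 1.

    `a` is a commonly used coefficient for this form; the article derives
    its own value, which may differ.
    """
    if x == 0.0 and y == 0.0:
        return 0.0
    if abs(x) >= abs(y):
        # |z| <= 1 here, so the core approximation applies directly.
        z = y / x
        angle = z / (1.0 + a * z * z)
        if x < 0.0:                      # fold back into the left half-plane
            angle += math.pi if y >= 0.0 else -math.pi
    else:
        # Use atan2(y, x) = pi/2 - atan(x/y) so the argument stays in [-1, 1].
        z = x / y
        angle = math.pi / 2.0 - z / (1.0 + a * z * z)
        if y < 0.0:
            angle -= math.pi
    return angle

print(atan2_approx(1.0, 1.0), math.atan2(1.0, 1.0))  # both ~ 0.785
```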
Wavelets II - Vanishing Moments and Spectral Factorization
In the previous blog post I described the workings of the Fast Wavelet Transform (FWT) and how wavelets and filters are related. As promised, in this article we will see how to construct useful filters. Concretely, we will find a way to calculate the Daubechies filters, named after Ingrid Daubechies, who invented them and also laid much of the mathematical foundations for wavelet analysis.
Besides the content of the last post, you should be familiar with basic complex algebra, the...
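For readers who want to see what the end result looks like before working through the derivation, the PyWavelets package ships the Daubechies filters ready-made. This is just a way to inspect the db2 analysis low-pass filter for comparison, not the spectral-factorization construction the post develops.

```python
import pywt  # PyWavelets

# Analysis low-pass (scaling) filter of the Daubechies-2 wavelet: the four
# coefficients this post sets out to derive from vanishing-moment conditions.
db2 = pywt.Wavelet('db2')
print(db2.dec_lo)  # approximately [-0.1294, 0.2241, 0.8365, 0.4830]
```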
Wavelets I - From Filter Banks to the Dilation Equation
This is the first in what I hope will be a series of posts about wavelets, particularly about the Fast Wavelet Transform (FWT). The FWT is extremely useful in practice and also very interesting from a theoretical point of view. Of course there are already plenty of resources, but I found they tend to be either simple implementation guides that do not touch on the many interesting and sometimes crucial connections, or highly mathematical and definition-heavy, for a...
Digital Envelope Detection: The Good, the Bad, and the Ugly
Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.
Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
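As a point of reference for the methods such a summary covers, here is a minimal sketch of one standard approach, taking the magnitude of the analytic signal from scipy.signal.hilbert. The test signal below is synthetic, with made-up frequencies, not one of the examples from the blog.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0                                       # sample rate in Hz (demo value)
t = np.arange(0, 0.5, 1.0 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)  # slow positive-amplitude fluctuation
x = envelope * np.cos(2 * np.pi * 500 * t)        # amplitude-fluctuating sinusoid

# Analytic-signal method: |x + j*Hilbert{x}| recovers the envelope.
est = np.abs(hilbert(x))
print(np.max(np.abs(est[200:-200] - envelope[200:-200])))  # small away from the edges
```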
Autocorrelation and the case of the missing fundamental
[UPDATED January 25, 2016: One of the examples was broken; also, the IPython notebook links now point to nbviewer, where you can hear the examples.]
For sounds with simple harmonic structure, the pitch we perceive is usually the fundamental frequency, even if it is not dominant. For example, here's the spectrum of a half-second recording of a saxophone.
The first three peaks are at 464, 928, and 1392 Hz. The pitch we perceive is the fundamental, 464 Hz, which is close to...
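A small synthetic sketch of the effect (not the saxophone recording itself): build a harmonic series at 464 Hz with a deliberately weak fundamental, and the autocorrelation still peaks at the fundamental's period.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1.0 / fs)
f0 = 464.0
# Harmonics at f0, 2*f0, 3*f0 with the fundamental weakest, mimicking the example.
x = (0.2 * np.sin(2 * np.pi * f0 * t)
     + 1.0 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.8 * np.sin(2 * np.pi * 3 * f0 * t))

# The autocorrelation peaks at the fundamental period, about fs / f0 samples.
corr = np.correlate(x, x, mode='full')[len(x) - 1:]
lo, hi = int(fs / 1000), int(fs / 100)   # lags for pitches between 100 Hz and 1 kHz
lag = lo + np.argmax(corr[lo:hi])
print(fs / lag)                          # ~ 464 Hz
```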
Generating pink noise
In one of his most famous columns for Scientific American, Martin Gardner wrote about pink noise and its relation to fractal music. The article was based on a 1978 paper by Voss and Clarke, which presents, among other things, a simple algorithm for generating pink noise, also known as 1/f noise.
The fundamental idea of the algorithm is to add up several sequences of uniform random numbers that get updated at different rates. The first source gets updated at...
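A minimal sketch of that idea, assuming octave-spaced update rates (source k is refreshed every 2**k samples); this is a simplified variant for illustration, not necessarily the exact scheme from the Voss and Clarke paper.

```python
import numpy as np

def pink_noise(n_samples, n_sources=16, seed=0):
    """Approximate 1/f noise by summing uniform sources updated at different rates."""
    rng = np.random.default_rng(seed)
    sources = rng.uniform(-1, 1, n_sources)
    out = np.empty(n_samples)
    for i in range(n_samples):
        for k in range(n_sources):
            if i % (2 ** k) == 0:        # slower sources refresh less often
                sources[k] = rng.uniform(-1, 1)
        out[i] = sources.sum()           # sum of all sources at this sample
    return out

x = pink_noise(2 ** 14)
```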
Amplitude modulation and the sampling theorem
I am working on the 11th and probably final chapter of Think DSP, which follows material my colleague Siddhartan Govindasamy developed for a class at Olin College. He introduces amplitude modulation as a clever way to sneak up on the Nyquist–Shannon sampling theorem.
Most of the code for the chapter is done: you can check it out in this IPython notebook. I haven't written the text yet, but I'll outline it here, and paste in the key figures.
Convolution...
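A tiny sketch of the amplitude-modulation step itself, with arbitrary demo values (440 Hz message, 10 kHz carrier): multiplying by the carrier shifts the message spectrum up to the carrier frequency, and multiplying again (followed by low-pass filtering) brings it back down.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
message = np.cos(2 * np.pi * 440 * t)      # baseband signal
carrier = np.cos(2 * np.pi * 10000 * t)    # 10 kHz carrier

am = message * carrier                     # spectrum moved to 10 kHz +/- 440 Hz
demod = am * carrier                       # baseband copy returns, plus one at 20 kHz
# A low-pass filter applied to `demod` would keep only the recovered message.
```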
Multilayer Perceptrons and Event Classification with data from CODEC using Scilab and Weka
For my first blog, I thought I would introduce the reader to Scilab [1] and Weka [2]. In order to illustrate how they work, I will put together a script in Scilab that will sample using the microphone and CODEC on your PC and save the waveform as a CSV file.
Python scipy.signal IIR Filtering: An Example
Introduction: In the last posts I reviewed how to use the Python scipy.signal package to design digital infinite impulse response (IIR) filters, specifically, using the iirdesign function (IIR design I and IIR design II). In this post I am going to conclude the IIR filter design review with an example.
Previous posts:
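To give a flavor of what such an example involves, here is a minimal scipy.signal.iirdesign sketch. The sample rate, band edges, ripple, and filter type below are illustrative placeholders, not the specifications used in the post.

```python
import numpy as np
from scipy import signal

fs = 48000.0
# Low-pass example: pass band up to 3 kHz, stop band above 4 kHz,
# 1 dB pass-band ripple, 60 dB stop-band attenuation (edges normalized to Nyquist).
wp = 3000.0 / (fs / 2)
ws = 4000.0 / (fs / 2)
b, a = signal.iirdesign(wp, ws, gpass=1, gstop=60, ftype='ellip')

# Apply the resulting IIR filter to a two-tone test signal.
t = np.arange(0, 0.1, 1.0 / fs)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 8000 * t)
y = signal.lfilter(b, a, x)      # the 1 kHz tone passes, the 8 kHz tone is attenuated
```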
Beat Notes: An Interesting Observation
Some weeks ago a friend of mine, a long-time radio engineer as well as a piano player, called and asked me,
"When I travel in a DC-9 aircraft, and I sit back near the engines, I hear this fairly loud unpleasant whump whump whump whump sound. The frequency of that sound is, maybe, two cycles per second. I think that sound is a beat frequency because the DC-9's engines are turning at a slightly different number of revolutions per second. My question is, what sort of mechanism in the airplane...
Overview of my Articles
Introduction: This article is a summary of all the articles I've written here at DspRelated. The main focus has always been an increased understanding of the Discrete Fourier Transform (DFT). The references are grouped by topic and ordered in a reasonable reading order. All the articles are meant to teach math, or give examples of math, in context within a specific application. Many of the articles also have sample programs which demonstrate the equations derived in the articles. My...
In Search of The Fourth Wave
Last year I participated in the first DSP Related online conference, where I presented a short talk called "In Search of The Fourth Wave". It's based on a small mystery I encountered when I was working on Think DSP. As you might know:
A sawtooth wave contains harmonics at integer multiples of the fundamental frequency, and their amplitudes drop off in proportion to 1/f. A square wave contains only odd multiples of the fundamental, but they also drop off...
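A minimal sketch of those two harmonic recipes, synthesizing a sawtooth and a square wave by summing sines; the fundamental frequency and the number of harmonics are arbitrary demo choices.

```python
import numpy as np

fs, f0 = 44100, 220
t = np.arange(0, 0.5, 1.0 / fs)
n = np.arange(1, 40)[:, None]     # harmonic numbers 1..39, kept below Nyquist

# Sawtooth: every integer harmonic, amplitudes falling off as 1/n.
saw = (np.sin(2 * np.pi * n * f0 * t) / n).sum(axis=0)

# Square: odd harmonics only, also falling off as 1/n.
odd = n[::2]                      # 1, 3, 5, ...
square = (np.sin(2 * np.pi * odd * f0 * t) / odd).sum(axis=0)
```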
Components in Audio recognition - Part 1
Audio recognition is the task of recognizing a particular piece of audio (which could be music, a ring-tone, or speech) from a given sample set of audio tracks.
The Human Auditory System (HAS) is unique in that the tasks of "familiarisation" with unknown tracks and finding "similar" tracks come naturally to us. Tunes from the not-so-recent past can still haunt the human brain many years later, when triggered by a similar tune. The way the brain stores and...
Digging into an Audio Signal and the DSP Process Pipeline
In this post, I'll look at the benefits of using multiple perspectives when handling signals. A Pre-existing Audio File: Let's say we have an audio file of interest. Let's load it into Audacity and zoom in a little (using View → Zoom → Zoom In, multiple times). The figure illustrates the audio signal: just a basic single-tone signal.
By continuing to zoom into the signal, we eventually get to the point of seeing individual samples as illustrated below. Notice that I've marked one...
Exploring Human Hearing Range
Human Hearing Range: In this post, I'll look at an interesting aspect of Audacity – using it to explore the threshold of human hearing. In my book Digital Signal Processing: A Gentle Introduction with Audio Examples, I go into this topic and I include a side note on the amazing hearing range of our canine companions.
Creating a Test Audio File: Audacity allows for the generation of a variety of test signals. If you click the Generate → Tone menu, it looks something like...
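For readers who prefer a script to the Audacity GUI, here is a rough Python equivalent of generating a test tone and saving it to a WAV file; the 15 kHz frequency, duration, and amplitude are arbitrary starting points for probing the upper edge of your hearing.

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
freq = 15000.0                    # raise this until you can no longer hear the tone
t = np.arange(0, 3.0, 1.0 / fs)
tone = 0.3 * np.sin(2 * np.pi * freq * t)

wavfile.write('tone_15khz.wav', fs, tone.astype(np.float32))
```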
ICASSP 2011 conference lectures online (for free)
For the first time, the oral presentations of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) were recorded and posted online for free. This conference is the best in signal processing and it's diverse as well.
It has a bit of speech processing, communication signal processing, and some interesting stuff like bio-inspired signal processing, where Prof. Sayed modeled the behaviour of a group of predators attacking a herd of prey using distributed least mean...
A Free DSP Laboratory
Getting Started In Audio DSP: Imagine you're starting out studying DSP and your particular interest is audio. Wouldn't it be nice to have access to some audio signals and the tools to analyze and modify them? In the old days, a laboratory like this would most likely have cost a lot of time and money to set up. Nowadays, it doesn't have to be like this. The magic of open source software makes it quite straightforward to build yourself a simple audio DSP laboratory – just use the brilliant...
Sonos, Shut Up and Take My Money! - Is Spatial Audio Finally Here?
Although I generally agree that money can't buy happiness, I recently made a purchase that has brought me countless hours of pure joy. In this blog post, I want to share my excitement with the DSPRelated community, because I know there are many audio and music enthusiasts here, and also because I suspect there is a lot of DSP magic behind this product. And I would love to hear your opinions and experiences if you have also bought or tried the Sonos ERA 300 wireless speaker, or any other...
The Phase Vocoder Transform
1. Introduction: I would like to look at the phase vocoder in a fairly "abstract" way today. The purpose of this is to discuss a method for measuring the quality of various phase vocoder algorithms, building on a measure proposed in [2]. There will be a bit of time spent in the domain of continuous mathematics, thus defining a phase vocoder function or map rather than an algorithm. We will be using geometric visualizations when possible while pointing out certain group theory...
Fitting Filters to Measured Amplitude Response Data Using invfreqz in Matlab
This blog post has been moved to the code snippet section and can now be found HERE. Please update your bookmark. Thanks!
Through the tube...
Hello all,
something completely different...
There was some recent discussion on the forum about modeling guitar amplifiers. I have been wondering for quite a while whether the methods that I use to model radio-frequency power amplifiers might also work for audio applications.
It's been a rainy day, so I found the time and energy for some experiments. Just for fun.
The device-under-test is a preamplifier with a single 12AX7 tube:
My good ol' Kurzweil (not in the picture) serves as "signal...