For the last four months, this column has focused on the physical behavior of horns and the drivers that are attached to them. We intentionally tried to present these subjects with enough technical meat to be interesting even to very experienced pro sound practitioners. This month, we step back and re-examine frequency response, a concept that is ubiquitous in pro audio and displayed on most equipment datasheets.

Almost everything here is covered with more mathematical rigor in many other sources, everywhere from Wikipedia to university textbooks. Our intent is one of conceptual discovery, rather than a mathematical discourse. Most of the concepts here are not difficult, although some can be unintuitive. Let’s delve into the background behind how engineers calculate and display information about sound, and revisit the datasheet in light of our new perspective.

**Music Versus the Datasheet**

When sound strikes a microphone, what is occurring? Whether using a dynamic, ribbon or condenser transducer, the mic reacts to the changing air pressure applied by the sound and converts it into an electrical signal sent to the electronics of the mixing console. The console can be thought of as gathering information about the voltage of the electrical signal from the microphone, which is in turn related to the sound modulating the pressure on the microphone diaphragm. Thus, the voltage information is directly related to the sound captured by the microphone.

The sound signal coming into the mixing console looks nothing like the performance plots on a manufacturer’s datasheet (**Fig. 1a**). Anyone who has made a recording and looked at the waveform in a digital audio workstation (DAW) knows that the information captured about pressure is an ever-changing jagged line, rapidly jogging about in a complicated way (**Fig. 1b**). Contrast this bouncing waveform with the frequency response plot on a manufacturer’s datasheet, which for most electronics will be a more or less boring flat line.

It would be reasonable to ask how the boring, straight line on the datasheet has anything to do with the signal (i.e., sound information) coming from the microphones. Most people in pro audio would know that datasheet line represents frequency response, and that, in general, the flatter the line, the better. Some further fraction would know that a flat frequency response curve means that “all frequencies have the same relative volume.” But really, what does this mean in the context of the incoming signal from the microphone?

To answer this question, we journey back to a time not long after Napoleon’s defeat, and to a talented French mathematician and physicist named Jean Baptiste Joseph Fourier (*foor yay*), who rose to scientific and political importance after being orphaned as a child. Amid his many accomplishments, Fourier had a key mathematical insight: certain types of more complicated mathematical functions can be completely deconstructed into a collection of different, simpler functions. To recreate the original (complicated) math, one need only add up (sum) the simpler functions with a specific relationship between them. Fourier’s insight was soon proven by other mathematicians, and has since had far-reaching implications in many fields, including pro audio. His work ties the flat line of the datasheet to the dynamically changing incoming music signal.

**Two Sides of One Coin**

The complicated math Fourier considered in the 1820s today maps to the information about voltage ↔ pressure ↔ sound that is entering the mixing console. By extension, the flat line on the manufacturer’s datasheet can be thought of as information about the simpler math that can be used to deconstruct the complex signal. The flat line on the datasheet tells us that the mixing console electronics have a very uniform relationship between the simple math functions, and therefore will faithfully represent the complicated math (sound signal) with little deviation from the original mic input. Another way to think of this is that the datasheet tells you the simple math is added together in a manner that reshuffles it to a minimal degree, resulting in fidelity to the original captured information.

Fourier’s most significant achievements took place in an era before the phonograph, photography or other modern media. To him these two “domains” of functions (i.e., simple and complicated) were just mathematical abstractions on a piece of paper. In our modern context of the mixing console, Fourier’s concepts are *still* abstract mathematical ways of looking at the incoming sound information. It’s not like the mixing console creates math equations out of the input signal! Rather the two math domains are convenient ways of representing the incoming microphone information and the behavior of the mixing console.

It is important to remember that the sum of the simple functions is mathematically proven to be *completely equivalent* to the complex math functions. When we describe the behavior of one math domain, we are *also* saying something about the other’s behavior. Fourier’s work describes the mathematical machinery to move from one domain to another. This is to say that Fourier showed how to either deconstruct the complicated math, or sum up the simple math, and switch between the two representations. While the views are mathematically equivalent, each is useful for thinking about the signal (or console behavior) in different contexts.
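This equivalence is easy to demonstrate on a computer. The sketch below (using NumPy’s FFT routines; the signal itself is invented purely for illustration) deconstructs a “complicated” waveform into the frequency domain and sums it back, recovering the original exactly:

```python
import numpy as np

# A made-up "complicated" time-domain signal: two tones plus a little
# noise, sampled at 48 kHz for a tenth of a second.
fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 440 * t)            # A4 fundamental
          + 0.5 * np.sin(2 * np.pi * 880 * t)    # quieter octave
          + 0.1 * rng.standard_normal(t.size))   # noise

# Deconstruct into the frequency domain (Fourier's "simple" functions)...
spectrum = np.fft.rfft(signal)

# ...then sum those simple functions back into the time domain.
reconstructed = np.fft.irfft(spectrum, n=signal.size)

# The round trip is numerically exact: both domains carry the same information.
print(np.allclose(signal, reconstructed))  # True
```

Nothing about the signal is lost in either direction; the two representations are two views of the same information.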

**Fourier’s First Domain**

The information that comes into the mixing console details how the pressure of the sound wave at the microphone is changing. Specifically, how the pressure is *changing* with *time*. As the trumpet blows, or the drum rings, it produces changes in air pressure that rise and fall as time progresses. Just as the microphone picks up these vibrations, so too do our ears. Our brains then interpret this pressure that changes with time as sound. This correlates to the complicated math domain discussed above. The technical term for this is the “time domain.”

In addition to the concept of time proceeding forward, the other key concept about the time domain is the principle of change. If the pressure in the air didn’t rise and fall, the force on the microphone diaphragm would not change, and the diaphragm would not change its movement as time progressed. A constant pressure is simple to describe mathematically. All one needs is to provide a *fixed* number that describes the pressure, and a *fixed* direction that describes where the pressure is pushing. It is only as the pressure changes moment by moment that more complicated mathematical descriptions arise.
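That distinction between constant and changing pressure shows up directly in Fourier’s math. In the sketch below (the pressure values are invented for illustration), a constant signal needs only a single number in the frequency domain, the 0 Hz “no change” bin; every other bin is empty:

```python
import numpy as np

fs = 1_000
t = np.arange(0, 1, 1 / fs)

# A fixed pressure value that never changes over time...
constant_pressure = np.full(t.size, 2.5)

# ...has a frequency-domain description with only one nonzero entry:
# the 0 Hz bin. All the bins describing change are (numerically) zero.
spectrum = np.fft.rfft(constant_pressure)
print(np.allclose(spectrum[1:], 0, atol=1e-6))  # True
```

Only when the pressure rises and falls moment by moment do the other frequency bins come into play.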

In addition to the undulating pressure on a microphone, our world is full of change. Cars speed up and slow down, wind changes speed and direction, temperatures rise and fall, heat flows from hot spaces to cold, gravity accelerates skiers down the slope and money travels from one account to another. All of these circumstances represent change, and some of them can be tackled with the math Fourier developed. His work is ubiquitous in the sciences, and we in professional audio benefit from its development and application in other fields. For instance, the increasing affordability of much of our signal processing equipment derives from cheaper and more powerful electronic chips that are developed to tackle similar computations in other disciplines such as telecommunications.

If the time domain can be thought of as the complicated math domain, then the “frequency domain” is composed of simpler math functions. Much of the thought and vocabulary in professional audio is tied to the frequency domain. This should make intuitive sense, as a datasheet frequency response plot is easier to understand than some jagged signal input. A datasheet could just as easily provide a time domain representation of the product’s performance (**Fig. 2**), but most would find it more difficult to interpret than a smooth, flat frequency response line.

Before changing to the frequency domain representation, let’s review some vocabulary common in live sound. The three terms below define the essential bookkeeping that keeps track of how to sum up the simple math functions and allow swapping back to the time domain:

**Frequency** — Frequency measures how often in time something completes the same cycle or path. For instance, the Earth makes a complete rotational cycle every 24 hours. Therefore, every location on the globe repeats its orientation to the sun on the same 24-hour interval. Higher frequencies mean something repeats more rapidly.

**Amplitude** — Amplitude is a measure of the intensity (i.e., level or volume) of a *given frequency*. A high amplitude means that a given frequency is prominent, while a zero amplitude means that a given frequency is entirely absent from a signal.

**Phase** — Phase is a mathematical way of keeping track of the relative orientations in time of *each frequency* with respect to others. Each frequency has a phase value that defines its orientation relative to other frequencies contained in a signal.

As pro sound practitioners, we use these terms regularly, but possibly without thinking deeper about how they enable swapping between domains. To transition from the frequency domain to the time domain, we need an account of all the frequencies a signal contains, the relative amplitude of each frequency and the relative position in time of each frequency component. Keep a record of these three things, and one can add them up in a specific way to reconstruct the time domain representation. In a similar manner, a time domain signal can be deconstructed into information about frequency, amplitude and phase. Scientists, engineers, designers and system techs routinely switch between the two domains to show audio information in the most useful and intuitive way.
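The bookkeeping described above can be sketched in a few lines of Python (the component values here are invented for illustration). Keep a record of (frequency, amplitude, phase) for each component, and summing the simple functions rebuilds the time-domain signal:

```python
import numpy as np

# Bookkeeping for three components: (frequency in Hz, amplitude, phase in radians).
components = [
    (100.0, 1.0, 0.0),
    (300.0, 0.5, np.pi / 4),
    (500.0, 0.25, np.pi / 2),
]

fs = 8_000
t = np.arange(0, 0.02, 1 / fs)

# Summing the simple functions: each (frequency, amplitude, phase) record
# contributes one sinusoid, and their sum is the time-domain signal.
signal = np.zeros_like(t)
for freq, amp, phase in components:
    signal += amp * np.sin(2 * np.pi * freq * t + phase)
```

A DAW waveform is conceptually just such a sum, only with vastly more frequency components in the record.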

There are a number of different graphical methods that people use to display information from the frequency domain. The most common graphic on equipment datasheets is frequency response. The frequency response is only half of another common display method called the Bode (*bo dee*) plot. The Bode plot consists of two panels. One presents relative amplitude versus frequency (**Fig. 3**). The other presents relative phase versus the same range of frequencies (**Fig. 4**). Datasheets (Fig. 1) generally omit the plot of phase behavior, as it can be less intuitive to understand than the frequency response plot. Measurement software like SMAART, Systune, SpectraFoo, SIM 3, SAT-LIVE, ARTA, etc., display frequency and phase in the Bode plot format (Figs. 3 and 4).
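As an illustration of the two Bode panels, the sketch below computes the amplitude and phase curves for a hypothetical first-order low-pass filter with a 1 kHz corner (the filter and its corner frequency are invented for this example; real measurement software derives the same two curves from measured data):

```python
import numpy as np

# Hypothetical first-order low-pass: H(f) = 1 / (1 + j*f/fc), corner at 1 kHz.
fc = 1_000.0
freqs = np.logspace(1, np.log10(20_000), 200)  # 10 Hz to 20 kHz, log-spaced

H = 1.0 / (1.0 + 1j * freqs / fc)

# Upper Bode panel: relative amplitude vs. frequency, in dB.
magnitude_db = 20 * np.log10(np.abs(H))
# Lower Bode panel: relative phase vs. frequency, in degrees.
phase_deg = np.degrees(np.angle(H))

# At the corner frequency, the textbook values fall out: -3 dB and -45 degrees.
H_corner = 1.0 / (1.0 + 1j)
print(round(20 * np.log10(abs(H_corner)), 1),
      round(np.degrees(np.angle(H_corner)), 1))  # -3.0 -45.0
```

Plot `magnitude_db` and `phase_deg` against `freqs` on a log axis and you have the two panels of a Bode plot.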

**Conclusion**

Behind this common terminology in our industry lie deep insights from centuries of mathematics, science and engineering. Fourier’s insight paved the way to how certain specifications are displayed on modern pro audio datasheets. He also opened the door to understanding that many of the complex undulations of our world can be taken apart and represented by frequency, amplitude and phase.

There are many ramifications of Fourier’s work in the world, certainly enough for another article in *FRONT of HOUSE*. We conclude with one profound realization about audio, and indeed the physical world at large. We have seen that two very different ways of looking at audio information are mathematically joined at the hip. If one manipulates a signal represented in the time domain, the frequency domain is *mathematically compelled* to reflect those same changes in its own way. Conversely, if we modify a signal’s frequency domain behavior, we *also* change its behavior when viewing it from the time domain perspective. One cannot manipulate the time domain without influencing the frequency domain, and vice versa.

*Phil Graham is the senior engineering consultant of PASSBAND, llc (passbandllc.com).*
