Chapter 1: Sounds and Signals

Interactive JavaScript notebook

Press Shift+Enter or click Run to execute each cell. Variables persist between cells. Run cells in order from top to bottom.

Signals

Instantiate cosine and sine signals.

The function CosSignal(freq, amp, offset) creates a cosine signal defined by:

$$y(t) = A \cos(2\pi f t + \phi)$$

where A is the amplitude (amp), f is the frequency in Hz (freq), t is time, and ϕ is the phase offset in radians (offset).

Similarly, SinSignal(freq, amp, offset) uses sine instead of cosine:

$$y(t) = A \sin(2\pi f t + \phi)$$
[1]:
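
A sketch of what this cell might contain, using the (freq, amp, offset) signature described above. The frequencies and amplitudes are illustrative, chosen so the sine's frequency is double the cosine's, as noted below.

```js
// Illustrative values: a 440 Hz cosine and an 880 Hz sine.
const cosSig = CosSignal(440, 1.0, 0);   // freq = 440 Hz, amp = 1.0, offset = 0 rad
const sinSig = SinSignal(880, 0.5, 0);   // freq = 880 Hz, amp = 0.5, offset = 0 rad
```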

Plot the sine and cosine signals. By default, plot plots three periods.

[2]:
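
For example, plotting the cosine signal first (passing a signal directly to the plot helper is an assumption about this notebook's helper):

```js
// Plot the cosine signal; by default three periods are shown.
plot(cosSig);
```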

Here's the sine signal.

[3]:

Notice that the frequency of the sine signal is doubled, so the period is halved.

The sum of two signals is a SumSignal.

[4]:
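
The notebook doesn't spell out how the SumSignal is built, so this sketch assumes an add method (or an exposed SumSignal constructor); adjust to the port's actual API.

```js
// Assumption: signals combine with an add method, producing a SumSignal.
const mix = cosSig.add(sinSig);
// Alternative, if the constructor is exposed directly:
// const mix = SumSignal(cosSig, sinSig);
```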

Here's what it looks like.

[5]:

Waves

A Signal represents a mathematical function defined for all values of time. If you evaluate a signal at a sequence of equally spaced times, the result is a Wave. The framerate is the number of samples per second.

[6]:
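
A sketch, assuming the wave is produced by a makeWave method that takes a duration in seconds and a framerate in samples per second; the method name, argument order, and values are assumptions.

```js
// Sample the mix signal for half a second at 11,025 samples per second.
const wave = mix.makeWave(0.5, 11025);   // assumed signature: makeWave(duration, framerate)
```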

We can play the wave as audio.

[7]:
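
The notebook presumably provides its own playback helper; as a self-contained alternative, the wave's ys samples can be played with the standard Web Audio API.

```js
// Play the samples in the browser using the Web Audio API.
const ctx = new AudioContext();
const buffer = ctx.createBuffer(1, wave.ys.length, wave.framerate);
buffer.copyToChannel(Float32Array.from(wave.ys), 0);
const source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(ctx.destination);
source.start();
```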

The ys attribute is an array that contains the values from the signal. The interval between samples is the inverse of the framerate.

[8]:
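
For example, inspecting the samples and computing the spacing between them:

```js
// The interval between samples is the inverse of the framerate.
const dt = 1 / wave.framerate;
console.log(`${wave.ys.length} samples, ${dt} seconds apart`);
```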

Signal objects that represent periodic signals have a period attribute.

Wave provides segment, which creates a new Wave. So we can pull out a three-period segment of this wave.

[9]:
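
A sketch, assuming segment takes a start time and a duration, both in seconds; the duration of three periods comes from the signal's period attribute.

```js
// Pull out a segment three periods long, starting at the beginning of the wave.
const seg = wave.segment(0, 3 * mix.period);   // assumed signature: segment(start, duration)
```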

Wave provides plot (we use our plot helper).

[10]:

normalize scales a wave so the range doesn't exceed -1 to 1.

apodize tapers the beginning and end of the wave so it doesn't click when you play it.

[11]:
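
A sketch of both operations; whether they modify the wave in place or return a new wave depends on the port, so in-place modification is assumed here.

```js
// Assumption: normalize and apodize modify the wave in place.
seg.normalize();   // keep the samples within -1 to 1
seg.apodize();     // taper the ends so playback doesn't click
```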

Reading Audio Files

readWave reads WAV files. The WAV examples in the book are from freesound.org.

[12]:
[13]:
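
A sketch of reading a recording; the file name is a placeholder, and readWave is assumed to be asynchronous in the browser. The result can be played the same way as the synthesized wave above.

```js
// Read a WAV file (placeholder name for one of the freesound.org recordings).
const recording = await readWave('violin.wav');
```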

I pulled out a segment of this recording where the pitch is constant. When we plot the segment, we can't see the waveform clearly, but we can see the "envelope", which tracks the change in amplitude during the segment.

[14]:
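
A sketch using the assumed segment(start, duration) signature from above; the start time and duration are illustrative values for a stretch where the pitch is steady.

```js
// Extract and plot a constant-pitch segment (illustrative start and duration).
const violinSeg = recording.segment(1.2, 0.6);
plot(violinSeg);
```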

Spectrums

Wave provides makeSpectrum, which computes the spectrum of the wave using the FFT.

[15]:
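
For example, computing the spectrum of the violin segment (assuming makeSpectrum takes no required arguments):

```js
// Compute the spectrum of the segment using the FFT.
const spectrum = violinSeg.makeSpectrum();
```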

Spectrum provides plot (we use plotSpectrum for control over the maximum frequency shown).

[16]:
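
For example:

```js
// Plot the full spectrum with the plotSpectrum helper.
plotSpectrum(spectrum);
```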

The frequency components above 10 kHz are small. We can see the lower frequencies more clearly by providing an upper bound:

[17]:
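
A sketch, assuming plotSpectrum accepts the maximum frequency to display as its second argument (how the helper actually takes this bound is an assumption):

```js
// Show only the components up to 10 kHz.
plotSpectrum(spectrum, 10000);
```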

Filtering

Spectrum provides lowPass, which applies a low-pass filter; that is, it attenuates all frequency components above a cutoff frequency.

[18]:
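
A sketch using the 3000 Hz cutoff mentioned at the end of this chapter; whether lowPass modifies the spectrum in place or returns a new one depends on the port, so in-place modification is assumed.

```js
// Attenuate all frequency components above 3000 Hz.
spectrum.lowPass(3000);   // assumption: modifies the spectrum in place
```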

The result is a spectrum with fewer components.

[19]:

We can convert the filtered spectrum back to a wave:

[20]:
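
A sketch, assuming the spectrum provides a makeWave method for the inverse transform (the method name is an assumption):

```js
// Convert the filtered spectrum back into a wave (inverse FFT).
const filtered = spectrum.makeWave();
```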

And then normalize it to the range -1 to 1.

[21]:

Before playing it back, I'll apodize it (to avoid clicks).

[22]:

And I'll do the same with the original segment.

[23]:

Finally, we can listen to the original segment and the filtered version.

[24]:
[25]:

The original sounds more complex, with some high-frequency components that sound buzzy. The filtered version sounds more like a pure tone, with a more muffled quality. The cutoff frequency I chose, 3000 Hz, is similar to the quality of a telephone line, so this example simulates the sound of a violin recording played over a telephone.