
SOUND SYNTHESIS: THE PHYSICS BEHIND MUSIC SYNTHESISERS


Introduction

The arrival of musical synthesisers in the 1960s changed the industry and came to define the sound of contemporary music. Synthesisers are electronic musical instruments, typically controlled with a keyboard, that can replicate the sounds of traditional instruments or produce entirely new electronic timbres. The first synthesisers were analog and used complex electronic circuitry; more recent advances in digital signal processing have ushered in more accessible digital synths [1]. Many types of synthesis can be employed, each producing its own distinctive sound and allowing for a huge range of potential tonality.

Synthesisers were first used in pop music in the 1960s, featuring on albums such as Simon and Garfunkel's Bookends and The Beatles' Abbey Road, but they remained a rarity due to their high cost. In the 1980s, relatively inexpensive digital synthesisers became available, heightening their popularity in 80s pop and dance music, where they defined the sound of the era. Nowadays, synths are used ubiquitously in rock, pop, and dance music, and even contemporary classical composers have experimented with them.

What is sound?

The physics governing sound is essential to understanding how a synthesiser functions. Sound is the perceived vibration of air resulting from the oscillation of a sound source. These vibrations arise from the conversion of mechanical energy, for example a hand clap, into a disturbance in air pressure. If the vibrations are periodic, the sound has a repeating waveform and a definite pitch that is meaningful to our ears. From Fourier analysis, we know that any periodic waveform, no matter how complex, can be expressed as a sum of sine waves with different frequencies and amplitudes [2]. In the case of sound waves, these component frequencies are known as partials. In practice, the component frequencies of a sound are found using the Fast Fourier Transform (FFT), an efficient algorithm for computing the Discrete Fourier Transform (direct evaluation of the DFT is computationally impractical for long signals). Figure 1 shows the pressure disturbance waveform of a sound produced by a piano and its corresponding frequency spectrum, found from the FFT.

Figure 1: The waveform and frequency spectrum of a piano note. The spectrum is found by applying the FFT to the original waveform [3].
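As an illustrative aside (not part of the original essay), the following Python sketch builds a crude harmonically rich test tone and computes its magnitude spectrum with NumPy's FFT; the partial amplitudes are made-up values chosen only for demonstration.

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one second of time samples

# A crude piano-like test tone: a 220 Hz fundamental plus a few
# harmonics with decaying (illustrative, made-up) amplitudes.
f0 = 220.0
signal = sum((0.5 ** n) * np.sin(2 * np.pi * (n + 1) * f0 * t) for n in range(6))

# Magnitude spectrum via the FFT; rfft returns only non-negative frequencies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The peaks sit at integer multiples of f0, i.e. the harmonics.
peaks = freqs[np.argsort(spectrum)[-6:]]
print(sorted(peaks.round()))    # -> [220, 440, 660, 880, 1100, 1320]
```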

In general, musical instruments produce harmonically complex sounds containing many frequencies. Typically, the partials in an instrument's sound are all integer multiples of the lowest-frequency partial; such partials are known as harmonics. If this is not the case, so that there are inharmonic partials in the frequency spectrum, the sound can be harsh, unpleasant, and without clear pitch (such as the sound of a crash cymbal) [4]. The frequency spectrum of a sound characterises its timbre, the quality that makes instruments sound different from one another even when they produce sound of the same pitch and loudness.

Figure 2: The waveforms and frequency spectra of different instruments playing the same A note. From top to bottom: a tuning fork, flute, violin, and a human voice [5].

The pitch of a sound is the property that allows a listener to judge whether one sound is 'higher' or 'lower' than another. The pitch of a sine wave is directly associated with its frequency: a sinusoid of higher frequency has a higher pitch. This is not the case for complex waveforms, whose pitch can be ambiguous, since the listener hears multiple frequencies at once; the pitch we hear is typically set by the lowest partial, the fundamental frequency [6]. Pitch depends on frequency logarithmically, meaning that the ratio of two frequencies, not their difference, defines the interval between the pitches: doubling the frequency always raises the pitch by one octave.
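As a minimal sketch of this logarithmic relationship (an illustration, not taken from the essay): in twelve-tone equal temperament, the interval in semitones between two frequencies is 12 times the base-2 logarithm of their ratio.

```python
import math

def semitones_between(f1: float, f2: float) -> float:
    """Interval in equal-tempered semitones between two frequencies."""
    return 12 * math.log2(f2 / f1)

print(semitones_between(220.0, 440.0))   # 12.0 -> one octave
print(semitones_between(440.0, 660.0))   # ~7.02 -> roughly a perfect fifth
```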

There are two main factors responsible for the unique sound of each instrument, and a synth must reproduce both to replicate that sound. As previously discussed, the timbre defines the steady-state sound, so this must be imitated by matching the frequency spectrum. The other factor is how the volume and the frequencies present in the sound change over time, in a way unique to each instrument. These changes in volume are clear when comparing a piano and a violin: when a piano key is struck the sound is immediate and short-lived, whereas while a violin is bowed the sound builds much more gradually.

History of the Synthesiser

The origin of synthesisers is hard to pin down. Some electronic organs from the early 20th century, such as the Telharmonium (shown in Figure 3), which used dynamos to create its sound, utilised basic synthesis techniques [7].

Figure 3: The Telharmonium [7].

Many subsequent innovations in synthesiser technology were enabled by the invention of the electronic amplifying vacuum tube in 1906 [8]. Their reliance on tube circuitry rendered these instruments unwieldy, volatile, and unreliable. The invention of the transistor in 1947 made smaller and more dependable machines possible.

Analogue synthesisers as we know them today were first designed by Don Buchla and Robert Moog, who initially created cumbersome and convoluted devices such as the 100 series Modular Electronic Music System (shown in Figure 4), first produced in 1963. This synthesiser was controlled by rather unusual capacitance-sensitive touch plates and was not commercially successful.

Figure 4: Don Buchla's 100 series [9].

The 100 series consisted of various modules mounted in a cabinet, each performing its own signal-generating or signal-modifying function. The modules had to be interconnected via a confusing mesh of wires known as patch cords, and as a result using the 100 series required significant expertise. In 1964, Robert Moog, who had previously only made Theremin kits (the instrument popularly associated with the Star Trek theme music), produced his first voltage-controlled modular synthesiser, the Moog Modular. Unlike Buchla's 100 series, it featured a familiar piano-style keyboard as the controller. The Moog Modular was a huge success and shortly after its release was found in the hands of Keith Emerson, The Beatles, and The Rolling Stones. These early synths were not adopted by average musicians, however, due to their high cost and complexity [10].

Figure 5: The first Moog Modular, from 1964 [11].

This success prompted Moog Music to design and manufacture increasingly innovative synths. In 1970, Moog created the best-selling Minimoog. Its popularity was due to its focus on portability, usability, and affordability, as well as its famous three-oscillator bass sound. The popularity of the Minimoog and similar synths began the shift away from large modular cabinets towards smaller, more straightforward keyboard instruments. In 1973, Yamaha licensed John Chowning's discovery of digital FM synthesis and went on to release the revolutionary Yamaha DX7 in 1983. The DX7 came to define the sound of the 80s and led to the development of many more digital synthesisers capable of synthesis methods that were not possible on analog hardware.

Components of Synthesisers

While analog and digital synthesisers naturally contain different components, the basic principles behind each signal-processing stage are identical [12]. All analog synths share the same basic components, known as modules, listed below (a software sketch of the resulting signal chain follows Figure 6):

· Voltage-controlled oscillator (VCO) – Generates the raw foundation waveform, e.g. sine, sawtooth, square, and triangular waves. Several of these may be used depending on the type of synthesis. It is an electronic circuit producing an oscillating voltage signal, with a variety of waveforms, whose frequency depends on an input voltage known as the control voltage.

· Low-frequency oscillator (LFO) – An oscillator with a variable frequency used to add tremolo (amplitude modulation) or vibrato (frequency modulation). Similar to the VCO but with a much lower operating frequency, below the audible range.

· ADSR envelope generator (EG) – Creates a voltage envelope signal known as an ADSR envelope.

· Voltage-controlled filter (VCF) – Attenuates certain partials of the raw signal to create a specific timbre. Typically a low-pass RC circuit with a controllable cut-off frequency and resonance [13].

· Voltage-controlled amplifier (VCA) – Adjusts the amount of the signal that is passed and therefore controls the loudness of the sound; the loudness of the resultant sound depends on the control voltage.

Figure 6: A schematic diagram showing the components and how they are used in conjunction with one another.
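To make the signal chain concrete, here is a minimal software analogue of the VCO → VCF → VCA path sketched in Python (a hypothetical illustration using SciPy's filter design, not a model of any particular hardware synth):

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

# "VCO": a naive sawtooth oscillator at 110 Hz (harmonically rich).
saw = 2 * (t * 110.0 % 1.0) - 1

# "VCF": a low-pass filter with a 1 kHz cut-off removes upper partials.
b, a = butter(2, 1000, btype="low", fs=fs)
filtered = lfilter(b, a, saw)

# "VCA": a simple gain stage controls the output loudness.
out = 0.5 * filtered
```

In a real synth each stage is voltage-controlled; here the cut-off and gain are fixed constants purely to show the order of operations.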

As mentioned previously, imitating how an instrument's loudness and frequency content vary with time must be combined with matching its frequency spectrum to reproduce its unique sound. This variation is governed by the Attack Decay Sustain Release (ADSR) envelope:

· Attack – The key is pressed, and the voltage builds from zero to its maximum.

· Decay – After the initial attack, the voltage decays to a steady value (the sustain level).

· Sustain – The voltage is maintained while the key remains pressed.

· Release – The key is released, and the voltage decreases from the sustain level to zero.

Attack, decay, and release are characterised by their durations, but sustain is specified by the maintained voltage level, because the duration of the sustain phase is decided by the player. A code sketch of such an envelope follows Figure 7.

Figure 7: A typical ADSR envelope [14].
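A minimal sketch of an ADSR generator in Python (an illustration with assumed parameter names, not a description of any particular synth's implementation):

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, fs=44100):
    """Piecewise-linear ADSR envelope; times in seconds, level in [0, 1]."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * fs), endpoint=False)
    s = np.full(int(sustain_time * fs), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * fs))
    return np.concatenate([a, d, s, r])

# Shape a 440 Hz tone: fast attack, short decay, then a gentle release.
fs = 44100
env = adsr(0.01, 0.1, 0.7, 0.5, 0.3)
t = np.arange(len(env)) / fs
note = env * np.sin(2 * np.pi * 440 * t)
```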

ADSR envelopes are implemented in both the VCF and the VCA,
where the envelopes are generally different. In the VCA, the voltage variation
provided by the ADSR is taken as the control voltage and hence controls the loudness.
The ADSR is applied to the VCF to change the cut-off frequency over time. This
nearly always is used so that the cut-off frequency decreases during the decay
and release periods of the loudness, which consequently reduces the number of
partials present.

Types of Synthesis

There are many types of sound synthesis utilised by modern synthesisers, each producing its own unique sound. For all of the following types, the fundamental frequency, and hence the pitch, is determined by the controller (usually a keyboard).

Additive

The earliest synths, such as the Telharmonium and the Hammond organ, produced their sound via additive synthesis. This method uses multiple VCOs with sine-wave outputs which are added together. The timbre of an instrument is replicated by tuning the VCOs' frequencies and amplitudes to match its frequency spectrum. This is the most basic synthesis technique and is rarely used in modern synths because of the large number of VCOs required to produce a harmonically rich sound.
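A sketch of additive synthesis in Python (the partial amplitudes below are illustrative values, not measurements of any real instrument):

```python
import numpy as np

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
f0 = 261.63                      # middle C fundamental

# Each (harmonic number, amplitude) pair stands in for one sine "VCO".
partials = [(1, 1.0), (2, 0.6), (3, 0.4), (4, 0.25), (5, 0.15)]

tone = sum(a * np.sin(2 * np.pi * n * f0 * t) for n, a in partials)
tone /= np.max(np.abs(tone))     # normalise to avoid clipping
```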

Subtractive

Subtractive synthesis was used by Robert Moog's first modular synthesisers. The technique begins with the VCO generating a harmonically rich signal (such as a square, sawtooth, or triangular wave), which is then filtered by the VCF so that only certain partials remain in the resulting signal [15]. Typically, the fundamental frequency (and hence the pitch) is not altered by the VCF, so only the timbre of the sound is affected. The same principle operates in the human voice, where the vocal cords produce a harmonically rich waveform that is filtered by the throat and mouth according to how widely they are opened. All modern analog and virtual-analog synthesisers use subtractive synthesis, and it was the method used by the first modular synths, Buchla's 100 series and the Moog Modular.
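To see the defining behaviour (upper partials attenuated, fundamental and pitch preserved), the following sketch compares harmonic amplitudes before and after a low-pass filter; the cut-off and filter order are illustrative choices only.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
saw = 2 * (t * 220.0 % 1.0) - 1          # harmonically rich source

b, a = butter(4, 800, btype="low", fs=fs)
y = lfilter(b, a, saw)

spec_in = np.abs(np.fft.rfft(saw))
spec_out = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(len(t), 1 / fs)

# The 220 Hz fundamental passes almost unchanged; partials above the
# 800 Hz cut-off are strongly attenuated, altering timbre but not pitch.
for k in (1, 2, 5, 10):                   # harmonic numbers
    i = np.argmin(np.abs(f - 220 * k))
    print(k, round(spec_out[i] / spec_in[i], 3))
```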

FM Synthesis

In the 1980s and 1990s, subtractive synthesis was overtaken as the most popular technique by frequency modulation (FM) synthesis, which appeared on the Yamaha DX7 as the first digital synthesis method. The main idea is to take a carrier waveform with a simple structure, such as a sine or square wave, and use a modulator waveform to vary its frequency. The wave you hear is the carrier, with the modulator serving as its input so that its frequency is continually changing. For low modulating frequencies the carrier sounds like a siren, but when the modulator frequency enters the audible range, new frequencies appear in the carrier's spectrum [15].

Figure 8: The waveform resulting from combining carrier and modulator waveforms using FM synthesis [16].

These additional frequencies are known as sidebands; they appear above and below the carrier frequency, separated from it by integer multiples of the modulating frequency [17]. The technique can therefore produce a very complex waveform with many partials, which are not necessarily harmonic.
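A minimal two-operator FM sketch in Python (a simple illustration of the technique, with arbitrary carrier and modulator settings rather than any DX7 patch):

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

fc = 440.0        # carrier frequency (Hz)
fm = 110.0        # modulator frequency (Hz)
index = 3.0       # modulation index: depth of the frequency deviation

# Simple FM: the modulator's output is added to the carrier's phase.
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Sidebands appear at fc ± k·fm (330, 550, 220, 660 Hz, ...), with
# amplitudes governed by Bessel functions of the modulation index.
```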

 

Consequently, FM synthesis excels at creating metallic and bell-like timbres (which have inharmonic partials in their spectra), as well as electric piano sounds. It is favoured in dance and drum-and-bass genres for the punchy bass and percussive sounds it can create.

Wavetable Synthesis

Wavetable synthesis is a digital technique and is by far the most commonly used today. It utilises a table of many hundreds of pre-existing single-cycle waveforms. These waveforms can be played directly and looped, or chained so that the sound evolves from one to the next, with digital interpolation between the individual waveforms. This interpolation allows for dynamic and smoothly transitioning timbres. The stored waveforms are usually recorded sounds (known as samples), whose pitches are altered by speeding up or slowing down the playback rate. Single cycles are stored to reduce memory requirements [18].
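A sketch of single-cycle wavetable playback with linear interpolation in Python (the table size and its contents are arbitrary choices for illustration):

```python
import numpy as np

fs = 44100
table = np.sin(2 * np.pi * np.arange(2048) / 2048)   # one stored cycle

def wavetable_play(table, freq, dur, fs=44100):
    """Read a single-cycle table at the rate needed for the target pitch."""
    n = int(dur * fs)
    # The phase advances freq * len(table) table samples per second.
    phase = (np.arange(n) * freq * len(table) / fs) % len(table)
    i = phase.astype(int)                  # index below the read position
    frac = phase - i                       # fractional part for interpolation
    nxt = (i + 1) % len(table)             # wrap around the cycle
    return (1 - frac) * table[i] + frac * table[nxt]

note = wavetable_play(table, 440.0, 1.0)
```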

Effects

The interesting and varied sounds that synths can produce are also due to the vast range of effects available. In addition to effects that can be applied to traditional acoustic instruments, such as vibrato on a violin, a synth offers greater versatility through the plethora of effects that are not possible without electronic techniques.

Vibrato

Vibrato is the subtle variation of a sound's pitch caused by a continuously changing frequency. In a synth, this frequency modulation is produced by the LFO, and the character of the vibrato depends on the LFO's parameters: frequency, amplitude, and waveform. If the modulating signal has a sine or triangular waveform, the pitch rises and falls smoothly, whereas a square wave makes the pitch alternate between two values, which is known as a trill. This is similar to FM synthesis, but the LFO signal fed into the VCO has a much lower frequency, below the audible range.
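A small Python sketch of LFO-driven vibrato (the 6 Hz rate and 1% depth are typical illustrative values, not fixed standards):

```python
import numpy as np

fs = 44100
t = np.arange(0, 2.0, 1 / fs)

f0 = 440.0                                 # note frequency
lfo = np.sin(2 * np.pi * 6.0 * t)          # 6 Hz sub-audio LFO
depth = 0.01                               # ±1% frequency deviation

# Integrate the instantaneous frequency to get the oscillator phase.
inst_freq = f0 * (1 + depth * lfo)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
vibrato_tone = np.sin(phase)
```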

Tremolo

Tremolo is the periodic modulation of the loudness, resulting in a shuddering sound. The LFO's signal is fed to the VCA, where it acts as an envelope much like the ADSR (a sketch follows Figure 10). When used in conjunction with the ADSR envelope, the tremolo is typically only audible during the sustain phase.

Figure 10: The frequency modulation (FM) used for vibrato and the amplitude modulation (AM) used for tremolo [19].
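A matching sketch of tremolo as amplitude modulation (the 5 Hz rate and 50% depth are illustrative choices):

```python
import numpy as np

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 440.0 * t)

# The LFO scales the amplitude between (1 - depth) and 1.
depth = 0.5
lfo = 0.5 * (1 + np.sin(2 * np.pi * 5.0 * t))   # swings between 0 and 1
tremolo_tone = carrier * (1 - depth * lfo)
```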

Chorus

The chorus effect arises when multiple sounds with approximately the same pitch and timbre are perceived as one. It can be created with multiple sources, as in an orchestra, or produced electronically by a synthesiser. A synth produces chorus by mixing the sound signal with one or more copies of itself whose pitch has been modulated by the LFO. The small differences in pitch create a slight shimmering effect due to beating; the detuning is kept very small to simulate the minute differences in tuning between an orchestra's instruments.
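One common digital realisation, sketched here with illustrative parameter values, modulates a short delay line with the LFO so that the delayed copy is continuously and slightly pitch-shifted relative to the dry signal:

```python
import numpy as np

def chorus(x, rate=0.8, depth_ms=3.0, base_ms=20.0, fs=44100):
    """Mix the input with one pitch-wobbled copy via a modulated delay line."""
    n = np.arange(len(x))
    # The LFO sweeps the delay time, which slightly shifts the copy's pitch.
    delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate * n / fs)) * fs / 1000
    pos = n - delay                          # fractional read positions
    i = np.clip(pos.astype(int), 0, len(x) - 2)
    frac = np.clip(pos - i, 0.0, 1.0)
    delayed = (1 - frac) * x[i] + frac * x[i + 1]
    return 0.5 * (x + delayed)               # dry + wet mix
```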

Reverb

Reverb occurs naturally when an instrument is played in a large room, due to reflections of the sound waves from the walls. A synthesiser can recreate it using the mathematical convolution operation: the impulse response (IR) of the room whose reverb characteristics are to be simulated is convolved with the synthesised waveform. The IR is the recorded response of the acoustic space to an audio stimulus [20]. The effect creates a series of very fast echoes which add depth to the sound.
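A convolution-reverb sketch in Python; the synthetic exponentially decaying noise below stands in for a real recorded IR, which would normally be loaded from a file:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
dry = np.sin(2 * np.pi * 440.0 * t) * np.exp(-4 * t)   # a plucked-style note

# Stand-in impulse response: 1.5 s of exponentially decaying noise.
ir_t = np.arange(0, 1.5, 1 / fs)
ir = np.random.randn(len(ir_t)) * np.exp(-3 * ir_t)

wet = fftconvolve(dry, ir)          # convolution applies the room's echoes
wet /= np.max(np.abs(wet))          # normalise to avoid clipping
```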

Conclusion

The software innovations made possible by the increased processing power of computers mean that digital and analog synthesisers are no longer the only way to synthesise new sounds. Computer programs that generate sounds entirely in software have emerged, albeit sometimes with poor response times. However, many believe dedicated instruments are, and will always be, the firm favourite of professional musicians due to their unique character. Much to many musicians' dismay, some manufacturers have removed much of the customisability and control in favour of pre-set voices in a move to reduce the cost of their instruments. Thankfully, in recent years analog synthesisers have enjoyed a revival, with huge improvements over their vintage counterparts. Many leading manufacturers such as Yamaha, Roland, and Korg continue to innovate in this area.

Synthesisers have undeniably led to massive changes in contemporary music, and with future innovations in music technology many more are surely to come.

 

References

[1] W. M. Hartmann, "The electronic music synthesiser and the physics of music," Michigan State University, Michigan, 1975.

[2] U. Egede, Fourier Transforms, London: Physics Department, Imperial College London, 2017.

[3] J. F. Alm and J. S. Walker, "Time-Frequency Analysis of Musical Instruments," SIAM Review, vol. 44, no. 3, pp. 457-476, 2002.

[4] D. Creasey, Audio Processes: Musical Analysis, Modification, Synthesis, and Control, New York: Routledge, 2017.

[5] M. R. Peterson, "Musical Analysis and Synthesis in Matlab," MAA's College Mathematics Journal, vol. 35, no. 4, pp. 369-401, 2004.

[6] D. S. Durfee and J. S. Colton, "The physics of musical scales: Theory and experiment," American Journal of Physics, vol. 83, no. 10, pp. 835-842, 2015.

[7] T. L. Rhea, "The Telharmonium: A History of the First Music Synthesizer," Computer Music Journal, vol. 12, no. 3, pp. 59-63, 1988.

[8] R. E. Fielding, "Lee de Forest," Encyclopaedia Britannica, 3 April 2017. [Online]. Available: https://www.britannica.com/biography/Lee-de-Forest. [Accessed 28 December 2017].

[9] R. Smith, "Buchla 100 Series," Vintage Synths, 20 March 1994. [Online]. Available: http://www.vintagesynth.com/misc/buchla100.php. [Accessed 27 December 2017].

[10] M. Vail, Vintage Synthesisers, San Francisco: Miller Freeman Books, 2000.

[11] R. Luther, "Moog Product Timeline," MoogMusic, 6 February 2007. [Online]. Available: https://www.moogmusic.com/legacy/moog-product-timeline. [Accessed 27 December 2017].

[12] S. O'Sullivan, "Pro Audio Files: The Basics of Sound Synthesis," 2012. [Online]. Available: https://theproaudiofiles.com/sound-synthesis-basics/. [Accessed 1 December 2017].

[13] J. Pekonen and V. Välimäki, "The Brief History of Virtual Analog Synthesis," in Proceedings of Forum Acusticum, Aalborg, 2011.

[14] M. Duerinckx, "ADSR Envelope Generator Module," Synth DIY with Mich, 11 July 2017. [Online]. Available: https://synth.michd.me/module/adsr-eg/. [Accessed 21 December 2017].

[15] S. Rise, "FM Synthesis," The Synthesiser Academy, 6 April 2013. [Online]. Available: http://synthesizeracademy.com/fm-synthesis/. [Accessed 23 December 2017].

[16] Apple Inc., "Logic Studio Instruments: Other Synthesis Methods," Apple Inc., 2009. [Online]. Available: https://documentation.apple.com/en/logicstudio/instruments/. [Accessed 28 December 2017].

[17] C. Roads, The Computer Music Tutorial, Cambridge: MIT Press, 1996.

[18] J. O. Smith, "Virtual Acoustic Musical Instruments: Review and Update," Journal of New Music Research, vol. 33, no. 3, pp. 283-304, 2005.

[19] M. Russ, "Making sounds with analogue electronics – Part 5: Using analogue synthesis," Embedded, 25 January 2012. [Online]. Available: https://www.embedded.com/design/audio-design/4235289/Making-sounds-with-analogue-electronics—Part-5–Using-analogue-synthesis. [Accessed 4 January 2018].

[20] D. V. Nikolov, M. J. Mišić and M. V. Tomašević, "GPU-based implementation of reverb effect," in 2015 23rd Telecommunications Forum Telfor (TELFOR), Belgrade, 2015.

[21] N. Lenssen and D. Needell, "An Introduction to Fourier Analysis with Applications to Music," Journal of Humanistic Mathematics, vol. 4, no. 1, pp. 72-91, 2014.

[22] University of Salford, "Sound Synthesis Tutorial," 2010. [Online]. Available: http://www.acoustics.salford.ac.uk/acoustics_info/sound_synthesis/. [Accessed 18 December 2017].

[23] W. A. Sethares, Tuning, Timbre, Spectrum, Scale, Madison: Springer, 1998.