Sound Synthesis
When any mechanical collision occurs, such as a fork being dropped, sound is produced. The energy from the collision propagates through the air (or another medium) and, if heard, into your ears. At the smallest scale, the collision produces sine waves. When several sine waves exist in the same place at the same time, they add together into a more complex waveform that can still be described as a sum of sine functions. The environment in which the collision occurs modulates, or changes, the wave, and therefore the "sound". The basic idea behind sound synthesis is to use a machine to produce a sound wave that a collision would have produced naturally, and then to manipulate it the way an environment would.
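The idea that coexisting sine waves simply add together can be sketched in a few lines of code. This is an illustrative example, not from the original text; the sample rate and frequencies are arbitrary choices.

```python
import math

SAMPLE_RATE = 44100  # samples per second, a common audio rate

def sine_wave(freq, num_samples, amplitude=1.0):
    """Generate a sine wave as a list of samples."""
    return [amplitude * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            for n in range(num_samples)]

# Two sine waves existing "in the same place at the same time" combine
# by simple sample-by-sample addition, producing a more complex waveform.
a = sine_wave(440.0, 1024)        # A4
b = sine_wave(880.0, 1024, 0.5)   # its octave, at half amplitude
combined = [x + y for x, y in zip(a, b)]
```

Every periodic waveform a synthesizer produces can, conversely, be decomposed back into such sine components.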
Generally, a single "sound" will include a fundamental frequency and any number of overtones. The frequencies of these overtones are usually integer multiples of the fundamental (harmonics), though integer fractions of it (subharmonics) also occur. The study of how complex waveforms can be decomposed into such sinusoidal components is the subject of Fourier (and, more generally, Laplace) analysis.
When natural tonal instruments' sounds are analyzed in the frequency domain (as on a spectrum analyzer), the spectra of their sounds will exhibit amplitude spikes at each of the fundamental tone's harmonics. Some harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as a sound's harmonic content.
When analyzed in the time domain, a sound does not necessarily have the same harmonic content throughout its duration. Typically, high-frequency harmonics die out more quickly than the lower ones. For a synthesized sound to "sound" right, the original must be reproduced accurately in both the frequency domain and the time domain.
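The amplitude spikes described above can be made visible with a Fourier transform. As a minimal sketch (a naive discrete Fourier transform, far slower than a real FFT, with an invented test tone), the spectrum of a signal containing a fundamental and one weaker harmonic shows exactly two spikes:

```python
import math
import cmath

SAMPLE_RATE = 8000

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns one magnitude per frequency bin."""
    N = len(samples)
    return [abs(sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

# A tone with a fundamental at 500 Hz plus a weaker 2nd harmonic at 1000 Hz.
N = 800  # 0.1 s at 8 kHz, so each frequency bin is exactly 10 Hz wide
tone = [math.sin(2 * math.pi * 500 * n / SAMPLE_RATE) +
        0.4 * math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE)
        for n in range(N)]

mags = dft_magnitudes(tone)
# The spectrum spikes at bin 50 (500 Hz) and bin 100 (1000 Hz);
# the set of such (harmonic, amplitude) pairs is the harmonic content.
```

Running the same analysis on successive short windows of a recording (a spectrogram) reveals the time-domain behavior as well, such as high harmonics dying out first.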
Percussion instruments and rasps have very low harmonic content; their spectra consist mainly of noise shaped by the resonant frequencies of the structures that produce the sounds. The resonant properties of an instrument's body (the spectral peaks of which are also referred to as formants) shape its spectrum as well, especially in string, wind, voice and other natural instruments.
In most conventional synthesizers, for purposes of re-synthesis, recordings of real instruments are composed of several components.
These component sounds represent the acoustic responses of different parts of the instrument, the sounds produced by the instrument during different parts of a performance, or the behavior of the instrument under different playing conditions (pitch, intensity of playing, fingering, etc.). The distinctive timbre, intonation and attack of a real instrument can therefore be recreated by mixing these components together in a way that resembles the natural behavior of the real instrument. Nomenclature varies by synthesizer methodology and manufacturer, but the components are often referred to as oscillators or partials. A higher-fidelity reproduction of a natural instrument can typically be achieved with more oscillators, but this demands more computational power and more programming effort, and most synthesizers use between one and four oscillators by default.
Schematic of ADSR
One of the most important parts of any sound is its amplitude envelope. This envelope determines whether the sound is percussive, like a snare drum, or sustained, like a bowed violin string. Most often, this shaping of the sound's amplitude profile is realized with an "ADSR" (Attack, Decay, Sustain, Release) envelope model applied to control the oscillator volumes. Apart from Sustain, each of these stages is modeled by a change in volume (typically exponential). Although the oscillations in real instruments also change in frequency over time, most instruments can be modeled well without this refinement; it becomes necessary mainly to produce effects such as vibrato.
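The ADSR model can be sketched as a short function. For simplicity this sketch uses linear ramps rather than the exponential curves mentioned above, and the stage times are arbitrary example values:

```python
import math

SAMPLE_RATE = 44100

def adsr_envelope(num_samples, attack, decay, sustain_level, release):
    """Piecewise-linear ADSR amplitude envelope (stage times in seconds).
    The sustain stage fills whatever time remains between decay and release."""
    a = int(attack * SAMPLE_RATE)
    d = int(decay * SAMPLE_RATE)
    r = int(release * SAMPLE_RATE)
    s = max(num_samples - a - d - r, 0)
    env = []
    env += [i / a for i in range(a)]                            # Attack: 0 -> 1
    env += [1 - (1 - sustain_level) * i / d for i in range(d)]  # Decay: 1 -> sustain
    env += [sustain_level] * s                                  # Sustain: hold
    env += [sustain_level * (1 - i / r) for i in range(r)]      # Release: sustain -> 0
    return env[:num_samples]

# Shape an oscillator's output by multiplying it by the envelope.
n = SAMPLE_RATE  # one second of audio
tone = [math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(n)]
env = adsr_envelope(n, attack=0.05, decay=0.10, sustain_level=0.7, release=0.20)
shaped = [s * e for s, e in zip(tone, env)]
```

A percussive sound would use a near-zero attack and sustain level; a bowed-string sound would use a slower attack and a high sustain level.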
Additive Synthesis
Additive synthesis is a technique of audio synthesis that builds a musical timbre by summing simple waveforms, usually sine waves.
The timbre of an instrument is composed of multiple harmonics or partials, in different quantities, that change over time. Additive synthesis emulates such timbres by combining numerous waveforms pitched to different harmonics, with a different amplitude envelope on each, along with inharmonic artifacts. Usually, this involves a bank of oscillators tuned to multiples of the base frequency. Often, each oscillator has its own customizable volume envelope, creating a realistic, dynamic sound that changes over time.
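A minimal additive-synthesis sketch follows. The per-partial decay rule (higher harmonics decay faster) is a simple illustrative assumption, not a measured instrument model:

```python
import math

SAMPLE_RATE = 44100

def additive_tone(fundamental, harmonic_amps, duration):
    """Sum sine-wave partials at integer multiples of the fundamental.
    Each partial gets its own exponential decay envelope; higher harmonics
    are made to die out faster, as they tend to in real tones."""
    n = int(duration * SAMPLE_RATE)
    out = [0.0] * n
    for k, amp in enumerate(harmonic_amps, start=1):
        freq = fundamental * k
        decay = 3.0 * k  # assumed rule: the k-th harmonic decays k times faster
        for i in range(n):
            t = i / SAMPLE_RATE
            out[i] += amp * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)
    return out

# Four harmonics with diminishing amplitudes approximate a plucked tone.
tone = additive_tone(220.0, [1.0, 0.5, 0.3, 0.2], duration=0.5)
```

Each entry in `harmonic_amps` plays the role of one oscillator in the bank; a fuller implementation would give each partial an independent, arbitrary envelope rather than this fixed decay rule.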
Frequency Modulation Synthesis
In audio and music, frequency modulation synthesis (or FM synthesis) is a form of audio synthesis in which the timbre of a simple waveform is changed by frequency-modulating it with a modulating signal that is also in the audio range, resulting in a more complex waveform and a different-sounding tone. The frequency of an oscillator is altered in accordance with the amplitude of a modulating signal. For synthesizing harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal. As the amount of frequency modulation increases, the sound grows progressively more complex. By using modulators with frequencies that are non-integer multiples of the carrier frequency (i.e., inharmonic), bell-like, dissonant and percussive sounds can easily be created.
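A two-operator FM voice can be sketched as follows. This sketch actually implements phase modulation (the common digital realization of FM, as in Chowning-style synthesizers); the frequencies, ratios and modulation index are arbitrary example values:

```python
import math

SAMPLE_RATE = 44100

def fm_tone(carrier_freq, ratio, mod_index, num_samples):
    """Two-operator FM voice: the carrier's phase is offset by a modulator
    whose frequency is carrier_freq * ratio; mod_index sets the depth."""
    out = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE
        modulator = math.sin(2 * math.pi * carrier_freq * ratio * t)
        out.append(math.sin(2 * math.pi * carrier_freq * t + mod_index * modulator))
    return out

# An integer carrier:modulator ratio yields a harmonic spectrum, and the
# tone grows brighter as mod_index rises. A non-integer ratio yields an
# inharmonic, bell-like spectrum.
harmonic = fm_tone(220.0, ratio=2.0, mod_index=3.0, num_samples=22050)
bell = fm_tone(220.0, ratio=1.41, mod_index=3.0, num_samples=22050)
```

Note that the output stays within [-1, 1] regardless of `mod_index`, since modulation changes the carrier's phase, not its amplitude; the added complexity appears entirely in the spectrum.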
Robert Moog
Robert Arthur Moog was an American pioneer of electronic music, best known as the inventor of the Moog synthesizer. He is widely regarded as one of the most influential figures in the history of electronic music.