Wednesday, January 30, 2013

ADSR

A - Attack: time taken for initial run-up of level from nil to peak, beginning when the key is first pressed.
D - Decay: time taken for the subsequent run down from the attack level to the designated sustain level.
S - Sustain: level during the main sequence of the sound's duration, until the key is released.
R - Release: time taken for the level to decay from the sustain level to zero after the key is released.
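The four stages above can be sketched as a tiny function. This is just an illustration (not from the Wikipedia article): a linear envelope where attack, decay, and release are times in seconds and sustain is a level between 0 and 1.

```python
# Minimal linear ADSR sketch (illustrative only).
# Assumes the key is held longer than attack + decay.
def adsr(t, gate_time, attack=0.05, decay=0.1, sustain=0.7, release=0.3):
    """Envelope level at time t (seconds) for a key held for gate_time seconds."""
    if t < 0:
        return 0.0
    if t < attack:                        # run-up from 0 to peak (1.0)
        return t / attack
    if t < attack + decay:                # run-down from peak to sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < gate_time:                     # hold sustain level while key is down
        return sustain
    if t < gate_time + release:           # fall from sustain to 0 after key-up
        return sustain * (1.0 - (t - gate_time) / release)
    return 0.0
```

Plotting `adsr(t, 1.0)` for t from 0 to 1.5 reproduces the classic four-segment envelope shape.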

http://en.wikipedia.org/wiki/Synthesizer#ADSR_envelope



ADSR

ADSR is an acronym that stands for attack, decay, sustain, and release, in reference to an amplitude envelope for a synthesizer. Attack is the initial hit of the note, and controls how the note comes from silence into life. Decay controls how the sound falls from that initial peak down to the sustain level. Sustain is the level the note holds for as long as the key is pressed. Release is in charge of how the volume diminishes once the key is let go.

.MID ?

When you record music onto a computer using MIDI, the software saves this list of messages and instructions as a .MID file. If you play the .MID file back on an electronic keyboard, the keyboard's internal synthesizer software follows the instructions to play back the song. The keyboard will play a certain key with a certain velocity and hold it for a specified amount of time before moving on to the next note.
But .MID files aren't restricted to keyboards or other electronic musical instruments. They can be played on any electronic device that contains synthesizer software. Any computer with a sound card can play back .MID files. Cell phones use .MID files to play elaborate ringtones. MIDI data files are perfect for karaoke machines, because they allow the machine to easily change pitch for different vocal ranges. The .MID file will sound a little different on each device because the audio sources are different.

ADSR(Attack, Decay, Sustain and Release)

ADSR refers to Attack, Decay, Sustain and Release. Depending on what you are working with, these terms can mean slightly different things.

Picture of an ADSR envelope.

Knowing how ADSR works is very useful, especially if you are working with synthesizers and trying to create your own patches. By understanding the basics and being able to shape each of these four stages of the envelope, you can create the sounds you are looking for.
Whether you need washy pads with a long attack and sustain, or quick synth stabs with an instant attack and no release, you can easily program your synths or processors to produce the sounds you are after.

For more detailed information on each of these terms, the link below explains each of them with graphical representations. Very useful.

http://audio.tutsplus.com/tutorials/production/an-introduction-to-adsr/

Monday, January 28, 2013

MIDI

MIDI is an important piece of musical technology that allows a wide variety of digital instruments, computers, and other devices to communicate with each other. This wide-ranging compatibility between devices makes it a good choice for musical creation. A MIDI file specifies notes, pitch, velocity, and many other control signals. It is easily manipulated to produce exactly the sound you want, and it helped spur the birth of electronic music.

MIDI in Action

Hybrid MIDI Turntable in Action


MIDI in Performance



MIDI controllers can be customized greatly. This DJ created his own MIDI controller for performance purposes.

What is MIDI?

MIDI, short for Musical Instrument Digital Interface, is a protocol for playing digitally generated sounds. Though it is not inherently an audio file, it contains the data output from electronic musical instruments. MIDI describes the pitch, velocity, volume, vibrato, etc. of whatever is being played.

It is a standard way for devices to communicate, which means maximum compatibility. 

MIDI (Musical Instrument Digital Interface)

MIDI is short for "Musical Instrument Digital Interface". Perhaps the best way to understand the meaning of MIDI is to first understand what it is not:
  • MIDI isn't music
  • MIDI doesn't contain any actual sounds
  • MIDI isn't a digital music file format like MP3 or WAV
MIDI is nothing more than data -- a set of instructions. MIDI data contains a list of events or messages that tell an electronic device (musical instrument, computer sound card, cell phone, et cetera) how to generate a certain sound. Here are a few examples of typical MIDI messages:
  • Note On signals that a key has been pressed or a note on another instrument (like a MIDI guitar or clarinet) has been played. The Note On message includes instructions for what key was pressed and at what velocity (how hard the note was played).
  • Note Off signals that the key has been released or the note is done playing.
  • Polyphonic Key Pressure is a measurement of how hard a key is pressed once it "bottoms out." On some keyboards, this adds vibrato or other effects to the note.
  • Control Change indicates that a controller -- perhaps a foot pedal or a fader knob -- has been pressed or turned. The control change message includes the number assigned to the controller and the value of the change (0-127).
  • Pitch Wheel Change signals that the pitch of the note has been bent with the keyboard's pitch wheel.
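The messages listed above really are just a few bytes each. In the MIDI spec, a channel message is a status byte (message type in the high nibble, channel 0-15 in the low nibble) followed by one or two 7-bit data bytes. Here's a small Python sketch packing the Note On, Note Off, and Control Change messages described above:

```python
# Packing basic MIDI channel messages as raw bytes (status byte + data bytes).
# Status high nibble: 0x9 = Note On, 0x8 = Note Off, 0xB = Control Change.
def note_on(channel, key, velocity):
    return bytes([0x90 | channel, key & 0x7F, velocity & 0x7F])

def note_off(channel, key, velocity=0):
    return bytes([0x80 | channel, key & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    return bytes([0xB0 | channel, controller & 0x7F, value & 0x7F])

# Middle C (key 60) played fairly hard (velocity 100) on channel 0:
msg = note_on(0, 60, 100)
```

Notice how little data that is: three bytes per note event, which is why .MID files are so small compared to audio files.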

Friday, January 25, 2013

Class Today

There will not be a class today, please take this time to work on connecting your sound-file players to the

1.) RING MODULATION PATCH and the
2.) VCF Patch on the FLOSS Tutorials site.

Also this weekend please

3.) Blog on MIDI for Monday's class and
4.) download 2 .mid files for our work on MON-WED of next week.

I also would like everyone to

5.) signup for a SOUNDCLOUD FREE ACCOUNT

I am feeling sick and do not want to subject all of you to my illness.

Patrick

Friday, January 18, 2013

Integra Live

Hi Everyone, I thought I should tell you about this new FREE program I have been checking out that uses Pure Data as its musical engine: Integra Live. I would like everyone to try it in class this coming week, but feel free to take a crack at it before then. It makes some wonderful sounds and adds to our resources for sound design.

go here and download it:
http://www.integralive.org/

Integra Live has been built from the ground up using open source software and open standards. Our audio processing host runs in the Pure Data engine, communicating with the Integra server via Open Sound Control messaging. The graphical front end is written using the open source Apache Flex framework. Integra Live itself is freely available under the GNU GPL license with source code available on Sourceforge.

FM Synthesis

An interesting video explaining fm synthesis:


granular & additive synthesis

https://www.youtube.com/watch?v=9pn_b7OUO6I

This video is kinda long, but the musician talks about a song he produced using granular synthesis. He speaks about the different sounds in his piece and how he made them, kinda like a behind-the-scenes of a movie.

Also, from my understanding, additive synthesis is a combination of sine waves that together generate a timbre.
The link below explains this combination of sine waves. Disregard the end of it though; they are promoting a product, but the beginning has a fairly good description.

http://www.youtube.com/watch?v=BLoM9bBr8lc

Additive synthesis and Distortion

Additive synthesis:

Creating sound by adding together multiple sine waves. It is used in electronic music.
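That definition fits in a few lines of code. Here's a rough Python sketch (my own illustration, not from any of the linked tutorials) that sums sine-wave partials; weighting harmonic n by 1/n approximates a sawtooth, a classic additive-synthesis demo:

```python
import math

# Additive synthesis sketch: sum sine partials at multiples of a fundamental.
# partials is a list of (harmonic_number, amplitude) pairs.
def additive(t, fundamental, partials):
    """Instantaneous value at time t (seconds) of a sum of harmonics."""
    return sum(a * math.sin(2 * math.pi * fundamental * n * t)
               for n, a in partials)

# Approximate a sawtooth: amplitude of harmonic n falls off as 1/n.
saw_partials = [(n, 1.0 / n) for n in range(1, 9)]
sample = additive(0.001, 220.0, saw_partials)
```

Adding more partials sharpens the sawtooth; changing the amplitude weights changes the timbre, which is the whole idea of additive synthesis.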

Distortion:

"Harmonic distortion adds overtones that are whole number multiples of a sound wave's frequencies.[1] Nonlinearities that give rise to amplitude distortion in audio systems are most often measured in terms of the harmonics (overtones) added to a pure sinewave fed to the system."

http://en.wikipedia.org/wiki/Distortion
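To see the quoted idea in action, here's a small Python sketch (illustrative only): hard-clipping a pure sine is a simple nonlinearity, and a naive DFT shows that the clipped wave now has energy at an odd harmonic where the pure sine had none.

```python
import math

# Harmonic distortion sketch: a nonlinearity (hard clipping) adds
# whole-number-multiple overtones to a pure sine wave.
def hard_clip(x, threshold=0.5):
    return max(-threshold, min(threshold, x))

def dft_mag(samples, k):
    """Magnitude of bin k of a naive DFT, scaled by 1/N."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

pure = [math.sin(2 * math.pi * i / 64) for i in range(64)]
clipped = [hard_clip(s) for s in pure]
# The clipped wave has energy at the 3rd harmonic; the pure sine does not.
```

Comparing `dft_mag(pure, 3)` with `dft_mag(clipped, 3)` makes the "added overtones" from the quote concrete.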


Granular synthesis and Reverb


Granular synthesis is a sound synthesis method that breaks a sample into tiny snippets of sound, each of which can be manipulated individually, and which are then combined to form the final output. The process involves slicing up a sound into sections between 0.001 and 0.01 seconds in duration; each slice is called a grain, and the grains are combined to form a graintable. Reverberation is an effect where a sound persists in a particular space after the original sound is produced. When a sound ends, the reverb tail persists for an amount of time determined by the effect's parameters.
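The slice-and-recombine idea can be sketched in a few lines of Python. This is my own toy illustration (real granular engines overlap and pitch-shift grains): slice a sample into grains, shuffle them, and apply a simple triangular fade to each grain to avoid clicks.

```python
import random

# Toy granular synthesis sketch: slice, window, and recombine grains.
def make_grains(samples, grain_len):
    """Split a list of samples into consecutive grains of grain_len samples."""
    return [samples[i:i + grain_len]
            for i in range(0, len(samples) - grain_len + 1, grain_len)]

def fade(grain):
    """Triangular window so each grain fades in and out (no clicks)."""
    n = len(grain)
    return [s * (1 - abs(2 * i / (n - 1) - 1)) for i, s in enumerate(grain)]

def granular(samples, grain_len, seed=0):
    rng = random.Random(seed)           # seeded for repeatable output
    grains = make_grains(samples, grain_len)
    rng.shuffle(grains)                 # rearrange grains: new texture, same material
    out = []
    for g in grains:
        out.extend(fade(g))
    return out
```

At a 44100 Hz sample rate, the 0.001-0.01 second grains mentioned above would be roughly 44 to 441 samples long.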

Granular Synthesis

Much of the dubstep you hear today utilizes granular synthesis for its sounds.


Frequency Modulation



I have found this video where he shows how FM differs from AM: it is based on changes in a wave's frequency instead of its amplitude. It shows how the carrier's frequency is changed over time by the modulator.

Also, on partials: when we hear the sound of a vibrating object (such as a musical instrument), the sound obviously contains many different frequencies, and each of those component frequencies is what is called a partial. So, wrapping it all up, this collection of frequencies, or partials, is what we define as the harmonic series or overtone series.

Thursday, January 17, 2013

Problems with Cecilia5

I have been trying to set up the sounds in Cecilia, but I hear nothing at all. It seems like it is not working. My speakers are working just fine, but I cannot hear anything when I hit the play button. Any suggestions, or is there something I should know to make it work?

Thank you

A word about Partials


Partial, harmonic, fundamental, inharmonicity, and overtone

Any complex tone "can be described as a combination of many simple periodic waves (i.e., sine waves) or partials, each with its own frequency of vibration, amplitude, and phase."[1] (Fourier analysis)
A partial is any of the sine waves by which a complex tone is described.
A harmonic (or a harmonic partial) is any of a set of partials that are whole-number multiples of a common fundamental frequency.[2] This set includes the fundamental, which is a whole-number multiple of itself (1 times itself).
Inharmonicity is a measure of the deviation of a partial from the closest ideal harmonic, typically measured in cents for each partial.[3]
Typical pitched instruments are designed to have partials that are close to being harmonics, with very low inharmonicity; therefore, in music theory, and in instrument tuning, it is convenient to speak of the partials in those instruments' sounds as harmonics, even if they have some inharmonicity. Other pitched instruments, especially certain percussion instruments, such as the marimba, vibraphone, tubular bells, and timpani, contain non-harmonic partials, yet give the ear a good sense of pitch. Non-pitched, or indefinite-pitched instruments, such as cymbals, gongs, or tam-tams make sounds rich in inharmonic partials.
An overtone is any partial except the lowest. Overtone does not imply harmonicity or inharmonicity and has no special meaning other than to exclude the fundamental. This can lead to numbering confusion when comparing overtones to partials; the first overtone is the second partial.
Some electronic instruments, such as theremins and synthesizers, can play a pure frequency with no overtones, although synthesizers can also combine frequencies into more complex tones, for example to simulate other instruments. Certain flutes and ocarinas are very nearly without overtones.
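The "measured in cents" definition of inharmonicity above is easy to compute. Here's a short sketch (my own illustration of the standard cents formula, 1200 times the base-2 log of a frequency ratio):

```python
import math

# Inharmonicity sketch: deviation of a partial from the nearest ideal
# harmonic of the fundamental, measured in cents.
def cents(f, f_ref):
    """Interval from f_ref to f in cents (1200 cents = one octave)."""
    return 1200 * math.log2(f / f_ref)

def inharmonicity_cents(partial_hz, fundamental_hz):
    n = max(1, round(partial_hz / fundamental_hz))  # nearest harmonic number
    return cents(partial_hz, n * fundamental_hz)

# A partial at 883 Hz over a 220 Hz fundamental sits near the 4th
# harmonic (880 Hz), a few cents sharp.
```

Piano strings are a famous example: string stiffness stretches the upper partials slightly sharp, which is why piano tuners stretch octaves.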

Tuesday, January 15, 2013

Overtones, harmonics and Additive synthesis

Blog Entries

Just Intonation

http://www.chrysalis-foundation.org/just_intonation.htm

Additive Synthesis

Combining multiple sine waves into one complex waveform. The process emulates the way natural sounds are created.

http://www.planetoftunes.com/synth/synth_types.htm

Amplitude Modulation

http://www.soundonsound.com/sos/mar00/articles/synthsecrets.htm

Additive Synthesis

Additive synthesis is basically constructing a sound by combining several sine waves at different frequencies. I've added this video, where he shows how each sine wave moves and behaves according to changes in frequency.

Amplitude Modulation

I found this document on AM (Amplitude Modulation) ( http://arcarc.xmission.com/PDF_Electronics/Amplitude%20Modulation.pdf ), which explains how it was created and how it really started. It helped me understand that AM was the first modulation method used, and that the basic principle of AM is to take voice frequencies and mix (or modulate) them with a radio-frequency signal so that they are converted to radio frequencies which will radiate or propagate through free space.

Additive Synthesis in Ableton


Here's a video about additive synthesis in Ableton Live. It does a pretty good job of explaining it in simple terms.






Amplitude Modulation Synthesis

I recently got my technician license for operating ham radio, so I decided to blog about amplitude modulation. Amplitude modulation uses two oscillators: a carrier and a modulator. From our previous exercises, we learned that an oscillator generates a tone. However, in amplitude modulation, the carrier could also be another type of signal, such as an instrument or vocal input.

The modulator oscillator controls the gain of the carrier signal. Below is a video created by a radio teacher to help visualize this concept. It helped me understand how data can travel on a specific signal.


Additive Synthesis, Tremolo, Ring Modulation

Additive synthesis is simply combining two or more signals into a single waveform. When two waves are added together, the new waveform contains characteristics of all the signals combined to make it. Amplitude modulation synthesis is where the gain of one signal is modulated by another signal. Tremolo is a form of AM synthesis where the gain of an audio signal is varied at a slow rate, often at a frequency below the range of hearing (under 20 Hz). It is often used as an effect for organs or electric guitar. Ring modulation is where one audio signal is multiplied by another, bipolar signal, one that has both positive and negative values. It is often used to create alien voices or metallic, inharmonic effects.
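The tremolo-versus-ring-mod distinction comes down to whether the modulator stays positive. A small Python sketch (illustrative only; the frequencies are arbitrary):

```python
import math

# Tremolo: multiply the carrier by a slow, unipolar (always >= 0) LFO.
def tremolo(t, carrier_hz=440.0, rate_hz=5.0, depth=0.5):
    lfo = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t))
    return lfo * math.sin(2 * math.pi * carrier_hz * t)

# Ring modulation: multiply the carrier by a bipolar audio-rate signal.
# The product contains the sum and difference frequencies
# (here 440 + 300 = 740 Hz and 440 - 300 = 140 Hz).
def ring_mod(t, carrier_hz=440.0, mod_hz=300.0):
    return math.sin(2 * math.pi * mod_hz * t) * math.sin(2 * math.pi * carrier_hz * t)
```

Because the ring modulator's output keeps only sum and difference frequencies, the original pitches disappear, which is where the alien-voice character comes from.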

Friday, January 11, 2013

Weekend HW

Pick 2 of the scales and replicate them in PD:
http://www.kylegann.com/tuning.html



Do the Simple Synth tutorial located on the left column beneath the audio tutorials section 

http://en.flossmanuals.net/pure-data/




start these tutorials
http://en.flossmanuals.net/pure-data/


Just Intonation

In music, notes are defined by their frequency and by their relationships to other frequencies. Since our ears are capable of recognizing frequencies in the 20 to 20,000 Hz range with great accuracy, we rely on mathematical relationships when devising a musical scale out of different frequencies. Two main ways of assigning notes to frequencies are just intonation and equal temperament. In just intonation, the frequencies that define notes are related by ratios of small whole numbers. Equal temperament, on the other hand, builds all notes from multiples of the same basic interval, so that the frequency ratio between adjacent notes is always the same.
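The two tuning systems are easy to compare numerically. Here's a sketch (my own illustration) using the common just-intonation ratios for a major scale against 12-tone equal temperament, both starting from A440:

```python
# Just intonation: scale degrees as small whole-number ratios of the tonic.
JUST_RATIOS = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2]   # major scale

def just_freqs(tonic=440.0):
    return [tonic * r for r in JUST_RATIOS]

# Equal temperament: every semitone is the same ratio, 2**(1/12).
def equal_tempered(tonic=440.0, semitones=(0, 2, 4, 5, 7, 9, 11, 12)):
    return [tonic * 2 ** (s / 12) for s in semitones]

# The just major third (5/4 * 440 = 550 Hz) is noticeably flatter than
# the equal-tempered one (about 554.37 Hz).
```

That roughly 4 Hz gap on the major third is exactly the beating you can hear in the Wikipedia audio example linked below.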

Here is an example sourced from Wikipedia. It plays a pair of major chords, with the first in each sequence tuned in equal temperament and the second tuned in just intonation.

http://upload.wikimedia.org/wikipedia/en/9/99/Just_vs_equal.ogg

Thursday, January 10, 2013

Just intonation

Just intonation is a very precise type of musical tuning. Notes in a series are spaced apart by ratios of small whole numbers multiplied by the frequency of the fundamental note (aka the first harmonic). This differs from equal temperament because in just intonation the intervals between notes are not all the same size. Musicians who rely on close harmonies prefer just intonation because the sound is much more stable. Computers and electronic instruments make it easier to achieve these sounds, since traditional instruments are not always calibrated to be so precise.

 

 Sources: 

Harmonic Series. (n.d.). Retrieved from http://www.scottopus.org/topicharmonicseriesSS.htm

Just Intonation. (n.d.). Retrieved from http://en.wikipedia.org/wiki/Just_intonation

Just Intonation vs. Equal Temperament. (2009). Retrieved from http://www.youtube.com/watch?v=BhZpvGSPx6w&feature=youtube_gdata_player

Tuesday, January 8, 2013

Creating a new window for design...

That has always been my slogan, and it is how I see digital and graphic design. My name is Juan Carlos Tafur Mejia, and I am now doing my MA in Digital Arts & Sciences at the Digital Worlds Institute. I feel blessed to be part of this new experience, because my goal is to become a professional 3D designer creating 3D models and animations combined with visual effects for movies, films, and commercial publicity. I graduated in 2008 from the College of Liberal Arts and Sciences at UF, and after working as a freelance designer for 4 years, and now at UF as a web designer, I am back in school to combine my graphic and web design and development skills with everything this new program has to offer. Everyone can visit my two websites to see my work: www.digi-graphical.com and www.jcdesignideas.com.