Thursday, January 30, 2014

Download this for when we install Ableton!

I just found this awesome VSTi for when we are using Ableton. It is a special VSTi that does spectral microtuning, which is related to just intonation, the way we are learning the harmonic series. 
http://www.xen-arts.com/2011/12/xenharmonic-fmts-vsti-11-maintenance.html

Class Lecture 8 - 1/30/14

In-class assignment 

Prototyping with Pure Data and Processing

Make sure you download the correct version of Pure Data (0.43.2)

Install these Processing Libraries.
  • libPD - You must unzip the folder first, then unzip pdp5.zip. The output folder is called PureData. Drag this folder into the libraries folder of your Processing sketchbook.
  • controlP5 - Install by clicking Sketch menu in Processing -> Import Libraries... -> Add Libraries... -> then search for controlP5 -> Install
  • oscP5 - Install by clicking Sketch menu in Processing -> Import Libraries... -> Add Libraries... -> then search for oscP5 -> Install

Homework

Blog on

     OSC/MIDI

     Noise Music:
  • Throbbing Gristle
  • Keiji Haino
  • Tony Conrad
  • Psychic TV
     Granular Synthesis:
  • Riverrun
  • Barry Truax
  • Curtis Roads
  • Iannis Xenakis
Get 5 MIDI Files

Create 5 Granular Samples in Cecilia (with clean tips and tails)

On Tuesday

Lecture on something called Throbbing Gristle
http://makezine.com/2012/09/24/protodrom-prototyping-with-pure-data-and-processing/

Imperfect

The mellotron is an example of a technological limit. Every time the tape was pulled through the machine, it would have slightly different pitch and amplitude.

Now we can create a perfect mellotron (or a sampler). So why would anyone use an unwieldy, hard-to-maintain mellotron?

Is the mellotron popular because those sounds are so distinctive in the minds of music lovers? If so, is it like an allusion in literature?

Is it like using a hammer that sometimes doesn't hit exactly where you want it to? Wouldn't that just be frustrating?

Isn't creativity what you decide to do with your tools, not what your tools decide to do?

More about Mellotron

Good morning,

How are you? Sad, cause it's grey outside. Anyway, back to the Mellotron.

Although we already know what it is, I'll briefly explain again. It's like a piano but not a piano. It's more like a regular keyboard. You know how when you look at a modern keyboard, they have preset loops already programmed into them? Well, the Mellotron is basically just like that preset keyboard, except you couldn't switch between presets and piano, and back in the '60s tapes were used instead. Many famous musicians have used the Mellotron in their music. Here's a list a guy who loves Mellotrons made (he also made a website you should browse): http://www.planetmellotron.com/toptens2.htm

So, take a look and listen to some tunes.

Also, if you don't want to read anything at all, watch this video of Paul McCartney. He basically explains the Mellotron and gives a brief show on how it works.



Mellotron: its life and legacy

The Mellotron was a piece of machinery that played back samples at the press of a key.  This allowed bands like The Beatles and Yes to fill out their sound with samples they could not otherwise produce, making it better rounded. Its legacy helped found the synthesizers and samplers of today, as well as forging the idea of looping a recorded frame of sound. These things left a lasting impression on the musical timeline.

Wednesday, January 29, 2014

Mellotron

The Mellotron is a polyphonic tape replay keyboard that generates its sound via audio tape. Each note of the 3-octave keyboard had a recording of a real instrument on its own strip of magnetic tape, and each key had its own pinch roller and playhead. When a key was pressed, the pinch roller engaged with a master capstan wheel and dragged the key's tape over the playhead. Here is a picture from Wikipedia demonstrating this mechanism.
[Image: Mellotron tape mechanism, from Wikipedia (File:Mellotron2.jpg)]


Its popularity increased following its use by The Beatles, The Moody Blues and King Crimson, as well as being a notable instrument in progressive rock generally. It was one of the most distinctive electronic instruments ever made and a trademark sound of '70s prog-rock bands. The following video was part of a Japanese documentary series 'Song to Soul' where Robert Webb talks about King Crimson's making of 'In the Court of the Crimson King'.

Mellotron and Yes (And You and I - Close to the Edge)

Yes, one of my dad's favorite bands, is something I had a lot of exposure to growing up, so I thought it would be fun to revisit them and look at their use of the Mellotron in their music (and there's quite a lot of use of the instrument - Rick Wakeman was quite the fan). I decided to focus on the song "And You and I" from the album Close to the Edge, where you can hear a Mellotron used to emulate the sounds of strings, brass, and flutes.

Radiohead & The Mellotron

Radiohead contacted the makers of the Mellotron (Streetly) in 1997 to have a Mellotron restored. They then used it pretty extensively on the OK Computer album. While it doesn't always feature prominently in the songs (used more as an ethereal, ambient backing sound), it is used to create an interesting choir-like sound in the song Exit Music (For a Film):

Mellotron

Evolved from the Chamberlin, the Mellotron is an electro-mechanical keyboard that uses polyphonic tape replay and was developed in England in 1963. The Mellotron was popular in progressive rock and was used by some of the most famous artists from England, such as The Beatles and Genesis. The Mellotron was designed to reproduce sound; replaying the tape creates minor variations of pitch and amplitude (wow and flutter), which seem similar to the ones we made in class in Pd-extended. The individual notes are recorded in isolation, and the Mellotron does not use tape loops. Each key has a length of tape below it that lasts eight seconds, so each key is a crude tape machine: the tape is dragged across the playhead and the sound recorded on it is played back. However, the Mellotron was known as a mechanical nightmare, and its limitation regarding sustain was extremely frustrating to most musicians.

https://www.youtube.com/watch?v=oDqG9agd5wc

The Mellotron

The Mellotron was a vital instrument during the progressive rock era of the 60s and 70s.  Developed in 1963 and prominently featured in bands such as Yes, Genesis, and King Crimson, it provided many unique sounds that became an identifying aspect of the era's sound.

The Mellotron creates sound when a pressed key drags magnetic tape across a playback head.  The M400 (the third of five models, released in 1970) was the best-selling one, before many bands moved on to newer technology and synthesizers.  A resurgence of popularity led Streetly Electronics to reproduce the instrument in 2007 as the M4000.  "Watcher of the Skies" by Genesis was such a major Mellotron song that the M4000 even includes a "Watcher Mix" sound.

One of my favorite songs incorporating the Mellotron is "Dancing with the Moonlit Knight" by Genesis.  The Mellotron is used in the middle section of the song with a choir sound, and with strings towards the end.  I love the rock-opera feel of the song as it builds momentum throughout its duration before returning to a soft outro at the end.  It creates a sense of grandness and urgency while remaining energetic and melodic.


Mellotron

The Mellotron is an electro-mechanical musical instrument developed in England in 1963, evolved from the earlier Chamberlin built by Harry Chamberlin. It can provide the sound of a variety of different instruments across 3 octaves by using a strip of magnetic tape, a pinch roller, a tape head, a pressure pad, and a rewind mechanism for each note. To play a note, you press a key, which triggers the mechanism that pulls that note's tape strip across the tape head, playing back the sound prerecorded on the tape. However, since the Mellotron had a limit of about seven or eight seconds of sound per note (after which the tape rewound itself to be ready the next time the same note was played), it was usually only used for slower music. Despite this, many famous bands have used the Mellotron in their music, including the Beatles, Genesis, King Crimson, and Yes.

Tuesday, January 28, 2014

Mellotron Looping in Genesis

On the first track of Genesis' 1973 album Genesis Live, the group opens with a Mellotron introduction. The Mellotron is an electro-mechanical, keyboard-like device that combines various mechanisms to create different sounds. It is a notable instrument in progressive rock, and a lot of bands turned to using it in their songs and work later on. Genesis also incorporated this very Mellotron device in "Watcher of the Skies", the first track of their album Foxtrot.

Watcher of the Skies has an unusual polyrhythmic part following the vocals of the song. I personally love how the song picks up about two minutes in. It definitely builds suspense, and I enjoyed listening to it. It's an odd combination: upbeat drum and Mellotron sounds set against sad vocal lyrics.

Mellotron

An intro to the Mellotron by Paul McCartney, where he shows an example of how the Beatles used the technology in Strawberry Fields Forever.

The Mellotron was popular for about 20 years before the technology became dated. Mellotrons were not intended to be mobile devices, weighing around 120 lbs, although they gained their popularity through touring bands in the 1960s. I imagine traveling bands that used them probably had a trained Mellotron maintenance person with them. They were also difficult to fix, and the tapes needed to be replaced often. All 35 keys had to be de-magnetized and kept at the proper tension. There were also issues with the tapes resetting and creating a click in the process, which made the Mellotron inconsistent. Overall, the maintenance on these machines was their ultimate downfall.

Wednesday, January 22, 2014

Vocoder vs. Autotune

The vocoder and the Auto-Tune phenomenon are two things that are very intertwined. They tend to produce a very similar effect on the vocal track of a produced song. Auto-tuning is a process that corrects an off-key vocal track by shifting the pitch up or down to bring the sung note in tune. The vocoder is a machine that takes in vocal tones and returns them with added harmonics layered alongside the original voice.

Below is an example of Autotuning:
Ke$ha - Tik Tok
http://www.youtube.com/watch?v=iP6XpLQM2Cs

Here is an example of use of a Vocoder:
Imogen Heap - Aha!
http://www.youtube.com/watch?v=Z9_S8nrdERY

Operation principle of vocoder

The input signal (your voice saying "Hello") is fed into the vocoder's input. This audio signal is sent through a series of parallel signal filters that create a signature of the input signal, based on the frequency content and level of the frequency components. The signal to be processed (a synthesized string sound, for example) is fed into another input on the vocoder. The filter signature created above during the analysis of your voice is used to filter the synthesized sound. The audio output of the vocoder contains the synthesized sound modulated by the filter created by your voice. You hear a synthesized sound that pulses to the tempo of your voice input with the tonal characteristics of your voice added to it.
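To make that flow concrete, here is a minimal channel-vocoder sketch in Python (this assumes numpy and scipy are installed, and "hello.wav" and "strings.wav" are hypothetical mono files at the same sample rate standing in for the voice and the carrier). It follows the same steps: a parallel filter bank measures the voice's level in each band, and those levels then shape the same bands of the carrier.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, voice = wavfile.read("hello.wav")       # modulator: your voice saying "Hello"
_, carrier = wavfile.read("strings.wav")    # carrier: e.g. a synthesized string sound
n = min(len(voice), len(carrier))
voice, carrier = voice[:n].astype(float), carrier[:n].astype(float)

edges = np.geomspace(100, 8000, 17)         # 16 band-pass filters from 100 Hz to 8 kHz
env_sos = butter(2, 30, btype="lowpass", fs=sr, output="sos")
output = np.zeros(n)
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    v_band = sosfilt(sos, voice)                   # analyze the voice in this band
    c_band = sosfilt(sos, carrier)                 # the same band of the carrier
    envelope = sosfilt(env_sos, np.abs(v_band))    # the band's level over time
    output += c_band * envelope                    # carrier band, shaped by the voice's envelope

output /= np.max(np.abs(output)) + 1e-12
wavfile.write("vocoded.wav", sr, (output * 32767).astype(np.int16))

The result "pulses" to the tempo of the speech because each band's envelope rises and falls with the voice.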

Reich and Fripp

In the two pieces "Come Out" by Steve Reich and "1984" by Robert Fripp, the two artists go about recording their works in different ways, resulting in different effects.  Reich uses a single line from Daniel Hamm's recording ("I had to, like, open the bruise up, and let some of the bruise blood come out to show them") on two tape recordings and initially syncs them so that both channels are playing together in unison. As the piece goes on, the channels start to play out of phase, so you hear two distinct voices. This repeats so that you hear four voices, then eight voices (at which point the voices sound like they're reverberating, or echoing, off of each other). Finally, the voices have become so split that it just sounds like raw noise. Whereas Reich really only uses one type of sound (the single line from the recording), Fripp piles many different sounds on top of each other, played together and at different pitches. This results in a piece that sounds more melodic.
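If you want to hear a crude imitation of Reich's phasing trick without two tape machines, here is a small Python sketch (numpy and scipy assumed; "come_out.wav" is a hypothetical file name for a short mono spoken loop): the same loop plays on both channels, but the right channel runs a hair fast, so the two copies slowly drift apart.

import numpy as np
from scipy.io import wavfile

sr, loop = wavfile.read("come_out.wav")     # a short mono spoken phrase
loop = loop.astype(float)
repeats = 60
drift = 1.002                               # the second "tape machine" runs 0.2% fast

left = np.tile(loop, repeats)
idx = np.arange(0, len(loop), drift)        # resample the copy so it plays slightly faster
fast = np.interp(idx, np.arange(len(loop)), loop)
right = np.tile(fast, repeats)

n = min(len(left), len(right))
stereo = np.stack([left[:n], right[:n]], axis=1)
stereo /= np.max(np.abs(stereo)) + 1e-12
wavfile.write("phasing.wav", sr, (stereo * 32767).astype(np.int16))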

Early Speech Synthesis

Here is an example of one of the first vocoders (the VODER, presented at the 1939 World's Fair). It's incredible to listen to the sounds and hear how each tone is used to vary the expression. It's now common to have computers talk to users via synthesized speech (Siri, Salli).

As an example of the progress sound synthesis has made, this is a link to a Japanese android that sings using synthesis. The company is currently working on adding human breath to the sound, to make the melody sound more natural. An everyday, practical use is Stephen Hawking, who uses a Speech Plus synthesizer to communicate verbally and would otherwise not be able to. 

The industry of speech synthesis is very competitive; this article does a good job of explaining the history of vocoders. 

Autotune & Vocoders

Andy Hildebrand, Auto-Tune’s inventor, spent eighteen years in a field called seismic data exploration, a branch of the oil industry. He worked in signal processing, using audio to map the earth’s subsurface. His technique involved a mathematical model called autocorrelation. The layers below the earth’s surface could be mapped by sending sound waves—dynamite charges work nicely in unpopulated areas—into the earth and then recording their reflections with a geophone. As it happened, autocorrelation could detect pitch as well as oil, and Hildebrand, who had taken some music courses, turned his engineering skills toward pop.
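As a toy illustration of the autocorrelation idea (not Hildebrand's actual Auto-Tune algorithm), here is a Python/numpy sketch that slides a frame of audio against itself and reads the pitch off the lag where the frame best matches itself.

import numpy as np

def detect_pitch(frame, sr, fmin=60.0, fmax=800.0):
    """Estimate the fundamental frequency of one frame, in Hz."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # lags 0..N-1
    lo, hi = int(sr / fmax), int(sr / fmin)       # plausible pitch periods, in samples
    lag = lo + np.argmax(corr[lo:hi])             # lag with the strongest self-similarity
    return sr / lag

sr = 44100
t = np.arange(sr) / sr
a440 = np.sin(2 * np.pi * 440 * t)                # a 440 Hz test tone
print(detect_pitch(a440[:2048], sr))              # ~441; a real implementation would interpolate the peak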


An example of Vocoding and Autotune put together...

http://en.wikipedia.org/wiki/File:Ke$ha_-_We_R_Who_We_R.ogg

Autotune vs Vocoder

Auto-Tune is a pitch-correction plug-in that is most commonly used for editing a person's (usually a singer's) pitch. A vocoder, however, is a specific effect that usually combines a person's vocals with synth notes. Despite their differences, both can be used to produce a similar effect of making a person's voice sound somewhat artificial in a song. For example, the song "Believe" by Cher has an autotune effect in certain parts toward the chorus, while "Hide and Seek" by Imogen Heap uses a vocoder throughout most of the song.

Autotune (section from "Believe"):
Vocoder (section from "Hide and Seek"):

Phase Vocoder Sounds

These are some sounds that I made in the phase vocoder. The original sound that I altered is Spongebob's laugh.

This is the regular recording (s auto = 100).
This one is the sample really compressed (s auto = 2000).
And this one, which is my personal favorite, is the sample dragged out, making it sound more like a siren in slow motion (s auto = 2).

Bleeps to Speech

A PC Speaker is the chip that bleeps and bloops on your computer. Confusingly, it is different from the speakers we would normally plug into computer sound cards.

The PC speaker was designed to reproduce a square wave (in other words, the sound is either on or off) via only 2 levels of output. It was designed for bleeps and bloops.

As I was researching vocoders, I fell into a deep dive and came across some projects that attempted to make actual music out of the PC speaker.

We know from class that the first computer song was cleverly arranged by Max Mathews in 1961, but Max was using specialized hardware designed for this purpose. Any attempt to make acceptable sounds on a PC Speaker presents even more of a challenge than making a recognizable, dramatic soundtrack on an 8-bit system.

Yet they did! Perhaps it was because every single Wintel machine in the 1990s had one of these chips. The trick is to switch the output on for only a fraction of each very short cycle, so that the average output level lands somewhere between the two levels. This technique is called pulse-width modulation.
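Here is a rough numpy sketch of that trick: the "speaker" below can only be fully on (1) or fully off (0), but by varying how long it stays on within each short cycle, the average level traces out a smooth waveform.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
target = 0.5 + 0.4 * np.sin(2 * np.pi * 220 * t)    # the level we wish we could output (0..1)

pwm_period = 32                                      # samples per on/off cycle
output = np.zeros_like(target)
for start in range(0, len(target), pwm_period):
    duty = int(round(target[start] * pwm_period))    # how long to stay "on" this cycle
    output[start:start + duty] = 1.0                 # on for part of the cycle, off for the rest

# The speaker cone (and your ear) act as a low-pass filter, so `output`,
# which only ever takes the values 0 and 1, averages out to roughly `target`.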

Even more ambitious were crude attempts to get the computer to painfully extract speech out of the PC speaker. None of these were particularly successful, but it was a daring thing to try. Here is an example of a PC speaker utterly failing to say, "Emergency! Emergency! Emergency!"

Again, the challenge is the limits of the PC Speaker chip, not necessarily the software. By contrast, the original 1984 Macintosh had a better internal chip. Here's a demo of the Macintosh speaking during its release - the crowd hilariously goes nuts.

Tuesday, January 21, 2014

Talked about vocoders and auto tune



http://www.youtube.com/watch?v=-VIqA3i2zQw - Laurie Anderson - O Superman


Discussed Phase vocoders

“A phase vocoder is a type of vocoder which can scale both the frequency and time domains of audio signals by using phase information. The computer algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting).”
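A quick way to hear what that quote describes, assuming you have librosa and soundfile installed (librosa implements its stretching and shifting on top of a phase vocoder; "voice.wav" is a hypothetical file name for any short recording):

import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)
slower = librosa.effects.time_stretch(y, rate=0.5)          # twice as long, pitch unchanged
higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=7)   # up a fifth, length unchanged
sf.write("slower.wav", slower, sr)
sf.write("higher.wav", higher, sr)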


In PD:
Look at measure spectrum 
(Help -> Browser -> Pure Data -> Audio Examples -> I07)
To load a sound, click the toggle for the sample and find a sound file (MUST BE A .WAV file)

5 Necessary Tools for Audio Design
1) Loop
2) Cut/Slice
3) Changing
4) Make Delay
5) Change Direction

What we will do on Thursday:
Terry Riley - Poppy Nogood And the Phantom Band All Night



Homework:
Google
Cher
Imogen Heap
Laurie Anderson
Vocoders
Play with the Measure Spectrum (Directions to code up above)
Listen to:

Robert Fripp - Let the Power Fall http://www.youtube.com/watch?v=mHM9c3p41d8

Download Max/MSP/Jitter
http://cycling74.com/downloads/

1.21 Homework

So I don't think the audio feedback on my laptop is working properly; ignore any excessive echoes. I'm working on it. I also borrowed a track from Mr. Murphy. ENJOY!


PS my beat in Gibber sounded WAY cooler before my lame laptop ruined everything.

Modulations and a Remix of sorts

Hello!

I hope you all had a nice long weekend, I sure didn't (I joke). Anyway, here are my sounds, with links to the ones I used and such. Gibber was kind of confusing for me, so that took longer than I expected, but I think I got it now... who knows. :)

Frequency: http://freesound.org/people/MichaelPrzybylski/sounds/203456/





Gibber: Additive Synthesis 




Monday, January 20, 2014

Assignment #2: Amplitude Modulation, Frequency Modulation, Gibber, Ring Modulation.

https://soundcloud.com/b-kilmanjaro/sets/amplitude-modulation-frequency

Audio Assignment Gibber

This was an experiment in Gibber, learning to use additive synthesis.  The recording starts with the 1st and 3rd harmonics (of a sine wave with a fundamental frequency of 101 Hz and 0.30 amplitude), then adds the 5th, then the 7th, and finally the 9th harmonic.

The code I used was:

fundamentalAmplitude = .30
Sine(101, fundamentalAmplitude) // fundamental
Sine(303, fundamentalAmplitude) // 3rd harmonic
Sine(505, fundamentalAmplitude) // 5th harmonic
Sine(707, fundamentalAmplitude) // 7th harmonic
Sine(909, fundamentalAmplitude) // 9th harmonic

Additive Synth Experiments

I played with some frequency modulation and ring modulation, and layered some new sounds on top of my previous subway track (a radio broadcast about Franco's death). I used a high-pass filter and reverb to make it seem more like a radio playing in the subway.

Also, I made a drum beat on Gibber, but Soundcloud's auto-copyright detection said that it was identified as "Top Brass" by the "All Good Funk Alliance." (?)

My Remix

https://soundcloud.com/tjnmusik/remix

Modulations & Remixing (1/18)


FM modulation: https://soundcloud.com/alexisbenter/fm-modulation
amplitude modulation: https://soundcloud.com/alexisbenter/amplitude-modulation
ring modulation:https://soundcloud.com/alexisbenter/ring-modulation
gibber demo: https://soundcloud.com/alexisbenter/gibber

I used Terrance's environment and remixed it with the two links posted below:
https://soundcloud.com/alexisbenter/remix

http://www.ubu.com/film/roulette_namchylak_2000.html
http://www.ubu.com/sound/etienne.html

Audio Design Assignment 2



This is my submission for assignment 2.
The first track is a recording of myself playing around in Gibber. I decided to play with the frequency and amplitude of a sine wave.
The second track is my recording of playing with the ring modulator. I tapped on my computer keyboard while whistling and singing a bit.  It was interesting to listen to.
The third track is a recording of me using additive synth. I changed the two sliders every so often to produce a new sound.
The fourth track is a recording of myself using the frequency modulator. I was having an interesting time getting it to work. The sound was doing odd things when I changed the different settings.

recording & remixing

AM
FM
Ring Modulation
Gibber
Remix 1 (original resource: Hussain - spookymusic)
Remix 2 (original resource: Flad - Prelude)

Examples of Frequency Modulation, Ring Modulation, Amplitude Modulation, and Gibber

https://soundcloud.com/tjnmusik/frequency-modulation-ring

Here are my examples of Frequency Modulation, Ring Modulation, Amplitude Modulation, and Gibber.

My Remix

Hello. This remix is a combination of Joseph Murphy's "Hong Kong Subway" and Ian Elsner's "System of Kids." It also includes the two sounds from ubu.com entitled "Blue Harp Study 1" by Anne LeBaron and "Mr. Control" by Jody Harris. Enjoy!

Frequency Modulation, Ring Modulation, Amplitude Modulation, and Gibber

Hello. Here are my examples of everything posted in the title.

Frequency Modulation (This recording has three different examples of frequency modulation lasting about five seconds each):



Ring Modulation (This recording is a ring modulation that features me saying "hello"):



Amplitude Modulation (This recording has three different examples of amplitude modulation lasting about five seconds each):



Gibber (This recording is of me fooling around in Gibber):
I used the code below. What you hear is me activating the sine wave first, and then, a few seconds later, I activate the line below it, giving it a much higher frequency.
a = Sine()
a.frequency = Mul( 100, 400 )



Have a great MLK day!

Thursday, January 16, 2014

Thursday, January 16, Class

 

Floss Manuals

Pure Data: http://en.flossmanuals.net/pure-data/

Different Types of Synthesis

Additive Synthesis: http://en.flossmanuals.net/pure-data/audio-tutorials/additive-synthesis/

These synthesizers add waveforms together to create complex sounds (see the sketch after this list).
The wave types involved here are:
  • Sine Wave
  • Sawtooth Wave
  • Triangle Wave
  • Square Wave
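Here is a minimal additive-synthesis sketch in Python/numpy (the same idea as the Gibber example posted later in this blog): sum a handful of sine waves to build a richer tone.

import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr        # two seconds
f0 = 220.0                        # fundamental frequency

# Summing odd harmonics with 1/n amplitudes moves the tone toward a square wave.
tone = np.zeros_like(t)
for n in [1, 3, 5, 7, 9]:
    tone += (1.0 / n) * np.sin(2 * np.pi * n * f0 * t)
tone /= np.max(np.abs(tone))      # normalize before playback or writing to a file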
Looked at MaxMSP: http://cycling74.com/products/max/

Sound Envelopes

  • Attack: from the start of the sound to its loudest point
  • Decay: from the loudest point down to a sustain point
  • Sustain: a flat, sustained level
  • Release: the eventual fade-out of the sound (a sketch of all four stages follows this list)
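A minimal numpy sketch of those four stages: build the attack-decay-sustain-release curve, then multiply it into a tone.

import numpy as np

sr = 44100
attack, decay, sustain_time, release = 0.05, 0.10, 0.50, 0.30   # stage lengths in seconds
sustain_level = 0.6

env = np.concatenate([
    np.linspace(0.0, 1.0, int(attack * sr)),              # attack: silence up to the loudest point
    np.linspace(1.0, sustain_level, int(decay * sr)),      # decay: down to the sustain level
    np.full(int(sustain_time * sr), sustain_level),        # sustain: held flat
    np.linspace(sustain_level, 0.0, int(release * sr)),    # release: fade out to silence
])

t = np.arange(len(env)) / sr
note = np.sin(2 * np.pi * 440 * t) * env                   # the envelope shapes a 440 Hz sine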

Frequency Modulation

Gibber!

Assignment

  1. In PD > Help Menu > Browser > Pure Data > 3. Audio Examples > Play with all of the E & F Audio Examples, and D.14 Vibrato
  2. Take a look at a frequency modulation tutorial
  3. Record examples of Frequency Modulation, Ring Modulation, Amplitude Modulation, and Gibber (see the modulation sketch after this list). Use Audacity to record and post to the blog
  4. Listen to "Concrete PH"
  5. Go to ubu.com, check the sound repository, and pick two sounds; remix them with some of your classmates' work
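For item 3, here is a compact numpy sketch of what the three modulation types actually do to a signal; each one combines a 440 Hz carrier sine with a 110 Hz modulator sine in a different way (frequencies chosen arbitrarily for illustration).

import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr
carrier = np.sin(2 * np.pi * 440.0 * t)
modulator = np.sin(2 * np.pi * 110.0 * t)

# Amplitude modulation: the modulator (offset so it stays positive) scales the carrier's level.
am = (0.5 + 0.5 * modulator) * carrier

# Ring modulation: plain multiplication; only the sum and difference frequencies (550 and 330 Hz) remain.
ring = modulator * carrier

# Frequency modulation: the modulator bends the carrier's phase, creating sidebands around 440 Hz.
fm = np.sin(2 * np.pi * 440.0 * t + 2.0 * np.sin(2 * np.pi * 110.0 * t))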

Graph from class

http://en.flossmanuals.net/pure-data/audio-tutorials/additive-synthesis/

Sound Environments and CSOUND

Hi,

So... I apologize for my late upload. I have a million excuses why but mainly it's because 1) it took me forever to find the sounds I wanted and 2) for the life of me, I could not figure out how to record in audacity (noob, I know). Anyway, I hate being that person, yet somehow here I am. The first environment I made, I really enjoy. The second one is aight, I'm going to be honest. I tried going for the whole "creepy" vibe and it didn't turn out quite the way I would have hoped. But whatevs, still learning. Also, I have figured out how to embed things, yay me! 


*It gets a bit loud so, like, be prepared. 



(I tried really hard, ok.)


----

CSOUND: 

Since I'm one of the last people (if not the very last), writing about Csound is going to be repetitive and boring. Alas, the life of a college student.  Csound is an audio programming language for sound (you already know). Although the dewd Barry Vercoe was kind of the main dewd, many of his fellow bromingos aided with its finalization. It takes two specially formatted text files, the orchestra and the score, to make sound stuff.  It's a pretty rad language to utilize and create with if you know how it do. Plus, it's free! (I think.)

CSound

Csound is essentially a computer programming language for synthesizing sound and music. Constructed using the C programming language, Csound is an open-source program that is available for download on PC, Mac, and Linux operating systems. The software allows for modification and extension through C code, which users throughout its lifetime have done, expanding the range of its abilities and giving it the large sonic palette it has. Csound is also paradoxically both simple and complex to work with. All you really need to write music in Csound is a text editor to write the two text files, the orchestra and the score, and it lets the user work within a completely modular system without GUI limitations constraining their intent. However, since that means there is no real custom work environment for Csound, there is a much steeper learning curve to the program, as there's nothing pre-made to tinker with.

Sound Environments

Both of my recordings are influenced by cities. The first has Hong Kong ambience from under a bridge. There is an erhu player, a conga drum beat, a subway train arriving, and some ambient synth.
The second recording has street sounds, lots of car horns, and ambient synth that builds over time, with a violin and a cuica:

My Environments

https://soundcloud.com/tjnmusik/rnb-melody-environment

https://soundcloud.com/tjnmusik/deep-melody-environment

The first steps of CSound

I thought it would be interesting to compare the code of Csound with Pure Data.  This is what I found on Csound.


 This is a possible transformation of the signal graph into Csound code:
    instr Sine
aSig      oscils    0.2, 400, 0
          out       aSig
    endin
The oscillator is represented by the opcode oscils and gets its input arguments on the right-hand side. These are amplitude (0.2), frequency (400) and phase (0). It produces an audio signal called aSig at the left side, which is in turn the input of the second opcode out. The first and last lines encase these connections inside an instrument called Sine. That's it.

Assignment 1 Soundscapes/Csound



These are my soundscapes.
The first is of a set of sounds based around a campfire that is down on the beach.
The second of them is of a rainy landscape, with a few...unsettling sounds.

On Csound:
The Csound language is an audio programming language written entirely in C, hence its name, Csound. The language has a strength in that it is highly modular and has been used extensively. The language makes use of two text input files to create its sound. These are the "orchestra", which sets the properties of the instruments to be emulated, and the "score", a text file containing information on the notes and other properties on a timeline. Thus the language acts like a conductor for a symphony: it takes the "orchestra" and "score", brings them together, and makes music from the resulting combination. The original Csound language was released on March 18, 1986, with its 5th incarnation released twenty years to the day later. The current version, the 6th edition of the programming language, was released in July of 2013.

Wednesday, January 15, 2014

My Sound Environments and CSound Blog




These are my two sound environments. From their titles you can assume what they're supposed to be. 

The first is a sci-fi soundscape with a popping sound that doesn't match the tempo of the rest of the piece, making it very disorienting.

The second is something that has a very scary feeling to it but also has a simple beat and melody that makes it seem like a retro game is being played.


Csound is a programming language meant for sound engineering. It is written in C, which is a computer programming language. It was originally made by Barry Vercoe at MIT. He modeled the program after a system he had created earlier called Music 11. Music 11 was in turn inspired by Max Mathews' work at Bell Labs. We know of Max Mathews because he was the one who had the computer sing "Bicycle Built for Two" in 1961. Csound is used to make models of synthesizers and other audio processors. The latest version was released in July 2013 as Csound 6. There is even a series of 16 Csound tutorials on YouTube, linked below.

http://www.youtube.com/watch?v=KxyBTr0eamQ

Bibek - 2 Audio Tracks

1. Warm bells (130 bpm): Bells with a basic drum loop. Also has a chant and crash breaks. https://soundcloud.com/b-kilmanjaro/warmbells

2. Cinematic Strings (120 bpm): Starts out with low cello and goes into the violin loop. Has a white noise sweep and upbeat drums with crashes.

https://soundcloud.com/b-kilmanjaro/cinematicstrings

MORE WEB AUDIO RESOURCES

In addition to Freesound, we can also use the web and JavaScript to create interesting audio in our browsers in real time. There are three main audio applications that I consider usable for sound design strictly within a browser. The most interesting of the three by far is Gibber by Charlie Roberts.
http://charlie-roberts.com/gibber/info/

Next is Flocking.js -- a very cool, unique use of the web as an audio engine
http://flockingjs.org/

And finally, of course, there is the very simple, pure-Java JSyn
http://www.softsynth.com/jsyn/examples/index.php

All three are powerful and will be useful.

CSound

I found this website
http://en.flossmanuals.net/csound/index/

If you have time you should check it out, it appears to be a very helpful tool when learning Csound.

Audio environment

The flute loop:
The bar closes:

Stochastic Music (and Musical Environments)

Stochastic music is a term coined by Iannis Xenakis during the 20th century.  In the most basic sense, stochastic music is music that is created through mathematics and is non-deterministic.  As it is part of a stochastic system, the composition is based on chance, where randomness can be weighted.  This gives the possibility of creating partially controlled compositions, or entirely unpredictable pieces of music.  Because of the complexity and tedious mathematics involved in creating algorithms for stochastic music, Xenakis and other composers often used computers to produce their scores.  While there are other forms of music based on using chance, stochastic music has a strict mathematical basis.  Here is one of Xenakis' stochastic scores, ST/4:


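As a toy illustration of weighted chance (not Xenakis' actual ST program), here is a small Python/numpy sketch that draws each note's pitch from a weighted distribution and its duration from an exponential one.

import numpy as np

rng = np.random.default_rng(seed=4)
pitches = [60, 62, 63, 65, 67, 70]                # MIDI note numbers to choose from
weights = [0.30, 0.10, 0.20, 0.15, 0.15, 0.10]    # weighting makes some notes likelier than others

score, time = [], 0.0
while time < 30.0:                                 # thirty seconds of material
    note = int(rng.choice(pitches, p=weights))     # weighted chance picks the pitch
    duration = float(rng.exponential(scale=0.5))   # chance also picks how long the note lasts
    score.append((round(time, 2), note, round(duration, 2)))
    time += duration

print(score[:5])                                   # e.g. [(0.0, 63, 0.21), ...]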
Alone In Space loop:


Prelude loop:

Thomas Flad

Hello, I'm Thomas Flad, and I'm in my 4th year here at UF.  I'm currently working on a double major in Advertising and Digital Arts and Sciences.  My ultimate goal is to make a living by creating music and/or video games, so this class is perfect.  I have little experience creating games, aside from previous BADAS classes, but I've had lots of hands-on music editing experience in my life.  I played guitar in a local band called Settle 4 Nothing, and spent lots of time creating independent music in Pro Tools.  I'm hoping this class can help me learn more about various editing programs and get me that much closer to achieving my goals.

My music background is about as varied as they come.  I used to only like soundtrack music when I was young, and thought that lyrics were just an interruption.  As I grew up, I realized the elation of being able to relate to lyrics in songs, and developed more traditional tastes. My general preferences range from bands like blink-182, Muse, Green Day, MCR, and Dance Gavin Dance, to modern composers such as John Williams, Koji Kondo (Mario and Zelda), Nobuo Uematsu (Final Fantasy), and Marty O'Donnell (Halo).

My 'go to' all time favorite song is The Adventure by Angels & Airwaves
http://grooveshark.com/s/The+Adventure/8niLs?src=5

My favorite all-time soundtrack song is To Zanarkand from FFX.  Its emotional impact is unmatched, to me.
http://grooveshark.com/s/To+Zanarkand/88uXC?src=5

And this song (Make Room by MCR) I REALLY want to use in a soundtrack, especially from the bridge to the outro.
http://grooveshark.com/s/Make+Room/505W3r?src=5

Finally, this is my favorite song from my band's EP, titled (K)nowhere to GO



Audacity Audio Environments

Hello. These are my Audio environments. The first one I called Fairy Farm. It is a peaceful melody being slightly interrupted by a violin which sounded very much like a rooster. The second one I called Remorseful Villain Hideout. It is a somewhat dark tune with an evil cackle that sounded somewhat sad at the same time. Enjoy!

Blog About Csound

Csound is a C-based audio programming language. It is called Csound because it is written in C, as opposed to the programs before it. Even though it was started by a single man by the name of Barry Vercoe, many developers have aided with its production. Csound takes two specially formatted text files as input, called the orchestra and the score. The orchestra describes the instruments, and the score describes the notes and the other parameters. Csound processes both the orchestra and the score and renders an audio file as output.

Tuesday, January 14, 2014

Tabbed Browser tunes in Audacity (and embedding info)

The Tribal tune is supposed to paint a vision of huts and open fields.
The Spooky file is what I can only describe as something you might hear when you enter a new location in a video game. Perhaps a run-down church on a rainy day that someone is repairing.
I know... the tribal one is better. Oh, and if you'd like to embed your links in Blogger:
Step 1: Upload your files to SoundCloud.
Step 2: In SoundCloud, select the share button and copy the embed code.
Step 3: Paste this code into your blog post under the "HTML" view. After you do this, it should appear in your "Compose" view.