Friday, May 30, 2008

Week 12 Forum - Scratch II

We were going to have some student presentations today, but once again neither student showed up. That's a bit of a worry really... two weeks in a row?

Anyway, Stephen came to the rescue and played us more of that awesome Scratch DVD. After watching the scratching lessons by Qbert, I decided to put them into practice. To be honest I already knew a little about scratching, but I'm not a scratch DJ; I am a mix DJ. I have been a DJ for around 5 years and have played in heaps of clubs around Adelaide. I've used everything from vinyl to CD to mixing mp3s, and I still prefer old school vinyl.

Mixing and scratching are not easy techniques. I'm glad we touched on this topic in forum so people can get the idea out of their heads that DJs "just play music". It's so much more than that. Aside from the skills of mixing, there are also the skills a DJ needs to work the crowd. You need to play certain songs at certain times so the crowd will buy drinks. For example, if you play too many good songs, people will stay on the dance floor and not buy drinks. You need to work the crowd and make them do what you want them to do. I shouldn't give away too many secrets!

As a mix DJ, a common skill is to play two songs together to create a new song, which I have done in my demonstration. A scratch DJ has the skills of dropping in vocal stabs and samples to create new sounds. There is a lot of skill involved in DJing, and I think the many talentless big heads out in Adelaide have given the rest a bad name.

I have prepared a little movie of my mixing. It's a bit long, but you can just fast forward if you get bored. Enjoy!



DJ Reverie in the mix!


my record collection is around 500 so I still have a bit to go before I can compare to DJ Shadow. . .



a typical Saturday night. . .



[1] Stephen Whittington, "Music Technology Forum: Semester 1 - Week 12 – Scratch". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 5th June 2008

AA3 Weeks 11 & 12 - Elder Hall & Orchestra Recording

For an orchestral recording, I believe the following are needed:

1) microphones capturing the overall orchestra
2) microphones down the hall near the audience to capture ambience
3) an announcer microphone
4) spot microphones to capture softer instruments

The announcer microphone is inside the booth in Elder Hall. Therefore I don't need to worry about setting that up.

If I remember correctly, there are only 8 inputs available for recording at Elder Hall, so it is important to choose your microphones wisely. I would start with the X-Y configuration already provided. The microphones provided by Elder Hall are Neumann KM183s: small-diaphragm omnidirectional condensers. At approximately 2 meters off the ground they should pick up the transients quite well. These microphones are great for capturing a true, uncolored sound with a flat frequency response and accurate transients.

But since they only have a small diaphragm, I think I will need a separate set of microphones to pick up a more ambient wash of sound. That is where I would place a pair of Neumann U87s, one on either side of the orchestra, in an A-B setup. With these set to cardioid, I think I can capture a good stereo image.

Next I would place a pair of Neumann U87s halfway back from the stage into the audience, again in an A-B setup, to capture the ambience of the recording and some of the audience applause. These would sit 3 meters off the ground; normally I would say around 2 meters, but the stage is raised about a meter, so I have taken its height into account. I think these microphones would be best set to omnidirectional since they are ambient microphones, although I don't really want to capture what's happening behind them, so I'm not 100% sure at this stage.

The last two inputs I would use for spotting instruments that need to be more prevalent in the mix, such as quieter instruments like flute and violin. I would close-mic each with a Shure Beta 57. These are supercardioid instrument microphones and should be good for isolating individual instruments.

When fed into the desk, I would have the inputs in the same order:

1-2 -> Neumann KM 183 (X-Y front)
3-4 -> Neumann U87s (A-B each side of orchestra)
5-6 -> Neumann U87s (A-B ambience/audience)
7-8 -> Shure Beta 57s (spot miking)

To be perfectly honest, I didn't find Ray Thomas's approach to recording an orchestra that useful. Recording straight to 16-bit/44.1kHz made me think he didn't know what he was doing. Classical music has a wider dynamic range than any other style, and 16-bit gives a theoretical range of only about 96 dB against roughly 144 dB at 24-bit, so limiting yourself unnecessarily is unprofessional. Buy an external hard drive and do the job properly. He seemed very casual in his approach.




a rough outline of my set up. . .





[1] Christian Haines & Ray Thomas. "Audio Arts: Semester 1 - Weeks 11/12- Elder Hall & Orchestra Recording". Lecture presented at the Elder Hall, University of Adelaide, South Australia, 23rd May 2008

[2] Skeet, Mike. "Location Recording: Recording on Location Cheshire Youth Orchestra". 2002. Sound on Sound. (Accessed 21/2/2007)

[3] Hulse, Richard. "Orchestral." Audio Technology.

Week 11 Forum - Philosophy of Technology

Scenario 1
I woke up for another typical university day. I walked to the bus stop, waited there for the bus and hopped on. I had the usual day of lectures and crappy food from downstairs. On my way home I decided to stop off at the newsagency and buy a lottery ticket. You wouldn't believe what happened. . . . I WON $22 MILLION ! ! !

The next day I quit work and uni and went shopping. I bought a few houses and a few cars, and these weren't just any cars: Ferraris, Lamborghinis, Porsches and BMWs. After I got sick of driving I went overseas for a few years. When I came back I met an amazing woman and got married. We had heaps of kids and spoiled them. I had a great life on my beach-front property, grew old and died a happy man.

Scenario 2
I woke up for another typical university day. I walked to the bus stop, and out of nowhere a car swerved in front of me and hit me straight on. I died that day.

[1] Stephen Whittington & David Harris, "Music Technology Forum: Semester 1 - Week 11 – Philosophy of Technology". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 29th May 2008



-----------------------------------------------------------------
word count reached - read if you're interested
-----------------------------------------------------------------


What's my point? The world has so many problems, and of course there are steps you can take to try and make a difference. But at the end of the day everyone deserves to be happy and live their life the way they want to. There is no way an individual can make a notable difference to the problems I consider important: war, poverty, sickness and human ignorance. Yes, the environment is important too, but I place higher importance on people.

As individuals we never know what's coming. I think the world would be a better place if everyone just tried to be the best they can be and be useful members of society.

That is why I believe I'm entitled to whatever I can accomplish. I work hard and I consider myself a good person. I don't think it's selfish to have high standards and expectations of what I want to achieve in life. I know there are millions of people less fortunate than me, but honestly, what can I do about that? All I can do is be grateful for what I have, and I am. I know it's terrible what is happening in other parts of the world, and if there was a way I could help, I would. I used to give money to World Vision until I went broke and couldn't afford it anymore. I probably donated around $400 overall. It made no difference.

You never know what's coming. All you can do is live life to the fullest and be happy with what you have.

Monday, May 26, 2008

Week 10 Forum – Scratching

I was very impressed with today's forum; it was something I can really relate to. I appreciate that Stephen is open to the idea of DJing as an artform, even though in the traditional sense it isn't exactly the most scholarly form of music.

Many of the artists we watched in the film today I had already heard of through my own interest in the roots of DJing. DJ Qbert, DJ Shadow and Mix Master Mike are all legends in the art of scratching, while Grandmaster Flash, Kool Herc and Grand Wizard Theodore are of course the pioneers of the artform.

One part of the film I could really relate to was when the DJs talked about digging for vinyl. One of the sorest points in the DJ industry now is the rise of digital downloading: the culture of spending hours ploughing through crates of records has been replaced by simply clicking on a computer. As a DJ I hate playing mp3s and will always remain a vinyl purist.

[2]
anyone can learn to scratch. . .


[1] Stephen Whittington, "Music Technology Forum: Semester 1 - Week 10 – Scratch". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 22nd May 2008

[2] WP Remix, Cats Learning How to be a DJ. http://iwdj.com/ (Accessed 23/5/8)

CC3 Week 10 - Streams (III)

I created a new SynthDef for this week's exercise, using brown noise as the sound source and a resonant filter to shape it. Using a Pbind, I vary the start and end frequencies of the filter sweep, as well as the Q (via the width argument, which is the reciprocal of Q).

Click here for my audio example [252KB].


(
SynthDef("BrownNoise", {

    arg start = 1000, end = 8000, time = 10, width = 0.001;

    var carrier, envStruct, envGen, filter;

    // Sound source
    carrier = BrownNoise.ar(1);

    // Envelope Structure
    envStruct = Env(
        levels: [1.0, 0.35, 0.6, 0],
        times: [1, 2, 3],
        curve: 'welch'
    );
    // Envelope Instance (doneAction frees the synth when the envelope ends)
    envGen = EnvGen.kr(
        envelope: envStruct,
        gate: 1.0,
        doneAction: 2.0
    );

    // Resonant filter sweeping exponentially from start to end over time;
    // width is the bandwidth ratio (1/Q)
    filter = Resonz.ar(carrier, XLine.kr(start, end, time), width) * envGen;

    Out.ar(bus: 0, channelsArray: [filter, filter]);

}).store;
)

Synth("BrownNoise", [ \start, 300, \end, 1000, \time, 3, \width, 0.1 ]);

SynthDescLib.read; // read the stored SynthDef descriptions so Pbind can find "BrownNoise"

(
// Six events stepping through parallel sequences of filter settings
Pbind(
    \instrument, "BrownNoise",
    \width, Pseq(#[0.001, 0.05, 0.01, 0.5, 0.75, 1], 1),
    \start, Pseq(#[100, 500, 1000, 2000, 5000, 10000], 1),
    \end, Pseq(#[300, 1000, 5000, 8000, 2000, 500], 1)
).play;
)


[1] Christian Haines. "Creative Computing: Semester 1 - Week 10 - Streams III". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 8th May 2008

CC3 Week 9 - Streams (II)

For this week's patch I used the SynthDef I made in week 7. I have used Pshuf to generate sequences that control the playback speed, modulator frequency and modulator amplitude of the sample. I found this week's exercise fairly straightforward, since the ideas were much the same as last week's.

Click here for my audio example [464KB].


b = Buffer.read(s, "sounds/Producers @ The Electric Light Hotel.wav");

// Here is the sample for this patch.


(
SynthDef("BufferPlay", {

    arg speed = 1, amp = 1, pan = 0, mFreq = 1, mAmp = 1;

    var sig, pan2, envStruct, envGen, mod;

    // Loop the buffer, scaling the playback rate by speed
    sig = PlayBuf.ar(1, b.bufnum, BufRateScale.kr(b.bufnum) * speed, loop: 1.0);
    pan2 = Pan2.ar(sig, pan);

    // Amplitude modulator
    mod = SinOsc.ar(freq: mFreq, mul: mAmp);

    // Envelope Structure
    envStruct = Env(
        levels: [0, 1, 0.5, 0.3, 0],
        times: [2.0, 1.0, 3.0, 10.0],
        curve: 'lin'
    );
    // Envelope Instance (doneAction frees the synth when the envelope ends)
    envGen = EnvGen.kr(
        envelope: envStruct,
        gate: 1.0,
        doneAction: 2.0
    );

    Out.ar(bus: 0, channelsArray: amp * pan2 * mod * envGen);

}).send(s);
)

a = Synth("BufferPlay", [\speed, 1, \pan, -1, \mFreq, 1, \mAmp, 1]);



(
var speeds, freqs, amps, waits;
// Shuffled streams: playback speed, modulator freq, modulator amp, wait time
// (4 values x 3 repeats = 12 values each, matching the 12 iterations below)
speeds = Pshuf.new(#[1, 1.5, 2, 2.5], 3).asStream;
freqs = Pshuf.new(#[1, 2, 3, 4], 3).asStream;
amps = Pshuf.new(#[0.25, 0.5, 0.75, 1], 3).asStream;
waits = Pshuf.new(#[0.25, 0.5, 0.75, 1], 3).asStream;

Task({
    12.do({
        Synth("BufferPlay", [ \speed, speeds.next, \mFreq, freqs.next, \mAmp, amps.next ]);
        waits.next.wait;
    });
}).play
)


[1] Christian Haines. "Creative Computing: Semester 1 - Week 9 - Streams II". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 1st May 2008

Week 9 Forum – Student Presentations

Today we had presentations from Seb Tomzak and Darrin Curtis. Seb presented the instrument he built using lasers shone through water. The idea is quite interesting, and I think it has a lot of commercial potential, provided Seb can enclose the water so it is safe to use. It would also be good if he could get rid of the computer, which he did say he was working on.

Seb also talked about Milkcrate, a musical project he has been a part of for a few years now. I was very impressed by the piece Seb showed us, where he hit a milkcrate once and then created an entire piece based on that one recorded sound.

Darrin Curtis has been researching music therapy. I thought his ideas on healing with music were very interesting, and it is something I have looked into in past years. The idea of generating music through crystals was original, and the beating frequencies generated in his piece were very soothing and peaceful to listen to. Impressive work.

Milkcrate


[1] Seb Tomzak & Darrin Curtis, "Music Technology Forum: Semester 1 - Week 9 – Student Presentations ". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15th May 2008

AA3 Weeks 8 & 9 – Live Sound & Acoustics


What do you think the purpose of live recording is? Consider the advantages and disadvantages of this approach.

A live recording is very different to a studio recording in many ways. There is of course the obvious difference that a live recording captures a live performance. Yes, a studio recording also involves recording live musicians, but rather than being a band on a stage, they're sitting in a little booth in a treated room.

When you’re going to see a live act, what is the appeal to go see them live instead of sitting at home listening to the band on a CD? I think if you ask yourself this question the answer is very similar as to the difference between a live verses studio recording.

This is my answer: it's just the vibe, mate.

A live performance usually consists of a massive arena, thousands of screaming fans, a stage full of speakers and instruments. The band has a huge adrenalin rush and is overwhelmed by the huge crowd.

When they start performing their music, that is exactly what they're doing: performing, and not just musically. They are engaging the crowd and creating a vibe. A live recording captures the energy of the performer at that moment. There may even be some bum notes in a guitar solo, but if the recording captures an amazing atmosphere or mood, then that's a positive outcome.

The purpose of a studio recording is to capture the best take. The sound is supposedly perfect, and whilst it does sound good, it is just a different sound to a live recording. I'm not saying a studio recording lacks energy; I am just making a comparison between two different recording environments.

A studio generally has a nice, quiet atmosphere. You perform in an acoustically treated room where you can call for a cup of coffee whenever you feel like it, but you generally can't smoke or take drugs.

Now compare this setting to a mass of screaming, drunken fans; smoking and drinking on stage; blaring guitars cranked to the max and thunderous drums; ear-piercing vocals in your face.

The band could be playing the exact same song, but it will sound noticeably different. This is because a musician's frame of mind is expressed through their performance, and when you feel different, you play differently.

Of course, a jazz or classical performance may not show the same effect in terms of the performers' emotional state. But there will always be that difference.

The advantage of a live recording is that you really do capture that performance. It is a real performance at that point in time. It may not be a perfect performance, but at the end of the day no performance ever is. Provided you have recorded it correctly, it will sound more real.

The disadvantage of a live recording is that it lacks the cleanness of a studio environment. A guitar solo recorded in a studio could be a composite of many takes made to sound perfect. It will be colored using effects to give it the perfect amount of delay or reverb. A vocal line could have warmth added to it, or some pitch correction. Whatever the effect, there are almost no limitations to how you can work with audio in a studio.

A studio recording is aimed at making a professional piece of music so that it can be commercially successful. It will always sound right if the engineer knows what he’s doing.



Where do you think the future of studio and venue acoustics lies? Consider the ideas of physical studio acoustics and virtual studio acoustics.

The future of acoustics lies in the growth of technology. We are at the point now where pretty much anything is possible in terms of music technology. I really don’t think there is much more we can learn about acoustics.

We can already measure the wavelengths of sound sources and study how those waveforms interact with different surfaces. We already know how different surfaces behave with various wave types. We can calculate the standing waves in any recording space and correct the problems they cause. We know about resonance modes. We already know about bass traps, baffles, Helmholtz resonators and many other forms of acoustic treatment. I think as technology grows we will simply get better at measuring these principles.
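
Just to show how routine these calculations are, here is a minimal sketch in SuperCollider (the same language as my Creative Computing patches) that prints the axial standing-wave frequencies for one room dimension using f(n) = n * c / (2 * L). The 6.5 metre room length and 343 m/s speed of sound are example values I have assumed, not measurements of any real room.

(
// Axial room modes along one dimension: f(n) = n * c / (2 * L)
// c = speed of sound (m/s), roomL = room length (m) -- assumed example values
var c = 343, roomL = 6.5;
(1..6).do { |n|
    ("mode % = % Hz".format(n, (n * c / (2 * roomL)).round(0.1))).postln;
};
)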

It may be presumptuous to say there will definitely be no more advances. But when you consider the technology available to us now, I just can't think of what is left to learn. Perhaps in the future there will be better materials for acoustically treating rooms (e.g. a replacement for rockwool). But we do have better materials now; they just cost more.

The way I see it, the way in which we treat acoustics will just become easier and more convenient. That has been the trend for many technological advances.

In terms of software, I feel it will continue to improve. Convolution reverbs will get better over time and continue to sound more real. I don't know if they will ever sound as real as actual reverb, but we are at the point now where it is pretty close.

Acoustic emulation in software can be very pleasing and is definitely more convenient than using actual reverb. It is, of course, much cheaper too. Where I see the real advantages and advances is in reverbs that can create an environment that is either impossible to record or doesn't exist.

What does it sound like to sit in a room with 15 walls? Who knows. But with the right algorithm you can generate the right pre-delays and reverb times to replicate that environment. How much would it cost to build such a room? It's not even worth considering.
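
As a rough sketch of what I mean, here is how you could fake such a space in SuperCollider: a handful of early-reflection pre-delays feeding a simple reverb tail. Every tap time and gain below is invented purely for illustration; none of them are derived from any real 15-walled geometry.

(
{
    // Sparse impulse train as a dry test source
    var dry = Impulse.ar(0.5) * 0.5;
    // Invented early-reflection tap times (seconds) and gains
    var taps = [0.013, 0.019, 0.027, 0.034, 0.048];
    var gains = [0.7, 0.55, 0.45, 0.35, 0.25];
    var early = Mix(taps.collect { |t, i| DelayN.ar(dry, t, t, gains[i]) });
    // A diffuse tail behind the early reflections
    var tail = FreeVerb.ar(early, mix: 1, room: 0.8);
    (dry + early + (tail * 0.4)) ! 2
}.play;
)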



don't you just love technology!


[1] David Grice. "Audio Arts: Semester 1 - Week 8/9 - Live Sound & Acoustics". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 16th May 2008

[2] "KRK Systems Releases Ergo, Powerful Room Audio Correction System". 2008. http://www.krksys.com/news_pr011708.php (Accessed 22nd May 2008)

[3] Multimedia, IK. "Advanced Room Correction System - Chapter 1 - Arc Overview". 2008. http://www.ikmultimedia.com/arc/download/ARC-1.0-UsrMan-CD.pdf (Accessed 22nd May 2008)

[4] Stavrou, Michael Paul. "Stav's Word: Recording a Live Gig". Audio Technology 45: 64-66

CC3 Week 8 - Streams (I)


I used my SynthDef from week 6 in this week's exercise. I have generated a very basic sequence using Pseq. This week I really just tried to get my sequence to work; in later weeks I want to experiment further and create more exciting sounds. Click here for my audio example [12KB].


(
SynthDef("SinTone", {

    arg cFreq = 500, cAmp = 0.5, mFreq = 3, mAmp = 0.75;

    var carrier, modulator, envStruct, envGen;

    // Sine carrier, amplitude-modulated by a slower sine
    modulator = SinOsc.ar(freq: mFreq, mul: mAmp);
    carrier = SinOsc.ar(freq: cFreq, mul: modulator * cAmp);

    // Envelope Structure (ends at 0.6, so each note is cut off sharply)
    envStruct = Env(
        levels: [1.0, 0.35, 0.6],
        times: [0.1, 0.2],
        curve: 'welch'
    );
    // Envelope Instance (doneAction frees the synth when the envelope ends)
    envGen = EnvGen.kr(
        envelope: envStruct,
        gate: 1.0,
        doneAction: 2.0
    );

    Out.ar(bus: 0, channelsArray: carrier * envGen);

}).store;
)

(
var a, b;
// Stream of carrier frequencies (MIDI note numbers converted to Hz)
a = Pseq(#[60, 61, 63, 65, 67, 63], inf).asStream.midicps;
// Stream of wait times between notes
b = Pseq(#[0.3, 0.4, 0.5, 0.6, 0.7, 0.8], 5).asStream;
Task({
    12.do({
        Synth("SinTone", [ \cFreq, a.next ]);
        b.next.wait;
    });
}).play
)



[1] Christian Haines. "Creative Computing: Semester 1 - Week 8 - Streams I". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 24th April 2008

Week 8 Forum – Peter Dowdall Presentation

This week’s forum was, without a doubt, the best forum I’ve attended in my years at this university. It was great to listen to someone’s life experience in the industry and his honest opinions on working with commercial artists.

Peter Dowdall has worked with many artists, including Britney Spears. It was interesting to see how even the most professional artists are not conscientious about such important recording practices. I’m talking about his example of an mp3 region being inserted into a ProTools session. That did surprise me quite a bit.

Dowdall’s career is pretty much my dream job. It was interesting to see how other students, notably John, didn’t share my enthusiasm for it as a career goal. Dowdall’s job did sound stressful at times, but I wouldn’t trade the opportunity to work with professionals for anything.


who wouldn't want Peter's job?


[1] Peter Dowdall, "Music Technology Forum: Semester 1 - Week 8 – Sound Engineering". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 8th May 2008

[2] wallpaper, Britney Spears. www.skins.be/britney-spears (Accessed 9/5/8)

CC3 Week 7 - Synthesiser Definitions II


//PART ONE - CODE BORROWED FROM JOHN DELANY


(
SynthDef("IndustrialAmbience", {
    arg freq = 100, rate = 0, contFreq = 100, i = 0;
    var osc, hit, sin, sinDec, envStruct, envGen;
    // Percussive impulses (not routed to the output in this version)
    hit = Decay2.ar(Impulse.ar(rate, 0, 0.3), 0.03, 0.3);
    sinDec = Decay2.ar(Impulse.ar(rate, 0, 0.3), 0.01, 0.3);
    sin = SinOsc.ar(freq: contFreq * (i + 2), mul: sinDec);
    // Bank of ringing filters excited by clip noise
    osc = Klank.ar(`[Array.series(10, 50, contFreq),
        Array.series(10, 0.8, -0.1)], ClipNoise.ar([0.005, 0.005])
    );

    // Envelope Structure
    envStruct = Env(
        levels: [0.1, 1, 0],
        times: [1.0, 3.0],
        curve: 'welch'
    );
    // Envelope Instance
    envGen = EnvGen.kr(
        envelope: envStruct,
        gate: 1.0,
        doneAction: 2.0
    );

    Out.ar(0, osc * envGen); // Outputs to busses 0 and 1
}).send(s);
)

a = Synth("IndustrialAmbience", [\contFreq, 10000, \rate, 9]);






//PART TWO - MY SYNTH DEF



b = Buffer.read(s, "sounds/Producers @ The Electric Light Hotel.wav");

// Here is the sample for part two.



(
SynthDef("BufferPlay", {

    arg speed = 1, amp = 1, pan = 0, mFreq = 1, mAmp = 1;

    var sig, pan2, envStruct, envGen, mod;

    // Loop the buffer, scaling the playback rate by speed
    sig = PlayBuf.ar(1, b.bufnum, BufRateScale.kr(b.bufnum) * speed, loop: 1.0);
    pan2 = Pan2.ar(sig, pan);

    // Amplitude modulator
    mod = SinOsc.ar(freq: mFreq, mul: mAmp);

    // Envelope Structure
    envStruct = Env(
        levels: [0, 1, 0.5, 0.3, 0],
        times: [2.0, 1.0, 3.0, 10.0],
        curve: 'lin'
    );
    // Envelope Instance (doneAction frees the synth when the envelope ends)
    envGen = EnvGen.kr(
        envelope: envStruct,
        gate: 1.0,
        doneAction: 2.0
    );

    Out.ar(bus: 0, channelsArray: amp * pan2 * mod * envGen);

}).send(s);
)

a = Synth("BufferPlay", [\speed, 1, \pan, -1, \mFreq, 1, \mAmp, 1]);
a = Synth("BufferPlay", [\speed, 2, \pan, 1, \mFreq, 1, \mAmp, 1]);
a = Synth("BufferPlay", [\speed, 4, \pan, 0, \mFreq, 3, \mAmp, 1]);



// PART 3 - MIDI control

(
// Initialise the MIDI client and list the available sources
MIDIClient.init;
MIDIClient.sources;
// Connect input port 0 to the first source
MIDIIn.connectByUID(inport: 0, uid: MIDIClient.sources.at(0).uid);
)



(
MIDIIn.noteOn = {

    arg src, chan, num, vel;

    [chan, num, vel].postln;

    a = Synth("BufferPlay", [\speed, 1, \pan, -1, \mFreq, 1, \mAmp, 1]);
    // Map the note number (0-127) linearly onto a playback speed of -11 to 10
    a.set(\speed, num.linlin(0, 127, -11, 10).postln);
};
MIDIIn.noteOff = {};
)
(
MIDIIn.control = {

    arg src, chan, num, val;

    [src, chan, num, val].postln;

    // Scale the controller value (0-127) to a modulator frequency
    a.set(\mFreq, (val * 1.5));
};
)


// PART FOUR - Sound example of PART TWO



[1] Christian Haines. "Creative Computing: Semester 1 - Week 7 - Synthesiser Definitions II". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 17th April 2008

Week 7 Forum - Tristram Cary

[2]

Before today I didn't know much about Tristram Cary. I feel quite lucky that one of the most successful electronic musicians of all time played such a big part at this university. Stephen's presentation on his life was quite interesting, and I appreciate the work he did.

The VCS3, which Cary helped design, was a great invention, and even after so many years it still has a distinctive analogue sound. I could see myself using that synth in the music I make now. I found it very interesting when Stephen told us the story of how Cary used post-World War II devices to build his instruments.

Where are the innovative musicians of today? I guess the next step is to build an instrument out of nuclear weapons or instruments of mass destruction . . .


[1] Stephen Whittington, "Music Technology Forum: Semester 1 - Week 7 – Tristram Cary". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 1st May 2008

[2] Electronic Music Unit, University of Adelaide. www.emu.adelaide.edu.au (Accessed 2/5/8)

Monday, May 05, 2008

AA3 Week 7 - Location Recording

To put it nicely, the Marantz recorder is not worth its weight in plastic. Every recording we made today was filled with hiss. I considered trying to use plugins to mask the artifacts, but since that is not part of this week’s exercise I didn't think it was worth my time.

This week Jake, Luke and I recorded a piano in the Hartley Concert Room. We tried six different microphone configurations. Our pianist was Ben Bamford.

The first spaced pair was a typical A-B setup with the microphones 1.5 meters apart. I think this technique worked quite well: the sound was full, and the frequencies picked up complemented the piano’s tone. My only criticism is that there’s hardly any reverb in the recording.

The second spaced pair was set up two meters apart with one microphone pointing up towards the ceiling. I also like this tone and the room sound was captured quite well.

The third spaced pair was the same as the second but with both microphones pointing down, positioned higher than the first pair. The sound is full with some good reverb, but I still prefer the second spaced pair.

We then tried recording the resonant sounds of one piano excited by playing another. This sounded interesting, and I commend Jake for his imagination.

The final two recordings experimented with recording from outside the door. I think with a better quality recorder it could sound much better. It’s still an interesting idea.


preparing the setup during our audio arts class. . .


AUDIO
Spaced Pair I [788KB]
Spaced Pair II [872KB]
Spaced Pair III [664KB]
Inside Another Piano [616KB]
One Mic Outside [856KB]
Two Mics Outside [1.8MB]


[1] David Grice & Christian Haines. "Audio Arts: Semester 1 - Week 7 Location Recording". Lecture presented at Elder Hall, University of Adelaide, South Australia, 29th April 2008

[2] Robjohns, Hugh. "The Choir Masters: Recording a Choir on Location: Part 1". 1999. Sound on Sound. http://www.soundonsound.com/sos/sep99/articles/location.htm (Accessed Sunday 4th May 2008)

[3] Robjohns, Hugh. "The Choir Masters: Recording a Choir on Location: Part 2". 1999. Sound on Sound. http://www.soundonsound.com/sos/oct99/articles/location.htm (Accessed Sunday 4th May 2008)

[4] Robjohns, Hugh. "Recording a Jazz Band on Location". 2003. Sound on Sound. http://www.soundonsound.com/sos/Jul03/articles/recordingjazz.asp?print=yes (Accessed Sunday 4th May 2008)

[5] White, Paul. "Recording Bands on Location". 2003. Sound on Sound. http://www.soundonsound.com/sos/Mar03/articles/recordingbands.asp (Accessed Sunday 4th May 2008)