SONIC DESIGN - FULL EXERCISES

 

Week 1 - Week 4
Task 1 - Exercises
Siti Zara Sophia Binti Mohammad Reeza (0359881)
Bachelor of Interactive Spatial Design (Honours)



INSTRUCTIONS


WEEK 1

Lecture


In this lecture, we covered the nature of sound: how it can be captured, how it can be processed, how to analyse and use it, how to convert it to the digital world, and Pro Tools. There's a multitude of software that can be used for sound engineering and design (especially in studios), but since this module focuses on sound design, we'll be using Adobe Audition. Sound isn't just something we hear; it's the vibration of air molecules stimulating our eardrums. There are a few phases of sound: production (how the vibrations are produced), propagation (how it travels and through what medium) and perception (the signal as translated by our brain).



Next, we delved into the human ear. I learned that specific notes trigger specific areas of your cochlea: the more notes you play, the more notes you hear simultaneously. The lecture showed a video demonstrating how sound works using a slinky toy, which I personally found really helpful for visualising how sound vibrations actually work. The video covered how sound travels through different media such as solids, liquids and gases. Sound waves are longitudinal waves (meaning the particles vibrate parallel to the direction the wave travels), and sound travels fastest through solids.



As sound travels to our ears, we (naturally) perceive it in a certain way. This is called psychoacoustics. How we respond to sound depends on the pitch, loudness and timbre of the sound vibrations, making you feel something. For example, the sound of traffic can make you feel stressed when on the road; that's psychoacoustics. Next, we briefly jumped into the properties of soundwaves, defining wavelength (the distance between two corresponding points on a wave), amplitude (the height, aka the loudness, of the wave) and frequency (the number of cycles per second: the more cycles, the higher the frequency).
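Since wavelength and frequency are both defined here, it's worth noting the tiny bit of maths connecting them. A minimal sketch, assuming sound travelling through air at roughly 343 m/s (the function name is just my own):

```python
# Wavelength = speed of sound / frequency.
# Assumes air at room temperature, where sound travels at about 343 m/s.
SPEED_OF_SOUND_AIR = 343.0  # metres per second (approximate)

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Return the wavelength in metres for a given frequency in Hz."""
    return speed / frequency_hz

# A concert-pitch A (440 Hz) is only about 0.78 m long in air,
# while a deep 20 Hz rumble stretches to roughly 17 m.
print(wavelength(440))  # ~0.78
print(wavelength(20))   # ~17.15
```

This also explains why bass travels around corners and through walls so easily: those multi-metre waves are simply too big for everyday obstacles to block.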



Then, we delved into the properties of sound. There are six categories as shown above: pitch, loudness, timbre (the quality of the sound), perceived duration (the pace of the sound, fast or slow, based on how you feel), envelope (the structure of the sound: when it gets louder, softer, when it maintains its volume, etc.) and spatialisation (where the sound is located, its placement). Frequency is measured in hertz (Hz). Humans are generally only able to hear between 20 Hz and 20 kHz, though this varies with gender, health and so on. Lower-frequency sounds have more bass compared to higher-frequency sounds, which have more treble (sharper sounds). This is especially prominent when listening to music.


Additionally, I watched this video that Mr Razif shared:



I found this video very helpful for the upcoming (pretty tough) exercise. It taught me how to recognise what an audio file needs more of based on vowel sounds. I learnt that each band from about 250 Hz to 4 kHz has its own unique vowel sound when you listen to it: 'OOO', 'OH', 'AAA', 'AH' and 'EEE' (in ascending order). This really helped me identify what the audio files in the exercise below needed, and even then it still wasn't an easy ride! Ear training takes time and practice, so I'm sure if I keep at it throughout the semester, I'll hopefully get better at identifying and editing the appropriate sounds.

Exercise

In this task, the objective was to equalise the sounds so they all sound the same as the original. We were given one flat audio file (the original) alongside four equaliser files we were meant to edit in Adobe Audition. Below is the overall look of the software, the original audio given and my edits for each file.




The above is what Adobe Audition looks like. In this image, the software is in Multitrack mode as opposed to the default Waveform mode. In this mode, we can view and compare multiple audio files at once and even play them at the same time, which I found really helpful! If I wanted to focus on one specific file, I could press 'S' to solo it, or double-click to open the file in Waveform mode.


For all the EQ tracks, we were supposed to replicate the 'flat' file and make our edits as close to it as possible. You can listen to the flat file below:




I found it pretty hard to decode the sounds from each audio file to make them as similar as possible to flat, even though the ear training video did help out (I probably listened to flat one too many times HAHA). But alas, I tried my best with the equipment I had (consumer Edifier headphones)! Hopefully, when listened to with studio headphones, it'll still sound close, because it did on the Edifier ones. Anyways, here are my EQ edits:


EQ 1:


EQ 2:


EQ 3:



EQ 4:


WEEK 2


Lecture




In this lecture, we covered the basic tools and steps used in sound design: layering, time stretching, pitch shifting, reversing and mouthing it.


Layering

Like in graphic design, you use layers and blending to make sounds richer, with more depth and higher quality. You do this by taking two or more sounds and combining them in your sound design software, just like adding extra oomph to an artwork to make it more interesting and better than before! To layer well, you need a good ear to make sure the layered sounds jive together.
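Under the hood, layering is just summing sample arrays and keeping the result within range. A minimal sketch using two generated sine tones as stand-ins for real recordings (all variable names here are my own, not from any particular tool):

```python
# Layering sketch: mix two sounds by summing their sample arrays,
# then normalise so the combined signal doesn't clip.
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)  # one second of time

low_thump = np.sin(2 * np.pi * 80 * t)      # deep layer for body
bright_ping = np.sin(2 * np.pi * 2000 * t)  # bright layer for sparkle

mix = low_thump + bright_ping
mix = mix / np.max(np.abs(mix))  # normalise into the -1..1 range

print(np.max(np.abs(mix)))  # 1.0, i.e. safely within range
```

In Audition you'd do this by stacking clips on separate tracks in Multitrack mode and balancing their levels; the summing is the same idea.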


Time Stretching

Time Stretching (aka Time Compression) is the process of literally stretching or compressing a sound without changing its pitch. For example, when speaking, if you streeetchhhh your pace when saying words and sentences, you will naturally be slower! It's just like that in sound design too! With compression, it's the opposite. One of the most common uses of time stretching and compression is when you're watching a long, boring lecture and just can't help but click that juicy 2x speed button. But then you realise it's WAY too fast to keep up with, so you bring it down to 0.75x! Time stretching and compression change the pacing/tempo/speed of the audio but not the pitch. You can do this to speech, but err on the side of caution, because if it's too fast the person could end up sounding like some kind of alien. For sound effects, however, it's 10000% dependent on what you wanna do!
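The lecture-at-2x example is just division: the new duration is the original duration divided by the speed factor. A quick sketch (the function name is my own):

```python
# Duration maths for time stretching/compression.
# Pitch stays untouched when a proper time-stretch algorithm is used;
# only the duration changes by the speed factor.
def stretched_duration(duration_min: float, speed: float) -> float:
    """Return the playback duration after a speed change."""
    return duration_min / speed

print(stretched_duration(60, 2.0))   # 30.0 -> a 60-min lecture at 2x
print(stretched_duration(60, 0.75))  # 80.0 -> at 0.75x it gets longer
```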


Pitch Shifting

This is the ability to change the pitch without changing the speed or length of the audio. So yes, your Alvin and the Chipmunks or Minecraft Zombie dreams can come true almost instantly! A higher pitch will make the sound thinner, smaller and, well... higher, whereas a lower pitch will make the audio sound bigger, with more prominent bass and, of course, a lower pitch.
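Pitch shifts are usually measured in semitones, and in equal temperament each semitone multiplies the frequency by the twelfth root of two. A small sketch of where a shifted note lands (the helper function is hypothetical, not an Audition API):

```python
# Each semitone multiplies frequency by 2 ** (1/12).
def shifted_frequency(freq_hz: float, semitones: float) -> float:
    """Return the frequency after shifting by the given semitones."""
    return freq_hz * 2 ** (semitones / 12)

# Shifting A440 up an octave (12 semitones) doubles it to 880 Hz
# (chipmunk territory); down 12 halves it to 220 Hz (bigger, bassier).
print(shifted_frequency(440, 12))   # 880.0
print(shifted_frequency(440, -12))  # 220.0
```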


Naturally, there's a correlation between the sound pitch and the type of thing or character the audience might associate with it. In a way, I believe this is related to the psychoacoustics we covered in week 1!

Just think about it... 




You're watching Godzilla vs. Gigan (1972) (one of, if not the only, movie where Godzilla talks). Godzilla opens his mouth to speak after a spine-chilling roar and ends up sounding like Theodore from Alvin and the Chipmunks... It doesn't fit or feel right, right? Maybe for comedic purposes, but not in a serious battle scene...


When we hear a higher-pitched sound, we tend to think it comes from a smaller, cuter subject like a toddler! On the other hand, with lower-pitched sounds, we tend to imagine a bigger subject like a giant, monster or dinosaur!


Reversing

It's as the title says: reversing an audio clip to create a newer, weirder, unnatural sound. This combined with layering (hehe) would be a match made in audio heaven!
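Under the hood, reversing really is just flipping the order of the samples. A toy sketch with a made-up five-sample buffer:

```python
# Reversing = playing the samples back to front.
import numpy as np

samples = np.array([0.0, 0.5, 1.0, 0.5, -0.5])  # toy "attack then decay"
reversed_samples = samples[::-1]                 # now it swells in instead

print(reversed_samples)
```

In Audition this is the Reverse effect; the slice above is just the same operation on a raw buffer.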


Mouth It

Mouthing It is a more organic and interesting technique that almost everyone can do (even without any sound design background, or even software!). We use it when there's no other source or way to produce a certain sound. Surprisingly, a ton of film productions use Mouth It to create the sound effects needed for a specific character or scene!


Exercise

This week, we were to explore the lands of REVERB in sound design! Using the reverb effect and the parametric equaliser, we were to create six sounds (a telephone, in the closet, a walkie-talkie, a bathroom, a stadium, and an airport or train station) using the sample voice provided by Mr Razif. The first three sound effects were to use just the parametric equaliser, while the last three were to use both the parametric equaliser and reverb. Here's how mine went...

Telephone

Characteristic

Typically, the audio transmitted through a telephone is not only lower quality than speaking to someone directly, it also lacks two things: bass and sharpness. To replicate it, I lowered both the sharp and bass frequencies. The middle frequencies were raised to make the audio quality seem worse than before, adding a sort of soft harshness to the sound like a real telephone!


In The Closet


Characteristic

Now, it's pretty hard to imagine what someone would sound like in a closet unless you watch horror movies frequently or you've been in one yourself (though I certainly hope not! If you have, I hope you're okay!!!). Nonetheless, you can easily mimic what that scene would sound like by fully covering your mouth and talking. The sound is muffled for sure, and to design sound in that manner, you gotta remove all sharpness as seen above! I also lowered the bass to produce a lower-frequency sound, which is what happens when something (a closet, for example) is blocking the audio's transmission path.


Walkie Talkie


Characteristic

Ahhh, the walkie-talkie... This one genuinely HURT my ears to replicate. The process was similar to the phone, lowering the bass and the sharpness, but boy oh boy, the harshness level quite literally transcended realities!


Bathroom (first try)



Bathroom (2nd try)

Characteristic

I made two attempts at the bathroom because I wanted to explore and play around with the different settings! When talking from a bathroom, I pinpointed that the sound would mainly be muffled, almost foggy, so I tried to replicate that kind of effect! Additionally, being in a room, there's bound to be some bounce within the space, hence I added reflections and wetness in the reverb!

Stadium (redo — less muffled, more walkie-talkie like, more bounce, similar to airport)


(I accidentally added two reverbs and didn't notice until I exported everything...)


Characteristic

in progress...

Airport


Characteristic

in progress...


WEEK 3


Lecture


Through this video, I learnt more about how sound design is the unsung hero of film-making and video production in general. As we've learnt before, sound can convey so many things to the viewer: it shows us more about characters, it helps us feel things while we watch, and it can even be used as a plot point. Just think about it... A horror movie, for example, thrives on sound design. If you're scared in the cinema, curled up against yourself while watching The Conjuring 2, just cover your ears as you watch... Suddenly, Valak the Nun doesn't seem so scary anymore, right? This goes to show the sheer power of sound and how it affects and shapes our viewing experience.


Now, it's time to explore and delve deep into the insights of sounds in the storytelling production process. 


Diegetic is derived from the word diegesis, which means the world of the film and everything in it. In simpler terms, diegetic sounds are everything the characters can experience within their world. Sounds can be divided into three zones, two of which are acousmatic. The first acousmatic zone covers sounds we hear but can't see, for example the sound of ominous footsteps slowly approaching the main character. The other is non-diegetic: sounds a character can't perceive in their world, meaning only the audience watching experiences them. A common example is the movie soundtrack (but we'll get into the details later...). Non-diegetic elements can also be visual, such as title cards and time stamps that help the audience further understand what they're watching; essentially, material taken from sources completely external to the main diegesis. The last zone is the visualised zone, where the source of the sound is visible on screen. Some sounds can even switch zones within a story.


Diegetic Sound

Differentiating between the two is pretty simple: if the characters can hear it, it's diegetic; if they can't, it's non-diegetic. Diegetic sounds range from the atmosphere and environment they're in, like rain, to dialogue and even some voiceover! If the voiceover communicates the character's own thoughts, it's considered internal diegetic sound. The role of diegetic sound is to build the world around the character, and it can have a massive impact on the overall story: sounds heard offscreen help the audience establish the setting and expand the world beyond what we see. Audiences generally expect diegetic sound to always be there and to always be what's expected. However, if the director deviates from this, the result (when used correctly) can become a focal point in the story. A good example is in The Last Jedi, when silence speaks louder than sound. Diegetic sounds can also be manipulated so we hear what the character hears, or, when a character's mental state is compromised, sound can immerse the audience in the character's mind.


Non-Diegetic Sound

Non-diegetic sound includes musical tracks, sound effects and some narration (provided the narrator doesn't play a role in the film). Non-diegetic narration can be a good callback to traditional storytelling, but be careful, as it can break the illusion of the story. Non-diegetic sound can enhance motion and movement, emphasise a joke in a comedy and, of course, add to the film experience through the musical score. Without non-diegetic sound, Up wouldn't be so heart-wrenching, Harry Potter wouldn't feel so magical, and stories would not be told to their full potential.


Secret 3rd Option?!

Now, what happens when sounds decide to switch between these two modes? Well, fear not, because this is called trans-diegetic sound! It enhances the storytelling by subverting the audience's expectations, like when something we assume is non-diegetic becomes diegetic. This is done by playing with our expectations through music and sound effects, for instance, to add an extra quality or smooth out certain sections of the story.


Switching between the two zones can be a way for the director to blur the lines between fantasy and reality: for example, when a character can suddenly hear and even interact with the story's narrator. There are exceptions to the rule, as some sounds don't fit into these categories at all thanks to meticulously planned, creative choices. One moment that's hard to classify is in Joker, where Arthur quietly sings along to music we assume is non-diegetic... Is the music in his own head? What exactly is going on? These nuances all come together to create excellent use of sound to tell a story.


Exercise


Class Exercises

In class, we did a few exercises using editing features Mr Razif had newly introduced to us: namely, how to control volume, direction and effects using stereo balance and automation envelopes! We were to do a jetplane, a shorter jetplane, a girl walking away and a girl walking away into a cave, to familiarise ourselves with and play around with the editing features!


Jetplane (Directional Only)




Jetplane (Directional + Volume Envelopes)



A Girl Walking Away





A Girl Walking Away Into A Cave 

in progress...


Environment Soundscape Exercises


For this exercise, we were to create a soundscape environment based on two images given to us by Mr Razif! This is meant to be a little practice run before our next major assignment, where we'll have to create the setting of a story and craft its sounds ourselves!


To help me (somewhat) organise my story for each image and what I wanted to do, I created a Google Docs table for both images and tried to lay out what sounds I needed for the stories I wanted to portray! Then, I went on good ol' Adobe Audition and actually started the process of creating each soundscape!


Laser

https://drive.google.com/file/d/1bSwJmNXJhgCLRrqvgbHJRfOt6R06fqvo/view?usp=sharing


I've always enjoyed storytelling and creating new stories so I was pretty excited to learn a new way to actually let audiences experience what I want them to experience without actually seeing anything! Luckily, the first image for this task was pretty straightforward story-wise. The soundscape story I wanted the audience to visualise when listening was this:


A group of scientists in their lab are testing out their newest invention—their laser, said to be powerful enough to dig through the Earth. However, as the laser is still in its testing phase, it seems as though the laser isn’t as ready to release to the world as they thought…


Thus, based on that, I compiled all the necessary audio files I could potentially use within the piece!


Step 1: I downloaded and compiled all my files, put them into categorised folders and brought everything into Adobe Audition, naming everything by the type of sound! I mainly tried to go by the order I listed above, but I did accidentally reorder a few things!


Step 2: After importing everything, it was time to edit each original audio file to suit the soundscape I had in mind! I did this before placing and cutting the files, as some of the audio files were shorter than others and I wanted some to play on loop, so I decided to edit everything first, then cut or elongate it into the proper timeframe! For this, I mainly used directional (stereo balance) adjustments, parametric EQ, volume, reverb, stretch and pitch, and automation envelopes!


Step 3: Cut and place! After editing (which took the longest, as I had exactly 20 audio files), it was time to relocate everything! For certain sounds (like the laser charging up), I had to make sure they all aligned so I could layer them! I had three predominant categories when it came to cutting and placing:

  • Background (which was constant throughout)
  • People
  • Laser

So each component of the specific category would have to run for a set amount of time based on the soundscape story!


Step 4: Double check if I liked how everything sounds and adjust accordingly!


Step 5: Export and tadaaaa! It's done! Here's the final audio soundscape!


Alien

https://drive.google.com/file/d/1y84-g8st3B5SfQYb_OJ64TNs0nmQXxrS/view?usp=sharing


I definitely had a muchhh harder time coming up with a story for this still image compared to the last... I mean, you look at the image at a glance and think, "What is happening here...???" So you gotta use a TON of imagination to make whatever you think is happening come to life auditorily! It took me a while before I got a solid idea I wanted to try. Inspired by the animanga Kaiju No. 8, in particular this image:

I decided that the things in the glass containers were different types of aliens plotting a way out! So, based on that, this is the soundscape story I came up with:


Isolated on a deserted island lies Earth’s most dangerous prisoners—man-made aliens. Trapped in a military base, these aliens have nowhere to go, helpless as they look through the glass. Taunting the onlookers was how they spent their days. That was until one naive young guard loses his cool and all hell breaks loose…


And I did the cycle again: figure out which sounds I needed, chronologically list a storyline that makes sense, and the works. With my story in place and sounds at the ready, it was time to create an out-of-this-world soundscape (pun intended).


I repeated the same trusty steps above!


Downloading all my files:


Inputting everything into Adobe Audition:


Edits (using reverb, parametric equaliser, automation envelopes, directional sound and volume strength, stretch and pitch) + Placing and Cutting:


Final version:


Overall Thoughts

Honestly, I had so much fun doing this exercise! Yes, it was time-consuming. Yes, it was tedious to the point where I didn't want to do it at times, but listening to and experiencing the final result after all that work is so satisfying and so worth it! I can't wait to continue and see what else Sonic Design has in store for me!

Here's a link to all the sounds I used!

https://drive.google.com/drive/folders/1hwk5F1olifQ4bRe0bEzi-sVcVZHxH4S3?usp=sharing


WEEK 4

Lecture


In this lecture, we talked all about soundscapes! Which begs the question: what is a soundscape? Well, a soundscape is to sound what a landscape is to land. Essentially, a scenery created by, well, sound.

"Wait what...??? How does that even-?!" 

Shhh, let me explain! Imagine the sound of trees rustling; birds cheerfully chirping in the bright, sunny sky; kids playing loudly and happily nearby; and the slight sound of munching to your left.

What do you think this is? Well, no time to think, because I'll tell you: I'm describing the scene of a picnic at the park using just the sounds relating to that scenario! Our brains naturally recognise sounds and produce a scene to fit them. This is because things naturally make sounds, and our brains naturally associate a certain sound with a thing. So all these sounds suggest something, and they come together in our brain to make a scene!


Soundscapes can push the boundaries of what people usually think is possible with sound. For example, they can communicate distance, space, direction, temperature, weight, time and era, emotion and even concepts like nostalgia. Music can also be part of a soundscape, as music is crucial in certain scenarios, like creating the soundscape of a marching band!


However, it's key to know that there are differences between sounds: some are instinctual, some are learnt. For example, 'Ka-chiing!' is a sound we recognise as pertaining to money, like cash registers or money flowing into your bank account. This is because media (films, TV, ads, games, etc.) usually use this sound effect in relation to, well, money. Hence, this is a learnt sound.

Instinctual sounds cover some of the things we've discussed previously, such as pitch! Higher-pitched noises are generally related to things that are cute and safe, like babies and Alvin and the Chipmunks (though some of their shenanigans are FAR from safe!), while lower-pitched sounds, like zombies and sometimes your "friendly" neighbourhood villains, generally convey danger and are naturally intimidating. This can easily be chalked up to the fact that humans and animals are conditioned to protect those with higher-pitched voices (like their children) and fear those with lower-pitched ones (like a big bad wolf or any other intimidating, growling predator that might come to mind)!
