Approach:
This week marked the beginning of the next 4-5 weeks of further developing my visual-audioizer project. I also spent time this week working toward writing for a Graduate Fellowship (pg 14-15): Link: Global Arts & Humanities Brochure. Attached above is the paper I submitted, as well as the current project proposal for the next few weeks. During this week I also attempted to find more scholarly articles dealing with sound and animation - it's been somewhat difficult finding relevant information that can be used toward my research. I seem to keep returning to Paul Wells, Norman McLaren, and Peter Greenaway, and I recently found Rebecca Coyle. I also took this week as a chance to familiarize myself more with the topic I'm working with. As stated in my previous journal, I decided that the relationship between my sound and visuals should be one of mutual reliance, rather than one that is merely equivalent/synonymous/related.

Choices Made:
This week I decided to further evaluate how to split up the amplitude of sounds within my patch. First, I had to determine the width/height of the composition that the patch was rendering and send that information to the synthesizer. To clarify the chain: find/set the dimensions of the video/webcam >> find/track the blobs on the screen >> find the total number of visible blobs on the screen >> send the data for individual blobs to a scalar value >> attribute this to a number and eventually an amplitude adjustment >> attribute this to a division operator that equally distributes a total amplitude range across the total number of blobs detected on screen >> output the data to the synthesizer, which shows the corresponding blob on another screen. Easy enough, right? (A rough code sketch of this chain appears further below.)

Inspirational Sources:
Unfortunately, I haven't had the best of luck when it comes to finding information about this subject. Most of what I find deals with experiments that individuals have carried out, but not with the synonymous/ambivalent relationship between visuals and audio. I did find one article by Rebecca Coyle, who examines animation as an audio-visual film form, though she focuses more on the importance of sound within animated films. Link: "Drawn to Sound". A quote from her article (pg. 4): "Sound cannot be freeze-framed in the same way that images can be presented on the page, despite the best efforts of musicologists to capture dynamic elements by notating melodies and arrangements. Sound is constant movement." I love the closing sentence about sound being "constant movement", because in both a literal and a figurative sense, this is true. Sound is only created in an environment in which motion is possible and noticeable. If we could move a hand back and forth in front of our face quickly enough, we would eventually create a sound; in the same sense, just clapping our hands or stomping a foot creates enough of a vibration for us to hear. All sound is created by motion, but not the other way around (I think). This article also has a slew of other references that I will eventually address to find a stronger connection between audio and visuals in the animation realm.

Questions Raised & Needs:
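Before moving on to next steps, here is a minimal sketch of the amplitude-splitting chain described under "Choices Made" above. The patch itself is built in a visual patching environment, so this Python version is only an illustration under stated assumptions: the function name, the (x, y) blob centroids, and the total_amplitude budget are hypothetical stand-ins for the patch's actual objects.

```python
# Hypothetical sketch of the amplitude-splitting chain described above.
# Names and data shapes are illustrative, not taken from the actual patch.
def split_amplitude(blob_positions, frame_width, frame_height, total_amplitude=1.0):
    """Distribute a total amplitude range equally across all visible blobs.

    blob_positions: list of (x, y) centroids reported by the blob tracker.
    frame_width / frame_height: dimensions of the rendered composition,
    used to normalize each blob's position to a 0-1 scalar.
    """
    blob_count = len(blob_positions)
    if blob_count == 0:
        return []  # no blobs on screen -> nothing to send to the synth

    # Division operator: each blob receives an equal share of the total range.
    per_blob_amplitude = total_amplitude / blob_count

    messages = []
    for x, y in blob_positions:
        messages.append({
            "x_scalar": x / frame_width,    # horizontal position as a 0-1 scalar
            "y_scalar": y / frame_height,   # vertical position as a 0-1 scalar
            "amplitude": per_blob_amplitude,
        })
    return messages

# Example: three blobs in a 640x480 frame each get 1/3 of the amplitude range.
print(split_amplitude([(100, 200), (320, 240), (600, 50)], 640, 480))
```

In this form, the "data for individual blobs" is simply a list of per-blob values that a synthesizer stage could read one blob at a time.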
Next Steps:
I need to do a little catch-up in terms of creating the synthesizer (as mentioned in my previous post). I have started re-working what I already had, because my previous attempt at implementing it required all the values to work through the synth simultaneously before it would output a sound. I need to have some set values in place so that I can work with the synth; pre-determining these values before switching them out for dynamically changing ones will help me build a more coherent synthesizer. (A quick sketch of this "fixed values first" idea is in the postscript below.)

-Taylor Olsen
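P.S. As a rough illustration of the "fixed values first" idea above, here is a minimal sketch in Python. My actual synth lives inside the patch; the class, parameter names, and default values below are hypothetical and purely illustrative.

```python
import math

# Hypothetical sketch: the synth starts from hard-coded defaults so it can
# produce sound immediately, and the blob tracker's dynamically changing
# values are swapped in later via update().
class SimpleSynth:
    def __init__(self, frequency=440.0, amplitude=0.5, sample_rate=44100):
        self.frequency = frequency      # pre-determined pitch (Hz)
        self.amplitude = amplitude      # pre-determined loudness (0-1)
        self.sample_rate = sample_rate
        self._phase = 0.0

    def update(self, frequency=None, amplitude=None):
        """Swap the static defaults for dynamically changing values."""
        if frequency is not None:
            self.frequency = frequency
        if amplitude is not None:
            self.amplitude = amplitude

    def next_sample(self):
        """Generate one sample of a sine wave at the current settings."""
        sample = self.amplitude * math.sin(self._phase)
        self._phase += 2.0 * math.pi * self.frequency / self.sample_rate
        return sample

# Work with the defaults first, then drive the synth with blob data later,
# e.g. synth.update(amplitude=per_blob_amplitude).
synth = SimpleSynth()
samples = [synth.next_sample() for _ in range(64)]
```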