*The Illusion of Life - Essays On Animation - Edited by Alan Cholodenko*
Phase II Updates
Approach: The issues from last week put me back somewhat on my proposed timeline.
Choices Made: I began small by routing the numbers (after they had been scaled to a frequency) into a 'cycle~' object. Instead of the pure MIDI tones I originally had, this gives a broader, continuous range to the values already received from the data, partially eliminating the 'jumping' between notes.

I also realized that even when nothing was 'playing' on the screen, the patch was still attempting to output a MIDI note. If I listened closely I could hear consistent 'clicks' coming from the patch, meaning it was still playing even though I had told it not to play until it recognized an object in front of it.

This week I also went back to basics with building the patch and got to tie the size of a blob to the amount of amplitude it gives off (I had found this method of getting the size in previous weeks, but couldn't get it to work until now). I tested this using a growing/shrinking circle whose pixel area on the screen mapped 1:1 to the amplitude output, and I threw in a couple of other circles to make sure it still recognized multiple objects on the screen.

I also investigated a patch called HS Flow that detects movement between subsequent frames. I want to build on this so that a version of my work only updates/turns on the digital-to-analog converter (dac~) object when it notices movement.

I eventually got the color tracking working as well, but its actual use doesn't seem beneficial to me yet. I wouldn't know exactly what to map it to, as it doesn't play a big factor in what I am doing. Part of the issue with this patch is that it has to be 'set up' to find certain colors in an area, so it doesn't lend itself to automatically matching the colors in a scene. I think this will find its use somewhere in the future, but for now I don't imagine it helping me as much as I thought it would.
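The mappings above can be sketched outside of Max/MSP. The snippet below is a minimal, hypothetical model of three of the steps described: scaling a data value to a continuous frequency rather than a stepped MIDI note, tying blob pixel area to amplitude (with silence when no object is present), and gating output on frame-to-frame movement (a much simpler frame-difference stand-in for HS Flow's optical-flow detection). All names, ranges, and thresholds are illustrative assumptions, not the actual patch.

```javascript
// Convert a MIDI note number (possibly fractional) to frequency in Hz.
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Map an incoming data value (assumed 0..127) onto a continuous note range,
// so pitch glides instead of 'jumping' between discrete MIDI notes.
function valueToFreq(value, loNote = 36, hiNote = 96) {
  const note = loNote + (value / 127) * (hiNote - loNote); // fractional note
  return midiToFreq(note);
}

// Tie blob size (pixel area) to amplitude: a blob's share of the frame is
// its gain, and a missing blob (area 0) is silent -- the "don't play until
// an object is recognized" gate.
function blobToAmp(areaPx, frameAreaPx) {
  if (areaPx <= 0 || frameAreaPx <= 0) return 0;
  return Math.min(1, areaPx / frameAreaPx);
}

// Crude movement gate: sum per-pixel differences between two grayscale
// frames and only enable output past a threshold (a simplified stand-in
// for optical-flow-based motion detection).
function motionDetected(prevFrame, currFrame, threshold = 1000) {
  let diff = 0;
  for (let i = 0; i < currFrame.length; i++) {
    diff += Math.abs(currFrame[i] - prevFrame[i]);
  }
  return diff > threshold;
}

// Example: a blob covering a quarter of a 640x480 frame.
console.log(blobToAmp(640 * 480 / 4, 640 * 480)); // 0.25
console.log(valueToFreq(64) > valueToFreq(63));   // true: monotone mapping
```

In this sketch the gate is just amplitude 0 or a boolean; in the patch itself it would correspond to holding back messages to the dac~ object.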
Another issue is that lighting conditions differ constantly between settings; a dimly lit space won't yield the same results as a brightly lit one.

Inspirational Sources:
*Using Color and x/y Position as pitch on/off music tests*
List of Interactive ways to create Instruments
During the class discussion, it was mentioned to me:
Questions Raised & Needs:
Next Steps: When I talk to Jessica and Janet, I want to remind them that while I am interested in the idea of these projects becoming useful in the medical field, the work was created first as a tool for the artist, and that's what I'll be focusing on. For the patch, I'll incorporate more steps into the patcher itself regarding blob elongation/direction. I've also been given some Kinect patches from Matt that I can utilize and experiment with over the summer. I'll be investigating more current audio/visual readings and examples. I've also begun using p5.js for coding sounds. -Taylor Olsen

*Above you'll find some screenshots of the different modules needed for scale distribution, dimension allocations, poly objects for multiple sounds, etc.*

Approach:
I took the main considerations of last week's Open House and attempted to implement any comments that guests made about the patch itself. Some of the comments I overheard while individuals were interacting with the patch:
Choices Made: This week I pushed more of the comments from the Open House into actual results. In the image slideshow above, starting from the first image:
Inspirational Sources: This week I've continued reading Chion's "Audio-Vision." The book is fascinating, and I sincerely can't believe that I hadn't found it before. It opens with a wonderful foreword by Walter Murch, the renowned sound designer and editor, which sets the mood for the rest of the book. "We gestate in Sound, and are born into Sight. Cinema gestated in Sight, and was born into Sound." -Walter Murch

Questions Raised & Needs:
Next Steps: The motto for this week, for me, sounds like this: Conceptually, it's easy! But easy concepts usually have the most complex underlying potential just waiting to be tapped into. I'm excited to see where this project goes, and I can't wait to speak more with Janet, my friends and family, classmates, and any others who have insight. I'm going to miss this semester once it ends, but I'll be glad to have more free time to work on the things I enjoy - including this! -Taylor Olsen

Approach: This week was the ACCAD open house: a wonderful time for research projects, students/academics, and the general public alike to come together and appreciate what the DAIM and DRD tracks at Ohio State have to offer. I spent more of this week preparing for the installation and presentation of my project than on the project itself. But the open house gave me a new appreciation for what I was doing. Seeing everyone enjoy and experiment with my "Visual Audio-izer" sparked innumerable conversations between physical therapists and installation artists alike. One such individual, Janet Weisenberger, who is affiliated with physical therapy and is a speech and hearing science professor, suggested that the work could be used to help those in need of music therapy in an unrestricted environment. It would be another means of informing patients how they're moving their bodies in a sonic setting, rather than having them focus solely on a therapist's coaching.

Choices Made: This week I did more research into the researchers behind sound settings within the animation world. I went and re-rented Paul Wells' "Understanding Animation" and chose to copy those pages as well. Below there are links to the full Chion document and Deutsch's aspects of synchrony in animation. I also figured out this week how to distribute amplitude among selected objects depending on the screen space each takes up.
This means a blob occupying 1000px out of 3000px would produce 1/3 of the total amplitude allowed out of the speakers.

Many people also noticed that the feedback was a little slow. I'll have to look into this, but I think it's because the computer is located in a different room and the USB connection to the webcam runs through a splitter. I also considered mounting the webcam on the ceiling, but found I didn't have a long enough cord or a way to prop it up there. During the open house, though, Oded (a theatre/dance specialist at ACCAD) said that he could help; I'll be asking him about this soon.

Many children liked moving around the space as well. They enjoyed the small lights within the paper boxes that simulated the sound. A good number of adults also became curious about how it worked. I usually explained that the box was the catalyst that activated the sound, and that the patch noticed the light within the room. I would sometimes pull out my phone and use it as an example: the screen, or the light from the flash, was noticed as an object. Once, when I left for a few minutes for a drink/restroom break, I came back into the sound room to find a group of six people shaking the cubes and dancing randomly throughout the room.
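The screen-space amplitude split described above (1000px out of 3000px producing 1/3 of the total amplitude) can be sketched as a simple normalization. The snippet below is a hypothetical illustration; the function name and the max-amplitude parameter are assumptions, not the Max/MSP patch itself.

```javascript
// Distribute amplitude among tracked objects by their share of the total
// tracked pixel area, so a 1000px blob out of 3000px total receives 1/3
// of the allowed amplitude.
function distributeAmplitude(blobAreasPx, maxAmp = 1.0) {
  const total = blobAreasPx.reduce((sum, a) => sum + a, 0);
  if (total === 0) return blobAreasPx.map(() => 0); // nothing on screen: silence
  return blobAreasPx.map(a => (a / total) * maxAmp);
}

// Three blobs totalling 3000px of screen space; the 1000px blob gets 1/3.
console.log(distributeAmplitude([1000, 1500, 500]));
```

Normalizing by the total tracked area (rather than the whole frame) keeps the overall loudness constant as objects enter and leave the scene, which may or may not match the patch's behavior.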
Inspirational Sources:
Using the Open House as a means to find inspiration within the work that I've done, as well as that of my classmates and the people of ACCAD, made me appreciate again how hard everyone works on their projects. I feel as though the general public is very welcoming of everything we do. Just being able to hear "Oh, this is cool," as subtle as it is, has an effect on the one who made the piece being experienced.

Questions Raised & Needs:
Keep working toward translating a better sound. Use the amplitude and scale attributes in a synonymous relationship. Give the patch the ability to notice multiple objects within the scene. -Taylor Olsen