*An example of audio clipping.*
Approach:
This week I fell a little behind my proposed schedule due to some complications with my mental health. Still, I came to some realizations about how to prevent this patch from becoming too loud once visual playback is incorporated into the output. I'm beginning to understand the complexities of having many objects on screen and how that interferes with the patch's digital-to-analog playback.

Choices Made:

The image above shows how sound clips during playback when the amplitude is too high at a given frequency. Raising the ceiling of the amplitude waveform would make the signal appear less clipped, but it ultimately runs into the same problem: the hardware can't reproduce a waveform beyond full scale. In sound synthesis, a basic way to think about an amplitude envelope is as a factor between 1 and 0: 1 is full amplitude, while 0 is effectively "off." Since my patch already recognizes how many different shapes are on screen and how large each one is in terms of pixel density, I can have it distribute the amplitude accordingly across the objects. Two objects would each get 0.5 amplitude, three would each get 0.33, four would each get 0.25, and so on. This would be accomplished by using a [target $1] command to send a message to the frequency modulation index. I'm also considering subtracting a small value (0.001) from the total amplitude to prevent any clipping in the playback, just in case the patch, in the order it reads the data, briefly plays every object at full amplitude before "snapping" each one to its distributed share. I may have to play around with how this is determined, though, as it could lead to some issues and may need reworking.
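The even split and the small safety margin described above can be sketched outside the patch, here in Python. The names `hard_clip` and `distribute_amplitudes` and the 0.001 margin are illustrative assumptions, not part of the actual patch:

```python
def hard_clip(sample, limit=1.0):
    """Playback hardware cannot reproduce values beyond full scale (+/-1.0);
    anything louder is flattened, which is what produces audible clipping."""
    return max(-limit, min(limit, sample))

def distribute_amplitudes(num_objects, margin=0.001):
    """Split a total amplitude of 1.0 evenly across the on-screen objects,
    holding back a small margin so the summed signal stays just under full
    scale even if every voice briefly plays before snapping to its share."""
    if num_objects == 0:
        return []
    share = (1.0 - margin) / num_objects
    return [share] * num_objects
```

With two objects, each voice gets 0.4995; their sum, 0.999, stays below full scale and passes through `hard_clip` unchanged, whereas a naive sum at full amplitude would be flattened.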
Having worked out how to distribute the amplitude evenly, I also want size to influence how much of the sound each object receives. An even split is a good first step, but weighting by size is worth considering as well: a larger object could have 0.7 amplitude while a smaller one has 0.3, and since these are float values the split could be as lopsided as 0.98 and 0.02 depending on the size relationship between the objects.

Inspirational Sources:

I found a new source of audio-visual inspiration this week in Max Hattler. While his pieces lean more on pre-rendered visuals, he uses audio to adjust and modify the visuals presented. A significant part of his visual aesthetic is repeated imagery. From taking Matt Lewis's "Programming Design Concepts" course, I learned that this is a repeated frame-rate 'draw' function that can be adjusted to shrink or grow in size.
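Returning to the size-weighted distribution idea above (a larger object at 0.7, a smaller one at 0.3, or any ratio down to 0.98/0.02), here is a minimal Python sketch. The function name and the pixel-area inputs are illustrative assumptions:

```python
def size_weighted_amplitudes(pixel_areas, margin=0.001):
    """Split the amplitude budget in proportion to each object's pixel area,
    so larger shapes sound louder. The margin again keeps the sum below 1.0."""
    total = sum(pixel_areas)
    if total == 0:
        return [0.0] * len(pixel_areas)
    budget = 1.0 - margin
    return [budget * area / total for area in pixel_areas]
```

For example, objects covering 700 and 300 pixels would receive roughly 0.70 and 0.30 of the amplitude, and the shares always sum to just under 1.0 regardless of how many objects are on screen.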
If I can get this patch to work the way I want, I've also considered adding an audio adjustment afterwards, plus a feedback loop, to create even more obscure and abstracted visuals/sound. In Hattler's work, some of the visuals move to the beat of the song, while others appear to be independent.
Questions Raised & Needs:
Next steps:

This coming week I want to get back on track with my progress and begin play-testing with some basic animations I'll create for the sake of the audio output. I'm hoping to find more time on top of my other courses to further my research. Taking four studio/lab courses while also teaching has become more hectic than I anticipated, but finding ways to distribute my time is also necessary.

-Taylor Olsen