MSc Final Project Weblog 2: Ryoji Ikeda Exhibition and Sonification Archetype Development
Updated: Aug 21, 2021
24/07/21 - 01/08/21
I attended the Ryoji Ikeda exhibition at 180 The Strand, which has fuelled further inspiration for conceptualising and planning the project's audio-visual interaction. It consolidated several ideas and concepts I had been introduced to by Scaletti's chapter 'Sonification ≠ Music' in The Oxford Handbook of Algorithmic Music (pp. 363-386). Of particular interest was Ikeda's data-verse trilogy, which overwhelmingly featured dots/points at both macro and microscopic scales, often using lines as 'scanners' that carried tones of their own and triggered smaller audio events when they met individual data points. High densities of points produced rhythms and tones, and the interplay between them was used to powerful effect. Additionally, on a visual level, this deepened my understanding and appreciation of the dichotomy between dots and lines. I explored this in Prion, my previous swarm granulator; in this project, however, I intend to test multiple approaches to line implementation, considering each method's implications for both sound and visuals. At present, the idea of lines crawling across point cloud structures, following an attractor of some kind, seems both powerful and computationally cheap as a way to add movement to point clouds (animated or static).
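The line-crawling idea could be sketched as follows. This is a minimal, hypothetical illustration (not the project's implementation): the head of a line hops between nearby cloud points, always choosing the neighbour that best approaches an attractor; `crawl_step` and its parameters are my own assumed names.

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.standard_normal((500, 3))   # stand-in static point cloud
attractor = np.array([2.0, 0.0, 0.0])   # assumed fixed attractor position

def crawl_step(current_idx, cloud, attractor, k=8):
    """Advance a line's head to a neighbouring cloud point.

    Among the k nearest neighbours of the current point, pick the one
    closest to the attractor, so the line crawls across the cloud's
    surface toward it without needing any per-point animation.
    """
    dists = np.linalg.norm(cloud - cloud[current_idx], axis=1)
    neighbours = np.argsort(dists)[1:k + 1]   # skip the point itself
    toward = np.linalg.norm(cloud[neighbours] - attractor, axis=1)
    return int(neighbours[np.argmin(toward)])

# Build a short polyline that crawls across the cloud toward the attractor.
path = [0]
for _ in range(20):
    path.append(crawl_step(path[-1], cloud, attractor))
```

The cheapness comes from only ever touching k neighbours per frame, regardless of cloud size (given a precomputed spatial index).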
A still from Ryoji Ikeda's 'data-verse' inspires speculation that lines need not be coupled to point locations; a drifting cloud of points resembling a shape, passing through rigid geometric line structures, could be very powerful.
From here, I began developing the idea of point cloud sonification archetypes, giving me multiple avenues to explore, and perhaps offering interesting nuances if combined or switched between.
1-to-1 Granular Mapping. This is the approach I used in Prion, linking each boid's X position to the sound's pan position and using the Y position to move through the file. Given its 3D nature, this project adds the Z axis as a consideration. One option is simply to use X, Y, Z to spatially locate the grain of audio and start the grain at a random file position (within a range). Alternatively, the grain could be localised at the origin of the point cloud mesh, with the positional data instead driving FMOD parameters, such as automation that balances the volume of different granular sources. This approach could also be applied to clusters of points if per-point mapping becomes too computationally expensive. Scaletti would define this as 1st Order Mapping (p. 368).
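As a rough sketch of this archetype (the function name, bounds, and jitter range are my own assumptions, not the Prion or FMOD implementation), the per-boid mapping might look like:

```python
import numpy as np

def grain_params(boid_pos, bounds, file_len_s, jitter_s=0.05, rng=None):
    """Map a boid's 3D position to grain parameters (hypothetical mapping).

    X -> stereo pan in [-1, 1], Y -> read position within the audio file,
    Z -> a simple distance attenuation; a small random jitter is added to
    the file position, per the 'random file position within a range' idea.
    """
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    norm = (np.asarray(boid_pos) - lo) / (hi - lo)    # normalise to [0, 1]
    pan = norm[0] * 2.0 - 1.0
    pos = np.clip(norm[1] * file_len_s + rng.uniform(-jitter_s, jitter_s),
                  0.0, file_len_s)
    gain = 1.0 - 0.5 * norm[2]                        # farther in Z = quieter
    return pan, pos, gain

# A boid at the centre of a 10-unit cube, reading an 8-second file.
pan, pos, gain = grain_params([5.0, 2.5, 0.0], ([0, 0, 0], [10, 10, 10]), 8.0)
```

In practice these three values would be handed to FMOD as event parameters rather than computed in Python, but the normalise-then-scale shape of the mapping is the same.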
Higher Order Mapping. Using point data to drive parameters that in turn control further layers of parameters (easily achieved with FMOD parameters), or what Scaletti calls 'Emergent Mapping' (pp. 368-9).
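One way to read 'emergent' mapping is that aggregate statistics of the whole cloud, rather than individual points, become the controlling parameters. A minimal sketch under that assumption (the statistics and their names are illustrative, not from Scaletti):

```python
import numpy as np

def emergent_params(points, radius=1.0):
    """Hypothetical higher-order mapping: derive aggregate statistics
    from the whole cloud and expose them as FMOD-style global
    parameters, each squashed into a usable [0, 1] range."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    spread = np.linalg.norm(pts - centroid, axis=1)
    density = float((spread < radius).mean())    # fraction near the centroid
    dispersion = float(np.tanh(spread.std()))    # squashed to [0, 1)
    return {"density": density, "dispersion": dispersion}

params = emergent_params([[0, 0, 0]] * 4)   # fully clustered cloud
```

Each emergent value could then modulate a whole family of lower-level parameters inside FMOD (filter cutoffs, grain density, layer volumes) rather than any single one.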
Point Data Maths. Using maths to generate an abstract point in the sound file's data space from the distance relationships within some cluster of points or neighbourhood of boids.
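For example, under the assumption that 'abstract point' means a read position derived from geometry alone, the mean pairwise distance of a cluster could be wrapped into the file's duration (function name and scaling are mine, purely illustrative):

```python
import numpy as np

def neighbourhood_readpos(cluster, file_len_s, scale=1.0):
    """Hypothetical 'point data maths' mapping: derive a single read
    position in the audio file from the pairwise-distance structure of
    a cluster, so the cluster's geometry, not any one point's location,
    chooses where to read."""
    pts = np.asarray(cluster, dtype=float)
    # All pairwise distances via broadcasting.
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    n = len(pts)
    mean_dist = dists.sum() / (n * (n - 1)) if n > 1 else 0.0
    return (mean_dist * scale) % file_len_s

# Two points 5 units apart, mapped into an 8-second file.
pos = neighbourhood_readpos([[0, 0, 0], [3, 4, 0]], 8.0)
```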
Events and Event Chains. Miscellaneous triggers fire short audio events, or networks of such triggers. Visually this is paired with some form of event (e.g. a 'scanner' connecting with a point), satisfying Chion's notion of synchresis. Regular or tempo-synced events could form the basis of beat/rhythmic content.
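The scanner case can be sketched as a plane sweeping along one axis, firing a trigger for every point it crosses between frames. This is a minimal, assumed implementation (the frame-to-frame sweep test is my own framing, not Ikeda's or the project's):

```python
import numpy as np

def scanner_hits(points, x_prev, x_now):
    """Hypothetical 'scanner' event detector: a plane sweeps along X,
    and any point whose X coordinate lies between the previous and
    current plane positions fires a trigger this frame, giving each
    audio event a visible cause (Chion's synchresis)."""
    xs = np.asarray(points, dtype=float)[:, 0]
    lo, hi = min(x_prev, x_now), max(x_prev, x_now)
    return np.flatnonzero((xs >= lo) & (xs < hi))

pts = [[0.2, 0, 0], [0.5, 1, 2], [0.9, 3, 1]]
hits = scanner_hits(pts, 0.4, 0.6)   # plane moved from x=0.4 to x=0.6
```

Sweeping the plane at a fixed rate, or restarting it on bar boundaries, is what would turn these triggers into the regular/tempo-synced rhythmic layer described above.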
Next week's tasks are to begin prototyping implementations of these various mapping archetypes using Unity and FMOD, and to begin experimenting with 3D-GAN and 3D-IWGAN (the latter's repository provides implementation code for both models, which was not included with the original 3D-GAN release). I also found a TensorFlow implementation of it here.