Elemental - A Gesturally Controlled System to Perform Meteorological Sounds

T. Brizolara da Rosa, S. Gibet, C. Larboulette
Universite Bretagne Sud, IRISA Lab, Expression Team

Abstract

We developed and evaluated Elemental, a NIME based on audio synthesis of rain, wind and thunder, intended for application in contemporary music/sound art, performing arts and entertainment. The evaluated version is controlled by the performer's arms through Inertial Measurement Units and electromyography sensors, whose data are mapped to the sound synthesis engine. In user studies with both experts and the general public, participants approached the system in ways ranging from the manipulation of abstract sound to the direct simulation of atmospheric phenomena; in the latter case, even to revive memories or to create novel situations. This suggests that instrumentalizing environmental sounds not originally produced by human action can be a fruitful strategy for constructing expressive NIMEs.

Figure 1. Left: a public demonstration of Elemental. Right: a rehearsal with musicians (reproduced with authorization).

1. Introduction

The current confluence of low-latency, reliable sensors for capturing human movement, realistic and flexible models for real-time audio synthesis of environmental sounds, and the increasing presence of continuous interaction in digital applications (such as VR and games) provides the conditions needed to design musical expression systems based on these environmental sounds. The paradigm chosen was that of a sound-based (rather than musical-note-based) New Interface for Musical Expression (NIME), with applications in areas such as live performance (music / theatre / dance), art installations, sound design / sound effects, and music composition / sonic art.

2. Materials and Methods

The audio synthesis module was written in Pure Data (link 1), on top of publicly available implementations [1] (link 2). Movement was captured with one Myo Armband (link 3) on each of the user's arms, equipped with an accelerometer, a gyroscope, a magnetometer and 8 electromyography (EMG) sensors to measure the arm's muscular activity. Gestural input data were mapped manually to a layer of high-level parameters (see Section 3) that control the sound synthesis. EMG data were used to detect when the user performs a fist pose, via k-NN classification, with the aid of the Wekinator [2] (link 4) machine learning software.

Two experimental sessions were conducted: the first comprised mostly non-skilled participants, and the second mostly skilled ones (in music, dance, acting, sound design, or similar). The users answered a questionnaire concerning ease of use, ease of learning, pleasure in using the system, intuitiveness of the mappings, efficiency in exploring and refining sounds, suitability of the system for expressive applications, and quality of the sounds.

3. Mapping of gestural data to sound synthesis

Figure 2. Left: pitch and roll angles. Right: schematics of the mappings. Details below.

The user can control the following high-level parameters of the sound synthesis:

Rain amount (right arm pitch) - from no rain to torrential rain.
Rain "color" (right arm roll) - from bass-heavy rain, as when rain hits the roof of a car, to treble-heavy rain, as when rain hits dry sand.
Rain throw (right arm angular speed) - from no rain throw to a strong throw of rain.
Wind speed (left arm azimuth * pitch + angular speed) - from no wind to strong wind.
Wind speed oscillation (left arm roll) - from no random oscillation of the wind speed to strong random oscillation.
Thunder (event) - triggered by a left-hand fist pose (Figure 2).
Thunder distance (left arm pitch) - from very close thunder to distant thunder.
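As an illustration of such a mapping layer, the following is a minimal sketch in Python: right-arm orientation and angular speed are normalized to [0, 1] and sent to the synthesis patch over OSC. The OSC addresses (/elemental/rain/...), the value ranges, and the use of the python-osc library are assumptions made for illustration only; they are not the parameter names or mapping curves used in Elemental.

    # Minimal sketch of a mapping layer from IMU data to high-level
    # synthesis parameters, sent to a Pure Data patch over OSC.
    # Assumptions (not from the paper): the Pd patch listens on port 9000
    # and accepts messages such as /elemental/rain/amount with a float in [0, 1].
    from pythonosc.udp_client import SimpleUDPClient

    PD_HOST, PD_PORT = "127.0.0.1", 9000
    client = SimpleUDPClient(PD_HOST, PD_PORT)

    def normalize(value, lo, hi):
        """Linearly map value from [lo, hi] to [0, 1], clamped."""
        t = (value - lo) / (hi - lo)
        return max(0.0, min(1.0, t))

    def map_right_arm(pitch_deg, roll_deg, angular_speed_dps):
        """Map right-arm data to the rain parameters listed in Section 3."""
        rain_amount = normalize(pitch_deg, -30.0, 80.0)    # arm down = no rain, arm up = torrential
        rain_color = normalize(roll_deg, -90.0, 90.0)      # bass (car roof) -> treble (dry sand)
        rain_throw = normalize(angular_speed_dps, 0.0, 300.0)
        client.send_message("/elemental/rain/amount", rain_amount)
        client.send_message("/elemental/rain/color", rain_color)
        client.send_message("/elemental/rain/throw", rain_throw)

    # Example: called from the sensor callback at the IMU frame rate.
    map_right_arm(pitch_deg=45.0, roll_deg=10.0, angular_speed_dps=120.0)

Clamping the normalized values keeps out-of-range arm poses from pushing the synthesis parameters outside their intended range.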
4. Results and Discussion

The ability of Elemental to support performing and composing music at the sound level, and its suitability for multimodal performance using both body and sound, were almost unanimously recognized, both in the questionnaire answers and in the fact that many performers showed a strong interest in including Elemental in different artistic works. At the time of writing, an artistic collective in France (link 5) is rehearsing with the system, as is a duo formed by a musician/dancer and a classical guitarist in Brazil (Figure 1; video demo at link 6). In addition, users reported working both in terms of pure sound creation and in terms of simulating or recalling meteorological phenomena. These results suggest that environmental sound models may be suitable for building expressive NIMEs.

A compromise between realism (or believability) and sound control was also observed during the development of Elemental. Controlling Rain Color could not produce noise melodies or noise sweeps, because realism required high frequencies to be present (coming from the smaller fragments generated by water drops after impact). Wind Oscillation, which is stochastic, also had to be re-modelled so that it is controlled by the performer: a complete absence of oscillation rendered the wind sound artificial, while fully autonomous oscillations amounted to a loss of control (a sketch of one way to realize such performer-scaled oscillation is given at the end of this document).

References

[1] A. Farnell. Designing Sound. MIT Press, Cambridge, MA, 2010.
[2] R. Fiebrink, D. Trueman, and P. Cook. A Meta-Instrument for Interactive, On-the-Fly Machine Learning. In Proc. NIME, 2009.

Links

1. https://puredata.info
2. https://mitpress.mit.edu/books/designing-sound
3. former Thalmic Labs, now https://www.bynorth.com
4. www.wekinator.org
5. https://www.fukeicollectif.com/
6. https://youtu.be/V_Sv5HiV5zU

Contact

tiago.brizolara-da-rosa@univ-ubs.fr
sylvie.gibet@univ-ubs.fr
caroline.larboulette@univ-ubs.fr

Universite Bretagne Sud, Campus de Tohannic
IRISA Lab, Expression Team
56000 Vannes

Project funded by Region Bretagne and Departement du Morbihan.
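Sketch referenced in Section 4. The following is a minimal, hypothetical illustration of performer-scaled stochastic wind oscillation: white noise is smoothed by a one-pole low-pass filter into slow gust-like fluctuations, and a performer-controlled depth (Wind speed oscillation in Section 3) scales how much of it modulates the wind speed. The smoothing coefficient, update rate and value ranges are assumptions for illustration; this is not the model implemented in Elemental.

    # Minimal sketch of performer-scaled stochastic wind oscillation.
    # A one-pole low-pass filter turns white noise into a slow random walk;
    # the performer-controlled depth scales how much of it modulates wind speed.
    import random

    class WindOscillator:
        def __init__(self, smoothing=0.995):
            self.smoothing = smoothing  # closer to 1.0 = slower, smoother gusts
            self.state = 0.0            # filtered noise, roughly in [-1, 1]

        def step(self, base_speed, oscillation_depth):
            """Return the wind speed for one control-rate tick.

            base_speed        -- wind speed set by the performer, in [0, 1]
            oscillation_depth -- amount of random oscillation, in [0, 1]
            """
            noise = random.uniform(-1.0, 1.0)
            self.state = self.smoothing * self.state + (1.0 - self.smoothing) * noise
            modulated = base_speed * (1.0 + oscillation_depth * self.state)
            return max(0.0, min(1.0, modulated))

    # Example: depth = 0 yields a static (artificial-sounding) wind,
    # depth = 1 yields strong gust-like fluctuations.
    osc = WindOscillator()
    for _ in range(5):
        print(osc.step(base_speed=0.6, oscillation_depth=0.5))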