joe.wright
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

joe.wright
2020-07-18 13:50
@joe.wright set the channel purpose: Paper Session 13: HCI / Gesture / Machines

niccolo.granieri
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

hassan.hussain5
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

overdriverecording
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

lamberto.coccioli
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

jonathan.pearce
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

richard.j.c
2020-07-18 13:50
has joined #papers13-hci-gesture-machines

eskimotion
2020-07-20 09:25
has joined #papers13-hci-gesture-machines

edmund.hunt
2020-07-20 09:25
has joined #papers13-hci-gesture-machines

acamci
2020-07-20 17:01
has joined #papers13-hci-gesture-machines

aaresty
2020-07-20 17:21
has joined #papers13-hci-gesture-machines

10068197
2020-07-20 17:21
has joined #papers13-hci-gesture-machines

a.nonnis
2020-07-20 17:22
has joined #papers13-hci-gesture-machines

a.macdonald
2020-07-20 17:23
has joined #papers13-hci-gesture-machines

andreas
2020-07-20 17:24
has joined #papers13-hci-gesture-machines

dianneverdonk
2020-07-20 17:25
has joined #papers13-hci-gesture-machines

likelian
2020-07-20 17:25
has joined #papers13-hci-gesture-machines

ko.chantelle
2020-07-20 17:25
has joined #papers13-hci-gesture-machines

anika.fuloria
2020-07-20 17:26
has joined #papers13-hci-gesture-machines

clemens.wegener
2020-07-20 17:26
has joined #papers13-hci-gesture-machines

lamberto.coccioli
2020-07-25 08:43
Alon A. Ilsar, Matthew Hughes, Andrew Johnston _NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument_ No. 60 in Proceedings
Juan Pablo Yepez Placencia, Jim Murphy, Dale Carnegie _Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones_ No. 59 in Proceedings
Gwendal Le Vaillant, Thierry Dutoit, Rudi Giot _Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation_ No. 43 in Proceedings
Rebecca Fiebrink, Laetitia Sonami _Reflections on Eight Years of Instrument Creation with Machine Learning_ No. 45 in Proceedings
Olivier Capra, Florent Berthaut, Laurent Grisoni _All You Need Is LOD: Levels of Detail in Visual Augmentations for the Audience_ No. 13 in Proceedings

niccolo.granieri
2020-07-25 09:01
*Paper Session 13: HCI / Gesture / Machines* will begin in 15 minutes!



niccolo.granieri
2020-07-25 09:06

niccolo.granieri
2020-07-25 09:07
And as mentioned in the other channels, here is a link to a zoom room that will be open all day long for you to pop-in and chat with other NIME attendees! https://us04web.zoom.us/j/75307801251?pwd=VTF3MFJ4UTNaY1psTHQ4Qllkckhndz09

hassan.hussain5
2020-07-25 09:08
also, as mentioned previously: when asking a question in response to a paper, please indicate in your message which paper presentation you are responding to, either by mentioning the title of the paper or by using @ to direct it to the presenter. This will make it easier for people to follow the presentations and the Q&A later (due to being in different time zones). And please keep replies to a question in a thread!

niccolo.granieri
2020-07-25 09:16
*Presenting Now:* @alon.ilsar, @matthew.d.hughes, @andrew.johnston NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument No. 60 in Proceedings

timo.dufner
2020-07-25 09:27
@alon.ilsar, @matthew.d.hughes, @andrew.johnston Hello! Not sure if I got it right: did you say the audience does not care about mimed performances?

a.r.jensenius
2020-07-25 09:27
@alon.ilsar Nice presentation! Could you say a little more about how it feels different for the performer?

timo.dufner
2020-07-25 09:28
thanks

dianneverdonk
2020-07-25 09:28
@alon.ilsar, @matthew.d.hughes, @andrew.johnston Really interesting research! Since it is a combination of miming and 'real' actions, did you do any surveys with the audience? I wonder what the outcome was (or maybe I missed this). Awesome performance also! Really enjoyed it yesterday.

jens.vetter
2020-07-25 09:28
@alon.ilsar would it be interesting to involve the imaginary ideas of the audience more in the mapping process?

cagri.erdem
2020-07-25 09:28
Nice work @alon.ilsar! Maybe you mention it in the paper, but have you used AirSticks in ensemble settings? I would like to hear more about your experience with non-precomposed material (in the context of miming).

hofmann-alex
2020-07-25 09:28
@alon.ilsar great performance, especially the integration with the visuals. How was the collaboration with the visual artists? What came first, sound or image? And how did they influence each other?

robert.blazey1
2020-07-25 09:28
@alon.ilsar Having gone back to the system, do you find you can perform the same piece as convincingly with the preprogrammed elements removed?

a.macdonald
2020-07-25 09:29
It seems that this approach gives you more compositional freedom and *musical* agency. You described miming as 'enslavement', but maybe the opposite is true.

info041
2020-07-25 09:30
@alon.ilsar @matthew.d.hughes miming - I do it a lot in composition with gestures and sensors! Glad to see different approaches to this :slightly_smiling_face: Great work, enjoyed yesterday's performance!

a.macdonald
2020-07-25 09:31
Thanks - great project

lamberto.coccioli
2020-07-25 09:31
@alon.ilsar @matthew.d.hughes lovely presentation, thank you

niccolo.granieri
2020-07-25 09:31
*NEXT UP* @jpyepezimc, Jim Murphy, Dale Carnegie Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones No. 59 in Proceedings

niccolo.granieri
2020-07-25 09:32
If you know the authors' nicknames on Slack, please tag them!

sleitman
2020-07-25 09:32
@jpyepezimc

marije
2020-07-25 09:34
@alon.ilsar @matthew.d.hughes great also to see a paper about the use of an instrument in performance, after numerous performances with an audience!

dianneverdonk
2020-07-25 09:35
@alon.ilsar gonna dive into your paper soon. Do you have a web page about the project (next to the commercial site)?

florent.berthaut
2020-07-25 09:38
@alon.ilsar Great project! I wonder what the influence of the visuals is (compared to gestures only) on the perception of the audience (between miming and playing). Did you experiment with the effect of different visual designs as you did with the gestures?

alon.ilsar
2020-07-25 09:40
great question Alexander! could discuss it lots lots more. especially as most other performances I do are improvisational ensemble playing

alon.ilsar
2020-07-25 09:40
where there's obviously no miming :wink:

gndunning
2020-07-25 09:41
Really enjoyed the paper @jpyepezimc

l.mice
2020-07-25 09:41
@alon.ilsar @matthew.d.hughes Very nice! I really enjoyed your approach in considering the audience while designing the instrument, which clearly makes for a great live show. I'm curious to know if there is flexibility for improvisation or changing the performance on the fly in response to the audience energy during each performance?

halldorion
2020-07-25 09:42
Good work. Just a question: are there composers in line who want to work with the system?

a.macdonald
2020-07-25 09:42
what is the input to play notes?

hofmann-alex
2020-07-25 09:42
I could totally see this in music acoustics research, which needs reproducible excitation, to study the effect on the string when pitch shifting... Is this also an idea, to do research in this direction? @jpyepezimc

robert.blazey1
2020-07-25 09:43
@jpyepezimc would love to hear it in action, is there any documentation we can see online?

gndunning
2020-07-25 09:43
Perhaps there could be an option to create a realtime interface to play this instrument live?

a.macdonald
2020-07-25 09:43
are there unexpected affordances?

konstantinos.vasilako
2020-07-25 09:44
Are there any samples of music making with the instrument? [RESOLVED]

halldorion
2020-07-25 09:46
Can you post a link to the presentation? And would it be fine to share it?

robert.blazey1
2020-07-25 09:46
@jpyepezimc Thanks, can definitely relate to frustrating lack of workshop access but look forward to hearing how it turns out!

niccolo.granieri
2020-07-25 09:47
*UP NEXT* @glevaillant, Thierry Dutoit, Rudi Giot Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation No. 43 in Proceedings

joe.wright
2020-07-25 09:47

niccolo.granieri
2020-07-25 09:48
@sleitman & @ all the session authors: just a reminder, please read out the question for the sake of those following captions and those simply not on Slack. Thanks.

sleitman
2020-07-25 09:48
Yes, ok.

halldorion
2020-07-25 09:48
Thanks Joe!

jpyepezimc
2020-07-25 09:49
Yeah, I would say so. More than unexpected affordances, we found opportunities here. For example, we changed the orientation of the arm at one point, which allowed us to get better potential for pitch bends or vibrato. It was a fun 'feedback loop' between the design process and what the instrument allowed us to do through each iteration.

alon.ilsar
2020-07-25 09:51
thanks dianne, only anecdotal feedback and reviews for this project unfortunately, particularly because the creative part and performances of this project happened after my phd and before my post-doc. for our current project, which is on hold due to covid, we had an initial work-in-progress showing and recorded discussion. the piece was part of the TEI conference and the next paper on that would include more audience feedback https://dl.acm.org/doi/abs/10.1145/3374920.3375283

joe.wright
2020-07-25 09:51
No problem :-)

alon.ilsar
2020-07-25 09:51
yes! great idea. we have started doing this for our new piece, computer storm, https://dl.acm.org/doi/abs/10.1145/3374920.3375283

alon.ilsar
2020-07-25 09:52
the audience feedback from the work-in-progress showing was really interesting

v.zappi
2020-07-25 09:52
Also, what does your completely-live version [e.g., Guthman, SIGGRAPH] inherit from the previous experience? I'd like to hear/read more about the full design cycle experience! Very inspiring work. Also, I cannot refrain from sharing this piece that you probably know :slightly_smiling_face: : Mark Applebaum - Aphasia https://youtu.be/wWt1qh67EnA

alon.ilsar
2020-07-25 09:54
lots of their ideas lined up with our plans for the piece. especially considering the piece involves them :wink: https://youtu.be/-yV3xTRs6vQ

alon.ilsar
2020-07-25 09:54
sorry, put this in the wrong thread :disappointed:

alon.ilsar
2020-07-25 09:55
yes! great idea. we have started doing this for our new piece, computer storm, https://dl.acm.org/doi/abs/10.1145/3374920.3375283

alon.ilsar
2020-07-25 09:55
the audience feedback from the work-in-progress showing was really interesting

alon.ilsar
2020-07-25 09:55
lots of their ideas lined up with our plans for the piece. especially considering the piece involves them :wink: https://youtu.be/-yV3xTRs6vQ

alon.ilsar
2020-07-25 09:56
yes, plenty of documentation working in ensembles, particularly with a trio called The Sticks


timo.dufner
2020-07-25 09:58
@glevaillant thanks for the presentation! i have a question that's not directly on topic: what did you use to create the interface on the tablet?

alon.ilsar
2020-07-25 09:58
my thesis was titled *The AirSticks:* *A New Instrument for Live Electronic Percussion within an Ensemble*

capra.olivier
2020-07-25 09:58
@glevaillant Nice presentation!! The expertise level was a self-evaluation, right?

marije
2020-07-25 09:58
@glevaillant Are you also considering experiments where motoric memory plays a more important role? So more embodied interfaces than touchscreens?

alon.ilsar
2020-07-25 09:58
happy to share further over a zoom chat?

alon.ilsar
2020-07-25 09:58
and of course happy to discuss further over a zoom chat

alon.ilsar
2020-07-25 09:59
and would love to discuss further over a zoom chat

timo.dufner
2020-07-25 10:00
sad, you should publish it :wink:

a.macdonald
2020-07-25 10:00
Looking forward to seeing the 6 string version!

jpyepezimc
2020-07-25 10:00
Absolutely! Having a 'concrete' instrument in a real environment to produce complex sounds is one of the main reasons to even go for a robot. This is an interesting observation, and studying a string with something like this might provide a different perspective than can be achieved from using a human performer.

marije
2020-07-25 10:01
Not just feedback, but also interfaces where the movement of the body itself plays a more important role? Basically, anything that goes beyond pointing with a finger, but involves movement of more parts of the body.

niccolo.granieri
2020-07-25 10:01
*UP NEXT* @r.fiebrink, Laetitia Sonami Reflections on Eight Years of Instrument Creation with Machine Learning No. 45 in Proceedings

decampo
2020-07-25 10:02
@glevaillant there is a generalized approach for experimental many-2-many mappings that may be interesting for you here: https://www.3dmin.org/wp-content/uploads/2014/03/Campo_2014.pdf

noris
2020-07-25 10:03
@glevaillant is there any reason why a 35-second maximum duration was set for the search process? could the visual countdown create unnecessary stress for the participants (and maybe contribute to errors as they race to complete the task)?

glevaillant
2020-07-25 10:04
Sorry my answer wasn't clear; the current interface was not published *in a conference* but it is available! macOS/Windows version for editing, and iOS/Android versions for playing https://miem.laras.be

vincze
2020-07-25 10:05
I watched a live performance by Laetitia at CCRMA last year, and we talked about your collaboration. Impressive work!

jpyepezimc
2020-07-25 10:06
Thank you! If everything goes well, it might be operational in less than 2 weeks. :slightly_smiling_face:

a.macdonald
2020-07-25 10:07
:+1:

glevaillant
2020-07-25 10:09
That would be a very interesting experiment! But we did not consider that at the moment, as we wanted to focus on touch interfaces. Also, it would maybe require a bit more 'training' to explain to the user how to use the interface; the touchscreen interface is quite straightforward for beginners

hofmann-alex
2020-07-25 10:09
I see some potential for collaborations here. :slightly_smiling_face: I forwarded your paper to my colleague Montserrat (https://www.mdw.ac.at/iwk/montserrat-pamies-vila/), she is planning to do research on string excitation in the future, and has built a blowing machine for clarinet experiments in the past.

matthew.d.hughes
2020-07-25 10:10
Hi Alex, I think I can answer. Initially for this project, the sound was definitely the lead. It was based around the idea of turning a completely finished album into a performance. So the visuals took cues from the aesthetics of the finished pieces. Of course, when we started integrating them into the performance, there was a feedback loop from gesture <-> visuals. And then when we transformed the systems into being fully interactive (both sound and visuals generated), @alon.ilsar composed a new piece around the 'audio-visual instruments' we had created. Our latest work however, presented at TEI this year, took its lead from the visuals. I was experimenting with a graphics system that used depth sensors to surveil the audience, and the sound in this piece is meant to feel dark and disturbing as Alon uses the sticks to navigate inside the crowd's personal space. This piece can be seen here: https://youtu.be/xcrN-1PrfIU Thanks

vincze
2020-07-25 10:10
@r.fiebrink - I really love your Wekinator. I've only just started using it, sparingly, since last year, but it is a great tool. I didn't quite get what is following Laetitia's gestures while she is performing?

dianneverdonk
2020-07-25 10:10
@leatitia / @r.fiebrink Thanks for your interesting research! Laetitia, could you elaborate a bit on the first challenge you mentioned, about synthesis techniques that have to deal with a lot of parameters simultaneously?

marije
2020-07-25 10:10
Great work! @r.fiebrink Andi Otto's thesis "Dutch Touch" is an elaborate study of the collaboration between Waisvisz and the many engineers that he has worked with at STEIM. Found here in German: http://www.andiotto.com/teaching-and-research many of the interviews are available as audio online and are in English

s.d.wilson
2020-07-25 10:10
:clap: :clap: :clap:

glevaillant
2020-07-25 10:10
Thanks! So, yes, the expertise level was a self-evaluation based on extensive text descriptions, but the participants could ask questions if they could not properly evaluate themselves

lamberto.coccioli
2020-07-25 10:11
@r.fiebrink Great presentation, many thanks. The point about tending to your project vs. publication goals is very well made

info041
2020-07-25 10:11
@r.fiebrink & Laetitia Sonami - great work! Will check out the paper for sure :slightly_smiling_face:

konstantinos.vasilako
2020-07-25 10:11
@r.fiebrink Is the mapping changed dynamically based on a model which is trained in real time, or is there a model already trained and deployed prior to the performance?

glevaillant
2020-07-25 10:11
I could send the text description in this thread if you'd like!

a.mcpherson
2020-07-25 10:12
@r.fiebrink and Laetitia Sonami - great discussion and reflections! I wonder if you could talk more about the balance between continually updating the ML models versus deliberately fixing a particular model and exploring its performance implications.

o.green
2020-07-25 10:12
@r.fiebrink that's a great point about the tension between immediate research novelty and the more mundane work needed to develop support for long-term engagement. Do you see an easy way for us to work around this in NIME?

marije
2020-07-25 10:12
@r.fiebrink I like the use of ML algorithms as a kind of resonance system into which data is sent, or not.

m.zbyszynski
2020-07-25 10:12
@r.fiebrink I appreciate the paper format, and am sympathetic to the comment that software design doesn't always lead to publishable "research." How do you suggest developers share their design work in this (or other) communities? (PS -- I think this question is made redundant by the previous, similar, question.)

eskimotion
2020-07-25 10:12
@r.fiebrink @Laetitia Thank you for your insightful presentation, fantastic work, beautiful collaboration! @Laetitia: Could you highlight how this collaboration influenced your artistic decisions relating to building your instrument and performing?

timo.dufner
2020-07-25 10:12
ahh great. thank you

decampo
2020-07-25 10:12
@r.fiebrink very nice paper, completely agree with the grounded personal opinion perspective of it, great model for more papers with that stance!

glevaillant
2020-07-25 10:13
Thanks for the link :slightly_smiling_face: Looks very interesting!

schwarz
2020-07-25 10:13
Very interesting insights into your working method, @r.fiebrink and Laetitia! Laetitia, how do you "meta-control" your process of finding mappings, i.e. how do you perform the freeze while you're exploring and listening? footpedal / mouseclick / DWIM =-) ?

marceldesmith
2020-07-25 10:13
Awesome work! I'm curious if there's been any exploration of ML with Spectral Resynthesis algorithms? Perhaps a way to apply variable wave-shapes per band to improve the fidelity and sonic accuracy that you could get with a limited number of oscillators compared to what could be achieved with just sine waves.

florent.berthaut
2020-07-25 10:14
@r.fiebrink Thank you for the very interesting presentation. I wonder what role the physical interface plays in the exploration of sounds with machine learning. For example, the use of the Spring Spyre compared to the lady's glove in terms of appropriation / gestures.

cagri.erdem
2020-07-25 10:14
Thank you @r.fiebrink for the great work. That'd be great to hear about your experience regarding the 'unpredictability index' of ML (in Laetitia's terms) and how it reflects differently on composition and performance.

glevaillant
2020-07-25 10:14
We calibrated this 35-second duration after the alpha and beta experiments. First, there was no limit, then 60s, but the subjects kept forgetting the reference preset (we were very surprised by this!)

matthew.d.hughes
2020-07-25 10:14
Thanks Lia. There is not so much flexibility in the hour-long performance showcased in the presentation here, but the ultimate product was a fully-interactive instrument, and this show was one step in getting there. Since this show, we have transformed several of the systems into fully interactive 'audio-visual instruments', which allow for total improvisation and also the chance to create completely new pieces out of them. They have a great deal of flexibility and are able to respond to the audience. These fully interactive instruments are explained in our presentation at SIGGRAPH Asia last year: https://youtu.be/KAasdjohfuo

glevaillant
2020-07-25 10:16
So we had to force them to answer quickly... And your question is very interesting: yes, it creates a bit of stress! But this is part of the gamification process, which eventually leads to better performances for most users (there are references for this in the paper)

niccolo.granieri
2020-07-25 10:16
*UP NEXT* @capra.olivier, @florent.berthaut, Laurent Grisoni All You Need Is LOD: Levels of Detail in Visual Augmentations for the Audience No. 13 in Proceedings

hugo.scurto
2020-07-25 10:17
@r.fiebrink Great presentation, many thanks!! I was wondering if you could say a few words on collaboration with your students? Especially on how they may appropriate ML as a creative tool / how you may get feedback from their diverse sets of practices, just as when working with Laetitia Sonami. Looking forward to reading the full paper! :rainbow:

decampo
2020-07-25 10:17
I also really like the unpredictability index as a meta-control for how much control vs surprise one wants to play with :slightly_smiling_face:

s.holland
2020-07-25 10:17
It's clear how you can make mapping dynamic by moving between different models, and it's also clear how you could mix output from different models dynamically in performance, but are there ways of having models that intrinsically morph in time, and might that be worth having or not?

s.holland
2020-07-25 10:17
sorry that was @r.fiebrink

r.fiebrink
2020-07-25 10:18
Thanks @vincze! There's a lot more about this in the paper, but essentially information about the springs' movements (captured through a set of biquad filters) is sent as inputs to Wekinator, which is then used to control (mainly) a set of PAF synthesis objects in Max/MSP
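
For readers who want to try a similar signal chain, here is a minimal Python sketch of the OSC plumbing around Wekinator (not the authors' actual patch; the feature values below are placeholders standing in for the biquad-filtered spring signals, and Max/MSP would normally receive the outputs). It assumes Wekinator's default OSC conventions: feature vectors sent to /wek/inputs on port 6448, model outputs received on /wek/outputs on port 12000.

```python
# Minimal sketch of sending features to Wekinator and listening for its outputs.
# Assumes Wekinator's default ports/addresses; feature values are placeholders.

from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

wek_in = SimpleUDPClient("127.0.0.1", 6448)        # Wekinator's input port

def send_spring_features(features):
    """Send one frame of (filtered) spring-pickup features to Wekinator."""
    wek_in.send_message("/wek/inputs", [float(f) for f in features])

def on_wek_outputs(address, *values):
    # In the setup described above these values would drive PAF synthesis
    # parameters in Max/MSP; here we just print them.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wek_outputs)

send_spring_features([0.1, 0.4, 0.7])              # one example feature frame
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```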

noris
2020-07-25 10:18
i would be interested in reading the text description that you presented to the participants, if you don't mind. thanks so much

jpyepezimc
2020-07-25 10:18
That'd be fantastic! Would love to talk to her. I'll make sure to check her site right after the presentations. Thanks!

dianneverdonk
2020-07-25 10:18
@r.fiebrink it's great to see such a nice collaboration between you and Laetitia, and very interesting what you just elaborated about your different perspectives and needs for the instrument to work. Thanks a lot!

r.fiebrink
2020-07-25 10:19
Thank you @lamberto.coccioli!

r.fiebrink
2020-07-25 10:19
@konstantinos.vasilako I think I answered this in the Q&A but happy to answer any follow-ups if you still have questions about it

alucas02
2020-07-25 10:19
@r.fiebrink Great to hear about this long-term collaboration!

noris
2020-07-25 10:20
ah i see... thank you for clearing this up. it makes more sense now. thank you for answering my question

alucas02
2020-07-25 10:20
Looking forward to reading the paper!

alon.ilsar
2020-07-25 10:20
great question. this project was in a sense visuals last... but playing off them definitely affected both movement and sound in the development, rehearsal and performance. our next piece was led with the visuals, though very interactive and real-time in nature... https://youtu.be/-yV3xTRs6vQ

alon.ilsar
2020-07-25 10:20
happy to discuss further on zoom too

vincze
2020-07-25 10:21
Thanks, I'll read the paper to better understand the inner workings. Great work!

r.fiebrink
2020-07-25 10:23
Thanks @a.mcpherson. There's a bit more about this in the paper. It's my understanding that Laetitia does both of these a lot, and in an iterative, messy way that's hard for me to characterise. It's further complicated by the fact that she's usually working with sets of many models at once, organised roughly into subsets of models corresponding to each of the three springs, and she may swap in/out model subsets for a spring or model subsets for a particular piece/section of a piece, working on refining or exploring these. This has informed a lot of the design refinements in the Wekinator software (e.g., ability to do very fine-grained editing/retraining/deleting/etc. of individual models, ability to run/pause individual models, ability to save/load individual models, etc.) but honestly there is more that I would do if I could to make this messy experimentation easier to manage.

r.fiebrink
2020-07-25 10:23
@o.green I love this question. As I mentioned, I don't see an easy way to work around this, but I would really love to hear ideas from others in this community! I know this is an issue that many people here confront.

alon.ilsar
2020-07-25 10:24
haha, yes, i have my problems with Aphasia, but also love it at the same time. in no moment was i thinking about it during this project though... or maybe it made me want to keep my metaphors 'truer' to the actual workings of the airsticks

r.fiebrink
2020-07-25 10:24
Yes, me too! And to be clear, this is one of the really cool things that Laetitia developed in her own practice, which I never would have thought of. (And it took me a while to get my head around it, but now I love it.)

dianneverdonk
2020-07-25 10:26
@capra.olivier: thanks for your presentation! Do you consider LOD as a visible reaction to what a musician is doing, or as an insight into the actual creation of sound? And do you think this matters for the outcome of the research?

konstantinos.vasilako
2020-07-25 10:26
@capra.olivier From the perspective of the audience and the links between visual aspects of the performance and mapping, I wonder if gestural 'metaphors' (in terms of NIME) were not used deliberately and/or were replaced by maybe other approaches.

timo.dufner
2020-07-25 10:27
@capra.olivier, @florent.berthaut thanks a lot! Did you do tests over a longer period of time, like in 1-2h performances?

r.fiebrink
2020-07-25 10:28
Thanks @m.zbyszynski. Personally, I think the activities I've done that have been most helpful for making impact and being able to learn about longer-term consequences of ML have been things like:
• releasing software as executables that run on as many platforms as possible (not just github source code)
• ensuring I have lots of educational materials available for people to learn, in different formats (e.g., text walkthroughs, videos, scaffolding examples)
• running a lot of workshop and outreach events to teach people to use the tech, and also using these to inform my own understanding of what is useful to build/fix/etc.
• recognising that, in order to use ML tech effectively in a variety of work, people often require more than a tutorial on the tech (i.e., "press this button to do this, then send this OSC message, etc."). There are actually some things about ML that they also need to know to do more sophisticated things. This influenced my choice to make a MOOC on this topic (well, 2 MOOCs now - on Kadenze and FutureLearn).

abi
2020-07-25 10:28
@capra.olivier Very interesting work! Do you think these results would be different outside the lab environment - i.e., in a concert setting, where there are far more factors affecting audience experience than only the visualisations?

lamberto.coccioli
2020-07-25 10:29
@capra.olivier @florent.berthaut Interesting methodology for your study, thank you for sharing.

r.fiebrink
2020-07-25 10:29
Thanks @eskimotion. I hope you will find at least a partial answer to this in Laetitia's part of the paper :slightly_smiling_face:

schwarz
2020-07-25 10:29
...like what beers are on tap... :slightly_smiling_face:

glevaillant
2020-07-25 10:30
You're welcome :slightly_smiling_face: Please feel free to contact me (for support or feedback) if you are using the app!

marije
2020-07-25 10:30
For me as a physicist, this parallel is quite clear, the maths of neural networks and acoustical waves have interesting similarities. In that sense tuning a neural network to respond in a particular way to input data, is like tuning an acoustical system.

cagri.erdem
2020-07-25 10:30
@capra.olivier, very interesting. How much extra cognitive load do you think LOD adds?

r.fiebrink
2020-07-25 10:31
Thank you @decampo! Honestly it was quite difficult to figure out how to present this type of work/perspective within the NIME context. Ultimately I feel like the paper format we used enabled us to share our most important ideas in a clear way, and reviewers seemed to be OK with it, so I am hopeful that other people could use our approach as a model and push it in new, better directions.

marije
2020-07-25 10:31
but touchscreens are usually also highly connected to controlling just one parameter of a process, or perhaps two, given their history and use in everyday life.

timo.dufner
2020-07-25 10:31
@capra.olivier, @florent.berthaut maybe the corona performance situation is interesting for your "long time" experiment, as it is a kind of controlled environment with seated audience etc.

sallyjane.norman
2020-07-25 10:31
thanks session 13! stimulating stuff!

lamberto.coccioli
2020-07-25 10:31
@sleitman Many thanks for chairing today!

r.fiebrink
2020-07-25 10:31
Thanks @schwarz Laetitia is using buttons on a PC1600 controller to do the "freezing", as well as faders to fade different models in/out
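
As a loose illustration only (Laetitia's actual setup runs through a PC1600 into Max/MSP, not Python), a fader box driving per-model gains plus a "freeze" button might be read roughly like this with the mido library; the CC numbers are invented for the sketch.

```python
# Hypothetical sketch: faders set per-model output gains, a button toggles "freeze".
# CC numbers and model names are placeholders, not the real mapping.

import mido

MODEL_GAIN_CCS = {20: "model_a", 21: "model_b", 22: "model_c"}  # fader CCs (assumed)
FREEZE_CC = 64                                                  # button CC (assumed)

gains = {name: 0.0 for name in MODEL_GAIN_CCS.values()}
frozen = False

with mido.open_input() as port:                  # default MIDI input port
    for msg in port:
        if msg.type != "control_change":
            continue
        if msg.control in MODEL_GAIN_CCS:
            gains[MODEL_GAIN_CCS[msg.control]] = msg.value / 127.0   # fader -> gain
        elif msg.control == FREEZE_CC:
            frozen = msg.value >= 64             # button down = hold current outputs
        print(gains, "frozen:", frozen)
```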

niccolo.granieri
2020-07-25 10:31
as previously mentioned, here is a link to a zoom room that will be open all day long for you to pop-in and chat with other NIME attendees! https://us04web.zoom.us/j/75307801251?pwd=VTF3MFJ4UTNaY1psTHQ4Qllkckhndz09

r.fiebrink
2020-07-25 10:32
(This is in the paper in more detail)

jpyepezimc
2020-07-25 10:32
Thank you all so much! Enjoy the rest of the conference!

marije
2020-07-25 10:32
These are the captions from the zoom session, including Q&A

r.fiebrink
2020-07-25 10:33
Good question @marceldesmith! I'm not aware of any work doing this, but seems like it would be fun to try

marije
2020-07-25 10:33
And check out the installations! https://nime2020.bcu.ac.uk/installations/

a.r.jensenius
2020-07-25 10:35
Yes, would love to discuss this more!

r.fiebrink
2020-07-25 10:35
Thanks @florent.berthaut Laetitia has written a bit about this in the paper. For instance, she says "[ML] cannot be dissociated from the hardware (the springs and pickups) and the software (Max/MSP). These three components define the instrument." Comparing Spring Spyre with lady's glove, she says that in building the Spring Spyre "I wanted to improve the likelihood of unpredictable events, which I learnt to cherish in the lady's glove when they occurred. I wanted to retain some interdependence of inputs (in the lady's glove, one muscle of one finger, when moved, will affect other muscles in other fingers)." And then regarding performance with the two, she writes: "This active listening is challenging, exciting and is new. This is the main difference in the actual live performance using the Spring Spyre. While the lady's glove required a very focused attention to keep track of how my gestures would affect the thirty sensors attached to the arm, the mapping would be fixed as to how the sounds would be affected by the gestures. This is a very exciting part of ML: the ability to move across synthesis terrains, discover new sounds, and refine the control in live performance."

l.mice
2020-07-25 10:36
Wow sounds amazing. Thanks for the link - I will check it out for sure. :space_invader:

r.fiebrink
2020-07-25 10:38
Thank you @cagri.erdem. I hope my answer in the Q&A was somewhat helpful here. In terms of how this plays out differently in composition and performance, I can't speak for Laetitia, but she's written a bit in the paper about some of her techniques for (and rationale for) performing with "high predictability-index" models. In performance, being able to "freeze" or "hone in on" sounds that she likes "forces me to listen very actively during the performance so I can 'catch' the sounds. This active listening is challenging, exciting and is new. This is the main difference in the actual live performance using the Spring Spyre. While the lady's glove required a very focused attention to keep track of how my gestures would affect the thirty sensors attached to the arm, the mapping would be fixed as to how the sounds would be affected by the gestures. This is a very exciting part of ML: the ability to move across synthesis terrains, discover new sounds, and refine the control in live performance."

glevaillant
2020-07-25 10:38
Here it is! Level 1 on the left. There were originally 5 levels, but we merged levels 4 and 5 (they were very similar, and people were too modest - only a few considered themselves to be level 5)

o.green
2020-07-25 10:40
@r.fiebrink your reply to @m.zbyszynski a couple of posts down gives some really valuable starting considerations I think: I guess the challenge for NIME (which, IMO, should be lapping up this kind of work) is to try and establish this as a valued strand of activity within the field, in the hope that it comes to be acknowledged more widely as 'legitimate' research in its own right (rather than something that has to happen in 'free' time).

alon.ilsar
2020-07-25 10:40
and here is a link to the guthman performance


decampo
2020-07-25 10:42
I can imagine! Well, happy to help establish that in any way I can! Also, we have a lot of synth patches around that are too complex for simple control, which we play with our metacontrol software and self-designed hardware (e.g. Isak Han's piece at NIME) - your paper really makes me want to try playing these with Wekinator!

o.green
2020-07-25 10:42
Maybe a collaborative future submission between NIMErs who maintain (or have maintained) long-running frameworks and tools, developing some of those useful starting ideas, would be a good thing? Again, the sort of comparative, conversational format that we've seen a few instances of this year might be an interesting way for folk to compare notes...

alon.ilsar
2020-07-25 10:43
and yes, @robert.blazey1 i don't think i can perform that particular piece as convincingly, so i actually changed the piece a fair bit

florent.berthaut
2020-07-25 10:43
Hi @konstantinos.vasilako, the idea with the augmentations was to avoid constraining the design of gestures and/or mappings to "transparent" gestural metaphors. But there are many possibilities (and biases) for representing the various components of the instruments. In the study we relied on simple shapes and connections between them. But other types of representation could be useful, again with different LODs.

alon.ilsar
2020-07-25 10:43
happy to discuss more with either or both of you over zoom @robert.blazey1 and @v.zappi

florent.berthaut
2020-07-25 10:43
@sleitman Thanks for chairing !

r.fiebrink
2020-07-25 10:44
Thanks @hugo.scurto! I think my work with students is often a bit more complicated, in that I'm simultaneously doing several different things:
• working to understand how they use/appropriate ML in their work (or how they would ideally like to)
• trying to teach them to use ML effectively and to push them to consider new ways of using ML (as we mentioned briefly in the video and expand on in the paper, students' initial ideas for using ML are often not very interesting/useful, in that ML may not be necessary or that it doesn't result in any particularly interesting type of interaction that couldn't be accomplished without ML)
• trying to evaluate their learning (e.g., did they do something a certain way because it was their creative goal, or because they didn't understand how to do what they really wanted, or because it was just the easiest thing for them to do with the knowledge/tools they had, ...)
Honestly, a lot of my deep reflections on this work have focused on trying to understand their learning and how to build better curricula and tools (or improving existing tools) to support their learning. (I have an ACM TOCE article on this just published in 2019.) I do think that's important though!

r.fiebrink
2020-07-25 10:44
For sure! And this is certainly something that could be built into other instrument design/synthesis exploration tools, with or without ML...

konstantinos.vasilako
2020-07-25 10:44
Thanks for the response, nice project :rocket:

capra.olivier
2020-07-25 10:44
Thanks for your question! To complete what I tried to express in the questions session, LODs are 'gradients' related to an augmentation technique. Meaning that the nature of the augmentation can vary by delivering rich abstract insights on the artist's intentions or maybe more primary cues like sensor activation with visual augmentations. LODs can then be interpreted as the amount of info a particular augmentation should be delivered with.

konstantinos.vasilako
2020-07-25 10:45
I will give the paper a read to get the whole idea.

florent.berthaut
2020-07-25 10:46
Thank you !

capra.olivier
2020-07-25 10:47
In our presented example with visual augmentations, the LODs were not strictly a gradient of amount of info as some levels deploy info about different aspects of a performance like the mappings or the audio processes

r.fiebrink
2020-07-25 10:47
@s.holland that's an interesting idea that Laetitia has talked about, as have some other users. Honestly, I don't know if it's worth having or not -- there are big questions about how this would be enabled (both how to support it algorithmically and what sort of mental model / user interface would be used for this), and then implementing it in Wekinator itself would be a lot of work (too much for me to take on), so it would probably need to be explored with other tools using software built just for this.

florent.berthaut
2020-07-25 10:47
Thank you! I find it really interesting that the "external" (non-wearable) interface has a more independent / unpredictable aspect (and i'll go read the paper :wink: )

r.fiebrink
2020-07-25 10:47
Thank you @dianneverdonk!

r.fiebrink
2020-07-25 10:47
Thank you @alucas02!

konstantinos.vasilako
2020-07-25 10:47
@r.fiebrink Indeed, I got the answer, but I will read the paper to get the whole idea, thanks!

r.fiebrink
2020-07-25 10:49
@o.green I like this idea a lot! Would you want to be involved in such a submission?

r.fiebrink
2020-07-25 10:49
Or are there other people you'd really want to see involved?

m.zbyszynski
2020-07-25 10:50
I can imagine something like studio reports...

r.fiebrink
2020-07-25 10:50
@decampo cool! Are these patches things you can share?

m.zbyszynski
2020-07-25 10:50
Paul Stapleton, and the SARC crew, have a long-term approach to this kind of work.

florent.berthaut
2020-07-25 10:51
@schwarz That definitely has an impact :wink:, but more seriously, it could be that individual strategies in using LODs (that we got from the interviews) would be similar with individual displays, but that there would be a very strong group effect in their collective use and perception. That is definitely something we have to look into

m.zbyszynski
2020-07-25 10:51
Adrian Freed is writing blog posts that try to pull together the long-term topics that informed CNMAT. (Unfortunately, that's now historical.)

capra.olivier
2020-07-25 10:52
Thanks @abi for the question. Short aside: I would have really appreciated meeting you IRL, after having read your work so often during my PhD thesis! To complete the answer to your question, we think mixed protocols, with variables controlled as much as possible and a 'real', ecological context, should definitely be insightful when investigating audience experience.

s.holland
2020-07-25 10:52
BTW lovely talk - great to see Laetitia on the Spyre, and very enlightening to see the interplay between your respective concerns.

capra.olivier
2020-07-25 10:53
We're working on solutions to capture objective data from the audience throughout a performance. One of the solutions involves the use of beer glasses as measurement devices! #trueStory

hugo.scurto
2020-07-25 10:53
Thanks for such a thorough answer! It seems that pedagogy- and practice-based approaches cannot bring similar contributions, yet they can bring rich, inspiring insight by somehow being complementary to each other. I guess that makes one more great paper for me to read from you! :smile:


o.green
2020-07-25 10:53
I'd love to be involved, although our stuff is still quite young (but maybe that provides its own distinct perspective). Meanwhile, I can think of a long list of interesting people / projects; your experiences with Wekinator (obviously), all the various strands and people from MIMIC (as well as the many, many other things that its various researchers continue to maintain), @schwarz's various long running things, the CNMAT crew (thanks @m.zbyszynski!), Agostini and Daniele from Bach; <deep breath>

o.green
2020-07-25 10:54
actually, the more I think, the longer the list :joy:

r.fiebrink
2020-07-25 10:55
Well, I think that can potentially bring similar contributions -- for instance, I think I would've learned a lot more from students about their goals/appropriations/challenges if I hadn't already learned those things from practitioners I'd been working with and observing for years. (You might say that most of the 'low-hanging fruit' lessons had already been learned by the time I started teaching this at university level!)

capra.olivier
2020-07-25 10:56
Thank you Lamberto! This is a first step from the lab, but we do think that mixed approaches have precious insights to reveal. Controlling variables in a concert hall is ambitious, but we hope to include objective data measurement in live contexts in the near future

m.zbyszynski
2020-07-25 10:56
If we had a NIME periodical, or journal, it could have an interview section. Each issue could have a conversation like this.

r.fiebrink
2020-07-25 10:56
And that in itself is valuable -- in that I do see confirmation of many of the lessons I've learned with Laetitia and other composers in the work with my students, and I can understand better what might/might not be generalisable

r.fiebrink
2020-07-25 10:57
oooh now there's a fun idea...

robert.blazey1
2020-07-25 10:58
@alon.ilsar thanks, I was curious because I made a gestural controller a few years ago that I still perform with sometimes, originally aiming to be able to play tight beats etc live through movement. I ended up abandoning that aim and instead built up drum sets and sounds that were more suited to wonky/sloppy beats as they were the only ones that were sounding convincing! Also ended up including a lot more soundscapey/electroacoustic elements where the overall gesture is more important than the strict timing. Basically embraced what it could do instead of labour over fixing what it couldn't! So yeah in the performance yesterday and the presentation today the beats are really tight and I was interested to know if that was still possible or if you were forced into less quantised territory!

m.zbyszynski
2020-07-25 10:59
Having a journal might somewhat "legitimize" this practice, by providing a publication with perceived impact-value.

m.zbyszynski
2020-07-25 11:00
That might be a good place for NIME reviews of commercial DMIs, too. (This from an earlier paper session.)

o.green
2020-07-25 11:00
Indeed: the incentive structures of academic work have a lot to answer for here!

r.fiebrink
2020-07-25 11:00
Would love to see more people chime in on this thread -- some interesting ideas from @o.green and @m.zbyszynski here! :point_up:

capra.olivier
2020-07-25 11:02
Thank you Cagri for your question. This is one of the most interesting aspects we should look into, in my opinion. Dealing with spectator experience imposes a fine line between the proper amount / kind of information to deliver and the risk of capturing too much attention. I think that this question should be addressed in collaboration with artists and researchers, as the former could provide a sensitive approach that the latter won't be able to model / identify so easily.

m.zbyszynski
2020-07-25 11:03
@m.ortiz? @p.stapleton?

capra.olivier
2020-07-25 11:05
Thanks for your question Timo! This is definitely a test that should be conducted. Our results from the lab provide interesting insights that should now be exported to ecological contexts, like a longer performance as you pointed out.

capra.olivier
2020-07-25 11:05
Thank you Konstantinos !

m.zbyszynski
2020-07-25 11:06
I wonder if this should move to the access-ecosystem thread?

o.green
2020-07-25 11:08
good shout

marije
2020-07-25 11:12
I've been doing some case studies on long term developments, that will be published in my book https://justaquestionofmapping.info

alon.ilsar
2020-07-25 11:12
Thanks @l.mice. I'd like to add that I actually mostly improvise with or compose on the AirSticks within ensembles, but when visuals are at play, often replacing other band members, i felt I wanted to leave less to chance i guess? @matthew.d.hughes becomes a true collaborator, band member and co-composer, particularly in a new piece we are working on which is visual-led, or at least led by the visual interactive system. here is a work-in-progress performance we did of it just before lockdown at TEI Sydney 2020... https://youtu.be/-yV3xTRs6vQ

marije
2020-07-25 11:15
with my artist association we are planning a series of artistic publications (Blueprint Series). The goal there is to document complete works: instrument/score/performance. https://instrumentinventors.org These should be coming out between 2021 and 2024.

schwarz
2020-07-25 11:20
seriously!

timo.dufner
2020-07-25 11:22
will do - and also will challenge a few of my students to use it :slightly_smiling_face:

alon.ilsar
2020-07-25 11:24
the only 'improvisation' i have done with the AirSticks and a visual system was with another visual artist, Andrew Bluff (who has worked lots with @matthew.d.hughes), https://www.youtube.com/watch?v=UsPtnSEwYEY this improv was more to test the system, which was then used in a trio https://www.youtube.com/watch?v=SnpTFW-6804 but it was hard to integrate the other musicians into the visual system short of just analysing their audio

o.green
2020-07-25 11:28
@marije that's really valuable, thanks

dianneverdonk
2020-07-25 11:31
Thanks! I'm going to read your paper since I've not looked into it yet. Maybe I'll find even more of the answer I'm looking for there. Really interesting project, thanks for the presentation!

dianneverdonk
2020-07-25 11:34
(and otherwise I'll come back to you for further questions ;))

alon.ilsar
2020-07-25 11:42
just checking out this performance of yours @l.mice https://www.youtube.com/watch?v=yRyzR4wgeDE @matthew.d.hughes im sure you'd dig this also. can you tell us about the visuals?

r.fiebrink
2020-07-25 11:47
@marije That's wonderful!

florent.berthaut
2020-07-25 12:03
@cagri.erdem On a side note, Olivier has looked in another paper at the impact of non-congruent visual augmentations, with the same visual complexity but disconnected from the actual gestures and instrument, which led to a lower subjective comprehension and experience than both the control condition (without any augmentations) and the congruent augmentations (non-congruent < control < congruent) https://hal.archives-ouvertes.fr/hal-02560916/file/Have_a_SEAT_on_Stage.pdf

cagri.erdem
2020-07-25 12:12
"This is a very exciting part of ML: the ability to move across synthesis terrains, discover new sounds, and refine the control in live performance." Exactly! Thank you for the explanation and for all the inspiring work you've done.

alon.ilsar
2020-07-25 12:13
perhaps we can make a time to chat after the conference? enjoy the rest of the presentations today. im doing my best to stay up for them!

cagri.erdem
2020-07-25 12:14
Thank you for the explanations. I think the action-sound causality in performance in a general sense is a very important topic. I?m looking forward to reading the paper and see the upcoming work. Cheers!

alon.ilsar
2020-07-25 12:18
this is very much my experience!! i wrote a small section in my thesis on wonky beats, and how the AirSticks lent themselves to the style. here's a little improvised wonky track with a trio actually... https://thesticksband.bandcamp.com/track/deep-fried

alon.ilsar
2020-07-25 12:20
i do use the trigger button on the controllers a lot to trigger arpeggios too, with movement mapping to timbral changes. i find i can then play the kick and snare patterns more convincingly when that is taken care of. would be keen to discuss this further, and hear/see your instrument!

alon.ilsar
2020-07-25 12:26
i demonstrate it a little here... https://youtu.be/GGtwgX4hVsU

alon.ilsar
2020-07-25 12:27
and in terms of quantisation, here was the first time i used it, in a looping context. so the incoming notes are actually off grid, but they are quantised as soon as they are looped

robert.blazey1
2020-07-25 12:38
Love the track, similar vibe to Hermutt Lobby :heart: here is a more structured track i made with edited layers of improv https://mrblazey.tumblr.com/post/156363106451/demo-of-a-studio-composition-made-entirely-with and a little demo from the original NIME application https://www.youtube.com/watch?v=zyJFihBKXrE

robert.blazey1
2020-07-25 12:39
Would love to read your thesis if possible? Sounds like it's probably a good reference for mine!

decampo
2020-07-25 12:42
yes sure! a collection of them are published in the SuperCollider quark here: https://github.com/aiberlin/NTMI, the patches are in POOL/process/

noris
2020-07-25 12:53
This is excellent. thanks!

a.r.jensenius
2020-07-25 12:54
Great thread, just catching up. And, yes, I think that such longer thinking/perspectives would be very important for a journal-like structure (or whatever else it may become)

p.stapleton
2020-07-25 13:08
Yay for this thread! Great suggestions from @o.green and @m.zbyszynski, and I'm also looking forward to your book @marije. Sorry I missed this morning's session, but I've just been catching up with the videos now. @r.fiebrink I previously read your paper with Laetitia and absolutely loved it. This is exactly the kind of documentation of long-term collaborations we need more of at NIME. The conversational/interview format in both the paper and presentation works really well. And the actual content is really valuable, particularly how you both pointed towards possibilities beyond conventional notions of control in favour of exploration, as well as machine non-learning, compositional hijackings, and scalable levels of predictability in the "synthesis terrain". I would love to chat with you more about all of this at some point. Many thanks for this work.

alon.ilsar
2020-07-25 13:26
sure thing. please do email me and i can send it through. Looking forward to checking these videos out properly, im fading quickly tonight here in Melbourne...

alon.ilsar
2020-07-25 13:31
yes, it started for me that way, and then i found myself doing it in front of people :wink:

alon.ilsar
2020-07-25 13:36
oh great! only this one re website... https://alonilsar.com/triggerhappy/ But a stream of the entire show just premiered a few hours ago! https://youtu.be/x-Xbo0mOgX0

alon.ilsar
2020-07-25 13:39
yeah, we really trialed the show in a lot of environments too, without visuals, changing the set around, doing short sets, until we finally landed on an hour long performance we were really happy with. there's a stream of an entire performance available here. just premiered this tonight australian east coast time actually https://youtu.be/x-Xbo0mOgX0

marije
2020-07-25 13:42
cool! it seems like it will do quite well at many festivals.

alon.ilsar
2020-07-25 13:45
we did experiment with it in the sense that sometimes i was miming, but still manipulating the visuals, which i think aids the audience in assuming that the sound isn't mimed either... and we did go through several iterations of the visuals. unity made that a little easier, but @matthew.d.hughes has now developed a system that makes it much easier in the rehearsal room to try more ideas. here is his demo at the conference this year https://youtu.be/MfTxVD8cMqE

alon.ilsar
2020-07-25 13:46
also enjoyed your paper this year in the new music journal. you and @matthew.d.hughes have much to discuss im sure

florent.berthaut
2020-07-25 13:56
@alon.ilsar @matthew.d.hughes Thank you for the answer, URack is definitely a great new paradigm for prototyping / developing audiovisual instruments ! (and thanks for the jnmr paper)

alon.ilsar
2020-07-25 14:00
i really hope so! was booked for a few that obviously fell through this year. hopefully can reignite it with some local ones in 2021

florent.berthaut
2020-07-25 14:00
On that note @matthew.d.hughes, do you transmit the point cloud from URack modules to Unity via OSC? Or is the depth camera processing done completely in Unity?

marije
2020-07-25 14:02
I hope so too!

sdbarton
2020-07-25 14:05
@jpyepezimc @jim.murphy I just watched the presentation: nice work! There are a number of novel aspects of this design that are a real contribution to the world of mechatronic chordophones. The great part of sliding pitch stoppers is that you get portamenti; the downside is that you sometimes get portamenti when not desired (a fact you undoubtedly faced with MechBass). I wonder how this system deals with such issues. Another potential issue is temporal evenness of uneven pitch intervals: have you tested the system's response with sequences that contain different-sized pitch intervals? I also wonder about how the system deals with legato passages. I'm excited to hear the instrument and to see the newest version!

r.fiebrink
2020-07-25 15:02
Thanks @p.stapleton! I'm so glad you found the paper valuable. It would absolutely be great to talk with you at some point!

r.fiebrink
2020-07-25 15:02
wonderful, thank you!

lauren.s.hayes
2020-07-25 15:05
@alon.ilsar this seems like the paper Jean Michel Jarre should have written...

lauren.s.hayes
2020-07-25 15:22
8 hours behind so I'm just catching up on your paper @r.fiebrink and Laetitia. Really enjoyed the interview style (like @tdavis & @laurajaynereid yesterday). Particularly helpful for me to see someone with such an established practice decide to move to ML in this way and the discussion of how it benefits her artistic practice (I remember Laetitia's amazing keynote from NIME in London). I'm in a similar position myself, and until I feel the affordances of the shift it's daunting but exciting (although @o.green and the lovely FluCoMa folks are being very supportive and helpful). Maybe you say in the paper, but what was the original motivation/spark/enticement for the collaboration? Congrats and I look forward to the next 8 year report.

tiago.brizolara-da-ro
2020-07-25 16:23
This is super interesting... some years ago I developed a NIME which was also mid-air with visual feedback (unfortunately not published in English...). In the same way you reported, the visual feedback turned out to be indispensable - other than the aesthetic addition, it somewhat substituted for tactile feedback. With it the performer knew when the instrument was being controlled, and how...

capra.olivier
2020-07-25 17:12
Thank you !

matthew.d.hughes
2020-07-25 17:27
@florent.berthaut With URack, the module's user interface (the part that lives inside VCV Rack) is just a dummy front-end for the Unity-based client. When you spawn a new URack module in VCV Rack it inserts all of the corresponding objects etc. into the Unity scene, and when you tweak knobs and connect cables it sends these updates over OSC. No heavy data is being sent between the programs - just control messages. So to answer your question: the point clouds live entirely inside Unity. We have implemented some point-cloud streaming over a network in some other performances, but again this just streams directly into Unity, and the URack modules just control how they appear. It's not unlike many other projects that use OSC *controllers* to manipulate visuals - but URack just tries to abstract that middleware layer away from the user entirely, so they're just presented as *visual instruments* inside the DAW right alongside the composer's usual sound tools.
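
To make the "control messages only" idea concrete, here is a minimal Python sketch of that pattern (URack itself lives inside Rack and Unity, not Python); the port, OSC addresses, module and parameter names below are hypothetical placeholders, not URack's real API.

```python
# Hypothetical illustration of the "dummy front-end" pattern described above:
# the front-end sends only lightweight OSC control messages, while all heavy
# data (point clouds, geometry) stays inside the Unity client.

from pythonosc.udp_client import SimpleUDPClient

unity = SimpleUDPClient("127.0.0.1", 9000)      # Unity-side OSC listener (assumed port)

def spawn_module(module_type, module_id):
    # Ask the Unity scene to instantiate the objects backing a new module.
    unity.send_message("/urack/spawn", [module_type, module_id])

def set_param(module_id, param, value):
    # A knob tweak becomes a tiny control update, not a data transfer.
    unity.send_message(f"/urack/{module_id}/{param}", float(value))

spawn_module("PointCloudScatter", 1)
set_param(1, "density", 0.75)
set_param(1, "hue", 0.33)
```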

eskimotion
2020-07-25 17:35
Very much looking forward to reading the paper! Thank you for providing such a fantastic tool that enables artistic implementation. I've just started to mess around with Wekinator. Hopefully, I will be able to adapt it into my artwork or research process :)

lja
2020-07-25 17:49
Echoing a point made above; it has been said that development only happens on ChucK when there's a pressing research need for it. A lot of routine maintenance goes ignored because all of us on the (small) development team don't have any "free time" to spare from our other work. It seems like a really hard problem to address if mundane contributions to open source software aren't valued by academia and don't generate income to live on.

a.r.jensenius
2020-07-25 17:58
Yes, this is important! I am sitting on the Open Science board of the European University Association, and there we are focusing a lot on how we can change the way we think about research assessment. Too much focus is put on publication metrics, and almost none on all the other parts of the research "ecosystem" (sorry about reusing that term so many times...). So we need to change the way we are thinking, and develop structures for systematically recognizing code development, etc. We organized a high-level workshop about this in May: https://eua.eu/events/129-2020-eua-webinar-series-on-academic-career-assessment-in-the-transition-to-open-science.html and all the presentations are on YouTube: https://www.youtube.com/playlist?list=PLq0J1sJGsmQ4n3dfVDwt8cOQ4CX8ddjLV. There is a lot of policy stuff there, but the positive thing is that both institutions and funders seem to be on the move. Next we need to change the culture as well; that is probably easier within NIME than elsewhere...

lja
2020-07-25 18:15
@r.fiebrink I really appreciate the viewpoint of mapping as "the process of creating a new interactive world ... that invites exploration and discovery". Can we still think of mapping in other contexts that are not as strictly about controller input --> ML --> synthesis output? I'm trying to think, in my work e.g. with animating birds by example, it's sort of like creating a mapping from [properties of current position in the world] to [behavior in the world]. But it feels more like just teaching something how to behave, rather than explicitly forming connections between terrain and animation. Maybe that's the exploration and discovery, the "computer as creative partner" that you discuss, at play?
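One way to see the parallel is that both framings are the same function-approximation setup with different domains: train on example pairs, then let the learned model respond to unseen inputs. A toy sketch of the birds case, with entirely made-up feature and behaviour names and a generic regressor standing in for whatever model is actually used:
```python
# Toy sketch: "mapping" as learned function approximation, whether the input is
# a controller gesture or properties of a position in a virtual world.
# Feature and output names are invented purely for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Examples: [terrain_height, distance_to_flock, wind] -> [wingbeat_rate, turn_angle]
X = np.array([[0.1, 2.0, 0.3], [0.8, 0.5, 0.1], [0.4, 1.2, 0.9]])
y = np.array([[2.5, 0.1], [1.0, 0.6], [1.8, 0.3]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[0.5, 1.0, 0.5]]))  # behaviour suggested for an unseen position
```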

lja
2020-07-25 18:24
Also really appreciate your perspective on learning how to make the _right_ thing only through extremely long-term engagement with and improvement to a tool, along with long-term engagement with how others use the tool. I can see how this work might be more valuable, or at least valuable in a very different way, than the typical approach of "make a quick buck" and learn some small, neatly-packaged thing from a short-to-medium-term design project, before abandoning it for the next "right thing".

lja
2020-07-25 18:38
@capra.olivier really amazing work here teasing out these different modes of "understanding"!! Now I wish I had secret AR goggles to show me what is happening whenever I'm at a computer music concert that's really confusing and dense. The ability to turn it on when I get curious/confused and turn it off to enjoy the music would be so amazing.

r.fiebrink
2020-07-25 18:39
Thanks!

r.fiebrink
2020-07-25 18:50
Thanks @lauren.s.hayes! Laetitia says a bit in the paper about her initial creative motivations for what became Spring Spyre. Her use of Wekinator was initially sparked just by the fact that I gave her a demo when it was in its very early stages; we hooked it up to the lady's glove to see what it would do, and she saw something potentially valuable. Thanks so much to you and @adnan.marquez also for your excellent paper on political and epistemological crises. I also missed your realtime talk, but I've been reading the paper and it's definitely going to be something I think about and share for a long time.

r.fiebrink
2020-07-25 19:02
Ah @lauren.s.hayes and @adnan.marquez - congratulations on the best paper award! so well deserved!

lauren.s.hayes
2020-07-25 20:50
ah thanks. ok that is nice to hear. i'll read the whole paper soon. congrats!

capra.olivier
2020-07-25 20:51
feel free to work on the secret glasses and show us!! We will be happy to help :slightly_smiling_face:

florent.berthaut
2020-07-25 21:03
@timo.dufner yes, there might be something to try there

alessan
2020-07-25 22:49
Just catching up on this great thread in my timezone! I agree with @r.fiebrink, @m.zbyszynski's idea for a newsletter-type publication is great. It is definitely possible to have reflections on long-term projects in papers, just as Rebecca and Laetitia's presentation exemplifies, and I would love to see more of that, but a newsletter or "magazine" would seem to be a particularly reflective format. For myself, guest co-editing ICMC's Array publication a few years ago allowed me to highlight some of those kinds of reflections, including a discussion between Rebecca and Laetitia, one between Atau Tanaka and Pamela Z, and various artistic statements by long-time developer-practitioners such as Mari Kimura. I think studio reports could be possible features as well.

jpyepezimc
2020-07-26 08:08
Thank you so much, Scott! These are great questions. About the unexpected portamento/slides, this was definitely something to keep in mind, which is why we changed the clamping approach from a proper "clamper" to more of a "bottleneck" type of clamping. This made it easier for the clamping mechanism to apply pressure or release the string faster. In the new version I have also been developing a "string release" gesture to properly center the clamper before releasing, which has made it even easier to avoid unwanted noises such as these (or accidental bounces, buzzing noises and such).

jpyepezimc
2020-07-26 08:14
Regarding pitch, I'd say that is one of the main challenges of this "robot arm" type of design. Not only do we have pitches that get "logarithmically closer" along the string, but also an increased resolution at the far end of the robot arm's range. This was definitely tricky to handle. We have been using lookup tables to find the best position for the pitches, which has worked pretty well. Having a clamper that can apply variable levels of force has been really helpful, and we have mapped different "pressure values" to the clamper, which really makes it possible to increase the pitch precision. Compared to MechBass's clamper, this is pretty much like going from a discrete domain to a continuous domain, which has been really flexible.
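The "logarithmically closer" spacing follows from the ideal-string relation: a note n semitones above the open string is stopped at a distance L * (1 - 2^(-n/12)) from the nut, where L is the scale length. Below is a minimal sketch of a lookup table built that way, with an illustrative scale length; a real mechanism would still need compensation for string stiffness and clamper pressure, so the actual tables are presumably tuned empirically:
```python
# Sketch: ideal-string lookup table from semitone offset to clamper position.
# SCALE_LENGTH_MM and the semitone range are illustrative, not the instrument's real values.
SCALE_LENGTH_MM = 650.0

def stop_position_mm(semitones_above_open: int) -> float:
    """Distance from the nut on an ideal string; real strings need compensation."""
    return SCALE_LENGTH_MM * (1.0 - 2.0 ** (-semitones_above_open / 12.0))

# Positions crowd together as pitch rises, which is the resolution issue described above.
lookup = {n: round(stop_position_mm(n), 2) for n in range(0, 25)}
print(lookup[12])  # the octave sits at exactly half the scale length: 325.0 mm
```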

jpyepezimc
2020-07-26 08:22
And finally, about legato passages... From experimenting so far, I've seen that there are two ways to really take advantage of what the instrument has to offer to keep notes ringing and as tied as possible. The first is to use as many slides as possible (whether it is between two notes or, when that's too awkward, perhaps trying to "arrive" at the note via a slide). This usually results in long notes and interesting melodic results (interpolating the clamper pressure using the robot arm position creates some interesting and natural-sounding slides). The second, which is one of my personal favorite things about guitars, is to try to use open strings as much as possible. Not only do you get long notes here, but it also frees up the arm to reposition while a note is still ringing. Lots to explore here as well, and I'm looking forward to the six-string version, which should open up even more options with multiple strings.
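The pressure interpolation mentioned in the parenthesis could be as simple as a linear blend between the pressures chosen for the start and end positions of a slide. The function and values below are only an illustration, not the instrument's actual control code:
```python
# Sketch: blend clamper pressure linearly with the arm's progress along a slide.
# Start/end positions and pressure values are illustrative only.
def slide_pressure(arm_pos_mm: float, start_mm: float, end_mm: float,
                   start_pressure: float, end_pressure: float) -> float:
    """Linearly interpolate pressure as the arm moves from start_mm to end_mm."""
    t = (arm_pos_mm - start_mm) / (end_mm - start_mm)
    t = min(max(t, 0.0), 1.0)  # clamp so the pressure never overshoots the target
    return start_pressure + t * (end_pressure - start_pressure)

print(slide_pressure(300.0, 250.0, 350.0, 0.6, 0.9))  # halfway through the slide -> 0.75
```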

ko.chantelle
2020-08-04 02:44
@alon.ilsar I'm only just going back to look at the paper presentations that I missed due to my timezone. Have you thought about applying your miming approach to composition with dancers? It might also be interesting to see whether a dancer's insights into how it feels to perform with such a system align with or differ from yours as a musician.