joe.wright
2020-07-17 23:15
has joined #papers01-hci-and-mapping

joe.wright
2020-07-17 23:15
@joe.wright set the channel purpose: Paper Session 1: HCI and Mapping

niccolo.granieri
2020-07-17 23:15
has joined #papers01-hci-and-mapping

hassan.hussain5
2020-07-17 23:15
has joined #papers01-hci-and-mapping

overdriverecording
2020-07-17 23:15
has joined #papers01-hci-and-mapping

lamberto.coccioli
2020-07-17 23:15
has joined #papers01-hci-and-mapping

jonathan.pearce
2020-07-17 23:15
has joined #papers01-hci-and-mapping

richard.j.c
2020-07-17 23:15
has joined #papers01-hci-and-mapping

joe.wright
2020-07-18 12:13
@joe.wright has renamed the channel from "papers1-hci-and-mapping" to "papers01-hci-and-mapping"

eskimotion
2020-07-20 09:25
has joined #papers01-hci-and-mapping

edmund.hunt
2020-07-20 09:25
has joined #papers01-hci-and-mapping

acamci
2020-07-20 17:01
has joined #papers01-hci-and-mapping

aaresty
2020-07-20 17:21
has joined #papers01-hci-and-mapping

10068197
2020-07-20 17:21
has joined #papers01-hci-and-mapping

a.nonnis
2020-07-20 17:22
has joined #papers01-hci-and-mapping

a.macdonald
2020-07-20 17:23
has joined #papers01-hci-and-mapping

andreas
2020-07-20 17:24
has joined #papers01-hci-and-mapping

dianneverdonk
2020-07-20 17:25
has joined #papers01-hci-and-mapping

likelian
2020-07-20 17:25
has joined #papers01-hci-and-mapping

ko.chantelle
2020-07-20 17:25
has joined #papers01-hci-and-mapping

anika.fuloria
2020-07-20 17:26
has joined #papers01-hci-and-mapping

clemens.wegener
2020-07-20 17:26
has joined #papers01-hci-and-mapping

a.mcpherson
2020-07-21 21:28
PDF papers for this session (easier than searching the whole proceedings):
*Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study* - Paper 85 in proceedings or http://instrumentslab.org/data/andreaM/Martelloni_McPherson_Barthet-Fingerstyle_camera_ready.pdf
*RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation* - Paper 91 in proceedings
*Beholden to our tools: negotiating with technology while sketching digital instruments* - Paper 84 in proceedings or http://instrumentslab.org/data/giacomo/BeholdenToOurTools.pdf
*AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation* - Paper 82 in proceedings
*Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance* - Paper 72 in proceedings
Proceedings as ZIP here: https://nime2020.bcu.ac.uk/paper-pre-proceedings/

niccolo.granieri
2020-07-22 08:50
We will start with the welcome address in webinar 1 in 9 mins, see you there! https://us04web.zoom.us/j/74077066157?pwd=VmtXdXc0UU1VdFlDNjkvMi8xMHgxUT09

niccolo.granieri
2020-07-22 08:50
Please ask questions for the authors on here, @v.zappi will be keeping an eye out on here, and will relay questions for the Q&A

a.martelloni
2020-07-22 08:56
@v.zappi should we (the authors) pin the questions on Slack as we go along? Or is that your job?

niccolo.granieri
2020-07-22 08:57
That's the chair's job :slightly_smiling_face:

joe.wright
2020-07-22 08:59
broadcasting in 1 minute :slightly_smiling_face:

g.moro
2020-07-22 08:59
zoom link?

joe.wright
2020-07-22 08:59
Just a few comments above

g.moro
2020-07-22 08:59
tks

niccolo.granieri
2020-07-22 09:00
@g.moro as a rule of thumb, you'll always find everything on the conference-hub (http://nime2020.bcu.ac.uk/conference-hub/)

juan.jpma
2020-07-22 09:02
is the whole audience muted by default? I can't see my own video feed nor an option to mute myself

joe.wright
2020-07-22 09:03
Yes, this is primarily so that we can save time in q&a and avoid captioning issues when lots of people are trying to speak at once

juan.jpma
2020-07-22 09:04
that's a relief, I thought I had a bug or something!

niccolo.granieri
2020-07-22 09:11
To clarify the process: what you're experiencing is a Zoom webinar. Chairs and panelists will be able to speak and see each other. NIME attendees will only be able to view the panelists, and will be able to ask questions for the Q&A in the corresponding Slack channel. Please let me know if you need any guidance, and enjoy this first paper session (@v.zappi just explained it perfectly).

xname
2020-07-22 09:11
link?

g.moro
2020-07-22 09:12
@xname

joe.wright
2020-07-22 09:12
links to zoom and youtube are in the comments above

niccolo.granieri
2020-07-22 09:13
as a rule of thumb, you'll always find everything through the conference-hub (http://nime2020.bcu.ac.uk/conference-hub/)

xname
2020-07-22 09:15
This is where I am looking at the moment. Not so clear where the webinar is, though

niccolo.granieri
2020-07-22 09:17
Thanks @c.n.reed, however, for future webinars: -> Timetable and Zoom Links -> Find the Webinar you want to join -> Click on the "webinar" hyperlink -> Enjoy

tragtenberg
2020-07-22 09:17
loved the bubblewrap experiment!

v.zappi
2020-07-22 09:21
Questions?

juan.jpma
2020-07-22 09:22
awesome video and presentation Andrea!

laurel.pardue
2020-07-22 09:24
Nice talk! Really enjoyed.

e.shatri1
2020-07-22 09:25
Great presentation and video @a.martelloni, really enjoyed it!

fcac
2020-07-22 09:25
Fantastic video, Andrea!

sallyjane.norman
2020-07-22 09:25
thanks Andrea, you're wonderfully clear and it's nice to have the links to other NIME authors!

manolimoriaty
2020-07-22 09:25
thank you, Andrea!

f.morreale
2020-07-22 09:26
Well done Andrea, very interesting work

laurel.pardue
2020-07-22 09:26
Andrea - I'm too slow, but do you think the fact that you are looking at something that is sort-of already a reappropriation of an existing instrument has impacted the audience, so that it is more micro-targeted rather than macro? Presumably their bandwidth is already quite used up.

charles.martin
2020-07-22 09:28
great animation @cagri.erdem!

sallyjane.norman
2020-07-22 09:29
co-adaptation and/ or competition?

m.rodger
2020-07-22 09:30
@a.martelloni Great presentation!! I wonder if you have a working definition to help distinguish between 'unexpected' and 'expected' affordances of an instrument?

l.mice
2020-07-22 09:31
Thanks for the shout-out in question time @a.martelloni. If anyone is curious, my large-scale instrument & research will be presented in Paper Session 12: Evaluation of NIMEs

m.ortiz
2020-07-22 09:32
@sallyjane.norman Interesting question Sally. I would argue that it might be co-adaptation as a result of a lost competition. While biocontrol is possible, excessive processing can increase control, which turns muscle interaction into a very unreliable faderbox, losing the interesting behaviours of the signals themselves

a.martelloni
2020-07-22 09:33
I'll beam a couple of references at you in a second; in short, I'd say the definition comes from the designer as well as a history of past performers. This is certainly more obvious when talking about trad instruments as opposed to DMIs/NIMEs. In the latter case I would define the "expectation" to be that of the designer... However, please do prove me wrong as required!

juan.jpma
2020-07-22 09:33
I keep hearing whatsapp notifications but dunno where they come from :eyes:

js.vanderwalt
2020-07-22 09:34
Must be coming from the machine running the webinar?

sallyjane.norman
2020-07-22 09:34
@m.ortiz thanks - are you suggesting that co-adaptation is a behavioural "flattening"? I see the issue with excessive processing...

v.zappi
2020-07-22 09:34
Of course follow-ups to questions and answers are welcome! I will mention your comment after asking the presenter

juan.jpma
2020-07-22 09:34
I think so, yeah

joe.wright
2020-07-22 09:34
Fixed

joe.wright
2020-07-22 09:35
Thanks for the heads up

juan.jpma
2020-07-22 09:35
no problem :wink:

v.zappi
2020-07-22 09:35
I will ask you to elaborate your question for the presenter and then mention any further comments from Miguel and other attendees

m.ortiz
2020-07-22 09:35
@sallyjane.norman Something like that in the specific context of EMG based performance.

v.zappi
2020-07-22 09:35
*I will mention

sallyjane.norman
2020-07-22 09:36
OK, I'll try!

m.rodger
2020-07-22 09:38
References would be great, thanks! To my mind, the issue of affordance expectation would come from the perspective of the perceiver-actor, i.e. the agent who needs to be sensitive to the affordance in order to act upon it. The shaping of the agent's perceptual system will depend on developmental history, including social-cultural scaffolding. I would suggest that the designer co-opts these processes, rather than determines them. Agree that this is harder to unpack for DMIs/NIMEs than trad instruments though

m.ortiz
2020-07-22 09:39
Can I have a copy of RAW?

konstantinos.vasilako
2020-07-22 09:39
I was wondering if there is something more on the live coding aspect of the project

konstantinos.vasilako
2020-07-22 09:40
Some elaboration on this would be helpful

email
2020-07-22 09:41
can you describe your sensor setup in a bit more detail please...

c.n.reed
2020-07-22 09:41
would also love to work with this!

sallyjane.norman
2020-07-22 09:43
@m.ortiz - sorry I was unable to convey your thoughts on this - which sound very interesting!

tragtenberg
2020-07-22 09:43
Congratulations @cagri.erdem! Amazing instrument and research. Did you open-source the Max interface? Sure many of us would have a lot to learn from it.

marceldesmith
2020-07-22 09:43
Do you feel that ML / AI could be used for dimensionality reduction of the movement data as well as the gamification aspect? Could this be used to simplify mapping and improve perceived causality?

konstantinos.vasilako
2020-07-22 09:44
Does he see a dialogue between the live coding and the physical performance, or him leading the temporal progression of the decisions of the live coders (or vice versa)?

sallyjane.norman
2020-07-22 09:45
@cagri.erdem thanks for a provocative paper - full of surprises!

a.martelloni
2020-07-22 09:46
Indeed the case of the acoustic guitar is different to the case of other instruments: I could say in other words that what I'm trying to do is to "legitimise" the new appropriation back into the instrument's design (horrible wording but you get the idea hopefully!). That would be different when thinking about a new DMI design, where the design goal would be to have a few simple affordances and leave those affordances flexible enough to encourage unexpected uses. I have to use the trope of MIDI again as a negative example: avoid such a bottleneck whereby you are forced to play as close as possible to what the machine/instrument expects.

konstantinos.vasilako
2020-07-22 09:46
@cagri.erdem Hello from Istanbul. Do you see a dialogue between the live coding and the physical performance, or you leading the temporal progression of the decisions of the live coders (or vice versa)?

a.martelloni
2020-07-22 09:48
Zappi, V. and McPherson, A.P., 2014. Dimensionality and Appropriation in Digital Musical Instrument Design. In _NIME_ (Vol. 14, pp. 455-460).

m.ortiz
2020-07-22 09:49
Not a worry this is only starting :wink:

a.martelloni
2020-07-22 09:49
Sorry I was typing a justification for this citation but Slack just fired it - I'm finding it really hard to multitask, sorry :D

m.rodger
2020-07-22 09:50
No worries - I'm happy to pick this up again later. Thanks for the paper reference!!

bunktrunk
2020-07-22 09:52
I wonder if we have a slight bias towards seeing these particular tropes in higher education, as we often have students creating projects in very limited time periods. There is no time to explore much beyond the default tutorial approaches to synthesis, mapping, etc?

m.ortiz
2020-07-22 09:55
It would be interesting to have a comparative study looking at other languages. What might be the bias of PD specifically vs digital music instrument building in a more generic way?

bob295
2020-07-22 09:55
Wow. That was great.

michael.lyons
2020-07-22 09:55
Inspiring!

m.yee-king
2020-07-22 09:55
Great celebration of opinionated tool design

charles.martin
2020-07-22 09:55
:clap: !

js.vanderwalt
2020-07-22 09:56
Interesting, thanks!

noris
2020-07-22 09:56
:+1:

michael.lyons
2020-07-22 09:56
Paradigm shifting

alucas02
2020-07-22 09:56
:+1: great presentation!

a.martelloni
2020-07-22 09:56
Please do - I think this deserves another couple of references and almost surely some more development!

lamberto.coccioli
2020-07-22 09:56
Inspiring stuff, thank you Andrew!

koray.tahiroglu
2020-07-22 09:56
Great presentation!

james.leonard
2020-07-22 09:57
Great talk! I have a possible question: has any research been carried out regarding the aesthetics and use of these paradigms inside popular electronic music tools? Specifically, does an electronic musician working with Max for Live in Ableton use Max with truly different aesthetics to a "pure" Maxer?

marije
2020-07-22 09:58
Alex McLean wrote from a similar point of view in his article in Floss+Art (2010), called LiveCoding for Free, about code creativity.

erik.nystrom
2020-07-22 09:58
:+1: excellent talk

aes
2020-07-22 09:58
Very interesting thoughts, thank you. It would be interesting to include something like Sonic Pi in a comparative analysis.

laurel.pardue
2020-07-22 09:58
I heard this while working at Digidesign years ago: 95% of people don't do much beyond presets. Some people will specialize in sounds, but lots want to focus on use, and that is pretty universal.

robert.blazey1
2020-07-22 09:58
Do you think this is a double-edged aspect of open source platforms? The ability to cut, paste and adapt code opens up these platforms massively, but maybe points everyone down similar paths?

raul.masu
2020-07-22 09:58
thank you for the presentation, very inspiring. My question is: how much do you think this is due to cultural habits and how much is due to the program itself? I'm thinking about how Brahms and Cage used a piano. Thanks

lamberto.coccioli
2020-07-22 09:59
There is also a way of teaching Max, Pd or SuperCollider that encourages a certain aesthetic and its associated sound

fcac
2020-07-22 09:59
Hi @a.mcpherson, great work. Here a *question*: What would be the influence of the time needed to understand the mental model of assembling and using sensors and Pure Data in the whole process? Would the pedagogical issue play an important role alongside the idiomatic one?

r.fiebrink
2020-07-22 10:00
Thank you Andrew, that was excellent! You touched on this a bit in your talk -- I think it's so important to recognise that this phenomenon isn't just a matter of languages making idiomatic sounds, but also languages (and in fact programming paradigms) leading to idiomatic approaches to designing instrumental interactions. For instance, in my work and teaching I see over and over that programming encourages people to make linear and one-to-one mappings, as opposed to ML or other approaches that encourage other types of mappings. I wonder what other types of compositional/design choices we might unpack...

marije
2020-07-22 10:00
Looking at instruments developed by artists who then play the instrument for a long time, I find that the music style of the artist is as important as the sensor interface for the interactions that are chosen.

dianneverdonk
2020-07-22 10:00
Hey Andrew, awesome talk and paper! Are you planning on looking into the physical side of the interactions with the tools? I think the software side is also really good to look into, and can be examined similarly to hardware (human - computer interaction), but I would be really interested in the physical 'sensor side'. If so, please keep us/me informed!

mario.buoninfante
2020-07-22 10:01
Miller Puckette about Pd and Max sound at 1:05:45 :slightly_smiling_face: https://vimeo.com/296991851

charles.martin
2020-07-22 10:01
I guess that kind of makes sense that over time the artist's personal style overtakes the direct influence of the medium

imtortorken
2020-07-22 10:01
my question is: while development tools have an impact on the way NIMEs turn out, do you think training and practicing on the instrument is actually the biggest factor that somehow causes most NIME performances to be not as 'musical' as typical performances with acoustic instruments?

a.macdonald
2020-07-22 10:01
really interesting, thanks!

fcac
2020-07-22 10:02
Thanks @dianneverdonk. Really interesting point!

marije
2020-07-22 10:02
in the cases I looked at, that also had a great influence from the outset. The types of things you want to control are quite important

hugo.scurto
2020-07-22 10:03
Great talk, looking forward to reading the full paper :slightly_smiling_face:

a.mcpherson
2020-07-22 10:09
Thanks everyone! Will reply to the comments after this session, but meanwhile, here is a link to the Organised Sound paper with @koray.tahiroglu which I mentioned in the talk that expands on the topic of idiomatic patterns in languages: https://acris.aalto.fi/ws/portalfiles/portal/42816040/idiomatic_patterns_and_aesthetic_influence_in_computer_music_languages.pdf

marije
2020-07-22 10:11
the chapter in the SuperCollider book by de Campo and Rohrhuber about dialects in the language might also be of interest as a reference!

a.martelloni
2020-07-22 10:11
That's great Thibault! I wonder what would happen with a continuous surface, e.g. the Linnstrument, and how you would negotiate the idea of a diatonic grid vs slides and "in between" notes?

tragtenberg
2020-07-22 10:11
Great presentation @a.mcpherson! We don't see these biases in VST synth development; do you believe that the languages used by those developers don't impose a cultural bias? Couldn't these biases in Pd and Max happen because of limited virtuosity in the NIME community in designing synthesizers, leading to low expressivity in sound design possibilities in these languages? Max and Pd are languages usually used to prototype synths, and in the synth development community you usually don't see these same biases; but that community is focused on sound design rather than interface expressivity. Wouldn't that effect of musical bias in Pd and Max happen in the low-floor range of these languages, while the high ceiling is much more diverse?

konstantinos.vasilako
2020-07-22 10:12
That's so nice!

v.zappi
2020-07-22 10:12
The current presentation is about to end!

v.zappi
2020-07-22 10:12
Please unleash your questions

alucas02
2020-07-22 10:13
In your demo, is chord selection automated?

joe.wright
2020-07-22 10:13
Has the app been tested within ensembles? And if so, do users feel "locked in" to the harmony? Playing in/out can be a very impactful way to instigate or provoke a response from other players. In other words, have you considered an "in/out" feature?

timo.dufner
2020-07-22 10:13
where can i play with this? :slightly_smiling_face:

js.vanderwalt
2020-07-22 10:15
I had no idea that iReal Pro could export Music XML! I just did that and imported into MuseScore, nice.

juan.jpma
2020-07-22 10:18
you should collaborate with this project to have this in guitar :smile: https://hackaday.io/project/161675-elektrocaster

raul.masu
2020-07-22 10:22
Thank you for this research, very interesting. Did you consider conditions where you have more than one dancer and one choreographer, where the aesthetic needs and interaction needs come from different people? In my experience, this requires specific design approaches. How would you apply your work to such scenarios?

v.zappi
2020-07-22 10:24
Very interesting question indeed!

tragtenberg
2020-07-22 10:25
Would the culture of using VST synths and ready-made Max/M4L instruments, made with the secret sauce of synth developers, be a possible way for the NIME community to expand its sonic affordances?

raul.masu
2020-07-22 10:25
thanks

marije
2020-07-22 10:27
@andrea.giomi would you care to share the slides?

email
2020-07-22 10:28
Very interesting... thanks. How do you think your approach might be adapted for use by people with disabilities, where they have more restricted movements?

marije
2020-07-22 10:28
Interesting related work (and well documented) might be Roosna & Flak's 100 Sketches: https://www.roosnaflak.com/100-sketches/

weixler
2020-07-22 10:29
To the NIME team: you are doing a great conference - pioneers in these circumstances.

thibaulj
2020-07-22 10:30
It has not been tested with more than one user, but it is definitely an important step to consider soon. The chords and sync will probably be shared between all users, but each musician will be able to choose their own scale, therefore enabling any 'out' sounds. The interface is designed to be easy to use but never a limitation, so it is always possible to play any scale on any chord. An 'in/out' button could be very interesting: for instance, clicking on it could result in transposing the scale a semitone or a tritone higher, enabling a harmonically unrelated scale but a very 'out' feeling (so jazz...)

aes
2020-07-22 10:30
Another thought in relation to Andrew's talk and other comments above: When teaching tools like SuperCollider, there is a notable difference between starting with algorithmic composition (patterns) and starting with synthesis (UGens/oscillators) or perhaps interactive stuff. Different levels of abstraction, I suppose. Any suggestions for readings on how these considerations (idiomatics of tools, style/sound of platforms, and so on) relate to pedagogy?

tragtenberg
2020-07-22 10:30
or even commercial Hardware synths, Eurorack...

sallyjane.norman
2020-07-22 10:31
thanks all!

michael.lyons
2020-07-22 10:31
Great job, Victor!

thibaulj
2020-07-22 10:32
Please feel free to use and fork the webapp: https://thib03.github.io/Heptatonic-Circle/ The plugin is still a work in progress; send an email if you want to beta test

james.leonard
2020-07-22 10:32
Here is a link to the slides in PDF format :slightly_smiling_face: https://drive.google.com/file/d/1mWVxXBwZPrziYBOSD7d_lUP3Xzy_cxt7/view?usp=sharing

niccolo.granieri
2020-07-22 10:33
Amazing first session, @v.zappi MVP!

thibaulj
2020-07-22 10:34
There is also a very prototypy chord progression webapp and more, you can find the instructions here: https://docs.google.com/document/d/16cdTrZ_0WUbb6cqWHoZ0B3P-2QgcSFLnE4YYKnwt-3g/edit?usp=sharing

marije
2020-07-22 10:35
thanks!

v.zappi
2020-07-22 10:35
Easy when there are amazing papers and great discussion!

thibaulj
2020-07-22 10:36
Oh great! Do you know if it is related to Fret Zealot? https://www.fretzealot.com/

timo.dufner
2020-07-22 10:37
thank you

joe.wright
2020-07-22 10:38
Great paper session, thanks all of you!

juan.jpma
2020-07-22 10:38
I'm not sure, but this project is open source

marije
2020-07-22 10:39
What I really appreciated when studying at the Institute of Sonology in The Hague was being exposed to a lot of different software/languages, so I could find which one I was attracted to, rather than having to work with one specific language.

a.mcpherson
2020-07-22 10:40
As I said at the session, my sense is that the short time period calls the influence of the tools into sharper relief, but that even with a longer period of time, the influence of the tool persists. There are some useful references in the paper and in the Organised Sound paper I linked in a later message here. For example, bricolage practice can be a useful model for understanding how people create interactive systems.

thibaulj
2020-07-22 10:41
That's good to know, thanks! I'm already working with some hardware devices using those RGB leds, I could indeed consider working with ElektroCaster. Fret Zealot has an API, but is of course not open source

info041
2020-07-22 10:43
Really good paper session!

a.mcpherson
2020-07-22 10:44
Stay tuned! Giacomo and I have done some preliminary investigation of SuperCollider. Participant recruitment is a challenge, especially now, but hopefully we'll get back to it. On the instrument building side of things, I would look at two sources of influence: first, as I said in the talk, we often get preoccupied with sensors themselves and the most obvious/direct ways of interacting with them (e.g. pushing directly on an FSR). Second, every sensor has its own connotations based on where we have seen it before. Even what kind of knob you put on a potentiometer might call up different associations with past music technology contexts. In other words: sensors are also cultural artefacts.

a.mcpherson
2020-07-22 10:45
It would be really interesting for someone to study this. More generally I'd love to see NIME authors studying DMI practices outside of the experimental/academic community.

a.mcpherson
2020-07-22 10:47
One study I did with @f.morreale and @j.harrison which goes partly in this direction looked at musical instruments released on Kickstarter, comparing to what is published at NIME and CHI. https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/55134/Mcpherson%20Musical%20Instruments%20for%202019%20Accepted.pdf?sequence=1

info041
2020-07-22 10:47
also having a look, thank you!

a.mcpherson
2020-07-22 10:49
Absolutely, or maybe TidalCycles. I imagine the instruments would be quite different when the designer's attention is focused on higher-level patterning.

a.mcpherson
2020-07-22 10:51
No question that tutorial and example materials are a big influence alongside the affordances of the language itself. Whether the language itself is open source might not even matter -- consider the influence of cutting and pasting from help patches in Max. David Zicarelli's 2002 CMJ article "How I learned to love a program that does nothing" talks a bit about the role of community: https://www.jstor.org/stable/pdf/3681768.pdf?casa_token=Iu_06SmaXqsAAAAA:EISuibgWUiyr40HdVbZ3RQcBOLU2cFvURiAhcvVCKJcvJ-pCjOHcH7ayRbQDO7UqeillCJaSiWQQjIzRUTbIKl30XaTDCgf_OS6njmg-Z61Gnaj7fw

a.mcpherson
2020-07-22 10:53
It's surely both. The language makes some things easy and some things hard, and that influences what we do. On the other hand, we are also inspired by what others have done with the same tools. We might say that the piano became a different instrument after Cage, and the saxophone changed after early jazz, because musicians now see their possibilities differently.

juan.jpma
2020-07-22 10:54
Very cool stuff, I'd definitely try it out on a Launchpad! congratulations

a.mcpherson
2020-07-22 10:54
Yes definitely. Especially as teachers, we are often teaching the fundamentals of musical synthesis at the same time, so lots of time is spent with sine and sawtooth waves, FM synthesis, etc. As a result, we get lots of digital instruments which involve those elements.

aes
2020-07-22 10:55
That makes sense. I think we learn a lot conceptually by switching between languages and paradigms.

bunktrunk
2020-07-22 10:57
Thanks Andrew - I stupidly asked the question mid-way through the talk, then you covered this. Yes, I take the point that these things persist - in a way that is slightly harder to put your finger on. Enjoyed the paper - was good to actually have a study into this!

a.mcpherson
2020-07-22 10:59
Pedagogy surely has a big influence in the same way that community and example materials do. The language itself is not neutral, but our encounters with it are also shaped by others. And it might be that the way you first encounter a tool leaves a lasting mark on how you use that tool forever thereafter.

a.mcpherson
2020-07-22 11:05
Absolutely! In some ways the `map()` function is the worst thing that ever happened to DMI design. It's just so alluringly convenient that we forget all the assumptions it makes. There's an important role for ML here. What do you find that people do differently when designing instruments this way?
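For reference, this is roughly the linear interpolation a `map()` call performs (a generic sketch of the idiom, not any particular library's source):
```
#include <cstdio>

// Roughly what an Arduino/Bela-style map() does: pure linear
// interpolation. It quietly assumes the sensor-to-parameter
// relationship is linear, instantaneous and memoryless.
float map(float x, float inMin, float inMax, float outMin, float outMax) {
    return outMin + (x - inMin) * (outMax - outMin) / (inMax - inMin);
}

int main() {
    // e.g. an FSR reading in 0-1 mapped straight onto 200-2000 Hz
    std::printf("%f\n", map(0.5f, 0.0f, 1.0f, 200.0f, 2000.0f)); // 1100 Hz
    return 0;
}
```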

a.mcpherson
2020-07-22 11:09
Definitely. This paper was about challenging the notion that our tools are neutral channels for expressing our ideas. But likewise, our aesthetic preconceptions are really important. Unpacking what parts of NIME practice come from the influence of technology and what parts come from a post-Cage experimental aesthetic is a really interesting and challenging topic.

marije
2020-07-22 11:09
I think where you start also depends on what you want to teach. So if you teach sound synthesis, within SC you'll focus on that. If you teach algorithmic composition, that is the focus, and if you teach sound-interaction, you'll focus on that.

a.mcpherson
2020-07-22 11:11
I would love to do this. Something for future work, or for someone else to try! Also keep an eye out for a journal paper from Giacomo and I (to be finished soon) which will expand on some of these points.

r.fiebrink
2020-07-22 11:14
Haha yes, makes me wonder what arbitrary one-line alternatives to map() we could come up with to shake things up a bit!

a.mcpherson
2020-07-22 11:15
Leaving aside how we define "musical", it's definitely pretty common at NIME to spend more time building than practicing. It's really challenging because (arguably) the conference incentivises new technology, and it's hard to spend years learning a brand new instrument. Michel Waisvisz has an interview somewhere about playing The Hands, where he talks about deliberately freezing the mappings to focus on performance practice.

a.mcpherson
2020-07-22 11:16
Thanks, I'll check out that and the other reference you suggested. Interested in any other references anyone might suggest on this topic!

a.mcpherson
2020-07-22 11:18
I do think NIME could use more VSTs or other commercial instruments. There might be an aversion to "commercial" technology or a "not invented here" syndrome. But surely VST synths, and even languages like C++ used to make them, are just as directive as Max or SuperCollider. It's just that the lines of influence are different. IMO, true musical neutrality doesn't exist.

a.mcpherson
2020-07-22 11:21
I've been working on this Bela C++ YouTube course, and in the last few lectures I've been doing logarithmic mappings for frequency and amplitude. Compared to `map()` it always looks really messy. I was thinking a `logmap()` function might be a decent start. But that's still only .0001% of the possibilities. Is there a way to shoehorn ML-based mappings into easy single-line functions?
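Something like this is what I have in mind for `logmap()` (a minimal sketch of a hypothetical helper, assuming a map()-style signature and a positive output range):
```
#include <cmath>
#include <cstdio>

// Hypothetical logmap(): like map(), but interpolates exponentially,
// so equal steps of the input give equal ratios (musical intervals)
// of the output. Assumes outMin and outMax are both positive.
float logmap(float x, float inMin, float inMax, float outMin, float outMax) {
    float t = (x - inMin) / (inMax - inMin);       // normalise input to 0-1
    return outMin * std::pow(outMax / outMin, t);  // exponential interpolation
}

int main() {
    // Midpoint of a 55-880 Hz range lands at 220 Hz (two octaves up),
    // not at the linear midpoint of 467.5 Hz.
    std::printf("%f\n", logmap(0.5f, 0.0f, 1.0f, 55.0f, 880.0f));
    return 0;
}
```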

r.fiebrink
2020-07-22 11:21
We've tried to design Wekinator, RapidLib, MIMIC machine learning tools to all make it equally easy (well, often even easier) to make a many-to-many & non-linear mapping (specific shape driven by examples) as a one-to-one and linear mapping. And this leads to a big shift in how people think about the design -- it's easy to forget how many input dimensions you've got and how many control dimensions you've got, and get into a mode in which you're just associating a particular input state/position/etc. with a particular sound, rather than thinking analytically about how to control individual sound parameters as if you had a set of virtual knobs that you were controlling. It is a totally different mindset. And it leads to a very different type of interface being built (e.g., ability to access a wider variety of sounds and to move through diverse sounds more efficiently, encouraging performances that are about sound sculpting/exploration rather than fine-tuning one thing at a time, encouraging performers/composers to use physical gestures of different magnitudes to discover and improvise with sonic gestures...). Laetitia Sonami touches on this a bit in our NIME paper on Saturday morning.

r.fiebrink
2020-07-22 11:22
"Is there a way to shoehorn ML-based mappings into easy single-line functions?" -- potentially! I could imagine something like mlmap(x1, y1, x2, y2, x3, y3, ... , ?function_type)

r.fiebrink
2020-07-22 11:23
where you've got a list of input/output pairs of arbitrary length and optionally a function type that practically gives you control over how wild this function might be (e.g., a neural network with/without some amount of regularisation, a polynomial of a greater/lesser degree)

r.fiebrink
2020-07-22 11:25
implementation could be more or less hacky (e.g., to simulate how a Wekinator user might train this, adding some/many additional phantom x-y pairs with small amounts of noise added, so that you've got a dataset big enough to enable effective training; or using a more principled approach that's more of a regularised curve fitting that's happy with a smaller number of examples)
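As one very hacky sketch of that idea (kernel-weighted regression over the example pairs, standing in for a properly trained model; hypothetical code, not Wekinator/RapidLib):
```
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical mlmap(): interpolate a mapping from example
// input/output pairs with a Gaussian kernel, a crude stand-in
// for a trained regression model.
float mlmap(float x, const std::vector<std::pair<float, float>>& examples,
            float width = 0.1f) {
    float num = 0.0f, den = 0.0f;
    for (const auto& [xi, yi] : examples) {
        float w = std::exp(-(x - xi) * (x - xi) / (2.0f * width * width));
        num += w * yi;  // weight each example's output by its proximity
        den += w;
    }
    return den > 0.0f ? num / den : 0.0f;
}

int main() {
    // Three examples already give a smooth non-linear curve.
    std::vector<std::pair<float, float>> ex = {
        {0.0f, 100.0f}, {0.5f, 900.0f}, {1.0f, 400.0f}};
    for (float x = 0.0f; x <= 1.0f; x += 0.25f)
        std::printf("%.2f -> %.1f\n", x, mlmap(x, ex));
    return 0;
}
```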

a.mcpherson
2020-07-22 11:32
`mlmap()` would be really fun to try. The easier it is to replace `map()` with something like that, the better. Really we ought to rename `map()` to `linearmap()` to remind designers what it does, but backwards compatibility is a powerful force. Looking forward to your paper on Saturday!

a.mcpherson
2020-07-22 11:33
One other aspect of mapping I'm interested to challenge is its often memoryless nature. There are so many physical processes that can't be modelled by "sensor value right now changes sound parameter right now".

aes
2020-07-22 11:35
Good point. For music students who are relatively new in computer music I also think it makes sense to take into account their existing experiences and preferences so they'll have at least some familiarity with the starting point.

tragtenberg
2020-07-22 11:42
Sure, no neutrality.

robert.lieck
2020-07-22 11:50
Very interesting point @a.martelloni! I'd like to add two points to what @thibaulj already said in the session. 1) Practically, this is relatively simple because even on the diatonic grid, tones are ordered by frequency, so interpolation to a continuous interface is straightforward. Conversely, even on a continuous interface (Linnstrument, ROLI Seaboard, ...) there usually is a discrete structure to help orientation, which can be used to indicate the diatonic grid. 2) From a more fundamental perspective, this brings up a very interesting point. Is the tonal space actually continuous or one-dimensional? I would argue yes and no. Frequency space definitely is, so a one-dimensional (quasi-)continuous representation is one important approach. But harmonic space is probably neither one-dimensional nor continuous, because it is tightly connected to the overtone series (which is inherently discrete). This question has a long-standing history in music theory and is actually also the subject of ongoing research in our lab (http://dcml.epfl.ch). Now we haven't even started talking about timbre space... but that's for another day :D
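As a toy illustration of point 1 (my own sketch, nothing from the paper): a continuous position can be interpolated in log-frequency between adjacent diatonic degrees, so slides pass "between" the notes of the grid:
```
#include <cmath>
#include <cstdio>

// Major-scale degrees as semitone offsets from the tonic.
const int kDiatonic[7] = {0, 2, 4, 5, 7, 9, 11};

// Semitone offset of an arbitrary (possibly negative) scale degree.
int degreeToSemitones(int d) {
    int oct = d / 7, s = d % 7;
    if (s < 0) { s += 7; oct -= 1; }
    return 12 * oct + kDiatonic[s];
}

// Map a continuous position in scale-degree units (e.g. from a touch
// strip) to frequency, interpolating in log-frequency between the two
// neighbouring diatonic tones.
float diatonicToFreq(float pos, float tonicHz) {
    int deg = (int)std::floor(pos);
    float frac = pos - (float)deg;
    float st = (1.0f - frac) * degreeToSemitones(deg)
             + frac * degreeToSemitones(deg + 1);
    return tonicHz * std::pow(2.0f, st / 12.0f);
}

int main() {
    // Halfway between the 2nd and 3rd degrees above C4: ~311 Hz.
    std::printf("%f\n", diatonicToFreq(1.5f, 261.63f));
    return 0;
}
```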

r.fiebrink
2020-07-22 11:53
For sure! I am still struggling to come up with a simple mental model for such processes that can translate easily into a simple GUI or programming interface, in part because there are so many different modes of such interaction that designers employ or imagine, but not an established vocabulary for how these differ. For instance, even when limiting consideration to orchestral conducting gestures, think of the differences in how you would build a system capable of interpreting gestures used to communicate tempo, to bring in an orchestra in synchrony at the beginning of a piece, or gestures like shaking a fist to communicate the energy of a section... I think even musicians with decades of experience playing for conductors have to think hard about all the different ways that temporal dynamics, "memory", etc. come into play in interpreting and responding to such motions. I like Jules Francoise's "PASR" (preparation, attack, sustain, release) framework (inspired by ADSR envelopes) for formalising a certain set of musical actions, but so much is still not captured here... Anyway, that's a long way of saying that I think the biggest challenge is coming up with a good way for musicians/users/programmers to *think* about such mappings.

vincze
2020-07-22 11:53
Hi, I came a bit late; how can I still follow the paper presentations that are going on now?

niccolo.granieri
2020-07-22 11:53
Hi @vincze! To follow the paper going on, the first thing to do is to move to the channel named #papers02-bespoke-adaptable-admis

niccolo.granieri
2020-07-22 11:54
That's where all the Q&A is going on

a.mcpherson
2020-07-22 11:55
> the biggest challenge is coming up with a good way for musicians/users/programmers to *think* about such mappings. Yes! One barrier is the role of (human) language. An orchestral musician can understand what a conductor does without being able to describe it precisely in words.

aes
2020-07-22 11:57
Yeah, that's another good example. The notion of the loop would probably feature prominently when sketching with those platforms.

a.mcpherson
2020-07-22 12:11
Or the pattern, or the algorithm. None of which are naturally idiomatic to Pd. For example, in our study we didn't see any rhythmic patterns other than a basic `[metro]` object, and we saw mainly continuous pitch spaces. It would probably be very different in these other languages.

raul.masu
2020-07-22 12:15
Thanks for this reply Andrew.

raul.masu
2020-07-22 12:16
Looking forward to reading the paper

joe.wright
2020-07-22 13:01
Thanks! It's a really cool project, I'll keep an eye out for more in the future :slightly_smiling_face:

sallyjane.norman
2020-07-22 19:19
@cagri.erdem - Hi Cagri, would love to pick up with you on the co-adaptation versus competition question - indeed you didn't raise the latter, but I wondered whether and how this might constitute an alternative to the "co-adaptation" you did evoke (sorry for poorly (sleepily!) formulated oral question), on your continuum of relations between artificial agents and humans. Will read your paper properly and maybe get back to you - hopefully drawing @m.ortiz into the loop, as his reflection that co-adaptation "might be a result of lost competition" resonated with what I was trying to query.

m.ortiz
2020-07-22 19:31
@sallyjane.norman Hi Sally, I think you mentioned the term competition, questioning if it was related to the co-adaptation mentioned by @cagri.erdem. My response was that I see this move as a sign that any competition was 'lost' a long time ago. To unpack this: the biocontrol paradigm was still rooted in interfaces to control external hardware (via MIDI at the time), with the promise of 'direct visceral control unencumbered by a physical interface' (Knapp et al). During my PhD I was very frustrated with this promise and the results I could get; in my thesis I wrote somewhere that the human body is a terrible fader box. With enough processing we could make a functional 'air piano' using EMG. But this would limit any expressive potential and be a waste of time and effort to replicate an existing instrument. In my work, I argued that defining the EMG as an instrument component was an arbitrary decision framed by the software I developed and my skills at the time. However, once this arbitrary decision has been made, there is an open space to explore what the instrument is and how it wants to be played. Definitely it is not a piano; maybe it is a didgeridoo, which has completely different characteristics and roles. In the end it is neither of the above but an EMG instrument, which itself is different from Atau's, Marco's, Cagri's, etc.

jmalloch
2020-07-22 19:42
"The only solution that worked for me is to freeze tech development for a period of sometimes nearly two years, and than exclusively compose, perform and explore/exploit its limits." ?Michel Waisvisz, Round table: Electronic Controllers in Music Performance and Composition, Trends in Gestural Control of Music, Wanderley & Battier, eds. IRCAM 2000.

jmalloch
2020-07-22 20:04
Hope you don't mind if I jump on here too! :slightly_smiling_face: I agree that it would be nice to see more non-linear, complex mapping, but we also need to remember that the linear one-to-one mapping structures we complain about are often connected to non-linear sensors, controlled by non-linear musicians and complexly-interconnected by the mechanics of an instrument body. Viewed from this distance the mappings don't seem as simple!

jmalloch
2020-07-22 20:05
My go-to complaint is usually about "events" and the way people insist on describing interactions in terms of MIDI.

jmalloch
2020-07-22 20:10
@a.mcpherson regarding memoryless mapping: most of the "successful" T-Stick mapping designs have made heavy use of IIR filters (mostly leaky integration).
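For anyone following along, leaky integration is essentially a one-pole IIR filter; a generic sketch (not the actual T-Stick mapping code):
```
#include <cstdio>

// Leaky integrator: a one-pole IIR filter. The internal state gives
// the mapping "memory": the output depends on the sensor's recent
// history, not just its instantaneous value.
struct LeakyIntegrator {
    float y = 0.0f;
    float leak;  // 0..1; closer to 1 means longer memory
    explicit LeakyIntegrator(float l) : leak(l) {}
    float process(float x) {
        y = leak * y + (1.0f - leak) * x;
        return y;
    }
};

int main() {
    LeakyIntegrator li(0.99f);  // slow leak: smooths and accumulates
    for (int i = 0; i < 5; ++i)
        std::printf("%f\n", li.process(1.0f));  // rises gradually towards 1
    return 0;
}
```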

a.mcpherson
2020-07-22 20:15
@jmalloch agreed on all counts! One objection I have to the mapping paradigm of NIMEs is that it's actually only a small part of the overall interactive experience. We need to think about what comes before and after the convenient numbers we plug together. I am a big fan of leaky integrators. See for example the Magpick work with @f.morreale where we did this in hardware to get very low corner frequency and minimal drift. http://instrumentslab.org/data/andrea/2019NIMEFinal.pdf

sallyjane.norman
2020-07-22 20:49
thanks @m.ortiz, this is insightful. I've been "tuned" to Ben, Atau and their successors, but perhaps there's a link for me between the issues you raise and those @a.mcpherson featured in his "beholden" paper. This haunting question of same-sounding stuff. At the poster session, @sleitman eloquently relayed her yearning to enliven technology by injecting grit from real life, instead of expecting technology to serve as a preferable overlay to real life (she said it really well - something along those lines). What happens when muscle-based interaction is construed as clean signal explored for control purposes, rather than as embodied exertion with its inherent/inevitable "glitches"? Probably a very naive question and a bit far from your other intriguing reflection on the influence of prior instrumental models...

lja
2020-07-22 21:23
`mlmap()` does sound really interesting! I've been thinking of IML mappings more in terms of "components" due to spending nearly all my time in Unity. So in that model, the mapping is something that "lives on" / belongs to a virtual object, and its state gets modified over time. Having `mlmap()` be a standalone function sounds more portable, but also then the user becomes responsible for storing all their data separately and providing it anew each time? I wonder if it's a terrible idea for `mlmap()` to remember all the input-output pairs until you clear it, so that whenever you call it, it uses all the training examples you've used before alongside all the ones you're providing now. Food for thought?
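Something like this is what I mean by a component (a hypothetical sketch, with the same Gaussian-kernel stand-in for a real IML model as in the `mlmap()` sketch above):
```
#include <cmath>
#include <utility>
#include <vector>

// A stateful, component-style mapping: it remembers every training
// pair given so far and maps with all of them until cleared.
// (Hypothetical API, not Wekinator/RapidLib/MIMIC code.)
class MlMapper {
public:
    void addExample(float in, float out) { examples_.emplace_back(in, out); }
    void clear() { examples_.clear(); }
    float map(float in, float width = 0.1f) const {
        float num = 0.0f, den = 0.0f;
        for (const auto& [xi, yi] : examples_) {
            float w = std::exp(-(in - xi) * (in - xi) / (2.0f * width * width));
            num += w * yi;
            den += w;
        }
        return den > 0.0f ? num / den : 0.0f;
    }
private:
    std::vector<std::pair<float, float>> examples_;
};
```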

lja
2020-07-22 21:26
@a.mcpherson just catching up to this morning's presentations from California! Great talk, I love the assertion that tools are special for their weirdness. I started learning on Max/MSP but became a pretty heavy ChucK user once I was introduced to it, because its way of working aligned so much better with my way of thinking. A lot of people dismiss ChucK because of its inefficiencies in rendering audio, but that's not the only way to evaluate a programming language for whether it's useful to someone.

jmalloch
2020-07-22 21:32
@a.mcpherson Thanks for the link ? I'll read it more thoroughly but I already love the use of Pd pseudocode :slightly_smiling_face:

jmalloch
2020-07-22 21:38
@lja the libmapper project (http://libmapper.org) treats inter-signal maps as objects that have their own properties. Still needs work on C# bindings for interfacing with Unity though...

cagri.erdem
2020-07-22 21:44
Thank you very much @sallyjane.norman for bringing this up. I think @m.ortiz explained clearly the particular case of using EMG signals in a musical instrument. I can say a bit more in addition to that, but perhaps from a more general music-making perspective. I remember a study of a muscle-based controller I was developing a few years ago, where most users were frustrated when they wanted to control their effects parameters as if they were using footswitches. However, once they started to let the system influence how and what they play, they started to enjoy it much more. The joy there lies in waiving control while still performing (together), which can be perfectly seen as "lost competition" (I remember I was influenced quite a lot by Chadabe's thoughts on mapping in NIMEs, in this context). I think the key concept here is losing that competition intentionally and collaboratively, by using the technology in the form of signal glitches, random processes, multi-agent systems and so on. However, as active performers (and audience), we also want some amount of causality between action and sound. So, in a sense, the competition is lost on both sides by co-adapting. I think this is perhaps where we "inject grit from real life" - please correct me if I got that wrong. This is a very thought-provoking discussion. I hope to have more!

a.r.jensenius
2020-07-23 07:22
Thanks @sallyjane.norman for an excellent question, and for good comments by @m.ortiz and @cagri.erdem. Just to add in one comment based on our experiences of creating instruments based on "inverse" interaction. Here the idea is that you control the instrument by _not_ doing anything. With the self-playing guitars, for example, users have to relax and deliberately try not to move for the guitars to make sound. If you move, they will be quiet. In the beginning it is challenging, but once you get into it, you can play with finding just the right balance between moving and not moving, controlling and not controlling. This can also be seen as a type of co-adapting with the instrument, I think.

a.mcpherson
2020-07-23 09:02
> A lot of people dismiss ChucK because of its inefficiencies in rendering audio, but that's not the only way to evaluate a programming language for whether it's useful to someone. Yes absolutely!

marije
2020-07-23 09:17
@a.mcpherson Katherine Hayles' discussion of the technogenetic cycle is also relevant to your paper. I have referenced her work in some of my papers (my papers on 'Embodiment of Code' and 'Interplay between composition, performance and instrument design').

cagri.erdem
2020-07-23 13:15
Hi Konstantinos! I think the biggest challenge in having such a dialogue is that the live coder and the muscle performer structure time in very different ways. In my case so far, I found the best way is to follow the live coder, and influence their upcoming decisions through the dynamic changes I can introduce. I think this is an unusual and very interesting kind of ensemble playing, which I wish to explore more. Additionally, as a live coding hobbyist myself, I am also curious about using bio-sensing in live coding contexts.

lja
2020-07-23 19:04
(I am still learning about Slack and suppose I should have tagged @r.fiebrink and @a.mcpherson in my comment above)

r.fiebrink
2020-07-24 07:28
@lja no worries :slightly_smiling_face: at least for me, Slack sends me notifications when there's a reply in a thread I'm following.

dianneverdonk
2020-07-25 19:12
that sounds great! Could you keep me updated about this? I'm not in a catalogue/proceedings environment, so I bet I'll miss it when it is published. Thanks again!

a.mcpherson
2020-07-25 20:21
Sure, I will try to remember to write you when the paper is out!