joe.wright
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

joe.wright
2020-07-18 00:28
@joe.wright set the channel purpose: Paper Session 7: Design / Cultural Diversity in Design

niccolo.granieri
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

hassan.hussain5
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

overdriverecording
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

lamberto.coccioli
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

jonathan.pearce
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

richard.j.c
2020-07-18 00:28
has joined #papers07-design-cultural-diversity

joe.wright
2020-07-18 12:14
@joe.wright has renamed the channel from "papers7-design-cultural-diversity" to "papers07-design-cultural-diversity"

eskimotion
2020-07-20 09:25
has joined #papers07-design-cultural-diversity

edmund.hunt
2020-07-20 09:25
has joined #papers07-design-cultural-diversity

acamci
2020-07-20 17:01
has joined #papers07-design-cultural-diversity

aaresty
2020-07-20 17:21
has joined #papers07-design-cultural-diversity

10068197
2020-07-20 17:21
has joined #papers07-design-cultural-diversity

a.nonnis
2020-07-20 17:22
has joined #papers07-design-cultural-diversity

a.macdonald
2020-07-20 17:23
has joined #papers07-design-cultural-diversity

andreas
2020-07-20 17:24
has joined #papers07-design-cultural-diversity

dianneverdonk
2020-07-20 17:25
has joined #papers07-design-cultural-diversity

likelian
2020-07-20 17:25
has joined #papers07-design-cultural-diversity

ko.chantelle
2020-07-20 17:25
has joined #papers07-design-cultural-diversity

anika.fuloria
2020-07-20 17:26
has joined #papers07-design-cultural-diversity

clemens.wegener
2020-07-20 17:26
has joined #papers07-design-cultural-diversity

lamberto.coccioli
2020-07-22 16:03
Nick Bryan-Kinns, Li Zijin _ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies_ *Paper 75 in Proceedings*
Sara Sithi-Amnuai _Exploring Identity Through Design: A Focus on the Cultural Body Via Nami_ *Paper 109 in Proceedings*
Adam Pultz Melbye, Halldor Ulfarsson _Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms_ *Paper 42 in Proceedings*
Travis J West, Marcelo M Wanderley, Baptiste Caramiaux _Making Mappings: Examining the Design Process_ *Paper 55 in Proceedings*
Giulio Moro, Andrew McPherson _A platform for low-latency continuous keyboard sensing and sound generation_ *Paper 19 in Proceedings*

lamberto.coccioli
2020-07-23 09:15
If you want to ask a question in response to a paper, please indicate in your message which paper presentation you are responding to, either by mentioning the title of the paper or by using @ to direct it to the presenter. This will make it easier for people to follow the presentations and the Q&A later (since attendees are in different time zones).

g.moro
2020-07-23 13:00
And please respond in threads!

joe.wright
2020-07-23 13:43
the next paper session is open now!

hassan.hussain5
2020-07-23 13:44
Just in case we have issues with Zoom Captions, this is the link for the external captions: https://www.streamtext.net/player?event=NIME230720

x
2020-07-23 13:53
*STATEMENT / QUESTION*: There was dance music in the background of the performance video - was this deliberately part of the performance / coming from the instruments, or part of the video edit?

info041
2020-07-23 13:54
love the octoqin!

lamberto.coccioli
2020-07-23 13:55
Any questions for Zijin and Nick?

js.vanderwalt
2020-07-23 13:56
Was the making done in China or UK?

a.mcpherson
2020-07-23 13:56
Thanks @n.bryan-kinns @zijin.li for this important work. It's great to see more attention to cross-cultural NIME work in recent years. I'm interested in your thoughts on the cultural context of the outputs of these collaborations. When NIME technology meets traditional instruments, can you comment on to what extent the resulting music should follow an experimental NIME aesthetic versus the original aesthetic of the traditional instrument?

a.mcpherson
2020-07-23 13:56
This question also relates to a discussion @tragtenberg started on #nime-access-and-ecosystem

info041
2020-07-23 13:57
@n.bryan-kinns was special notation developed for extra features of the instruments?

barryjosephcullen
2020-07-23 13:58
Did traditional players ask for modern designers to come share their influence, or did the modern designers ask the traditional players to take part in a study?

fengjian113
2020-07-23 14:02
*It's very nice to see more researchers' eyes on some less popular ethnic instruments. Great job!*

zijin.li
2020-07-23 14:06
Thanks!

max
2020-07-23 14:06
Hopefully instruments of minorities that are subject to cultural subdomination, like the Rawap, will receive the same attention.

lamberto.coccioli
2020-07-23 14:08
Please remember to use @ to address the author so that questions can be traced back to the right paper

tragtenberg
2020-07-23 14:10
Amazing research @sarasithiamnuai! Creating new instruments can have a very important role in creating new cultural identities... Congratulations!

lamberto.coccioli
2020-07-23 14:12
Any questions for Sara?

a.mcpherson
2020-07-23 14:12
Really great ideas and questions @sarasithiamnuai!

a.r.jensenius
2020-07-23 14:12
Very nice @sarasithiamnuai! Any thoughts on how this could be extended to other instruments as well?

ko.chantelle
2020-07-23 14:13
@sarasithiamnuai perhaps I missed it, but why did you choose the name Nami?

m.zbyszynski
2020-07-23 14:13
@sarasithiamnuai interesting work, thanks. I need to read the paper a bit before I have a reasonable question.

x
2020-07-23 14:13
@sarasithiamnuai It seems like you're hitting on elegant questions to aid the design process. Were these design methods self-initiated or are they part of a design course?

laddy.cadavid-hinojos
2020-07-23 14:14
Wonderful project @sarasithiamnuai, it will be great to read your paper in detail.

info041
2020-07-23 14:15
@sarasithiamnuai do you now have a set gestural vocabulary for Nami, or does it change with the different contexts and cultures you work with?

knotts.shelly
2020-07-23 14:16
Awesome work @sarasithiamnuai Thanks for introducing us to the Cultural AI design tool!

fengjian113
2020-07-23 14:16
@sarasithiamnuai brilliant! thanks

barryjosephcullen
2020-07-23 14:17
@sarasithiamnuai thanks for sharing the sensitive reflections during your design process.

a.mcpherson
2020-07-23 14:17
On the topic of the last two presentations, an interesting ethnomusicological paper about the cultural complexity of musical instruments is "The Social Life of Musical Instruments" by Eliot Bates: https://www.jstor.org/stable/pdf/10.5406/ethnomusicology.56.3.0363.pdf?casa_token=5cd87cUWO6YAAAAA:Coi1oY2Kly4IcA2R5VM4WoVw_WxOsswkmxC8dZGXPWYe9syXzqZ0P9kL-tjlQmiMemDSL4ds1xBT7KP18HpFFP_MqYT0A98xJK-SLrWzvF6xyMi4PA

n.bryan-kinns
2020-07-23 14:17
no special notation developed - the composers used a combination of Chinese notation and electronic music notation

lamberto.coccioli
2020-07-23 14:25
Any questions for Halldor and Adam?

konstantinos.vasilako
2020-07-23 14:25
QUESTION: Is there a roadmap precomposed prior to each performance while improvising, in terms of "interaction affordances" of the system(s) while performing? How are these influencing the design process of the interfaces/environments?

max
2020-07-23 14:25
Where can I hear more of the instrument?

a.martelloni
2020-07-23 14:25
Question: How robust is the current system against the typical acoustic feedback during a stage performance?

ko.chantelle
2020-07-23 14:26
@amelbye01 Could you share the github links from the presentation here?

sarasithiamnuai
2020-07-23 14:26
Hi Chantelle! Nami in Japanese means "wave." I learned about nami in the context of art/music through a first generation Nikkei musician during a Nikkei Music Reclamation workshop in Little Tokyo. She described it as an embodiment of life through music that goes beyond bar lines and technicalities. This was something that resonated with me especially as someone who plays African American music and more specifically jazz, in which this concept has been acknowledged as a big part of the music. We can see this span cross-culturally from the music of Charles Mingus to a local Nikkei group based in LA called Minyo Station.

m.ortiz
2020-07-23 14:26
@amelbye01 You mentioned using an external laptop which breaks with the self contained ethos. Have you looked at other Single Board Computers?

barryjosephcullen
2020-07-23 14:26
@amelbye01 What part of the programming stretched the BELA beyond its capabilities?

marceldesmith
2020-07-23 14:27
Does 2nd order cybernetics play a role in how you conceptualize and perform with your instrument?

a.martelloni
2020-07-23 14:28
If he hasn't played it live it's a bit of an out-of-place question, sorry :smile:

cagri.erdem
2020-07-23 14:30
@amelbye01 In what ways did using feedback influence the existing interactive strategies on your instrument, for instance how much you play and/or exert effort?

agpiepenbrink
2020-07-23 14:31
Regarding Bela vs. other single-board computers, it's my understanding that the PRU (programmable realtime unit) is the key feature of the CPU which enables the ultra-low latency, and other SBCs do not have this. I imagine that if one wanted to do heavy DSP such as FFTs beyond the capability of Bela, you would need an FPGA.

marije
2020-07-23 14:31
next paper starting - making mappings

amelbye01
2020-07-23 14:31
Yes! Thank you Giulio

g.moro
2020-07-23 14:32
Bela dev here, I think we may have to look at what routines are used by Supercollider for performing FFTs

v.zappi
2020-07-23 14:32
Also, frequency analysis/processing does not go along well with low latency/small buffers

g.moro
2020-07-23 14:32
Problem is - IIRC - they are done in the audio thread, which is why you need larger buffer sizes:

g.moro
2020-07-23 14:33
as Victor is saying, you are not necessarily maxing out the CPU on average, but you may have a CPU peak during every block where the FFT is performed

sarasithiamnuai
2020-07-23 14:33
Hi Solomiya, so far it's largely been on a performance-by-performance basis. For some solo performances, I've tried to combine gestural vocabulary sets, but when working with a specific community I do like focusing on a set specific to them (and also creating a glove controller specific to them); that provides its own challenges but also highlights unique needs or aspects of a community.

amelbye01
2020-07-23 14:34
http://www.adampultz.com/augmenting-the-double-bass/ also, there's a concert tonight with the instrument

g.moro
2020-07-23 14:34
@agpiepenbrink Running at very small blocksizes, you'd want the FFTs to run in a separate, lower priority thread. Unfortunately I don't think this is how it's currently done on Sc (nor Pd). So yeah, there may be some CPU optimisations that can improve the situation, but it's mostly about the architecture of the algorithm: large, expensive computations need to be outside the audio thread, especially if your blocksize is very small.
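For illustration, a very rough sketch of that architecture in generic C++ (not Bela or SuperCollider code - a plain std::thread and a naive DFT stand in for a proper real-time auxiliary task and an optimised FFT, and all the names are made up):
```
#include <atomic>
#include <chrono>
#include <cmath>
#include <complex>
#include <thread>
#include <vector>

constexpr int kFrameSize = 1024;
constexpr float kPi = 3.14159265f;

std::vector<float> gRing(kFrameSize * 4); // audio thread writes, worker reads
std::atomic<size_t> gWritePos{0};
std::atomic<bool> gRunning{true};

// Called once per sample from the (simulated) audio callback: O(1), no locks.
void audioWriteSample(float in) {
    size_t w = gWritePos.load(std::memory_order_relaxed);
    gRing[w % gRing.size()] = in;
    gWritePos.store(w + 1, std::memory_order_release);
}

// Expensive analysis; only ever runs in the worker thread.
size_t peakBin(const std::vector<float>& frame) {
    size_t best = 0;
    float bestMag = 0.0f;
    for (size_t k = 0; k < frame.size() / 2; ++k) {
        std::complex<float> acc;
        for (size_t n = 0; n < frame.size(); ++n)
            acc += frame[n] * std::polar(1.0f, -2.0f * kPi * k * n / frame.size());
        if (std::abs(acc) > bestMag) { bestMag = std::abs(acc); best = k; }
    }
    return best; // e.g. a crude pitch estimate
}

void workerThread() {
    size_t readPos = 0;
    std::vector<float> frame(kFrameSize);
    while (gRunning.load(std::memory_order_acquire)) {
        if (gWritePos.load(std::memory_order_acquire) >= readPos + kFrameSize) {
            for (int n = 0; n < kFrameSize; ++n)
                frame[n] = gRing[(readPos + n) % gRing.size()];
            readPos += kFrameSize;
            peakBin(frame); // heavy work happens here, never in the audio thread
        } else {
            std::this_thread::yield(); // a real system would block on a condition
        }
    }
}

int main() {
    std::thread worker(workerThread);
    for (int n = 0; n < kFrameSize * 8; ++n) // pretend audio callback: 440 Hz sine
        audioWriteSample(std::sin(2.0f * kPi * 440.0f * n / 44100.0f));
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    gRunning.store(false);
    worker.join();
}
```
The point is only that the audio-side work stays O(1) per sample while the O(N log N) (here, worse) transform runs at lower priority.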

sarasithiamnuai
2020-07-23 14:35
Hi Vahakn, thanks for the message. Yes, it was introduced to me by Christine Meinders at CalArts under the AI.Culture.Creativity course but it is also utilized for many of the products created at http://Feminist.AI.

amelbye01
2020-07-23 14:36
Great question. Yes, very much. First and second-order cybernetics are central to our understanding of adaptation. I can send you some text on this if you send me an email on

g.moro
2020-07-23 14:36
I explain something very similar to this in this video around 21 minutes https://youtu.be/e1D5vCBWhdk?t=1257

g.moro
2020-07-23 14:36
[title is supposed to be "suitability", not "stability"]

v.zappi
2020-07-23 14:36
@g.moro, nice beard!

amelbye01
2020-07-23 14:37
I feel that I need to more or less completely rethink techniques and performance tactics. It's great and terrifying :-)

max
2020-07-23 14:38
@g.moro Will there be a Bela with the BeagleBone incorporated?

g.moro
2020-07-23 14:38
@max what do you mean there? Like all on a single board?

agpiepenbrink
2020-07-23 14:38
This problem with FFTs makes sense. I ported Lance Putnam's Gamma DSP library to Bela some time ago, and in my experience the only code examples which made the Bela choke were the FFT examples. I assume this is because it's using FFTPACK and not taking advantage of any ARM hardware accelerations available via NEON.

lamberto.coccioli
2020-07-23 14:38
Any questions for Travis?

ko.chantelle
2020-07-23 14:39
This looks like very useful research! I've seen many papers on design models, guidelines, and frameworks as relating to DMIs, but this is the first time I've heard of "getting into the head" of designers/composers as they make decisions.

amelbye01
2020-07-23 14:39
Thanks! I still have lots to learn in terms of programming and it's entirely possible there are ways of optimising the code.

g.moro
2020-07-23 14:39
it's usually a combination of the two: non-optimised FFT libraries AND small blocksizes

amelbye01
2020-07-23 14:40
Andrew's Bela / C++ course is great and I hope to be able to do the processing in C++ in the future

g.moro
2020-07-23 14:40
@amelbye01 we should upstream our code to the main Supercollider in the near future, we can look at more optimisations then.

marije
2020-07-23 14:40
@travis.west It seems that the type of mapping is focused on direct mapping of continuous signals to continuous control over parameters; this seems to exclude approaches that allow for different types of mappings, e.g. modal control.

sdbarton
2020-07-23 14:40
@travis.west Can you generalize about the kinds of interface elements (for both the hardware and the software) that are conducive to exploration / experimentation?

a.mcpherson
2020-07-23 14:40
@travis.west How was it decided what sources and destinations to provide, and do you think the process of designing mappings would have changed if you presented a different set of parameters?

marceldesmith
2020-07-23 14:41
You mentioned modular synthesizers, and I wonder if that community in particular would be an interesting one to study for this work considering how mapping itself is the primary interaction of that instrument and workflow.

a.mcpherson
2020-07-23 14:41
Thanks Adam! More C++ lectures coming soon. :slightly_smiling_face:

amelbye01
2020-07-23 14:41
I meant me optimising my code, but that sounds great

konstantinos.vasilako
2020-07-23 14:41
How do you classify an efficient mapping? Is there an evaluation process?

g.moro
2020-07-23 14:41
@amelbye01 a threaded approach to FFT is what you need, look at the example `Audio/FFT-phase-vocoder`. That does FFT and IFFT in a separate thread. In your case for pitch tracking you'd only need the FFT, but that can serve as a starting point (however the code is a bit messy at the moment)

amelbye01
2020-07-23 14:41
@a.mcpherson Can't wait! Thanks so much for the work.

james.leonard
2020-07-23 14:42
@travis.west: when you mention that users spend one or several minutes without altering the mapping, does this correspond to users experimenting and playing the instrument for a prolonged period, or did you observe breaks during which users would take a step back to rethink a starting point for a mapping strategy?

x
2020-07-23 14:42
@travis.west QUESTION: Did anything surprise you during your observations of the test subjects?

amelbye01
2020-07-23 14:42
@g.moro Thanks, will definitely have a look at that.

tom.mitchell
2020-07-23 14:43
Hi @travis.west NIME folks use all kinds of environments for mapping, all with different features and nuances. Do you think some of the processes that you've observed are applicable to other environments?

marije
2020-07-23 14:43
@travis.west the choice of output parameters and synthesis (or, more generally, musical representation) is usually an important part of the process, not just the connections from inputs to outputs

rschramm
2020-07-23 14:43
@travis.west Great work! Do you have plans to extend the experiments including more participants?

noris
2020-07-23 14:43
@travis.west very interesting research! i wonder if in the end you managed to achieve some sort of universal agreement with the mappings?

marije
2020-07-23 14:45
Last paper of the session is starting now

travis.west
2020-07-23 14:46
Yes, I would more or less agree. I guess it's possible 'theoretically'

travis.west
2020-07-23 14:46
to do this kind of modal mapping with webmapper, but it's definitely not the strength of that tool

konstantinos.vasilako
2020-07-23 14:47
@travis.west A bit more about the question regarding efficient mapping: is it apropos to each user/developer's aesthetics? In that case, isn't this a self-referential axiomatic process rather than one that is efficient in terms of technological approach? I am not sure I understood this.

travis.west
2020-07-23 14:47
Absolutely. The tricky thing I guess would be how to capture the design process for analysis; maybe using a fully digital modular synthesizer...

travis.west
2020-07-23 14:49
Definitely a bit of both. I still haven't looked too closely at the signals from the t-stick, so I can't say for certain or offer too much detail on the proportion of breaks vs playing, but from watching participants while doing the design, there is definitely a mix of these kinds of behaviors.

travis.west
2020-07-23 14:50
Nothing jumps out at me, except maybe just the variety in the mappings people made. There was really very little agreement between participants about how to map the t-stick to the synthesizer we gave them, no one did exactly the same thing. Also, some participants did some really weird things with webmapper that I was not expecting (possibly that the designers of the software were not expecting) that evoke some very bizarre glitchy sounds

travis.west
2020-07-23 14:51
Mostly though I didn't have too many ideas about what to expect I guess, so mostly there was not much that really shocked me.

marije
2020-07-23 14:52
I think approaches of further processing the data from the input, before connecting it to an output parameter, are also missing in the webmapper approach, i.e. first generating more data.

james.leonard
2020-07-23 14:53
thanks! Also a small follow up question (I may have missed this in the presentation): did any participants entirely discard their mapping and restart from scratch in the middle of the test? If the mapping design is iterative and bottom-up, the first choices have a huge influence, and I wonder if any users felt that they reached a "dead-end" due to the first choices further in the design process.

travis.west
2020-07-23 14:53
Yeah, definitely some things, but probably not most. I think regardless of the tool being used to make the mapping, there would still be a period of learning about the instrument and the synthesizer by moving things around and getting a feel for it. And I guess, tools with a similar kind of mental model to webmapper (e.g. Pure data maybe) might facilitate a similar working process. But otherwise, I really don't know; probably things would be very different when using other tools, especially if they present a different working/mental model

alucas02
2020-07-23 14:54
Wow, I'd love to try this!

lamberto.coccioli
2020-07-23 14:54
Any questions for Giulio?

juan.jpma
2020-07-23 14:54
was that ELP on the Hammond?

niccolo.granieri
2020-07-23 14:54
This paper is awesome. Now I want one for myself.

jmalloch
2020-07-23 14:54
I would argue that modal mappings are pervasive because they are easy to create using common tools, whereas making complex mappings that maintain interest without changing modes is difficult. Modes are like "channel-surfing" when watching television :slightly_smiling_face:

travis.west
2020-07-23 14:55
I absolutely agree. For the purposes of this study though, we were intentionally focused on examining the input-output connection part of the process.

g.moro
2020-07-23 14:55
Emerson would use an L100, because it's lighter to kick around and stab

sdbarton
2020-07-23 14:55
@g.moro interesting research. I wonder about the possibilities of integrating haptic feedback in the physical interface. I think about the feedback that a string provides to a guitarist as it is bent.

dianneverdonk
2020-07-23 14:55
Great project, presentation and results, @g.moro and @a.mcpherson! Have you let (pro) instrumentalists play with it already?

myounghoonjeon
2020-07-23 14:56
Any plans to work on the pedals as well? It would be awesome to combine these keys with advanced pedals.

marceldesmith
2020-07-23 14:56
Do you feel that the sound / synthesis engines will be the bottleneck to wide adoption once controllers like the Expressive E keyboard are released? What are your thoughts on the Eigen matrix platform?

niccolo.granieri
2020-07-23 14:56
@g.moro that tshirt is an easy win: I want that too.

travis.west
2020-07-23 14:56
For sure. We've already had a few more participants since I wrote the paper, and we have plans to extend the research with different mapping tools as well.

j.harrison
2020-07-23 14:57
I'm thinking of a continuous-control access switch!

jmalloch
2020-07-23 14:57
@marije libmapper (etc) do support processing as part of the connection. "Naming" the processed version and representing it as a source signal is not part of the current UIs but has been discussed (for many years!)

travis.west
2020-07-23 14:57
Definitely not! There was (I would say) very little agreement between participants about what connections were most effective. There was some agreement about the qualities of effective mappings, but not about which connections had these qualities. I go into this a bit more in my thesis, which should be available soon.

sarasithiamnuai
2020-07-23 14:57
Thanks Alexander! So far I've built glove controllers for non-musicians and an electronic musician/improviser but this is a great question. I haven't focused necessarily on other specific instruments yet and have actually been primarily focused on working with creative people from a variety of communities who may not necessarily be musicians. But the good thing about the Cultural AI Design Tool is that it isn't instrument specific or even culture specific so I would be curious to see how others use this framework for their respective needs.

tom.mitchell
2020-07-23 14:57
OK thanks @travis.west. For interest with Dom Brown and Chris Nash we have done some related analysis of experienced musicians/mappers that might be of interest to your work, again using only one mapping environment: Understanding User-Defined Mapping Design in Mid-Air Musical Performance http://teamaxe.co.uk/publications/032/moco2018.pdf Simple Mappings, Expressive Movement: A Qualitative Investigation into the End-User Mapping Design of Experienced Mid-Air Musicians https://uwe-repository.worktribe.com/preview/857790/expert-paper-FINAL.pdf

tragtenberg
2020-07-23 14:58
@g.moro @a.mcpherson Haken's Continuum Fingerboard uses MIDI 1.0 internally in its MPE-like implementation, with 14-bit CCs and poly pressure. Why didn't this protocol work in your setups? What limitations did you find in MIDI?

laurel.pardue
2020-07-23 14:59
Please check out our Vodhran video during the poster/demo session. We very much hope you enjoy it and that it brings a smile. We'll also be running a live demo on Zoom where I'll do my best to demo for you whatever you'd like to see! https://www.youtube.com/watch?v=Tw7J3pyMXW8&feature=youtu.be

a.r.jensenius
2020-07-23 14:59
Great comment, @a.mcpherson: pedals are the only continuous controllers on a piano.

travis.west
2020-07-23 14:59
"Effective" was the term we used, not "efficient"; sorry if I misspoke at some point! Our thinking here was that "effective" depends on what your goals are. If you want to play melodies, then an effective mapping would be one that lets you play melodies. If you want to play noise, an effective mapping would be one that lets you play noise. Probably those would not be the same mapping.

marije
2020-07-23 15:00
And installations to visit now!

travis.west
2020-07-23 15:00
Thanks, that looks very relevant!

konstantinos.vasilako
2020-07-23 15:00
:+1:

agpiepenbrink
2020-07-23 15:00
I really appreciate the concept of the "always-on" sound being accessed by a key press. This reminds me of David Wessel's "dipping" metaphor on the SLABS instrument, as detailed in the classic paper here: https://cnmat.berkeley.edu/sites/default/files/attachments/2002_problems-and-prospects-for-intimate-musical-control-of-computers.pdf
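To make the metaphor concrete, a tiny hypothetical sketch (nothing from the paper, just my reading of the idea):
```
#include <cmath>

// Toy "dipping" sketch: the voice runs all the time; a continuous key
// position scales it, instead of a key-press triggering an envelope.
// All names invented.
struct AlwaysOnVoice {
    float phase = 0.0f, freqHz = 220.0f, sampleRate = 44100.0f;

    // keyPosition: continuous key sensor value in [0, 1].
    float process(float keyPosition) {
        phase += 2.0f * 3.14159265f * freqHz / sampleRate;
        if (phase > 6.2831853f)
            phase -= 6.2831853f;
        float ongoing = std::sin(phase); // sounds whether or not a key moves
        return ongoing * keyPosition;    // the key "dips into" the ongoing sound
    }
};
```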

g.moro
2020-07-23 15:02
I also tried to put some foam under the keys on a synth-style non-weighted keyboard to offer some resistance to the press. That works kinda OK but needs refining. Ultimately it turns it into a "larger and lighter than usual aftertouch"

rschramm
2020-07-23 15:02
Nice research. This is an important contribution to the NIME field. And, of course, a higher number of subjects will increase the statistical significance. I can't wait for the next outcome of your project!

marije
2020-07-23 15:02
I find in my research that these modal approaches relate to acting on different levels of music making, e.g. note vs timbre level, section vs note level, etc.

a.mcpherson
2020-07-23 15:03
The limitations of MIDI (whether 1.0 or 2.0) are a really interesting rabbit hole actually. It's not so much that you couldn't represent the continuous key angle readings with 14-bit CCs (though you'd be limited to 16 keys in an MPE approach, which might be a problem). The bigger problem is the assumptions that MIDI makes: that notes have discrete onsets and releases characterised only by velocity, that notes are independent of one another, and that everything else can be lumped into "continuous controllers". In reality, *articulation* is the big missing piece when you get to any instrumental metaphor beyond the piano. The transient behaviour at the beginning of the note can be quite complex on strings and winds, and even for key motion. Even if you were to capture lots of high-speed samples of key position with MIDI CCs, what would you do with them in a MIDI context? It's not the instantaneous values we care about, it's the evolution over time and what that says about the player's key strike gesture.
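For concreteness, here is a minimal sketch of the 14-bit CC mechanism itself (hypothetical helper function; the CC pairing is the standard MSB/LSB convention) - resolution really isn't the hard part:
```
#include <array>
#include <cstdint>

// Hypothetical helper: returns the two 3-byte control-change messages that
// carry `value` (0..16383) as a 14-bit CC. By MIDI 1.0 convention, CCs 0-31
// hold the MSB and CCs 32-63 hold the matching LSB.
std::array<std::array<uint8_t, 3>, 2>
encode14BitCC(uint8_t channel, uint8_t ccMsb, uint16_t value) {
    const uint8_t status = 0xB0 | (channel & 0x0F); // control change on ch 0-15
    const uint8_t msb = (value >> 7) & 0x7F;
    const uint8_t lsb = value & 0x7F;
    return {{{status, ccMsb, msb},
             {status, static_cast<uint8_t>(ccMsb + 32), lsb}}};
}
```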

g.moro
2020-07-23 15:04
I really feel you need some bespoke or at least finely tuned sound generators to take advantage of these gestures. The Osmose is doing it; I see it as more of an instrument (it includes the Haken sound engine) than a generic "controller"

travis.west
2020-07-23 15:04
I don't remember off the top of my head. Let me check the usage data and see: http://traviswest.ca/making_mappings/#activity_data

cesare.ferrari
2020-07-23 15:06
The problem with FFT and low latency is that most of the time the runtime of algorithms is quoted amortised - so basically the figures give you "cost per sample". However, an algorithm that buffers samples and then does an FFT once per FFT block will cause a massive spike in computational cost for that realtime audio block.
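To put illustrative numbers on it: with 64-sample blocks and a 1024-point FFT computed in one go, 15 out of every 16 blocks carry no FFT cost at all and the 16th carries all of it, so the worst-case block costs roughly 16 times the amortised figure.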

travis.west
2020-07-23 15:07
No, it doesn't look like anyone went completely nuclear at any point and started over completely. This participant got rid of most of their connections at one point, but not all of them: http://traviswest.ca/making_mappings/activity_data/P7.timeline.html

g.moro
2020-07-23 15:07
MIDI 2.0 has per-note onset "articulation" controls, but these are discrete classes, e.g.: "staccato", "sforzato", and are instantaneous descriptors for actions that have infinite possible temporal evolutions

g.moro
2020-07-23 15:07
working on it ...

marije
2020-07-23 15:08
@travis.west what kind of visual feedback did the participants get of the signals coming from the t-stick, as they used it? were they provided with tools to view the signals, whilst making the mappings?

a.mcpherson
2020-07-23 15:08
This video is amazing.

cesare.ferrari
2020-07-23 15:08
So, to resolve this you need an FFT that spreads the cost across samples so that the worst case is less bad :slightly_smiling_face: This algorithm will end up doing more work than the "do it in one go" approach, and may have greater latency, but with lower worst case cost

travis.west
2020-07-23 15:08
Thanks!

amelbye01
2020-07-23 15:08
It can get really loud in itself, so I wouldn't imagine feedback from the front would be a problem during performance. The few times I've played it in the wild it's been plenty loud.

julian
2020-07-23 15:09
I got really excited this year at NAMM and went to the MIDI 2.0 talk, expecting big news. But the talk was a summary of this https://www.midi.org/midi-articles/details-about-midi-2-0-midi-ci-profiles-and-property-exchange, and they did nothing to make the topic accessible to anyone. I was hoping MIDI 2.0 would become more user friendly instead of more obscure... Does anyone know a really good introduction?

julian
2020-07-23 15:10
That picture they keep referencing is not helping either :hidethepainharold:

cesare.ferrari
2020-07-23 15:11
Yes, you can use an extra thread to do this, but in Bela's case there is only one core, so you are basically pushing the processing into the downtime between audio callbacks, and there will be IPC, jitter and extra latency introduced. I'm not sure how bad - maybe not enough to be a problem. It's a relatively simple approach though, so maybe a good option

travis.west
2020-07-23 15:12
Yes, we used the multislider max/msp object to provide scrolling graphs of recent signals from the t-stick, as well as recent changes to the synthesis parameters.

marije
2020-07-23 15:12
ok

travis.west
2020-07-23 15:12
So they could see the sensor data from the instrument, and also the way that the synthesis parameters were responding to the mapping.

a.mcpherson
2020-07-23 15:13
Paper 100 in proceedings

alucas02
2020-07-23 15:16
Yes! Which is already part of the Logitech Adaptive Gaming kit! I don't think you can buy these separately yet.

amelbye01
2020-07-23 15:19
Thank you for the tip Cesare! There are some interesting solutions to look into here.

j.harrison
2020-07-23 15:20
Ha yep - I realised they already exist as soon as I hit send! Although I was thinking of something in the same form factor as the button but with the sensitivity, low latency and tight coupling to a physical model as we saw with Giulio's keyboard/flute instrument

robert.blazey1
2020-07-23 15:22
Great instrument, amazing video :slightly_smiling_face:

jmalloch
2020-07-23 15:24
@marije but are modal mappings actually necessary for acting on different levels of control? I would never argue that we should disallow modal mappings, but they shouldn't be the default solution.

marije
2020-07-23 15:26
I think they are quite natural to musical instruments: if you consider the use of the fingers on a guitar to select a chord, that puts the strings in a certain mode.

isabelaalmeida29
2020-07-23 15:26
*Awesome!!!!* :star-struck:

a.mcpherson
2020-07-23 15:31
I agree. It's difficult to have a generic "controller" with a high-bandwidth interaction. Whatever mappings you choose, the whole experience is so strongly dependent on the physical interface and the particulars of the sound generator. But then, the longer I spend on NIME stuff, the more I think that specific rather than generic is the way to go.

jmalloch
2020-07-23 15:32
Interesting - I would not call that mode-switching at all, but rather bimanual control within a single control mode.

mario.buoninfante
2020-07-23 15:34
@amelbye01 @halldorion I missed the live zoom presentation, but I saw the video on YouTube now. I just wanted to say, beautiful project! I'm looking forward to reading the paper as well. I'd really like to see the FAAB played live :slightly_smiling_face:

mario.buoninfante
2020-07-23 15:35
do you have any more links to share where I can see the instrument in action?

amelbye01
2020-07-23 15:47
Thank you! There's some more here: http://www.adampultz.com/augmenting-the-double-bass/ there will also be a performance of the FAAB tonight and hopefully I'll get to play it wherever you live soon!

jmalloch
2020-07-23 15:49
... *continuous* bimanual control at that, since the fretting hand can damp/bend/slide/scrape/pluck the strings...

mario.buoninfante
2020-07-23 15:50
Nice, thanks for sharing. Well I'm in the London area, and this is my email: :)

tragtenberg
2020-07-23 15:51
@a.mcpherson Maybe it is a musical bias similar to those you presented that programming languages have, and especially because MIDI has a long tradition of use. I have heard a lot of people saying that something sounds "midy", referring to 1990s synths that came with computer sound cards to play .mid files. MIDI affords all these continuous parameters of control as well, but it has a strong bias towards discrete onsets for playing notes. MPE is a nightmare to program - what we call in Brazil "gambiarra" (hard to explain, a bit like an improvised reuse of something made for another function). A note with pitch bend on different channels, where the pitch-bend range is defined elsewhere...
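Just to illustrate the gambiarra, a hypothetical sketch of what the receiver has to stitch together:
```
#include <cstdint>

// Hypothetical helper showing what an MPE receiver has to reassemble: the
// note number arrives in a note-on, the 14-bit bend on that note's own
// channel, and the bend range via an RPN set elsewhere (MPE suggests
// +/-48 semitones as the default for member channels).
float mpeNotePitch(uint8_t noteNumber,       // from the note-on on channel N
                   uint16_t bend14,          // 0..16383, centre 8192, channel N
                   float bendRangeSemitones) // from the RPN, e.g. 48.0f
{
    float bend = (static_cast<int>(bend14) - 8192) / 8192.0f * bendRangeSemitones;
    return noteNumber + bend;                // pitch in fractional MIDI note units
}
```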

tragtenberg
2020-07-23 15:53
Hope these things are made better in MIDI 2.0. I will look for that document, Julian, thank you. I haven't understood past the protocol negotiation. It would be a good NIME paper for next year?

robert.blazey1
2020-07-23 15:55
@g.moro just caught up on your presentation as I couldn't catch it live. Fantastic work! I've made a lot of sampler instruments from wind instrument recordings over the years, and while they can be nice as textural tools they have never sounded particularly realistic or expressive. This work really does a lot to solve this but also sounds like a really interesting and expressive instrument in its own right.

marije
2020-07-23 15:57
there are more examples. Jeff Carey was describing to me the use of sensors before and after launching the musical event: first using them to set the starting parameters, then launching the event, and remapping the same sensors to control different parameters as the event unfolds. This is a kind of modal control that makes a lot of sense.

osmith10
2020-07-23 16:01
Loving this video Laurel :joy: looking forward to seeing more of these!

g.moro
2020-07-23 16:19
thanks! there are a couple of (partly-overlapping) videos https://vimeo.com/364675614 https://vimeo.com/352530701 and some more details in my thesis http://instrumentslab.org/data/giulio/giulioMoroThesis.pdf (chapter 5 mostly)

g.moro
2020-07-23 16:21
Fun fact: the physical model in use here is the one from STK, which goes back to Perry Cook's 1992 model, with some minor modifications I added on top inspired by Valimaki's work. That is to say, the model itself is pretty simple, and it could be boring if you played it with automated envelopes as was done in the FAUST model I started from, but it comes to life with continuous control

laurel.pardue
2020-07-23 16:28
Thanks, glad you all enjoyed!

g.moro
2020-07-23 16:32
@robert.blazey1 I also think that "sustained" instruments such as woodwinds are fairly hard to keep interesting when sampled, as there is so much that can happen during the duration of the note. [The Mellotron was good at that because it had its own characteristic sound]

g.moro
2020-07-23 16:41
I am not sure how much technical material about MIDI 2.0 has already been published, but the specs can be found at https://www.midi.org/midi2 (we had an early look at them because we are in the MIDI Manufacturers Association). I don't think there is a lot that MIDI 2.0 can do today _in practice_, though support is coming to CoreMIDI (https://developer.apple.com/documentation/coremidi/?utm_source=ActiveCampaign&utm_medium=email&utm_content=The%20MIDI%20Message%20Newsletter%20July%202020&utm_campaign=The%20MIDI%20Message%20Newsletter%20July#topics) and the USB-IF MIDI 2.0 device class specs have been published (https://www.businesswire.com/news/home/20200716005214/en/USB-IF-Publishes-USB-Device-Class-Specification-MIDI). Protocol negotiation is kind of a distraction from the core 2.0 specs, as I understand it: it explains how devices can discover each other's capabilities, so that they can then use 2.0 specs for communication.

robert.blazey1
2020-07-23 16:41
@g.moro That's true. The best results I've had have worked a bit like the Mellotron, but with different recordings and loop lengths for each note, so that the texture of held chords evolves in interesting ways due to those aspects that happen in the note's duration (swells in volume, changes in tone etc.). But as I said, these are not at all expressive when played live, more a useful compositional texture

noris
2020-07-23 16:51
ahh.. this is exactly what I am faced with in a small-scale project that I am doing. There was no consensus achieved, and the mapping results were so broad (almost random). This made me wonder if it will ever be possible to reach some sort of agreement. Thanks for your feedback, I really appreciate it!

g.moro
2020-07-23 16:59
thanks @cesare.ferrari, still, having an extra thread on Bela makes long FFTs doable, and probably with less overhead than doing a partitioned one (both threads would be Xenomai, with the FFT one being pre-empted by the higher-priority low-latency audio thread). The latency may be at most one extra block in this case with respect to a partitioned approach.

g.moro
2020-07-23 17:01
I mean, the IPC jitter would be deterministically in the order of 30us, so lower than you'd see elsewhere

jmalloch
2020-07-23 17:32
I think many such modal conceptions of control can be redescribed/reimplemented as more complex non-modal mappings, but it's fine if a particular musician/interactor wants to think of it this way. I'm interested in finding mappings that must be modal _in concept_ rather than those that are simply control multiplexing because the musician ran out of sensors :slightly_smiling_face: For me, the example you gave above doesn't suffice: I would have to know *why* the same sensors must be used to control different parameters. Is it important that the performer use exactly the same movements/gestures/postures for different purposes at different points in the evolution of the sound?

marije
2020-07-23 17:37
bandwidth of the body - you have only so many fingers, arms, feet, and so on. And sometimes you need these for other things (like standing, moving around). Being able to turn continuous control off temporarily, or have an additional control to steer the sensor data, seems necessary then.

marije
2020-07-23 17:38
The need for this is also dependent on the musicking context: whether you play solo, or in an ensemble, music style, etc.

jmalloch
2020-07-23 17:45
Despite some past projects, I forgot about your interest in/emphasis on wearable instruments! I totally agree that it's important to stop controlling and this is probably best implemented as a mode for electronic instruments. The guitar analogue is putting down the instrument, which most people would not describe as a mode of control (but of course is if we're being technical).

jmalloch
2020-07-23 17:47
WRT bandwidth of the body: depending on context this is likely better than the cognitive bandwidth required for tracking multiplexed control & modal mappings...

jmalloch
2020-07-23 17:48
(thanks for the nice discussion btw @marije!)

marije
2020-07-23 18:16
I'm actually reflecting now also a lot based on the case studies I've been doing, so: what are artists doing in the wild :slightly_smiling_face:

sarasithiamnuai
2020-07-23 18:41
Also, some of the questions in the presentation are Christine's and some are mine, which I created for more specificity and to build on Christine's questions so that they can be utilized in non-music instrument design products as well.

sdbarton
2020-07-23 18:49
As you mentioned in your response, as a performer, I imagine one of the ways to get the most out of a system like this is to be able to feel the nuance as well as hear it. I'm thinking about electromechanical possibilities such as variable stiffness actuators...

g.moro
2020-07-23 18:53
see Cadoz's and Oboe's work: my impression is that it gets very expensive and not so reliable ... ```
Claude Cadoz, Leszek Lisowski, and Jean-Loup Florens. A modular feedback keyboard design. Computer Music Journal, 14(2):47-51, 1990.
Roberto Oboe. A multi-instrument, force-feedback keyboard. Computer Music Journal, 30(3):38-52, 2006.```

g.moro
2020-07-23 18:55
if you want to see what techniques people came up with during my studies, see 5.7 and 5.8 of my thesis http://instrumentslab.org/data/giulio/giulioMoroThesis.pdf Ultimately they found their personal way through the instrument's affordances in just a few hours

marije
2020-07-23 19:24
I guess we need a definition of what is considered modal control. In my mind it equates to a control that does not effect a change in a perceivable sound parameter directly, but affects how other controls affect sound parameters. In that sense the hand that plays the guitar frets has a dual function: initially it sets the mode for the string (its frequency to sound), but then it can be used to more subtly change the sound and add modulations. So it is a combination of functions in itself.
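A toy sketch of what I mean, with invented names - the mode input is never heard directly, it only decides what the other sensors do:
```
// Toy sketch of modal control: `mode` maps to no sound parameter itself;
// it only changes how the other inputs reach the sound. All names made up.
struct SynthParams { float pitch = 0, brightness = 0, grainDensity = 0; };

void applyMapping(int mode, float sensorA, float sensorB, SynthParams& p) {
    switch (mode) {
    case 0:                                // note-level mode
        p.pitch = 48.0f + sensorA * 24.0f; // sensorA picks a pitch
        p.brightness = sensorB;
        break;
    case 1:                                // texture-level mode: same gestures,
        p.grainDensity = sensorA * 100.0f; // different meaning
        p.brightness = sensorA * sensorB;
        break;
    }
}
```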

marije
2020-07-23 19:30
cognitive bandwidth has another interaction with training of the body - modal controls can also become part of the body memory. Jeff Carey is very slow in adding new modes to his instrument, as he does not want to overload his cognitive bandwidth, but after training, features can be added - or not, if they never get out of the cognitive mode.

sdbarton
2020-07-23 19:31
cool, thanks for sharing

tragtenberg
2020-07-23 19:33
Yes, it is centered on how to convince the big manufacturers of MIDI products that MIDI 1.0 devices won't become obsolete... I believe it was a hard discussion at the MMA, wasn't it?

g.moro
2020-07-23 19:43
I joined too late to see that part of the conversation really :slightly_smiling_face: It seems that MIDI 2.0 has been in the making for 15 years ...

tragtenberg
2020-07-23 19:48
yes, things seem to go at a slow pace at the MMA, but I really respect that. Musical instruments don't fit into the fast and furious programmed obsolescence of consumer products, so for the past to follow along, things have to go slowly indeed

g.moro
2020-07-23 20:02
I guess given how MIDI 1 always stayed at .0 and got market-wide adoption, they were put under a lot of pressure to release something that had to be as successful as that.

jmalloch
2020-07-23 20:14
In the context of mapping design, I assume that when we talk about different modes we are talking about "dynamic" mapping, in which the associations between sources and sinks are modified during some time window (for our purposes probably a piece of music or shorter). No change of mapping is necessary to implement the guitar example, so I'm not sure what benefit we gain from talking about fretting as "setting the mode for the string" rather than controlling a continuous (but detented) string length property.

marije
2020-07-23 20:21
perhaps the guitar example was not the best example... organ register settings are perhaps a better example.

marije
2020-07-23 21:14
with the guitar, the capo would be a more obvious mode change, to set the strings to a different base tuning; that is usually done between songs, not within.

marije
2020-07-23 21:14
anyway, good discussion. Food for thought on naming and language, and on the aesthetic perspective from which to design or think.

info041
2020-07-25 17:33
Great, seems a very appropriate and personalised touch to the instrument!