joe.wright
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

joe.wright
2020-07-18 00:23
@joe.wright set the channel purpose: Paper Session 6: AI / Digital Platforms / Web-Based

niccolo.granieri
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

hassan.hussain5
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

overdriverecording
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

lamberto.coccioli
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

jonathan.pearce
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

richard.j.c
2020-07-18 00:23
has joined #papers06-ai-digital-platforms-web-based

joe.wright
2020-07-18 12:14
@joe.wright has renamed the channel from "papers6-ai-digital-platforms-web-based" to "papers06-ai-digital-platforms-web-based"

eskimotion
2020-07-20 09:25
has joined #papers06-ai-digital-platforms-web-based

edmund.hunt
2020-07-20 09:25
has joined #papers06-ai-digital-platforms-web-based

acamci
2020-07-20 17:01
has joined #papers06-ai-digital-platforms-web-based

aaresty
2020-07-20 17:21
has joined #papers06-ai-digital-platforms-web-based

10068197
2020-07-20 17:21
has joined #papers06-ai-digital-platforms-web-based

a.nonnis
2020-07-20 17:22
has joined #papers06-ai-digital-platforms-web-based

a.macdonald
2020-07-20 17:23
has joined #papers06-ai-digital-platforms-web-based

andreas
2020-07-20 17:24
has joined #papers06-ai-digital-platforms-web-based

dianneverdonk
2020-07-20 17:25
has joined #papers06-ai-digital-platforms-web-based

likelian
2020-07-20 17:25
has joined #papers06-ai-digital-platforms-web-based

ko.chantelle
2020-07-20 17:25
has joined #papers06-ai-digital-platforms-web-based

anika.fuloria
2020-07-20 17:26
has joined #papers06-ai-digital-platforms-web-based

clemens.wegener
2020-07-20 17:26
has joined #papers06-ai-digital-platforms-web-based

jason.hockman
2020-07-22 16:23
Papers for this session:
• *Support System for Improvisational Ensemble Based on Long Short-term Memory Using Smartphone Sensor* (Paper 77) H. Takase and S. Shiramatsu
• *On Digital Platforms and AI for Music in the UK and China* (Paper 69) N. Bryan-Kinns, Z. Li and X. Sun
• *A Survey on The Uptake of Music AI Software* (Paper 95) S. Knotts and N. Collins
• *Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces* (Paper 76) K. Vasilakos, S. Wilson, T. McCauley, M.K. Mardakheh, E. Margetson and T.W. Yeung
• *Patch-corde: An Expressive Patch Cable for The Modular Synthesizer* (Paper 120) J. Wilbert, D. Haddad, H. Ishii and J.A. Paradiso

jason.hockman
2020-07-23 10:57
Immediately following this session will be lunch and the OHMI Drop-in: https://nime2020.bcu.ac.uk/ohmi/

niccolo.granieri
2020-07-23 11:26
*Live in 4 minutes with paper session 6 - see you there!*

niccolo.granieri
2020-07-23 11:27
Just in case we have issues with Zoom Captions, this is the link for the external captions: https://www.streamtext.net/player?event=NIME230720

niccolo.granieri
2020-07-23 11:31
we're live now!

h.takase
2020-07-23 11:32
This week I uploaded a demo video of the current version of the system to YouTube: https://www.youtube.com/watch?v=M2aXVLSY9ko&feature=youtu.be I'm posting the URL here on Slack, so please take a look!

jason.hockman
2020-07-23 11:44
Could you discuss the LSTM model in more detail? Perhaps the model architecture and relevant parameterisation?

tom.mitchell
2020-07-23 11:44
Hi @h.takase, thanks for the talk, really interesting - modern smartphones often incorporate barometers; I wonder if you have considered using one for estimating vertical position (altitude)?

julian
2020-07-23 11:44
What is the application of the Kinect in this performance system? Does it have the same function as the smartphone?

quinnjarvisholland
2020-07-23 11:44
Interesting graph of the two axes of rotation for detecting attacks. Do you think that has applications for other gyroscope/accelerometer device mappings?

quinnjarvisholland
2020-07-23 11:45
could be wrong, but it looks like it's augmenting the phone data with a height value?

marije
2020-07-23 11:48
Second paper presentation starts now!

marije
2020-07-23 11:49
And please mention the author's name when asking a question, for those who read back through the conversation later on! So we are now at @n.bryan-kinns

f.schroeder
2020-07-23 11:53
Is @jason.hockman muted?

a.r.jensenius
2020-07-23 11:54
Very interesting @n.bryan-kinns - do you have any ideas for follow-ups on this study?

h.takase
2020-07-23 11:55
I use the Kinect to collect teacher data for vertical motion estimation: the Kinect records the height of the hand holding the phone, and the vertical motion estimation LSTM model learns to estimate that height. The sum of the estimated height values between attack timings is used to determine vertical movement.

artem
2020-07-23 11:55
@n.bryan-kinns could you elaborate on adaptive music techniques? what's their purpose and mechanics?

a.r.jensenius
2020-07-23 11:56
And: in relation to other discussions we have had about the Western bias of a lot of NIMEs, it would be super-interesting to apply a similar approach to what you have done in other countries too. @n.bryan-kinns

jason.hockman
2020-07-23 11:57
Sorry all, my Zoom crashed.

h.takase
2020-07-23 11:58
I hadn't considered using a barometric sensor. We'll use your suggestion in our future research. Thank you!

fengjian113
2020-07-23 11:58
Very interesting. I'm wondering, is there a way to quantify the cultural aspects behind Chinese music in terms of performance technique, etc.?

artem
2020-07-23 11:59
@n.bryan-kinns so the music would be fundamentally synthetic?

artem
2020-07-23 12:02
super interesting, thank you!

marije
2020-07-23 12:03
next up @knotts.shelly !

n.bryan-kinns
2020-07-23 12:11
@knotts.shelly interesting that you found that participants did not think AI would take away their jobs, and that it is seen more as an opportunity - we found the same thing in our study in the UK and China (not reported in our paper). But our participant sample was self-selecting and had a strong interest in digital music and AI anyway.

sallyjane.norman
2020-07-23 12:12
@knotts.shelly what's your own take on the date question - i.e. when AI might attain human-level music making?

h.takase
2020-07-23 12:13
@quinnjarvisholland it's possible to use other methods. Could you tell me the names of other methods for gyroscope/accelerometer device mapping?

samuel.hunt
2020-07-23 12:15
@knotts.shelly Really great talk!

r.fiebrink
2020-07-23 12:16
@knotts.shelly very interesting, thanks for doing this work! Such a nice counterbalance to the overhyped and oversimplified narratives we see getting so much attention in the broader world.

hugo.scurto
2020-07-23 12:16
@knotts.shelly Thanks for the very interesting work! :slightly_smiling_face: I might have missed it at the beginning of your talk, but what definition of « music AI » were you using in the survey? Are we specifically discussing machine learning models for mapping or music generation? Or do we include systems with autonomous decision-making abilities, such as shallow classifiers, or even rule-based software (which might be put into AI to some extent)?

f.morreale
2020-07-23 12:17
Nice job @knotts.shelly! But maybe the mostly positive attitude has to do with self-selection of participants? Those who decided to go ahead and fill in the survey are probably more attracted to music AI than those who did not?

emmafrid
2020-07-23 12:18
Thank you for an interesting presentation @knotts.shelly! :smile: We published a paper focusing on AI tools for making music in the context of video creation at CHI this year: https://dl.acm.org/doi/abs/10.1145/3313831.3376514 Some of the questions that you asked in your survey were similar to the ones that we asked in our surveys :slightly_smiling_face:

sallyjane.norman
2020-07-23 12:18
@knotts.shelly It's not a silly question and thanks for a great answer!

knotts.shelly
2020-07-23 12:20
oh great to know. i'll check it out!

h.takase
2020-07-23 12:20
@jason.hockman LSTM is suitable for estimating time-series data such as music and text because it has a layer that retains past output. In fact, in our results LSTM gives better accuracy than a Bayesian network in both attack timing estimation and vertical motion estimation. I hope that answers your question.

marije
2020-07-23 12:20
Perhaps the question is when computers will start musicking by themselves; only then can we consider it AI creativity. As long as it is humans functionally applying AI applications in their musicking pursuits, the creativity is still with the human.

knotts.shelly
2020-07-23 12:21
we didn't define AI at all - for the reasons i just explained. The software use section listed a number of software packages that either claim to use AI (commercial software) or are programming tools with AI libraries or potential.

marije
2020-07-23 12:21
Fourth presentation is running now! @konstantinos.vasilako

sallyjane.norman
2020-07-23 12:21
@marije Shouldn't we be putting the question to the AI systems? That might really tell us how smart they are!

hugo.scurto
2020-07-23 12:22
@knotts.shelly thanks for your answer :pray:

knotts.shelly
2020-07-23 12:22
yes, that's an excellent point. actually the wording was "independent musical AI" so that was the implication.

knotts.shelly
2020-07-23 12:22
thank you!

konstantinos.vasilako
2020-07-23 12:24
a trailer of this project can also be found here: https://www.youtube.com/watch?v=U2aDudtCiY4

knotts.shelly
2020-07-23 12:24
the positive vs. negative responses were fairly evenly split. there were lots of people who made fairly grumpy comments about the state of AI in music making in the "any other comments" field, so i don't think people necessarily de-selected due to not liking AI.

f.morreale
2020-07-23 12:26
That's true, somebody anti-AI could just as easily have decided to take the survey!

hugo.scurto
2020-07-23 12:26
(copy-pasting my message here) thanks for your answer!! :)

quinnjarvisholland
2020-07-23 12:26
I don't really have names for the methods. I've just mapped "pitch" and "yaw" (x and z rotation) to pitch as well as to filter cutoff on a synthesizer. My team's project is here: https://github.com/pccadaptiveinstrumentsteam/PCC-Adaptive-Instruments-Project <--- I like the inclusion of the Kinect; we also could not get a good altitude reading without augmenting the sensor data with something like that.
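
For illustration, here is a minimal Python sketch of that kind of orientation-to-synth mapping. It is hypothetical, not the code from the project linked above: it linearly rescales pitch/yaw angles in degrees to an oscillator frequency and a filter cutoff, and all ranges and names are illustrative assumptions.
```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def map_orientation(pitch_deg, yaw_deg):
    """Map device pitch/yaw (x/z rotation) to hypothetical synth parameters."""
    freq = scale(pitch_deg, -90.0, 90.0, 110.0, 880.0)     # oscillator frequency (Hz)
    cutoff = scale(yaw_deg, -180.0, 180.0, 200.0, 8000.0)  # filter cutoff (Hz)
    return freq, cutoff

# Example: phone tilted up 45 degrees and rotated 90 degrees
print(map_orientation(45.0, 90.0))  # -> (687.5, 6050.0)
```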

knotts.shelly
2020-07-23 12:26
yes i think we comment briefly in the discussion section of the paper on the need for AIs that can engage in creative dialogue :slightly_smiling_face:

marije
2020-07-23 12:29
@konstantinos.vasilako Interesting! Since the data is not coming in in real time (I guess?), I am wondering whether the interface gives some insight into the time dynamics of the data, as that might influence the mapping choices.

marije
2020-07-23 12:32
ok, so it is about sonifying each event, and then aurally distinguishing different events by the mappings chosen.

tom.mitchell
2020-07-23 12:32
Hi @konstantinos.vasilako, really interesting project - I really like the idea of doing live sonification design, and live coding is a good way to do this. In the work I've done with scientists there is sometimes some skepticism over the merit of sonification. How did the scientists at CERN respond to this work? Can they use it? Did they like it?

s.d.wilson
2020-07-23 12:33
Yes, basically @marije. There's an idea of an active collision event, which can be swapped while sonifying. So you can compare different events with the same mapping.

marije
2020-07-23 12:34
Next paper is starting! Last one of the session.

s.d.wilson
2020-07-23 12:35
@tom.mitchell They were firstly very happy that we were using the data, as a lot of art projects there are more "in response to" the physics than data-driven. Maurizio was very happy and said that we "got it", which was very gratifying. We did discuss ways in which it might be usable for them, but we haven't yet got to that.

konstantinos.vasilako
2020-07-23 12:37
@tom.mitchell the responses we have got so far are that it helps them think about this data a little outside the box of how they are used to approaching it, so yes, I can say we got positive responses.

marije
2020-07-23 12:38
ok, cool!

x
2020-07-23 12:42
this is rad!

manolimoriaty
2020-07-23 12:42
This is such a great idea!

x
2020-07-23 12:43
*QUESTION*: Is it necessary to have a decoder module? What happens when you use the patch cable 'as-is' between different sorts of modules?

michael.lyons
2020-07-23 12:46
QUESTION: I may have missed it: did you mention whether or not there is hysteresis?

c.kiefer
2020-07-23 12:46
QUESTION: How durable is the cable? Does the conductive material maintain the same properties in the long term?

julian
2020-07-23 12:47
in my experience they tend to rip quite easily. We discussed the same question yesterday as well: https://nime2020.slack.com/archives/C017H5QEY3W/p1595428289086200

info041
2020-07-23 12:49
Could it perhaps be confusing in performance, with all the other wires, to pull on and manipulate the wrong one? Could the cable be made slightly differently, e.g. in a different colour from the other ones?

harri.renney
2020-07-23 12:49
Got to do it right; I can imagine yanking on one too hard and breaking something XD

sallyjane.norman
2020-07-23 12:50
thank you all!

marije
2020-07-23 12:50
For those who run off now: check the installations! https://nime2020.bcu.ac.uk/installations/

m.barthet
2020-07-23 12:50
expressive controller! Patch-corde!

vincze
2020-07-23 12:51
@konstantinos.vasilako - hi, I really liked your project, and it also reminded me of a project I presented in 2018: https://www.youtube.com/watch?v=JUSO-6ykuIo - it was based on sonification of data from CERN particle collisions as well. We also scaled the data arbitrarily to various ranges that later controlled all parameters (freq, del. time, amount of feedback, …). From what I see here, you also converted the data rather freely so that it suits artistic needs, which is great. My only criticism of myself was that I wish I had found some underlying connection, because, even though the data was taken from the particle collision, it could (hypothetically) have been a series of random numbers. My question is, why was this particular collision data important to you, did you notice some underlying connection, or what have you done in order to validate the use of this data as opposed to some random numbers in the process of sonification? THANKS :slightly_smiling_face:

joe.wright
2020-07-23 12:52
Thanks for your patience with the technical issues; we have limited capacity to run over, as our hard-working captioners need time to rest in between the discussions. Please keep your questions coming. And thanks to all the presenters for another really interesting session!

abi
2020-07-23 12:53
Thank you captioners! :trophy:

s.d.wilson
2020-07-23 12:55
@vincze I'm sure @konstantinos.vasilako will have much to add as well, but it's a general problem with sonification, often at odds with the conceit, which the media and public seem so keen to endorse, that you're hearing the "sound" of something. That said, it is of course possible to have good and bad sonifications, just as you can have good and bad visualisations. We mean this in the sense that the sonification brings out or clarifies salient aspects of the data and its character. To the extent that we succeed at this, the result is in some sense meaningful, and not random. Hope that makes sense.

s.d.wilson
2020-07-23 12:56
Your project looks very interesting, btw!

v.zappi
2020-07-23 12:58
I guess the need for the extra electronics is to allow for the use of the full voltage range modules expect/support [to fix impedance and source signal level]. And probably the dual supply configuration was chosen so this tech can also modulate audio signals running between modules, rather than only CVs. But I am looking forward to hearing from the authors!

konstantinos.vasilako
2020-07-23 12:59
@vincze Yeah, I see your point that it could also be a random/chaotic noise generator doing the live feeding. However, what we noticed after several performances was that the data carried noticeable patterns between events, which allowed one to map it to specific parameters appropriate to their control signal, e.g., some events worked nicely for freq or modFreq, while other events provided a different character. On top of that, what was important for the collectiveness of the ensemble and for the creation of coherence was that everyone uses the same raw data input, so this works as a common starting point for everyone.
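
As a toy illustration of that kind of free scaling (not the ensemble's actual live-coding setup, which uses SuperCollider-style tools): a sketch that rescales one hypothetical per-event attribute, e.g. particle transverse momenta, onto an audible frequency range for sonification. All values and names here are invented for the example.
```python
def rescale(values, out_lo, out_hi):
    """Map a list of raw event values onto the range [out_lo, out_hi]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [out_lo + (v - lo) / span * (out_hi - out_lo) for v in values]

# Hypothetical event: transverse momenta (GeV) of particles in one collision
event_pt = [0.7, 3.2, 12.5, 45.0, 1.1]
freqs = rescale(event_pt, 110.0, 1760.0)  # map onto an audible range (Hz)
print([round(f, 1) for f in freqs])
```
Swapping in a different event with the same mapping, as described above, then lets you compare events aurally.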

h.takase
2020-07-23 13:02
@quinnjarvisholland We also made a system that lets you change the pitch with the angle of your smartphone and produce sound by touch. However, it is difficult to directly calculate the vertical motion of the smartphone from its sensors while shaking it, so in this study we tried to improve the accuracy of the vertical motion estimation by training an LSTM using the vertical motion of the smartphone obtained from the Kinect as teacher data.
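
To make the teacher-data idea concrete, here is a minimal sketch, under assumed data shapes rather than the authors' actual pipeline: windows of phone sensor frames become the inputs, and the Kinect-measured hand height at the end of each window becomes the supervision target.
```python
import numpy as np

# Hypothetical shapes: phone sensor log (N, 6) = accel xyz + gyro xyz,
# and Kinect hand heights (N,) resampled to the same timestamps.
def make_training_pairs(sensor_log, kinect_height, window=50):
    """Build supervised (X, y) pairs: a window of sensor frames -> hand height."""
    X, y = [], []
    for t in range(window, len(sensor_log)):
        X.append(sensor_log[t - window:t])   # past `window` sensor frames
        y.append(kinect_height[t])           # Kinect height as teacher signal
    return np.array(X), np.array(y)

# Example with random stand-in data
sensors = np.random.randn(1000, 6)
heights = np.random.rand(1000)
X, y = make_training_pairs(sensors, heights)
print(X.shape, y.shape)  # (950, 50, 6) (950,)
```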

vincze
2020-07-23 13:07
@s.d.wilson and @konstantinos.vasilako - thank you guys for your answers. As Scott pointed out, it is a general question when doing these kinds of sonifications/visualisations what one is to do with abstract data, especially when it is complex. One can always assume that there must be some patterns if the data (as in your case) is not random, but I guess the trick is to notice that and have it work in your favour. I really like your work; I'm generally very curious about these kinds of projects that bind particle physics with art/sound, because the physical models are often so abstract and crazy that they border on art practice anyway. For us, the unifying element was a 15 m long string symbolising the idea of string theory, while the data was driving the whole installation. Cool getting to know your work; wish I could have been there. Any future performances planned?

konstantinos.vasilako
2020-07-23 13:08
@vincze Thanks! We are performing DarkMatter on Saturday.

jason.hockman
2020-07-23 13:09
@h.takase Many thanks for your reply. Are you using the cell independently or within a layer of a network (e.g., RNN)?

vincze
2020-07-23 13:09
@konstantinos.vasilako - oh great, as a part of NIME, virtually? What time?

konstantinos.vasilako
2020-07-23 13:12
@vincze Yeah, we will attempt to make a telematic version of it this Saturday, in the *22:00-22:50 (UTC+3)* session.

konstantinos.vasilako
2020-07-23 13:13
(UTC + 3)

s.d.wilson
2020-07-23 13:19
The live streaming set is here: https://youtu.be/4C8E559Pc30

s.d.wilson
2020-07-23 13:19
Also have a short prerecorded version in Music 4 tomorrow: https://www.youtube.com/watch?v=m0A3C1imhhw&feature=youtu.be

vincze
2020-07-23 13:27
@konstantinos.vasilako & @s.d.wilson / Thank you guys, good luck with the performance, looking forward!

s.d.wilson
2020-07-23 13:28
Cheers!

tom.mitchell
2020-07-23 13:33
Very cool - I'll stay tuned…

h.takase
2020-07-23 13:51
@jason.hockman We are using the basic LSTM layer from Keras. Since LSTM is an extension of the RNN, the memory cells are contained in the network layer. The difference between an RNN and LSTM is that an RNN simply keeps data up to the present, whereas LSTM has a forget gate to select which past data to keep. A plain RNN suffers from vanishing gradients when all the past data is used for the current training, but LSTM avoids this problem thanks to the forget gate.
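
For readers following along, a minimal sketch of that kind of Keras model, with assumed dimensions (50-frame windows of 6 sensor channels, one regression output such as estimated hand height), not the paper's exact architecture or hyperparameters:
```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Hypothetical dimensions: windows of 50 sensor frames, 6 channels each,
# regressing a single value (e.g., estimated hand height).
model = Sequential([
    LSTM(32, input_shape=(50, 6)),  # recurrent layer with forget/input/output gates
    Dense(1),                       # single regression output
])
model.compile(optimizer="adam", loss="mse")

# Train on stand-in data shaped like the phone + Kinect pairs described above
X = np.random.randn(950, 50, 6)
y = np.random.rand(950)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```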

vincze
2020-07-25 11:20
Oh, thank you so much! Your performance is tonight, right?