joe.wright
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

joe.wright
2020-07-18 12:13
@joe.wright set the channel purpose: Paper Session 10: Collaborations / Digital Audio

niccolo.granieri
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

hassan.hussain5
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

overdriverecording
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

lamberto.coccioli
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

jonathan.pearce
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

richard.j.c
2020-07-18 12:13
has joined #papers10-collaborations-digital-audio

eskimotion
2020-07-20 09:25
has joined #papers10-collaborations-digital-audio

edmund.hunt
2020-07-20 09:25
has joined #papers10-collaborations-digital-audio

acamci
2020-07-20 17:01
has joined #papers10-collaborations-digital-audio

aaresty
2020-07-20 17:21
has joined #papers10-collaborations-digital-audio

10068197
2020-07-20 17:21
has joined #papers10-collaborations-digital-audio

a.nonnis
2020-07-20 17:22
has joined #papers10-collaborations-digital-audio

a.macdonald
2020-07-20 17:23
has joined #papers10-collaborations-digital-audio

andreas
2020-07-20 17:24
has joined #papers10-collaborations-digital-audio

dianneverdonk
2020-07-20 17:25
has joined #papers10-collaborations-digital-audio

likelian
2020-07-20 17:25
has joined #papers10-collaborations-digital-audio

ko.chantelle
2020-07-20 17:25
has joined #papers10-collaborations-digital-audio

anika.fuloria
2020-07-20 17:26
has joined #papers10-collaborations-digital-audio

clemens.wegener
2020-07-20 17:26
has joined #papers10-collaborations-digital-audio

simon.hall
2020-07-23 08:30
*Session 10 Papers in Proceedings:*
*Kiyu Nishida, Kazuhiro Jo* _Modules for analog synthesizers using Aloe Vera biomemristor_ *pdfs/nime2020_paper18.pdf*
*Corey J Ford, Chris Nash* _An Iterative Design 'by proxy' Method for Developing Educational Music Interfaces_ *pdfs/nime2020_paper53.pdf*
*Florent Berthaut, Luke Dahl* _Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras_ *pdfs/nime2020_paper15.pdf*
*Harri L Renney, Tom Mitchell, Benedict Gaster* _There and Back Again: The Practicality of GPU Accelerated Digital Audio_ *pdfs/nime2020_paper39.pdf*
*Filipe Calegario, Marcelo Wanderley, João Tragtenberg, Eduardo Meneses, Johnty Wang, John Sullivan, Ivan Franco, Mathias S Kirkegaard, Mathias Bredholt, Josh Rohs* _Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes_ *pdfs/nime2020_paper54.pdf*

hassan.hussain5
2020-07-24 11:20
10 mins to go until paper session 10: *collaborations/digital-audio* begins *Zoom:* https://us04web.zoom.us/j/78638902810?pwd=R29NRFkwaFp6UjE0MjhyMEw3eXU5dz09 *YouTube*: https://www.youtube.com/playlist?list=PLz8WNY_I2S5Se9_Q8NbLLAk3FnZkAecQe

joe.wright
2020-07-24 11:32
@knishida20 and @jo are kicking off this next session with their paper on aloe vera biomemristors for analog synths now!

hassan.hussain5
2020-07-24 11:33
don't forget: when asking a question in response to a paper, please indicate in your message to which paper presentation you are responding, either by mentioning the title of the paper or using the @ to direct it to the presenter. This will make it easier for people to follow the presentations and the Q&A later (due to being in different time zones).

g.moro
2020-07-24 11:35
and please keep replies to questions in a thread!

lukedahl
2020-07-24 11:42
@knishida20 @jo Very cool work! Does Aloe Vera react differently than other plants?

v.zappi
2020-07-24 11:43
@knishida20, the plots on the scope reminded me of some of the differences that are visible between digital and analog oscillators' output. Have you considered using this system to give some kind of "analog touch" to waveforms generated digitally?

satvik.venkatesh
2020-07-24 11:44
@knishida20 Thank you for the great presentation! Could you explain more about the spiking dynamics? How long does it take to settle down and what is the latency during sound generation?

tom.mitchell
2020-07-24 11:44
@knishida20 @jo does it make a difference when you water or don't water the plant?

corey2.ford
2020-07-24 11:46
@lamberto.coccioli sorry for the typo, whoops

lamberto.coccioli
2020-07-24 11:49
No worries!

benedict.gaster
2020-07-24 11:51
@corey2.ford interesting project. One thing I have noticed with Scratch when introducing programming to my children is that they have often found it difficult to move to text-based programming languages, but at the same time found Scratch frustrating at some point. A solution that has worked well for a couple of them is a move to a fantasy console (e.g. PICO-8 or TIC-80), which fully supports text-based programming but, like Scratch, is constrained. Do you foresee similar issues when users transition on from Codetta?

marije
2020-07-24 11:53
second presentation is ongoing @corey2.ford

lamberto.coccioli
2020-07-24 11:54
@corey2.ford You make several initial assumptions on how to compose music, favouring certain dimensions and parameters. What were these assumptions based on? Were the participants involved in choosing them?

x
2020-07-24 11:55
@corey2.ford ^^ Echo

x
2020-07-24 11:55
great work!

noamlederman
2020-07-24 11:56
@corey2.ford Great idea, which other features would you like to include in future designs? Have you considered adding a drum machine feature? a harmonisation tool?

a.martelloni
2020-07-24 11:56
@corey2.ford Great stuff, I wish I had that at 7 :smile: What was your thinking behind the choice of the ribbon headings (Time, Dynamics, etc...)? Have you considered using pictures there? I'm assuming children won't really have working knowledge of these terms in music.

alucas02
2020-07-24 11:56
@corey2.ford Were any accessibility considerations raised during user testing?

john.m.bowers
2020-07-24 11:56
Yes, I thought this too. Aloe Vera for warm distortion?!

jmalloch
2020-07-24 11:56
Thanks for the presentation @corey2.ford Could you explain some of the other processing blocks that are available (other than tempo increments)? ... any random processes?

alucas02
2020-07-24 11:59
Thanks!

john.m.bowers
2020-07-24 12:07
@florent.berthaut @lukedahl Although different in many ways, you might like this paper in how it grapples with some of the same high-level issues (mixture of copresence and remoteness, heterogeneity, supporting musicians' mutual awareness). https://www.researchgate.net/publication/266658331_Musical_MESHWORKS_From_networked_performance_to_cultures_of_exchange

robert.blazey1
2020-07-24 12:09
@florent.berthaut @lukedahl Very nice system. Do you think it would be possible to patch in pre-existing PD instruments or even interface external hardware?

juan.jpma
2020-07-24 12:10
@florent.berthaut @lukedahl do you think that non-musical collaborative phenomena studied with ethnomethodological methods and conversational analysis might also inform your work in collaborative musical interfaces?

jmalloch
2020-07-24 12:10
Thanks @florent.berthaut and @lukedahl. Sorry if I missed this, but were your participants experienced with Pd? Any thought about supporting users of other languages & environments?

harri.renney
2020-07-24 12:11
So cool!

elblaus
2020-07-24 12:11
Would you have a PDF to share of that paper? The link only allows me to "Request full text", not download it.

marije
2020-07-24 12:11
@florent.berthaut @lukedahl How far do you think you addressed the heterogeneity of digital orchestras? At first glance it seems quite homogeneous


john.m.bowers
2020-07-24 12:12
:+1:

john.m.bowers
2020-07-24 12:14
And my best wishes to all at KTH! Worked there off and on from 1992 to 2004 (or thereabouts).

marije
2020-07-24 12:16
@florent.berthaut @lukedahl for reference, perhaps the work we did in the past on the SenseWorld DataNetwork might be informative: Sharing Data in Collaborative, Interactive Performances: the SenseWorld DataNetwork Marije A.J. Baalman, Harry C. Smoak, Christopher L. Salter, Joseph Malloch, and Marcelo M. Wanderley | Conference paper | 2009 _New Interfaces for Musical Expression (NIME) 2009, Pittsburgh, USA, June 4-7, 2009_

lukedahl
2020-07-24 12:17
@robert.blazey1 Our documentation (which is still in progress) provides some simple examples of converting an instrument into bf-pd.

elblaus
2020-07-24 12:17
On behalf of all of us, thanks! I think I met you there around Vygandas' PhD defence as well, must have been 2017 or so?

elblaus
2020-07-24 12:18
Anyway, have a good NIME, cheers!

florent.berthaut
2020-07-24 12:18
Thank you @john.m.bowers for the reference, it will definitely be useful for integrating distant instruments in the boeuf orchestras

abi
2020-07-24 12:19
This is super useful Marije, thanks :slightly_smiling_face:

juan.jpma
2020-07-24 12:20
@lukedahl @florent.berthaut re ethno studies in music, there are a couple of interesting papers from the Mixed Reality Lab. This one received an honourable mention at CHI 2019: https://dl.acm.org/doi/10.1145/3290605.3300706 _Juan Pablo Martinez Avila, Chris Greenhalgh, Adrian Hazzard, Steve Benford, and Alan Chamberlain. 2019. Encumbered Interaction: a Study of Musicians Preparing to Perform. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, Paper 476, 1-13. DOI:https://doi.org/10.1145/3290605.3300706_

lukedahl
2020-07-24 12:21
@florent.berthaut @lukedahl I neglected to mention: we have _not_ yet released the beta of Bf-Pd! We plan to very soon, and will email the NIME list. We'd love for more people to use it and give us feedback.

lukedahl
2020-07-24 12:23
Thanks. I love the title!

marije
2020-07-24 12:24
At some point I should write down reflections since then...

florent.berthaut
2020-07-24 12:25
Hi @marije, thank you, the Orchestra toolbox and SenseWorld were definitely an inspiration for us!

hofmann-alex
2020-07-24 12:26
@harri.renney great to see your practice experience with GPUs, I wonder how you had to optimize the physical model code to run on the GPU?

c.kiefer
2020-07-24 12:26
@harri.renney can you comment on asymmetry in GPU buses? It seems like it's often very slow to read back a buffer compared to sending it to the GPU

a.martelloni
2020-07-24 12:26
@harri.renney would using OpenGL (as opposed to CL) be worth investigating? I believe @v.zappi used it in a physical modelling instrument by hijacking the vertex shader (??) model (my GPU programming skills are non-existent)...

marije
2020-07-24 12:27
Since then I also wrote a very simple tool to enable sharing OSC data between more than two collaborators - to make it easy to move from one-to-one communication to shared communication between any number of collaborators. https://www.nescivi.eu/projects/xosc.html

a.mcpherson
2020-07-24 12:27
@harri.renney Nice comprehensive study! Can you talk more about the jitter measurements? Presumably, in buffered digital audio, jitter is only a question of having the CPU idling while waiting for the GPU results to become available -- it wouldn't actually manifest in a variable latency in the audio output.

florent.berthaut
2020-07-24 12:28
Thanks @juan.jpma for the reference !

john.m.bowers
2020-07-24 12:28
In addition to my own work (!!!), which I (im)modestly sent to Luke in a DM, the locus classicus for ethno in music would be David Sudnow's The Ways of the Hand, in case you do not know it - a study of his own development as an improvising jazz pianist. While the topic is a little different, the way in which the analysis is done is characteristic.

x
2020-07-24 12:28
@harri.renney Is this going to become an app where we can flip a switch on the mac and get more power during use of something like logic?

jmalloch
2020-07-24 12:29
Florent and Luke already know this, but the developers of Ossia (http://ossia.io) and libmapper (http://libmapper.org) are actively working on advancing tools for intermedia mapping & scripting. Just putting the links here for others.

hofmann-alex
2020-07-24 12:29
Thanks for your reply! Is your code available somewhere? It would be nice to discuss this further... I would be interested in combining VR and physical modelling, which might require GPUs for the sound synthesis part. Pretty much the domain of your paper. Great work! :slightly_smiling_face:

tragtenberg
2020-07-24 12:29
@harri.renney Did you measure jitter as the difference between two subsequent latency measurements or the overall maximum deviation of all the latency measurements?

v.zappi
2020-07-24 12:31
The authors actually ran a study similar to the one you are mentioning, check "OpenCL vs: Accelerated Finite-Difference Digital Synthesis" https://dl.acm.org/doi/pdf/10.1145/3318170.3318172?casa_token=TaeWyFKNLAMAAAAA:qpCjxurM5_Fmfuh4URVWuP1HOJxk22uTj3QIBNRnoQYo418WlZ7zkLmHknYQQ8rZoVf2kWuKcOZf

x
2020-07-24 12:31
siiiiiiiiick!!!

michael.lyons
2020-07-24 12:32
Cool!

tom.mitchell
2020-07-24 12:32
@harri.renney please add to this - yes, it's in the copy to and from the GPU. The worst-case jitter is added to your processing time, and together these need to meet your audio callback deadlines. So in practice the jitter will only manifest in an under-run, if at all.

elblaus
2020-07-24 12:32
Wow, this looks really impressive.

harri.renney
2020-07-24 12:34
Hi. The jitter was measured as the difference between the latencies of two buffers sent to the GPU. This was measured for all buffers sent to the GPU, but only the maximum jitter measurement was recorded and shown in the results. This was done to take a worst-case approach.
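The worst-case measurement described here can be sketched as follows (a hypothetical illustration with made-up latency numbers, not the authors' benchmark code):

```python
# Per-buffer GPU round-trip latencies in milliseconds (made-up example values).
latencies_ms = [1.32, 1.29, 1.41, 1.30, 1.95, 1.33, 1.31]

# Jitter = difference in latency between two consecutive buffers;
# the maximum over all pairs is kept as the worst case.
jitter_ms = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
worst_case_jitter_ms = max(jitter_ms)

# For glitch-free playback, worst-case latency plus jitter must fit within
# the audio callback period, e.g. 128 frames at 44.1 kHz ~= 2.9 ms.
buffer_period_ms = 128 / 44100 * 1000
meets_deadline = max(latencies_ms) + worst_case_jitter_ms < buffer_period_ms
```

In the paper the per-buffer timings come from the benchmark suite; here they are only placeholders to show the arithmetic of the metric.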

a.mcpherson
2020-07-24 12:35
Makes sense, thanks

florent.berthaut
2020-07-24 12:36
@marije Very cool, i'll definitely have a look, it seems that it would work very well as a base layer on which more advanced modes of collaboration can be built. We are using a more distributed approach for now for connecting instruments but it also has some drawbacks.

harri.renney
2020-07-24 12:37
Amazing! Yes, we should discuss this further. Your work sounds like an interesting use for physical models :)

marije
2020-07-24 12:37
I think that the nice thing is that xosc could be distributed as well, as different instances can talk to each other over the network.

benedict.gaster
2020-07-24 12:38
great talk

marije
2020-07-24 12:39
We are now at @fcac btw.

marije
2020-07-24 12:39
@fcac looks like a great toolkit!

f.morreale
2020-07-24 12:40
Great job @fcac I love the new version of your Probatio

a.r.jensenius
2020-07-24 12:41
Fantastic work @fcac! Will there be a speaker block as well?

dianneverdonk
2020-07-24 12:41
@fcac and @tragtenberg and Marcelo Wanderley awesome project! This is what I'm looking for (and designing, so it might save a lot of work! :D)....

abi
2020-07-24 12:41
Amazing presentation @fcac!

x
2020-07-24 12:41
:+1::+1::+1::+1::+1::+1::+1::+1::+1:

a.mcpherson
2020-07-24 12:41
@fcac Nice to see work documenting the long-term evolution and iteration of instruments and toolkits. Great presentation!

michael.lyons
2020-07-24 12:41
@fcac nice presentation!

dianneverdonk
2020-07-24 12:42
@fcac Maybe I missed this, but is it already available for printing somewhere??

alucas02
2020-07-24 12:42
Great work! Have you considered moving toward an embedded version of Probatio?

corey2.ford
2020-07-24 12:42
Great work; blocks are great!

x
2020-07-24 12:42
QUESTION: you mentioned it was created in 2016 - has the project stagnated, or is there further dev support ongoing - do you have financial backing?

dianneverdonk
2020-07-24 12:42
Thanks

x
2020-07-24 12:42
(this is not an offer)

x
2020-07-24 12:43
(but definitely interested in other kinds of support)

harri.renney
2020-07-24 12:43
Hi Chris. It's a good question! I would like to have investigated and covered this in the paper, but the page limit meant I could only cover a few things. From my experience, I do remember the readback being slightly slower, but not considerably. I will take a look at the results, see if there is a difference, and let you know what I can find :)

juan.jpma
2020-07-24 12:43
@fcac @tragtenberg has this project allowed your local community to engage more in DMI design?

dianneverdonk
2020-07-24 12:44
@fcac do you also ship the already printed modules?

dianneverdonk
2020-07-24 12:44
or planning to?

marije
2020-07-24 12:46
And check out the wonderful installations! https://nime2020.bcu.ac.uk/installations/

jmalloch
2020-07-24 12:46
Perhaps a Bela module? :slightly_smiling_face:

g.moro
2020-07-24 12:46
come on in if you are interested in teaching/learning programming and interactive audio with Bela (or if you already are!)

fcac
2020-07-24 12:47
Hello, if you would like to see more information about Probatio, here is the website: https://probat.io

alicee
2020-07-24 12:47
:eyes::+1:

elblaus
2020-07-24 12:47
Really great work!

c.kiefer
2020-07-24 12:48
thanks for looking into this. I had most success with GPU realtime audio on a Raspberry Pi, which probably reflects your results on SBCs with tight memory integration. On other systems, using gl_readpixels() has seemed painfully slow, so it would be interesting to see your results. Maybe systems are optimised for gaming, where you don't need to read back data so often

corey2.ford
2020-07-24 12:48
@benedict.gaster As promised, here is a link to a paper comparing students' perceptions of block/text/hybrid modalities: https://doi.org/10.1016/j.ijcci.2018.04.005 . Another thought concerning fantasy consoles (which I haven't used, so forgive me if I'm wrong): it looks like they have some gamification features, which would advantageously motivate child self-learning. One big drawback of Codetta/Scratch is that the open-endedness deters children, who need some pedagogic scaffolding. :)

simon.hall
2020-07-24 12:49
Thanks to all the presenters and for some nice questions from delegates too. Really enjoyable session. :slightly_smiling_face:

tragtenberg
2020-07-24 12:50
All the current files are there, but we are still in the process of documenting it

fcac
2020-07-24 12:51
Hi @dianneverdonk, thank you for your question. In the future, we're planning to do so. But, right now, the main focus is on the replication documentation as an open source project. It would be perfect if you could check out http://github.com/probatio and give us feedback on the documentation.

corey2.ford
2020-07-24 12:52
@lamberto.coccioli @x Most of the initial assumptions were based on music pedagogies. For example, the note editor only allows major-scale notes, which was based on Orff Schulwerk: Orff would remove all non-pentatonic notes from a glockenspiel to enable child improvisation. Similarly, the tempo-processing blocks mean that pieces like 'Piano Phase' can be created with a small number of blocks. Teachers and children are then motivated by the grandiose piece they could create with little effort. See: https://www.youtube.com/watch?v=DVRoQlSkQ5s :slightly_smiling_face:!
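The phasing idea behind 'Piano Phase' that the tempo blocks enable can be sketched like this (a hedged illustration, not Codetta's implementation; the tempo-scaling function and BPM values are hypothetical):

```python
# The repeating 12-note pattern from Reich's 'Piano Phase' (note names
# used as simple placeholders for whatever a note block would hold).
pattern = ["E4", "F#4", "B4", "C#5", "D5", "F#4",
           "E4", "C#5", "B4", "F#4", "D5", "C#5"]

def note_at(step, bpm, base_bpm=72):
    """Note sounding at a given step of the base-tempo grid, for a voice
    whose tempo was scaled by a hypothetical tempo-changer block."""
    return pattern[int(step * bpm / base_bpm) % len(pattern)]

# Two voices start together; the slightly faster one drifts one note
# ahead every 12 base steps and realigns after 144 steps.
voice_a = [note_at(s, 72) for s in range(145)]
voice_b = [note_at(s, 78) for s in range(145)]
```

The point of the sketch is that a single tempo ratio (here 78/72) is enough to produce the whole phasing process, which matches the "grandiose piece with little effort" observation above.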

knishida20
2020-07-24 12:52
Thank you for your question. Prior research by Volkov et al. suggests that some plants, such as Mimosa pudica and the Venus flytrap, behave as memristors. In our study we used not only Aloe vera as a biomemristor but also Mimosa pudica. We focused on these plants mainly because you can easily find them in garden stores and they are not expensive. However, it was difficult to use Mimosa pudica because its stems and leaves are too small to attach electrodes to. Compared with Mimosa pudica, Aloe vera leaves are thick and it is easy to put electrodes on them. Therefore, we used Aloe vera. Here is a link to the prior research :grinning: https://www.researchgate.net/publication/260371087_Memristors_in_plants

harri.renney
2020-07-24 12:54
Oh! I see. It's possible OpenGL does things a little differently, which could make gl_readpixels() slower. As OpenCL was designed with back-and-forth transfers in mind, it may be more optimized for this. And like you say, GPUs are typically one-way devices for gaming etc., so it wouldn't be a surprise.

corey2.ford
2020-07-24 12:55
Mostly the child users have requested lots of different instruments, including drums. My current thinking is that a step-sequencer-styled design may work best in this case. As each block line currently only supports one voice, I haven't thought about how to incorporate harmonisation, but this is something I will defo consider for the future! :slightly_smiling_face: Hope this answers your question!

harri.renney
2020-07-24 12:56
We are actually interested in experimenting with the raspberry pi to run the physical models.

benedict.gaster
2020-07-24 12:58
@corey2.ford yes, I think it is indeed the case that the gamification helps. I think the ability to generate something that is not a huge leap away from what they are playing on a console and so on really helps. It would be a hard slog to get them into Unity and get the same results so quickly. To paraphrase Andrew McPherson from a video earlier this week, we want a tool that would be used by an expert, but accessible to a beginner. Many professional programmers and game designers play with PICO-8 and TIC-80, but it is still usable by my 7 year old. Of course, how they use it is very different: I quickly moved to using VSCode with TIC-80 rather than the not-so-easy builtin editor, I've updated colour tables with hex values, and so on, while my kids stay firmly within the constrained editor itself.

benedict.gaster
2020-07-24 12:58
@corey2.ford thanks for the link

corey2.ford
2020-07-24 12:58
@a.martelloni I split the tabs into Time, Dynamics etc. based partly on my composition tutor (Adrian Hull), who would talk about how the different elements of music could be reduced to just Time, Pitch + Dynamics. Supposedly he explored this in his PhD, although he hides it from people. :joy: Icons are a great idea; the student teacher collaborators loved seeing icons. Never occurred to me to put some in the toolbox tabs until now tho - thanks!

corey2.ford
2020-07-24 13:00
@alucas02 Hopefully I answered this in the Q and A. Briefly, however, I think there is room to improve on the colour scheme (with regard to colour blindness). All interaction is click-based also, so this could lend itself to eye-gaze instruments. :slightly_smiling_face:

vincze
2020-07-24 13:00
Hi, is this the current thread discussing BELA?

florent.berthaut
2020-07-24 13:01
Oh great ! I'll definitely try it out ! Thanks

tragtenberg
2020-07-24 13:01
no, for that go to #drop-ins-bela

corey2.ford
2020-07-24 13:01
@jmalloch The total list is Tempo Changer, Tempo Setter, Dynamics Changer, Dynamics Setter, Pitch Setter; and an array style pitch feature (now embedded into the end-repeat block). No random processes, but may be a good idea for scaffolding the children during ideation. :slightly_smiling_face:

corey2.ford
2020-07-24 13:05
I agree, the old "low threshold high ceiling" design principle springs to mind. Codetta at the moment is quite limited (it has a 'low ceiling'); the challenge is how to appropriately scaffold children's self-directed learning so that they are motivated to learn more advanced features :slightly_smiling_face:

a.martelloni
2020-07-24 13:05
My pleasure! Ribbons are hairy to get right, as Sibelius taught us all :D

corey2.ford
2020-07-24 13:05
@benedict.gaster Also on the staying constrained in the editor point, this is something that @chris.nash does with Manhattan, so that there is less distraction. :slightly_smiling_face:

harri.renney
2020-07-24 13:07
Yep! The best part about using OpenGL, as Victor has shown, is that the visualization comes much more cheaply/conveniently. We have used OpenCL interop with OpenGL but there is an overhead when working between the two APIs, even though they have tried to support this: https://software.intel.com/content/www/us/en/develop/articles/opencl-and-opengl-interoperability-tutorial.html

harri.renney
2020-07-24 13:09
Vulkan is another graphics API which, like OpenGL, can be used similarly for audio processing. We have experimented with this but I've not been happy with the results so far. You really need to know what you are doing in Vulkan to see the benefits.

lamberto.coccioli
2020-07-24 13:11
Thank you @corey2.ford. Looking at established pedagogies seems like a sensible approach, but I was wondering if - in what is at its heart a children-led design process - you were not tempted to start with a blank canvas and build from there.

harri.renney
2020-07-24 13:12
Hi :) Not exactly. The important point I should have mentioned when answering your question is that the GPU is only suitable for particular processes. Most physical models are highly parallelizable; however, many processes in digital audio are not, and won't benefit from this directly.

tragtenberg
2020-07-24 13:12
If you want to use it, collaborate on its development, or print your own Probatio version, subscribe to our mailing list: http://eepurl.com/c8IAab and we will keep you up to date!

fcac
2020-07-24 13:13
Thank you, @elblaus :smiley:

harri.renney
2020-07-24 13:15
This would be good to highlight more in the work to be fair, for people less familiar with the GPU architecture/convention.

knishida20
2020-07-24 13:15
Thank you for your question! We can get a spike whenever the applied voltage changes; there is no latency between the change of applied voltage and the generation of a spike. The magnitude of a spike is determined by the magnitude of the voltage that was applied before the change, its duration, and the magnitude of the change. We haven't measured the time a spike takes to settle down, but it seems to take more than 2 seconds (depending on the magnitude of the spike). Does my answer make sense?

corey2.ford
2020-07-24 13:16
I will keep you updated! My current work (soon to be assessed :crossed_fingers:) has empirically looked at how children use the system, which has found feature requests/design recommendations 'straight from the children's mouths'. Indeed, I acknowledge towards the end of the paper that a disadvantage of this approach is that you lose the benefits of an inductive 'blank canvas' approach. Definitely something I will consider with regard to further work! :)

tragtenberg
2020-07-24 13:17
you can also sign up to our mailing list: http://eepurl.com/c8IAab

harri.renney
2020-07-24 13:23
Thanks for the questions! Please message me if you want to know more:) We wrote a blog post which includes more results and discussion of the benchmark suite (Including source code) at the bottom of the post: https://muses-dmi.github.io/benchmarking/benchmarking_database_there_and_back_again/

lamberto.coccioli
2020-07-24 13:27
:+1:

fcac
2020-07-24 13:27
Hi @alucas02, thanks for the question. We had multiple discussions in this direction. We've some thoughts, but we're open to suggestions! =D

fcac
2020-07-24 13:29
One approach would be to have an output block that deals with sound synthesis.

knishida20
2020-07-24 13:29
Thank you for your question. That is a good point. Although we haven't tried it, I think it should make a difference. Aloe vera behaves as a memristor mainly because its cells work as memristors, and the state of a cell varies depending on many parameters such as water and light. So I would guess the behaviour changes.

corey2.ford
2020-07-24 13:33
Awesome session. Thank you again @simon.hall for hosting! If you want to follow Codetta, or keep in touch with me about it, you can do so here: http://codetta.codes/NIME2020/

benedict.gaster
2020-07-24 13:46
@corey2.ford thinking a bit about this and why I feel TIC-80 works, but audio software via the computer not so much for teaching. I think, and this is purely personal, TIC-80 is teaching programming via gaming, and supports the standard interfaces for playing games: keyboard and controller. So this fits with the users' external knowledge, but for music, sitting at a computer to learn could seem strange to children. In our house my partner plays our piano and guitar and I often play the ukulele or the OP-1, and the kids want access to them. The OP-1 has been a thing of joy for some of our kids; it's quick and easy to make beats and do strange and wonderful things. It feels like another excellent example of a tool that is accessible to experts and beginners alike, and achieves this by again being constrained.

tom.mitchell
2020-07-24 14:00
I like this! Slow interactions via osmosis!

jmalloch
2020-07-24 14:06
Thanks!

corey2.ford
2020-07-24 14:06
@benedict.gaster I think you have hit on one of the other points which Weintrop and Wilensky found (possibly in another paper). Some children actually prefer learning actual 'real world' tools; i.e., block programs are not what real developers use, so they'd actually rather use text-based tools. In this sense I suppose the OP-1 is more of a 'real instrument', with 'real' tactile feedback. Possibly this notion was best expressed by the developmental researcher Montessori - may provide some great thinking material! :slightly_smiling_face:

knishida20
2020-07-24 14:24
Thank you for your question! Did you mean that the nonlinear response to a sinusoidal wave reminds you of distortion? Actually, I hadn't considered that. In my research I made a module just for the analog modular synthesizer, but the idea of Aloe vera for warm distortion is very interesting and might be another possibility. One thing I have to mention is that Aloe vera only exhibits a non-linear response to waves whose frequency is less than 0.1 Hz. So it might not be easy to use it to give an "analog touch" to waveforms generated digitally, but I am interested in the idea!
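For intuition only, a generic textbook memristor model (HP-style linear dopant drift with assumed parameter values, not fitted to the Aloe vera data in the paper) reproduces this frequency dependence: at slow drive the internal state has time to move, giving a nonlinear waveshape, while at fast drive it barely changes:

```python
import math

def state_swing(freq_hz, steps=20000):
    """Drive a linear-drift memristor model with one sine cycle and
    return how far the internal state travels (range 0..1)."""
    r_on, r_off = 100.0, 16e3   # on/off resistances (ohms) -- assumed values
    k = 1e4                     # drift coefficient mu*R_on/D**2 -- assumed
    x = 0.5                     # normalised internal state, clipped to [0, 1]
    dt = 1.0 / (freq_hz * steps)
    lo = hi = x
    for n in range(steps):
        v = 0.5 * math.sin(2 * math.pi * freq_hz * n * dt)   # 0.5 V sine drive
        m = r_on * x + r_off * (1.0 - x)   # state-dependent resistance
        x = min(1.0, max(0.0, x + k * (v / m) * dt))
        lo, hi = min(lo, x), max(hi, x)
    return hi - lo

# The state (and therefore the waveshape) changes a lot at 0.1 Hz but
# hardly at all at 100 Hz -- the "nonlinear only below ~0.1 Hz" behaviour.
swing_slow = state_swing(0.1)
swing_fast = state_swing(100.0)
```

This is only a sketch of the mechanism; the plant's actual dynamics (spiking, settling over seconds) are richer than this simple model.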

v.zappi
2020-07-24 14:41
Interesting - maybe different organic materials display a non-linear response at audible frequencies?

tragtenberg
2020-07-24 14:46
@fcac has done some user testing with local musicians and he can talk more about the amazing feedback they gave. We have recently received funding that will let us put it in the hands of more local artists in a more creative/artistic context. We can't wait to have some more replications of it so more people can use it in an unassisted way.

alucas02
2020-07-24 15:21
Great, thanks!

alucas02
2020-07-24 15:22
Great, thanks!

dianneverdonk
2020-07-24 19:33
That's awesome, thank you both!! I'll look into it and try to remember to give you feedback. It's a really amazing project and idea, and like I said, I'm working on similar ideas but should first have a look at yours! Will keep you posted - it would be really nice to have a chat in the near future. Greetings from Utrecht, The Netherlands

knishida20
2020-07-25 00:27
Yeah, I have to try other organic materials! If I could find a material which displays a non-linear response at audible frequencies, I could reshape a waveform by interacting with the material, for example by changing the way the electrodes are inserted. So I think that is a very interesting possibility as well. Prior research by Volkov et al. suggests that some plants behave as memristors, such as Mimosa pudica, the Venus flytrap, and apple. So we would start by trying those materials!

v.zappi
2020-07-25 00:30
I am looking forward to ruining all those perfect digital waves and creating a modular system composed of 20 Belas!

knishida20
2020-07-25 00:31
I also like this idea; it opens up possibilities for installations! Though it might not be easy to experiment with, because the difference would be slight and there would be many aspects which make differences. Thank you for a good insight!

knishida20
2020-07-25 09:47
That's interesting! I thought I had to make our modules and a performance only with analog technology, because I thought the digital aspects might ruin an interesting point of the biomemristor. But your idea inspires me to explore the possibilities of a good collaboration between digital and analog technology. Thank you for a good insight!