I am currently in Stockholm carrying out a Short Term Scientific Mission (STSM) in the Speech, Music and Hearing group at KTH through the COST Action Sonic Interaction Design (SID). The main objective of the STSM is to prepare for some experiments on action-sound couplings that will be carried out in the SID project in the fall.
The first part of the SID experiments will involve studying how people move to sound, and the second part will look at how this knowledge can be used to create sound through movement. Before either can happen, however, we need a solution for recording data from all the input devices in a synchronized manner, and this is what I am currently working on. So far I have worked on the following (a couple of rough sketches follow after the list):
- Updating a number of Jamoma modules that will be used in the setup. This includes several of the input device modules (Wii, mouse, Wacom, video, etc.).
- Working on the FTM modules in Jamoma, particularly those related to SDIF/GDIF recording and playback.
- Making patches for recording, playback and analysis of data.
- Working on GDIF. Here I have been summarizing the low- and mid-level parameters for all the input devices to be used in the study, and I have also started implementing some of these as OSC namespaces in the Jamoma modules (see the sketch below).
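To give an idea of what such a namespace could look like, here is a minimal Python sketch using the python-osc library. The addresses, values, and port are made up for illustration; they are not the actual GDIF/Jamoma namespace, which is still being worked out.

```python
# Hypothetical example of streaming low-level ("raw") and derived
# mid-level ("cooked") device data as OSC messages. The address layout
# and port are placeholders, not the final GDIF namespace.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver

# Low-level data straight from the device
client.send_message("/wacom/raw/position/xy", [0.42, 0.77])
client.send_message("/wacom/raw/pressure", 0.35)

# Mid-level features derived from the raw stream
client.send_message("/wacom/cooked/velocity", 0.12)
client.send_message("/wacom/cooked/acceleration", -0.04)
```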
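And here is a rough sketch of the synchronization idea mentioned above: stamping every incoming sample with its offset from a shared clock, so that the different streams can be aligned afterwards. The device names and the plain-text log format are placeholders; in the actual setup the data will go into SDIF/GDIF files.

```python
# Minimal sketch: time-stamp samples from different devices against one
# shared clock so the recorded streams can be aligned later.
import time

T0 = time.monotonic()  # common reference clock for all streams

def record(logfile, device, address, values):
    """Write one sample with its offset from the shared start time."""
    t = time.monotonic() - T0
    logfile.write(f"{t:.6f}\t{device}\t{address}\t{values}\n")

with open("session.log", "w") as log:
    record(log, "wii", "/wii/accel/xyz", [0.12, -0.20, 0.98])
    record(log, "mouse", "/mouse/position/xy", [512, 384])
```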
The first week of my STSM went quickly, and luckily I have another week to wrap things up. Hopefully, I will have a fully working prototype setup early next week so that I will have time to do a small pilot study with some users.