GDIF recording and playback

Kristian Nymoen has updated the Jamoma modules for recording and playing back GDIF data in Max 5. The modules are based on the FTM library (beta 12; betas 13-15 do not work) and can be downloaded here. We have also made three use cases available in the (soon to be expanded) fourMs database: simple mouse recording, sound saber, and a short piano example. See the video below for a quick demonstration of how it works:
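
For readers without Max and FTM, here is a minimal sketch of the same idea in Python: recording time-stamped GDIF messages arriving over OSC to a text file, and replaying them later. The python-osc library, the port numbers, and the line-based file format are my own assumptions and are not part of the Jamoma modules:

```python
# Minimal sketch of GDIF recording and playback over OSC.
# Assumptions (not from the Jamoma modules): the python-osc
# library, the port numbers, and a simple line-based file format.
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient


def record(filename="gdif_recording.txt", port=7400, duration=10.0):
    """Log every incoming OSC message with a relative timestamp."""
    start = time.time()
    with open(filename, "w") as log:

        def handler(address, *args):
            # One message per line: <time> <osc-address> <values...>
            values = " ".join(str(a) for a in args)
            log.write(f"{time.time() - start:.4f} {address} {values}\n")

        dispatcher = Dispatcher()
        dispatcher.set_default_handler(handler)  # catch all /gdif/... messages
        server = BlockingOSCUDPServer(("127.0.0.1", port), dispatcher)
        server.timeout = 0.1  # so the loop below can check the clock
        while time.time() - start < duration:
            server.handle_request()


def playback(filename="gdif_recording.txt", host="127.0.0.1", port=7401):
    """Replay a recorded stream with its original timing."""
    client = SimpleUDPClient(host, port)
    start = time.time()
    with open(filename) as log:
        for line in log:
            t, address, *values = line.split()
            time.sleep(max(0.0, float(t) - (time.time() - start)))
            client.send_message(address, [float(v) for v in values])
```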

July 3, 2010 · 1 min · 74 words · ARJ

Papers at ICMC 2008

Last week I was in Belfast for the International Computer Music Conference (ICMC 2008). The conference was hosted by SARC, and it was great to finally be able to see (and hear!) the Sonic Lab they have installed in their new building. I was involved in two papers, the first one being a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF), written by Tim Place, Trond Lossius, Nils Peters and myself....

September 4, 2008 · 1 min · 211 words · ARJ

Janer's dissertation

I had a quick read of Jordi Janer’s dissertation today: Singing-Driven Interfaces for Sound Synthesizers. The dissertation presents a good overview of various voice analysis techniques and suggests several ways of using the voice as a controller for synthesis. I am particularly interested in his suggestion of a GDIF namespace for structuring parameters for voice control:

/gdif/instrumental/excitation/loudness x
/gdif/instrumental/modulation/pitch x
/gdif/instrumental/modulation/formants x1 x2
/gdif/instrumental/modulation/breathiness x
/gdif/instrumental/selection/phoneticclass x...
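
As a quick illustration of how a voice analysis front end might stream parameters to this namespace, here is a hypothetical sketch in Python using the python-osc library (the dissertation itself targets Max-style environments); the host, port, and parameter values are made up:

```python
# Hypothetical sketch: streaming voice-control parameters to
# Janer's proposed GDIF namespace over OSC, using python-osc.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # assumed host and port

# One frame of (made-up) analysis values for the addresses above.
client.send_message("/gdif/instrumental/excitation/loudness", 0.8)
client.send_message("/gdif/instrumental/modulation/pitch", 220.0)
client.send_message("/gdif/instrumental/modulation/formants", [700.0, 1200.0])
client.send_message("/gdif/instrumental/modulation/breathiness", 0.3)
client.send_message("/gdif/instrumental/selection/phoneticclass", "vowel")
```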

May 23, 2008 · 1 min · 130 words · ARJ

Some thoughts on GDIF

We had a meeting about GDIF at McGill yesterday, and I realised that people had very different ideas about what it is and what it can be used for. While GDIF is certainly intended to formalise the way we code movement and gesture information for realtime use in NIMEs using OSC, it is also supposed to be used for offline analysis. I think the best way of doing this is to have a three-level approach, as sketched here:...
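
Since the sketch itself is not shown here, the following is only my own hypothetical illustration of how such a layered namespace might look in OSC, with raw device data, preprocessed ("cooked") data, and higher-level descriptors kept in separate branches; the level names and addresses are illustrative, not a fixed GDIF specification:

```python
# Hypothetical illustration of a three-level GDIF namespace:
# the same movement data coded at increasing levels of abstraction.
# Level names and addresses are illustrative, not a specification.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # assumed host and port

# Level 1: raw device data, exactly as it comes from the sensor.
client.send_message("/gdif/raw/mouse/position", [312, 480])

# Level 2: preprocessed ("cooked") data, e.g. normalised and smoothed.
client.send_message("/gdif/cooked/mouse/position", [0.24, 0.63])
client.send_message("/gdif/cooked/mouse/velocity", 0.12)

# Level 3: higher-level descriptors from realtime or offline analysis.
client.send_message("/gdif/descriptor/gesture/quantity-of-motion", 0.05)
```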

February 20, 2007 · 2 min · 224 words · ARJ

NIME paper on GDIF

Here is the poster I presented at NIME 2006 in Paris based on the paper Towards a Gesture Description Interchange Format. The paper was written together with Tellef Kvifte, and the abstract reads: This paper presents our need for a Gesture Description Interchange Format (GDIF) for storing, retrieving and sharing information about music-related gestures. Ideally, it should be possible to store all sorts of data from various commercial and custom-made controllers, motion capture and computer vision systems, as well as results from different types of gesture analysis, in a coherent and consistent way....

July 5, 2006 · 1 min · 139 words · ARJ

ICMC papers

My paper entitled “Using motiongrams in the study of musical gestures” was accepted to ICMC 06 in New Orleans. The abstract is: Navigating through hours of video material is often time-consuming, and it is similarly difficult to create good visualizations of musical gestures in such material. Traditional displays of time-sampled video frames are not particularly useful when studying single-shot studio recordings, since they present a series of still images and very little movement-related information....
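
For those curious about the technique itself, here is a minimal sketch of one way to compute a motiongram with OpenCV and NumPy: each frame-difference image is collapsed along its horizontal axis into a single column, and the columns are stacked over time. Filtering and normalisation are greatly simplified compared to the paper:

```python
# Minimal motiongram sketch (simplified compared to the paper):
# collapse each frame-difference image to a single column and
# stack the columns over time. Assumes OpenCV and NumPy.
import cv2
import numpy as np


def motiongram(path, threshold=10):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion = cv2.absdiff(gray, prev)      # motion image
        motion[motion < threshold] = 0        # crude noise reduction
        columns.append(motion.mean(axis=1))   # average each row -> one column
        prev = gray
    cap.release()

    mg = np.column_stack(columns)             # height x time image
    return (255 * mg / max(mg.max(), 1)).astype(np.uint8)


# Usage: cv2.imwrite("motiongram.png", motiongram("recording.mp4"))
```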

June 21, 2006 · 1 min · 213 words · ARJ

NIME 06 - IRCAM - Paris

I also recently learned that two papers I co-authored have been accepted to NIME in Paris. One is called “Towards a Coherent Terminology and Model of Instrument Description and Design” and the other “Towards a Gesture Description Interchange Format”. The idea in the latter is to develop a set of gestural descriptors into a GDIF to match the Sound Description Interchange Format (SDIF), which has been around for some years....

March 24, 2006 · 2 min · 274 words · ARJ