Jamoma 0.5 released

After extensive testing, Jamoma 0.5 is finally released. Even though the version number is low, this release has been in the works for around 18 months and is actively used in both teaching and performance. What is Jamoma? A platform for interactive art-based research and performance. It consists of several parallel development efforts: Jamoma Modular - a structured approach to the development and control of modules in the graphical media environment Max. Jamoma DSP - an object-oriented, reflective application programming interface for C++, with an emphasis on real-time signal processing....

November 11, 2009 · 1 min · 194 words · ARJ

Testing control of CataRT from video analysis

I am working with Victoria Johnson on a piece involving movement in physical and sonic space. Here is a screenshot of a patch where I use the analysis output of some of my video modules from Jamoma to control the cursor navigating the 2D space in CataRT. The video camera hangs from the ceiling, which makes it possible for Victoria to explore sounds “spread out” on the floor. For one, CataRT is an amazing tool (thanks to Diemo for sharing it!...
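As a rough sketch of the control path described above, here is how one might map an overhead-camera position to a unit-square cursor position and encode it as an OSC message. Note that the address `/catart/xy` and the coordinate conventions are my assumptions for illustration; CataRT's actual OSC namespace may differ.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, x: float, y: float) -> bytes:
    # Encode a minimal OSC message with two float32 arguments
    return osc_pad(address.encode()) + osc_pad(b",ff") + struct.pack(">ff", x, y)

def camera_to_unit(px, py, width, height):
    # Normalise pixel coordinates from the overhead camera to a unit
    # square for 2D navigation. The y axis is flipped, since image
    # rows grow downwards while the navigation space grows upwards.
    return px / width, 1.0 - py / height

# Hypothetical usage: track a performer's position and send it on
x, y = camera_to_unit(320, 120, 640, 480)
msg = osc_message("/catart/xy", x, y)
# msg can now be sent over UDP to the patch, e.g. with socket.sendto()
```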

October 7, 2009 · 1 min · 96 words · ARJ

STSM at KTH

I am currently in Stockholm carrying out a Short Term Scientific Mission (STSM) in the Speech, Music and Hearing group at KTH through the COST Action Sonic Interaction Design (SID). The main objective of the STSM is to prepare some experiments on action-sound couplings that will be carried out in the SID project in the fall. The first part of the SID experiments will involve studying how people move to sound, and the second part will look at how this knowledge can be used to create sound through movement....

June 26, 2009 · 2 min · 264 words · ARJ

Updated software

I was at the Musical Body conference at the University of London last week and presented my work on the visualisation of music-related movements. For my PhD I developed the Musical Gestures Toolbox as a collection of components and modules for Max/MSP/Jitter, and most of this has since been merged into Jamoma. However, many potential users are not familiar with Max, so over the last couple of years I have been developing standalone applications for some of the main tasks....

April 27, 2009 · 1 min · 194 words · ARJ

Three workshops in a row

The last few weeks have been quite busy here in Oslo. We opened the new lab just about a month ago, and since then I have organised several workshops, guest lectures and concerts at both UiO and NMH. I was planning to post some longer descriptions of what has been going on, but decided to go for a summary instead. First we had a workshop originally called the embedded systems workshop, which I have retroactively renamed the RaPMIC workshop (Rapid Prototyping of Music Instruments and Controllers)....

October 28, 2008 · 2 min · 359 words · ARJ

Some thoughts on data signal processing in Max

We are having a Jamoma workshop at the fourMs lab this week. Most of the time is being spent on making Jamoma 0.5 stable, but we are also discussing some other issues. Throughout these discussions, particularly about how to handle multichannel audio in Max, I have realised that we should also start thinking about data signals as a type in their own right. Jamoma, like Max, is currently split into three different “types” of modules and processing: control, audio and video....

October 23, 2008 · 3 min · 563 words · ARJ

Papers at ICMC 2008

Last week I was in Belfast for the International Computer Music Conference (ICMC 2008). The conference was hosted by SARC, and it was great to finally be able to see (and hear!) the Sonic Lab that they have installed in their new building. I was involved in two papers, the first one being a Jamoma-related paper called “Flexible Control of Composite Parameters in Max/MSP” (PDF) written by Tim Place, Trond Lossius, Nils Peters and myself....

September 4, 2008 · 1 min · 211 words · ARJ

NIME paper

A group of Jamoma developers presented a paper suggesting an extension to OSC at this year’s NIME in Genova two weeks ago. Reference: Place, T., Lossius, T., Jensenius, A. R., Peters, N. and Baltazar, P. (2008). Proceedings of the 2008 International Conference on New Interfaces for Musical Expression, 5-7 June 2008, Genova. Downloads: Full paper, Poster. Abstract: An approach for creating structured Open Sound Control (OSC) messages by separating the addressing of node values and node properties is suggested....

June 16, 2008 · 1 min · 127 words · ARJ

Motiongrams sync'ed to spectrograms

One of my reasons for developing motiongrams was to have a way of visualising movement that would be compatible with spectrograms. That way it would be possible to study how movement evolves over time in relation to how the audio changes over time. In my current implementation of motiongrams in Max/MSP/Jitter (and partially in EyesWeb), there has been no way to synchronise with a spectrogram. The problem was that the built-in spectrogram in Max/MSP was running much faster than the motiongram, and they were therefore out of sync from the start....
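For readers without Max, the core idea behind a motiongram can be sketched in a few lines of NumPy. This is a simplified illustration of the technique, not the Jitter implementation; the threshold value and the choice of collapsing over image width are assumptions for the example.

```python
import numpy as np

def motiongram(frames, threshold=0.1):
    """frames: array of shape (T, H, W), grayscale video with values in [0, 1]."""
    # 1. Frame differencing isolates the moving parts of the image
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    # 2. A simple threshold suppresses sensor noise
    diffs[diffs < threshold] = 0.0
    # 3. Collapse each motion image to a single column by averaging over
    #    the width; stacking the columns over time gives the motiongram,
    #    shape (H, T-1): rows = image rows, columns = time
    return diffs.mean(axis=2).T
```

The resulting matrix can be displayed next to a spectrogram of the same recording, with the shared time axis making movement and sound directly comparable.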

June 11, 2008 · 2 min · 244 words · ARJ

NIME Jamoma workshop

Some pictures from our Jamoma workshop after NIME: Pascal showing the ramping and mapping magic in Jamoma. Tim showing that Jamoma is soon to be working in Max 5.

June 8, 2008 · 1 min · 33 words · ARJ