Synchronisation is a core issue when carrying out research on multimodal sensing/acting and multimedia. My approach to this has been through my work on GDIF (Gesture Description Interchange Format), and we are currently implementing a GDIF/SDIF recorder/player using FTM for Max/MSP (see our ICMC 2008 paper for more on this).
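The recorder/player itself is built in FTM for Max/MSP, so there is no text code to show here, but the core synchronisation idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration (all names are mine, not from our implementation): every incoming event, whether a motion sample, an annotation or an OSC message, is stamped with its offset from a common clock at record time, and playback sleeps between events so the original relative timing of the streams is preserved.

```python
import time


class Recorder:
    """Stamp each incoming event with its offset from a common clock.

    Hypothetical sketch only; a real recorder (e.g. one writing
    GDIF/SDIF frames) would also store stream/type information.
    """

    def __init__(self):
        self.t0 = time.monotonic()   # common reference clock
        self.events = []             # list of (offset, payload) pairs

    def on_event(self, payload):
        # Called whenever a message arrives on any of the streams.
        self.events.append((time.monotonic() - self.t0, payload))


def play(events, send):
    """Replay recorded events, preserving their relative timing.

    `send` is whatever output callback the host environment
    provides (printing, an OSC sender, etc.).
    """
    t0 = time.monotonic()
    for offset, payload in events:
        # Sleep until this event's original offset has elapsed.
        delay = offset - (time.monotonic() - t0)
        if delay > 0:
            time.sleep(delay)
        send(payload)


if __name__ == "__main__":
    # Toy demonstration with a hand-made recording of three events.
    recording = [(0.0, "/gdif/hand/x 0.1"),
                 (0.5, "/gdif/hand/x 0.2"),
                 (1.0, "/gdif/hand/x 0.3")]
    play(recording, print)
```

The point of the common clock is that audio, video and control streams recorded against the same reference can be lined up again afterwards, even if they arrive at very different rates.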

I just came across a piece of software called Thought Conduit which promises synchronisation of audio, video, annotations and even OSC streams. This sounds very exciting, and I hope to be able to test it in practice at some point.