Today I will present the paper Non-Realtime Sonification of Motiongrams at the Sound and Music Computing Conference (SMC) in Stockholm. The paper is based on a new implementation of my sonomotiongram technique, optimised for non-realtime use. I presented a realtime version of the sonomotiongram technique at ACHI 2012 and a Kinect version, the Kinectofon, at NIME earlier this year. The new paper presents the ImageSonifyer application and a collection of videos showing how it works.
Title
Non-Realtime Sonification of Motiongrams
Links
- Paper (PDF)
- Poster (PDF)
- ImageSonifyer (Software) + source (@GitHub)
- Videos (@YouTube)
Abstract
The paper presents a non-realtime implementation of the sonomotiongram method, a method for the sonification of motiongrams. Motiongrams are spatiotemporal displays of motion from video recordings, based on frame-differencing and reduction of the original video recording. The sonomotiongram implementation presented in this paper is based on turning these visual displays of motion into sound using FFT filtering of noise sources. The paper presents the application ImageSonifyer, accompanied by video examples showing the possibilities of the sonomotiongram method for both analytic and creative applications.
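To give a rough idea of the pipeline the abstract describes, here is a minimal sketch of the two stages: building a motiongram by frame-differencing and reducing each motion image to a column, then sonifying each column by using it as a spectral envelope for FFT-filtered noise. This is an illustrative approximation in NumPy, not the ImageSonifyer implementation itself; the array shapes, the column-to-envelope mapping, and the low-rows-to-low-frequencies orientation are assumptions.

```python
import numpy as np

def motiongram(frames):
    """frames: (T, H, W) grayscale video, values in 0..1.
    Frame-differencing followed by reduction over width
    yields one motion column per frame pair -> (T-1, H)."""
    diff = np.abs(np.diff(frames, axis=0))  # motion images
    return diff.mean(axis=2)                # reduce each to a column

def sonify_column(column, n_fft=1024):
    """Filter white noise with the column used as a spectral envelope
    (assumed mapping: lower image rows -> lower frequencies)."""
    noise = np.random.randn(n_fft)
    spectrum = np.fft.rfft(noise)
    envelope = np.interp(np.linspace(0, 1, spectrum.size),
                         np.linspace(0, 1, column.size),
                         column[::-1])      # flip: image top = high freq
    return np.fft.irfft(spectrum * envelope, n=n_fft)

# Toy example: a bright pixel moving down through 3 frames of 8x8 video.
frames = np.zeros((3, 8, 8))
for t in range(3):
    frames[t, t + 2, 4] = 1.0
mg = motiongram(frames)                     # shape (2, 8)
audio = np.concatenate([sonify_column(c) for c in mg])
```

In a real implementation each column would correspond to a short audio segment whose spectral shape follows the vertical distribution of motion, so that scrubbing through the motiongram in time produces the sonification.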
Reference
Jensenius, A. R. (2013). Non-realtime sonification of motiongrams. In Proceedings of Sound and Music Computing, pages 500–505, Stockholm.
BibTeX
@inproceedings{Jensenius:2013f,
  Address = {Stockholm},
  Author = {Jensenius, Alexander Refsum},
  Booktitle = {Proceedings of Sound and Music Computing},
  Pages = {500--505},
  Title = {Non-Realtime Sonification of Motiongrams},
  Year = {2013}}