Native Instruments claims that KORE will be the new universal sound platform, solving “all problems” in large music software setups. In essence, it works as a generic host for plugins (VST and AU) that can itself be used inside sequencers, and it ships with a hardware controller to make hands-on control easier.

The argument is convincing and the pictures are nice, but this “new” product seems to only scratch the surface of the real problem. Controlling music software (and most hardware) is a nightmare because such systems are usually built from an engineering point of view, based on a hardware paradigm from decades back. That leaves us with hundreds of parameters and far too many onscreen buttons, sliders and fancy graphics. Adding another manual mapping layer into the chain might help a little, but it doesn’t solve the fundamental problem of how to control musical sound.

The ability to “search” your sounds is a nice feature, but since it relies on manual annotation it doesn’t help much once you add your own material, and it doesn’t account for the fact that 2+2 is seldom 4 when working with sound. What we really need is a perceptually and semantically meaningful mapping layer. Luckily, a lot of research is going on in this field today, and we can only hope that some of these ideas will find their way into future commercial systems.
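To make the idea of a perceptual mapping layer concrete, here is a minimal sketch: a single semantic control (say, “brightness” on a 0 to 1 scale) drives several low-level synthesis parameters at once, instead of exposing each parameter separately to the user. All parameter names, ranges and scalings here are invented for illustration; a real system would derive such mappings from perceptual research rather than hard-coded linear formulas.

```python
def map_brightness(brightness: float) -> dict:
    """Map a perceptual 'brightness' value in [0, 1] to a set of
    low-level synth parameters (hypothetical names and ranges)."""
    if not 0.0 <= brightness <= 1.0:
        raise ValueError("brightness must be in [0, 1]")
    return {
        # Filter cutoff in Hz: 200 Hz (dark) up to 12 kHz (bright)
        "filter_cutoff_hz": 200 + brightness * (12000 - 200),
        # Brighter settings get slightly more filter resonance
        "filter_resonance": 0.1 + 0.3 * brightness,
        # Brighter sounds are given faster amplitude attacks (ms)
        "amp_attack_ms": 50 - 40 * brightness,
    }

# One knob on a controller now moves three parameters coherently:
params = map_brightness(0.5)
```

The point of the sketch is the one-to-many mapping: a musically meaningful gesture fans out to the many engineering-level parameters the plugins actually expose.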