In computer music, many spatialization algorithms use a self-contained syntax and storage format, wherein control messages (e.g. trajectories to move a sound virtually) programmed for one application are incompatible with any other implementation.
This lack of standardization complicates the portability of compositions and requires manual synchronization and conversion of control data, a time-consuming affair. Incompatible data formats also prevent collaboration between researchers and institutions.
SpatDIF is a format that describes spatial sound information in a structured way, in order to support real-time and non-real-time applications. The format serves to describe, store and share spatial audio scenes across audio applications and concert venues.
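To illustrate the idea, here is a minimal Python sketch that formats a SpatDIF-style position message for a named source. The address pattern follows the published core namespace (`/spatdif/source/<name>/position`); the helper function itself is hypothetical and only illustrates how such descriptors can be generated, e.g. for streaming over OSC.

```python
def spatdif_position(source: str, x: float, y: float, z: float) -> str:
    """Format a core position descriptor for one named sound source.

    Illustrative helper only; not part of any official SpatDIF library.
    """
    return f"/spatdif/source/{source}/position {x} {y} {z}"

# Move a virtual sound source along three points of a trajectory.
trajectory = [(0.0, 1.0, 0.0), (0.5, 0.5, 0.0), (1.0, 0.0, 0.0)]
messages = [spatdif_position("voice1", *point) for point in trajectory]
print(messages[0])  # /spatdif/source/voice1/position 0.0 1.0 0.0
```

Because the descriptors are plain, application-neutral messages, the same scene data can be stored to a file for a fixed-media piece or streamed in real time to a renderer.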
After another round of meetings and discussions, we are ready to publish the new version of the SpatDIF specification. We feel that this is a major step forward in terms of clarity of structure and completeness of the CORE descriptors.
Download the specifications here.
In addition, we have prepared a set of example scenes, in both storage and streaming formats, that clearly demonstrate the core concepts of SpatDIF.
Download the examples here.
We are happy to announce that the paper "SpatDIF: Principles, Specification, and Examples" received the Best Paper Award of the 9th Sound and Music Computing Conference in Copenhagen, Denmark, July 12–14, 2012.
SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering and reproduction of spatial audio. A general overview of the specification is provided, and two use cases are discussed, exemplifying SpatDIF's potential for file-based pieces as well as real-time streaming of spatial audio information.
Download the paper here.