Open Sound Control


Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control. OSC's advantages include interoperability, accuracy, flexibility and enhanced organization and documentation. [1] Its disadvantages include inefficient coding of information, increased load on embedded processors, [2] and lack of standardized messages/interoperability. [3] [4] [5] The first specification was released in March 2002.


Motivation

OSC is a content format developed at CNMAT by Adrian Freed and Matt Wright, comparable to XML, WDDX, or JSON. [6] It was originally intended for sharing music performance data (gestures, parameters and note sequences) between musical instruments (especially electronic musical instruments such as synthesizers), computers, and other multimedia devices. OSC is sometimes used as an alternative to the 1983 MIDI standard when higher resolution and a richer parameter space are desired. OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. Between gestural controllers, OSC messages are usually carried over USB serial connections framed with the SLIP protocol.[citation needed]
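Since an OSC packet is just a byte string, transport is straightforward: over UDP the packet is sent as a single datagram, while over a serial link it must be framed so the receiver can find packet boundaries. As an illustration, here is a minimal sketch of SLIP framing (RFC 1055, double-END variant) in Python; the function name is ours, not part of any OSC library:

```python
# SLIP special bytes (RFC 1055)
SLIP_END, SLIP_ESC, SLIP_ESC_END, SLIP_ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_frame(packet: bytes) -> bytes:
    """Wrap one OSC packet for a serial link, escaping END/ESC bytes in the payload."""
    out = bytearray([SLIP_END])  # leading END flushes any line noise at the receiver
    for byte in packet:
        if byte == SLIP_END:
            out += bytes([SLIP_ESC, SLIP_ESC_END])
        elif byte == SLIP_ESC:
            out += bytes([SLIP_ESC, SLIP_ESC_ESC])
        else:
            out.append(byte)
    out.append(SLIP_END)  # trailing END marks the packet boundary
    return bytes(out)
```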

Features

OSC's main features, compared to MIDI, include an open-ended, URL-style symbolic naming scheme; symbolic and high-resolution numeric argument data; a pattern-matching language for specifying multiple recipients of a single message; high-resolution time tags; and "bundles" of messages whose effects must occur simultaneously. [1]

Applications

There are dozens of OSC applications, including real-time sound and media processing environments, web interactivity tools, software synthesizers, programming languages and hardware devices. OSC has achieved wide use in fields including musical expression, robotics, video performance interfaces, distributed music systems and inter-process communication.

The TUIO community standard for tangible interfaces, such as multitouch surfaces, is built on top of OSC. Similarly, the Gesture Description Interchange Format (GDIF) for representing gestures integrates OSC.

OSC is used extensively in experimental musical controllers, and has been built into several open source and commercial products.

The Open Sound World (OSW) music programming language is designed around OSC messaging. [7]

OSC is at the heart of the DSSI plugin API, an evolution of the LADSPA API: a plugin's graphical interface communicates with the plugin's core by sending OSC messages through the plugin host. LADSPA and DSSI are APIs dedicated to audio effects and software synthesizers.

In 2007, a standardized namespace within OSC called SYN was proposed for communication between controllers, synthesizers and hosts, but it has not been widely adopted. [3]

Notable software with OSC implementations includes Max/MSP, Pure Data, SuperCollider, and Csound; notable hardware includes the JazzMutant Lemur multi-touch controller and AudioCubes.

Design

OSC messages consist of an address pattern (such as /oscillator/4/frequency), a type tag string (such as ,fi for a float32 argument followed by an int32 argument), and the arguments themselves (which may include a time tag). [8] Address patterns form a hierarchical name space, reminiscent of a Unix filesystem path or a URL, and refer to "Methods" inside the server, which are invoked with the attached arguments. Type tag strings are a compact string representation of the argument types. Arguments are represented in binary form with four-byte alignment. The core types are 32-bit two's-complement integers (int32), 32-bit IEEE 754 floats (float32), null-terminated ASCII strings padded to a four-byte boundary (OSC-string), and arbitrary binary data with a size prefix (OSC-blob).

An example message is included in the spec (with null padding bytes represented by ␀): /oscillator/4/frequency␀,f␀␀, followed by the four-byte float32 representation of 440.0: 0x43dc0000. [9]
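The wire format is simple enough to assemble by hand. The following sketch reproduces the example above and checks its final four bytes; the helper names osc_string and osc_message are ours for illustration, not part of any standard library:

```python
import struct

def osc_string(s: str) -> bytes:
    """ASCII bytes, null-terminated, padded to a multiple of four bytes."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with float32 (f), int32 (i), and string (s) arguments."""
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, float):
            tags, payload = tags + "f", payload + struct.pack(">f", arg)
        elif isinstance(arg, int):
            tags, payload = tags + "i", payload + struct.pack(">i", arg)
        elif isinstance(arg, str):
            tags, payload = tags + "s", payload + osc_string(arg)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(arg)}")
    return osc_string(address) + osc_string(tags) + payload

msg = osc_message("/oscillator/4/frequency", 440.0)
assert msg.endswith(bytes.fromhex("43dc0000"))  # the float32 encoding of 440.0
```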

Messages may be combined into bundles, which themselves may be combined into bundles, etc. Each bundle contains a timestamp, which determines whether the server should respond immediately or at some point in the future. [8]
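A bundle can be sketched the same way: the OSC-string #bundle, an eight-byte NTP-format time tag (where the special value 1 means "immediately"), then each element preceded by its int32 size. The osc_timetag and osc_bundle helpers below are again illustrative names, and the sketch reuses osc_string and osc_message from the previous example:

```python
import struct, time
# requires osc_string and osc_message from the previous sketch

NTP_UNIX_OFFSET = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def osc_timetag(unix_time=None) -> bytes:
    """NTP-format time tag; None yields the special 'process immediately' value."""
    if unix_time is None:
        return struct.pack(">Q", 1)
    seconds = int(unix_time) + NTP_UNIX_OFFSET
    fraction = int((unix_time - int(unix_time)) * 2**32)
    return struct.pack(">II", seconds, fraction)

def osc_bundle(timetag: bytes, *elements: bytes) -> bytes:
    """Bundle pre-encoded messages (or nested bundles), each with an int32 size prefix."""
    body = b"".join(struct.pack(">i", len(e)) + e for e in elements)
    return osc_string("#bundle") + timetag + body

# Two messages scheduled for one second from now:
bundle = osc_bundle(osc_timetag(time.time() + 1.0),
                    osc_message("/oscillator/4/frequency", 440.0),
                    osc_message("/oscillator/4/amplitude", 0.5))
```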

Applications commonly extend this core set with additional types. Some of these extensions, such as a compact Boolean type, were later integrated into the required core types of OSC 1.1.

The advantages of OSC over MIDI are primarily internet connectivity, higher data-type resolution, and the comparative ease of specifying a symbolic path, as opposed to specifying all connections as seven-bit numbers with seven-bit or fourteen-bit data types. [8] However, this human readability comes at a cost: messages take more bytes to transmit and are more difficult to parse on embedded firmware. [2]
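To make that trade-off concrete, here is a rough size comparison between a three-byte MIDI control-change message and an OSC message carrying a comparable parameter update; the address /filter/cutoff is invented for the example, and osc_message is the helper sketched earlier:

```python
# MIDI 1.0 control change: status byte, 7-bit controller number, 7-bit value
midi_cc = bytes([0xB0, 0x4A, 0x40])            # 3 bytes total, value resolution 0-127
osc_msg = osc_message("/filter/cutoff", 0.5)   # symbolic address, 32-bit float value
print(len(midi_cc), len(osc_msg))              # 3 versus 24 bytes
```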

The specification does not define any particular OSC Methods or OSC Containers; all messages are implementation-defined and vary from server to server.
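In practice, a server simply maps the address patterns it cares about to handler functions. A minimal receiver using the third-party python-osc library might look like this (the address pattern and port are arbitrary examples):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_frequency(address, *args):
    """Invoked for every incoming message whose address matches the mapped pattern."""
    print(address, args)  # e.g. /oscillator/4/frequency (440.0,)

dispatcher = Dispatcher()
dispatcher.map("/oscillator/*/frequency", on_frequency)  # '*' matches any container

BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```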


References

  1. "Introduction to OSC". opensoundcontrol.org. 7 April 2021. Retrieved 11 September 2021.
  2. Fraietta, Angelo (2008). "Open Sound Control: Constraints and Limitations". doi:10.5281/zenodo.1179537. S2CID 5690441.
  3. "Home · fabb/SynOSCopy Wiki". GitHub. Retrieved 31 December 2022. "one of the reasons OSC has not replaced MIDI yet is that there is no connect-and-play … There is no standard namespace in OSC for interfacing e.g. a synth"
  4. Supper, Ben (24 October 2012). "We hate MIDI. We love MIDI". Focusrite Development. Retrieved 1 January 2023. "OSC suffers from a superset of this problem: it's anarchy, and deliberately so. The owners of the specification have been so eager to avoid imposing constraints upon it that it has become increasingly difficult for hardware to cope with it. … More severely, there is an interoperability problem. OSC lacks a defined namespace for even the most common musical exchanges, to the extent that one cannot use it to send Middle C from a sequencer to a synthesiser in a standardised manner"
  5. "OSC-Namespace and OSC-State: Schemata for Describing the Namespace and State of OSC-Enabled Systems" (PDF). "OSC also introduces new obstacles. First, since there is no fixed set of messages, each participating server needs to know what messages it can send to the servers it intends to communicate with. Currently the OSC standard does not provide for a means of programmatically discovering all messages a server responds to"
  6. "OpenSoundControl | CNMAT". cnmat.berkeley.edu. Retrieved 22 December 2019.
  7. "OSW Manual OpenSound Control (OSC)". osw.sourceforge.net. Retrieved 22 December 2019.
  8. Wright, Matt (26 March 2002). "The Open Sound Control 1.0 Specification". opensoundcontrol.org. Retrieved 22 December 2019.
  9. Wright, Matt (29 March 2002). "Examples Supporting the OpenSoundControl 1.0 Spec". opensoundcontrol.stanford.edu. Retrieved 1 January 2023.