Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control. OSC's advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation.[1] Its disadvantages include inefficient coding of information, increased load on embedded processors,[2] and lack of standardized messages/interoperability.[3][4][5] The first specification was released in March 2002.
OSC is a content format developed at CNMAT by Adrian Freed and Matt Wright, comparable to XML, WDDX, or JSON.[6] It was originally intended for sharing music performance data (gestures, parameters and note sequences) between musical instruments (especially electronic musical instruments such as synthesizers), computers, and other multimedia devices. OSC is sometimes used as an alternative to the 1983 MIDI standard when higher resolution and a richer parameter space are desired. OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. OSC messages between gestural controllers are usually transmitted over serial endpoints of USB, wrapped in the SLIP protocol.[citation needed]
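As an illustration of the UDP transport, the following Python sketch sends a minimal, pre-encoded OSC message over a UDP socket. The target host, port, and address path are hypothetical (57120 is the default listening port of some OSC servers, such as SuperCollider's sclang); the message-format details are described below.

```python
import socket

# "/test" padded with NULs to 8 bytes, then the empty type tag string ","
# padded to 4 bytes: a complete OSC message with no arguments.
message = b"/test\x00\x00\x00,\x00\x00\x00"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message, ("127.0.0.1", 57120))  # hypothetical host and port
sock.close()
```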
OSC's main features, compared to MIDI, include:[1]
- Open-ended, dynamic, URL-style symbolic naming scheme
- Symbolic and high-resolution numeric argument data
- Pattern matching language to specify multiple recipients of a single message
- High-resolution time tags
- "Bundles" of messages whose effects must occur simultaneously
- Query system to dynamically find out the capabilities of an OSC server and get documentation
There are dozens of OSC applications, including real-time sound and media processing environments, web interactivity tools, software synthesizers, programming languages and hardware devices. OSC has achieved wide use in fields including musical expression, robotics, video performance interfaces, distributed music systems and inter-process communication.
The TUIO community standard for tangible interfaces such as multi-touch surfaces is built on top of OSC. Similarly, the GDIF system for representing gestures integrates OSC.
OSC is used extensively in experimental musical controllers, and has been built into several open source and commercial products.
The Open Sound World (OSW) music programming language is designed around OSC messaging.[7]
OSC is at the heart of the DSSI plugin API, an evolution of the LADSPA API: it lets a plugin's graphical interface communicate with the plugin's core by sending messages through the plugin host. LADSPA and DSSI are APIs dedicated to audio effects and synthesizers.
In 2007, a standardized namespace within OSC called SYN, for communication between controllers, synthesizers and hosts, was proposed.
Notable software with OSC implementations includes Ardour, Rosegarden, LMMS and Qtractor.
Notable hardware with OSC implementations includes the JazzMutant Lemur multi-touch controller and AudioCubes.
OSC messages consist of an address pattern (such as /oscillator/4/frequency), a type tag string (such as ,fi for a float32 argument followed by an int32 argument), and the arguments themselves (which may include a time tag).[8] Address patterns form a hierarchical name space, reminiscent of a Unix filesystem path or a URL, and refer to "Methods" inside the server, which are invoked with the attached arguments. Type tag strings are a compact string representation of the argument types. Arguments are represented in binary form with four-byte alignment. The core types supported are int32, float32, OSC-string and OSC-blob.
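The address-pattern matching can be sketched as follows. This is a toy dispatcher, not a library API: the method names are hypothetical, and only the ? and * wildcards from the spec are handled (the spec also defines [] and {} patterns).

```python
import re

# Hypothetical server methods keyed by their address.
methods = {
    "/oscillator/4/frequency": lambda hz: print(f"oscillator 4 -> {hz} Hz"),
    "/oscillator/5/frequency": lambda hz: print(f"oscillator 5 -> {hz} Hz"),
}

def dispatch(address_pattern: str, *args) -> None:
    # Per the spec, '?' matches any single character except '/' and
    # '*' matches any run of such characters within one path segment.
    regex = (re.escape(address_pattern)
             .replace(r"\*", "[^/]*")
             .replace(r"\?", "[^/]"))
    for address, method in methods.items():
        if re.fullmatch(regex, address):
            method(*args)

dispatch("/oscillator/?/frequency", 440.0)  # invokes both methods
```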
An example message is included in the spec (with null padding bytes represented by ␀): /oscillator/4/frequency␀,f␀␀, followed by the 4-byte float32 representation of 440.0: 0x43dc0000.[9]
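That byte layout is straightforward to reproduce. The following Python sketch builds the packet and checks the trailing float bytes; the osc_string helper is our own, not part of any OSC library.

```python
import struct

def osc_string(s: str) -> bytes:
    # An OSC-string is ASCII text followed by a NUL, padded with
    # further NULs to a multiple of four bytes.
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

packet = (osc_string("/oscillator/4/frequency")  # 24-byte address
          + osc_string(",f")                     # type tags, padded to 4 bytes
          + struct.pack(">f", 440.0))            # big-endian float32 argument
assert packet[-4:] == bytes.fromhex("43dc0000")  # the spec's example bytes
```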
Messages may be combined into bundles, which themselves may be combined into bundles, etc. Each bundle contains a timestamp, which determines whether the server should respond immediately or at some point in the future.[8]
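A sketch of the bundle layout, reusing osc_string and packet from the previous example; the timetag arithmetic assumes the NTP-format timestamps (epoch 1900-01-01) that the spec prescribes.

```python
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 to the Unix epoch

def osc_bundle(when: float, *elements: bytes) -> bytes:
    # "#bundle" as an OSC-string, a 64-bit NTP timetag (whole seconds in
    # the high word, fractional seconds in the low word), then each
    # element (message or nested bundle) prefixed by its int32 size.
    seconds = int(when) + NTP_EPOCH_OFFSET
    fraction = int((when - int(when)) * 2**32)
    out = osc_string("#bundle") + struct.pack(">II", seconds, fraction)
    for element in elements:
        out += struct.pack(">i", len(element)) + element
    return out

# Ask the server to apply the frequency message one second from now.
bundle = osc_bundle(time.time() + 1.0, packet)
```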
Applications commonly employ extensions to this core set. More recently, some of these extensions, such as a compact Boolean type, were integrated into the required core types of OSC 1.1.
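For example, OSC 1.1's Boolean values live entirely in the type tag string and add no bytes to the argument data. A sketch, reusing osc_string from above (the /mixer/1/mute address is hypothetical):

```python
# ",T" declares a single 'true' argument purely in the type tags,
# so the message carries no argument bytes at all.
msg = osc_string("/mixer/1/mute") + osc_string(",T")
assert len(msg) == 20  # 16-byte address + 4-byte type tag string
```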
The advantages of OSC over MIDI are primarily internet connectivity; data type resolution; and the comparative ease of specifying a symbolic path, as opposed to specifying all connections as seven-bit numbers with seven-bit or fourteen-bit data types.[8] This human readability has a downside, however: OSC messages are less efficient to transmit and more difficult to parse for embedded firmware.[2]
The spec does not define any particular OSC Methods or OSC Containers. All messages are implementation-defined and vary from server to server.
MIDI is a technical standard that describes a communication protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing, and recording music.
Zeta Instrument Processor Interface (ZIPI) was a research project initiated by Zeta Instruments and UC Berkeley's CNMAT (Center for New Music and Audio Technologies). Introduced in 1994 in a series of publications in Computer Music Journal from MIT Press, ZIPI was intended as the next-generation transport protocol for digital musical instruments, designed with compliance to the OSI model.
The Linux Audio Developer's Simple Plugin API (LADSPA) is an application programming interface (API) standard for handling audio filters and audio signal processing effects, licensed under LGPL-2.1-or-later. Originally designed through consensus on the Linux Audio Developers mailing list, it now works on a variety of platforms. It is used in many free audio software projects, and there is a wide range of LADSPA plug-ins available.
General MIDI is a standardized specification for electronic musical instruments that respond to MIDI messages. GM was developed by the American MIDI Manufacturers Association (MMA) and the Japan MIDI Standards Committee (JMSC) and first published in 1991. The official specification is available in English from the MMA, bound together with the MIDI 1.0 specification, and in Japanese from the Association of Musical Electronic Industry (AMEI).
Ardour is a hard disk recorder and digital audio workstation application that runs on Linux, macOS, FreeBSD and Microsoft Windows. Its primary author is Paul Davis, who was also responsible for the JACK Audio Connection Kit. It is intended as a digital audio workstation suitable for professional use.
CV/gate is an analog method of controlling synthesizers, drum machines, and similar equipment with external sequencers. The control voltage typically controls pitch and the gate signal controls note on-off.
Steinberg Media Technologies GmbH is a German musical software and hardware company based in Hamburg. It develops software for writing, recording, arranging and editing music, most notably Cubase, Nuendo, and Dorico. It also designs audio and MIDI hardware interfaces, controllers, and iOS/Android music apps including Cubasis. Steinberg created several industry standard music technologies including the Virtual Studio Technology (VST) format for plug-ins and the ASIO protocol. Steinberg has been a wholly owned subsidiary of Yamaha since 2005.
Virtual Studio Technology (VST) is an audio plug-in software interface that integrates software synthesizers and effects units into digital audio workstations. VST and similar technologies use digital signal processing to simulate traditional recording studio hardware in software. Thousands of plugins exist, both commercial and freeware, and many audio applications support VST under license from its creator, Steinberg.
A digital audio workstation is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.
Rosegarden is a free software digital audio workstation program developed for Linux with ALSA, JACK and Qt4. It acts as an audio and MIDI sequencer, scorewriter, and musical composition and editing tool. It is intended to be a free alternative to such applications as Cubase.
DirectMusic is a deprecated component of the Microsoft DirectX API that allows music and sound effects to be composed and played and provides flexible interactive control over the way they are played. Architecturally, DirectMusic is a high-level set of objects, built on top of DirectSound, that allow the programmer to play sound and music without needing to get quite as low-level as DirectSound. DirectSound allows for the capture and playback of digital sound samples, whereas DirectMusic works with message-based musical data. Music can be synthesized either in hardware, in the Microsoft GS Wavetable SW Synth, or in a custom synthesizer.
A MIDI controller is any hardware or software that generates and transmits Musical Instrument Digital Interface (MIDI) data to MIDI-enabled devices, typically to trigger sounds and control parameters of an electronic music performance. They most often use a musical keyboard to send data about the pitch of notes to play, although a MIDI controller may trigger lighting and other effects. A wind controller has a sensor that converts breath pressure to volume information and lip pressure to control pitch. Controllers for percussion and stringed instruments exist, as well as specialized and experimental devices. Some MIDI controllers are used in association with specific digital audio workstation software. The original MIDI specification has been extended to include a greater range of control features.
Show control is the use of automation technology to link together and operate multiple entertainment control systems in a coordinated manner. It is distinguished from an entertainment control system, which is specific to a single theatrical department, system or effect, one which coordinates elements within a single entertainment discipline such as lighting, sound, video, rigging, or pyrotechnics. A typical entertainment control system would be a lighting control console. An example of show control would be linking a video segment with a number of lighting cues, or having a sound cue trigger animatronic movements, or all of these combined. Shows with or without live actors can almost invariably incorporate entertainment control technology and usually benefit from show control to operate these subsystems independently, simultaneously, or in rapid succession.
The Lemur was a highly customizable multi-touch device from JazzMutant, a French company founded in 2002 by Yoann Gantch, Pascal Joguet, Guillaume Largillier and Julien Olivier. It served as a controller for musical devices such as synthesizers and mixing consoles, as well as for other media applications such as video performances. As an audio tool, the Lemur's role was equivalent to that of a MIDI controller in a MIDI studio setup, except that the Lemur used the Open Sound Control (OSC) protocol, a high-speed networking replacement for MIDI. The controller was especially well suited for use with Reaktor and Max/MSP, tools for building custom software synthesizers.
LMMS is a digital audio workstation application program. It allows music to be produced by arranging samples, synthesizing sounds, entering notes via computer keyboard or mouse or by playing on a MIDI keyboard, and combining the features of trackers and sequencers. It is free and open source software, written in Qt and released under GPL-2.0-or-later.
LV2 is a set of royalty-free open standards for music production plug-ins and matching host applications. It includes support for the synthesis and processing of digital audio and CV, events such as MIDI and OSC, and provides a free alternative to audio plug-in standards such as Virtual Studio Technology (VST) and Audio Units (AU).
Qtractor is a hard disk recorder and digital audio workstation application for Linux. Qtractor is written in C++ and is based on the Qt framework. Its author is Rui Nuno Capela, who is also responsible for the Qjackctl, Qsynth and Qsampler line of Linux audio software. Qtractor's intention was to provide digital audio workstation software simple enough for the average home user, and yet powerful enough for the professional user.
RTP-MIDI is a protocol to transport MIDI messages within Real-time Transport Protocol (RTP) packets over Ethernet and WiFi networks. It is completely open and free, and is compatible both with LAN and WAN application fields. Compared to MIDI 1.0, RTP-MIDI includes new features like session management, device synchronization and detection of lost packets, with automatic regeneration of lost data. RTP-MIDI is compatible with real-time applications, and supports sample-accurate synchronization for each MIDI message.
AudioCubes are a collection of wireless intelligent light-emitting objects, capable of detecting each other's location, orientation, and user gestures. They were created by Bert Schiettecatte as electronic musical instruments for use by musicians in live performance, sound design, and musical composition, and for creating interactive applications in Max/MSP, Pd and C++.
"one of the reasons OSC has not replaced MIDI yet is that there is no connect-and-play … There is no standard namespace in OSC for interfacing e.g. a synth"
"OSC suffers from a superset of this problem: it's anarchy, and deliberately so. The owners of the specification have been so eager to avoid imposing constraints upon it that it has become increasingly difficult for hardware to cope with it. … More severely, there is an interoperability problem. OSC lacks a defined namespace for even the most common musical exchanges, to the extent that one cannot use it to send Middle C from a sequencer to a synthesiser in a standardised manner"
"OSC also introduces new obstacles. First, since there is no fixed set of messages, each participating server needs to know what messages it can send to the servers it intends to communicate with. Currently the OSC standard does not provide for a means of programmatically discovering all messages a server responds to"