Paraslash network audio streaming tools
Paraslash user manual
This document describes how to install, configure and use the paraslash network audio streaming system. Most chapters start with a chapter overview and conclude with an example section. We try to focus on general concepts and on the interaction of the various pieces of the paraslash package. Hence this user manual is not meant as a replacement for the manual pages that describe all command line options of each paraslash executable.
The core functionality of the para suite is provided by two main applications, para_server and para_audiod. para_server maintains the audio file database and acts as the streaming source, while para_audiod is the streaming client. Usually, both run in the background on different hosts but a local setup is also possible.
A simplified picture of a typical setup is as follows:
The two client programs, para_client and para_audioc, communicate with para_server and para_audiod, respectively.
para_gui controls para_server and para_audiod by executing para_client and para_audioc. In particular, it runs a command to obtain the state of para_audiod and para_server, and the metadata of the current audio file. This information is pretty-printed in a curses window.
The paraslash executables
para_server streams binary audio data (MP3, …) over local and/or remote networks. It listens on a TCP port and accepts commands such as play, stop, pause, next from authenticated clients. The components of para_server are illustrated in the following diagram:
Incoming connections arrive at the dispatcher which creates a process dedicated to the connection. Its task is to authenticate the client and to run the command handler which forwards the client request to either the audio file selector or the virtual streaming system. Results (if any) are sent back to the client.
The audio file selector manages audio files using various database tables. It maintains statistics on the usage of all audio files such as last-played time and the number of times each file was selected. It is also responsible for selecting and loading audio files for streaming. Additional information may be added to the database to allow fine-grained selection based on various properties of the audio file, including information found in (ID3) tags. Simple playlists are also supported. It is possible to store images (album covers) and lyrics in the database and associate these to the corresponding audio files. The section on the audio file selector discusses this topic in more detail.
Another component of para_server is the virtual streaming system, which controls the paraslash senders. During streaming it requests small chunks of data (e.g., mp3 frames) from the audio file selector and feeds them to the senders which forward the chunks to connected clients.
The three senders of para_server correspond to network streaming protocols based on HTTP, DCCP, or UDP. This is explained in the section on networking.
The client program which connects to para_server. paraslash commands are sent to para_server and the response is dumped to STDOUT. This can be used by any scripting language to produce user interfaces with little programming effort.
All connections between para_server and para_client are encrypted with a symmetric session key. For each user of paraslash you must create a public/secret RSA key pair for authentication.
If para_client is started without non-option arguments, an interactive session (shell) is started. Command history and command completion are supported through libreadline.
The purpose of para_audiod is to download, decode and play an audio stream received from para_server. A typical setup looks as follows.
The status task of para_audiod connects to para_server and runs the “stat” command to retrieve the current server status. If an audio stream is available, para_audiod starts a so-called buffer tree to play the stream.
The buffer tree consists of a receiver, any number of filters and a writer. The receiver downloads the audio stream from para_server and the filters decode or modify the received data. The writer plays the decoded stream.
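The receiver/filter/writer chain can be pictured as a simple data pipeline. The following Python sketch models a minimal buffer tree with generators; all names are made up for illustration and do not reflect paraslash's actual API, and the "filter" merely uppercases its input where a real filter would decode audio data.

```python
def receiver(stream):
    """Yield raw chunks as they arrive from the network (simulated here)."""
    for chunk in stream:
        yield chunk

def toy_filter(chunks):
    """Stand-in for a decoder: transforms each chunk it receives."""
    for chunk in chunks:
        yield chunk.upper()

def writer(chunks, out):
    """Terminal node of the tree: 'plays' the decoded stream by
    appending each chunk to the output list."""
    for chunk in chunks:
        out.append(chunk)

# Assemble the tree: receiver -> filter -> writer.
played = []
writer(toy_filter(receiver(["mp3 ", "data"])), played)
```

The generator composition mirrors the tree structure: each node pulls data from its child and pushes the result upward until the writer consumes it.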
The dispatcher of para_audiod listens on a local socket and runs audiod commands on behalf of para_audioc. For example, para_gui runs para_audioc to obtain status information about para_audiod and the current audio file. Access to the local socket may be restricted by means of Unix socket credentials.
The client program which talks to para_audiod. Used to control para_audiod, to receive status info, or to grab the stream at any point of the decoding process. Like para_client, para_audioc supports interactive sessions on systems with libreadline.
A command line HTTP/DCCP/UDP stream grabber. The http mode is compatible with arbitrary HTTP streaming sources (e.g. icecast). In addition to the three network streaming modes, para_recv can also operate in local (afh) mode. In this mode it writes the content of an audio file on the local file system in complete chunks to stdout, optionally ‘just in time’. This allows cutting audio files without decoding, and it enables third-party software which is unaware of the particular audio format to send complete frames in real time.
A filter program that reads from STDIN and writes to STDOUT. Like para_recv, this is an atomic building block which can be used to assemble higher-level audio receiving facilities. It combines several different functionalities in one tool: decoders for multiple audio formats and a number of processing filters, among these a normalizer for audio volume.
A small stand-alone program that prints tech info about the given audio file to STDOUT. It can be instructed to print a “chunk table”, an array of offsets within the audio file.
A modular audio stream writer. It supports a simple file writer output plug-in and optional WAV/raw players for ALSA (Linux) and OSS. para_write can also be used as a stand-alone WAV or raw audio player.
A command line audio player which supports the same audio formats as para_server. It differs from other players in that it has an insert and a command mode, like the vi editor. Line editing is based on libreadline, and tab completion and command history are supported.
A GUI that presents status information in a curses window. Appearance can be customized via themes. para_gui provides key-bindings for the most common server commands and new key-bindings can be added easily.
An alarm clock and volume-fader for OSS and ALSA.
This chapter lists the software that must be installed to compile the paraslash package, describes how to compile and install the paraslash source code, and outlines the steps needed to set up a typical server and client.
For the impatient
In any case you will need
To build the sources from a tarball, execute
To build from git or a gitweb snapshot, run this command instead:
There should be no errors, but probably some warnings about missing packages, which usually implies that not all audio formats will be supported. If headers or libraries are installed at unusual locations you might need to tell the configure script where to find them. Try
to see a list of options. If the paraslash package was compiled successfully, execute (optionally)
to run the paraslash test suite. If all tests pass, execute as root
to install executables under /usr/local/bin and the man pages under /usr/local/man.
Create a paraslash user
In order to control para_server at runtime you must create a paraslash user. As authentication is based on the RSA crypto system you’ll have to create an RSA key pair. If you already have a user and an RSA key pair, you may skip this step.
In this section we’ll assume a typical setup: You would like to run para_server on some host called server_host as user foo, and you want to connect to para_server from another machine called client_host as user bar.
As foo@server_host, create ~/.paraslash/server.users by typing the following commands:
Next, change to the “bar” account on client_host and generate the key pair with the commands
This generates the two files id_rsa and id_rsa.pub in ~/.ssh. Note that para_server won’t accept keys shorter than 2048 bits. Moreover, para_client rejects private keys which are world-readable.
para_server only needs to know the public key of the key pair just created. Copy this public key to server_host:
Finally, tell para_client to connect to server_host:
For this first try, we’ll use the info loglevel to make the output of para_server more verbose.
Now you can use para_client to connect to the server and issue commands. Open a new shell as bar@client_host and try
to retrieve the list of available commands and some server info. Don’t proceed if this doesn’t work.
Create and populate the database
An empty database is created with
This initializes a couple of empty tables under ~/.paraslash/afs_database-0.4. You normally don’t need to look at these tables, but it’s good to know that you can start from scratch with
in case something went wrong.
Next, you need to add some audio files to that database so that para_server knows about them. Choose an absolute path to a directory containing some audio files and add them to the audio file table:
This might take a while, so it is a good idea to start with a directory that does not contain too many files. Note that the table only contains data about the audio files found, not the files themselves.
You may print the list of all known audio files with
We will have to tell para_audiod that it should receive the audio stream from server_host via http:
You should now be able to listen to the audio stream once para_server starts streaming. To activate streaming, execute
Since no playlist has been specified yet, the “dummy” mode which selects all known audio files is activated automatically. See the section on the audio file selector for how to use playlists and moods to specify which files should be streamed in which order.
To identify streaming problems try to receive, decode and play the stream manually using para_recv, para_filter and para_write as follows. For simplicity we assume that you’re running Linux/ALSA and that only MP3 files have been added to the database.
Double check what is logged by para_server and use the --loglevel option of para_recv, para_filter and para_write to increase verbosity.
para_server uses a challenge-response mechanism to authenticate requests from incoming connections, similar to ssh's public key authentication method. Authenticated connections are encrypted using AES in integer counter mode, which turns the block cipher into a stream cipher.
In this chapter we briefly describe RSA and AES, and sketch the authentication handshake between para_client and para_server. User management is discussed in the section on the user_list file. These sections are all about communication between the client and the server. Connecting para_audiod is a different matter and is described in a separate section.
RSA and AES
A block cipher is a transformation which operates on fixed-length blocks. For symmetric block ciphers the transformation is determined by a single key for both encryption and decryption. For asymmetric block ciphers, on the other hand, the key consists of two parts, called the public key and the private key. A message can be encrypted with either key and only the counterpart of that key can decrypt the message. Asymmetric block ciphers can be used for both signing and encrypting a message.
RSA is an asymmetric block cipher which is used in many applications, including ssh and gpg. The RSA public key encryption and signature algorithms are defined in detail in RFC 2437. Paraslash relies on RSA for authentication.
Stream ciphers XOR the input with a pseudo-random key stream to produce the output. Decryption uses the same function calls as encryption. Any block cipher can be turned into a stream cipher by generating the pseudo-random key stream by encrypting successive values of a counter (counter mode).
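To make the counter-mode construction concrete, here is a toy sketch. It substitutes SHA-256 for AES as the block transformation (paraslash itself uses AES via openssl or libgcrypt); the point is only that XORing the input with a counter-derived keystream makes encryption and decryption the very same operation.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Produce n pseudo-random bytes by 'encrypting' successive counter
    values. SHA-256 of key || nonce || counter stands in for the block
    cipher here; a real implementation would use AES."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:n])

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream. Since XOR is its own inverse,
    encryption and decryption use the same function."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ciphertext = ctr_crypt(b"key", b"nonce", b"attack at dawn")
plaintext = ctr_crypt(b"key", b"nonce", ciphertext)  # same call decrypts
```

This also shows why a counter-mode key must never be reused: two messages encrypted with the same key and nonce are XORed with the same keystream, which leaks their XOR.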
AES, the advanced encryption standard, is a well-known symmetric block cipher. Paraslash employs AES in counter mode as described above to encrypt communications. Since a stream cipher key must not be used twice, a random key is generated for every new connection.
The authentication handshake between para_client and para_server goes as follows:
paraslash relies on the quality of the pseudo-random bytes provided by the crypto library (openssl or libgcrypt), on the security of the RSA and AES implementations, and on the infeasibility of inverting the SHA1 function.
The user_list file
At startup para_server reads the user list file which contains one line per user. The default location of the user list file may be changed with the --user-list option.
There should be at least one user in this file. Each user must have an RSA key pair. The public part of the key is needed by para_server while the private key is needed by para_client. Each line of the user list file must be of the form
where username is an arbitrary string (usually the user’s login name), key is the full path to that user’s public RSA key, and perms is a comma-separated list of zero or more of the following permission bits:
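A tiny parser illustrates the three fields of a user entry. It assumes a whitespace-separated "user <username> <key> <perms>" layout as described above; the actual parser lives in para_server and may differ in details, and the permission names used below are placeholders, not necessarily paraslash's real permission bits.

```python
def parse_user_line(line: str):
    """Split one user_list line into username, key path and the
    comma-separated permission list (which may be empty)."""
    fields = line.split()
    if len(fields) < 3 or fields[0] != "user":
        raise ValueError("malformed user_list line: " + line)
    username, key_path = fields[1], fields[2]
    perms = fields[3].split(",") if len(fields) > 3 else []
    return username, key_path, perms

# Hypothetical example entry; the permission names are made up.
name, key, perms = parse_user_line(
    "user bar /home/foo/.paraslash/bar.pub READ,WRITE")
```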
The permission bits specify which commands the user is allowed to execute. The output of
contains the permissions needed to execute the command.
It is possible to make para_server reread the user_list file by executing the paraslash “hup” command or by sending SIGHUP to the PID of para_server.
para_audiod listens on a Unix domain socket. Such sockets are for local communication only, so only local users can connect to para_audiod. By default any user may connect, but this can be restricted on platforms that support Unix socket credentials, which allow para_audiod to obtain the credentials of the connecting process.
Use para_audiod's --user-allow option to allow connections only for a limited set of users.
The audio file selector
paraslash comes with a sophisticated audio file selector (AFS), whose main task is to determine which file to stream next, based on information about the audio files stored in a database. It also communicates with para_client via the command handler whenever an AFS command is executed, for example to answer a database query.
Besides the simple playlists, AFS supports audio file selection based on moods which act as a filter that limits the set of all known audio files to those which satisfy certain criteria. It also maintains tables containing images (e.g. album cover art) and lyrics that can be associated with one or more audio files.
In this chapter we sketch the setup of the AFS process during server startup and proceed with the description of the layout of the various database tables. The section on playlists and moods explains these two audio file selection mechanisms in detail and contains practical examples. The way file renames and content changes are detected is discussed briefly before the Troubleshooting section concludes the chapter.
The AFS process
On startup, para_server forks to create the AFS process which opens the database tables. The AFS process accepts incoming connections which arrive either on a pipe which is shared with para_server, or on the local socket it is listening on. The setup is as follows.
The virtual streaming system, which is part of the server process, communicates with the AFS process via pipes and shared memory. When the current audio file changes, it sends a notification through the shared pipe. The AFS process queries the database to determine the next audio file, opens it, verifies that it has not been changed since it was added to the database and passes the open file descriptor back to the virtual streaming system, along with audio file meta-data such as file name, duration, audio format and so on. The virtual streaming system then starts to stream the file.
The command handlers of all AFS server commands use the local socket to query or update the database. For example, the command handler of the add command sends the path of an audio file to the local socket. The AFS process opens the file and tries to find an audio format handler which recognizes the file. If all goes well, a new database entry with metadata obtained from the audio format handler is added to the database.
Note that AFS employs libosl, the object storage layer library, as the database backend. This library offers functionality similar to a relational database, but is much more lightweight than a full featured database management system.
Metadata about the known audio files is stored in an OSL database. This database consists of the following tables:
All tables are described in more detail below.
The audio file table
This is the most important and usually also the largest table of the AFS database. It contains the information needed to stream each audio file. In particular the following data is stored for each audio file.
To add or refresh the data contained in the audio file table, the add command is used. It takes the full path of either an audio file or a directory. In the latter case, the directory is traversed recursively and all files which are recognized as valid audio files are added to the database.
The attribute table
The attribute table contains two columns, name and bitnum. An attribute is simply a name for a certain bit number in the attribute bitmask of the audio file table.
Each of the 64 bits of the attribute bitmask can be set for each audio file individually. Hence up to 64 different attributes may be defined. For example, “pop”, “rock”, “blues”, “jazz”, “instrumental”, “german_lyrics”, “speech”, whatever. You are free to choose as many attributes as you like and there are no naming restrictions for attributes.
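Manipulating the attribute bitmask is ordinary bit arithmetic. The following sketch (function names made up for illustration) shows how a single bit number selects one attribute in a 64-bit mask:

```python
def set_attribute(bitmask: int, bitnum: int) -> int:
    """Set the attribute stored at the given bit number (0..63)."""
    return bitmask | (1 << bitnum)

def unset_attribute(bitmask: int, bitnum: int) -> int:
    """Clear the attribute stored at the given bit number."""
    return bitmask & ~(1 << bitnum)

def has_attribute(bitmask: int, bitnum: int) -> bool:
    """Test whether the attribute at the given bit number is set."""
    return bool(bitmask & (1 << bitnum))

# Suppose bit 3 had been assigned to a "test" attribute:
mask = set_attribute(0, 3)
```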
A new attribute “test” is created by
and para_client lsatt
lists all available attributes. You can set the “test” attribute for an audio file by executing
Similarly, the “test” bit can be removed from an audio file with
Instead of a path you may use a shell wildcard pattern. The attribute is applied to all audio files matching this pattern:
gives you a verbose listing of your audio files also showing which attributes are set.
In case you wonder why the double dash in the above command is needed: it tells para_client not to interpret the options after the dashes. If you find this annoying, just say
and be happy. In what follows we shall use this alias.
The “test” attribute can be dropped from the database with
Read the output of
for more information and a complete list of command line options to these commands.
The image, lyrics, moods and playlists tables are all blob tables. Blob tables consist of three columns each: The identifier which is a positive number that is auto-incremented, the name (an arbitrary string) and the content (the blob).
All blob tables support the same set of actions: cat, ls, mv, rm and add. Of course, add is used for adding new blobs to the table while the other actions have the same meaning as the corresponding Unix commands. The paraslash commands to perform these actions are constructed as the concatenation of the table name and the action. For example addimg, catimg, lsimg, mvimg, rmimg are the commands that manipulate or query the image table.
The add variant of these commands is special as these commands read the blob contents from stdin. To add an image to the image table the command
can be used.
Note that the images and lyrics are not interpreted at all, and also the playlist and the mood blobs are only investigated when the mood or playlist is activated with the select command.
The score table
The score table describes those audio files which are admissible for the current mood or playlist (see below). The table has two columns: a pointer to a row of the audio file table and a score value.
Unlike all other tables of the database, the score table remains in memory and is never stored on disk. It is initialized at startup and recomputed when the select command loads a new mood or playlist.
When the audio file selector is asked to open the next audio file, it picks the row with the highest score, opens the corresponding file and passes the file descriptor to the virtual streaming system. At this point the last_played and the num_played fields of the selected file are updated and the score is recomputed.
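The selection and update step can be sketched as follows. The data layout and the score recomputation are invented for illustration; the real formula depends on the active mood or playlist.

```python
import time

def recompute_score(entry):
    """Toy recomputation: files played more often score lower, so
    recently selected files move to the back of the queue."""
    return -entry["num_played"]

def select_next(score_table, audio_table):
    """Pick the row with the highest score, update the statistics of
    the corresponding file and recompute its score, mimicking what
    the audio file selector does when asked for the next file."""
    row = max(score_table, key=lambda r: r["score"])
    entry = audio_table[row["path"]]
    entry["num_played"] += 1
    entry["last_played"] = time.time()
    row["score"] = recompute_score(entry)
    return row["path"]

audio_table = {"a.mp3": {"num_played": 0, "last_played": 0},
               "b.mp3": {"num_played": 0, "last_played": 0}}
score_table = [{"path": "a.mp3", "score": 2},
               {"path": "b.mp3", "score": 1}]
first = select_next(score_table, audio_table)
second = select_next(score_table, audio_table)
```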
Playlists and moods
Playlists and moods offer two different ways of specifying the set of admissible files. A playlist describes the set of admissible files directly, while a mood describes it in terms of attributes and other types of information available in the audio file table. As an example, a mood can define a filename pattern, which is then matched against the names of audio files in the table.
Playlists are stored in the playlist table of the afs database, using the blob format described above. A new playlist is created with the addpl command by specifying the full (absolute) paths of all desired audio files, separated by newlines. Example:
If my_playlist already exists it is overwritten. To activate the new playlist, execute
The audio file selector assigns a score to each entry of the list, in descending order, so that the files are streamed in the order given. If a file cannot be opened for streaming, its entry is removed from the score table (but not from the playlist).
A mood consists of a unique name and a definition. The definition is an expression which describes which audio files are considered admissible. At any time at most one mood can be active, meaning that para_server will only stream files which are admissible for the active mood.
The expression may refer to attributes and other metadata stored in the database. Expressions may be combined by means of logical and arithmetical operators in a natural way. Moreover, string matching based on regular expression or wildcard patterns is supported.
The set of admissible files is determined by applying the expression to each audio file in turn. For a mood definition to be valid, its expression must evaluate to a number, a string or a boolean value (“true” or “false”). For numbers, any value other than zero means the file is admissible. For strings, any non-empty string indicates an admissible file. For boolean values, true means admissible and false means not admissible. As a special case, the empty expression treats all files as admissible.
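The admissibility rules for the three result types can be stated in a few lines. This is only a restatement of the rules above in code form, not paraslash's actual evaluator:

```python
def is_admissible(value):
    """Map a mood expression result to admissibility: boolean true,
    non-zero numbers and non-empty strings mean the file is
    admissible."""
    if isinstance(value, bool):        # check bool before int, since
        return value                   # bool is a subtype of int
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        return value != ""
    raise TypeError("mood expression must yield number, string or bool")
```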
Expressions are based on a context-free grammar which distinguishes between several types for syntactic units or groupings. The grammar defines a set of keywords which have a type and a corresponding semantic value, as shown in the following table.
[*] For most audio formats, the year tag is stored as a string. It is converted to an integer by the mood parser. If the audio file has no year tag or the content of the year tag is not a number, the semantic value is zero. A special convention applies if the year tag is a one-digit or a two-digit number. In this case 1900 is added to the tag value.
Expressions may be grouped using parentheses, logical and arithmetical operators or string matching operators. The following table lists the available operators.
Besides integers, strings and booleans there is an additional type which describes regular expression or wildcard patterns. Patterns are not just strings because they also include a list of flags which modify matching behaviour.
Regular expression patterns are of the form
Note that only extended regular expression patterns are supported. See regex(3) for details.
Wildcard patterns are similar, but the pattern must be delimited by
[*] Not in POSIX, but both FreeBSD and NetBSD have it.
[**] GNU extension, silently ignored on non GNU systems.
See fnmatch(3) for details.
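Python's fnmatch module implements the same fnmatch(3) semantics, which makes it convenient for experimenting with a wildcard pattern before putting it into a mood definition. Note that, unlike shell pathname expansion, a plain fnmatch "*" also matches "/" characters:

```python
import fnmatch

def path_matches(path: str, pattern: str) -> bool:
    """Case-sensitive wildcard match of a full path against a pattern,
    similar in spirit to what the mood parser does."""
    return fnmatch.fnmatchcase(path, pattern)

hit = path_matches("/music/rock/song.mp3", "*rock*")
miss = path_matches("/music/jazz/song.mp3", "*rock*")
```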
Mood definitions may contain arbitrary whitespace and comments. A comment is a word beginning with #. This word and all remaining characters of the line are ignored.
To create a new mood called “my_mood”, write its definition into some temporary file, say “tmpfile”, and add it to the mood table by executing
If the mood definition is really short, you may just pipe it to the client instead of using temporary files. Like this:
There is no need to keep the temporary file since you can always use the catmood command to get it back:
A mood can be activated by executing
Once active, the list of admissible files is shown by the ls command if the “-a” switch is given:
File renames and content changes
Since the audio file selector knows the SHA1 of each audio file that has been added to the afs database, it recognizes if the content of a file has changed, e.g. because an ID3 tag was added or modified. Also, if a file has been renamed or moved to a different location, afs will detect that an entry with the same hash value already exists in the audio file table.
In both cases it is enough to just re-add the new file. In the first case (file content changed), the audio table is updated, while metadata such as the num_played and last_played fields, as well as the attributes, remain unchanged. In the other case, when the file is moved or renamed, only the path information is updated, all other data remains as before.
It is possible to change the behaviour of the add command by using the “-l” (lazy add) or the “-f” (force add) option.
Use the debug loglevel (-l debug) to show debugging info. All paraslash executables have a brief online help which is displayed when -h is given. The --detailed-help option prints the full help text.
If para_server crashed or was killed by SIGKILL (signal 9), it may refuse to start again because of “dirty osl tables”. In this case you’ll have to run the oslfsck program of libosl to fix your database:
However, make sure para_server isn’t running before executing oslfsck.
If you don't mind recreating your database you can start from scratch by removing the entire database directory, i.e.
Be aware that this removes all attribute definitions, all playlists and all mood definitions, and requires re-initializing the tables.
Although oslfsck fixes inconsistencies in database tables it doesn’t care about the table contents. To check for invalid table contents, use
This prints out references to missing audio files as well as invalid playlists and mood definitions.
Similarly, para_audiod refuses to start if its socket file exists, since this indicates that another instance of para_audiod is running. After a crash a stale socket file might remain and you must run
once to fix it up.
Audio formats and audio format handlers
The following audio formats are supported by paraslash:
MP3, MPEG-1 Audio Layer 3, is a common format for audio storage, defined as part of the MPEG-1 standard. An MP3 file is made up of multiple MP3 frames, each consisting of a header and a data block. The size of an MP3 frame depends on the bit rate and on the number of channels. For a typical CD-audio file (44.1 kHz sample rate, stereo) encoded at a bit rate of 128 kbit/s, an MP3 frame is about 400 bytes in size.
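The frame size quoted above follows from the standard MPEG-1 Layer III formula, frame size = 144 * bitrate / samplerate, plus one optional padding byte:

```python
def mp3_frame_size(bitrate: int, sample_rate: int, padding: int = 0) -> int:
    """Size in bytes of an MPEG-1 Layer III frame:
    144 * bitrate / sample_rate, plus one optional padding byte."""
    return 144 * bitrate // sample_rate + padding

size = mp3_frame_size(128_000, 44_100)  # about 400 bytes, as stated above
```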
OGG is a standardized audio container format, while Vorbis is an open source codec for lossy audio compression. Since Vorbis is most commonly made available via the OGG container format, it is often referred to as OGG/Vorbis. The OGG container format divides data into chunks called OGG pages. A typical OGG page is about 4KB in size. The Vorbis codec creates variable-bitrate (VBR) data, so the bitrate may vary considerably within a stream.
Speex is an open-source speech codec that is based on CELP (Code Excited Linear Prediction) coding. It is designed for voice over IP applications, has modest complexity and a small memory footprint. Wideband and narrowband (telephone quality) speech are supported. As for Vorbis audio, Speex bit-streams are often stored in OGG files. As of 2012 this codec is considered obsolete since the Opus codec, described below, surpasses its performance in all areas.
Opus is a lossy audio compression format standardized through RFC 6716 in 2012. It combines the speech-oriented SILK codec and the low-latency CELT (Constrained Energy Lapped Transform) codec. Like OGG/Vorbis and OGG/Speex, Opus data is usually encapsulated in OGG containers. All known software patents which cover Opus are licensed under royalty-free terms.
Advanced Audio Coding (AAC) is a standardized, lossy compression and encoding scheme for digital audio which is the default audio format for Apple's iPhone, iPod and iTunes. Usually MPEG-4 is used as the container format and audio files encoded with AAC have the .m4a extension. A typical AAC frame is about 700 bytes in size.
Windows Media Audio (WMA) is an audio data compression technology developed by Microsoft. A WMA file is usually encapsulated in the Advanced Systems Format (ASF) container format, which also specifies how meta data about the file is to be encoded. The bit stream of WMA is composed of superframes, each containing one or more frames of 2048 samples. For 16 bit stereo a WMA superframe is about 8K in size.
The Free Lossless Audio Codec (FLAC) compresses audio without quality loss. It gives better compression ratios than a general purpose compressor like zip or bzip2 because FLAC is designed specifically for audio. A FLAC-encoded file consists of frames of varying size, up to 16K. Each frame starts with a header that contains all information necessary to decode the frame.
Unfortunately, each audio format has its own conventions for storing meta data as tags in the audio file.
For MP3 files, ID3 versions 1 and 2 are widely used. ID3 version 1 is rather simple but also very limited, as it supports only artist, title, album, year and comment tags, each at most 32 characters long. ID3 version 2 is much more flexible but requires a separate library to be installed for paraslash to support it.
Ogg Vorbis, Ogg Speex and FLAC files contain meta data as Vorbis comments, which are typically implemented as strings of the form "[TAG]=[VALUE]". Unlike ID3 version 1 tags, one may use whichever tags are appropriate for the content.
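Parsing the "[TAG]=[VALUE]" form is straightforward. The sketch below normalizes tag names to lower case (tag names are case-insensitive per the Vorbis comment convention) and lets the value itself contain "=" characters:

```python
def parse_vorbis_comments(comments):
    """Split 'TAG=VALUE' strings into a dict of lower-cased tag names.
    Only the first '=' separates tag from value."""
    tags = {}
    for comment in comments:
        tag, _, value = comment.partition("=")
        tags[tag.lower()] = value
    return tags

tags = parse_vorbis_comments(["ARTIST=Some Band", "TITLE=A=B"])
```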
AAC files usually use the MPEG-4 container format for storing meta data while WMA files wrap meta data as special objects within the ASF container format.
paraslash only tracks the most common tags that are supported by all tag variants: artist, title, year, album, comment. When a file is added to the AFS database, the meta data of the file is extracted and stored in the audio file table.
Chunks and chunk tables
paraslash uses the word "chunk" as a common term for the building blocks of an audio file. For MP3 files, a chunk is the same as an MP3 frame, while for OGG files a chunk is an OGG page, etc. Therefore the chunk size varies considerably between audio formats, from a few hundred bytes (MP3) up to 16K (FLAC).
The chunk table contains the offsets within the audio file that correspond to the chunk boundaries of the file. Like the meta data, the chunk table is computed and stored in the database whenever an audio file is added.
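A chunk table is just a list of byte offsets. The following sketch builds one from the sizes of successive chunks; entry i is the offset of chunk i, and a final sentinel entry marks the end of the last chunk, so chunk i occupies the byte range [table[i], table[i+1]). Whether paraslash stores such a sentinel is an implementation detail; the sketch only illustrates the idea.

```python
def chunk_table(chunk_sizes):
    """Turn the sizes of successive chunks into a table of byte
    offsets within the audio file."""
    offsets = [0]
    for size in chunk_sizes:
        offsets.append(offsets[-1] + size)
    return offsets

table = chunk_table([417, 417, 418])  # e.g. three MP3 frames
```

Seeking then reduces to a table lookup: to start streaming at chunk i, the sender simply begins reading at offset table[i].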
The paraslash senders (see below) always send complete chunks. The granularity for seeking is therefore determined by the chunk size.
Audio format handlers
For each audio format paraslash contains an audio format handler whose first task is to tell whether a given file is a valid audio file of this type. If so, the audio format handler extracts some technical data (duration, sampling rate, number of channels, etc.), computes the chunk table and reads the meta data.
The audio format handler code is linked into para_server and executed via the add command. The same code is also available as a stand-alone tool, para_afh, which prints the technical data, the chunk table and the meta data of a file. Moreover, all audio format handlers are combined in the afh receiver which is part of para_recv and para_play.
Paraslash uses different network connections for control and data. para_client communicates with para_server over a dedicated TCP control connection. To transport audio data, separate data connections are used. For these data connections, a variety of transports (UDP, DCCP, HTTP) can be chosen.
The chapter starts with the control service, followed by a section on the various streaming protocols in which the data connections are described. The way audio file headers are embedded into the stream is discussed briefly before the example section which illustrates typical commands for real-life scenarios.
Both IPv4 and IPv6 are supported.
The paraslash control service
para_server is controlled at runtime via the paraslash control connection. This connection is used for server commands (play, stop, …) as well as for afs commands (ls, select, …).
The server listens on a TCP port and accepts connections from clients that connect to the open port. Each connection causes the server to fork off a client process which inherits the connection and deals with that client only. In this classical accept/fork approach the server process is unaffected if the child dies or goes crazy for whatever reason. In fact, the child process cannot change the address space of the server process.
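The accept/fork scheme can be sketched in a few lines (a simplified model, not para_server's actual code):

```python
import os
import socket

def serve_one(listener):
    """Accept one connection and handle it in a forked child."""
    conn, _addr = listener.accept()
    pid = os.fork()
    if pid == 0:
        # Child: owns this client only. It works on a copy-on-write
        # copy of the parent's memory, so whatever it does, it cannot
        # corrupt the server's address space.
        request = conn.recv(1024)
        conn.sendall(b"ok: " + request)
        conn.close()
        os._exit(0)
    conn.close()         # parent drops its copy of the connection
    os.waitpid(pid, 0)   # reap the child (a real server uses SIGCHLD)
```

A real server would loop around serve_one() and authenticate the client before running any command handler.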
The section on client-server authentication above described the early connection establishment from the crypto point of view. This section describes what happens after the connection (including crypto setup) has been established. There are four processes involved during command dispatch as sketched in the following diagram.
Note that the child process is not a child of the afs process, so communication of these two processes has to happen via local sockets. In order to avoid abuse of the local socket by unrelated processes, a magic cookie is created once at server startup time just before the server process forks off the AFS process. This cookie is known to the server, AFS and the child, but not to unrelated processes.
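The cookie check can be sketched as follows (names and sizes are illustrative, not paraslash's actual code):

```python
import hmac
import os

# Generated once at server startup, before the AFS process is forked
# off, so the server, AFS and every command handler child inherit the
# same value -- unrelated processes never see it.
COOKIE = os.urandom(32)

def check_cookie(received):
    """Reject local-socket requests that lack the startup cookie."""
    return hmac.compare_digest(received, COOKIE)
```

Using a constant-time comparison avoids leaking information about the cookie through timing.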
There are two different kinds of commands: First there are commands that cause the server to respond with some answer such as the list of all audio files. All but the addblob commands (addimg, addlyr, addpl, addmood) are of this kind. The addblob commands add contents to the database, so they need to transfer data the other way round, from the client to the server.
There is no knowledge about the server commands built into para_client, so it does not know about addblob commands. Instead, the server sends a special “awaiting data” packet for these commands. If the client receives this packet, it sends STDIN to the server, otherwise it dumps data from the server to STDOUT.
A network (audio) stream usually consists of one streaming source, the sender, and one or more receivers which read data over the network from the streaming source.
Senders are thus part of para_server while receivers are part of para_audiod. Moreover, there is the stand-alone tool para_recv which can be used to manually download a stream, either from para_server or from a web-based audio streaming service.
The following three streaming protocols are supported by paraslash:
See the Appendix on network protocols for brief descriptions of the various protocols relevant for network audio streaming with paraslash.
It is possible to activate more than one sender simultaneously. Senders can be controlled at run time and via config file and command line options.
Note that audio connections are not encrypted. Transport or Internet layer encryption should be used if encrypted data connections are needed.
Since DCCP and TCP are both connection-oriented protocols, connection establishment/teardown and access control are very similar between these two streaming protocols. UDP is the most lightweight option, since in contrast to TCP/DCCP it is connectionless. It is also the only protocol supporting IP multicast.
The HTTP and the DCCP sender listen on a (TCP/DCCP) port waiting for clients to connect and establish a connection via some protocol-defined handshake mechanism. Both senders maintain two linked lists each: The list of all clients which are currently connected, and the list of access control entries which determines who is allowed to connect. IP-based access control may be configured through config file and command line options and via the “allow” and “deny” sender subcommands.
Upon receiving a GET request from the client, the HTTP sender sends back a status line and a message. The body of this message is the audio stream. This is common practice and is supported by many popular clients which can thus be used to play a stream offered by para_server. For DCCP things are a bit simpler: No messages are exchanged between the receiver and sender. The client simply connects and the sender starts to stream.
DCCP is an experimental protocol which offers a number of new features not available for TCP. Both ends can negotiate these features using a built-in negotiation mechanism. In contrast to TCP/HTTP, DCCP is datagram-based (no retransmissions) and thus should not be used over lossy media (e.g. WiFi networks). One useful feature offered by DCCP is access to a variety of different congestion-control mechanisms called CCIDs. Two different CCIDs are available by default on Linux:
Unlike the HTTP and DCCP senders, the UDP sender maintains only a single list, the target list. This list describes the set of clients to which the stream is sent. There is no list for access control and no “allow” and “deny” commands for the UDP sender. Instead, the “add” and “delete” commands can be used to modify the target list.
Since both UDP and DCCP offer an unreliable datagram-based transport, additional measures are necessary to guard against disruptions over networks that are lossy or which may be subject to interference (as is for instance the case with WiFi). Paraslash uses FEC (Forward Error Correction) to guard against packet losses and reordering. The stream is FEC-encoded before it is sent through the UDP socket and must be decoded accordingly on the receiver side.
The packet size and the amount of redundancy introduced by FEC can be configured via the FEC parameters which are dictated by the server and may also be configured through the “sender” command. The FEC parameters are encoded in the header of each network packet, so no configuration is necessary on the receiver side. See the section on FEC below.
Streams with headers and headerless streams
For OGG/Vorbis, OGG/Speex and wma streams, some of the information needed to decode the stream is only contained in the audio file header of the container format but not in each data chunk. Clients must be able to obtain this information in case streaming starts in the middle of the file or if para_audiod is started while para_server is already sending a stream.
This is accomplished in different ways, depending on the streaming protocol. For connection-oriented streams (HTTP, DCCP) the audio file header is sent prior to audio file data. This technique however does not work for the connectionless UDP transport. Hence the audio file header is periodically being embedded into the UDP audio data stream. By default, the header is resent after five seconds. The receiver has to wait until the next header arrives before it can start decoding the stream.
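The receiver-side behaviour can be modelled as follows (the (is_header, payload) pairs are a hypothetical simplification of the real packet format):

```python
def decode_udp_stream(packets):
    """Yield decodable chunks from a stream joined mid-file.

    packets: (is_header, payload) pairs; a real stream marks the
    periodically embedded header in the packet header instead.
    """
    have_header = False
    for is_header, payload in packets:
        if is_header:
            if not have_header:
                have_header = True
                yield payload          # the audio file header itself
            continue                   # skip periodic re-sends
        if have_header:
            yield payload              # decodable from here on

stream = [(False, b"d1"), (True, b"HDR"), (False, b"d2"),
          (True, b"HDR"), (False, b"d3")]
print(list(decode_udp_stream(stream)))   # [b'HDR', b'd2', b'd3']
```

Data received before the first header (b"d1" above) must be discarded because it cannot be decoded without the header information.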
The “si” (server info) command lists some information about the currently running server process.
-> Show PIDs, number of connected clients, uptime, and more:
By default para_server activates both the HTTP and the DCCP sender on startup. This can be changed via command line options or para_server’s config file.
-> List config file options for senders:
-> Receive a DCCP stream using CCID2 and write the output into a file:
Note the quotes around the arguments for the dccp receiver. Each receiver has its own set of command line options and its own command line parser, so arguments for the dccp receiver must be protected from being interpreted by para_recv.
-> Receive FEC-encoded multicast stream and write the output into a file:
-> Receive this (FEC-encoded) unicast stream:
-> Create a minimal config for para_audiod for HTTP streams:
A paraslash filter is a module which transforms an input stream into an output stream. Filters are included in the para_audiod executable and in the stand-alone tool para_filter which usually contains the same modules.
While para_filter reads its input stream from STDIN and writes the output to STDOUT, the filter modules of para_audiod are always connected to a receiver which produces the input stream and a writer which absorbs the output stream.
Some filters depend on a specific library and are not compiled in if this library was not found at compile time. To see the list of supported filters, run para_filter and para_audiod with the --help option. The output looks similar to the following:
Out of these filter modules, a chain of filters can be constructed, much in the way Unix pipes can be chained, and analogous to the use of modules in gstreamer: The output of the first filter becomes the input of the second filter. There is no limitation on the number of filters and the same filter may occur more than once.
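The chaining itself is nothing more than function composition, as the following sketch shows (the two toy filters are placeholders, not real paraslash modules):

```python
def chain(*filters):
    """Compose filter functions: the output of each feeds the next."""
    def run(data):
        for f in filters:
            data = f(data)
        return data
    return run

# Two toy "filters" standing in, e.g., for fecdec and a decoder.
drop_marker = lambda buf: buf.replace(b"#", b"")
upscale = lambda buf: buf * 2
pipeline = chain(drop_marker, upscale)
print(pipeline(b"a#b"))   # b'abab'
```

As with Unix pipes, the order matters and nothing prevents the same filter from appearing twice in the chain.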
Like receivers, each filter has its own command line options which must be quoted to protect them from the command line options of the driving application (para_audiod or para_filter). Example:
For para_audiod, each audio format has its own set of filters. The name of the audio format for which the filter should be applied can be used as the prefix for the filter option. Example:
The “mp3” prefix above is actually interpreted as a POSIX extended regular expression. Therefore
activates the prebuffer filter for all supported audio formats (because “.” matches all audio formats) while
activates it only for wma and ogg streams.
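The prefix matching can be illustrated with a short sketch (Python's re engine stands in for the POSIX extended regular expressions actually used, and the list of format names below is illustrative):

```python
import re

FORMATS = ["mp3", "ogg", "aac", "wma", "spx", "flac", "opus"]

def formats_matching(prefix):
    """Return the audio formats whose name matches the given
    regular-expression prefix (anchored at the start of the name)."""
    return [f for f in FORMATS if re.match(prefix, f)]

print(formats_matching("mp3"))       # ['mp3']
print(formats_matching("."))         # all formats
print(formats_matching("wma|ogg"))   # ['ogg', 'wma']
```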
For each supported audio format there is a corresponding filter which decodes audio data in this format to 16 bit PCM data which can be directly sent to the sound device or any other software that operates on undecoded PCM data (visualizers, equalizers etc.). Such filters are called decoders in general, and xxxdec is the name of the paraslash decoder for the audio format xxx. For example, the mp3 decoder is called mp3dec.
Note that the output of the decoder is about 10 times larger than its input. This means that filters that operate on the decoded audio stream have to deal with much more data than filters that transform the audio stream before it is fed to the decoder.
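The factor of 10 follows from simple arithmetic, as this sketch shows for a typical MP3 stream:

```python
# A 128 kbit/s MP3 stream decodes to 16-bit stereo PCM at 44.1 kHz.
encoded_kbps = 128
pcm_kbps = 44100 * 2 * 16 / 1000    # rate * channels * bits, in kbit/s
print(round(pcm_kbps / encoded_kbps, 1))   # roughly 11
```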
Paraslash relies on external libraries for most decoders, so these libraries must be installed for the decoder to be included in the executables. For example, the mp3dec filter depends on the mad library.
Forward error correction
As mentioned earlier, paraslash uses forward error correction (FEC) for the unreliable UDP and DCCP transports. FEC is a technique which was invented in 1960 by Reed and Solomon and which is widely used for the parity calculations of storage devices (RAID arrays). It is based on the algebraic concept of finite fields, today called Galois fields, in honour of the mathematician Évariste Galois (1811-1832). The FEC implementation of paraslash is based on code by Luigi Rizzo.
Although the details require a sound knowledge of the underlying mathematics, the basic idea is not hard to understand: For positive integers k and n with k < n it is possible to compute for any k given data bytes d_1, …, d_k the corresponding r := n - k parity bytes p_1, …, p_r such that all data bytes can be reconstructed from any k bytes of the set
FEC-encoding for unreliable network transports boils down to slicing the audio stream into groups of k suitably sized pieces called slices and computing the r corresponding parity slices. This step is performed in para_server which then sends both the data and the parity slices over the unreliable network connection. If the client was able to receive at least k of the n = k + r slices, it can reconstruct (FEC-decode) the original audio stream.
From these observations it is clear that there are three different FEC parameters: The slice size, the number of data slices k, and the total number of slices n. It is crucial to choose the slice size such that no fragmentation of network packets takes place because FEC only guards against losses and reordering but fails if slices are received partially.
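While paraslash's real implementation uses Reed-Solomon codes over Galois fields (Rizzo's code), the core idea can be illustrated with the simplest possible case, r = 1: the single parity slice is the XOR of the k data slices, and any k of the n = k + 1 slices recover the stream.

```python
def xor_slices(slices):
    """XOR equally sized byte strings together."""
    out = bytearray(len(slices[0]))
    for s in slices:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def encode(data_slices):
    """k data slices -> n = k + 1 slices (one XOR parity slice)."""
    return data_slices + [xor_slices(data_slices)]

def decode(received):
    """received: (index, slice) pairs, any k of the n = k + 1 slices."""
    k = len(received)
    present = dict(received)
    if all(i in present for i in range(k)):
        return [present[i] for i in range(k)]   # parity not needed
    # Exactly one data slice is missing; XOR of everything else
    # (including the parity slice) reconstructs it.
    missing = next(i for i in range(k) if i not in present)
    present[missing] = xor_slices(list(present.values()))
    del present[k]
    return [present[i] for i in range(k)]
```

With r > 1 parity slices the arithmetic moves to Galois fields, but the shape of the computation stays the same.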
FEC decoding in paraslash is performed through the fecdec filter which usually is the first filter (there can be other filters before fecdec if these do not alter the audio stream).
Volume adjustment (amp and compress)
The amp and the compress filter both adjust the volume of the audio stream. These filters operate on uncompressed audio samples. Hence they are usually placed directly after the decoding filter. Each sample is multiplied with a scaling factor (>= 1) which makes amp and compress quite expensive in terms of computing power.
The amp filter amplifies the audio stream by a fixed scaling factor that must be known in advance. For para_audiod this factor is derived from the amplification field of the audio file’s entry in the audio file table while para_filter uses the value given at the command line.
The optimal scaling factor F for an audio file is the largest real number F >= 1 such that after multiplication with F all samples still fit into the sample interval [-32768, 32767]. One can use para_filter in combination with the sox utility to compute F:
The amplification value V which is stored in the audio file table, however, is an integer between 0 and 255 which is connected to F through the formula
To store V in the audio file table, the command
is used. The reader is encouraged to write a script that performs these computations :)
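Such a script might start from the following sketch. Note that the mapping from F to V below is an assumed linear formula for illustration only; consult the paraslash sources for the actual relation.

```python
def optimal_factor(samples):
    """Largest F >= 1 such that F * s stays in [-32768, 32767]."""
    bound = min(32767 / s if s > 0 else -32768 / s
                for s in samples if s != 0)
    return max(1.0, bound)

def amp_value(factor):
    """Map F to the integer V stored in the audio file table.
    The linear mapping (F - 1) * 64 is an assumption, not
    necessarily the formula paraslash actually uses."""
    return max(0, min(255, round((factor - 1) * 64)))

samples = [1000, -16384, 8000]
f = optimal_factor(samples)
print(f, amp_value(f))   # 2.0 64
```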
Unlike the amplification filter, the compress filter adjusts the volume of the audio stream dynamically without prior knowledge about the peak value. It maintains the maximal volume of the last n samples of the audio stream and computes a suitable amplification factor based on that value and the various configuration options. It tries to choose this factor such that the adjusted volume meets the desired target level.
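A toy model of this dynamic adjustment (window size, target level and amplification cap are made-up values, not paraslash defaults):

```python
from collections import deque

def compress(samples, window=4, target=30000):
    """Dynamically amplify toward a target peak level (toy model)."""
    recent = deque(maxlen=window)   # magnitudes of the last few samples
    out = []
    for s in samples:
        recent.append(abs(s))
        peak = max(recent) or 1
        # Only amplify (factor >= 1), and cap the gain so that a
        # sudden loud passage does not clip catastrophically.
        gain = max(1.0, min(target / peak, 4.0))
        out.append(int(s * gain))
    return out

print(compress([1000, -2000, 1500]))   # quiet input gets amplified
```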
Note that it makes sense to combine amp and compress.
Misc filters (wav and prebuffer)
These filters are rather simple and do not modify the audio stream at all. The wav filter is only useful with para_filter and in connection with a decoder. It asks the decoder for the number of channels and the sample rate of the stream and adds a Microsoft wave header containing this information at the beginning. This allows writing wav files rather than raw PCM files (which do not contain any information about the number of channels and the sample rate).
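What the wav filter prepends can be sketched by constructing such a header manually (a minimal 44-byte canonical header; the real filter fills in the values reported by the decoder):

```python
import struct

def wav_header(data_len, channels=2, rate=44100, bits=16):
    """Build a minimal 44-byte RIFF/WAVE header for raw PCM data."""
    byte_rate = rate * channels * bits // 8
    block_align = channels * bits // 8
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + data_len, b"WAVE",
        b"fmt ", 16, 1,              # fmt chunk size, PCM format tag
        channels, rate, byte_rate, block_align, bits,
        b"data", data_len)

hdr = wav_header(8)
print(len(hdr))   # 44
```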
The prebuffer filter simply delays the output until the given time has passed (starting from the time the first byte was available in its input queue) or until the given amount of data has accumulated. It is mainly useful for para_audiod if the standard parameters result in buffer underruns.
Both filters require almost no additional computing time, even when operating on uncompressed audio streams, since data buffers are simply “pushed down” rather than copied.
Once an audio stream has been received and decoded to PCM format, it can be sent to a sound device for playback. This part is performed by paraslash writers which are described in this chapter.
A paraslash writer acts as a data sink that consumes but does not produce audio data. Paraslash writers operate on the client side and are contained in para_audiod and in the stand-alone tool para_write.
The para_write program reads uncompressed audio data from STDIN. If this data starts with a wav header, sample rate, sample format and channel count are read from the header. Otherwise CD audio (44.1KHz 16 bit little endian, stereo) is assumed but this can be overridden by command line options. para_audiod, on the other hand, obtains the sample rate and the number of channels from the decoder.
Like receivers and filters, each writer has an individual set of command line options, and for para_audiod writers can be configured per audio format separately. It is possible to activate more than one writer for the same stream simultaneously.
Unfortunately, the various flavours of Unix on which paraslash runs have different APIs for opening a sound device and starting playback. Hence for each such API there is a paraslash writer that can play the audio stream via this API.
-> Use the OSS writer to play a wav file:
-> Enable ALSA software mixing for mp3 streams:
para_gui executes an arbitrary command which is supposed to print status information to STDOUT. It then displays this information in a curses window. By default the command
is executed, but this can be customized via the --stat-cmd option. In particular it is possible to use
to make para_gui work on systems on which para_audiod is not running.
It is possible to bind keys to arbitrary commands via custom key-bindings. Besides the internal keys which can not be changed (help, quit, loglevel, version…), the following flavours of key-bindings are supported:
The general form of a key binding is
which maps key k to command c using mode m. Mode may be x, d or p for external, display and paraslash commands, respectively.
Currently there are only two themes for para_gui. It is easy, however, to add more themes. To create a new theme one has to define the position, color and geometry for each status item that should be shown by this theme. See gui_theme.c for examples.
The “.” and “,” keys are used to switch between themes.
-> Show server info:
-> Jump to the middle of the current audio file by pressing F5:
-> vi-like bindings for jumping around:
-> Print the current date and time:
-> Call other curses programs:
Paraslash is an open source project and contributions are welcome. Here’s a list of things you can do to help the project:
Note that there is no mailing list, no bug tracker and no discussion forum for paraslash. If you’d like to contribute, or have questions about contributing, send email to Andre Noll email@example.com. New releases are announced by email. If you would like to receive these announcements, contact the author through the above address.
In order to compile the sources from the git repository (rather than from tar balls) and for contributing non-trivial changes to the paraslash project, some additional tools should be installed on a developer machine.
Paraslash has been developed using the git source code management tool since 2006. Development is organized roughly in the same spirit as the git development itself, as described below.
The following text passage is based on “A note from the maintainer”, written by Junio C Hamano, the maintainer of git.
There are four branches in the paraslash repository that track the source tree: “master”, “maint”, “next”, and “pu”.
The “master” branch is meant to contain what is well tested and ready to be used in a production setting. There could occasionally be minor breakages or brown paper bag bugs but they are not expected to be anything major, and more importantly quickly and easily fixable. Every now and then, a “feature release” is cut from the tip of this branch, named with three dotted decimal digits, like 0.4.2.
Whenever changes are about to be included that will eventually lead to a new major release (e.g. 0.5.0), a “maint” branch is forked off from “master” at that point. Obvious, safe and urgent fixes after the major release are applied to this branch and maintenance releases are cut from it. New features never go to this branch. This branch is also merged into “master” to propagate the fixes forward.
A trivial and safe enhancement goes directly on top of “master”. New development does not usually happen on “master”, however. Instead, a separate topic branch is forked from the tip of “master”, and it first is tested in isolation; usually there are a handful of such topic branches that are running ahead of “master”. The tip of these branches is not published in the public repository to keep the number of branches that downstream developers need to worry about low.
The quality of topic branches varies widely. Some of them start out as “good idea but obviously is broken in some areas” and then with some more work become “more or less done and can now be tested by wider audience”. Luckily, most of them start out in the latter, better shape.
The “next” branch is to merge and test topic branches in the latter category. In general, this branch always contains the tip of “master”. It might not be quite rock-solid production ready, but is expected to work more or less without major breakage. The maintainer usually uses the “next” version of paraslash for his own pleasure, so it cannot be that broken. The “next” branch is where new and exciting things take place.
The two branches “master” and “maint” are never rewound, and “next” usually will not be either (this automatically means the topics that have been merged into “next” are usually not rebased, and you can find the tip of topic branches you are interested in from the output of “git log next”). You should be able to safely build on top of them.
However, at times “next” will be rebuilt from the tip of “master” to get rid of merge commits that will never be in “master”. The commit that replaces “next” will usually have the identical tree, but it will have different ancestry from the tip of “master”.
The “pu” (proposed updates) branch bundles the remainder of the topic branches. The “pu” branch, and topic branches that are only in “pu”, are subject to rebasing in general. By the above definition of how “next” works, you can tell that this branch will contain quite experimental and obviously broken stuff.
When a topic that was in “pu” proves to be in testable shape, it graduates to “next”. This is done with
Sometimes, an idea that looked promising turns out to be not so good and the topic can be dropped from “pu” in such a case.
A topic that is in “next” is expected to be polished to perfection before it is merged to “master”. Similar to the above, this is done with
Note that being in “next” is not a guarantee to appear in the next release (being in “master” is such a guarantee, unless it is later found seriously broken and reverted), nor even in any future release.
The preferred coding style for paraslash coincides more or less with the style of the Linux kernel. So rather than repeating what is written there, here are the most important points.
Doxygen is a documentation system for various programming languages. The API reference on the paraslash web page is generated by doxygen.
It is more illustrative to look at the source code for examples than to describe the conventions in this manual, so we only describe which parts of the code need doxygen comments, but leave out details on documentation conventions.
As a rule, only the public part of the C source is documented with Doxygen. This includes structures, defines and enumerations in header files as well as public (non-static) C functions. These should be documented completely. For example, each parameter and the return value of a public function should get a descriptive doxygen comment.
No doxygen comments are necessary for static functions and for structures and enumerations in C files (which are used only within this file). This does not mean, however, that those entities need no documentation at all. Instead, common sense should be applied to document what is not obvious from reading the code.
The Internet Protocol is the primary networking protocol used for the Internet. All protocols described below use IP as the underlying layer. Both the prevalent IPv4 and the next-generation IPv6 variant are being deployed actively worldwide.
Connection-oriented and connectionless protocols
Connectionless protocols differ from connection-oriented ones in that state associated with the sending/receiving endpoints is treated implicitly. Connectionless protocols maintain no internal knowledge about the state of the connection. Hence they are not capable of reacting to state changes, such as sudden loss or congestion on the connection medium. Connection-oriented protocols, in contrast, make this knowledge explicit. The connection is established only after a bidirectional handshake which requires both endpoints to agree on the state of the connection, and may also involve negotiating specific parameters for the particular connection. Maintaining an up-to-date internal state of the connection also in general means that the sending endpoints perform congestion control, adapting to qualitative changes of the connection medium.
In IP networking, packets can be lost, duplicated, or delivered out of order, and different network protocols handle these problems in different ways. We call a transport-layer protocol reliable, if it turns the unreliable IP delivery into an ordered, duplicate- and loss-free delivery of packets. Sequence numbers are used to discard duplicates and re-arrange packets delivered out-of-order. Retransmission is used to guarantee loss-free delivery. Unreliable protocols, in contrast, do not guarantee ordering or data integrity.
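The reordering and duplicate handling described above can be sketched as follows (loss recovery via retransmission omitted):

```python
def deliver_in_order(packets):
    """Turn unreliable delivery into ordered, duplicate-free output.

    packets: (sequence_number, payload) pairs as they arrive,
    possibly reordered or duplicated.
    """
    buffered = {}
    next_seq = 0
    out = []
    for seq, payload in packets:
        if seq < next_seq or seq in buffered:
            continue                     # duplicate: discard
        buffered[seq] = payload
        while next_seq in buffered:      # flush the contiguous prefix
            out.append(buffered.pop(next_seq))
            next_seq += 1
    return out

arrivals = [(1, "b"), (0, "a"), (0, "a"), (3, "d"), (2, "c")]
print(deliver_in_order(arrivals))   # ['a', 'b', 'c', 'd']
```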
With these definitions the protocols which are used by paraslash for streaming audio data may be classified as follows.
Below we give short descriptions of these protocols.
The Transmission Control Protocol provides reliable, ordered delivery of a stream and a classic window-based congestion control. In contrast to UDP and DCCP (see below), TCP does not have record-oriented or datagram-based syntax, i.e. it provides a stream which is unaware and independent of any record (packet) boundaries. TCP is used extensively by many application layers. Besides HTTP (the Hypertext Transfer Protocol), FTP (the File Transfer Protocol), SMTP (the Simple Mail Transfer Protocol) and SSH (Secure Shell) all sit on top of TCP.
The User Datagram Protocol is the simplest transport-layer protocol, built as a thin layer directly on top of IP. For this reason, it offers the same best-effort service as IP itself, i.e. there is no detection of duplicate or reordered packets. Being a connectionless protocol, only minimal internal state about the connection is maintained, which means that there is no protection against packet loss or network congestion. Error checking and correction (if at all) are performed in the application.
The Datagram Congestion Control Protocol combines the connection-oriented state maintenance known from TCP with the unreliable, datagram-based transport of UDP. This means that it is capable of reacting to changes in the connection by performing congestion control, offering multiple alternative approaches. But it is bound to datagram boundaries (the maximum packet size supported by a medium), and like UDP it lacks retransmission to protect against loss. Due to the use of sequence numbers, it is however able to react to loss (interpreted as a congestion indication) and to ignore out-of-order and duplicate packets. Unlike TCP, it allows the endpoints to negotiate specific, binding features for a connection, such as the choice of congestion control: classic, window-based congestion control known from TCP is available as CCID-2, while rate-based, “smooth” congestion control is offered as CCID-3.
The Hypertext Transfer Protocol is an application layer protocol on top of TCP. It is spoken by web servers and is most often used for web services. However, as the many Internet radio stations and YouTube/Flash videos show, HTTP is by no means limited to the delivery of web pages. Being a simple request/response based protocol, its semantics also allow the delivery of multimedia content, such as audio over HTTP.
IP multicast is not really a protocol but a technique for one-to-many communication over an IP network. The challenge is to deliver information to a group of destinations simultaneously using the most efficient strategy, i.e. to send the messages over each link of the network only once. This has benefits for streaming multimedia: the standard one-to-one unicast offered by TCP/DCCP means that n clients listening to the same stream also consume n times the resources, whereas multicast requires the stream to be sent just once, irrespective of the number of receivers. Since it would be costly to maintain state for each listening receiver, multicast often implies connectionless transport, which is the reason that it is currently only available via UDP.
Abstract socket namespace
UNIX domain sockets are a traditional way to communicate between processes on the same machine. They are always reliable (see above) and don’t reorder datagrams. Unlike TCP and UDP, UNIX domain sockets support passing open file descriptors or process credentials to other processes.
The usual way to set up a UNIX domain socket (as obtained from socket(2)) for listening is to first bind the socket to a file system pathname and then call listen(2), then accept(2). Such sockets are called pathname sockets because bind(2) creates a special socket file at the specified path. Pathname sockets allow unrelated processes to communicate with the listening process: they simply call connect(2) with the same path.
There are two problems with pathname sockets:
The abstract socket namespace is a non-portable Linux feature which avoids these problems. Abstract sockets are still bound to a name, but the name has no connection with file system pathnames.
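On Linux, binding to a name with a leading NUL byte selects the abstract namespace, as this sketch shows (the socket name is arbitrary):

```python
import socket

# Linux-only: the leading NUL byte puts the name in the abstract
# namespace -- no socket file is created, so there is nothing to
# unlink and no file permissions to worry about.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind("\0paraslash-demo")
listener.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect("\0paraslash-demo")
```

When the last process closes such a socket, the name disappears automatically, which also avoids the stale-socket-file problem of pathname sockets.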
Paraslash is licensed under the GPL, version 2. Most of the code base has been written from scratch, and those parts are GPL V2 throughout. Notable exceptions are FEC and the WMA decoder. See the corresponding source files for licensing details for these parts. Some code snippets of several other third party software packages have been incorporated into the paraslash sources, for example log message coloring was taken from the git sources. These third party software packages are all published under the GPL or some other license compatible with the GPL.
Many thanks to Gerrit Renker who read an early draft of this manual and contributed significant improvements.
Application web pages