Paraslash user manual
This document describes how to install, configure and use the paraslash network audio streaming system. Most chapters start with a chapter overview and conclude with an example section. We try to focus on general concepts and on the interaction of the various pieces of the paraslash package. Hence this user manual is not meant as a replacement for the manual pages that describe all command line options of each paraslash executable.
In this chapter we give an overview of the interactions of the programs contained in the paraslash package, followed by brief descriptions of all executables.
The core functionality of the para suite is provided by two main applications, para_server and para_audiod. para_server maintains the audio file database and acts as the streaming source, while para_audiod is the streaming client. Usually, both run in the background on different hosts but a local setup is also possible.
A simplified picture of a typical setup is as follows
.____________________.
| ______ |
.-----------------------. | .d########b. |
|.---------------------.| | .d############b |
|| || | .d######""####//b. |
|| || | 9######( )######P |
|| || | 'b######++######d' |
|| Screen || | "9############P" |
|| || | "9a########P" |
|| || | `""""'' |
|`---------------------'| | ________________ |
`-----------------------' | |________________| |
___) (___ |____________________|
`-._______.-' loudspeaker
| |
| |
| |
.____/ \___. ._____________. ._____/ \_____.
| | | | | |
| para_gui |-----| para_audioc |-----| para_audiod |
|____ ___| |_____________| |_____ _____|
\ / \ /
| |
| |
| |
._____/ \_____. ._____/ \_____.
| | | |
| para_client |-----------------------| para_server |
|_____________| |_____ _____|
\ /
|
|
.-'"""`-.
( )
|`-.___.-'|
| |
|. ' " ` .|
| |
`-.___.-'
Database
The two client programs, para_client and para_audioc, communicate with para_server and para_audiod, respectively.
para_gui controls para_server and para_audiod by executing para_client and para_audioc. In particular, it runs a command to obtain the state of para_audiod and para_server, and the metadata of the current audio file. This information is pretty-printed in a curses window.
para_server streams binary audio data (MP3, …) over local and/or remote networks. It listens on a TCP port and accepts commands such as play, stop, pause, next from authenticated clients. The components of para_server are illustrated in the following diagram:
______________________________________________________________________ network
| | | | |
| .-'""""`-. | | | |
| ( ) | | | |
.____/ \_____. |`-.____.-'| .____/ \____/ \____/ \____. |
| | | | | | |
| dispatcher | | database | | senders (http/udp/dccp) | |
|____ _____| | | |___________ ___________| |
\ / |. ' "" ` .| \ / |
| | | | |
| `-.____.-' | |
| | | |
| | | |
| | | |
| ._____/ \_____. .________/ \________. |
| | | | | |
| | audio file |________| virtual streaming | |
| | selector | | system | |
| |_____ _____| |________ ________| |
| \ / \ / |
| | | |
| | | |
| | ._________________. | |
| | | | | |
| `---| command handler |---' |
| |____ ___ ____| |
| \ / \ / |
| | | |
| | | |
| | | |
`-------------------------' `--------------------------'
Incoming connections arrive at the dispatcher which creates a process dedicated to the connection. Its task is to authenticate the client and to run the command handler which forwards the client request to either the audio file selector or the virtual streaming system. Results (if any) are sent back to the client.
The audio file selector manages audio files using various database tables. It maintains statistics on the usage of all audio files such as last-played time and the number of times each file was selected. It is also responsible for selecting and loading audio files for streaming. Additional information may be added to the database to allow fine-grained selection based on various properties of the audio file, including information found in (ID3) tags. Simple playlists are also supported. It is possible to store images (album covers) and lyrics in the database and associate these to the corresponding audio files. The section on the audio file selector discusses this topic in more detail.
Another component of para_server is the virtual streaming system, which controls the paraslash senders. During streaming it requests small chunks of data (e.g., mp3 frames) from the audio file selector and feeds them to the senders which forward the chunks to connected clients.
The three senders of para_server correspond to network streaming protocols based on HTTP, DCCP, or UDP. This is explained in the section on networking.
para_client is the client program used to connect to para_server. Paraslash commands are sent to para_server and the response is dumped to STDOUT. This can be used by any scripting language to produce user interfaces with little programming effort.
All connections between para_server and para_client are encrypted with a symmetric session key. For each user of paraslash you must create a public/secret RSA key pair for authentication.
If para_client is started without non-option arguments, an interactive session (shell) is started. Command history and command completion are supported through libreadline.
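As a sketch of the scripting use case, the following loop appends the raw output of the server info command (shown in the Quick start section below) to a log file once per minute; parsing the output is left out:
while sleep 60; do
	para_client si
done >> /tmp/para_server.log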
The purpose of para_audiod is to download, decode and play an audio stream received from para_server. A typical setup looks as follows.
.----------------------------.
| |
| |
._____/ \_____. .___/ \____.
| | .----------| |
| para_server | | .______| receiver |
|_____ ____| | | |___ ____|
\ / | | \ /
| | | |
| | | |
| | | |
._____/ \_____. | | .___/ \____.
| | | | | |
| status task |-----+ | | filter 1 |
|_____________| | |___ ____|
| \ /
| | .____________________.
| | | ______ |
.____________. | .___/ \____. | .d########b. |
| | | | | | .d############b |
| dispatcher |----------' | filter 2 | | .d######""####//b. |
|_____ ____| |___ ____| | 9######( )######P |
\ / \ / | 'b######++######d' |
| | | "9############P" |
| | | "9a########P" |
._____/ \_____. .___/ \____. | `""""'' |
| | | | | ________________ |
| para_audioc | | writer |------| |________________| |
|_____________| |__________| |____________________|
The status task of para_audiod connects to para_server and runs the “stat” command to retrieve the current server status. If an audio stream is available, para_audiod starts a so-called buffer tree to play the stream.
The buffer tree consists of a receiver, any number of filters and a writer. The receiver downloads the audio stream from para_server and the filters decode or modify the received data. The writer plays the decoded stream.
The dispatcher of para_audiod listens on a local socket and runs audiod commands on behalf of para_audioc. For example, para_gui runs para_audioc to obtain status information about para_audiod and the current audio file. Access to the local socket may be restricted by means of Unix socket credentials.
para_audioc is the client program which talks to para_audiod. It is used to control para_audiod, to receive status info, or to grab the stream at any point of the decoding process. Like para_client, para_audioc supports interactive sessions on systems with libreadline.
para_recv is a command line HTTP/DCCP/UDP stream grabber. The http mode is compatible with arbitrary HTTP streaming sources (e.g. icecast). In addition to the three network streaming modes, para_recv can also operate in local (afh) mode. In this mode it writes the content of an audio file on the local file system in complete chunks to stdout, optionally ‘just in time’. This allows cutting audio files without decoding, and it enables third-party software which is unaware of the particular audio format to send complete frames in real time.
para_filter is a filter program that reads from STDIN and writes to STDOUT. Like para_recv, it is an atomic building block which can be used to assemble higher-level audio receiving facilities. It combines several different functionalities in one tool: decoders for multiple audio formats and a number of processing filters, among these a normalizer for audio volume.
para_afh is a small stand-alone program that prints technical information about the given audio file to STDOUT. It can be instructed to print a “chunk table”, an array of offsets within the audio file.
para_write is a modular audio stream writer. It supports a simple file writer output plug-in and optional WAV/raw players for ALSA (Linux) and OSS. para_write can also be used as a stand-alone WAV or raw audio player.
para_play is a command line audio player which supports the same audio formats as para_server. It differs from other players in that it has an insert mode and a command mode, like the vi editor. Line editing is based on libreadline, and tab completion and command history are supported.
para_gui is a curses-based GUI which presents the gathered status information in a curses window. Appearance can be customized via themes. para_gui provides key-bindings for the most common server commands, and new key-bindings can be added easily.
para_fade is an alarm clock and volume-fader for OSS and ALSA.
This chapter lists the necessary software that must be installed to compile the paraslash package, describes how to compile and install the paraslash source code, and explains the steps required to set up a typical server and client.
In a nutshell, the lopsub and libosl libraries can be built from source and the remaining dependencies installed as Debian packages like this:
git clone https://git.tuebingen.mpg.de/lopsub
cd lopsub && make && sudo make install
git clone https://git.tuebingen.mpg.de/osl
cd osl && make && sudo make install && sudo ldconfig
sudo apt-get install autoconf libssl-dev m4 \
	libmad0-dev libid3tag0-dev libasound2-dev libvorbis-dev \
	libfaad-dev libspeex-dev libflac-dev libsamplerate-dev \
	libao-dev libreadline-dev libncurses-dev \
	libopus-dev
In any case you will need
lopsub. The long option parser for subcommands generates the command line and config file parsers for all paraslash executables. Clone the source code repository with
git clone https://git.tuebingen.mpg.de/lopsub
gcc or clang. All gcc versions >= 5.4 are currently supported. Moderately recent versions of clang should work as well.
gnu make is also shipped with the distro. On BSD systems the gnu make executable is often called gmake.
bash. Some scripts which run during compilation require the Bourne again shell. It is most likely already installed.
m4. Some source files are generated from templates by the m4 macro processor.
Optional:
libosl. The object storage layer library is used by para_server. To clone the source code repository, execute
git clone https://git.tuebingen.mpg.de/osl
openssl or libgcrypt. At least one of these two libraries is needed as the backend for cryptographic routines on both the server and the client side. Both openssl and libgcrypt are usually shipped with the distro, but you might have to install the development package (libssl-dev or libgcrypt-dev on debian systems) as well.
flex and bison are needed to build the mood parser of para_server. The build system will skip para_server if these tools are not installed.
libmad. To compile in MP3 support for paraslash, the development package must be installed. It is called libmad0-dev on debian-based systems. Note that libmad is not necessary on the server side, i.e., for sending MP3 files.
libid3tag. For version-2 ID3 tag support, you will need the libid3tag development package libid3tag0-dev. Without libid3tag, only version-1 tags are recognized. The mp3 tagger also needs this library for modifying (id3v1 and id3v2) tags.
ogg vorbis. For ogg vorbis streams you need libogg, libvorbis and libvorbisfile. The corresponding Debian packages are called libogg-dev and libvorbis-dev.
libfaad and mp4ff. For aac files (m4a) you need libfaad and libmp4ff (package: libfaad-dev). Note that for some distributions, e.g. Ubuntu, mp4ff is not part of the libfaad package. Install the faad library from sources (available through the above link) to get the mp4ff library and header files.
speex. In order to stream or decode speex files, libspeex (libspeex-dev) is required.
flac. To stream or decode files encoded with the Free Lossless Audio Codec, libFLAC (libFLAC-dev) must be installed.
libsamplerate. The resample filter will only be compiled if this library is installed. Debian package: libsamplerate-dev.
alsa-lib. On Linux, you will need to have the ALSA development package libasound2-dev installed.
libao. Needed to build the ao writer (ESD, PulseAudio, …). Debian package: libao-dev.
curses. Needed for para_gui. Debian package: libncurses-dev.
GNU Readline. Only if this library (libreadline-dev) is installed, para_play is built. Without it, para_client(1) and para_audioc(1) still work, but lack support for interactive sessions.
To build the sources from a tarball, execute
./configure && make
To build from git or a gitweb snapshot, run this command instead:
./autogen.sh
There should be no errors but probably some warnings about missing packages which usually implies that not all audio formats will be supported. If headers or libs are installed at unusual locations you might need to tell the configure script where to find them. Try
./configure --help
to see a list of options. If the paraslash package was compiled successfully, execute (optionally)
make test
to run the paraslash test suite. If all tests pass, execute as root
make install
to install executables under /usr/local/bin and the man pages under /usr/local/man.
In order to control para_server at runtime you must create a paraslash user. As authentication is based on the RSA crypto system you’ll have to create an RSA key pair. If you already have a user and an RSA key pair, you may skip this step.
In this section we’ll assume a typical setup: You would like to run para_server on some host called server_host as user foo, and you want to connect to para_server from another machine called client_host as user bar.
As foo@server_host, create ~/.paraslash/server.users by typing the following commands:
user=bar
target=~/.paraslash/server.users
key=~/.paraslash/id_rsa.pub.$user
perms=AFS_READ,AFS_WRITE,VSS_READ,VSS_WRITE
mkdir -p ~/.paraslash
echo "user $user $key $perms" >> $target
Next, change to the “bar” account on client_host and generate the key pair with the command
ssh-keygen -q -t rsa -b 2048 -N '' -m RFC4716
This generates the two files id_rsa and id_rsa.pub in ~/.ssh. Note that para_server won’t accept keys shorter than 2048 bits. Moreover, para_client rejects private keys which are world-readable.
para_server only needs to know the public key of the key pair just created. Copy this public key to server_host:
src=~/.ssh/id_rsa.pub
dest=.paraslash/id_rsa.pub.$LOGNAME
scp $src foo@server_host:$dest
Finally, tell para_client to connect to server_host:
conf=~/.paraslash/client.conf
echo 'hostname server_host' > $conf
For this first try, we’ll use the info loglevel to make the output of para_server more verbose.
para_server -l info
Now you can use para_client to connect to the server and issue commands. Open a new shell as bar@client_host and try
para_client help
para_client si
to retrieve the list of available commands and some server info. Don’t proceed if this doesn’t work.
An empty database is created with
para_client init
This initializes a couple of empty tables under ~/.paraslash/afs_database-0.7. You normally don’t need to look at these tables, but it’s good to know that you can start from scratch with
rm -rf ~/.paraslash/afs_database-0.7
in case something went wrong.
Next, you need to add some audio files to that database so that para_server knows about them. Choose an absolute path to a directory containing some audio files and add them to the audio file table:
para_client add /my/mp3/dir
This might take a while, so it is a good idea to start with a directory containing not too many files. Note that the table only contains data about the audio files found, not the files themselves.
You may print the list of all known audio files with
para_client ls
We will have to tell para_audiod that it should receive the audio stream from server_host via http:
para_audiod -l info -r '.:http -i server_host'
You should now be able to listen to the audio stream once para_server starts streaming. To activate streaming, execute
para_client play
Since no playlist has been specified yet, the “dummy” mode which selects all known audio files is activated automatically. See the section on the audio file selector for how to use playlists and moods to specify which files should be streamed in which order.
To identify streaming problems try to receive, decode and play the stream manually using para_recv, para_filter and para_write as follows. For simplicity we assume that you’re running Linux/ALSA and that only MP3 files have been added to the database.
para_recv -r 'http -i server_host' > file.mp3
# (interrupt with CTRL+C after a few seconds)
ls -l file.mp3 # should not be empty
para_filter -f mp3dec -f wav < file.mp3 > file.wav
ls -l file.wav # should be much bigger than file.mp3
para_write -w alsa < file.wav
Double check what is logged by para_server and use the --loglevel option of para_recv, para_filter and para_write to increase verbosity.
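For example, the whole chain can also be run as a single pipeline with increased verbosity (a sketch; adjust the receiver and filter arguments to your setup):
para_recv --loglevel debug -r 'http -i server_host' |
	para_filter --loglevel debug -f mp3dec -f wav |
	para_write --loglevel debug -w alsa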
para_server uses a challenge-response mechanism to authenticate requests from incoming connections, similar to ssh’s public key authentication method. Authenticated connections are encrypted using the AES stream cipher in integer counter mode.
In this chapter we briefly describe RSA and AES, and sketch the authentication handshake between para_client and para_server. User management is discussed in the section on the user_list file. These sections are all about communication between the client and the server. Connecting para_audiod is a different matter and is described in a separate section.
A block cipher is a transformation which operates on fixed-length blocks. For symmetric block ciphers the transformation is determined by a single key for both encryption and decryption. For asymmetric block ciphers, on the other hand, the key consists of two parts, called the public key and the private key. A message can be encrypted with either key and only the counterpart of that key can decrypt the message. Asymmetric block ciphers can be used for both signing and encrypting a message.
RSA is an asymmetric block cipher which is used in many applications, including ssh and gpg. The RSA public key encryption and signatures algorithms are defined in detail in RFC 2437. Paraslash relies on RSA for authentication.
Stream ciphers XOR the input with a pseudo-random key stream to produce the output. Decryption uses the same function calls as encryption. Any block cipher can be turned into a stream cipher by generating the pseudo-random key stream by encrypting successive values of a counter (counter mode).
AES, the advanced encryption standard, is a well-known symmetric block cipher. Paraslash employs AES in counter mode as described above to encrypt communications. Since a stream cipher key must not be used twice, a random key is generated for every new connection.
The authentication handshake between para_client and para_server goes as follows:
para_client connects to para_server and sends an authentication request for a user. It does so by connecting to TCP port 2990 of the server host. This port is called the para_server control port.
para_server accepts the connection and forks a child process which handles the incoming request. The parent process keeps listening on the control port while the child process (also called para_server below) continues as follows.
para_server loads the RSA public key of that user, fills a fixed-length buffer with random bytes, encrypts that buffer using the public key and sends the encrypted buffer to the client. The first part of the buffer is the challenge which is used for authentication while the second part is the session key.
para_client receives the encrypted buffer and decrypts it with the user’s private key, thereby obtaining the challenge buffer and the session key. It hashes the challenge buffer with a cryptographic hash function, sends the hash value back to para_server and stores the session key for further use.
para_server also computes the hash value of the challenge and compares it against what was sent back by the client.
If the two hashes do not match, the authentication has failed and para_server closes the connection.
Otherwise the user is considered authenticated and the client is allowed to proceed by sending a command to be executed. From this point on the communication is encrypted using the stream cipher with the session key known to both peers.
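The following sequence of standard command line tools illustrates the idea. It is only a conceptual sketch, not the wire format or the key format paraslash actually uses, and it assumes a PEM-encoded RSA key pair pub.pem/priv.pem:
head -c 64 /dev/urandom > challenge              # server: create a random buffer
openssl pkeyutl -encrypt -pubin -inkey pub.pem \
	-in challenge -out challenge.enc         # server: encrypt with the user's public key
openssl pkeyutl -decrypt -inkey priv.pem \
	-in challenge.enc -out challenge.dec     # client: decrypt with the private key
sha256sum challenge.dec                          # client: send this hash value back
sha256sum challenge                              # server: must match the received hash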
paraslash relies on the quality of the pseudo-random bytes provided by the crypto library (openssl or libgcrypt), on the security of the implementation of the RSA and AES crypto routines and on the infeasibility of inverting the hash function.
Neither para_server nor para_client creates RSA keys on its own. This has to be done once for each user as sketched in Quick start and discussed in more detail below.
At startup para_server reads the user list file which contains one line per user. The default location of the user list file may be changed with the --user-list option.
There should be at least one user in this file. Each user must have an RSA key pair. The public part of the key is needed by para_server while the private key is needed by para_client. Each line of the user list file must be of the form
user <username> <key> <perms>
where username is an arbitrary string (usually the user’s login name), key is the full path to that user’s public RSA key, and perms is a comma-separated list of zero or more of the following permission bits:
+-----------+---------------------------------------------+
| AFS_READ  | read the contents of the databases          |
+-----------+---------------------------------------------+
| AFS_WRITE | change database contents                    |
+-----------+---------------------------------------------+
| VSS_READ  | obtain information about the current stream |
+-----------+---------------------------------------------+
| VSS_WRITE | change the current stream                   |
+-----------+---------------------------------------------+
The permission bits specify which commands the user is allowed to execute. The output of
para_client help
contains the permissions needed to execute the command.
It is possible to make para_server reread the user_list file by executing the paraslash “hup” command or by sending SIGHUP to the PID of para_server.
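For example, either of the following has the same effect (replace the placeholder with the actual PID):
para_client hup
# or, equivalently, send SIGHUP to the PID of the main server process:
# kill -HUP <pid_of_para_server>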
para_audiod listens on a Unix domain socket. Those sockets are for local communication only, so only local users can connect to para_audiod. The default is to let any user connect but this can be restricted on platforms that support UNIX socket credentials which allow para_audiod to obtain the Unix credentials of the connecting process.
Use para_audiod’s --user-allow option to allow connections only for a limited set of users.
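A possible invocation is sketched below; the exact syntax of the option (one user name per occurrence is assumed here) is described in the help output of para_audiod:
para_audiod --user-allow alice --user-allow bob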
paraslash comes with a sophisticated audio file selector (AFS), whose main task is to determine which file to stream next, based on information about the audio files stored in a database. It also communicates with para_client via the command handler whenever an AFS command is executed, for example to answer a database query.
Besides simple playlists, AFS supports audio file selection based on moods, which act as filters that limit the set of all known audio files to those which satisfy certain criteria. It also maintains tables containing images (e.g. album cover art) and lyrics that can be associated with one or more audio files.
In this chapter we sketch the setup of the AFS process during server startup and proceed with the description of the layout of the various database tables. The section on playlists and moods explains these two audio file selection mechanisms in detail and contains practical examples. The way file renames and content changes are detected is discussed briefly before the Troubleshooting section concludes the chapter.
On startup, para_server forks to create the AFS process which opens the database tables. The AFS process accepts incoming connections which arrive either on a pipe which is shared with para_server, or on the local socket it is listening on. The setup is as follows.
.___________________. .______________.
| | | |
| virtual streaming | | audio format |
| system | | handler |
|_________ _______| |_____ ______|
\ / \ /
| |
.-'""""`-. | | .-'""""`-.
( ) | | ( )
|`-.____.-'| .__/ \________________/ \___. |`-.____.-'|
| | | | | |
| file |----| AFS (audio file selector) |----| OSL |
| system | | process | | database |
| | |___________________________| | |
|. ' "" ` .| | |. ' "" ` .|
| | | | |
`-.____.-' | `-.____.-'
._______/ \_______.
| |
| command handler |
|_______ _______|
\ /
|
|
|
._____/ \_____.
| |
| para_client |
|_____________|
The virtual streaming system, which is part of the server process, communicates with the AFS process via pipes and shared memory. When the current audio file changes, it sends a notification through the shared pipe. The AFS process queries the database to determine the next audio file, opens it, verifies that it has not been changed since it was added to the database and passes the open file descriptor back to the virtual streaming system, along with audio file meta-data such as file name, duration, audio format and so on. The virtual streaming system then starts to stream the file.
The command handlers of all AFS server commands use the local socket to query or update the database. For example, the command handler of the add command sends the path of an audio file to the local socket. The AFS process opens the file and tries to find an audio format handler which recognizes the file. If all goes well, a new database entry with metadata obtained from the audio format handler is added to the database.
Note that AFS employs libosl, the object storage layer library, as the database backend. This library offers functionality similar to a relational database, but is much more lightweight than a full featured database management system.
Metadata about the known audio files is stored in an OSL database. This database consists of the following tables:
The audio file table contains path, hash and metadata of each known file.
The “attributes” table maps each of the 64 possible attributes to a string.
The “blob” tables store images, lyrics, moods, playlists. All of these are optional.
The “score” table describes the subset of admissible files for the current playlist or mood.
All tables are described in more detail below.
This is the most important and usually also the largest table of the AFS database. It contains the information needed to stream each audio file. In particular the following data is stored for each audio file.
The cryptographic hash value of the audio file contents. This is computed once when the file is added to the database. Whenever AFS selects this audio file for streaming the hash value is recomputed and checked against the value stored in the database to detect content changes.
The time when this audio file was last played.
The number of times the file has been played so far.
The attribute bitmask.
The image id which describes the image associated with this audio file.
The lyrics id which describes the lyrics associated with this audio file.
The audio format id (MP3, OGG, …).
An amplification value that can be used by the amplification filter to pre-amplify the decoded audio stream.
The chunk table. It describes the location and the timing of the building blocks of the audio file. This is used by para_server to send chunks of the file at appropriate times.
The duration of the audio file.
Tag information contained in the audio file (ID3 tags, Vorbis comments, …).
The number of channels.
The encoding bitrate.
The sampling frequency.
To add or refresh the data contained in the audio file table, the add command is used. It takes the full path of either an audio file or a directory. In the latter case, the directory is traversed recursively and all files which are recognized as valid audio files are added to the database.
The attribute table contains two columns, name and bitnum. An attribute is simply a name for a certain bit number in the attribute bitmask of the audio file table.
Each of the 64 bits of the attribute bitmask can be set for each audio file individually. Hence up to 64 different attributes may be defined. For example, “pop”, “rock”, “blues”, “jazz”, “instrumental”, “german_lyrics”, “speech”, whatever. You are free to choose as many attributes as you like and there are no naming restrictions for attributes.
A new attribute “test” is created by
para_client addatt test
and para_client lsatt
lists all available attributes. You can set the “test” attribute for an audio file by executing
para_client setatt test+ /path/to/the/audio/file
Similarly, the “test” bit can be removed from an audio file with
para_client setatt test- /path/to/the/audio/file
Instead of a path you may use a shell wildcard pattern. The attribute is applied to all audio files matching this pattern:
para_client setatt test+ '/test/directory/*'
The command
para_client -- ls -l=v
gives you a verbose listing of your audio files also showing which attributes are set.
In case you wonder why the double-dash in the above command is needed: It tells para_client to not interpret the options after the dashes. If you find this annoying, just say
alias para='para_client --'
and be happy. In what follows we shall use this alias.
The “test” attribute can be dropped from the database with
para rmatt test
Read the output of
para help ls
para help setatt
for more information and a complete list of command line options to these commands.
The image, lyrics, moods and playlists tables are all blob tables. Blob tables consist of three columns each: The identifier which is a positive number that is auto-incremented, the name (an arbitrary string) and the content (the blob).
All blob tables support the same set of actions: cat, ls, mv, rm and add. Of course, add is used for adding new blobs to the table while the other actions have the same meaning as the corresponding Unix commands. The paraslash commands to perform these actions are constructed as the concatenation of the table name and the action. For example addimg, catimg, lsimg, mvimg, rmimg are the commands that manipulate or query the image table.
The add variant of these commands is special as these commands read the blob contents from stdin. To add an image to the image table the command
para addimg image_name < file.jpg
can be used.
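The other blob tables work analogously. For instance, lyrics can be added and listed as follows (blob and file names are made up):
para addlyr my_song < my_song_lyrics.txt
para lslyr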
Note that the images and lyrics are not interpreted at all, and also the playlist and the mood blobs are only investigated when the mood or playlist is activated with the select command.
The score table describes those audio files which are admissible for the current mood or playlist (see below). The table has two columns: a pointer to a row of the audio file table and a score value.
Unlike all other tables of the database, the score table remains in memory and is never stored on disk. It is initialized at startup and recomputed when the select command loads a new mood or playlist.
When the audio file selector is asked to open the next audio file, it picks the row with the highest score, opens the corresponding file and passes the file descriptor to the virtual streaming system. At this point the last_played and the num_played fields of the selected file are updated and the score is recomputed.
Playlists and moods offer two different ways of specifying the set of admissible files. A playlist in itself describes a set of admissible files. A mood, in contrast, describes the set of admissible files in terms of attributes and other types of information available in the audio file table. As an example, a mood can define a filename pattern, which is then matched against the names of audio files in the table.
Playlists are accommodated in the playlist table of the afs database, using the aforementioned blob format for tables. A new playlist is created with the addpl command by specifying the full (absolute) paths of all desired audio files, separated by newlines. Example:
find /my/mp3/dir -name "*.mp3" | para addpl my_playlist
If my_playlist already exists it is overwritten. To activate the new playlist, execute
para select p/my_playlist
The audio file selector will assign scores to each entry of the list, in descending order so that files will be selected in order. If a file could not be opened for streaming, its entry is removed from the score table (but not from the playlist).
A mood consists of a unique name and a definition. The definition is an expression which describes which audio files are considered admissible. At any time at most one mood can be active, meaning that para_server will only stream files which are admissible for the active mood.
The expression may refer to attributes and other metadata stored in the database. Expressions may be combined by means of logical and arithmetical operators in a natural way. Moreover, string matching based on regular expression or wildcard patterns is supported.
The set of admissible files is determined by applying the expression to each audio file in turn. For a mood definition to be valid, its expression must evaluate to a number, a string or a boolean value (“true” or “false”). For numbers, any value other than zero means the file is admissible. For strings, any non-empty string indicates an admissible file. For boolean values, true means admissible and false means not admissible. As a special case, the empty expression treats all files as admissible.
Expressions are based on a context-free grammar which distinguishes between several types for syntactic units or groupings. The grammar defines a set of keywords which have a type and a corresponding semantic value, as shown in the following table.
Keyword            | Type    | Semantic value
-------------------|---------|-------------------------------------------
path               | string  | Full path of the current audio file
artist             | string  | Content of the artist meta tag
title              | string  | Content of the title meta tag
album              | string  | Content of the album meta tag
comment            | string  | Content of the comment meta tag
num_attributes_set | integer | Number of attributes which are set
year               | integer | Content of the year meta tag [*]
num_played         | integer | How many times the file has been streamed
image_id           | integer | The identifier of the (cover art) image
lyrics_id          | integer | The identifier of the lyrics blob
bitrate            | integer | The average bitrate
frequency          | integer | The output sample rate
channels           | integer | The number of channels
duration           | integer | The duration in milliseconds
is_set("foo")      | boolean | True if attribute “foo” is set
[*] For most audio formats, the year tag is stored as a string. It is converted to an integer by the mood parser. If the audio file has no year tag or the content of the year tag is not a number, the semantic value is zero. A special convention applies if the year tag is a one-digit or a two-digit number. In this case 1900 is added to the tag value.
Expressions may be grouped using parentheses, logical and arithmetical operators or string matching operators. The following table lists the available operators.
Token | Meaning
------|-------------------------------------------
||    | Logical Or
&&    | Logical And
!     | Logical Not
==    | Equal (can be applied to all types)
!=    | Not equal (likewise applicable to all types)
<     | Less than
<=    | Less or equal
>=    | Greater or equal
+     | Arithmetical plus
-     | Binary/unary minus
*     | Multiplication
/     | Division
=~    | Regular expression match
=|    | Filename match
Besides integers, strings and booleans there is an additional type which describes regular expression or wildcard patterns. Patterns are not just strings because they also include a list of flags which modify matching behaviour.
Regular expression patterns are of the form /pattern/[flags]. That is, the pattern is delimited by slashes, and is followed by zero or more characters, each specifying a flag according to the following table.
Flag | POSIX name  | Meaning
-----|-------------|----------------------------------------
i    | REG_ICASE   | Ignore case in match
n    | REG_NEWLINE | Treat newline as an ordinary character
Note that only extended regular expression patterns are supported. See regex(3) for details.
Wildcard patterns are similar, but the pattern must be delimited by '|' characters rather than slashes. For wildcard patterns different flags exist, as shown below.
Flag | POSIX name          | Meaning
-----|---------------------|--------------------------------------------
n    | FNM_NOESCAPE        | Treat backslash as an ordinary character
p    | FNM_PATHNAME        | Match a slash only with a slash in pattern
P    | FNM_PERIOD          | Leading period has to be matched exactly
l    | FNM_LEADING_DIR [*] | Ignore “/*” rest after successful matching
i    | FNM_CASEFOLD [*]    | Ignore case in match
e    | FNM_EXTMATCH [**]   | Enable extended pattern matching
[*] Not in POSIX, but both FreeBSD and NetBSD have it.
[**] GNU extension, silently ignored on non-GNU systems.
See fnmatch(3) for details.
Mood definitions may contain arbitrary whitespace and comments. A comment is a word beginning with #. This word and all remaining characters of the line are ignored.
Files with no/invalid year tag: year == 0
Only oldies: year != 0 && year < 1980
Only 80’s Rock or Metal: (year >= 1980 && year < 1990) && (is_set("rock") || is_set("metal"))
Files with incomplete tags: artist == "" || title == "" || album == "" || comment == "" || year == 0
Files with no attributes defined so far: num_attributes_set == 0
Only newly added files: num_played == 0
Only poor quality files: bitrate < 96
Cope with different spellings of Motörhead: artist =~ /mot(ö|oe{0,1})rhead/i
The same with extended wildcard patterns: artist =| |mot+(o\|oe\|ö)rhead|ie
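Putting these elements together, a complete multi-line mood definition with comments might look as follows (the attribute names are just examples, see the section on the attribute table):
# 80's instrumental jazz, but skip poorly encoded files
(is_set("jazz") && is_set("instrumental"))
	&& (year >= 1980 && year < 1990)
	&& bitrate >= 96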
To create a new mood called “my_mood”, write its definition into some temporary file, say “tmpfile”, and add it to the mood table by executing
para addmood my_mood < tmpfile
If the mood definition is really short, you may just pipe it to the client instead of using temporary files. Like this:
echo "$MOOD_DEFINITION" | para addmood my_mood
There is no need to keep the temporary file since you can always use the catmood command to get it back:
para catmood my_mood
A mood can be activated by executing
para select m/my_mood
Once active, the list of admissible files is shown by the ls command if the “-a” switch is given:
para ls -a
Since the audio file selector knows the hash of each audio file that has been added to the afs database, it recognizes if the content of a file has changed, e.g. because an ID3 tag was added or modified. Also, if a file has been renamed or moved to a different location, afs will detect that an entry with the same hash value already exists in the audio file table.
In both cases it is enough to just re-add the new file. In the first case (file content changed), the audio table is updated, while metadata such as the num_played and last_played fields, as well as the attributes, remain unchanged. In the other case, when the file is moved or renamed, only the path information is updated, all other data remains as before.
It is possible to change the behaviour of the add command by using the “-l” (lazy add) or the “-f” (force add) option.
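For example (see the help of the add command for the exact semantics of the two options):
para add -l /my/mp3/dir   # lazy add
para add -f /my/mp3/dir   # force add
para help add             # detailed description of -l and -f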
Use the debug loglevel (-l debug) to show debugging info. All paraslash executables have a brief online help which is displayed when -h is given. The --detailed-help option prints the full help text.
If para_server crashed or was killed by SIGKILL (signal 9), it may refuse to start again because of “dirty osl tables”. In this case you’ll have to run the oslfsck program of libosl to fix your database:
oslfsck -fd ~/.paraslash/afs_database-0.7
However, make sure para_server isn’t running before executing oslfsck.
If you don’t mind to recreate your database you can start from scratch by removing the entire database directory, i.e.
rm -rf ~/.paraslash/afs_database-0.7
Be aware that this removes all attribute definitions, all playlists and all mood definitions, and requires the tables to be re-initialized.
Although oslfsck fixes inconsistencies in database tables it doesn’t care about the table contents. To check for invalid table contents, use
para_client check
This prints out references to missing audio files as well as invalid playlists and mood definitions.
Similarly, para_audiod refuses to start if its socket file exists, since this indicates that another instance of para_audiod is running. After a crash a stale socket file might remain and you must run
para_audiod --force
once to fix it up.
The following audio formats are supported by paraslash:
Mp3, MPEG-1 Audio Layer 3, is a common audio format for audio storage, designed as part of the MPEG-1 standard. An MP3 file is made up of multiple MP3 frames, which consist of a header and a data block. The size of an MP3 frame depends on the bit rate and on the number of channels. For a typical CD-audio file (sample rate of 44.1 kHz stereo), encoded with a bit rate of 128 kbit/s, an MP3 frame is about 400 bytes large.
OGG is a standardized audio container format, while Vorbis is an open source codec for lossy audio compression. Since Vorbis is most commonly made available via the OGG container format, it is often referred to as OGG/Vorbis. The OGG container format divides data into chunks called OGG pages. A typical OGG page is about 4KB large. The Vorbis codec creates variable-bitrate (VBR) data, where the bitrate may vary considerably.
Speex is an open-source speech codec that is based on CELP (Code Excited Linear Prediction) coding. It is designed for voice over IP applications, has modest complexity and a small memory footprint. Wideband and narrowband (telephone quality) speech are supported. As for Vorbis audio, Speex bit-streams are often stored in OGG files. As of 2012 this codec is considered obsolete since the Opus codec, described below, surpasses its performance in all areas.
Opus is a lossy audio compression format standardized through RFC 6716 in 2012. It combines the speech-oriented SILK codec and the low-latency CELT (Constrained Energy Lapped Transform) codec. Like OGG/Vorbis and OGG/Speex, Opus data is usually encapsulated in OGG containers. All known software patents which cover Opus are licensed under royalty-free terms.
Advanced Audio Coding (AAC) is a standardized, lossy compression and encoding scheme for digital audio which is the default audio format for Apple’s iPhone, iPod, iTunes. Usually MPEG-4 is used as the container format and audio files encoded with AAC have the .m4a extension. A typical AAC frame is about 700 bytes large.
Windows Media Audio (WMA) is an audio data compression technology developed by Microsoft. A WMA file is usually encapsulated in the Advanced Systems Format (ASF) container format, which also specifies how meta data about the file is to be encoded. The bit stream of WMA is composed of superframes, each containing one or more frames of 2048 samples. For 16 bit stereo a WMA superframe is about 8K large.
The Free Lossless Audio Codec (FLAC) compresses audio without quality loss. It gives better compression ratios than a general purpose compressor like zip or bzip2 because FLAC is designed specifically for audio. A FLAC-encoded file consists of frames of varying size, up to 16K. Each frame starts with a header that contains all information necessary to decode the frame.
Unfortunately, each audio format has its own conventions how meta data is added as tags to the audio file.
For MP3 files, ID3 versions 1 and 2 are widely used. ID3 version 1 is rather simple but also very limited as it supports only artist, title, album, year and comment tags. Each of these can only be at most 32 characters long. ID3 version 2 is much more flexible but requires a separate library to be installed for paraslash to support it.
Ogg vorbis, ogg speex and flac files contain meta data as Vorbis comments, which are typically implemented as strings of the form “[TAG]=[VALUE]”. Unlike ID3 version 1 tags, one may use whichever tags are appropriate for the content.
AAC files usually use the MPEG-4 container format for storing meta data while WMA files wrap meta data as special objects within the ASF container format.
paraslash only tracks the most common tags that are supported by all tag variants: artist, title, year, album, comment. When a file is added to the AFS database, the meta data of the file is extracted and stored in the audio file table.
paraslash uses the word “chunk” as common term for the building blocks of an audio file. For MP3 files, a chunk is the same as an MP3 frame, while for OGG files a chunk is an OGG page, etc. Therefore the chunk size varies considerably between audio formats, from a few hundred bytes (MP3) up to 16K (FLAC).
The chunk table contains the offsets within the audio file that correspond to the chunk boundaries of the file. Like the meta data, the chunk table is computed and stored in the database whenever an audio file is added.
The paraslash senders (see below) always send complete chunks. The granularity for seeking is therefore determined by the chunk size.
For each audio format paraslash contains an audio format handler whose first task is to tell whether a given file is a valid audio file of this type. If so, the audio file handler extracts some technical data (duration, sampling rate, number of channels etc.), computes the chunk table and reads the meta data.
The audio format handler code is linked into para_server and executed via the add command. The same code is also available as a stand-alone tool, para_afh, which prints the technical data, the chunk table and the meta data of a file. Moreover, all audio format handlers are combined in the afh receiver which is part of para_recv and para_play.
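For instance, a file can be inspected without adding it to the database; the exact option for printing the chunk table is listed in the help output:
para_afh /my/mp3/dir/some_file.mp3   # print meta data and technical information
para_afh -h                          # list options, including how to print the chunk table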
Paraslash uses different network connections for control and data. para_client communicates with para_server over a dedicated TCP control connection. To transport audio data, separate data connections are used. For these data connections, a variety of transports (UDP, DCCP, HTTP) can be chosen.
The chapter starts with the control service, followed by a section on the various streaming protocols in which the data connections are described. The way audio file headers are embedded into the stream is discussed briefly before the example section which illustrates typical commands for real-life scenarios.
Both IPv4 and IPv6 are supported.
para_server is controlled at runtime via the paraslash control connection. This connection is used for server commands (play, stop, …) as well as for afs commands (ls, select, …).
The server listens on a TCP port and accepts connections from clients that connect to the open port. Each connection causes the server to fork off a client process which inherits the connection and deals with that client only. In this classical accept/fork approach the server process is unaffected if the child dies or goes crazy for whatever reason. In fact, the child process cannot change the address space of the server process.
The section on client-server authentication above described the early connection establishment from the crypto point of view. Here we describe what happens after the connection (including crypto setup) has been established. There are four processes involved during command dispatch, as sketched in the following diagram.
server_host client_host
~~~~~~~~~~~ ~~~~~~~~~~~
+-----------+ connect +-----------+
|para_server|<------------------------------ |para_client|
+-----------+ +-----------+
| ^
| fork +---+ |
+----------> |AFS| |
| +---+ |
| ^ |
| | |
| | connect (cookie) |
| | |
| | |
| fork +-----+ inherited connection |
+---------->|child|<--------------------------+
+-----+
Note that the child process is not a child of the afs process, so communication of these two processes has to happen via local sockets. In order to avoid abuse of the local socket by unrelated processes, a magic cookie is created once at server startup time just before the server process forks off the AFS process. This cookie is known to the server, AFS and the child, but not to unrelated processes.
There are two different kinds of commands: First there are commands that cause the server to respond with some answer such as the list of all audio files. All but the addblob commands (addimg, addlyr, addpl, addmood) are of this kind. The addblob commands add contents to the database, so they need to transfer data the other way round, from the client to the server.
There is no knowledge about the server commands built into para_client, so it does not know about addblob commands. Instead, the server sends a special “awaiting data” packet for these commands. If the client receives this packet, it sends STDIN to the server, otherwise it dumps data from the server to STDOUT.
A network (audio) stream usually consists of one streaming source, the sender, and one or more receivers which read data over the network from the streaming source.
Senders are thus part of para_server while receivers are part of para_audiod. Moreover, there is the stand-alone tool para_recv which can be used to manually download a stream, either from para_server or from a web-based audio streaming service.
The following three streaming protocols are supported by paraslash:
HTTP. Recommended for public streams that can be played by any player like mpg123, xmms, itunes, winamp, etc. The HTTP sender is supported on all operating systems and all platforms.
DCCP. Recommended for LAN streaming. DCCP is currently available only for Linux.
UDP. Recommended for multicast LAN streaming.
See the Appendix on network protocols for brief descriptions of the various protocols relevant for network audio streaming with paraslash.
It is possible to activate more than one sender simultaneously. Senders can be controlled at run time and via config file and command line options.
Note that audio connections are not encrypted. Transport or Internet layer encryption should be used if encrypted data connections are needed.
Since DCCP and TCP are both connection-oriented protocols, connection establishment/teardown and access control are very similar between these two streaming protocols. UDP is the most lightweight option, since in contrast to TCP/DCCP it is connectionless. It is also the only protocol supporting IP multicast.
The HTTP and the DCCP sender listen on a (TCP/DCCP) port waiting for clients to connect and establish a connection via some protocol-defined handshake mechanism. Both senders maintain two linked lists each: The list of all clients which are currently connected, and the list of access control entries which determines who is allowed to connect. IP-based access control may be configured through config file and command line options and via the “allow” and “deny” sender subcommands.
Upon receiving a GET request from the client, the HTTP sender sends back a status line and a message. The body of this message is the audio stream. This is common practice and is supported by many popular clients which can thus be used to play a stream offered by para_server. For DCCP things are a bit simpler: No messages are exchanged between the receiver and sender. The client simply connects and the sender starts to stream.
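For example, assuming the HTTP sender is active and listening on its default port (8000 unless configured otherwise, see para_server -h), the stream can be played with any HTTP-capable player:
mpg123 http://server_host:8000/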
DCCP is an experimental protocol which offers a number of new features not available for TCP. Both ends can negotiate these features using a built-in negotiation mechanism. In contrast to TCP/HTTP, DCCP is datagram-based (no retransmissions) and thus should not be used over lossy media (e.g. WiFi networks). One useful feature offered by DCCP is access to a variety of different congestion-control mechanisms called CCIDs. Two different CCIDs are available per default on Linux:
CCID 2. A Congestion Control mechanism similar to that of TCP. The sender maintains a congestion window and halves this window in response to congestion.
CCID-3. Designed to be fair when competing for bandwidth. It has lower variation of throughput over time compared with TCP, which makes it suitable for streaming media.
Unlike the HTTP and DCCP senders, the UDP sender maintains only a single list, the target list. This list describes the set of clients to which the stream is sent. There is no list for access control and no “allow” and “deny” commands for the UDP sender. Instead, the “add” and “delete” commands can be used to modify the target list.
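The following sketch shows how the senders might be controlled at runtime; the exact argument format of the subcommands is an assumption and is documented in the help of the sender command:
para_client help sender                       # list sender subcommands and syntax
para_client sender http allow 192.168.1.0/24  # access control for the http sender
para_client sender udp add 10.1.2.3           # add a unicast target to the udp sender
para_client sender udp delete 10.1.2.3        # remove it again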
Since both UDP and DCCP offer an unreliable datagram-based transport, additional measures are necessary to guard against disruptions over networks that are lossy or which may be subject to interference (as is for instance the case with WiFi). Paraslash uses FEC (Forward Error Correction) to guard against packet losses and reordering. The stream is FEC-encoded before it is sent through the UDP socket and must be decoded accordingly on the receiver side.
The packet size and the amount of redundancy introduced by FEC are determined by the FEC parameters, which are dictated by the server and may be configured through the “sender” command. The FEC parameters are encoded in the header of each network packet, so no configuration is necessary on the receiver side. See the section on FEC below.
For OGG/Vorbis, OGG/Speex and wma streams, some of the information needed to decode the stream is only contained in the audio file header of the container format but not in each data chunk. Clients must be able to obtain this information in case streaming starts in the middle of the file or if para_audiod is started while para_server is already sending a stream.
This is accomplished in different ways, depending on the streaming protocol. For connection-oriented streams (HTTP, DCCP) the audio file header is sent prior to audio file data. This technique however does not work for the connectionless UDP transport. Hence the audio file header is periodically being embedded into the UDP audio data stream. By default, the header is resent after five seconds. The receiver has to wait until the next header arrives before it can start decoding the stream.
The “si” (server info) command lists some information about the currently running server process.
-> Show PIDs, number of connected clients, uptime, and more:
para_client si
By default para_server activates both the HTTP and the DCCP sender on startup. This can be changed via command line options or para_server’s config file.
-> List config file options for senders:
para_server -h
-> Receive a DCCP stream using CCID2 and write the output into a file:
host=foo.org; ccid=2; filename=bar
para_recv --receiver "dccp --host $host --ccid $ccid" > $filename
Note the quotes around the arguments for the dccp receiver. Each receiver has its own set of command line options and its own command line parser, so arguments for the dccp receiver must be protected from being interpreted by para_recv.
-> Receive FEC-encoded multicast stream and write the output into a file:
filename=foo
para_recv -r udp > $filename
-> Receive a FEC-encoded unicast stream and write the output into a file:
filename=foo
para_recv -r 'udp -i 0.0.0.0' > $filename
-> Create a minimal config for para_audiod for HTTP streams:
c=$HOME/.paraslash/audiod.conf.min; s=server.foo.com
echo receiver \".:http -i $s\" > $c
para_audiod --config $c
A paraslash filter is a module which transforms an input stream into an output stream. Filters are included in the para_audiod executable and in the stand-alone tool para_filter which usually contains the same modules.
While para_filter reads its input stream from STDIN and writes the output to STDOUT, the filter modules of para_audiod are always connected to a receiver which produces the input stream and a writer which absorbs the output stream.
Some filters depend on a specific library and are not compiled in if this library was not found at compile time. To see the list of supported filters, run para_filter and para_audiod with the --help option. The output looks similar to the following:
Available filters:
compress wav amp fecdec wmadec prebuffer oggdec aacdec mp3dec
Out of these filter modules, a chain of filters can be constructed, much in the way Unix pipes can be chained, and analogous to the use of modules in gstreamer: The output of the first filter becomes the input of the second filter. There is no limitation on the number of filters and the same filter may occur more than once.
Like receivers, each filter has its own command line options which must be quoted to protect them from the command line options of the driving application (para_audiod or para_filter). Example:
para_filter -f 'mp3dec --ignore-crc' -f 'compress --damp 1'
For para_audiod, each audio format has its own set of filters. The name of the audio format for which the filter should be applied can be used as the prefix for the filter option. Example:
para_audiod -f 'mp3:prebuffer --duration 300'
The “mp3” prefix above is actually interpreted as a POSIX extended regular expression. Therefore
para_audiod -f '.:prebuffer --duration 300'
activates the prebuffer filter for all supported audio formats (because “.” matches all audio formats) while
para_audiod -f 'wma|ogg:prebuffer --duration 300'
activates it only for wma and ogg streams.
For each supported audio format there is a corresponding filter which decodes audio data in this format to 16 bit PCM data which can be directly sent to the sound device or any other software that operates on undecoded PCM data (visualizers, equalizers etc.). Such filters are called decoders in general, and xxxdec is the name of the paraslash decoder for the audio format xxx. For example, the mp3 decoder is called mp3dec.
Note that the output of the decoder is about 10 times larger than its input. This means that filters that operate on the decoded audio stream have to deal with much more data than filters that transform the audio stream before it is fed to the decoder.
Paraslash relies on external libraries for most decoders, so these libraries must be installed for the decoder to be included in the executables. For example, the mp3dec filter depends on the mad library.
As already mentioned earlier, paraslash uses forward error correction (FEC) for the unreliable UDP and DCCP transports. FEC is a technique invented in 1960 by Reed and Solomon which is widely used for the parity calculations of storage devices (RAID arrays). It is based on the algebraic concept of finite fields, today called Galois fields in honour of the mathematician Évariste Galois (1811-1832). The FEC implementation of paraslash is based on code by Luigi Rizzo.
Although the details require a sound knowledge of the underlying mathematics, the basic idea is not hard to understand: For positive integers k and n with k < n it is possible to compute for any k given data bytes d_1, …, d_k the corresponding r := n - k parity bytes p_1, …, p_r such that all data bytes can be reconstructed from any k bytes of the set
{d_1, ..., d_k, p_1, ..., p_r}.
FEC-encoding for unreliable network transports boils down to slicing the audio stream into groups of k suitably sized pieces called slices and computing the r corresponding parity slices. This step is performed in para_server which then sends both the data and the parity slices over the unreliable network connection. If the client was able to receive at least k of the n = k + r slices, it can reconstruct (FEC-decode) the original audio stream.
From these observations it is clear that there are three different FEC parameters: The slice size, the number of data slices k, and the total number of slices n. It is crucial to choose the slice size such that no fragmentation of network packets takes place because FEC only guards against losses and reordering but fails if slices are received partially.
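To illustrate with numbers chosen purely for illustration (they are not paraslash defaults): with a slice size of 1400 bytes, k = 14 and n = 16, each FEC group carries 14 * 1400 = 19600 bytes of audio data plus two parity slices. The client can reconstruct the group as long as no more than two of its 16 slices are lost, at the cost of roughly 14% additional bandwidth (2/14).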
FEC decoding in paraslash is performed by the fecdec filter, which usually is the first filter (there can be other filters before fecdec if these do not alter the audio stream).
The amp and the compress filter both adjust the volume of the audio stream. These filters operate on uncompressed audio samples, hence they are usually placed directly after the decoding filter. Each sample is multiplied by a scaling factor (>= 1), which makes amp and compress quite expensive in terms of computing power.
The amp filter amplifies the audio stream by a fixed scaling factor that must be known in advance. For para_audiod this factor is derived from the amplification field of the audio file’s entry in the audio file table while para_filter uses the value given at the command line.
The optimal scaling factor F for an audio file is the largest real number F >= 1 such that after multiplication with F all samples still fit into the sample interval [-32768, 32767]. One can use para_filter in combination with the sox utility to compute F:
para_filter -f mp3dec -f wav < file.mp3 | sox -t wav - -e stat -v
The amplification value V which is stored in the audio file table, however, is an integer between 0 and 255 which is connected to F through the formula
V = (F - 1) * 64.
To store V in the audio file table, the command
para_client -- touch -a=V file.mp3
is used. The reader is encouraged to write a script that performs these computations :)
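A minimal sketch of such a script is shown below. It is not part of paraslash, and the parsing of the sox output is an assumption that may need adjustment for your sox version:
#!/bin/sh
# Sketch: compute the amplification value V for the given audio file and
# store it in the audio file table. Assumes that the sox stat -v output
# consists of just the maximal safe scaling factor F (e.g. 1.5).
file="$1"
F=$(para_filter -f mp3dec -f wav < "$file" | sox -t wav - -e stat -v 2>&1)
# V = (F - 1) * 64, truncated to an integer (e.g. F = 1.5 gives V = 32)
V=$(echo "($F - 1) * 64" | bc | cut -d. -f1)
# Same touch command as above, with the computed value substituted
para_client -- touch -a=$V "$file"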
Unlike the amplification filter, the compress filter adjusts the volume of the audio stream dynamically without prior knowledge of the peak value. It maintains the maximal volume of the last n samples of the audio stream and computes a suitable amplification factor based on that value and the various configuration options. It tries to choose this factor such that the adjusted volume meets the desired target level.
Note that it makes sense to combine amp and compress.
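For instance, both filters can be inserted between the decoder and the output. A sketch with para_filter, where the --amp option name is an assumption (consult para_filter --help for the exact spelling and for the options of the compress filter):
para_filter -f mp3dec -f 'amp --amp 32' -f compress < file.mp3 > file.pcm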
These filters are rather simple and do not modify the audio stream at all. The wav filter is only useful with para_filter and in connection with a decoder. It asks the decoder for the number of channels and the sample rate of the stream and adds a Microsoft wave header containing this information at the beginning. This allows writing wav files rather than raw PCM files (which do not contain any information about the number of channels and the sample rate).
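For example, an mp3 file can be decoded to a wav file like this (assuming mp3 support is compiled in):
para_filter -f mp3dec -f wav < file.mp3 > file.wav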
The prebuffer filter simply delays the output until the given time has passed (starting from the time the first byte was available in its input queue) or until the given amount of data has accumulated. It is mainly useful for para_audiod if the standard parameters result in buffer underruns.
Both filters require almost no additional computing time, even when operating on uncompressed audio streams, since data buffers are simply “pushed down” rather than copied.
Once an audio stream has been received and decoded to PCM format, it can be sent to a sound device for playback. This part is performed by paraslash writers which are described in this chapter.
A paraslash writer acts as a data sink that consumes but does not produce audio data. Paraslash writers operate on the client side and are contained in para_audiod and in the stand-alone tool para_write.
The para_write program reads uncompressed audio data from STDIN. If this data starts with a wav header, sample rate, sample format and channel count are read from the header. Otherwise CD audio (44.1 kHz, 16 bit little endian, stereo) is assumed, but this can be overridden by command line options. para_audiod, on the other hand, obtains the sample rate and the number of channels from the decoder.
Like receivers and filters, each writer has an individual set of command line options, and for para_audiod writers can be configured per audio format separately. It is possible to activate more than one writer for the same stream simultaneously.
Unfortunately, the various flavours of Unix on which paraslash runs have different APIs for opening a sound device and starting playback. Hence for each such API there is a paraslash writer that can play the audio stream via this API.
ALSA. The Advanced Linux Sound Architecture is only available on Linux systems. Although there are several mid-layer APIs in use by the various Linux distributions (ESD, Jack, PulseAudio), paraslash currently supports only the low-level ALSA API, which is not supposed to change. ALSA is very feature-rich; in particular it supports software mixing via its dmix plugin. ALSA is the default writer on Linux systems.
OSS. The Open Sound System is the only API on *BSD Unixes and is also available on Linux systems, usually provided by ALSA as an emulation for backwards compatibility. This API is rather simple but also limited. For example, only one application can open the device at any time. The OSS writer is activated by default on BSD systems.
FILE. The file writer allows capturing the audio stream and writing the PCM data to a file on the file system rather than playing it through a sound device. It is supported on all platforms and is always compiled in.
AO. Libao is a cross-platform audio library which supports a wide variety of platforms including PulseAudio (gnome), ESD (Enlightened Sound Daemon), AIX, Solaris and IRIX. The ao writer plays audio through an output plugin of libao.
-> Use the OSS writer to play a wav file:
para_write --writer oss < file.wav
-> Enable ALSA software mixing for mp3 streams:
para_audiod --writer 'mp3:alsa -d plug:swmix'
para_gui executes an arbitrary command which is supposed to print status information to STDOUT. It then displays this information in a curses window. By default the command
para_audioc -- stat -p
is executed, but this can be customized via the --stat-cmd option. In particular it is possible to use
para_client -- stat -p
to make para_gui work on systems on which para_audiod is not running.
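Putting these two options together, a sketch of how para_gui could be invoked on such a system:
para_gui --stat-cmd 'para_client -- stat -p'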
It is possible to bind keys to arbitrary commands via custom key-bindings. Besides the internal keys which cannot be changed (help, quit, loglevel, version…), the following flavours of key-bindings are supported:
external: Shutdown curses before launching the given command. Useful for starting other ncurses programs from within para_gui, e.g. aumix or dialog scripts. Or, use the mbox output format to write a mailbox containing one mail for each (admissible) file the audio file selector knows about. Then start mutt from within para_gui to browse your collection!
display: Launch the command and display its stdout in para_gui’s bottom window.
para: Like display, but run the given command through para_client, i.e. as a paraslash server command.
The general form of a key binding is
key_map k:m:c
which maps key k to command c using mode m. Mode may be x, d or p for external, display and paraslash commands, respectively.
Currently there are only two themes for para_gui. It is easy, however, to add more themes. To create a new theme one has to define the position, color and geometry for each status item that should be shown by this theme. See gui_theme.c for examples.
The “.” and “,” keys are used to switch between themes.
-> Show server info:
key_map "i:p:si"
-> Jump to the middle of the current audio file by pressing F5:
key_map "<F5>:p:jmp 50"
-> vi-like bindings for jumping around:
key_map "l:p:ff 10"
key_map "h:p:ff 10-"
key_map "w:p:ff 60"
key_map "b:p:ff 60-"
-> Print the current date and time:
key_map "D:d:date"
-> Call other curses programs:
key_map "U:x:aumix"
key_map "!:x:/bin/bash"
key_map "^E:x:/bin/sh -c 'vi ~/.paraslash/gui.conf'"
Paraslash is an open source project and contributions of any kind, from bug reports to patches, are welcome.
Note that there is no mailing list, no bug tracker and no discussion forum for paraslash. If you’d like to contribute, or have questions about contributing, send email to Andre Noll maan@tuebingen.mpg.de. New releases are announced by email. If you would like to receive these announcements, contact the author through the above address.
In order to compile the sources from the git repository (rather than from tarballs) and to contribute non-trivial changes to the paraslash project, some additional tools should be installed on a developer machine.
git. As described in more detail below, the git source code management tool is used for paraslash development. It is necessary for cloning the git repository and for getting updates.
autoconf. GNU autoconf creates the configure file which is shipped in the tarballs but has to be generated when compiling from git.
discount. The HTML version of this manual and some of the paraslash web pages are written in the Markdown markup language and are translated into HTML with the converter of the Discount package.
doxygen. The documentation of paraslash’s C sources uses the doxygen documentation system. The conventions for documenting the source code are described in the Doxygen section.
global. This is used to generate browsable HTML from the C sources. It is needed by doxygen.
Paraslash has been developed using the git source code management tool since 2006. Development is organized roughly in the same spirit as the git development itself, as described below.
The following text passage is based on “A note from the maintainer”, written by Junio C Hamano, the maintainer of git.
There are four branches in the paraslash repository that track the source tree: “master”, “maint”, “next”, and “pu”.
The “master” branch is meant to contain what is well tested and ready to be used in a production setting. There could occasionally be minor breakages or brown paper bag bugs but they are not expected to be anything major, and more importantly quickly and easily fixable. Every now and then, a “feature release” is cut from the tip of this branch, named with three dotted decimal digits, like 0.4.2.
Whenever changes are about to be included that will eventually lead to a new major release (e.g. 0.5.0), a “maint” branch is forked off from “master” at that point. Obvious, safe and urgent fixes after the major release are applied to this branch and maintenance releases are cut from it. New features never go to this branch. This branch is also merged into “master” to propagate the fixes forward.
A trivial and safe enhancement goes directly on top of “master”. New development does not usually happen on “master”, however. Instead, a separate topic branch is forked from the tip of “master” and is first tested in isolation; usually there are a handful of such topic branches running ahead of “master”. The tips of these branches are not published in the public repository, to keep the number of branches that downstream developers need to worry about low.
The quality of topic branches varies widely. Some of them start out as “good idea but obviously is broken in some areas” and then with some more work become “more or less done and can now be tested by wider audience”. Luckily, most of them start out in the latter, better shape.
The “next” branch is to merge and test topic branches in the latter category. In general, this branch always contains the tip of “master”. It might not be quite rock-solid production ready, but is expected to work more or less without major breakage. The maintainer usually uses the “next” version of paraslash for his own pleasure, so it cannot be that broken. The “next” branch is where new and exciting things take place.
The two branches “master” and “maint” are never rewound, and “next” usually will not be either (this automatically means the topics that have been merged into “next” are usually not rebased, and you can find the tip of topic branches you are interested in from the output of “git log next”). You should be able to safely build on top of them.
However, at times “next” will be rebuilt from the tip of “master” to get rid of merge commits that will never be in “master”. The commit that replaces “next” will usually have the identical tree, but it will have different ancestry from the tip of “master”.
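The topics currently cooking in “next” can be inspected with plain git. For example (not paraslash-specific), the merge commits of topics that are in “next” but not yet in “master” are listed by:
git log --oneline --merges master..next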
The “pu” (proposed updates) branch bundles the remainder of the topic branches. The “pu” branch, and topic branches that are only in “pu”, are subject to rebasing in general. By the above definition of how “next” works, you can tell that this branch will contain quite experimental and obviously broken stuff.
When a topic that was in “pu” proves to be in testable shape, it graduates to “next”. This is done with
git checkout next
git merge that-topic-branch
Sometimes, an idea that looked promising turns out to be not so good and the topic can be dropped from “pu” in such a case.
A topic that is in “next” is expected to be polished to perfection before it is merged to “master”. Similar to the above, this is done with
git checkout master
git merge that-topic-branch
git branch -d that-topic-branch
Note that being in “next” is not a guarantee to appear in the next release (being in “master” is such a guarantee, unless it is later found seriously broken and reverted), nor even in any future release.
The preferred coding style for paraslash coincides more or less with the style of the Linux kernel, so we do not repeat what is written there. As one example, the preferred placement of braces is to put the opening brace last on its line and the closing brace first on a line of its own:

if (x is true) {
        we do y
}
Doxygen is a documentation system for various programming languages. The API reference on the paraslash web page is generated by doxygen.
It is more illustrative to look at the source code for examples than to describe the conventions in this manual, so we only describe which parts of the code need doxygen comments, but leave out details on documentation conventions.
As a rule, only the public part of the C source is documented with Doxygen. This includes structures, defines and enumerations in header files as well as public (non-static) C functions. These should be documented completely. For example, each parameter and the return value of a public function should get a descriptive doxygen comment.
No doxygen comments are necessary for static functions and for structures and enumerations in C files (which are used only within this file). This does not mean, however, that those entities need no documentation at all. Instead, common sense should be applied to document what is not obvious from reading the code.
The Internet Protocol is the primary networking protocol used for the Internet. All protocols described below use IP as the underlying layer. Both the prevalent IPv4 and the next-generation IPv6 variant are being deployed actively worldwide.
Connectionless protocols differ from connection-oriented ones in that state associated with the sending/receiving endpoints is treated implicitly. Connectionless protocols maintain no internal knowledge about the state of the connection. Hence they are not capable of reacting to state changes, such as sudden loss or congestion on the connection medium. Connection-oriented protocols, in contrast, make this knowledge explicit. The connection is established only after a bidirectional handshake which requires both endpoints to agree on the state of the connection, and may also involve negotiating specific parameters for the particular connection. Maintaining an up-to-date internal state of the connection also in general means that the sending endpoints perform congestion control, adapting to qualitative changes of the connection medium.
In IP networking, packets can be lost, duplicated, or delivered out of order, and different network protocols handle these problems in different ways. We call a transport-layer protocol reliable, if it turns the unreliable IP delivery into an ordered, duplicate- and loss-free delivery of packets. Sequence numbers are used to discard duplicates and re-arrange packets delivered out-of-order. Retransmission is used to guarantee loss-free delivery. Unreliable protocols, in contrast, do not guarantee ordering or data integrity.
With these definitions, the protocols which are used by paraslash for streaming audio data may be classified as follows.
- HTTP/TCP: connection-oriented, reliable,
- UDP: connectionless, unreliable,
- DCCP: connection-oriented, unreliable.
Below we give short descriptions of these protocols.
The Transmission Control Protocol provides reliable, ordered delivery of a stream and classic window-based congestion control. In contrast to UDP and DCCP (see below), TCP does not have record-oriented or datagram-based syntax, i.e. it provides a stream which is unaware and independent of any record (packet) boundaries. TCP is used extensively by many application layers: besides HTTP (the Hypertext Transfer Protocol), also FTP (the File Transfer Protocol), SMTP (the Simple Mail Transfer Protocol) and SSH (Secure Shell) sit on top of TCP.
The User Datagram Protocol is the simplest transport-layer protocol, built as a thin layer directly on top of IP. For this reason, it offers the same best-effort service as IP itself, i.e. there is no detection of duplicate or reordered packets. Being a connectionless protocol, only minimal internal state about the connection is maintained, which means that there is no protection against packet loss or network congestion. Error checking and correction (if at all) are performed in the application.
The Datagram Congestion Control Protocol combines the connection-oriented state maintenance known from TCP with the unreliable, datagram-based transport of UDP. This means that it is capable of reacting to changes in the connection by performing congestion control, offering multiple alternative approaches. But it is bound to datagram boundaries (the maximum packet size supported by a medium), and like UDP it lacks retransmission to protect against loss. Due to the use of sequence numbers, it is however able to react to loss (interpreted as a congestion indication) and to ignore out-of-order and duplicate packets. Unlike TCP, it allows the endpoints to negotiate specific, binding features for a connection, such as the choice of congestion control: the classic, window-based congestion control known from TCP is available as CCID-2, while rate-based, “smooth” congestion control is offered as CCID-3.
The Hypertext Transfer Protocol is an application layer protocol on top of TCP. It is spoken by web servers and is most often used for web services. However, as the many Internet radio stations and YouTube/Flash videos show, HTTP is by no means limited to the delivery of web pages. Being a simple request/response based protocol, its semantics also allow the delivery of multimedia content, such as audio over HTTP.
IP multicast is not really a protocol but a technique for one-to-many communication over an IP network. The challenge is to deliver information to a group of destinations simultaneously using the most efficient strategy to send the messages over each link of the network only once. This has benefits for streaming multimedia: with the standard one-to-one unicast offered by TCP/DCCP, n clients listening to the same stream consume n times the resources, whereas multicast requires sending the stream just once, irrespective of the number of receivers. Since it would be costly to maintain state for each listening receiver, multicast often implies connectionless transport, which is the reason that it is currently only available via UDP.
UNIX domain sockets are a traditional way to communicate between processes on the same machine. They are always reliable (see above) and don’t reorder datagrams. Unlike TCP and UDP, UNIX domain sockets support passing open file descriptors or process credentials to other processes.
The usual way to set up a UNIX domain socket (as obtained from socket(2)) for listening is to first bind the socket to a file system pathname and then call listen(2) followed by accept(2). Such sockets are called pathname sockets because bind(2) creates a special socket file at the specified path. Pathname sockets allow unrelated processes to communicate with the listening process by calling connect(2) on the same path.
There are two problems with pathname sockets:
* The listening process must be able to (safely) create the
  socket special file in a directory which is also accessible to
  the connecting process.
* After an unclean shutdown of the listening process, a stale
  socket special file may reside on the file system.
The abstract socket namespace is a non-portable Linux feature which avoids these problems. Abstract sockets are still bound to a name, but the name has no connection with file system pathnames.
Paraslash is licensed under the GPL, version 2. Most of the code base has been written from scratch, and those parts are GPL V2 throughout. Notable exceptions are FEC and the WMA decoder. See the corresponding source files for licensing details of these parts. Some code snippets of several other third party software packages have been incorporated into the paraslash sources, for example the log message coloring was taken from the git sources. These third party software packages are all published under the GPL or some other license compatible with the GPL.
Many thanks to Gerrit Renker who read an early draft of this manual and contributed significant improvements.
RFC 768 (1980): User Datagram Protocol
RFC 791 (1981): Internet Protocol
RFC 2437 (1998): RSA Cryptography Specifications
RFC 4340 (2006): Datagram Congestion Control Protocol (DCCP)
RFC 4341 (2006): Congestion Control ID 2: TCP-like Congestion Control
RFC 4342 (2006): Congestion Control ID 3: TCP-Friendly Rate Control (TFRC)
RFC 6716 (2012): Definition of the Opus Audio Codec