% interfaces.tex
the tools have to run at the native rate, it is not possible to use files
anymore to interconnect the tools. For this usage, a network interconnection is
available between the tools.

The standard protocol to carry both contribution (from audio encoder to
multiplexer) and distribution (from multiplexer to modulators) is
EDI, specified by ETSI~\cite{etsits102693}%
\footnote{For a summary about the ZeroMQ interface used before EDI, see
section~\ref{sec:zeromq} below.}.

EDI can be carried over UDP or other unreliable links, and offers a protection
layer to correct bit errors. Over network connections where occasional
congestion can occur, EDI can also be carried over TCP, which ensures that lost
packets are retransmitted. Unless you are able to reserve bandwidth for the EDI
traffic, using TCP is the safer option.

While the main reason to use EDI is to put the different tools on different
computers, it is not necessary to do so.
It is possible, and even encouraged, to use this interconnection locally on the
same machine, for increased flexibility.
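As an illustration of the two transports (the hostname and port below are
examples, not defaults), an EDI destination can be written as one of:

\begin{lstlisting}
 # EDI over TCP: lost packets are retransmitted,
 # safer over congested links
 tcp://mux.example.com:7002

 # EDI over UDP: no retransmission, relies on the
 # EDI protection layer on lossy links
 udp://mux.example.com:7002
\end{lstlisting}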
\subsubsection{Between Encoder and Multiplexer}
\label{sec:between_encoder_and_multiplexer}

Between ODR-AudioEnc and ODR-DabMux, the EDI protocol carries \dabplus{}
superframes or DAB frames, with additional metadata that contains the audio
level indication for monitoring purposes.
The multiplexer cannot easily derive the audio level from the encoded bitstream
without decoding it, so it makes more sense to calculate this in the encoder and
carry it along with the encoded data.

On the multiplexer, the subchannel must be configured for EDI as follows:
\begin{lstlisting}
sub-fb {
        type dabplus
        id 24
        protection 3
        inputproto edi
        inputuri "tcp://*:7002"
        buffer-management prebuffering
}
\end{lstlisting}

The EDI input supports several options in addition to the ones of a
subchannel that uses a file input. The options are:
\begin{itemize}
    \item \texttt{inputuri}: This defines the interface and port on which to
        listen for incoming data. It must be of the form
        \texttt{proto://*:port}, where \texttt{proto} may be either
        \texttt{tcp} or \texttt{udp}.

    \item \texttt{buffer-management}: Two buffer management approaches are
        possible with EDI: \texttt{timestamped} takes into account the
        timestamps carried in EDI, inserting the audio into the ETI frame
        associated with that same timestamp; \texttt{prebuffering}, the other
        option, ignores the timestamps and pre-buffers some data before it
        starts streaming, which makes it possible to compensate for network
        jitter.

    \item \texttt{buffer}: (applies to both buffer management settings)
        The input contains an internal buffer for incoming data. The maximum
        buffer size is given by this option; the units are frames ($24$\ms).
        Therefore, with a value of $40$, you will have a buffer of
        $40 * 24 = 960$\ms. The multiplexer will never buffer more than this
        value, and will discard data when the buffer is full.

    \item \texttt{prebuffering}: (only used with buffer management
        \texttt{prebuffering})
        When the buffer is empty, the multiplexer waits until this number of
        frames is available in the buffer before it starts to consume data.
\end{itemize}

The goal of having a buffer in the input of the multiplexer is to be able to
absorb network latency jitter: because IP does not guarantee anything about the
latency, some packets will reach the multiplexer faster than others.
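To make the options above concrete, here is a sketch (the values are
illustrative, not recommendations) of a subchannel using timestamped buffer
management instead of pre-buffering; only the relevant lines are shown:

\begin{lstlisting}
        buffer-management timestamped
        buffer 40
\end{lstlisting}

With \texttt{prebuffering} mode, one would instead set \texttt{buffer 40} and
\texttt{prebuffering 20}, giving a $20 * 24 = 480$\ms{} pre-buffer within a
$960$\ms{} maximum.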
The buffer can then be used to avoid disruptions in these cases, and its size
should be adapted to the network connection.
In both buffer management techniques, there is a trade-off between absolute
delay and robustness. When using pre-buffering, you directly control the size
of the buffer, and you set it to a value depending on your network delays. When
using timestamped buffer management, the size of the input buffer is a
consequence of the effective delay you set in the timestamps.

If the encoder is running remotely on a machine, encoding from a sound card, it
will encode at the rate defined by the sound card clock. This clock will, if no
special precautions are taken, be slightly off frequency. The multiplexer
however runs on a machine where the system time is synchronised over NTP, and
will not show any drift or offset. Two situations can occur:

Either the sound card clock is a bit slow, in which case the input buffer in
the multiplexer will fill up to the amount given by \texttt{prebuffering},
and then start streaming data. Because the multiplexer will be a bit faster
than the encoder, the amount of buffered data will slowly decrease, until the
buffer is empty. Then the multiplexer will enter prebuffering, and wait again
until the buffer is sufficiently filled. This interruption creates an audible
glitch whose length corresponds to the prebuffering.

Or the sound card clock is a bit fast, and the buffer will be filled up faster
than data is consumed by the multiplexer. At some point, the buffer will hit
the maximum size, and one frame will be discarded. This also creates an
audible glitch.

Consumer grade sound cards have clocks of varying quality.
While these glitches may be rare, some sound cards show this
behaviour at intervals that are not acceptable, e.g.\ more than once per hour.

Both situations are suboptimal, because they lead to audio glitches, and also
degrade the ability to compensate for network latency changes. It is preferable
to use the drift compensation feature available in ODR-AudioEnc, which
ensures that the encoder outputs the encoded bitstream at the nominal rate,
aligned to the NTP-synchronised system time, and not to the sound card clock.
The sound card clock error is compensated for inside the encoder.

In order to be able to use the Internet as contribution network, some form of
protection has to be put in place to make sure the audio data cannot be
altered by third parties. Usually, some form of VPN is set up for this case.

% Previous versions described the ZeroMQ encryption functionality here.

\subsubsection{Between Multiplexer and Modulator}

The EDI protocol can also carry the data of a complete ensemble from ODR-DabMux
to one or more instances of ODR-DabMod.

\subsection{ZeroMQ}
\label{sec:zeromq}
Previous versions of this guide described an IP protocol based on
ZeroMQ, a library that permits the creation
of a socket connection with automatic connection management (connection,
disconnection, error handling).
This has now been deprecated in favour of
the standards-compliant EDI, which, when carried over TCP, can also be
used over congested networks like the Internet.

ZeroMQ was used both for encoded audio contribution and for distribution
of ensemble data, and was a custom, unspecified protocol. It was created
before EDI was implemented in the ODR-mmbTools, because carrying ETI inside
IP was not sufficient for carrying complete timestamps, and because there was
no other obvious option for audio contribution besides inventing a custom
protocol.

Now that EDI-over-TCP has proven to work satisfactorily over congested
networks, there is no reason for a protocol that does not conform to ETSI
specifications to exist anymore.

\subsection{Pipes}

Pipes are another way to interconnect an encoder and the
multiplexer on the same machine. It uses the same configuration as the file
input but instead of using files, FIFOs, also called ``named pipes'', are
created first using \texttt{mkfifo}.

This setup is deprecated in favour of EDI.

% vim: spl=en spell tw=80 et
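For completeness, such a (deprecated) FIFO setup might look like the following
sketch; the path is purely illustrative:

\begin{lstlisting}
 # create the FIFO (named pipe)
 mkfifo /tmp/subchannel.fifo

 # the multiplexer then reads it through its normal
 # file input, e.g. in the subchannel configuration:
 #     inputfile "/tmp/subchannel.fifo"
\end{lstlisting}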