ERT Group Bonn
Dept. of Computer Science, University of Bonn, 53113 Bonn
Contact: Fabian Hargesheimer (hgh@cs.uni-bonn.de)
While retransmission is an appropriate way to deal with data such as email, articles, or binary files (which are used as a whole, so the receiver will rather accept occasional delays caused by re-transmitting lost packets than receive corrupt data or no data at all), it is much less useful for the real-time applications we are interested in.
In audio or video conferencing, the data have a stream-like character rather than the file-like character of email or web pages. The participants of a video conference cannot accept a receiver that stalls on every missing frame until re-transmission succeeds. Moreover, buffering the stream would require large amounts of local memory on both ends of the transmission channel, not to mention the additional network traffic caused by re-sending (possibly large) portions of data.
But we cannot predict which packets, or how many of them, will be lost during transmission. What we really need is a better way to protect our messages against losses. The approach on which we concentrate is the use of Forward Error Correction (FEC) schemes: the sender adds redundant packets to the message, so that the receiver can reconstruct it from whatever arrives, without re-transmission. An encoding scheme that allows complete recovery of the message from any set of received packets containing at least as many packets as the original message is called a Maximum Distance Separable (MDS) code.
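As a minimal illustration (our own sketch, with names of our choosing, not the group's code): a single XOR parity packet already gives an MDS code with n = k + 1 packets, any k of which suffice to reconstruct the message.

    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(packets):
        """k equal-length data packets in, k+1 packets out (the last is the parity)."""
        return packets + [reduce(xor_bytes, packets)]

    def recover_missing(survivors):
        """With exactly one packet lost, it equals the XOR of the k survivors."""
        return reduce(xor_bytes, survivors)

    data = [b"abcd", b"efgh", b"ijkl"]                        # k = 3
    coded = encode(data)                                      # n = 4
    assert recover_missing(coded[:2] + coded[3:]) == data[2]  # lose packet 2, recover it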
In our group we focus on variants of the so-called Cauchy-based coding schemes and on some related coding schemes. The Cauchy coding scheme is described in the paper "An XOR-Based Erasure-Resilient Coding Scheme" by J. Blömer, M. Kalfane, R. Karp, M. Karpinski, M. Luby, and D. Zuckerman, where Cauchy matrices are used to generate the code. We use the Cauchy coding scheme in our implementations.
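The sketch below (our own illustration, not the group's implementation; the field polynomial 0x11D and all helper names are our choices) shows the core construction: a Cauchy matrix over GF(2^8). Every square submatrix of a Cauchy matrix is invertible, which is exactly what makes the resulting code MDS.

    def gf_mul(a, b, poly=0x11D):
        """Multiply in GF(2^8), reducing by the field polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= poly
            b >>= 1
        return r

    def gf_inv(a):
        """Inverse in GF(2^8), computed naively as a^254 (a^255 = 1 for a != 0)."""
        r = 1
        for _ in range(254):
            r = gf_mul(r, a)
        return r

    def cauchy_matrix(xs, ys):
        """A[i][j] = 1 / (x_i + y_j); in GF(2^8) addition is XOR. The x_i and
        y_j must all be pairwise distinct, so every entry is well defined and
        every square submatrix is invertible."""
        return [[gf_inv(x ^ y) for y in ys] for x in xs]

    # n - k = 2 redundant packets over a k = 4 packet message:
    k, n = 4, 6
    A = cauchy_matrix(xs=list(range(k, n)), ys=list(range(k)))

The XOR-based scheme of Blömer et al. goes a step further and represents each GF(2^w) entry as a w-by-w bit matrix, so that encoding and decoding need only XOR operations; the sketch above stops at the matrix construction.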
In practice there are several reasons to make every effort to keep this redundancy as small as possible. First, on both the sender's and the receiver's side, every additional packet means additional work: the sender has to create and transmit the redundant data, and the receiver has to absorb it and separate it from the information necessary for decoding the message. Second, in many networks (e.g. the Internet) a large fraction of packet losses results from congestion, so adding more traffic to compensate for problems probably caused by too much traffic is a technique that should be handled very carefully.
The most important reason, however, is the unpredictable character of losses in the Internet. We cannot know in advance what fraction of the packets will be lost, and therefore we do not know how much redundancy is necessary to transmit the whole message without losses. For multimedia applications our goal is instead graceful degradation: transmission quality should degrade smoothly with increasing loss rates. At low loss rates a very high transmission quality can then be realized, while at high loss rates a certain minimal quality of the received data can still be guaranteed.
To achieve graceful degradation we will use priority encoding transmission, which is based on variable-redundancy coding. The main idea is that, instead of viewing a message as a monolithic chunk of data, we divide it into (many) portions that can be ranked by their importance for the receiver. Each of these portions is then encoded and decoded separately, with redundancy corresponding to its importance (a toy sketch of this idea is given below). Obviously, the larger the number of portions into which we split the data, the smoother the dependence between loss rate and transmission quality. We will concentrate on data compression methods that allow us to split the compressed data into portions according to their importance for the receiver while at the same time providing high compression rates. Examples of such methods are the DCT-based compression formats MPEG and JPEG and wavelet-based image and video compression algorithms.
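The following toy sketch (again our own illustration, not the group's code) shows the variable-redundancy idea; erasure_encode is a hypothetical stand-in for a real MDS encoder such as the Cauchy scheme sketched above.

    from math import ceil

    def erasure_encode(packets, k, n):
        # Hypothetical stand-in: a real MDS encoder (e.g. Cauchy-based) would
        # return n packets, any k of which reconstruct the original k packets.
        return list(packets) + [b"<redundant>"] * (n - k)

    def pet_encode(portions):
        """portions: (packets, redundancy_factor) pairs, most important first.
        A factor r yields n = ceil(r * k) packets; with an MDS code the
        portion then survives any packet-loss rate up to 1 - 1/r."""
        return [erasure_encode(p, len(p), ceil(r * len(p))) for p, r in portions]

    # E.g. stream headers at factor 2.0, coarse coefficients at 1.25, fine
    # detail unprotected: at low loss rates everything arrives, while even
    # high loss rates still deliver the most important portions.
    coded = pet_encode([
        ([b"hdr0", b"hdr1"], 2.0),
        ([b"dc0", b"dc1", b"dc2", b"dc3"], 1.25),
        ([b"ac0", b"ac1"], 1.0),
    ])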