Distributed source coding (DSC) is a data compression technique that deals with two physically separated but statistically correlated sources which do not communicate with each other [1]. A DSC scheme is based on separate encoding but joint decoding of the dependent sources. DSC theory shows that such independent encoding can, in principle, be designed as efficiently as joint encoding. Furthermore, the correlated source signals are sometimes transmitted over a noisy channel; distributed joint source-channel coding (DJSC) is then designed to perform both data compression and protection against the channel noise. The key aspect of both DSC and DJSC schemes is to minimize the transmission energy required by the sources by exploiting the existing correlation, while maintaining reliable communication [2]. Distributed source coding techniques may be categorized into two groups, according to the presence or absence of perfect side information [3]. Figures 1 and 2 show the two classes of DSC.

Figure 1 Schematic diagram of Distributed Source Coding (DSC) without side information.

Slepian and Wolf were the first to consider the DSC problem in 1973 and Wyner and Ziv extended it to rate/distortion in 1976. DSC has a number of promising applications ranging from video data compression to network MIMO under a constrained backhaul.

Slepian-Wolf Coding

The Slepian-Wolf (SW) theorem is the first pillar of DSC: it established the achievable rate region, in bits per symbol, for compressing correlated sources. Consider the case shown in Figure 1, where X_1 and X_2 are two physically separated but statistically correlated discrete sources that are encoded by two separate encoders and decoded by a joint decoder. Separate encoding and decoding require rates R_1 ≥ H(X_1) and R_2 ≥ H(X_2) to compress X_1 and X_2 without loss [4]. The SW result showed that, even though the encoders are physically separated, reliable reconstruction of both X_1 and X_2 is possible whenever the transmission rates satisfy

R_1 ≥ H(X_1 | X_2),

R_2 ≥ H(X_2 | X_1), and

R_1 + R_2 ≥ H(X_1, X_2),

where H(X_1 | X_2) and H(X_1, X_2) denote the conditional and joint entropies, respectively [1].

Figure 3 shows the two-dimensional Slepian-Wolf rate region for two encoded data streams [1]. If conventional lossless coding of source 1 is performed at rate R_1 = H(X_1) (point B), then X_1 is available at the joint decoder, so source 2 only needs to be encoded at rate H(X_2 | X_1); point A is the symmetric case [2]. A few years later, in 1976, the SW problem over a noiseless channel was extended to lossy source coding relying on side information at the decoder, which is popularly known as the Wyner-Ziv coding (WZC) problem [5].
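The bounds above can be illustrated numerically. The following Python sketch uses a hypothetical joint pmf (a fair bit X_1 observed through a binary symmetric channel with crossover probability 0.1 to give X_2; the distribution is an illustrative assumption, not taken from the sources above) and computes the entropies that delimit the SW rate region:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint pmf p(x1, x2): X1 is a fair bit, X2 is X1 seen
# through a binary symmetric channel with crossover probability 0.1.
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

H_joint = entropy(p.values())                                # H(X1, X2)
H_X1 = entropy([p[0, 0] + p[0, 1], p[1, 0] + p[1, 1]])       # H(X1)
H_X2 = entropy([p[0, 0] + p[1, 0], p[0, 1] + p[1, 1]])       # H(X2)
# Chain rule: H(X1|X2) = H(X1, X2) - H(X2), and symmetrically for X2.
H_X1_given_X2 = H_joint - H_X2
H_X2_given_X1 = H_joint - H_X1

print(f"Separate coding:   R1 >= H(X1)     = {H_X1:.3f} bits")
print(f"SW corner point:   R1 >= H(X1|X2)  = {H_X1_given_X2:.3f} bits")
print(f"Sum-rate bound: R1 + R2 >= H(X1,X2) = {H_joint:.3f} bits")
```

For this correlation level the joint decoder needs only about 0.47 bits for X_1 once X_2 is decoded, instead of the full 1 bit required by separate decoding, which is exactly the saving the SW region quantifies.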

Wyner-Ziv Coding

When a lossy coding scenario is considered, Wyner and Ziv advanced the Slepian-Wolf result by determining the corresponding rate-distortion function. The WZ problem asks how many bits are necessary to encode the source {X1} to within a prescribed distortion, assuming that the side information {X2} is available at the decoder (shown in Fig. 4) [5].
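For concreteness, the rate-distortion function characterized in [5] can be written in the notation of this section (U is an auxiliary random variable and f(·) the decoder's reconstruction function) as

R_WZ(D) = min [ I(X_1; U) − I(X_2; U) ],

where the minimization runs over all conditional distributions p(u | x_1) and decoder functions f(U, X_2) satisfying the average distortion constraint E[d(X_1, f(U, X_2))] ≤ D.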

**References:**

[1] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, vol. IT-19, pp. 471–480, Jul. 1973.

[2] A. J. Aljohani, S. X. Ng, and L. Hanzo, ‘‘Distributed source coding and its applications in relaying-based transmission,’’ IEEE Access, vol. 4, pp. 1940–1970, Mar. 2016.

[3] D. Varodayan, ‘‘Adaptive distributed source coding,’’ Ph.D. dissertation, Dept. Elect. Eng., Stanford Univ., Stanford, CA, USA, 2010.

[4] C. E. Shannon, ‘‘A mathematical theory of communication,’’ Bell Syst. Tech. J., vol. 27, pp. 379–423, 623–656, 1948.

[5] A. D. Wyner and J. Ziv, ‘‘The rate-distortion function for source coding with side information at the decoder,’’ IEEE Trans. Inf. Theory, vol. 22, no. 1, pp. 1–10, Jan. 1976.