
CH3_ChannelCoding_4thyear_Part1

College: College of Engineering     Department: Electrical Engineering     Stage: 4
Course instructor: Ahmed Abdulkadhum Hamad Al-Rikabi       10/07/2018 21:15:09
Lecture 3/1
CHAPTER THREE: CHANNEL CODING
1. Channel Coding Theorem
A basic block diagram for the channel coding is shown in Fig.1.1
The binary message sequence at the input of the channel encoder may be the output of a source encoder or the output of a source directly. The channel encoder introduces systematic redundancy into the data stream by adding bits to the message bits in such a way as to facilitate the detection and/or correction of bit errors in the original binary message sequence at the receiver.
Channel coding theorem for a DMC is stated as follows:
Given a DMS X with entropy H(X) b/symbol and a DMC with capacity Cs b/symbol, if H(X) ≤ Cs, there exists a coding scheme for which the source output can be transmitted over the channel with an arbitrarily small probability of error.
Conversely, if H(X) > Cs, it is not possible to transmit information over the channel with an arbitrarily small probability of error. Note that the channel coding theorem only asserts the existence of codes; it does not tell us how to construct them.
2. Block Coding
In block codes, the sequence of message digits is divided into sequential blocks. Each block contains 'k' binary digits. The encoder adds to each block 'm' check digits, so the codeword contains n = k + m coded digits. For any 'n', there are 2^n possible binary sequences. Only 2^k of these are codewords, since for any 'k'-digit sequence of message digits the 'm' check digits are uniquely determined. This set of 2^k codewords is called an (n, k) block code.
After transmission through the channel, noise may change the codewords, and any of the 2^n possible sequences may arrive at the receiver. A decoder must be provided at the receiver in order to decide which of the possible 2^k codewords was transmitted.
3. Single Parity Check Codes
This is the simplest example of an error detection code. These codes have m = 1, so that n = k + 1, and the single check digit is taken to be the modulo-2 addition of the 'k' message digits (here d1 ⊕ d2 ⊕ … ⊕ dk).
In general, the check digit is taken to be '0' or '1' depending on whether the message digits contain an even or odd number of 1's, respectively. Then the total number of 1's in every transmitted codeword is even (for an even parity code) or odd (for an odd parity code).
Example 3.1: for k = 2 (so n = 3), the possible codewords are
000, 011, 101 and 110
For an even parity check code, if the received block has an odd number of 1's, then an error has occurred (1 error, or 3 errors, or 5 errors, etc.). However, if the number of 1's is even, then either no error has occurred or an even number of errors has.
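The encoding and detection rules above can be sketched as follows, using the k = 2 code of Example 3.1 (the function names are illustrative, not from the lecture):

```python
# Sketch of a single parity check code (even parity) with k = 2 message
# digits, as in Example 3.1.

def encode_even_parity(msg_bits):
    """Append one check digit so the total number of 1's is even."""
    parity = sum(msg_bits) % 2          # modulo-2 sum of the message digits
    return msg_bits + [parity]

def error_detected(codeword):
    """An odd number of 1's in the received block signals an error."""
    return sum(codeword) % 2 == 1

# The four codewords for k = 2: 000, 011, 101 and 110
codewords = [encode_even_parity([d1, d2]) for d1 in (0, 1) for d2 in (0, 1)]
print(codewords)                        # [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]

# A single bit flip is detected; two flips yield another valid codeword.
print(error_detected([0, 0, 1]))        # True  (one error)
print(error_detected([0, 1, 1]))        # False (valid codeword, or 2 errors)
```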
Suppose that the uncoded bit error probability (probability of error per bit) is p.
Fig. 1.1: Block diagram of channel coding. Binary message sequence → channel encoder (selects 'm' check digits and adds them to each block of 'k' message digits) → coded sequence → discrete memoryless channel (DMC), where noise is introduced → channel decoder (decides which of the possible 2^k binary sequences was transmitted) → decoded binary sequence.
P(undetected error) = P(an even number (≥ 2) of errors) = Σ over even i ≥ 2 of C(n, i) p^i (1 − p)^(n − i)
P(detecting an error) = P(an odd number of errors) = Σ over odd i of C(n, i) p^i (1 − p)^(n − i)
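A numerical sketch of these two probabilities, assuming an illustrative block length n = 3 and bit error probability p = 0.01 (both values are assumptions, not given in the lecture):

```python
# Detection probabilities of a single parity check code over a block of
# n digits with bit error probability p; n and p are assumed values.
from math import comb

def prob_num_errors(n, i, p):
    """Probability of exactly i bit errors in an n-digit block."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

n, p = 3, 0.01
p_undetected = sum(prob_num_errors(n, i, p) for i in range(2, n + 1, 2))
p_detected   = sum(prob_num_errors(n, i, p) for i in range(1, n + 1, 2))

print(p_undetected)   # probability of an even (>= 2) number of errors
print(p_detected)     # probability of an odd number of errors
```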
4. Binary Repetition Codes
The simplest example of an error correcting code is the binary repetition code. Each single message digit is transmitted along with 'm' check digits, each check digit having the same value as the message digit. Thus k = 1 and n = m + 1.
The decoder operates on the following majority decision rule:
No. of 1's received < No. of 0's ⇒ 0 was transmitted
No. of 1's received > No. of 0's ⇒ 1 was transmitted
No. of 1's received = No. of 0's ⇒ no decision
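The majority decision rule above can be sketched as follows (ties are reported as None to mark "no decision"; the function name is illustrative):

```python
# Sketch of the majority decision rule for a binary repetition code.

def majority_decode(received):
    """Decide 0 or 1 by majority vote; return None on a tie."""
    ones = sum(received)
    zeros = len(received) - ones
    if ones > zeros:
        return 1
    if ones < zeros:
        return 0
    return None                       # equal counts: no decision

print(majority_decode([1, 0, 1]))     # 1 (two 1's outvote one 0)
print(majority_decode([0, 0, 1]))     # 0
print(majority_decode([0, 1]))        # None (tie, no decision)
```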
Example 4.1: show that the use of a binary repetition code with n = 3 reduces the probability of error over a BSC with transition probability p.
Solution:
Pe = p ⇒ without channel coding.
⇒ with channel coding, each message digit is repeated three times (n = 3, m = 2). Thus the codewords are
(000 and 111)
The majority decoder errs only when 2 or 3 of the 3 digits are received in error; therefore
Pe = C(3, 2) p^2 (1 − p) + C(3, 3) p^3 = 3p^2 (1 − p) + p^3, which is smaller than p for any p < 1/2.
In general, for a binary repetition code of odd length n (so the majority decision never ties),
Pe = Σ from i = (n + 1)/2 to n of C(n, i) p^i (1 − p)^(n − i)
5. Information Rate
The information rate of a code is defined to be:
Rc = k/n = k/(k + m)        (5.1)
The repetition codes have an enormous error correction capability for large values of 'm'. However, their information rate, Rc = 1/(m + 1), then becomes very low; i.e., a large number of redundant check digits are transmitted with each message digit. On the other hand, the single parity check code has a very high information rate of Rc = k/(k + 1), but can do nothing more than detect an odd number of errors.
In general, the most useful codes lie between these two extreme cases, and have both moderate
information rate and error correction capabilities.
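The trade-off above can be sketched numerically; the particular values of m and k below are illustrative choices, not from the lecture:

```python
# Information rate Rc = k/n for the two extreme cases discussed above.

def information_rate(k, n):
    """Rc = k/n, the fraction of transmitted digits carrying information."""
    return k / n

# Repetition code (k = 1): the rate falls as the redundancy m grows.
for m in (2, 4, 8):
    print("repetition, m =", m, "Rc =", information_rate(1, 1 + m))

# Single parity check code (m = 1): the rate stays high as k grows.
for k in (4, 8, 16):
    print("parity, k =", k, "Rc =", information_rate(k, k + 1))
```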
6. Linear Parity-Check Codes
In an (n, k) block code, it is convenient to represent a binary codeword in matrix form as a row
vector whose elements are the code symbols. Thus we define a code vector c and a data vector d as
follows:
c =[c1 c2 . . . cn]
d =[d1 d2 . . . dk]
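As a sketch of how a data vector d maps to a code vector c in this row-vector form, using the (3, 2) even parity check code of Example 3.1; the generator matrix G below is an assumed standard construction, not given in this excerpt:

```python
# Sketch of matrix-form encoding c = d G (mod 2) for the (3, 2) even
# parity check code; G is an assumed generator matrix.

def mat_vec_mod2(d, G):
    """Compute c = d G over GF(2): d is a 1 x k row vector, G is k x n."""
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2 for j in range(n)]

G = [[1, 0, 1],       # identity columns carry the data digits;
     [0, 1, 1]]       # the last column forms the parity check digit

d = [1, 0]                  # data vector
print(mat_vec_mod2(d, G))   # [1, 0, 1]
```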
