College: College of Engineering
Department: Electrical Engineering
Stage: 4
Course instructor: ابراهيم عبد الله مرداس الشجيري
Uploaded: 4/13/2011 2:46:01 PM
Information Theory
Modern digital communication depends on Information Theory, which was developed in the 1940s by Claude E. Shannon. Shannon published A Mathematical Theory of Communication in 1948; it provides a mathematical model for communication.
Information Sources
An information source is a system that outputs symbols from a fixed set of n symbols {x1 .. xn} in a sequence at some rate (see Fig. 1). In the simplest case, each symbol that might be output from the system is equally likely. Let xi denote a given output symbol from the set {x1 .. xn}. If all symbols are equally likely, then the probability that symbol xi will be the one produced is pi = P = 1/n, no matter which symbol we have in mind. For example, if the information source can produce four equally likely symbols (A, B, C, and D), then each symbol has a probability of 0.25 (that is, 25% or 1/4).
An observer is uncertain which of the n symbols will be output. Once a given symbol xi is observed, the observer has obtained information from the source: the observer's uncertainty is reduced. The amount of information obtained can be measured because the number of possible symbols is known. The unit of measure depends on the base of the logarithm. Most of the time, Information Theory uses the base-2 logarithm (log2), and the unit of measure is then binary digits, or bits. Any other logarithm base would work; if we used base 10, the unit of measure would be decimal digits. If a system can output any of 16 possible symbols, then for each symbol observed the observer receives log2 16 = 4 bits of information; that is, each observation reduces the observer's uncertainty by 4 bits (see Fig. 2).
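As a minimal sketch (an illustration, not part of the original lecture), the relation between alphabet size and information per symbol can be checked numerically; the symbol counts below mirror the examples in the text:

```python
import math

def bits_per_symbol(n):
    """Information gained per observed symbol when all n symbols
    are equally likely: I = log2(n) bits."""
    return math.log2(n)

# 4 equally likely symbols (A, B, C, D): each has p = 1/4
print(bits_per_symbol(4))   # 2.0 bits
# 16 equally likely symbols: each observation yields 4 bits
print(bits_per_symbol(16))  # 4.0 bits
```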
Entropies Defined, and Why They Are Measures of Information
The amount of information in an event is closely related to its probability of occurrence. To formulate this mathematically: any one of n equiprobable messages contains log2 n bits of information. Because all n messages are assumed equiprobable, the probability of occurrence of each one is pi = 1/n, and the information associated with each message is then

I = log2 n = log2 (1/pi) bits.    (1)

In general, the information content I of a single event or message is defined as the base-2 logarithm of the reciprocal of its probability p:

I = log2 (1/p) = -log2 p bits.    (2)
Example 1
The four symbols A, B, C, D occur with probabilities 1/2, 1/4, 1/8, 1/8, respectively. Compute the information in the three-symbol message X = BDA, assuming that the symbols are statistically independent.
Solution: Because the symbols are independent, the measure of information is additive. Using eq. (2) we can write

I(X) = I(B) + I(D) + I(A) = log2 4 + log2 8 + log2 2 = 2 + 3 + 1 = 6 bits.
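A short sketch (again illustrative, not from the lecture itself) that reproduces this calculation using the probabilities given in Example 1:

```python
import math

def info_bits(p):
    """Information content of an event with probability p: I = -log2(p)."""
    return -math.log2(p)

# Symbol probabilities from Example 1
p = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}

# Independent symbols, so information is additive over the message
message = "BDA"
total = sum(info_bits(s) for s in (p[c] for c in message))
print(total)  # 6.0 bits: 2 (B) + 3 (D) + 1 (A)
```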
Entropy of Ensembles
We now move from considering the information content of a single event or message to that of an ensemble. An ensemble is the set of outcomes of one or more random variables, with probabilities attached to the outcomes. In general these probabilities are non-uniform, with outcome i having probability pi, but they must sum to 1 because all possible outcomes are included; hence they form a probability distribution.
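The lecture excerpt ends here, before the entropy formula itself is stated. As a hedged sketch of where this is heading, the standard Shannon entropy of such a distribution, H = -sum over i of pi log2 pi, can be computed as follows (reusing the Example 1 distribution purely for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)) of a distribution,
    in bits. Probabilities must be positive and sum to 1."""
    return -sum(p * math.log2(p) for p in probs)

# Non-uniform distribution from Example 1; probabilities sum to 1
probs = [1/2, 1/4, 1/8, 1/8]
print(entropy(probs))  # 1.75 bits per symbol on average
```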
The material shown above is an introduction to the lecture uploaded by the course instructor. It may appear incomplete, since the instructor sometimes posts only the first part of a lecture as a preview of what you will download later. The e-learning system provides this service to keep you informed about the content of the file you are about to download.