Saturday, January 25, 2020

Encoder Viterbi Matlab

Implementation of Convolutional Encoder and Viterbi Decoder Using Matlab and FPGA

Abstract

Channel coding is widely used in digital communication. By using channel encoding methods we can minimize the effects of signal noise and interference in our system. These techniques also use less bandwidth for error-free transmission. In our project we have implemented a convolutional encoder and a Viterbi decoder for channel coding. Convolutional encoding is widely used for error correction in digital communication. We implemented these techniques in Matlab and ran many simulations to check their performance.

Chapter 1 DIGITAL COMMUNICATION SYSTEM

INTRODUCTION

Early communication was based on the implicit assumption that the message signal is a continuously varying waveform in time. Such continuous-time signals are referred to as analog signals, and their corresponding information sources are called analog sources. Analog signals are transmitted over the communication channel using carrier modulation and demodulated accordingly at the receiver. Such communication systems are called analog communication systems. In digital transmission the analog source output is converted to digital form. The message can be transmitted using digital modulation and demodulated at the receiver as a digital signal. The basic feature of a digital communication system is that during a finite interval of time it sends a waveform selected from a finite set of possible waveforms. An important measure of system performance in digital communication systems is the probability of error.

1.2 WHY DIGITAL COMMUNICATION

Digital communication is preferred over analog communication because digital circuits have a lower probability of distortion and interference than analog ones. Digital circuits are more reliable than analog circuits and have lower cost. Digital hardware is more flexible to implement than analog hardware. With digital signals, time-division multiplexing is simpler than FDM is with analog signals.
1.3 DIGITAL COMMUNICATION

In a digital communication system, the functional operations performed at the transmitter and receiver must be expanded to include message signal discretization at the transmitter and message signal synthesis or interpolation at the receiver. Additional functions include redundancy removal and channel encoding and decoding.

1.3.1 Source Nature

Information is knowledge. Information can be of two types, either analog or digital. We can collect information through listening or watching. The receiver never knows in advance what it will receive, but only when the source generates an output towards it. The main responsibility of any communication channel is to deliver error-free information to the receiver.

1.3.3 Source Encoder/Decoder

What is a source encoder? It is a technique that changes an analog signal into a sequence of bits. The sequence of bits that is produced can also be used for reconstruction of the signal, since these bits contain information about the original signal. This encoding technique also helps in appropriate bandwidth utilization, and the bit sequence lends itself to data compression.

1.3.4 Quantization

Quantization is a process in which we sample the amplitude of an analog signal. The irreversible mechanism that eliminates redundant bits is called a quantizer. The disadvantage of quantization is that it introduces noise into the sampled signal, whereas sampling by itself causes no distortion. In spite of that, quantizers and quantization are still widely used in determining the bit rate, and in any speech coding procedure amplitude quantization is the most important step.

Figure 1.2: 8-level quantization (levels X1 to X8)

1.3.5 Modulation and Demodulation

What are modulation and demodulation? Modulation is a process in which a baseband signal is mixed with a carrier and converted into a bandpass signal, and demodulation is the process in which the original signal is recovered from the modulated signal.
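The 8-level quantizer of Figure 1.2 (section 1.3.4) can be sketched in a few lines of Python. This is a minimal illustration assuming a uniform quantizer over the range -1 to 1 with midpoint reconstruction; the function name and parameters are ours, not from the report:

```python
def quantize(sample, levels=8, lo=-1.0, hi=1.0):
    """Uniform amplitude quantizer: map a sample to the midpoint of its cell."""
    step = (hi - lo) / levels
    # Clamp to a valid cell index; this mapping is irreversible.
    index = min(levels - 1, max(0, int((sample - lo) / step)))
    # Reconstruct at the cell midpoint; the residual is the quantization noise.
    return lo + (index + 0.5) * step

print(quantize(0.3))   # 0.375, the midpoint of the cell containing 0.3
```

The difference between the input sample and the returned midpoint is exactly the quantization noise the text describes.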
The modulator and demodulator perform these operations: the modulator changes the signal into a form representing the required information, and the reverse operation is performed by the demodulator. The purpose of these devices is to produce and convey messages with minimum bit error rate.

NOISE IN COMMUNICATION SYSTEMS

Noise is present throughout the entire communication world and can be produced by a variety of sources. If noise is present in a system, it makes the system ambiguous and less efficient, degrades the receiver's capability, and therefore also limits the transmission rate. Undesired noise can be reduced through efficient design techniques such as filtering. Noise caused by the thermal motion of electrons in all dissipative resistors is called thermal noise, and it is modeled as a zero-mean Gaussian random process.

CHAPTER 2 CHANNEL CODING

2.1 INTRODUCTION

Channel coding is used to improve signal reliability in communication systems. By performing channel coding we can protect our signal from different types of noise and distortion. These signal-processing methods are tools for accomplishing desirable system tradeoffs. Large-scale integrated circuits and high-speed digital processing methods have made it possible to provide as much as 10 dB of performance improvement at much lower cost. Shannon showed that by adding redundant bits to source information we can minimize errors in the channel without disturbing the information transmission rate, provided that the information rate is less than the channel capacity. The average number of information bits per unit time can be reduced by using a speech coder: the minimum number of information bits should be transmitted, and the input to the channel encoder is the output of the speech coder.
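The zero-mean Gaussian model of thermal noise described in the previous chapter can be simulated directly. A minimal Python sketch (the noise standard deviation and the antipodal +1/-1 signal are illustrative assumptions, not values from the report):

```python
import random

def add_thermal_noise(signal, sigma=0.1, seed=42):
    """Add zero-mean Gaussian (thermal) noise with standard deviation sigma."""
    rng = random.Random(seed)          # fixed seed so the sketch is repeatable
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Corrupt an antipodal (+1/-1) bit waveform:
noisy = add_thermal_noise([1.0, -1.0, 1.0, 1.0])
```

With a large number of samples the empirical mean of the added noise approaches zero, as the zero-mean Gaussian model requires.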
In mobile communication, radio link performance is improved by channel coding through the addition of redundant bits to the source information. At the transmitter, the channel code maps the digital information produced by a data source into a form that can be decoded by the receiver with minimum errors. The channel coding mechanism adds redundancy to the codes in a controlled manner by appending extra bits, so that the receiver can perform detection and correction over a noisy channel. The channel codes so produced are classified as block codes and convolutional codes.

The minimum Hamming distance, dmin, of a code is used as the criterion for determining its error correction ability. The minimum Hamming distance is defined as the smallest Hamming distance between any two codewords. If the minimum Hamming distance is dmin, then (dmin - 1) bit errors can be detected and floor((dmin - 1)/2) bit errors can be corrected. The raw data transmission rate is reduced by the additional coded bits.

Using Error-Correction Codes

These codes are very useful. Without implementing them in our communication system, the delivered data would be very noisy and corrupted. A graph comparing uncoded and coded data error performance shows the improvement that coding provides.

Chapter 3 CONVOLUTIONAL CODING

INTRODUCTION TO CONVOLUTIONAL ENCODING

The idea is to make every codeword symbol a weighted sum of the input message symbols. That is similar to the convolution used in linear time-invariant systems, where the output of the system is found if you know the input and the impulse response. So in a convolutional encoder we obtain the output of the system by convolving the input bits. Convolutional codes do not necessarily reduce noise more than an equivalent block code; in most cases they simply offer a simpler implementation than a block code of the same power. The encoder is a simple circuit that contains memory states and feedback logic, normally built from XOR gates. The decoder is usually implemented in software.
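The dmin detection and correction bounds quoted in Chapter 2 can be checked mechanically. A Python sketch using a toy three-word code (the codebook is our illustration, not from the report):

```python
def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# Toy code with minimum distance dmin = 3:
codebook = ["00000", "11100", "00111"]
dmin = min(hamming(a, b) for i, a in enumerate(codebook)
                         for b in codebook[i + 1:])
detect  = dmin - 1           # guaranteed detectable bit errors
correct = (dmin - 1) // 2    # guaranteed correctable bit errors
print(dmin, detect, correct)  # 3 2 1
```

With dmin = 3 this toy code can detect any 2-bit error pattern and correct any single-bit error, exactly as the floor((dmin - 1)/2) rule states.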
The Viterbi algorithm is the most favourable algorithm used to decode convolutional codes. It is found to give good results in low-noise environments.

OVERVIEW OF CONVOLUTIONAL CODES

Convolutional codes represent one method within the general class of channel codes. Channel codes, which are also called error-correction codes, allow reliable communication of an information sequence over a channel that adds noise, introduces bit errors, or otherwise distorts the transmitted signal. These codes have many applications, including deep-space communication and voiceband modems. Convolutional codes are commonly specified by the following three parameters: (n, k, m).

n = number of output bits
k = number of input bits
m = number of memory registers
L = constraint length

The quantity k/n, which is called the code rate, is a measure of the efficiency of the code. Usually n and k range from 1 to 8, m ranges from 2 to 10, and the code rate from 1/8 to 7/8, except for deep-space applications, where code rates as low as 1/100 or even lower have been employed. Often the manufacturers of convolutional code chips specify the codes by the parameters n, k, and L. The quantity L is the constraint length of the code and is defined by L = k*(m-1). The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also denoted by the letter K.

3.2.1 CONVOLUTIONAL ENCODING: ENCODER STRUCTURE

Convolutional codes protect data by adding redundant bits, as any binary code does. A rate k/n convolutional encoder processes the input sequence of k-bit information symbols through one or more binary shift registers. The convolutional encoder computes each n-bit output symbol (n > k) from a linear operation on the present input symbol and the contents of the shift register(s).
Therefore, a rate k/n convolutional encoder processes a k-bit input symbol and computes an n-bit output symbol with every shift update. The figure shows a non-recursive convolutional encoder with rate 1/2.

For the encoder above, the table shows the state changes and the resulting output codeword sequence U for the message sequence m = 1 1 0 1 1 (two zeros are appended to flush the register).

Table 3.1
Input bit mi | Register contents | State at time ti | State at time ti+1 | Branch word u1 u2
-            | 000               | 00               | 00                 | -
1            | 100               | 00               | 10                 | 1 1
1            | 110               | 10               | 11                 | 0 1
0            | 011               | 11               | 01                 | 0 1
1            | 101               | 01               | 10                 | 0 0
1            | 110               | 10               | 11                 | 0 1
0            | 011               | 11               | 01                 | 0 1
0            | 001               | 01               | 00                 | 1 1

U = 11 01 01 00 01 01 11

POLYNOMIAL REPRESENTATION

Sometimes the encoder connections are characterized by generator polynomials. An encoder can be represented by a set of n generator polynomials, one for each of the n modulo-2 adders. Each polynomial is of degree K-1 or less and describes the connections of the encoding shift register to that modulo-2 adder, just as a connection vector does. The coefficient of each term of the polynomial is either 1 or 0, depending on whether a connection exists or not. For example, for the encoder of figure 4.1 we can write the generator polynomial g1(X) for the upper connections and g2(X) for the lower connections as follows:

g1(X) = 1 + X + X^2
g2(X) = 1 + X^2

The output sequence is found as follows:

U(X) = m(X) g1(X) interlaced with m(X) g2(X)

Let the message vector be m = 101; as a polynomial it is represented as m(X) = 1 + X^2. The output polynomial U(X) of the figure 4.1 encoder for the input message m is then calculated as follows:

m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X) g2(X) = (1 + X^2)(1 + X^2) = 1 + X^4

m(X) g1(X) = 1 + X + 0X^2 + X^3 + X^4
m(X) g2(X) = 1 + 0X + 0X^2 + 0X^3 + X^4

U(X) = (1, 1) + (1, 0)X + (0, 0)X^2 + (1, 0)X^3 + (1, 1)X^4
U = 11 10 00 10 11

We have demonstrated the encoder with polynomial generators, as is also done for cyclic codes.
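The polynomial computation above is ordinary GF(2) polynomial multiplication followed by interleaving of the two output streams. A short Python sketch (the function name is ours; coefficient lists are low degree first):

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] ^= ca & cb      # modulo-2 accumulate
    return out

g1 = [1, 1, 1]   # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]   # g2(X) = 1 + X^2
m  = [1, 0, 1]   # m(X)  = 1 + X^2

u1 = poly_mul_gf2(m, g1)   # 1 + X + X^3 + X^4
u2 = poly_mul_gf2(m, g2)   # 1 + X^4
U  = [bit for pair in zip(u1, u2) for bit in pair]   # interlace the two streams
print(U)  # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]  ->  11 10 00 10 11
```

The printed sequence reproduces U = 11 10 00 10 11 from the worked example above.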
Graphically, there are three ways in which we can look at the encoder to gain a better understanding of its operation:

(a) State diagram
(b) Tree diagram
(c) Trellis diagram

3.2.2 STATE DIAGRAM

Convolutional encoders are finite-state machines, so the state diagram offers significant insight into their behavior. The states shown in the diagram represent the possible contents of the rightmost K-1 stages of the register, and the paths represent the output branch words resulting from the state transitions. The states of the register are designated a = 00, b = 10, c = 01 and d = 11. There are only two transitions originating from each state, corresponding to the two possible input bits. The output branch word is written next to each path associated with the state transition. In the figure below, a solid line denotes a path associated with an input bit of 0, and a dotted line a path associated with an input bit of 1. Observe that it is not possible to move from a given state to an arbitrary state in a single transition.

3.2.3 THE TREE DIAGRAM

Although the state diagram completely characterizes the encoder, one cannot easily use it to track the encoder transitions as a function of time, because it has one disadvantage: it does not maintain a time history. The tree diagram adds the dimension of time to the state diagram. By custom, these trees traverse from left to right, one branching per input bit, and each branch of the tree is labeled with the output branch word. The following rule can be used to find a codeword sequence: for an input bit of 0, the related branch word is obtained by advancing to the next rightmost branch in the upward direction; for an input bit of 1, the branch word is obtained by moving downward.
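Both the state diagram and the tree-traversal rule amount to the same transition table. A Python sketch of that table for the states a = 00, b = 10, c = 01, d = 11 (the dictionary layout and function name are ours; the transitions match this report's K = 3, rate-1/2 encoder):

```python
# (current state, input bit) -> (next state, output branch word)
# States: a = 00, b = 10, c = 01, d = 11, as in the text.
TRANSITIONS = {
    ("00", 0): ("00", "00"), ("00", 1): ("10", "11"),
    ("10", 0): ("01", "10"), ("10", 1): ("11", "01"),
    ("01", 0): ("00", "11"), ("01", 1): ("10", "00"),
    ("11", 0): ("01", "01"), ("11", 1): ("11", "10"),
}

def encode(bits):
    """Walk the state diagram and collect the output branch words."""
    state, out = "00", []
    for b in bits:
        state, word = TRANSITIONS[(state, b)]
        out.append(word)
    return " ".join(out)

print(encode([1, 1, 0, 1, 1]))  # 11 01 01 00 01
```

Encoding the message 1 1 0 1 1 through this table reproduces the branch-word sequence of Table 3.1 (before the flush bits).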
If we assume that the initial contents of the encoder are all zeros, the diagram shows that if the first input bit is 0 the output branch word is 00, and if the first input bit is 1 the output is 11. Also, if the first input bit is 1 and the next input is 0, the next output branch word is 01. By following these steps we observe that the input bit stream 11011 traces the bold line on the tree. This path corresponds to the output codeword sequence 1101010001.

CHAPTER 4 VITERBI DECODER

4.1 VITERBI DECODING ALGORITHM

This algorithm was discovered by Viterbi in 1967. The Viterbi algorithm performs maximum likelihood decoding. By taking advantage of the structure of the code trellis it also reduces the computational load. The benefit of Viterbi decoding is that its complexity is not a function of the number of information symbols in the codeword sequence. The algorithm involves calculating a distance, or measure of similarity, between the received signal and all the trellis paths entering each state at the same time. The Viterbi algorithm removes from consideration those trellis paths that cannot possibly be candidates for the maximum likelihood choice: when two paths enter the same state, the one with the best metric is selected, and that path is called the surviving path. This selection of surviving paths is carried out for every state. The complexity of the decoder is reduced by removing the most unlikely paths. The decoder continues in this way, advancing deeper into the trellis and making decisions by eliminating the least likely paths. In fact, in 1969 Omura demonstrated that the Viterbi algorithm is maximum likelihood. The objective of selecting the optimum path can be expressed as selecting the codeword with the minimum distance metric.
4.2 EXAMPLE OF VITERBI CONVOLUTIONAL DECODING

A binary symmetric channel is assumed for simplicity, so Hamming distance is a suitable distance measure. A trellis similar to the one used in the encoder can also be used in the decoder, as shown in figure 4.5. We start at time t1 in the 00 state, referring to the trellis diagram. Flushing the encoder is important because it tells the decoder the starting state; in this example there are only two possible transitions departing from any state, so not all branches need to be shown at first. The full trellis structure begins after time t3. The central idea behind the decoding procedure can be demonstrated by viewing the figure 4.1 encoder trellis together with the figure 4.2 decoder trellis. At each time interval it is convenient for the decoder to label every branch with the Hamming distance between the received code symbols and the branch word corresponding to the same transition at the encoder end. The example in figure 4.2 shows a message sequence m, the corresponding codeword sequence U, and a noise-corrupted received sequence Z = 11 01 01 10 01 .... The encoder branch words are the code symbols that come from the encoder output as results of the state transitions. As the code symbols are received, they are accumulated by the decoder and labeled on the trellis branches; that is, each branch of the decoder trellis is marked with a metric of similarity, i.e. Hamming distance. From the received sequence Z, we observe that the code symbols received at time t1 are 11, as shown in figure 4.2. In order to label the decoder branches at time t1 with the appropriate Hamming distance metric, we look at the encoder trellis. There we observe that a 00-00 state transition produces an output branch word of 00, but we received 11. Consequently, on the decoder trellis we label the 00-00 transition with a Hamming distance of 2.
Observing the encoder trellis again, a 00-10 state transition produces an output branch word of 11, which matches the received 11 exactly. Hence, on the decoder trellis we label the 00-10 transition with a Hamming distance of 0. So the metric entered on a decoder trellis branch compares the received symbols with the branch word of the corresponding encoder transition; to all intents and purposes, these metrics describe a correlation. The decoding algorithm finds the minimum-distance path in order to correctly decode the data. The foundation of Viterbi decoding is that between any two paths that merge into the same state, the path with the minimum Hamming distance is always selected and the other one is discarded. An example can be seen in figure 4.3 below.

4.3 Decoder Implementation

In the decoding context, the transitions during any time interval can be grouped into 2^(v-1) disjoint cells, where each cell depicts four possible transitions and v is the encoder memory.

4.3.1 Add-Compare-Select Computation

Starting with the K = 3, two-cell example, figure 4.4 below shows the logic unit that corresponds to cell 1. The logic executes the special-purpose computation called add-compare-select (ACS). A candidate state metric is calculated by adding the previous-time state metric of state a to its branch metric, and the previous-time state metric of state c to its branch metric; this results in two possible path metrics as candidates for the new state metric. These two results are compared in the logic unit of figure 4.4. The most likely (smallest-distance) of the two path metrics is saved as the new state metric for state a. Figure 4.4 also shows the cell-1 add-compare-select logic that outputs the new state metric and the new path history. This ACS process is also performed for the paths in the other cells. The oldest bit on the path with the smallest state metric forms the decoder output.
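The ACS recursion just described can be sketched end to end for this report's K = 3, rate-1/2 code. This is a hard-decision Python sketch with Hamming branch metrics; the function names and dictionary layout are ours, not the report's Matlab code:

```python
G = [(1, 1, 1), (1, 0, 1)]  # g1 = 1 + X + X^2, g2 = 1 + X^2

def branch_word(u, state):
    """Output bits for input u leaving state = (previous bit, bit before that)."""
    window = (u,) + state
    return [sum(w & g for w, g in zip(window, gen)) % 2 for gen in G]

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with Hamming branch metrics."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start in state 00
    path = {s: [] for s in states}
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for s in states:                  # add: old metric + branch metric
            for u in (0, 1):
                ns = (u, s[0])
                d = metric[s] + sum(a != b for a, b in zip(branch_word(u, s), r))
                if d < new_metric[ns]:    # compare and select the survivor
                    new_metric[ns] = d
                    new_path[ns] = path[s] + [u]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)    # survivor with the smallest state metric
    return path[best]

# Received sequence Z from the example (one bit error in the fourth branch word):
Z = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(viterbi_decode(Z))  # [1, 1, 0, 1, 1]
```

Decoding Z = 11 01 01 10 01 recovers m = 11011 despite the single bit error, matching the trellis walk-through in the next section.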
4.3.2 Add-Compare-Select as Seen on the Trellis

Consider the same example for describing Viterbi decoding. The codeword sequence was U = 1101010001, the message sequence was m = 11011, and the received sequence was Z = 1101011001. Figure 4.5 pictures a decoding trellis diagram. The most important quantity in decoding through the trellis is the Hamming distance between the received code symbols and their corresponding branch words. The trellis shows the metric value at every state x for each time t1 to t6. We perform the ACS operation whenever two transitions end in the same state; this first happens at transition t4 and continues thereafter. For instance, at time t4 the value of a state metric is obtained by incrementing the state metric at t3 with a branch metric, and a similar operation is performed for the other states. The ACS process chooses the minimum Hamming distance path, which also has maximum likelihood. The surviving minimum-distance paths are shown with bold lines, and the discarded, less likely paths with faded lines. Trellis trees are always read from left to right. Whenever we want to read the decoder output, we start from the state with the smallest path metric. Looking at the figure below, we can see that at time t6 the path with the minimum Hamming distance has survived with distance = 1.

CHAPTER 5 SIMULATION METHODOLOGY

5.1 MATLAB SIMULATION

5.1.1 CONVOLUTIONAL ENCODER AND VITERBI DECODER

We have implemented the convolutional encoder and Viterbi decoder as Matlab source code. Our Matlab code also compares our Viterbi decoder output with the built-in decoder output by comparing bit error rates. The first step in our project was writing Matlab code that generates codewords for different symbols using convolutional codes and then decodes them, with errors, using the Viterbi decoder. The input, which is coded by the convolutional encoder, is taken from the user; here we have generated random bits. The coded data is then decoded by the Viterbi decoder.
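The bit-error-rate comparison used throughout the Matlab results is just a normalized Hamming distance between the sent and decoded bit strings; a minimal Python sketch (the two short strings are illustrative, not the report's data):

```python
def ber(sent, received):
    """Fraction of bit positions that differ between two equal-length bit strings."""
    assert len(sent) == len(received)
    errors = sum(a != b for a, b in zip(sent, received))
    return errors / len(sent)

print(ber("11011", "11011"))  # 0.0  -> error-free decoding
print(ber("11011", "11001"))  # 0.2  -> one of five bits wrong
```

A BER of 0 means the decoder reproduced the transmitted data exactly, which is how the coded and uncoded cases are compared below.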
At the decoder side we corrupted selected bits by simply inverting them manually, just to check the bit error rate when different numbers of bits are corrupted. We then compared the efficiency of our decoder code against the built-in decoder function. On the receiver side we used the Viterbi decoding algorithm to decode the transmitted signal. After these two steps (encoding and decoding) the original data is obtained, which contains errors if a low SNR is used.

5.2 VHDL SIMULATION

Our second step in this project was to write synthesizable VHDL code for the encoder and decoder. For this we used ModelSim. Here we implemented the same logic as in Matlab.

5.3 FPGA

In the end we programmed our design into a field-programmable gate array (FPGA): we wrote synthesizable VHDL code implementing our Matlab logic and implemented it on the FPGA.

MATLAB RESULTS

Here are the results of our Matlab code.

If one bit is corrupted:

data_str = 111011010101000001111101101010101000101100111011010001000100011001111111110101100010101111100101010011101011101001000110

conv_code_str = 100110010001000010001000111100000011001010100100000100100010011000101100101000010111100110010001000010110011111100111011011101011111001010101010111001001000000111001110011000011010110111111000110010111101110100100001110100101111111100110101

msg_rec = 11101101010100000111110110 101010100010110011101 10100010 0010001 10 011 1111111010110001010111110 0101010 01110101110 1001000110

Message/ber retrieved with Verterbi_link_cont1: ber = 0
Message/ber retrieved with Vitdec: ber = 0

If two bits are corrupted:

data_str = 100010111000000011101000101100010010100110101101110110110010001100010010010011111001100001101000001001111000101011011101

conv_code_str = 100011001110011110011100011000001101111100101100100000010111010110111110010011110101010000010100000001000101011101111110101011010111010110111110100110111101110010011111001111000011001100101100011011101111000010011100100000100001001001100100

msg_rec =
10001011100000001110100010110001001010011010110 1110110110 0 10 001100010 010010011111001100001101000 001 0011110001 010110 11 1 0 1

Message/ber retrieved with Verterbi_link_cont1: ber = 0
Message/ber retrieved with Vitdec: ber = 0.2667

If three bits are corrupted:

data_str = 101100011101110010110100100110010010001010111010011011111000000000110110000110101111100000100010100011001001111110001100

conv_code_str = 100110010111010011100100000111111110011011001011100101110101100000111110101101100010011000010010100011010001110100011100011110000000101011000101101110110101010110011010111001000000100101001110010101001101000001101111000100101001101101010111

msg_rec = 1110011111 01110 0 1 0 11010010011011 0 01010101011101 000 111 011 10 00100000110110100110111010100000100010 11011001110 0111110101100

Message/ber retrieved with Verterbi_link_cont1: ber = 0.1750
Message/ber retrieved with Vitdec: ber = 0.2000

As the number of corrupted bits increases, the bit error rate also increases.

Appendix A Matlab Code

%***********************************************************************
%** CONVOLUTIONAL ENCODING: TRELLIS DIAGRAM IMPLEMENTATION
%***********************************************************************
function code = Conv_Enc(message1)
% K = 3: length of the shift register
% Number of states = 2^(K-1) = 4
% The state is the last two bits of the shift register
% Rate 1/2 convolutional encoder: each input bit yields n = 2 output bits

message = [message1 0 0];        % append two zeros to flush the register
next_state = '00';
code = [];
for t = 1:length(message)
    inp = message(t);
    state = next_state;
    if strcmp(state, '00')
        if inp == 0, next_state = '00'; outp = [0 0];
        else,        next_state = '10'; outp = [1 1]; end
    elseif strcmp(state, '10')
        if inp == 0, next_state = '01'; outp = [1 0];
        else,        next_state = '11'; outp = [0 1]; end
    elseif strcmp(state, '01')
        if inp == 0, next_state = '00'; outp = [1 1];
        else,        next_state = '10'; outp = [0 0]; end
    elseif strcmp(state, '11')
        if inp == 0, next_state = '01'; outp = [0 1];
        else,        next_state = '11'; outp = [1 0]; end
    end
    code = [code outp];          % append the two output bits
end

%***********************************************************************
%***************** DECODER IMPLEMENTATION ******************************
%***********************************************************************
function [messa
