there was only limited research in trellis decoding of linear block codes [19, 20]. It was in 1988 that
Forney [21] showed that some block codes have relatively simple trellis structures. Motivated by Forney’s
work, Honary et al. [19, 22–25] as well as Lin et al. [20, 26, 27] proposed various methods for reducing the
associated complexity. The Chase algorithm [28] is one of the most popular techniques proposed for near
maximum likelihood decoding of block codes.
Furthermore, in 1961 Gorenstein and Zierler [29] extended binary coding theory to non-binary
codes, whose code symbols are constituted by a number of bits, and this led to the birth of
burst-error correcting codes. They also contrived a combination of algorithms, which is referred to as the
Peterson–Gorenstein–Zierler (PGZ) algorithm. In 1960 a prominent non-binary subset of BCH codes was
discovered by Reed and Solomon [30]; they were named Reed–Solomon (RS) codes after their inventors.
These codes exhibit certain optimality properties, since their codewords have the highest possible minimum
distance between the legitimate codewords for a given code rate. This, however, does not necessarily
guarantee attaining the lowest possible BER. The PGZ decoder can also be invoked for decoding non-
binary RS codes. A range of powerful decoding algorithms for RS codes was found by Berlekamp [31, 32]
and Massey [33, 34]. Various soft-decision decoding algorithms were proposed for RS codes by Sweeney
et al. [35–37] and Honary and Markarian [19]. In recent years RS codes have found
practical applications, for example, in Compact Disc (CD) players, in deep-space scenarios [38], and in
the family of Digital Video Broadcasting (DVB) schemes [39], which were standardised by the European
Telecommunications Standards Institute (ETSI).
Inspired by the ancient theory of Residue Number Systems (RNS) [40–42], which constitute a
promising number system for supporting fast arithmetic operations [40, 41], a novel class of non-binary
codes, referred to as Redundant Residue Number System (RRNS) codes, was introduced in 1967. An RRNS
code is a maximum–minimum distance block code, exhibiting similar distance properties to RS codes.
Watson and Hastings [42] as well as Krishna et al. [43, 44] exploited the properties of the RRNS for
detecting or correcting a single error and also for detecting multiple errors. Later, the soft decoding of
RRNS codes was proposed in [45].
During the early 1970s, FEC codes were incorporated in various deep-space and satellite
communications systems, and in the 1980s they also became common in virtually all cellular mobile radio
systems. However, for a long time FEC codes and modulation had been treated as distinct subjects in
communication systems. By integrating FEC and modulation, in 1987 Ungerboeck [46, 47] and Schlegel
[48] proposed Trellis-Coded Modulation (TCM), which is capable of achieving significant coding gains
over power- and band-limited transmission media. A further historic breakthrough was the invention of
turbo codes by Berrou et al. in 1993 [12] and 1996 [13], which facilitate the operation
of communications systems near the Shannonian limits. Turbo coding is based on a composite codec
constituted by two parallel concatenated codecs. Since its recent invention, turbo coding has evolved at
an unprecedented rate and has reached a state of maturity within just a few years due to the intensive
research efforts of the turbo coding community. As a result of this dramatic evolution, turbo coding has
also found its way into standardised systems such as, for example, third-generation (3G) mobile radio
systems [49]. Even more impressive performance gains can be attained with the aid of turbo coding in the
context of video broadcast systems, where the associated system delay is less critical than in delay-sensitive
interactive systems.
More specifically, in their proposed scheme Berrou et al. [12, 13] used a parallel concatenation of two
Recursive Systematic Convolutional (RSC) codes, accommodating the turbo interleaver between the two
encoders. At the decoder, an iterative structure using a modified version of the classic minimum-BER MAP
algorithm invented by Bahl et al. [11] was invoked by Berrou et al. in order to decode these parallel concatenated
codes. Again, since 1993 a large amount of work has been carried out in the area aiming, for example,
to reduce the associated decoder complexity. Practical reduced-complexity decoders are, for example, the
Max-Log-MAP algorithm proposed by Koch and Baier [50], as well as by Erfanian et al. [51], the Log-
MAP algorithm suggested by Robertson et al. [52], and the SOVA advocated by Hagenauer as well as
Höher [53, 54]. Le Goff et al. [55], Wachsmann and Huber [56] as well as Robertson and Wörz [57]
suggested the use of these codes in conjunction with bandwidth-efficient modulation schemes. Further
advances in understanding the excellent performance of the codes are due, for example, to Benedetto
and Montorsi [58, 59] and Perez et al. [60]. During the mid-1990s Hagenauer et al. [61], as well as