Trimming Soft-Input Soft-Output Viterbi Algorithms
Qin Huang, Senior Member, IEEE, Qiang Xiao, Li Quan, Zulin Wang, Member, IEEE, and Shafei Wang
Abstract—In the soft-input soft-output Viterbi algorithm (SOVA), the log-likelihood ratio (LLR) of each bit is determined by the minimum metric difference between the ML path and its competitive paths. This paper proposes to trim large metric differences in order to reduce the complexity of SOVA. By trimming the metric differences, only a small number of backtracking operations are carried out, while many LLRs may be omitted as a result of the missing metric differences. By revealing the relationship among neighboring LLRs, the omitted LLRs are estimated from their neighboring LLRs as well as from intrinsic information. The extrinsic information transfer chart analysis demonstrates that the proposed algorithm has convergence behavior similar to that of the Log-MAP algorithm if the trimming factor M is moderate. Other analyses verify that our approach provides good LLR quality with at most 1/M of the backtracking operations of SOVA. Simulation results show that it outperforms SOVA and performs as well as its variants and the Log-MAP algorithm.
Index Terms— Soft-input soft-output Viterbi algorithm
(SOVA), convolutional codes, turbo codes, trimming, LLR
estimation.
I. INTRODUCTION
The soft-input soft-output Viterbi algorithm (SOVA) [1] is the most important variant of the Viterbi algorithm (VA) [2], [3]. It can perform demodulation, decoding, equalization, etc., with suboptimal performance and reasonable complexity. Thus, it has been widely applied in modern communication and storage systems.
SOVA consists of two stages. The first stage is to find
the maximum likelihood (ML) path, which is carried out by
VA [1]; the second stage is to calculate the log-likelihood
ratio (LLR) of each information bit from the minimum metric difference between the ML path and its competitive paths with complementary bit decisions [1]. The complexity and the
quality of LLRs are largely determined by the second stage.
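As a concrete illustration of this second stage (not taken from [1]; the function name, data layout, and full-length traceback are assumptions for exposition only, whereas a practical implementation would restrict backtracking to a finite window), a minimal Python sketch of the classical SOVA reliability update is:

    import numpy as np

    def sova_soft_outputs(ml_bits, deltas, competitor_bits):
        # ml_bits[j]            : hard decision of the ML path at step j (0/1)
        # deltas[k]             : metric difference between the ML path and its
        #                         best competitor merging with it at step k
        # competitor_bits[k][j] : bit decision of that competitor at step j <= k
        N = len(ml_bits)
        reliability = np.full(N, np.inf)   # |LLR| of each bit, initially infinite
        for k in range(N):                 # every merge point along the ML path
            for j in range(k + 1):         # backtrack along the competing path
                if competitor_bits[k][j] != ml_bits[j]:
                    # complementary decision: keep the minimum metric difference
                    reliability[j] = min(reliability[j], deltas[k])
        # attach a sign from the hard decision (the sign convention is arbitrary)
        return np.where(np.asarray(ml_bits) == 1, reliability, -reliability)

Each backtracking pass over a competing path is what dominates the complexity, which is why reducing the number of metric differences that trigger such passes directly reduces the cost of the second stage.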
In the past two decades, many researchers have ded-
icated themselves to improving the quality of LLRs.
In 1998, Fossorier et al. [4] modified SOVA (M-SOVA)
and proved the equivalence between M-SOVA and the
Max-Log-MAP algorithm. In 2000, Chen et al. [5] proposed
to update LLRs from two separate directions (bi-SOVA). It performs slightly better than the Max-Log-MAP algorithm at the cost of doubling the complexity. Huang and Ghrayeb [6] used two
scaling factors to further improve the performance of SOVA
(S-SOVA). Recently, researchers have used adaptive methods [7] and path metric exchange [8] techniques to reduce the complexity of SOVA. Besides SOVA, there are other techniques that improve the decoding of convolutional and Turbo codes. Sikora and Costello [9] modified the number of states
involved in the computation of the MAP algorithm, offering
good performance and complexity tradeoffs. Leanderson and
Sundberg [10], [11] introduced list decoding to improve the decoding performance of Turbo codes.
It is well known that the complexity of SOVA largely depends on the number of backtracking operations, or equivalently on the number of metric differences. In this paper, we propose to reduce the complexity of SOVA by trimming the metric differences with a trimming factor M. Since the proposed trimming SOVA (T-SOVA) only uses at most 1/M of the metric differences, it reduces the number of backtracking operations by a factor of at least M. However, due to the reduction of metric differences, there are fewer or no competitive paths with complementary decisions for many information bits. In other words, the LLRs of these information bits degrade or are even omitted. This reveals that both the choice of metric differences and the estimation of LLRs are very important for the quality of LLRs. For the choice of metric differences, smaller metric differences are more likely to determine LLRs than larger ones. For the estimation, both neighboring LLRs and intrinsic information are critical to the quality of LLRs. Then, a fast algorithm, the lazy Viterbi algorithm (Lazy VA) [12], is introduced in the first stage in place of the Viterbi algorithm to make T-SOVA more efficient. Besides its low complexity, Lazy VA indicates a good choice of metric differences without sorting, and gives a lower bound on neighboring LLRs. According to our analysis, T-SOVA provides good LLR quality [13] with at most 1/M of the backtracking operations of SOVA. Extrinsic information transfer chart (EXIT chart) analysis shows that T-SOVA with moderate M has convergence behavior similar to that of the Log-MAP algorithm. Moreover, simulation results on Turbo codes indicate that T-SOVA outperforms bi-SOVA and M-SOVA, and performs as well as S-SOVA and the Log-MAP algorithm.
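To give a rough idea of the trimming step alone (the function name and the sort-based selection below are illustrative assumptions, not the paper's construction, which obtains a good choice without sorting via the Lazy VA), one may retain only the smallest metric differences and backtrack only from the corresponding merge points; bits left uncovered would then have their LLRs estimated from neighboring LLRs and intrinsic information, as detailed in Section III:

    import numpy as np

    def trim_metric_differences(deltas, M):
        # Keep roughly a 1/M fraction of the metric differences, preferring the
        # smallest ones, since small differences are the most likely to determine
        # the final LLRs.  A naive sort is used here purely for illustration.
        deltas = np.asarray(deltas, dtype=float)
        keep = max(1, len(deltas) // M)
        return np.sort(np.argsort(deltas)[:keep])  # indices of retained merge points

    # example: with M = 4, only about a quarter of the merge points trigger backtracking
    rng = np.random.default_rng(0)
    selected = trim_metric_differences(rng.random(32), M=4)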
This paper is organized as follows. Section II gives
background knowledge. Section III presents the pro-
posed T-SOVA. Section IV gives performance and complexity analyses. Section V concludes this paper.