Future Generation Computer Systems 83 (2018) 85–94
3D visual discomfort predictor based on subjective perceived-constraint sparse representation in 3D display system

Haiyong Xu a,b, Gangyi Jiang b,*, Mei Yu b, Ting Luo a,b, Zongju Peng b, Feng Shao b, Hao Jiang b

a College of Science and Technology, Ningbo University, Ningbo, 315211, China
b Faculty of Information Science and Engineering, Ningbo University, Ningbo, 315211, China
Highlights

• In this paper, based on the mechanism of neural activity in V1, the feature space of visual discomfort is established by considering the significance map and spatial frequency of V1.
• Then, a subjective perceived-constraint sparse representation (SPCSR) is constructed by considering the sparse coding of simple and complex cells in the receptive field and the human learning mechanism.
• Finally, a 3D visual discomfort predictor (3D-VDP) with SPCSR is proposed.
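As a rough illustration of the first highlight, the sketch below builds a toy feature vector from a disparity map weighted by a significance (saliency-style) map, using the mean gradient magnitude of disparity as a crude spatial-frequency proxy. The function name and the exact statistics are illustrative assumptions, not the paper's actual feature definitions.

```python
import numpy as np

def discomfort_features(disparity, significance):
    """Toy visual-discomfort feature vector (illustrative, not the paper's):
    significance-weighted disparity mean and variance, plus a crude
    spatial-frequency proxy (mean gradient magnitude of the disparity map)."""
    w = significance / (significance.sum() + 1e-12)  # normalize weights
    mean_d = float((w * disparity).sum())            # weighted mean disparity
    var_d = float((w * (disparity - mean_d) ** 2).sum())  # weighted variance
    gy, gx = np.gradient(disparity.astype(float))    # spatial derivatives
    freq = float(np.mean(np.hypot(gx, gy)))          # gradient-magnitude proxy
    return np.array([mean_d, var_d, freq])
```

On a constant disparity map the variance and frequency terms vanish, so the vector reduces to the weighted mean disparity alone.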
Article info

Article history:
Received 8 March 2017
Received in revised form 24 December 2017
Accepted 9 January 2018
Available online 3 February 2018
Keywords:
3D visual discomfort predictor
Subjective perceived-constraint sparse
representation
Neural activity mechanism
Receptive field
Abstract
Three-dimensional (3D) display systems have been widely adopted owing to the increasing availability of 3D content. However, viewers may experience visual discomfort due to the limited viewing zone of 3D display systems. Therefore, predicting 3D visual discomfort is important for optimizing 3D display systems. In this paper, we propose a 3D visual discomfort predictor (3D-VDP) based on the visual discomfort features of the primary visual cortex (V1) and the properties of subjective perceived-constraint sparse representation (SPCSR). The major technical contribution of this study is to embed subjective visual discomfort values as a constraint into the sparse representation so that the learned dictionary is better suited to visual perception. Specifically, the proposed 3D-VDP with SPCSR consists of two phases. In the training phase, first, the neural activity mechanism of V1 is considered, and visually important disparity and spatial-frequency disparity features are extracted to highlight the influence of disparity on the comfort of stereoscopic images. Second, by considering the visual properties of the receptive field and the learning mechanism, a perceived dictionary of visual discomfort and the corresponding perceived visual discomfort values are obtained by applying the subjective visual discomfort values as a constraint in a supervised dictionary learning algorithm. In the testing phase, the sparse coefficients of visual discomfort of a stereoscopic image are computed over the perceived dictionary using a sparse coding algorithm, and the final visual discomfort score of the image is obtained from the weighted sparse coefficients and the perceived visual discomfort values. Experimental results on the IVY LAB and NBU databases demonstrate that, compared with closely related existing models, the proposed 3D-VDP with SPCSR achieves high consistency with subjective assessment.
© 2018 Elsevier B.V. All rights reserved.
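The testing phase summarized in the abstract — sparse coding over a learned perceived dictionary, followed by a weighted combination of perceived discomfort values — can be sketched as follows, under simplifying assumptions: a greedy orthogonal matching pursuit stands in for whatever sparse coding solver the paper actually uses, absolute coefficient magnitudes serve as the weights, and all names (`omp`, `predict_discomfort`, `D`, `v`) are illustrative, not the authors'.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k
    atoms (columns) of dictionary D."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, x, rcond=None)  # re-fit on support
        coef[:] = 0.0
        coef[support] = c
        residual = x - D @ coef
    return coef

def predict_discomfort(D, v, x, k=3):
    """Score a feature vector x: sparse-code it over perceived dictionary D,
    then average the perceived discomfort values v, weighted by the
    normalized magnitudes of the sparse coefficients."""
    a = np.abs(omp(D, x, k))
    w = a / (a.sum() + 1e-12)
    return float(w @ v)
```

If a test feature vector coincides with a single dictionary atom, all the weight falls on that atom and the prediction reduces to its perceived discomfort value, which matches the intuition behind dictionary-based prediction.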
1. Introduction
* Corresponding author. E-mail address: gyjiang@nbu.edu.cn (G. Jiang).

Three-dimensional (3D) display systems are widely used nowadays, and 3D multimedia, which can provide new visual experiences such as stereoscopic viewing and viewpoint interaction, has been attracting increasing attention and is regarded as the next generation of media [1]. However, humans may experience a degree of discomfort, including eye fatigue, headaches, and nausea, due to the influence of this media on the human visual system (HVS), the stereoscopic image/video content, the viewing conditions, and other factors, all of which greatly reduce the quality of the experience [2]. Thus, it is of
https://doi.org/10.1016/j.future.2018.01.021
0167-739X/© 2018 Elsevier B.V. All rights reserved.