The success of convolutional neural networks in SR mostly depends on the contribution of the convolution kernels learned from the training samples. To investigate the effects of different convolution kernels in SR tasks, we generated two distinct kernels of different sizes for a better visual representation. Then, the two kernels
were applied to a simple low-resolution image. The convolution results and the difference between the high-resolution and low-resolution images are shown in Fig. 3. As shown in the first row, the main difference between the high-resolution and low-resolution images lies at the edges. Therefore, the task of SR is to recover detailed information,
such as edges. Furthermore, the second and third rows in Fig. 3 show that convolution operations with different kernel sizes yield varying responses along the edges, and the strengths of the responses depend on the size of the convolution kernels. Because kernels of different sizes have different receptive field ranges, the larger convolution kernels induce stronger responses along the edges. Consequently, these convolution responses can be extracted as multi-scale information from the convolution kernels.
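This effect can be reproduced numerically. The following sketch is illustrative only: the 3x3 and 5x5 sizes and the box-filter kernels are assumptions, since the original kernel definitions are not reproduced here. It convolves a synthetic step-edge image with two mean kernels and measures how strongly each responds along the edge.

import numpy as np
from scipy.signal import convolve2d

# Simple low-resolution test image: a vertical step edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# Two mean (box) kernels of different sizes (assumed sizes, for illustration).
k3 = np.ones((3, 3)) / 9.0
k5 = np.ones((5, 5)) / 25.0

# Convolve the image with each kernel.
r3 = convolve2d(img, k3, mode="same", boundary="symm")
r5 = convolve2d(img, k5, mode="same", boundary="symm")

# The response along the edge is the deviation from the original image;
# the larger receptive field spreads the edge over more pixels, so the
# total absolute response is larger for the 5x5 kernel.
print("3x3 edge response:", np.abs(img - r3).sum())
print("5x5 edge response:", np.abs(img - r5).sum())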
Design of multi-scale network architecture
To illustrate the forward and back propagation mechanisms of the convolutional neural network, we constructed a simple convolution network stacked from two convolution layers, as shown in Fig. 4. Both convolution layers have only one convolution kernel. In the convolution network, the input low-resolution images are fed into the network and convolved sequentially by the convolution layers to obtain the feature maps. This procedure is called forward propagation. After the final convolution layer, the errors between the feature maps and the high-resolution images, namely the difference images, are computed based on the Euclidean distance in the loss layer. The difference images are very important for adjusting the kernel parameters of the final convolution layer. All parameters of each layer are adjusted using stochastic gradient descent.
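A minimal sketch of this two-layer network follows, written in PyTorch purely for illustration; the framework, kernel sizes, and learning rate are assumptions, not details taken from the paper.

import torch
import torch.nn as nn

# Two stacked convolution layers, each with a single kernel (as in Fig. 4).
net = nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=3, padding=1),  # first convolution layer (assumed 3x3)
    nn.Conv2d(1, 1, kernel_size=3, padding=1),  # second convolution layer (assumed 3x3)
)
loss_fn = nn.MSELoss()                            # Euclidean (L2) loss layer
opt = torch.optim.SGD(net.parameters(), lr=1e-3)  # stochastic gradient descent

lr_img = torch.rand(1, 1, 32, 32)  # interpolated low-resolution input (dummy data)
hr_img = torch.rand(1, 1, 32, 32)  # high-resolution target (dummy data)

pred = net(lr_img)            # forward propagation through both layers
loss = loss_fn(pred, hr_img)  # Euclidean distance between feature map and HR image
opt.zero_grad()
loss.backward()               # back propagation driven by the difference image
opt.step()                    # adjust the kernel parameters of each layer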
Due to the multi-scale properties of different kernel sizes, fusing convolution responses at different scales is assumed to accelerate the SR procedure. In the following study, we developed a simple MFCN, as shown in Fig. 5. Like the network depicted in Fig. 4, the MFCN has two convolution layers, and each layer has only one convolution kernel. We added a fusion layer to the network shown in Fig. 5. The function of the fusion layer is simply to add the feature maps from (b) and (c). Initially, the fused image had more details than the feature map in (c). Moreover, compared with the difference image I in Fig. 4, the difference image (f) in Fig. 5 is darker, which indicates a smaller error between the recovered image and the high-resolution image and is beneficial for accelerating the convergence in the training phase.
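One plausible wiring of this fusion is sketched below; it is an assumption, since the exact topology of Fig. 5 is not reproduced here. The sketch adds the first-layer feature map (b) to the second-layer feature map (c): because stacking convolutions enlarges the effective receptive field, the two maps carry information at different scales.

import torch
import torch.nn as nn

class SimpleMFCN(nn.Module):
    # Hypothetical reading of Fig. 5: a fusion layer adds the first-layer
    # feature map (b) to the second-layer feature map (c).
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # produces (b)
        self.conv2 = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # produces (c)

    def forward(self, x):
        b = self.conv1(x)  # first-scale feature map
        c = self.conv2(b)  # second-scale feature map (larger receptive field)
        return b + c       # fusion layer: element-wise addition

net = SimpleMFCN()
pred = net(torch.rand(1, 1, 32, 32))  # output keeps the input spatial size
print(pred.shape)                     # torch.Size([1, 1, 32, 32])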
Therefore, it is desirable to design a convolution network that combines different scale information. Reconstructed images benefit from end-to-end learning of low/high-resolution mappings.
[Fig. 2 Super-resolution reconstruction based on deep convolutional network]