this system can reach up to several millimeters without any dis-
tal optics. In addition, we show that the image reconstruction
process is remarkably robust with regard to external perturba-
tions, such as temperature variation and fiber bending. Finally, the transfer-learning capability of the new system is confirmed by testing on cells of different morphologies and classes. The work presented here introduces a new platform
for various practical applications, such as biomedical research
and clinical diagnosis. The performance of the Cell-DCNN-GALOF system is superior to that of state-of-the-art systems. It also provides a new cornerstone for imaging research based on waveguide devices that exploit transverse Anderson localization.
2 Methods
The experimental setup and the details of the DCNN are shown in
Fig. 1. The GALOF used here is fabricated using the stack-and-
draw method. Silica capillaries with different diameters and air-
filling fractions are fabricated first. The outer diameter of the
silica capillaries ranges from about 100 to 180 μm, and the ratio
of inner diameter to outer diameter ranges from 0.5 to 0.8. To
make a preform, capillaries are randomly fed into a silica jacket
tube. In the following steps, the preform is drawn into canes with an outer diameter of around 3 mm. Finally, the cane is drawn into the
GALOF with the desired size. The SEM image of the GALOF
cross-section is shown in Fig. 1(a).
In Fig. 1(a), the light source is an LED with a center wave-
length of 460 nm. An 80-cm-long GALOF sample is utilized.
The diameter of the disordered structure is about 278 μm, and the air-hole-filling fraction in the disordered structure is ∼28.5%.39 The numerical aperture (NA) of the GALOF, based on far-field emission angles, is measured to be ∼0.4; see Fig. S5 in the Supplementary Material. The temperature of a GALOF
segment can be raised by the heater underneath. A 10-mm-long
section in the middle of the GALOF is heated. We use fixed
stained cell samples in all of our experiments. The images of
cell samples are magnified by a 10× objective (NA = 0.3) and
split into two copies, which are sent into a reference path and a measure-
ment path, respectively. The cell samples are scanned both ver-
tically and horizontally with steps of 5 μm to obtain training,
validation, and test data sets. In the reference beam path,
the image is further magnified by a 20× objective (NA = 0.75)
and recorded by CCD 1 (Manta G-145B, 30 fps) after passing
through a tube lens. In the measurement path, the image is trans-
ported through the 80-cm-long GALOF and then projected onto
CCD 2 (Manta G-145B, 30 fps) by the same combination of
a 20× objective and tube lens. The reference images are labeled
as the ground truth. Both reference and raw images are 8-bit
grayscale images and are cropped to a size of 418 × 418 pixels.
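As a minimal illustration of this preprocessing step, the Python sketch below pairs and normalizes one raw/reference frame; the function names are ours, and the center-crop choice is an assumption, since the text does not specify how the 418 × 418 window is positioned:

```python
import numpy as np

def center_crop(frame, size=418):
    """Center-crop a 2-D grayscale frame to size x size pixels (assumed crop window)."""
    h, w = frame.shape
    top, left = (h - size) // 2, (w - size) // 2
    return frame[top:top + size, left:left + size]

def make_pair(raw_frame, ref_frame):
    """Crop a raw/reference frame pair and scale the 8-bit data to [0, 1]."""
    raw = center_crop(raw_frame).astype(np.float32) / 255.0
    ref = center_crop(ref_frame).astype(np.float32) / 255.0
    return raw, ref
```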
Figure 1(b) shows that experiments are performed for both
straight GALOF and bent GALOF. To bend the fiber, the input
end of the GALOF is fixed, whereas the output end of the
GALOF is moved by an offset distance. The amount of bending
is quantified by the offset distance from the end of the bent fiber
to the position of the straight fiber (equal to the length of the
dashed line). The relation between the offset distance d and the corresponding bending angle θ of the fiber is given by d = L[1 − cos(θ)]/θ, where L is the total length of the GALOF.
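For a concrete sense of scale, the short Python sketch below evaluates this relation for the 80-cm fiber used here and inverts it numerically, since the relation is transcendental in θ; the specific angle and offset values are illustrative, not measurements from the experiment:

```python
import numpy as np
from scipy.optimize import brentq

L = 0.80  # total GALOF length in meters (80 cm, as in the experiment)

def offset(theta, L=L):
    """Offset distance d = L*(1 - cos(theta))/theta for bending angle theta in radians."""
    return L * (1.0 - np.cos(theta)) / theta

# Forward: offset produced by an (illustrative) 30-deg bend.
print(f"d = {offset(np.deg2rad(30.0)) * 100:.1f} cm")  # ~20.5 cm

# Inverse: recover the bending angle for a 5-cm offset by root finding.
theta = brentq(lambda t: offset(t) - 0.05, 1e-6, np.pi / 2)
print(f"theta = {np.rad2deg(theta):.1f} deg")  # ~7.2 deg
```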
Figure 1(c) shows the detailed structure of the DCNN. The
raw image, which is resized to 420 × 420 using zero padding,
is the input layer. The input layer is decimated by five down-
sampling blocks (blue and black arrows) to extract the feature
maps. Then five up-sampling blocks (white arrows) and one
convolutional block (yellow arrow) are applied to reconstruct
the images of cell samples with a size of 418 × 418. To visualize
the image reconstruction process, some sample feature maps are
shown in Fig. S6 in the Supplementary Material. The skip con-
nections (dark green arrows) pass feature information from fea-
ture-extraction layers to reconstruction layers by concatenation
operations. The mean absolute error (MAE)-based loss metrics
are calculated by comparing the reconstructed images with the
reference images. The MAE is defined as jI
rec
− I
ref
j∕ðwhÞ,
where I
rec
, I
ref
, w, and h are the reconstructed image intensity,
the reference image intensity, the width, and the height of the
images, respectively. The parameters of the DCNN are opti-
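In code, this loss reduces to a per-pixel mean of absolute differences; the following NumPy function (our own illustration, equivalent to Keras's built-in mean_absolute_error) makes that explicit:

```python
import numpy as np

def mae(i_rec, i_ref):
    """Summing |I_rec - I_ref| over all pixels and dividing by w*h
    is the same as taking the per-pixel mean computed here."""
    return np.mean(np.abs(i_rec.astype(np.float32) - i_ref.astype(np.float32)))
```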
The parameters of the DCNN are optimized by minimizing the loss. Detailed block operation dia-
grams corresponding to the respective arrows are shown on
the right side of Fig. 1(d) (BN, batch normalization; ReLU, rectified linear unit; Conv, convolution; D-Conv, dilated convolution; T-Conv, transposed convolution; concat, concatenation).
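A minimal Keras sketch of such a five-level encoder-decoder with skip connections is given below. It is not the authors' exact network: the filter counts, kernel sizes, L2 weight, output activation, and the 416 × 416 working size (chosen so that five stride-2 stages divide evenly, unlike the paper's 420 → 418 mapping) are our assumptions, and the dilated convolutions of the D-Conv blocks are omitted for brevity:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def down_block(x, filters):
    """Feature-extraction block: strided conv + BN + ReLU (blue/black arrows)."""
    x = layers.Conv2D(filters, 3, strides=2, padding="same",
                      kernel_initializer="truncated_normal",
                      kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def up_block(x, skip, filters):
    """Reconstruction block: transposed conv + BN + ReLU, then skip concatenation."""
    x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                               kernel_initializer="truncated_normal",
                               kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return layers.Concatenate()([x, skip])

inputs = keras.Input(shape=(416, 416, 1))     # raw fiber-transported image
x, skips = inputs, []
for f in (32, 64, 128, 256, 512):             # five down-sampling blocks
    skips.append(x)
    x = down_block(x, f)
for f, skip in zip((512, 256, 128, 64, 32), reversed(skips)):
    x = up_block(x, skip, f)                  # five up-sampling blocks with skips
outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # final conv block
model = keras.Model(inputs, outputs)
```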
The Keras framework is applied to develop the program code
for the DCNN. The regularization applied in the DCNN is de-
fined by the L2-norm. The parameters of the DCNN are initial-
ized by a truncated normal distribution. For both training and
evaluation, the MAE is utilized as the metric. The Adam opti-
mizer is adopted to minimize the loss function. During the train-
ing process, the batch size is set at 64 and the training is run
through 80 epochs with shuffling at each epoch for all of the
data shown in this paper. The learning rate is set at 0.005. Both
training and test processes are run in parallel on two GPUs
(GeForce GTX 1080 Ti).
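A training configuration matching these reported hyperparameters might look as follows; build_model() is a hypothetical wrapper around the encoder-decoder sketched above, and raw_train, ref_train, raw_val, and ref_val are placeholders for the paired data sets:

```python
import tensorflow as tf

# Data parallelism over the two GPUs via a mirrored strategy (one common
# Keras approach; the paper does not state which mechanism was used).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_model()  # hypothetical constructor for the network above
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
                  loss="mean_absolute_error",
                  metrics=["mean_absolute_error"])

model.fit(raw_train, ref_train,
          validation_data=(raw_val, ref_val),
          batch_size=64, epochs=80, shuffle=True)
```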
3 Results
3.1 Imaging of Multiple Cell Types
To demonstrate the image reconstruction capability, two dif-
ferent types of cells, human red blood cells and cancerous
human stomach cells, serve as objects. By scanning across dif-
ferent areas of the cell sample, we collect 15,000 pairs of reference and raw images as the training set, 1000 image pairs as the validation
set, and another 1000 image pairs as the test set for each type of
cell. During the first data acquisition process, the GALOF is
kept straight and at room temperature of about 20°C. The im-
aging depth is 0 mm, meaning that the image plane is located
directly at the fiber input facet. The training data are loaded into
the DCNN [see Fig. 1(c) for DCNN structure] to optimize the
parameters of the neural network and generate a computational
architecture that can accurately map the fiber-transported im-
ages to the corresponding original object. After the training pro-
cess, the test data are applied to the trained model to perform image reconstruction and evaluate its performance using the normalized MAE as the metric. In the first round of experi-
ments, we train and test each type of cell separately. With a
training data set of 15,000 image pairs, it takes about 6.4 h
to train the DCNN over 80 epochs on two GPUs using a per-
sonal computer. The accuracy improvement curves for both
training and validation processes over all 80 epochs are pro-
vided in Fig. S1 in the Supplementary Material. After training,
the reconstruction time of a single test image is about 0.05 s.
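Such per-image latency can be checked with a simple timing sketch like the one below, where raw_test is a hypothetical array of preprocessed raw images and the first call serves as a warm-up:

```python
import time

x = raw_test[:1]        # a single raw fiber-transported test image
_ = model.predict(x)    # warm-up: graph construction and GPU transfer
t0 = time.perf_counter()
_ = model.predict(x)
print(f"reconstruction time: {time.perf_counter() - t0:.3f} s")
```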
Figure 2 shows some samples from the test data set. In
Figs. 2(a)–2(c), reference images, raw images, and recovered
images of three successively acquired human red blood cell samples are shown, whereas in Figs. 2(d)–2(f),
three images of cancerous stomach cells are presented.
Comparing the reference images with the reconstructed images,