As shown in Figure 1, the proposed network contains four strided convolutional layers and four strided deconvolutional layers. The Leaky Rectified Linear Unit (LReLU) with a negative slope of 0.2 is used after each convolutional and deconvolutional layer. The residual group [38] consists of three residual blocks, and 18 residual blocks are used in GRes. The filter size is set to 11×11 pixels in the first convolutional layer of the encoder module and 3×3 in all other convolutional and deconvolutional layers. We jointly train the MSBDN and DFF module and use the Mean Squared Error (MSE) as the loss function to constrain the network output against the ground truth. The entire training process contains 100 epochs optimized by the ADAM solver [28] with β1 = 0.9 and β2 = 0.999 and a batch size of 16. The initial learning rate is set to 10^-4 with a decay rate of 0.75 after every 10 epochs. All experiments are conducted on an NVIDIA 2080Ti GPU. The source code and trained models are available at https://github.com/BookerDeWitt/MSBDN-DFF
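The step-decay learning-rate schedule described above can be sketched in plain Python. This is a minimal illustration, not the authors' code; the function name `step_lr` and its defaults are hypothetical, with the hyperparameters (initial rate 10^-4, decay 0.75 every 10 epochs, 100 epochs) taken from the paragraph:

```python
def step_lr(initial_lr=1e-4, gamma=0.75, step=10, epochs=100):
    """Learning rate at each epoch under a step-decay schedule:
    the rate is multiplied by `gamma` after every `step` epochs."""
    return [initial_lr * gamma ** (epoch // step) for epoch in range(epochs)]

lrs = step_lr()
# Epochs 0-9 use 1e-4; epochs 10-19 use 0.75e-4; and so on.
```

In a PyTorch implementation this would typically be expressed with `torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))` together with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.75)`.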