Experimental Demonstration of 60 Gb/s Optical OFDM over a 100 m OM1 MMF System at 1550 nm

"实验展示了在100米OM1多模光纤(MMF)链接上,基于强度调制和直接检测(IMDD)的60 Gb/s光正交频分复用(OFDM)传输的新纪录。利用10 GHz电吸收调制激光强度调制器在单一1550纳米波长下实现。提出的方案采用了自适应比特加载和中心发射方案,有效地对抗了信道衰落,并简化了系统结构,对于短距离数据中心互连具有良好的潜力。" 这篇摘要描述了一项关于光通信领域的实验研究,主要关注的是在100米的OM1多模光纤中实现60 Gb/s的高速光正交频分复用(OFDM)传输技术。这种技术通常用于提高光通信系统的数据容量和传输效率。 首先,60 Gb/s的传输速度是该实验的显著特点,这标志着在IMDD系统中达到了新的传输速率记录。IMDD是一种常见的光通信接收技术,它通过直接检测光信号的强度来解码信息,相比其他技术,它更简单且成本较低。 其次,研究中使用了10 GHz电吸收调制激光强度调制器。电吸收调制器是一种关键的光电子器件,能够通过改变激光器的吸收特性来调整其输出功率,从而实现数据的编码。选择在1550纳米波长工作,是因为这个波段是光纤通信的常用窗口,具有低损耗和高的传输效率。 此外,实验还引入了自适应比特加载策略,这是一种动态分配比特到不同子载波的技术,可以根据信道条件优化数据传输,以降低误码率并提高系统的整体性能。结合简单的中心发射方案,这种方法可以有效地减轻由多模光纤引起的信道衰落问题,这种衰落通常源于模式色散,即不同模式在光纤中的传播速度不同。 最后,研究指出,这项技术在应对信道衰落和简化系统结构方面显示出良好的效果,预示着它在短距离数据中心互连中具有广阔的应用前景。数据中心互连需要高带宽、低延迟的连接,而此实验成果可能为实现这些需求提供一种解决方案。 这篇摘要涉及了高速光通信、多模光纤、IMDD技术、电吸收调制器、OFDM、自适应比特加载和中心发射等关键技术,这些都是当前光通信领域的重要研究方向。

4 Experiments

This section examines the effectiveness of the proposed IFCS-MOEA framework. First, Section 4.1 presents the experimental settings. Second, Section 4.2 examines the effect of IFCS on MOEA/D-DE. Then, Section 4.3 compares the performance of IFCS-MOEA/D-DE with five state-of-the-art MOEAs on 19 test problems. Finally, Section 4.4 compares the performance of IFCS-MOEA/D-DE with five state-of-the-art MOEAs on four real-world application problems.

4.1 Experimental Settings

MOEA/D-DE [23] is integrated with the proposed framework for the experiments, and the resulting algorithm is named IFCS-MOEA/D-DE. Five surrogate-based MOEAs, i.e., FCS-MOEA/D-DE [39], CPS-MOEA [41], CSEA [29], MOEA/D-EGO [43], and EDN-ARMOEA [12], are used for comparison. The UF1–10 and LZ1–9 test problems [44, 23], which have complicated Pareto sets (PSs), are used for the experiments. Among them, UF1–7, LZ1–5, and LZ7–9 have 2 objectives, while UF8–10 and LZ6 have 3 objectives. UF1–10, LZ1–5, and LZ9 have 30 decision variables, and LZ6–8 have 10 decision variables.

The population size N is set to 45 for all compared algorithms. The maximum number of FEs is set to 500, since the problems are treated as expensive MOPs [39]. Each algorithm is executed 21 times independently on each test problem. For IFCS-MOEA/D-DE, wmax is set to 30 and η is set to 5. For the other algorithms, the settings suggested in their original papers are used. The IGD [6] metric is used to evaluate the performance of each algorithm. All algorithms are examined on the PlatEMO [34] platform.
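For reference, the IGD (inverted generational distance) metric used above is the average Euclidean distance from each point of a sampled reference Pareto front to its nearest obtained solution, so lower values are better. The sketch below shows a straightforward implementation; the toy front and sample data are illustrative assumptions.

```python
# Minimal sketch of the IGD metric: mean distance from reference-front points
# to their nearest obtained solution (captures both convergence and spread).
import numpy as np

def igd(reference_front, obtained_set):
    """reference_front: (R, M) samples of the true Pareto front;
    obtained_set: (S, M) objective vectors found by the algorithm."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_set, dtype=float)
    # Pairwise distance matrix of shape (R, S).
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy 2-objective example on the front f2 = 1 - sqrt(f1) (a ZDT1-like shape,
# used here purely for illustration).
f1 = np.linspace(0.0, 1.0, 100)
ref = np.column_stack([f1, 1.0 - np.sqrt(f1)])
obt = ref[::10] + 0.01          # a coarse, slightly offset approximation
print(igd(ref, obt))
```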


2023-06-09 09:46:11.022252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1900] Ignoring visible gpu device (device: 0, name: GeForce GT 610, pci bus id: 0000:01:00.0, compute capability: 2.1) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.5.
2023-06-09 09:46:11.022646: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:5 out of the last 9 calls to <function Model.make_test_function.<locals>.test_function at 0x0000017BB39D0670> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x0000017BB3AE83A0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
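The retracing warnings above already spell out the usual fix, sketched below: create the tf.function once, outside the loop, and call it with stable input shapes so the trace is reused. The loss function and loop here are illustrative assumptions; only the @tf.function usage and the experimental_relax_shapes option come from the log itself.

```python
# Sketch of the fix suggested by the warning: define the tf.function once at
# module level (traced on first call, then reused), not inside the loop.
import tensorflow as tf

@tf.function
def squared_error(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

y_true = tf.constant([1.0, 2.0, 3.0])
for step in range(5):
    y_pred = y_true + 0.1 * step
    # Same function object, same input shapes/dtypes -> no retracing.
    loss = squared_error(y_true, y_pred)
    print(step, float(loss))

# Anti-pattern (triggers the warning): wrapping a new tf.function each
# iteration, or calling with tensors whose shapes change every step. For
# genuinely varying shapes, tf.function(..., experimental_relax_shapes=True)
# relaxes the shape-specialized tracing, as the warning text notes.
```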
