Training YOLOv5 for mAP Gains
Date: 2023-08-30 21:10:13 | Views: 126
Introducing MobileViTAttention, a self-attention module from MobileViT (a lightweight, general-purpose vision transformer for mobile devices), into YOLOv5 can noticeably improve detection accuracy, especially on small objects, at the cost of higher GPU memory during training. Separately, the ConvNeXt V2 authors propose a normalization technique called Global Response Normalization (GRN), which makes FCMAE (fully convolutional masked autoencoder) pre-training combine more effectively with the ConvNeXt architecture. They observed a "feature collapse" phenomenon: many dead or saturated feature maps appear in the channel-expansion layer of the MLP inside ConvNeXt blocks, and GRN counteracts this by promoting feature diversity across channels. The author validated the yolov5s_C2f_ConvNeXtV2Block.yaml model on multiple datasets and reported clear accuracy gains on small and occluded objects. Through such modified networks, reproductions of cutting-edge papers (CVPR and others), and combinations of these optimizations, YOLOv5 achieves a clear mAP improvement. <span class="em">1</span><span class="em">2</span><span class="em">3</span>
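To make the GRN step concrete, here is a minimal NumPy sketch of the operation as described in the ConvNeXt V2 paper: per-channel L2 norms are aggregated over the spatial dimensions, divisively normalized across channels, and used to recalibrate the input with a residual connection. The function name, the `(N, H, W, C)` layout, and passing `gamma`/`beta` as plain arguments are choices for illustration; in the original PyTorch implementation they are learnable parameters initialized to zero, so GRN starts out as an identity mapping.

```python
import numpy as np

def grn(x, gamma, beta, eps=1e-6):
    """Global Response Normalization (ConvNeXt V2), sketched in NumPy.

    x: feature map of shape (N, H, W, C)
    gamma, beta: scale/shift (scalars here; learnable per-channel in practice)
    """
    # Global feature aggregation: per-channel L2 norm over spatial dims
    gx = np.sqrt((x ** 2).sum(axis=(1, 2), keepdims=True))      # (N, 1, 1, C)
    # Divisive normalization: each channel's norm relative to the channel mean
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)           # (N, 1, 1, C)
    # Feature calibration with residual connection
    return gamma * (x * nx) + beta + x
```

With `gamma = beta = 0` the output equals the input, which is why zero initialization lets GRN be dropped into a pretrained block without disturbing it; during training the parameters then learn how strongly to amplify under-used channels.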
#### References
- *1* [YOLOv5/YOLOv7 accuracy-boosting tricks: MobileViT, a lightweight general-purpose vision transformer for mobile; MobileViTAttention helps small-object detection, boosting...](https://blog.csdn.net/m0_63774211/article/details/130898507)
- *2* *3* [YOLOv5 accuracy-boosting tricks: ConvNeXtV2Block combined with C2f aids detection | ConvNeXt V2 is here: with only the simplest convolutional architecture, performance rivals...](https://blog.csdn.net/m0_63774211/article/details/131113572)