VideoMAE V2
Date: 2023-09-01 17:09:30
VideoMAE V2 is a simple and efficient self-supervised learning method for video Transformer pre-training. It introduces two key designs, an extremely high masking ratio and a tube masking strategy, which make the video reconstruction task more challenging and mitigate information leakage between frames. Experiments show that VideoMAE V2 works on video datasets of different scales and can produce effective results with only a few thousand video clips, which makes it of practical value in data-limited scenarios. [2]
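To illustrate the tube masking idea described above, here is a minimal NumPy sketch (not the authors' implementation; the shapes and the 90% ratio are illustrative): a single spatial mask is sampled once and repeated across every frame, so a masked patch stays masked through time and cannot be trivially recovered from a neighboring frame.

```python
import numpy as np

def tube_mask(num_frames, num_patches, mask_ratio=0.9, seed=0):
    """Sample one spatial mask and tile it across all frames (tube masking).

    Because the same patch positions are hidden in every frame, the model
    cannot reconstruct a masked patch by copying it from an adjacent frame,
    which mitigates temporal information leakage.
    """
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    spatial_mask = np.zeros(num_patches, dtype=bool)
    spatial_mask[rng.choice(num_patches, num_masked, replace=False)] = True
    # Tile the single spatial mask along the time axis -> shape (T, N)
    return np.tile(spatial_mask, (num_frames, 1))

# Example: 8 frames, 14x14 = 196 patches per frame, 90% masked
mask = tube_mask(num_frames=8, num_patches=196, mask_ratio=0.9)
```

Every row of `mask` is identical, which is exactly the "tube" property: a masked token is masked for the whole clip.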
#### References
- *1* [【Paper】Reproducing VideoMAE](https://blog.csdn.net/m0_51371693/article/details/131408101)
- *2* [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://blog.csdn.net/weixin_51697828/article/details/125117105)
- *3* [Paper reading: VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-...](https://blog.csdn.net/qq_42740834/article/details/129363049)