CoOp and CoCoOp
Date: 2023-09-12 14:11:10
CoOp and CoCoOp are two different models that differ in their learnable components. CoOp introduces only a small number of context tokens, whereas CoCoOp introduces additional parameters, namely a Meta-Net. Ablation experiments show that the increased parameter size is not the key factor. Because training CoCoOp consumes considerably more GPU memory than CoOp, the experiments trained CoCoOp with a batch size of 1 for 10 epochs. Overall, CoCoOp falls between CoOp and CLIP in several respects: on base classes it is worse than CoOp but better than CLIP, while on unseen classes it is worse than CLIP but better than CoOp. [1][2][3]
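To make the architectural difference concrete, here is a minimal NumPy sketch of the two prompt constructions: CoOp uses one shared set of learnable context tokens, while CoCoOp's Meta-Net (a small MLP) maps each image feature to a shift vector that is added to every context token. The dimensions and random weights here are toy stand-ins, not the paper's learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8     # toy embedding dimension (stand-in for the real model's width)
N_CTX = 4   # number of learnable context tokens

# CoOp: a single set of context vectors shared by all images.
ctx = rng.normal(size=(N_CTX, DIM))

# CoCoOp's Meta-Net: a tiny two-layer MLP producing an instance-conditional
# shift pi(x); random weights stand in for learned parameters.
W1 = rng.normal(size=(DIM, DIM // 4))
W2 = rng.normal(size=(DIM // 4, DIM))

def meta_net(image_feat):
    """Map an image feature to the conditional shift pi(x)."""
    hidden = np.maximum(image_feat @ W1, 0.0)  # ReLU
    return hidden @ W2

def cocoop_prompt(image_feat, class_embedding):
    """Build the conditional prompt [v1 + pi, ..., vM + pi, class]."""
    pi = meta_net(image_feat)
    conditioned_ctx = ctx + pi  # broadcast the shift over all tokens
    return np.vstack([conditioned_ctx, class_embedding[None, :]])

image_feat = rng.normal(size=DIM)
class_emb = rng.normal(size=DIM)
prompt = cocoop_prompt(image_feat, class_emb)
print(prompt.shape)  # (N_CTX + 1, DIM)
```

Because the shift depends on the image, two different images yield two different prompts for the same class, which is what lets CoCoOp generalize better than CoOp to unseen classes at the cost of extra per-instance computation.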
#### References
- *1* [CoOp & CoCoOp](https://blog.csdn.net/qq_46563097/article/details/130281970)
- *2* *3* [CoCoOp: Conditional Prompt Learning for Vision-Language Models](https://blog.csdn.net/LuvLive/article/details/130601750)