Setting a random User-Agent in Scrapy
Posted: 2023-06-05 12:47:18
Scrapy can rotate user-agents by defining a USER_AGENTS list in settings.py and pairing it with a small downloader middleware that picks one entry from the list at random for each outgoing request. (Scrapy does not read USER_AGENTS on its own; the middleware in step 2 does the selection.) Sending varied user-agent headers makes the crawler harder to identify as a bot and improves its stability and success rate. The setup is as follows:
1. Add a USER_AGENTS list to settings.py:
```
USER_AGENTS = [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0',
'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko',
'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36',
]
```
2. Add a RandomUserAgentMiddleware class to middlewares.py:
```
import random


class RandomUserAgentMiddleware:
    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        # Read the USER_AGENTS list defined in settings.py.
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        # setdefault only fills the header when the request does not
        # already carry an explicit User-Agent.
        request.headers.setdefault('User-Agent', random.choice(self.agents))
```
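The middleware's selection logic can be sanity-checked outside a running crawler. The sketch below is illustrative, not part of Scrapy: it re-declares the class without the Scrapy-specific from_crawler hook and substitutes a stub request object, since process_request only touches request.headers, and the agent strings are placeholders.

```python
import random


# Copy of the middleware above, minus from_crawler, so the selection
# logic can be exercised without a crawler instance.
class RandomUserAgentMiddleware:
    def __init__(self, agents):
        self.agents = agents

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))


# Stub standing in for scrapy.Request: a plain dict is enough here,
# because process_request only calls request.headers.setdefault.
class StubRequest:
    def __init__(self):
        self.headers = {}


agents = ['agent-a', 'agent-b', 'agent-c']  # placeholder values
mw = RandomUserAgentMiddleware(agents)

# A fresh request gets some agent from the list.
req = StubRequest()
mw.process_request(req, spider=None)
assert req.headers['User-Agent'] in agents

# Because of setdefault, an explicitly set User-Agent is preserved.
req2 = StubRequest()
req2.headers['User-Agent'] = 'fixed-agent'
mw.process_request(req2, spider=None)
assert req2.headers['User-Agent'] == 'fixed-agent'
```

The setdefault behavior is worth noting: it lets individual requests override the random choice by setting their own User-Agent header explicitly.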
3. Enable the middleware in settings.py, replacing Scrapy's built-in UserAgentMiddleware:
```
DOWNLOADER_MIDDLEWARES = {
'myproject.middlewares.RandomUserAgentMiddleware': 543,
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
```
Mapping the built-in UserAgentMiddleware to None disables it so the two middlewares do not conflict. With that, every outgoing request carries a user-agent drawn at random from the list.