Several retry methods for repeated request timeouts in a Python crawler (6 methods)
Method 1
import requests

headers = {}
url = 'https://www.baidu.com'
proxies = None
try:
    response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
except requests.RequestException:
    # logdebug('requests failed one time')
    try:
        response = requests.get(url, headers=headers, verify=False, proxies=proxies, timeout=3)
    except requests.RequestException:
        # logdebug('requests failed two time')
        print('requests failed two time')
Summary: the code is redundant; every extra retry means another nested try block and more lines of code, but it does make logging each attempt separately convenient.
Method 2
import requests

def requestDemo(url):
    headers = {}
    trytimes = 3  # number of retries
    for i in range(trytimes):
        try:
            response = requests.get(url, headers=headers, verify=False, timeout=3)
            # note: the status code here might also be 302, etc.
            if response.status_code == 200:
                break
        except requests.RequestException:
            # logdebug(f'requests failed {i} time')
            print(f'requests failed {i} time')
Summary: the loop version is clearly much simpler than the first method, and logging remains convenient.
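As written, requestDemo never hands the response back to the caller. A minimal sketch of the same loop that returns the result (the return statements are my addition):

import requests

def requestDemo(url, trytimes=3):
    headers = {}
    for i in range(trytimes):
        try:
            response = requests.get(url, headers=headers, verify=False, timeout=3)
            if response.status_code == 200:
                return response
        except requests.RequestException:
            print(f'requests failed {i} time')
    return None  # all retries exhausted

response = requestDemo('https://www.baidu.com')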
Method 3
import requests

def requestDemo(url, times=1):
    headers = {}
    trytimes = 3  # number of retries
    try:
        response = requests.get(url, headers=headers, verify=False, timeout=3)
        html = response.text
        # todo: normal processing logic goes here
        return html
    except requests.RequestException:
        # logdebug(f'requests failed {times} time')
        if times < trytimes:
            return requestDemo(url, times + 1)
        return 'out of maxtimes'
Summary: the recursive version looks more sophisticated, and if some other error occurs in the in-between processing code the request is retried all the same. Drawbacks: it is harder to follow and easy to get wrong, and when the try block covers too much code it hurts running speed.
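If retrying on processing errors is not wanted, the try block can be narrowed to the request itself, keeping the recursion but moving the processing out. A sketch based on the function above:

import requests

def requestDemo(url, times=1, trytimes=3):
    headers = {}
    try:
        response = requests.get(url, headers=headers, verify=False, timeout=3)
    except requests.RequestException:
        if times < trytimes:
            return requestDemo(url, times + 1, trytimes)
        return 'out of maxtimes'
    # processing sits outside the try, so its own bugs no longer trigger a retry
    return response.text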
Method 4
import requests

def retry(times):
    def wrapper(func):
        def inner(*args, **kwargs):
            for i in range(times):
                try:
                    return func(*args, **kwargs)
                except requests.RequestException:
                    # logdebug(f'requests failed {i} time')
                    print(f'requests failed {i} time')
        return inner
    return wrapper

@retry(3)  # number of retries: 3
def requestDemo(url):
    headers = {}
    response = requests.get(url, headers=headers, verify=False, timeout=3)
    html = response.text
    # todo: normal processing logic goes here
    return html
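For comparison, requests can also retry at the transport level by mounting urllib3's Retry on a Session, which avoids writing any retry logic by hand. A minimal sketch; the retry count, backoff factor, and status codes shown are illustrative:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))
response = session.get('https://www.baidu.com', timeout=3)

backoff_factor adds a growing pause between attempts, so retries do not hammer the server.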