Python Crawler with the BeautifulSoup Package, Example (3)

This post walks through a Python crawler example around the BeautifulSoup package in detail; it has some reference value, and interested readers are welcome to follow along.

We build a crawler step by step that scrapes jokes from Qiushibaike.

For now, the page is parsed without using the beautifulsoup package.

Step 1: request the URL and fetch the page source
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()
    print content.decode('utf-8')
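As an aside not in the original: under Python 3, urllib2 was merged into urllib.request, so the same request setup would look like the sketch below. No network call is made here; the snippet only builds the Request object and confirms that the custom User-Agent header was attached.

```python
# Python 3 sketch of the same request setup (urllib2 became urllib.request).
# Only the Request object is built; no network request is sent.
from urllib.request import Request

url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/54.0.2840.99 Safari/537.36')
request = Request(url, headers={'User-Agent': user_agent})

# urllib stores header names with only the first letter capitalized,
# hence the lookup key 'User-agent'
print(request.get_header('User-agent'))
```

To actually fetch the page, you would pass the object to urllib.request.urlopen(request) and catch urllib.error.HTTPError and urllib.error.URLError, mirroring the Python 2 code above.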
Step 2: extract the information with regular expressions

First, study the page source to find where the content you need lives and what identifies it, then write a regular expression to match and capture it.

Note that '.' in a regular expression cannot match a newline by default, so the matching mode needs to be adjusted (the re.S flag makes '.' match newlines as well).
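A quick self-contained illustration of that flag (the snippet string below is invented for demonstration):

```python
import re

# '.' does not cross newlines by default, so a value spanning
# several lines is missed unless re.S (DOTALL) is set.
html = '<span>line one\nline two</span>'

print(re.findall('<span>(.*?)</span>', html))        # no match without re.S
print(re.findall('<span>(.*?)</span>', html, re.S))  # captures both lines
```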
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data
    # Mind the newlines: re.S lets '.' match newline characters as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)
    for item in items:
        print item
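To check the extraction step in isolation, the same compiled pattern can be run against a small hand-written fragment. The HTML below is invented to mimic the structure the pattern targets; the markup of the live page may differ.

```python
import re

# Invented fragment mimicking the <div class="content"> / <span>
# structure that the article's pattern targets
content = '''
<div class="content">
<span>First joke,
spanning two lines</span>
</div>
<div class="content">
<span>Second joke</span>
</div>
'''

regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
items = re.findall(regex, content)
for item in items:
    print(item)
```

The non-greedy `.*?` keeps each match inside a single div block, and re.S lets the capture cross the line break inside the first joke.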