Write Python code that scrapes all attractions nationwide from Qunar, including each attraction's attributes and user information, and imports the data into a database, populating the attraction favorite record table, comment table, attraction table, like record table, attraction type table, and user table
Posted: 2024-01-21 13:01:57 · Views: 133
Below is Python code for scraping nationwide attractions and their attributes:
```python
import requests
from bs4 import BeautifulSoup

# Fetch the attraction list page and parse each attraction's basic attributes
def get_scenic_list(url):
    html = requests.get(url).content.decode('gbk')
    soup = BeautifulSoup(html, features='lxml')
    scenic_list = []
    for li in soup.select('.list_mod2 > li'):
        scenic = {}
        scenic['name'] = li.find('img')['alt']
        scenic['link'] = li.find('a')['href']
        scenic['address'] = li.select_one('.ellipsis.pro_add').string
        scenic['score'] = float(li.select_one('.comment .score').string)
        # strip the surrounding parentheses, e.g. "(12345)" -> 12345
        scenic['comment_num'] = int(li.select_one('.comment .sum').string[1:-1])
        scenic_list.append(scenic)
    return scenic_list

# Fetch an attraction's detail page
def get_scenic_detail(url):
    html = requests.get(url).content.decode('gbk')
    soup = BeautifulSoup(html, features='lxml')
    detail = {}
    detail['name'] = soup.select_one('.name').string.strip()
    detail['address'] = soup.select_one('.address .ellipsis').string
    detail['score'] = float(soup.select_one('.mp-bts .score').string)
    detail['comment_num'] = int(soup.select_one('.mp-bts .sum').string)
    detail['intro'] = soup.select_one('.mp-description .bd').text.strip()
    # breadcrumb entries between the first and last are the attraction's types
    detail['type'] = [i.string for i in soup.select('.crumb span')[1:-1]]
    return detail

# Insert attractions into the attraction table and attraction type table
def insert_scenic_to_db(scenic_list):
    # TODO: connect to the database and insert into the corresponding tables
    pass

# Insert user information into the user table
def insert_user_to_db(user_info):
    # TODO: connect to the database and insert into the corresponding tables
    pass

# Insert rows into the favorite record, comment, and like record tables
def insert_to_record(scenic_id, user_id, record_type):
    # TODO: connect to the database and insert into the corresponding tables
    pass

# Scrape user comments for an attraction
def get_comment_list(url):
    html = requests.get(url).content.decode('gbk')
    soup = BeautifulSoup(html, features='lxml')
    comment_list = []
    for li in soup.select('.comment_list > li'):
        user_info = {}
        user_info['user_id'] = li.find('a')['href'].split('/')[-1]
        user_info['nickname'] = li.select_one('.user .user_ellipsis a').string
        user_info['avatar'] = li.select_one('.user .avatar img')['data-lazy']
        insert_user_to_db(user_info)  # insert the user into the user table
        comment = {}
        comment['content'] = li.select_one('.comment_txt .comment_con')['data-text'].strip()
        comment['score'] = float(li.select_one('.comment_star .score').string.split('分')[0])
        comment['user_id'] = user_info['user_id']
        comment['scenic_id'] = url.split('/')[-2]
        comment_list.append(comment)
    # insert the comment data into the comment table
    for comment in comment_list:
        insert_to_record(comment['scenic_id'], comment['user_id'], 'comment')

# Scrape an attraction's favorite records or like records
def get_record_list(url, record_type):
    html = requests.get(url).content.decode('gbk')
    soup = BeautifulSoup(html, features='lxml')
    record_list = []
    for li in soup.select('.rst-table tbody > tr'):
        user_id = li.find('a')['href'].split('/')[-1]
        insert_user_to_db({'user_id': user_id})  # insert the user into the user table
        record_list.append(user_id)
    # insert the records into the corresponding record table
    for user_id in record_list:
        insert_to_record(url.split('/')[-2], user_id, record_type)

if __name__ == '__main__':
    url = 'http://you.ctrip.com/sightseeing/wholecountrylist.html'
    scenic_list = get_scenic_list(url)
    for scenic in scenic_list:
        scenic_detail = get_scenic_detail(scenic['link'])
        # insert the attraction detail into the corresponding tables
        insert_scenic_to_db(scenic_detail)
        # scrape comments, favorite records, and like records
        get_comment_list(scenic['link'] + 's0-p2.html')
        get_record_list(scenic['link'] + 'saves.html', 'collect')
        get_record_list(scenic['link'] + 'like.html', 'like')
```
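Note that the CSS selectors above assume a specific page structure, which the site can change at any time. As a self-contained sketch of the same parsing logic, here it is applied to a sample HTML snippet; the markup below is an assumption for illustration, not the live page:

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking the `.list_mod2 > li` structure the scraper expects
sample_html = """
<ul class="list_mod2">
  <li>
    <a href="/sight/beijing1/229.html"><img alt="故宫博物院"></a>
    <p class="ellipsis pro_add">北京市东城区景山前街4号</p>
    <div class="comment"><span class="score">4.8</span><span class="sum">(12345)</span></div>
  </li>
</ul>
"""

# html.parser avoids the external lxml dependency; the selectors are identical
soup = BeautifulSoup(sample_html, 'html.parser')
li = soup.select_one('.list_mod2 > li')
scenic = {
    'name': li.find('img')['alt'],
    'link': li.find('a')['href'],
    'address': li.select_one('.ellipsis.pro_add').string,
    'score': float(li.select_one('.comment .score').string),
    'comment_num': int(li.select_one('.comment .sum').string[1:-1]),  # "(12345)" -> 12345
}
print(scenic['name'], scenic['score'], scenic['comment_num'])
```

Testing the selectors against a static snippet like this before running the full crawl makes it much easier to tell a parsing bug apart from a page-structure change.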
Note: the actual database operations need to be written to match your own database setup.
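As one way to fill in the TODO stubs, here is a minimal sketch of the database layer using the standard-library sqlite3 module. The six tables correspond to those named in the question, but every table and column name below is an assumption; adapt the schema, and swap in pymysql or similar for a real MySQL database:

```python
import sqlite3

# In-memory database for demonstration; point this at a real file or MySQL in practice
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE scenic (id INTEGER PRIMARY KEY AUTOINCREMENT, '
             'name TEXT, address TEXT, score REAL, comment_num INTEGER, intro TEXT)')
conn.execute('CREATE TABLE scenic_type (scenic_id INTEGER, type_name TEXT)')
conn.execute('CREATE TABLE user (user_id TEXT PRIMARY KEY, nickname TEXT, avatar TEXT)')
conn.execute('CREATE TABLE comment (scenic_id TEXT, user_id TEXT, content TEXT, score REAL)')
conn.execute('CREATE TABLE collect_record (scenic_id TEXT, user_id TEXT)')
conn.execute('CREATE TABLE like_record (scenic_id TEXT, user_id TEXT)')

def insert_scenic_to_db(detail):
    cur = conn.execute(
        'INSERT INTO scenic (name, address, score, comment_num, intro) VALUES (?, ?, ?, ?, ?)',
        (detail['name'], detail['address'], detail['score'],
         detail['comment_num'], detail.get('intro')))
    # one row per type in the attraction type table
    for t in detail.get('type', []):
        conn.execute('INSERT INTO scenic_type (scenic_id, type_name) VALUES (?, ?)',
                     (cur.lastrowid, t))
    conn.commit()
    return cur.lastrowid

def insert_user_to_db(user_info):
    # INSERT OR IGNORE so re-scraped users do not violate the primary key
    conn.execute('INSERT OR IGNORE INTO user (user_id, nickname, avatar) VALUES (?, ?, ?)',
                 (user_info['user_id'], user_info.get('nickname'), user_info.get('avatar')))
    conn.commit()

def insert_to_record(scenic_id, user_id, record_type):
    # map the record type to its table; a full comment row would also need content/score
    table = {'collect': 'collect_record', 'like': 'like_record', 'comment': 'comment'}[record_type]
    conn.execute(f'INSERT INTO {table} (scenic_id, user_id) VALUES (?, ?)', (scenic_id, user_id))
    conn.commit()
```

For example, `insert_scenic_to_db({'name': '故宫', 'address': '北京', 'score': 4.8, 'comment_num': 12345, 'type': ['历史古迹']})` returns the new row's id, which can then be passed to `insert_to_record` together with a user id.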