Scraping Ctrip Travel Notes with Python and Analyzing the Data
Here are the steps to scrape travel notes from Ctrip (you.ctrip.com) with Python and analyze the text:
1. Import the required libraries:
```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin  # used in step 3 to resolve relative links
import pandas as pd
import jieba
from collections import Counter
from wordcloud import WordCloud
import matplotlib.pyplot as plt
```
2. Fetch the Ctrip travel-notes listing page:
```python
url = 'https://you.ctrip.com/travels/'
# A browser-like User-Agent makes it less likely the site serves an error page
headers = {'User-Agent': 'Mozilla/5.0'}
res = requests.get(url, headers=headers, timeout=10)
res.raise_for_status()  # fail loudly on HTTP errors instead of parsing them
soup = BeautifulSoup(res.text, 'html.parser')
```
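If the Chinese text in `res.text` comes back garbled, requests may have fallen back to the wrong encoding; a minimal guard using `res.apparent_encoding`:
```python
# requests guesses the encoding from HTTP headers; if that guess is the
# ISO-8859-1 fallback, re-detect the charset from the page content instead
if res.encoding and res.encoding.lower() == 'iso-8859-1':
    res.encoding = res.apparent_encoding
    soup = BeautifulSoup(res.text, 'html.parser')  # re-parse the fixed text
```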
3. Collect the links to the individual travel notes:
```python
links = []
for i in soup.select('.journalslist li a'):
    # hrefs on the listing page may be site-relative; make them absolute
    links.append(urljoin(url, i['href']))
```
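Listing pages often link to the same note more than once (e.g., from the thumbnail and from the title), so deduplicating is worthwhile; capping the list keeps test runs fast. The cap of 50 here is an arbitrary choice:
```python
links = list(dict.fromkeys(links))  # drop duplicates, keep first-seen order
links = links[:50]                  # cap the crawl for a test run; raise later
```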
4. Define a function that extracts the data from a single travel note:
```python
def get_data(link):
    """Fetch one travel note and return its title and body text."""
    res = requests.get(link, headers=headers, timeout=10)
    soup = BeautifulSoup(res.text, 'html.parser')
    title = soup.select('.ctd_head h1')[0].text.strip()    # note title
    content = soup.select('#ctd_content')[0].text.strip()  # note body
    return title, content
```
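Before crawling everything, it helps to smoke-test the function on a single link; if Ctrip has changed its markup and the selectors above no longer match, this is where it will show up:
```python
if links:
    title, content = get_data(links[0])
    print(title)
    print(content[:200])  # first 200 characters as a quick sanity check
```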
5. Loop over the links and scrape every travel note:
```python
data = []
for link in links:
    try:
        title, content = get_data(link)
        data.append([title, content])
    except Exception:
        # skip notes whose markup doesn't match the selectors above
        continue
```
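Hitting the server in a tight loop risks getting rate-limited or blocked. A variant of the same loop that pauses between requests and logs failures instead of silently skipping them (the 1-second delay is an arbitrary choice):
```python
import time

data = []
for link in links:
    try:
        title, content = get_data(link)
        data.append([title, content])
    except Exception as e:
        print(f'skipped {link}: {e}')  # record the failure instead of hiding it
    time.sleep(1)  # pause between requests to stay polite
```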
6. Put the data into a DataFrame:
```python
df = pd.DataFrame(data, columns=['Title', 'Content'])
```
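Scraping is the slow, fragile half of the pipeline, so it is worth persisting the result right away; the analysis steps below can then reload the CSV instead of re-crawling. The filename here is arbitrary:
```python
# utf-8-sig writes a BOM so Excel opens the Chinese text correctly
df.to_csv('ctrip_travel_notes.csv', index=False, encoding='utf-8-sig')
```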
7. Tokenize the text with jieba and count word frequencies:
```python
# A set gives O(1) membership tests; extend it as noisy words show up
stop_words = {'的', '了', '是', '在', '都', '和', '就', '也', '有', '与', '为', '等', '这', '到', '从', '而', '及', '之', '不', '还', '但', '我们', '可以', '一个', '就是', '还是', '这个', '这些', '这样', '因为', '所以'}
words = []
for content in df['Content']:
    # keep tokens of two or more characters that are not stop words;
    # this also filters out the punctuation and whitespace jieba emits
    words += [w for w in jieba.cut(content) if len(w) > 1 and w not in stop_words]
word_count = Counter(words)
```
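Printing the most frequent words is a quick sanity check before rendering the cloud; if filler words dominate, the stop-word set above needs extending:
```python
for word, freq in word_count.most_common(20):
    print(word, freq)
```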
8. Generate a word cloud:
```python
# font_path must point to a font with CJK glyphs; 'msyh.ttc' (Microsoft YaHei)
# works on Windows, other systems need an adjusted path
wc = WordCloud(background_color='white', width=1000, height=600, font_path='msyh.ttc')
wc.generate_from_frequencies(word_count)
plt.figure(figsize=(10, 6))
plt.imshow(wc)
plt.axis('off')
plt.show()
```
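Two optional finishing touches: save the cloud to disk with `to_file`, and plot the top words as a bar chart, which is often easier to read than the cloud itself. The font setting and filename are assumptions for a Windows machine; adjust for your system:
```python
wc.to_file('ctrip_wordcloud.png')  # save the rendered cloud as a PNG

# Matplotlib's default font lacks CJK glyphs; point it at one that has them
plt.rcParams['font.sans-serif'] = ['Microsoft YaHei']  # Windows; adjust per OS
top = word_count.most_common(20)
labels, freqs = zip(*top)
plt.figure(figsize=(10, 6))
plt.bar(range(len(top)), freqs)
plt.xticks(range(len(top)), labels, rotation=45)
plt.title('Top 20 words in Ctrip travel notes')
plt.tight_layout()
plt.show()
```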
With that, you have scraped travel notes from Ctrip and run a simple frequency analysis on the text.