Help me generate Python scraper code that fetches view counts on Zhihu and saves the results to a CSV file
### Answer 1:
I can provide a simple Python scraper that fetches view counts on Zhihu and saves the results to a CSV file. The code is as follows:
```python
import requests
import csv

# Zhihu v4 API endpoint for the answers of a question
url = 'https://www.zhihu.com/api/v4/questions/{0}/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=0&platform=desktop&sort_by=default'

# The question ID to query
question_id = '20552571'

# Build and send the request
r = requests.get(url.format(question_id))

# Parse the JSON response
data = r.json()

# Open the CSV file for writing (newline='' avoids blank rows on Windows)
f = open('zhihu.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(f)

# Write the header row
writer.writerow(['title', 'view_count'])

# Walk the answers and write each one's question title and view count
for d in data['data']:
    title = d['question']['title']
    view_count = d['question']['view_count']
    writer.writerow([title, view_count])

# Close the file
f.close()
```
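Note that the request above fetches only the first 5 answers (`limit=5&offset=0`). As a minimal sketch of how you might page through all answers, assuming the v4 endpoint keeps returning a `paging` object with an `is_end` flag (the helper name `fetch_all_answers` is hypothetical, not part of the answer above):
```python
import requests

def fetch_all_answers(question_id, page_size=5):
    """Hypothetical helper: page through all answers of a question.

    Assumes the response shape is {'data': [...], 'paging': {'is_end': bool}};
    adjust if the real payload differs.
    """
    base = ('https://www.zhihu.com/api/v4/questions/{0}/answers'
            '?limit={1}&offset={2}&platform=desktop&sort_by=default')
    offset = 0
    answers = []
    while True:
        r = requests.get(base.format(question_id, page_size, offset))
        r.raise_for_status()  # stop early on HTTP errors
        payload = r.json()
        answers.extend(payload.get('data', []))
        if payload.get('paging', {}).get('is_end', True):
            break  # no more pages
        offset += page_size
    return answers
```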
### Answer 2:
Here is a scraper written in Python that fetches the view counts of Zhihu questions and saves the results to a CSV file:
```python
import csv
import requests
from bs4 import BeautifulSoup

def get_views(url):
    # Pretend to be a normal browser so Zhihu serves the page
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36"
    }
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, "html.parser")
    # The question's view count is rendered in a .NumberBoard-itemValue element
    views = soup.select_one(".NumberBoard-itemValue").get_text().strip()
    return views

def save_to_csv(views_list):
    with open('zhihu_views.csv', 'w', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)
        writer.writerow(["question", "views"])
        for views in views_list:
            writer.writerow(views)

def main():
    urls = [
        "https://www.zhihu.com/question/123",
        "https://www.zhihu.com/question/456",
        "https://www.zhihu.com/question/789"
    ]
    views_list = []
    for url in urls:
        views = get_views(url)
        views_list.append([url, views])
        print(f"Question: {url}\nViews: {views}")
    save_to_csv(views_list)

if __name__ == "__main__":
    main()
```
In the code above, we first import the required libraries: `csv` for handling the CSV file, `requests` for sending HTTP requests, and `BeautifulSoup` for parsing the page content. We then define a `get_views` function that retrieves a Zhihu question's view count, and a `save_to_csv` function that writes the results to a CSV file. Finally, the `main` function defines the list of Zhihu question URLs to scrape and uses the two functions above to fetch and save the data.
Note that `get_views` sets a browser-like User-Agent header so the requests succeed. In `main`, you can edit the `urls` list to add the Zhihu questions you are interested in.
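If you scrape more than a handful of questions, it also helps to space out requests and survive the occasional failure. A minimal sketch under those assumptions (the wrapper name `fetch_with_retries` and the retry/delay values are my own choices, not part of the answer above):
```python
import time
import requests

def fetch_with_retries(fetch, url, retries=3, delay=2.0):
    """Hypothetical wrapper: call fetch(url) (e.g. the get_views function
    above), retry on failure, and pause between attempts to stay polite."""
    for attempt in range(retries):
        try:
            return fetch(url)
        except (requests.RequestException, AttributeError):
            # AttributeError covers select_one() returning None when the
            # page layout changes or the request is blocked
            time.sleep(delay)
    return None  # give up after the last attempt
```
In `main`, you would then write `views = fetch_with_retries(get_views, url)` and skip URLs that come back as `None`.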
### Answer 3:
Of course! Here is a simple Python scraper example that fetches a Zhihu view count and saves the result to a CSV file:
```python
import requests
import csv

def get_zhihu_views(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'
    }
    response = requests.get(url, headers=headers)
    data = response.json()  # fetch the endpoint's JSON payload
    # Parse the JSON data to extract the view count
    views = data['paging']['totals']
    return views

def save_to_csv(views):
    with open('zhihu_views.csv', 'w', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)
        writer.writerow(['views'])
        writer.writerow([views])

if __name__ == "__main__":
    zhihu_url = 'https://www.zhihu.com/api/v4/answers/796333432825032192'
    views = get_zhihu_views(zhihu_url)
    print('Views:', views)
    save_to_csv(views)
    print('Saved to CSV file: zhihu_views.csv')
```
You need to replace `zhihu_url` with the URL of the Zhihu answer whose view count you want. In this example, we use the `requests` library to send the HTTP request and receive the Zhihu endpoint's data as JSON. We then parse that JSON and extract the view count. Finally, we use the `csv` library to save the view count to a CSV file.
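Since the endpoint can reject the request or return an unexpected JSON shape, a more defensive variant of `get_zhihu_views` might look like this (a sketch only, assuming the same `paging`/`totals` layout as the answer above; the function name is hypothetical):
```python
import requests

def get_zhihu_views_safe(url):
    """Hypothetical defensive variant: validate the HTTP status and
    return None instead of crashing when expected keys are missing."""
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # raise on 4xx/5xx instead of parsing garbage
    try:
        data = response.json()
    except ValueError:
        return None  # body was not JSON (e.g. an HTML error page)
    # .get() avoids a KeyError if the response shape differs
    return data.get('paging', {}).get('totals')
```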
Make sure `requests` is installed before running the code; `csv` is part of the Python standard library and needs no installation. You can install `requests` with:
```
pip install requests
```
Hope this code helps!