Scraping the Douban Books Top 250 with Python and saving the results as JSON
Posted: 2023-09-02 12:05:51 · Views: 155
The following Python program scrapes the Douban Books Top 250 list and saves the results to a JSON file:
```python
import json

import requests
from bs4 import BeautifulSoup


def get_book_info(url):
    """Fetch one page of the Top 250 list and return its books as dicts."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # fail fast on HTTP errors
    soup = BeautifulSoup(response.text, 'html.parser')
    book_list = soup.find('div', {'class': 'article'}).find_all('li')
    books = []
    for book in book_list:
        title = book.find('div', {'class': 'pl2'}).find('a').text.strip()
        rating = book.find('span', {'class': 'rating_nums'}).text.strip()
        # The <p class="pl"> line has the form "author / publisher / date / price";
        # indexing from the end tolerates entries with an extra translator field.
        author_info = book.find('p', {'class': 'pl'}).text.strip().split('/')
        author = author_info[0].strip()
        publisher = author_info[-3].strip()
        pub_date = author_info[-2].strip()
        price = author_info[-1].strip()
        books.append({
            'title': title,
            'rating': rating,
            'author': author,
            'publisher': publisher,
            'pub_date': pub_date,
            'price': price,
        })
    return books


all_books = []
for i in range(0, 250, 25):  # 10 pages, 25 books per page
    url = 'https://book.douban.com/top250?start=' + str(i)
    all_books.extend(get_book_info(url))

with open('douban_top250.json', 'w', encoding='utf-8') as f:
    json.dump(all_books, f, ensure_ascii=False, indent=4)
```
The program uses the requests and BeautifulSoup libraries to fetch and parse the Douban Books Top 250 pages and saves the results as a JSON file. It first defines a function, get_book_info, that extracts the book details from a single page, then loops over all ten pages and collects the results into one list. Finally, json.dump writes that list to a JSON file; ensure_ascii=False keeps the Chinese text readable in the output instead of escaping it to \uXXXX sequences.
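The trickiest step above is splitting the "author / publisher / date / price" line from each entry's `<p class="pl">` tag. A minimal, offline sketch of that step (the sample string below is a hypothetical example of such a line, not fetched from the site):

```python
# Hypothetical sample of a Douban book-info line: author / publisher / date / price.
info = "[法] 圣埃克苏佩里 / 人民文学出版社 / 2003-8 / 22.00元"

# Split on '/' and strip whitespace from every field.
parts = [p.strip() for p in info.split('/')]

author = parts[0]      # the author always comes first
publisher = parts[-3]  # index from the end, because some entries
pub_date = parts[-2]   # insert an extra translator field after the author
price = parts[-1]

print(author, publisher, pub_date, price)
```

Indexing from the end is what makes the parser tolerate lines with a translator (five fields) as well as lines without one (four fields); the last three fields are always publisher, date, and price.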