Use Python with multiple threads to scrape the information of all books under the 游记 (travel writing) tag on Douban Books (at least the first 10 pages if there are more than 10 pages, otherwise all of them), including title, author, publisher, publication date, page count, price, ISBN, Douban rating, number of ratings, cover image, and URL, and save the results in descending order of Douban rating (to a file named after the tag).
Sure, here is an example that uses Python with multiple threads to scrape the information of all books under the 游记 tag on Douban Books:
```python
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

# Douban rejects requests without a User-Agent, so send a browser-like one
HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def get_books(tag, start):
    """Scrape one listing page of the tag (offset `start`) and return the parsed records."""
    url = f'https://book.douban.com/tag/{tag}?start={start}'
    response = requests.get(url, headers=HEADERS, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    records = []
    for book in soup.find_all('li', class_='subject-item'):
        info = book.find('div', class_='info')
        title = ' '.join(info.h2.a.text.split())  # collapse the whitespace around subtitles
        # The pub line is roughly "author / publisher / publication date / price"
        pub_parts = [p.strip() for p in info.find('div', class_='pub').text.split('/')]
        author = pub_parts[0] if pub_parts else ''
        price = pub_parts[-1] if len(pub_parts) > 1 else ''
        pub_date = pub_parts[-2] if len(pub_parts) > 2 else ''
        publisher = '/'.join(pub_parts[1:-2]) if len(pub_parts) > 3 else ''
        # The listing page does not show the page count or ISBN; getting them needs
        # one extra request per book to its detail page, so they are left blank here
        pages = ''
        isbn = ''
        star = book.find('div', class_='star')
        rating_tag = star.find('span', class_='rating_nums') if star else None
        rating = rating_tag.text.strip() if rating_tag else '0'
        rating_count = star.find('span', class_='pl').text.strip() if star else ''
        cover_url = book.find('div', class_='pic').a.img['src']
        book_url = info.h2.a['href']
        records.append((title, author, publisher, pub_date, pages, price,
                        isbn, rating, rating_count, cover_url, book_url))
    return records

def get_tag_books(tag):
    """Scrape the books under a tag (at most the first 10 pages) and save them,
    sorted by rating in descending order, to <tag>.txt."""
    url = f'https://book.douban.com/tag/{tag}'
    response = requests.get(url, headers=HEADERS, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    # The largest page number among the paginator links is the total number of pages
    paginator = soup.find('div', class_='paginator')
    page_nums = [int(a.text) for a in paginator.find_all('a') if a.text.strip().isdigit()] if paginator else []
    total_page = min(max(page_nums, default=1), 10)  # scrape at most the first 10 pages
    all_books = []
    with ThreadPoolExecutor(max_workers=5) as executor:
        tasks = [executor.submit(get_books, tag, page * 20) for page in range(total_page)]
        for task in tasks:
            all_books.extend(task.result())
    # Sort by Douban rating in descending order and save under the tag name
    all_books.sort(key=lambda b: float(b[7] or 0), reverse=True)
    with open(f'{tag}.txt', 'w', encoding='utf-8') as f:
        for record in all_books:
            f.write('\t'.join(record) + '\n')

get_tag_books('游记')
```
The code above uses the `requests` and `BeautifulSoup` modules to scrape data from Douban Books. The `get_books` function parses the details of each book on one listing page and returns them as records, and the `get_tag_books` function reads the total number of pages for the given tag (only the first 10 pages are scraped if there are more than 10, otherwise all of them) and fetches those pages concurrently with a thread pool. Note that the listing page does not expose the page count or ISBN; those require one extra request per book to its detail page, as sketched below.
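A minimal sketch of such a helper, assuming the detail page contains a `<div id="info">` block whose plain text includes lines like `页数: 300` and `ISBN: 9787...` (the function name `get_pages_and_isbn` is illustrative, not part of the original answer):

```python
import re
import requests
from bs4 import BeautifulSoup

HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def get_pages_and_isbn(book_url):
    """Fetch a book's detail page and pull the page count and ISBN out of its info block.

    Assumes the page has a <div id="info"> whose text contains lines such as
    "页数: 300" and "ISBN: 9787100020010"; returns empty strings when not found.
    """
    response = requests.get(book_url, headers=HEADERS, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    info = soup.find('div', id='info')
    text = info.get_text() if info else ''
    pages_match = re.search(r'页数[::]\s*(\d+)', text)
    isbn_match = re.search(r'ISBN[::]\s*([\dXx-]+)', text)
    pages = pages_match.group(1) if pages_match else ''
    isbn = isbn_match.group(1) if isbn_match else ''
    return pages, isbn
```

Calling this once per book adds an extra request per title, so keep the thread pool small and consider sleeping briefly between requests to avoid being rate limited.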
Finally, all records are sorted by Douban rating in descending order and written to a file named after the tag. You can adapt and extend the code as needed, for example by using proxy IPs or setting request headers.
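For example, a browser-like User-Agent header and an optional proxy can be passed straight to `requests.get`; the proxy address below is just a placeholder you would replace with your own:

```python
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
# Placeholder proxy address; replace with a real proxy or drop the argument entirely
proxies = {'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'}

response = requests.get('https://book.douban.com/tag/游记',
                        headers=headers, proxies=proxies, timeout=10)
print(response.status_code)
```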