Using Python, scrape movie information from Douban — title, genres, release year, director, rating, and number of ratings — with anti-scraping countermeasures, and save the results to a CSV file
Posted: 2023-12-25 18:02:57
This is a fairly involved task, so let's break it into steps.
First, install the necessary libraries, `requests` and `beautifulsoup4`:
```
pip install requests
pip install beautifulsoup4
```
Next, we write a function that fetches a Douban page. Douban blocks obvious bots, so we set a `User-Agent` request header to mimic a normal browser and make the server treat us as an ordinary visitor.
Here is the fetching code:
```python
import requests

def get_html(url):
    """Fetch a page, pretending to be a desktop browser."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(url, headers=headers, timeout=10)
    if response.status_code == 200:
        return response.text
    return None
```
Next, we parse the page and extract each movie's name, genres, release year, directors, rating, and number of ratings, using `beautifulsoup4` to walk the HTML. Here is the parsing code:
```python
from bs4 import BeautifulSoup

def parse_html(html):
    """Yield one dict per movie.

    Note: the class names below are illustrative. Douban's real Top250
    markup packs director, year, and genres into a single <p> of text
    inside div.info, so adjust the selectors to the HTML you actually
    receive; the helper returns '' instead of crashing on missing tags.
    """
    soup = BeautifulSoup(html, 'html.parser')
    for movie in soup.find_all('div', class_='info'):
        def text(tag, cls):
            node = movie.find(tag, class_=cls)
            return node.get_text().strip() if node else ''
        yield {
            'name': text('span', 'title'),
            'genres': text('span', 'genre'),
            'year': text('span', 'year'),
            'directors': [d.get_text().strip()
                          for d in movie.find_all('span', class_='director')],
            'rating': text('span', 'rating_num'),
            'comments': text('span', 'comment-num'),
        }
```
Then we save the collected movie records to a CSV file using the standard-library `csv` module:
```python
import csv

def save_to_csv(movies):
    """Write an iterable of movie dicts to movies.csv in one pass."""
    with open('movies.csv', 'w', encoding='utf-8', newline='') as csvfile:
        fieldnames = ['name', 'genres', 'year', 'directors', 'rating', 'comments']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        for movie in movies:
            writer.writerow(movie)
```
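One subtlety: `directors` is a Python list, and `csv.DictWriter` would serialize it as its `repr` (e.g. `['Frank Darabont']`), which is awkward to read back. A minimal sketch of flattening it into a single cell before writing — the sample record below is invented for illustration:

```python
import csv
import io

# Invented sample record for illustration; in practice it comes from parse_html().
movie = {
    'name': 'The Shawshank Redemption',
    'genres': 'Crime / Drama',
    'year': '1994',
    'directors': ['Frank Darabont'],  # a list, as parse_html yields it
    'rating': '9.7',
    'comments': '2800000',
}

# Join the list of directors into one cell so the CSV stays flat.
row = dict(movie, directors=' / '.join(movie['directors']))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

The same `' / '.join(...)` step can be dropped into `save_to_csv` just before `writer.writerow(movie)`.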
Finally, we combine the pieces into a complete program:
```python
import csv
import time

import requests
from bs4 import BeautifulSoup

def get_html(url):
    """Fetch a page, pretending to be a desktop browser."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(url, headers=headers, timeout=10)
    if response.status_code == 200:
        return response.text
    return None

def parse_html(html):
    """Yield one dict per movie; adjust the class names to the actual markup."""
    soup = BeautifulSoup(html, 'html.parser')
    for movie in soup.find_all('div', class_='info'):
        def text(tag, cls):
            node = movie.find(tag, class_=cls)
            return node.get_text().strip() if node else ''
        yield {
            'name': text('span', 'title'),
            'genres': text('span', 'genre'),
            'year': text('span', 'year'),
            'directors': [d.get_text().strip()
                          for d in movie.find_all('span', class_='director')],
            'rating': text('span', 'rating_num'),
            'comments': text('span', 'comment-num'),
        }

def save_to_csv(movies):
    """Write an iterable of movie dicts to movies.csv in one pass."""
    with open('movies.csv', 'w', encoding='utf-8', newline='') as csvfile:
        fieldnames = ['name', 'genres', 'year', 'directors', 'rating', 'comments']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        for movie in movies:
            writer.writerow(movie)

if __name__ == '__main__':
    urls = ['https://movie.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
    all_movies = []
    for url in urls:
        html = get_html(url)
        if html:
            all_movies.extend(parse_html(html))
        time.sleep(1)  # pause between pages so we don't trip rate limits
    # Write everything in one pass; calling save_to_csv once per page would
    # reopen the file in 'w' mode and overwrite the earlier results.
    save_to_csv(all_movies)
```
The script above saves the Douban Top250 listing to movies.csv. Note the one-second pause between page requests: without it, Douban's anti-scraping measures are likely to block the IP.
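A fixed one-second delay is the simplest safeguard. A slightly sturdier sketch — the function names here are my own, not part of the original answer — adds jittered exponential backoff and retries around the `get_html` function, so transient failures are retried with growing, slightly randomized pauses rather than at a clockwork rhythm:

```python
import random
import time

def polite_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with jitter: base*2^attempt seconds, capped at
    `cap`, plus up to 0.5 s of random jitter so requests don't look
    machine-regular."""
    return min(cap, base * (2 ** attempt)) + random.uniform(0, 0.5)

def fetch_with_retry(get_html, url, max_attempts=3):
    """Call `get_html` (the fetcher defined earlier, passed in here) up to
    `max_attempts` times, sleeping a polite_delay between failures."""
    for attempt in range(max_attempts):
        html = get_html(url)
        if html is not None:
            return html
        time.sleep(polite_delay(attempt))
    return None
```

In the main loop, `html = get_html(url)` would become `html = fetch_with_retry(get_html, url)`; rotating through several realistic `User-Agent` strings is another common refinement.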