Using PyCharm, scrape Beike (贝壳网) house-for-sale listings for Ningbo and store them, categorized, in a local database (SQLite recommended). Requirement: project source code
Posted: 2023-11-13 18:04:19 · Views: 65
Here is Python source code that implements the requested functionality. Before running it, make sure the following libraries are installed:
- requests
- beautifulsoup4
(`sqlite3` does not need to be installed separately; it ships with Python's standard library.)
The implementation:
```python
import requests
from bs4 import BeautifulSoup
import sqlite3

# A desktop User-Agent helps avoid simple anti-bot filtering
HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

# Fetch the page source
def get_html(url):
    response = requests.get(url, headers=HEADERS, timeout=10)
    response.raise_for_status()
    response.encoding = 'utf-8'
    return response.text

# Parse the page and yield one record per listing
def parse_page(html):
    soup = BeautifulSoup(html, 'html.parser')
    house_list = soup.find_all('div', class_='info clear')
    for house in house_list:
        title = house.find('a', class_='title').text.strip()
        info = house.find('div', class_='houseInfo').text.strip()
        price = house.find('div', class_='totalPrice').text.strip()
        # Slice off the leading "单价" and trailing "元/平" around the number
        unit_price = house.find('div', class_='unitPrice').text.strip()[2:-4]
        yield title, info, price, unit_price

# Store the listings in a local SQLite database
def save_to_database(houses):
    conn = sqlite3.connect('house.db')
    cursor = conn.cursor()
    cursor.execute('CREATE TABLE IF NOT EXISTS house (id INTEGER PRIMARY KEY AUTOINCREMENT, '
                   'title TEXT, info TEXT, price TEXT, unit_price TEXT)')
    for house in houses:
        cursor.execute('INSERT INTO house (title, info, price, unit_price) VALUES (?, ?, ?, ?)', house)
    conn.commit()
    conn.close()

if __name__ == '__main__':
    url = 'https://nb.ke.com/ershoufang/'
    html = get_html(url)
    houses = parse_page(html)
    save_to_database(houses)
```
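To confirm that records were actually written, you can query `house.db` afterwards. A minimal check, assuming the schema created above (the `CREATE TABLE IF NOT EXISTS` guard only makes the snippet safe to run against a fresh file):

```python
import sqlite3

conn = sqlite3.connect('house.db')
cursor = conn.cursor()
# Same schema as the crawler; a no-op if the table already exists
cursor.execute('CREATE TABLE IF NOT EXISTS house (id INTEGER PRIMARY KEY AUTOINCREMENT, '
               'title TEXT, info TEXT, price TEXT, unit_price TEXT)')
# Count stored rows and show the first few listings
cursor.execute('SELECT COUNT(*) FROM house')
print('rows stored:', cursor.fetchone()[0])
for row in cursor.execute('SELECT title, price FROM house LIMIT 3'):
    print(row)
conn.close()
```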
When run, the script crawls the first page of Beike's Ningbo second-hand listings and stores the results in a local SQLite database. You can adapt the code as needed, for example to change the target city or to crawl multiple pages.
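For multi-page crawling, one common sketch is to generate the per-page URLs and feed each through `get_html`/`parse_page` above. Note the `pg<n>/` path segment is an assumption about Beike's pagination scheme; verify it against the live site before relying on it:

```python
# Hypothetical pagination: nb.ke.com lists page n at .../ershoufang/pgn/
def page_urls(base_url, num_pages):
    for page in range(1, num_pages + 1):
        # Page 1 is the bare listing URL; later pages append pg2/, pg3/, ...
        yield base_url if page == 1 else f'{base_url}pg{page}/'

# Example: URLs for the first three result pages
for url in page_urls('https://nb.ke.com/ershoufang/', 3):
    print(url)
```

Each generated URL would then be fetched and parsed in turn, with a short `time.sleep()` between requests to stay polite to the server.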