```python
import requests
from bs4 import BeautifulSoup

url = 'http://www.greenfinancechina.com/zhengcefagui/list-1.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Extract each policy's title and publication date
policy_list = soup.find_all('div', class_='news-item')
for policy in policy_list:
    title = policy.find('a').text
    time = policy.find('span', class_='time').text
    print(title, time)
```
Posted: 2023-12-31 15:06:41
This code scrapes policy information from the Green Finance China website and prints each policy's title and publication date.
If you run it in Spyder, you should see output similar to the following:
```
关于全面启动绿色信贷政策试点工作的通知 2021-11-02
关于发布《绿色债务融资外部评估管理办法》的公告 2021-11-02
...
```
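The loop above depends on the page wrapping each entry in a `div.news-item` container. Since the live page may change, here is a minimal local sketch with hypothetical sample markup that shows how that parsing step behaves, no network access needed:

```python
from bs4 import BeautifulSoup

# Hypothetical sample markup mirroring the assumed div.news-item structure
html = """
<div class="news-item"><a href="/p1">Policy A</a><span class="time">2021-11-02</span></div>
<div class="news-item"><a href="/p2">Policy B</a><span class="time">2021-11-03</span></div>
"""

soup = BeautifulSoup(html, 'html.parser')
rows = [(item.find('a').text, item.find('span', class_='time').text)
        for item in soup.find_all('div', class_='news-item')]
print(rows)  # → [('Policy A', '2021-11-02'), ('Policy B', '2021-11-03')]
```

If the real page uses different class names, `find` returns `None` and `.text` raises `AttributeError`, so it is worth checking the page source first.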
These lines show the policy titles and publication dates. If you want to save the policy information, you can write the output to a file. For example, the following code writes the policy information to a CSV file:
```python
import csv

# Open the CSV file for writing
with open('policies.csv', 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['title', 'time'])  # write the header row

    # Write one row per policy
    for policy in policy_list:
        title = policy.find('a').text
        time = policy.find('span', class_='time').text
        writer.writerow([title, time])
```
With this, the policy information is written to a CSV file. Note that you need to specify the correct file path so the data ends up where you expect.
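To make that file path explicit rather than relying on the current working directory, the standard-library `pathlib` module can build it; the `data` directory below is just an illustrative placeholder:

```python
from pathlib import Path

# Hypothetical output directory; adjust to wherever the file should land
output_dir = Path('data')
output_dir.mkdir(parents=True, exist_ok=True)  # create the directory if missing
csv_path = output_dir / 'policies.csv'
print(csv_path)  # e.g. data/policies.csv, relative to the working directory
```

You would then pass `csv_path` to `open()` in place of the bare `'policies.csv'` string.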
Related questions
Modify this code so that li_list uses UTF-8 encoding:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.icbc.com.cn/page/827855918799994880.html'
response = requests.get(url=url)
page_response = response.text
soup = BeautifulSoup(page_response, 'html.parser', from_encoding='utf-8')
li_list = soup.select('#mypagehtmlcontent p')
```
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.icbc.com.cn/page/827855918799994880.html'
response = requests.get(url=url)
page_response = response.content.decode('utf-8')  # decode the raw bytes as UTF-8
soup = BeautifulSoup(page_response, 'html.parser')
li_list = soup.select('#mypagehtmlcontent p')

# Test code below; feel free to ignore it
for li in li_list:
    print(li.text)
```
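The reason `response.content.decode('utf-8')` is more reliable than `response.text` here is that `requests` guesses the text encoding from the HTTP headers, and a wrong guess (often ISO-8859-1 when no charset is declared) produces mojibake. A small local sketch illustrates the difference, no network required:

```python
# Chinese text as it would arrive over the wire: raw UTF-8 bytes
raw = '中国工商银行'.encode('utf-8')

wrong = raw.decode('iso-8859-1')  # what a bad header-based guess would yield (mojibake)
right = raw.decode('utf-8')       # explicit decode, as in the answer above

print(wrong)
print(right)  # → 中国工商银行
```

Also note that `from_encoding` only has an effect when BeautifulSoup is given bytes; passing it the already-decoded `response.text` string, as in the question, silently ignores it.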
Optimize this code:
```python
import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
title = soup.find("h1", class_="title").text
content = soup.find("div", class_="content").text
print(title)
print(content)
```
The code can be optimized in the following ways:
1. Add exception handling, so that a failed request or a failed parse does not crash the program:
```python
import sys

import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
try:
    response = requests.get(url)
    response.raise_for_status()  # raise an exception if the request failed
except requests.exceptions.RequestException as e:
    print(e)
    sys.exit(1)

soup = BeautifulSoup(response.content, "html.parser")
try:
    title = soup.find("h1", class_="title").text
    content = soup.find("div", class_="content").text
except AttributeError as e:
    print(e)
    sys.exit(1)

print(title)
print(content)
```
2. Use a `with` statement to manage the connection, so the resource is released automatically once it is no longer needed:
```python
import sys

import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
try:
    with requests.get(url) as response:
        response.raise_for_status()
        soup = BeautifulSoup(response.content, "html.parser")
        title = soup.find("h1", class_="title").text
        content = soup.find("div", class_="content").text
except (requests.exceptions.RequestException, AttributeError) as e:
    print(e)
    sys.exit(1)

print(title)
print(content)
```
This approach avoids closing the connection manually, and the code is more concise.
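A further optimization, beyond what the answer above covers: separating fetching from parsing makes the parsing logic unit-testable with local HTML. The class-name selectors are carried over from the question as assumptions, and the `timeout` parameter is an addition worth having so a stalled request cannot hang the program:

```python
import requests
from bs4 import BeautifulSoup

def fetch(url, timeout=10):
    """Download a page, raising for HTTP errors; timeout guards against hangs."""
    with requests.get(url, timeout=timeout) as response:
        response.raise_for_status()
        return response.content

def parse_article(html):
    """Pull out title and body; the class names are assumptions from the question."""
    soup = BeautifulSoup(html, "html.parser")
    title = soup.find("h1", class_="title")
    content = soup.find("div", class_="content")
    if title is None or content is None:
        raise ValueError("page did not match the expected structure")
    return title.text, content.text

# parse_article can be exercised with hypothetical local markup, no network needed:
sample = '<h1 class="title">T</h1><div class="content">Body</div>'
print(parse_article(sample))  # → ('T', 'Body')
```

With this split, a real run would be `parse_article(fetch(url))`, while tests exercise `parse_article` alone on local strings.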