```python
import requests
from bs4 import BeautifulSoup

url = "https://www.dpm.org.cn/lights/royal/p/1.html"
# code starts here
r = requests.get(url)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, 'html.parser')
pics = soup.find_all("div", class_="pic")
xh = 1
for pic in pics:
    x = pic.find("img")
    imglj = x.attrs["src"]
    imgmz = x.attrs["title"].strip()
    # code starts here
    r = requests.get(imglj)
    cpmc = "image//" + imgmz + ".jpg"
    f1 = open(cpmc, "bw")
    f1.write(r.content)
    f1.close()
    xh += 1
```
Running this raises: `No such file or directory: 'image//清 沈全沈世杰沈世儒合笔婴戏图贴落.jpg'`
Posted: 2024-03-04 12:50:32 · Views: 145
The code fails because it tries to create a file inside a directory that does not exist. Add a helper that creates the directory before writing to it:
```python
import os
import requests
from bs4 import BeautifulSoup

def mkdir(path):
    # create the directory only if it does not already exist
    if not os.path.exists(path):
        os.makedirs(path)

url = "https://www.dpm.org.cn/lights/royal/p/1.html"
r = requests.get(url)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, 'html.parser')
pics = soup.find_all("div", class_="pic")
xh = 1
mkdir("image")  # create the output directory first
for pic in pics:
    x = pic.find("img")
    imglj = x.attrs["src"]
    imgmz = x.attrs["title"].strip()
    r = requests.get(imglj)
    cpmc = "image/" + imgmz + ".jpg"  # corrected file path
    with open(cpmc, "wb") as f1:
        f1.write(r.content)
    xh += 1
```
This creates a directory named `image` next to the script and stores the downloaded images in it.
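As a side note, Python 3's `os.makedirs` can perform the existence check itself via `exist_ok=True`, so the `mkdir` helper can be collapsed to a single call. A minimal sketch (using a temporary scratch directory just for the demo):

```python
import os
import tempfile

# exist_ok=True makes makedirs a no-op when the directory already exists,
# so no prior os.path.exists() check is needed.
base = tempfile.mkdtemp()           # scratch location for this demo only
target = os.path.join(base, "image")
os.makedirs(target, exist_ok=True)
os.makedirs(target, exist_ok=True)  # calling it again does not raise
print(os.path.isdir(target))        # True
```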
Related questions
Modify this code so that `li_list` is decoded as UTF-8:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.icbc.com.cn/page/827855918799994880.html'
response = requests.get(url=url)
page_response = response.text
soup = BeautifulSoup(page_response, 'html.parser', from_encoding='utf-8')
li_list = soup.select('#mypagehtmlcontent p')
```
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.icbc.com.cn/page/827855918799994880.html'
response = requests.get(url=url)
page_response = response.content.decode('utf-8')  # decode the raw bytes as UTF-8
soup = BeautifulSoup(page_response, 'html.parser')
li_list = soup.select('#mypagehtmlcontent p')

# Test code below; safe to ignore
for li in li_list:
    print(li.text)
```
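The key change is decoding `response.content` (raw bytes) with the page's charset before parsing, rather than passing `from_encoding` to the parser. The pattern can be illustrated with plain bytes, no network call needed (the sample string here is only an illustration):

```python
# response.content from requests is bytes; decoding with the right
# charset turns it into a str that BeautifulSoup can parse cleanly.
raw = "工商银行".encode("utf-8")   # stands in for response.content
text = raw.decode("utf-8")
print(text == "工商银行")          # True
```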
Optimize this code:
```python
import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
title = soup.find("h1", class_="title").text
content = soup.find("div", class_="content").text
print(title)
print(content)
```
Here are some ways to optimize it:
1. Add exception handling so that a failed request or a failed parse does not crash the program:
```python
import sys

import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
try:
    response = requests.get(url)
    response.raise_for_status()  # raise an exception if the request failed
except requests.exceptions.RequestException as e:
    print(e)
    sys.exit(1)

soup = BeautifulSoup(response.content, "html.parser")
try:
    title = soup.find("h1", class_="title").text
    content = soup.find("div", class_="content").text
except AttributeError as e:
    print(e)
    sys.exit(1)

print(title)
print(content)
```
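Why the `AttributeError` guard matters: `find()` returns `None` when no element matches, and `None.text` then raises. A tiny stand-in object (hypothetical, no network call) reproduces the failure mode:

```python
# FakeSoup simulates BeautifulSoup when the page lacks the expected element.
class FakeSoup:
    def find(self, *args, **kwargs):
        return None  # simulates "element not found"

soup = FakeSoup()
try:
    title = soup.find("h1", class_="title").text  # None.text -> AttributeError
except AttributeError:
    title = "(missing title)"  # fall back instead of crashing
print(title)  # (missing title)
```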
2. Use a `with` statement to manage the connection, so it is released automatically after use:
```python
import sys

import requests
from bs4 import BeautifulSoup

url = "https://www.gupang.com/202305/64619.html"
try:
    with requests.get(url) as response:
        response.raise_for_status()
        soup = BeautifulSoup(response.content, "html.parser")
        title = soup.find("h1", class_="title").text
        content = soup.find("div", class_="content").text
except (requests.exceptions.RequestException, AttributeError) as e:
    print(e)
    sys.exit(1)

print(title)
print(content)
```
This avoids closing the connection by hand and keeps the code more concise.