Saving images with a Python crawler
Saving images in a Python crawler can be done with the requests library together with the standard-library os module.
First, use requests to fetch the image's binary data and write it to a local file:
```python
import requests

url = "http://example.com/image.jpg"
response = requests.get(url)
if response.status_code == 200:
    # Write the binary response body to a local file
    with open("image.jpg", "wb") as f:
        f.write(response.content)
    print("Image saved successfully!")
else:
    print("Failed to save image!")
```
Here, url is the link of the image to download and response.content holds the binary data that was fetched. Opening the file in binary write mode with the with open statement and writing that data to it saves the image locally.
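In practice, a bare requests.get call can hang indefinitely or be rejected by servers that block requests without a browser-like User-Agent. Below is a minimal sketch of a more defensive download; the 10-second timeout and the header string are illustrative assumptions, not values required by requests:
```python
import requests

url = "http://example.com/image.jpg"  # placeholder URL, as in the example above

# Assumed values: a 10-second timeout and a browser-like User-Agent header
headers = {"User-Agent": "Mozilla/5.0"}

try:
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # raise an exception for 4xx/5xx responses
    with open("image.jpg", "wb") as f:
        f.write(response.content)
    print("Image saved successfully!")
except requests.RequestException as e:
    print(f"Failed to save image: {e}")
```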
To save multiple images, loop over the image links and give each image a unique filename so that files with the same name do not overwrite each other:
```python
import requests

urls = [
    "http://example.com/image1.jpg",
    "http://example.com/image2.jpg",
    "http://example.com/image3.jpg",
]
for url in urls:
    # Use the last segment of the URL path as the file name
    filename = url.split("/")[-1]
    response = requests.get(url)
    if response.status_code == 200:
        with open(filename, "wb") as f:
            f.write(response.content)
        print(f"{filename} saved successfully!")
    else:
        print(f"Failed to save {filename}!")
```
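Note that deriving the filename from the last path segment only avoids collisions when every URL ends in a distinct basename. One possible workaround (a sketch, not the only approach) is to hash the full URL with the standard-library hashlib module and keep the original extension; the URLs below are placeholders:
```python
import hashlib
import os
import requests

urls = [
    "http://example.com/a/photo.jpg",
    "http://example.com/b/photo.jpg",  # same basename, different URL
]

for url in urls:
    # Hash the whole URL so identical basenames from different paths do not overwrite each other
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    ext = os.path.splitext(url)[1] or ".jpg"  # fall back to .jpg if the URL has no extension
    filename = f"{digest}{ext}"
    response = requests.get(url)
    if response.status_code == 200:
        with open(filename, "wb") as f:
            f.write(response.content)
        print(f"{filename} saved successfully!")
    else:
        print(f"Failed to save {filename}!")
```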
Before saving the images, make sure the target directory exists; if it does not, create it with the os module:
```python
import requests
import os

urls = [
    "http://example.com/image1.jpg",
    "http://example.com/image2.jpg",
    "http://example.com/image3.jpg",
]
# Create the target directory if it does not exist yet, then switch into it
if not os.path.exists("images"):
    os.mkdir("images")
os.chdir("images")

for url in urls:
    filename = url.split("/")[-1]
    response = requests.get(url)
    if response.status_code == 200:
        with open(filename, "wb") as f:
            f.write(response.content)
        print(f"{filename} saved successfully!")
    else:
        print(f"Failed to save {filename}!")
```
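For large images, loading the whole response into memory with response.content can be wasteful. requests supports streaming downloads via stream=True and iter_content; the sketch below also writes into the images directory with os.path.join instead of changing the working directory. The chunk size of 8192 bytes is an arbitrary choice:
```python
import os
import requests

urls = ["http://example.com/image1.jpg", "http://example.com/image2.jpg"]

os.makedirs("images", exist_ok=True)  # create the directory if needed, without chdir

for url in urls:
    filename = url.split("/")[-1]
    path = os.path.join("images", filename)
    # stream=True defers downloading the body until we iterate over it
    response = requests.get(url, stream=True)
    if response.status_code == 200:
        with open(path, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
        print(f"{path} saved successfully!")
    else:
        print(f"Failed to save {path}!")
```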