Scraping images from https://pixabay.com/zh/ with a Python crawler
Date: 2023-09-24 18:03:40
Sure. You can use Python's requests and BeautifulSoup libraries for this. Below is a simple example that downloads the images found on a Pixabay page:
```python
import requests
from bs4 import BeautifulSoup

url = "https://pixabay.com/zh/images/"
# Pixabay tends to reject requests without a browser-like User-Agent
headers = {"User-Agent": "Mozilla/5.0"}
response = requests.get(url, headers=headers)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
images = soup.find_all("img")
for i, image in enumerate(images):
    # Lazy-loaded images may keep the real URL in data-lazy-src instead of src
    image_url = image.get("src") or image.get("data-lazy-src")
    if not image_url or not image_url.startswith("http"):
        continue  # skip placeholders and relative/data URIs
    img_response = requests.get(image_url, headers=headers)
    with open(f"image_{i}.jpg", "wb") as f:
        f.write(img_response.content)
```
This code first sends a GET request to the Pixabay site, then parses the returned HTML with BeautifulSoup. By finding every img tag we can collect the image URLs, and finally the images are written to local files with Python's file operations. Note that this only captures images present in the initial HTML; content rendered later by JavaScript will not appear.
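One detail the explanation above glosses over: scraped src attributes are not always absolute URLs. Some are protocol-relative (`//cdn...`) or site-relative paths, and some are inline `data:` placeholders. A small helper built on the standard library's `urllib.parse.urljoin` can normalize them before downloading; the page URL below is just the one from the example, and the sample paths are illustrative:

```python
from urllib.parse import urljoin

def normalize_image_url(page_url, src):
    """Resolve a scraped src attribute against the page it came from."""
    if not src or src.startswith("data:"):
        return None  # skip inline/base64 placeholder images
    return urljoin(page_url, src)

page = "https://pixabay.com/zh/images/"
print(normalize_image_url(page, "//cdn.pixabay.com/photo/x.jpg"))
# https://cdn.pixabay.com/photo/x.jpg
print(normalize_image_url(page, "/static/a.png"))
# https://pixabay.com/static/a.png
```

Feeding every candidate URL through such a helper avoids failed downloads on relative links.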
Related questions
Python crawler: scraping images from https://new.qq.com/ news articles
Sure, here is a sample Python script that scrapes images from news articles on https://new.qq.com/:
```python
import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = "https://new.qq.com/ch/world/"
headers = {"User-Agent": "Mozilla/5.0"}
response = requests.get(base_url, headers=headers)
soup = BeautifulSoup(response.content, 'html.parser')

# Collect every news entry on the channel page
news_list = soup.find_all('div', class_='detail')
for news in news_list:
    link = news.find('a', class_='linkto')
    if link is None:
        continue
    # News title, sanitized so it is safe to use as a file name
    title = re.sub(r'[\\/:*?"<>|]', '_', link.text.strip())
    # Detail link; resolve it in case it is relative
    detail_url = urljoin(base_url, link['href'])
    # Fetch the article detail page
    detail_page = requests.get(detail_url, headers=headers)
    detail_soup = BeautifulSoup(detail_page.content, 'html.parser')
    # Extract the image links from the detail page
    img_list = detail_soup.find_all('img')
    for idx, img in enumerate(img_list):
        img_url = img.get('src')
        if not img_url or not img_url.startswith('http'):
            continue
        # Download the image; number the files so they don't overwrite each other
        r = requests.get(img_url, stream=True)
        with open(f"{title}_{idx}.jpg", 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:
                    f.write(chunk)
```
This script crawls every news item on the https://new.qq.com/ch/world/ page, opens each article's detail page to collect its image links, and downloads the images locally. You can adapt the selectors to your needs; the site's markup may change over time.
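Since the script above names files after article titles, it is worth pulling the sanitization step out into its own helper: titles can contain characters that are illegal in file names on common file systems. A minimal sketch (the character class is an assumption covering the usual Windows-forbidden characters plus control codes):

```python
import re

def safe_filename(title, max_len=100):
    """Replace characters that are illegal in Windows/Unix file names."""
    cleaned = re.sub(r'[\\/:*?"<>|\x00-\x1f]', '_', title)
    return cleaned.strip()[:max_len] or "untitled"

print(safe_filename('a/b:c?'))  # a_b_c_
```

Truncating to `max_len` also guards against overly long titles exceeding file-system path limits.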
https://pixabay.com python
Based on the referenced material, here is an example of downloading Pixabay images with a Python crawler:
```python
import requests

def download_image(url, save_path):
    response = requests.get(url)
    if response.status_code == 200:
        with open(save_path, 'wb') as f:
            f.write(response.content)
        print("Image downloaded:", save_path)
    else:
        print("Image download failed:", url)

def crawl_pixabay_images(keyword, num_images, api_key):
    # The HTML search page does not return JSON; use the official
    # Pixabay API instead, which requires a free API key
    url = "https://pixabay.com/api/"
    params = {"key": api_key, "q": keyword, "per_page": num_images}
    response = requests.get(url, params=params)
    if response.status_code == 200:
        hits = response.json()["hits"][:num_images]
        for i, image in enumerate(hits):
            image_url = image["largeImageURL"]
            save_path = f"{keyword}_{i+1}.jpg"
            download_image(image_url, save_path)
    else:
        print("Request failed:", response.url)

crawl_pixabay_images("python", 5, api_key="YOUR_API_KEY")
```
This code uses the requests library to send HTTP requests and parses the returned JSON to obtain the image URLs. The `download_image` function then downloads each image and saves it locally. Finally, `crawl_pixabay_images` is called to fetch images for the given keyword.
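One practical limit when fetching by keyword: the Pixabay API paginates results and caps `per_page` (3-200 per the public API docs; treat the exact limits as an assumption if they change). A small helper can plan how many pages to request and how many results to keep from the last page:

```python
import math

def pixabay_page_plan(total, per_page=200):
    """Return (num_pages, per_page, keep_on_last) covering `total` results."""
    num_pages = math.ceil(total / per_page)
    keep_on_last = total - (num_pages - 1) * per_page
    return num_pages, per_page, keep_on_last

print(pixabay_page_plan(450))  # (3, 200, 50)
```

Keeping `per_page` constant across requests matters: the API computes result offsets as `(page - 1) * per_page`, so varying it between pages would skip or repeat results.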