Write Python code to scrape the images from https://www.ivsky.com/tupian/meinv_t50/
Posted: 2023-06-21 22:04:09 · Views: 62
You can implement this scraper with Python's requests and BeautifulSoup libraries. Here is a simple example:
```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://www.ivsky.com/tupian/meinv_t50/"
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

# on this listing page the class "il_img" is on the wrapping div, not the img tag
image_tags = soup.select('div.il_img img')
for image_tag in image_tags:
    # src may be protocol-relative ("//img..."); resolve it against the page URL
    image_url = urljoin(url, image_tag['src'])
    img_response = requests.get(image_url)
    with open(image_url.split('/')[-1], 'wb') as f:
        f.write(img_response.content)
```
This code first sends an HTTP request with the requests library to fetch the page's HTML, then parses it with BeautifulSoup and locates the image tags in the "il_img" entries. It then iterates over each tag, reads its src attribute, sends a second HTTP request to fetch the image's binary data, and writes that data to a local file.
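Two details in the snippet above are fragile and worth isolating: the src attributes on this site are often protocol-relative (`//img...`), and text used as a filename can contain characters the filesystem rejects. A minimal sketch of two helpers for this (the names `absolutize` and `safe_filename` are illustrative, not from the original code):

```python
import re
from urllib.parse import urljoin

def absolutize(page_url: str, src: str) -> str:
    # urljoin resolves protocol-relative ("//host/...") and relative srcs
    # against the page the tag was scraped from
    return urljoin(page_url, src)

def safe_filename(name: str) -> str:
    # Replace characters that are invalid in Windows/Unix filenames;
    # fall back to "image" if nothing usable remains
    return re.sub(r'[\\/:*?"<>|]', '_', name).strip() or "image"

print(absolutize("https://www.ivsky.com/tupian/meinv_t50/",
                 "//img.ivsky.com/img/tupian/t/001.jpg"))
print(safe_filename('a/b:c?'))
```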
Related question
```python
import requests
from lxml import etree
from selenium import webdriver
from selenium.webdriver.common.by import By
import os
import time

url = 'https://www.ivsky.com/'  # target site
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36'
}
path = r'C:\Users\yangyuhuan\AppData\Local\Google\Chrome\Application\chromedriver'
driver = webdriver.Chrome(executable_path=path)  # start the driver
if not os.path.exists("./images"):
    os.mkdir("./images")  # create the output folder
for page in range(10):  # page numbers
    url = f"https://www.ivsky.com/bizhi/index_{page}.html"
    driver.get(url)
    response = driver.page_source  # grab the page source
    html = etree.HTML(response)  # parse the page structure
    time.sleep(3)
    src_image_result = html.xpath('//div[@class="il_img"]/a/img/@src')
    name_image_result = html.xpath('//div[@class="il_img"]/a/img/@alt')
    for src, name in zip(src_image_result, name_image_result):
        img_url = "https:" + src
        res3 = requests.get(img_url, headers=headers).content  # fetch the image bytes
        time.sleep(3)
        print(f'Downloading: {name}')
        with open('./images/' + name + '.jpg', 'wb') as f:
            f.write(res3)
```
Convert this code to use multiple threads.
Sure, I can help you make this code multithreaded using Python's `threading` module. Here is the modified version:
```python
import requests
from lxml import etree
from selenium import webdriver
from selenium.webdriver.common.by import By
import os
import time
import threading

url = 'https://www.ivsky.com/'  # target site
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36'
}
path = r'C:\Users\yangyuhuan\AppData\Local\Google\Chrome\Application\chromedriver'
driver = webdriver.Chrome(executable_path=path)  # start the driver
if not os.path.exists("./images"):
    os.mkdir("./images")  # create the output folder

def download_image(src, name):
    img_url = "https:" + src
    res3 = requests.get(img_url, headers=headers).content  # fetch the image bytes
    time.sleep(3)
    print(f'Downloading: {name}')
    with open('./images/' + name + '.jpg', 'wb') as f:
        f.write(res3)

def download_page(page):
    url = f"https://www.ivsky.com/bizhi/index_{page}.html"
    driver.get(url)
    response = driver.page_source  # grab the page source
    html = etree.HTML(response)  # parse the page structure
    time.sleep(3)
    src_image_result = html.xpath('//div[@class="il_img"]/a/img/@src')
    name_image_result = html.xpath('//div[@class="il_img"]/a/img/@alt')
    threads = []
    for src, name in zip(src_image_result, name_image_result):
        t = threading.Thread(target=download_image, args=(src, name))
        threads.append(t)
        t.start()
    for thread in threads:
        thread.join()

for page in range(10):  # page numbers
    download_page(page)
driver.quit()
```
In the modified code, each image download runs in its own thread via the `download_image` function, while `download_page` still visits the listing pages sequentially in the main thread (the Selenium driver is not thread-safe, so it stays in a single thread). The thread objects for a page are collected in the `threads` list, and `thread.join()` waits for all of that page's downloads to finish before moving on to the next page.
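A common alternative to managing raw `Thread` objects is `concurrent.futures.ThreadPoolExecutor`, which caps how many downloads run at once (spawning one unbounded thread per image can get a scraper throttled or blocked). This is a sketch under the assumption that the per-image work is factored into a `worker(src, name)` callable; `download_all` is an illustrative name, not part of the original code:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(items, worker, max_workers=8):
    # Submit worker(src, name) for every (src, name) pair and wait for all
    # results; max_workers bounds concurrent requests to stay polite.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(worker, src, name) for src, name in items]
        # Collect results in submission order; f.result() re-raises any
        # exception the worker hit, so failures are not silently dropped.
        return [f.result() for f in futures]

# Example with a stand-in worker instead of a real HTTP download:
pairs = [("//a.jpg", "one"), ("//b.jpg", "two")]
print(download_all(pairs, lambda src, name: f"{name}: https:{src}"))
```

With the real scraper, the worker would be the `download_image` function from the code above, called once per `(src, name)` pair found on a page.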