What does selector.css('#title') in Python look for in the HTML?
What is selector.css(selector)?
selector.css(selector) is a method of the Selector object in Python's Scrapy framework (and the underlying parsel library). It locates and extracts elements from a page using a CSS selector string. `#title` is a CSS ID selector: it matches the element whose `id` attribute is "title". The method is a convenient way to pull content out of HTML tags.
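As a minimal sketch with parsel (the HTML snippet and variable names below are made up purely for illustration):
```python
import parsel

# Tiny made-up page to demonstrate what an ID selector matches
html = '<html><body><h1 id="title">Hello</h1><p class="intro">Some text</p></body></html>'
sel = parsel.Selector(html)

# '#title' is an ID selector: it matches the element whose id attribute is "title"
print(sel.css('#title::text').get())   # -> Hello

# For comparison, '.intro' is a class selector (matches class="intro")
print(sel.css('.intro::text').get())   # -> Some text
```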
Related questions
```python
selector = parsel.Selector(html_data)
lis = selector.css('.list_item h2 a::attr(href)').getall()
details = []
for li in lis:
    detail_url = 'https://travel.qunar.com' + li
    detail_html = requests.get(detail_url).text
    details.append(detail_html)
# print the HTML data of each site
for detail_html in details:
    print(detail_html)
    detail_html = requests.get(detail_url).text
    # parse the data
    detail_selector = parsel.Selector(detail_html)
    title = detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get()
    comment = detail_selector.css('.title white::text').get()
    view_count = detail_selector.css('.view_count::text').get()
    date = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.when > p > span.data::text').get()
    days = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howlong > p > span.data::text').get()
    money = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howmuch > p > span.data::text').get()
    character = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.who > p > span.data::text').get()
    play_list = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.how > p > span.data > span::text').getall()
    for play in play_list:
        print(play)
    print(title, comment, view_count, date, days, money, character, play_list, detail_url)
    time.sleep(random.randint(3,5))
```
This code only outputs the last website's information. How should I change it?
This code only outputs the last website's information because, inside the second loop, the page is requested again with `detail_url`, which still points at the last link from the first loop, so each iteration overwrites `detail_html` with the same page. Store the information from every request in a list and process it all afterwards.
You can try the following modification:
```python
import random
import time

import parsel
import requests

selector = parsel.Selector(html_data)  # html_data: the listing page HTML fetched earlier
lis = selector.css('.list_item h2 a::attr(href)').getall()
details = []
for li in lis:
    detail_url = 'https://travel.qunar.com' + li
    detail_html = requests.get(detail_url).text
    details.append((detail_url, detail_html))  # keep the URL together with its HTML
    time.sleep(random.randint(3, 5))  # wait after each request to avoid being blocked for requesting too often

for detail_url, detail_html in details:
    # parse the data
    detail_selector = parsel.Selector(detail_html)
    title = detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get()
    comment = detail_selector.css('.title.white::text').get()
    view_count = detail_selector.css('.view_count::text').get()
    date = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.when > p > span.data::text').get()
    days = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howlong > p > span.data::text').get()
    money = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howmuch > p > span.data::text').get()
    character = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.who > p > span.data::text').get()
    play_list = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.how > p > span.data > span::text').getall()
    for play in play_list:
        print(play)
    print(title, comment, view_count, date, days, money, character, play_list, detail_url)
```
This way, the URL and HTML returned by every request are stored in the `details` list, and each site's information is parsed in a second pass. To avoid being blocked for requesting too frequently, the code also waits a few seconds after each request.
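If you want to keep the parsed fields rather than just print them, one option is to collect them into a list of dictionaries and write them out in one go. A minimal sketch, assuming `details` holds `(detail_url, detail_html)` pairs as in the block above; the output filename and the subset of fields are my own choices for illustration:
```python
import csv

import parsel

rows = []
for detail_url, detail_html in details:
    detail_selector = parsel.Selector(detail_html)
    rows.append({
        'url': detail_url,
        'title': detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get(),
        'view_count': detail_selector.css('.view_count::text').get(),
        'date': detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.when > p > span.data::text').get(),
    })

# Write all collected rows to a CSV file (filename chosen arbitrarily for this example)
with open('qunar_details.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['url', 'title', 'view_count', 'date'])
    writer.writeheader()
    writer.writerows(rows)
```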
```python
selector = parsel.Selector(html_data)
lis = selector.css('.list_item h2 a::attr(href)').getall()
details = []
for li in lis:
    detail_url = 'https://travel.qunar.com' + li
    details.append(detail_url)
# print each site's link
for detail_url in details:
    print(detail_url)
    # parse the data
    detail_selector = parsel.Selector(detail_html)
    title = detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get()
    comment = detail_selector.css('.title white::text').get()
    view_count = detail_selector.css('.view_count::text').get()
    date = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.when > p > span.data::text').get()
    days = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howlong > p > span.data::text').get()
    money = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howmuch > p > span.data::text').get()
    character = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.who > p > span.data::text').get()
    play_list = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.how > p > span.data > span::text').getall()
    for play in play_list:
        print(play)
    print(title, comment, view_count, date, days, money, character, play_list, detail_url)
    time.sleep(random.randint(3,5))
```
Why does this code only output the content of the last website, and how should it be modified?
This code only outputs the content of the last website because the second loop never downloads the page for the current `detail_url`: `detail_selector` keeps being built from `detail_html`, which still holds the last page fetched earlier, so every iteration parses the same HTML.
To fix this, wrap the per-page code in a function and call that function inside the loop, so each site's page is downloaded, parsed, and printed. A concrete implementation looks like this:
```python
import random
import time

import parsel
import requests

selector = parsel.Selector(html_data)  # html_data: the listing page HTML fetched earlier
lis = selector.css('.list_item h2 a::attr(href)').getall()
details = []
for li in lis:
    detail_url = 'https://travel.qunar.com' + li
    details.append(detail_url)

# function that downloads and parses one detail page
def parse_detail(detail_url):
    detail_html = requests.get(detail_url).text
    detail_selector = parsel.Selector(detail_html)
    title = detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get()
    comment = detail_selector.css('.title.white::text').get()
    view_count = detail_selector.css('.view_count::text').get()
    date = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.when > p > span.data::text').get()
    days = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howlong > p > span.data::text').get()
    money = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.howmuch > p > span.data::text').get()
    character = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.who > p > span.data::text').get()
    play_list = detail_selector.css('#js_mainleft > div.b_foreword > ul > li.f_item.how > p > span.data > span::text').getall()
    for play in play_list:
        print(play)
    print(title, comment, view_count, date, days, money, character, play_list, detail_url)
    time.sleep(random.randint(3, 5))  # wait between requests to avoid being blocked

# loop over every link and parse/print its detail page
for detail_url in details:
    parse_detail(detail_url)
```
With this change, the content of every website is printed.
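If the scraped fields are needed for more than printing, a common variant is to have the function return the values instead of only printing them. A minimal sketch of that idea (the helper name `collect_details` and the returned field names are my own, not from the original code):
```python
import random
import time

import parsel
import requests

def parse_detail(detail_url):
    """Download one detail page and return a few of its fields as a dict (illustrative variant)."""
    detail_html = requests.get(detail_url).text
    detail_selector = parsel.Selector(detail_html)
    return {
        'url': detail_url,
        'title': detail_selector.css('.b_crumb_cont *:nth-child(3)::text').get(),
        'view_count': detail_selector.css('.view_count::text').get(),
    }

def collect_details(urls):
    """Call parse_detail for every URL and gather the results in a list."""
    results = []
    for url in urls:
        results.append(parse_detail(url))
        time.sleep(random.randint(3, 5))  # keep the polite delay between requests
    return results
```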