No selector specified
Posted: 2024-06-08 16:08:13 · Views: 90
According to reference [1] you provided, the error "No selector attribute (cookie/header/name/parameter) was specified" means that a Struts logic tag on the page was used without a selector attribute. Per the reference, the likely cause is that a logic tag was used on the Struts page without specifying which selector to match against. To resolve this, you must explicitly specify a selector attribute on the logic tag.
To fix this, follow these steps:
1. Locate the logic tags `<logic:present>` and `<logic:equal>` mentioned in the reference.
2. Make sure each logic tag has a `name` attribute and that its value equals "show"; this attribute serves as the selector.
3. If the logic tags use any other selector attributes (cookie/header/name/parameter, as listed in the error), make sure those are specified correctly as well.
By specifying a selector attribute as described above, you should be able to resolve the "No selector specified" error.
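For illustration, a minimal JSP sketch of the fix (the bean name `show` and the property/value shown are assumptions, not taken from your page):

```jsp
<%-- Struts logic tags need at least one selector attribute
     (cookie/header/name/parameter). Here `name` is the selector,
     so the tag knows which bean to test. --%>
<logic:present name="show">
    <logic:equal name="show" property="status" value="1">
        <%-- Rendered only when the bean "show" exists and show.status == 1 --%>
        <p>Record is visible.</p>
    </logic:equal>
</logic:present>
```

Without `name="show"` (or another selector attribute), the tag cannot tell what to check, which is exactly what the error message reports.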
Related question
```python
import requests
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}.html'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'

def get_text(url: object) -> object:
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    title = selector.xpath('//div[@class="bookname"]/h1/text()')
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    text = selector.xpath('//div[@id="content"]/text()')
    # Print the text to standard output
    print(text)
    # Write the text to a file in the specified directory
    with open(path + "\\" + title[0], 'w', encoding='utf-8') as f:
        for i in text:
            f.write(i)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```
Please help me modify this so it can scrape the title.
Sure! Here is the modified code:
```python
import requests
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}.html'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'

def get_text(url):
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    title = selector.xpath('//div[@class="bookname"]/h1/text()')
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    text = selector.xpath('//div[@id="content"]/text()')
    # Print the text to standard output
    print(text)
    # Write the text to a file in the specified directory
    with open(path + "\\" + title[0].strip() + ".txt", 'w', encoding='utf-8') as f:
        for i in text:
            f.write(i)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```
On top of your original code, I added `.strip()` in the `open()` call to remove whitespace around the title, and appended a `.txt` extension to the file name to make the file easier to work with later.
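Note that `.strip()` only removes leading and trailing whitespace; if a scraped title contains characters that are illegal in Windows file names (such as `:` or `?`), `open()` will still fail. A minimal sketch of a stricter sanitizer (the helper name `safe_filename` is my own, not part of the code above):

```python
import re

def safe_filename(title: str) -> str:
    # Strip surrounding whitespace, then replace characters that are
    # invalid in Windows file names with an underscore.
    return re.sub(r'[\\/:*?"<>|]', '_', title.strip())

print(safe_filename('  第1章: 开始? '))  # -> 第1章_ 开始_
```

You could then write `open(path + "\\" + safe_filename(title[0]) + ".txt", ...)` to make the file name robust against arbitrary scraped titles.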
```python
# coding:utf-8
import re
import time
import requests
import lxml.html
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'
# path = './'
sleep = 1

def get_text(url):
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    book_info = selector.xpath('/html/body/div[5]')[0]
    title = book_info.xpath("//h1")[0].text
    # title = .xpath('/html/body/div[5]/div[2]/h1')
    print(title)
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    chapter_list = selector.xpath('/html/body/div[5]/dl/dd/a')
    last_chapter_num = re.search(r'(\d*)\.html$', chapter_list[-1].attrib.get('href')).group(1)
    # Write the text to a file in the specified directory
    with open(title + ".txt", 'w', encoding='utf-8') as f:
        for num in range(1, int(last_chapter_num) + 1):
            content_url = url + f'/{num}.html'
            print(content_url)
            time.sleep(sleep)  # Throttle requests so the site does not block us
            try:
                r = requests.get(content_url, timeout=30)
            except:
                print('Connection failed, retrying once')
                r = requests.get(content_url, timeout=30)
            r.encoding = 'utf-8'
            content_dom = lxml.html.fromstring(r.text)
            content = content_dom.xpath('//*[@id="chaptercontent"]')[0].text_content()
            f.writelines(content)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```
How does this code work?
This code works as follows:
1. Define the list of novel URLs to crawl and the path where the novels will be saved.
2. Define a function `get_text` that fetches a novel's content: its title, its chapter list, and the chapter text.
3. Inside `get_text`, fetch the novel page's HTML for the given URL, then use XPath to parse out the novel's title and chapter links.
4. Use a regular expression to extract the number of the latest chapter from the last chapter link, then iterate over chapter numbers to fetch each chapter's text.
5. Write the novel's title and text to a file at the specified path.
6. In the main program, loop over the URL list and call `get_text` for each novel, saving its content to a file.
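Step 4 can be illustrated in isolation. Assuming the last chapter link's `href` looks like `/book/1/325.html` (a made-up example), the regex captures the trailing number before `.html`:

```python
import re

# The last <a> in the chapter list points to the newest chapter,
# e.g. href="/book/1/325.html"; capture the digits before ".html".
href = '/book/1/325.html'
last_chapter_num = re.search(r'(\d*)\.html$', href).group(1)
print(last_chapter_num)  # -> 325

# The crawler then builds every chapter URL from 1 up to that number:
for num in range(1, int(last_chapter_num) + 1):
    content_url = f'https://www.xxddxs.com/book/1/{num}.html'
```

This is why the code only needs the last link: chapter pages are numbered sequentially, so knowing the highest number is enough to enumerate them all.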