Why does `content` in this code end up with no value?

```python
from lxml import etree
import requests

urls = []
for i in range(0, 5, 1):
    i = i * 20
    url = 'https://movie.douban.com/review/best/?start={}'.format(i)
    urls.append(url)

import requests
from lxml import etree

herders = {
    "Content-Type": "text/html; charset=utf-8",
    "uesr-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.67"
}

detail_urls = []
for url in urls:
    # Send the request
    response = requests.get(url, headers=herders)
    # Decode the response bytes
    content = response.content.decode('utf8')
    # Parse the HTML string
    html = etree.HTML(content)
    # Use XPath to extract the URL of each movie review
    detail_url = html.xpath('//h2/a/@href')
    detail_urls.append(detail_url)
    print(detail_urls)
    break
```
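A likely cause, offered as an assumption rather than a confirmed answer: the header key `"uesr-agent"` is misspelled, so `requests` still sends its default User-Agent, which Douban typically rejects; the page that comes back then contains no `//h2/a` review links and `detail_url` stays empty. A minimal sketch of the corrected request:

```python
import requests
from lxml import etree

# Assumed fix: the key must be spelled "User-Agent" for the browser-like UA to be sent.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
}

url = 'https://movie.douban.com/review/best/?start=0'
response = requests.get(url, headers=headers)
print(response.status_code)        # expect 200 once the UA is accepted
html = etree.HTML(response.content.decode('utf8'))
print(html.xpath('//h2/a/@href'))  # review detail URLs, if the page loaded normally
```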
The loop runs from 2 up to sqrt(n). On each iteration we check whether `$n` is divisible by the current loop variable `$i`. If it is, we append `$i` to the array and update `$n` to the result of dividing `$n` by `$i`.
After the loop, if `$n` is still greater than 1, then `$n` itself is a prime factor, and we append it to the array as well.
Finally, we use an `echo` statement to print the prime factors that were found.
If you run this code with `$n` set to 36, it will output:
```
The prime factors of the positive integer 36 are: 2 2 3 3
```
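The PHP snippet this answer describes is not shown above; as a rough equivalent (my own reconstruction, not the original code), here is a minimal Python sketch of the same trial-division idea:

```python
def prime_factors(n):
    """Return the prime factors of n by trial division."""
    factors = []
    i = 2
    while i * i <= n:        # only divisors up to sqrt(n) need to be tested
        while n % i == 0:    # divide out i as many times as it divides n
            factors.append(i)
            n //= i
        i += 1
    if n > 1:                # whatever remains is itself a prime factor
        factors.append(n)
    return factors

print("The prime factors of the positive integer 36 are:", *prime_factors(36))  # 2 2 3 3
```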
Hope this solves your problem! If you have any other questions, feel free to ask.
Related questions
```python
import requests
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}.html'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'

def get_text(url: object) -> object:
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    title = selector.xpath('//div[@class="bookname"]/h1/text()')
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    text = selector.xpath('//div[@id="content"]/text()')
    # Print the text to standard output
    print(text)
    # Write the text to a file in the specified directory
    with open(path + "\\" + title[0], 'w', encoding='utf-8') as f:
        for i in text:
            f.write(i)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```

Please help me modify this so it can scrape the title.
Sure! Here is the modified code:
```python
import requests
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}.html'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'

def get_text(url):
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    title = selector.xpath('//div[@class="bookname"]/h1/text()')
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    text = selector.xpath('//div[@id="content"]/text()')
    # Print the text to standard output
    print(text)
    # Write the text to a file in the specified directory
    with open(path + "\\" + title[0].strip() + ".txt", 'w', encoding='utf-8') as f:
        for i in text:
            f.write(i)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```
Building on the original code, I added `.strip()` inside the `open()` call to remove whitespace from the title, and appended a `.txt` extension to the file name to make later handling easier.
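As an optional refinement (my suggestion, not part of the answer above), `os.path.join` can replace the manual `"\\"` concatenation and keeps the path handling portable:

```python
import os

path = r'D:\py\venv\Scripts\xiaoshuo'
title = ['  Some Book Title \n']   # scraped titles often carry stray whitespace
filename = os.path.join(path, title[0].strip() + ".txt")
print(filename)                    # D:\py\venv\Scripts\xiaoshuo\Some Book Title.txt
```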
```python
# coding:utf-8
import re
import time

import requests
import lxml.html
from lxml import etree

urls = ['https://www.xxddxs.com/book/{}'.format(i) for i in range(1, 20)]
path = r'D:\py\venv\Scripts\xiaoshuo'
# path = './'
sleep = 1

def get_text(url):
    r = requests.get(url)
    r.encoding = 'utf-8'
    selector = etree.HTML(r.text)
    # Get the book title from the page
    book_info = selector.xpath('/html/body/div[5]')[0]
    title = book_info.xpath("//h1")[0].text
    # title = .xpath('/html/body/div[5]/div[2]/h1')
    print(title)
    # If no title found, print an error message and return early
    if len(title) == 0:
        print(f"No title found for {url}")
        return
    # Get the text of the chapter
    chapter_list = selector.xpath('/html/body/div[5]/dl/dd/a')
    last_chapter_num = re.search(r'(\d*)\.html$', chapter_list[-1].attrib.get('href')).group(1)
    # Write the text to a file in the specified directory
    with open(title + ".txt", 'w', encoding='utf-8') as f:
        for num in range(1, int(last_chapter_num) + 1):
            content_url = url + f'/{num}.html'
            print(content_url)
            time.sleep(sleep)  # throttle requests so the site does not refuse access
            try:
                r = requests.get(content_url, timeout=30)
            except:
                print('Connection failed, retrying once')
                r = requests.get(content_url, timeout=30)
            r.encoding = 'utf-8'
            content_dom = lxml.html.fromstring(r.text)
            content = content_dom.xpath('//*[@id="chaptercontent"]')[0].text_content()
            f.writelines(content)

if __name__ == '__main__':
    for url in urls:
        get_text(url)
```

What is the execution flow of this code?
The execution flow of this code is as follows:
1. Define the list of novel URLs to crawl and the file path for saving the novels.
2. Define a function `get_text` that fetches a novel's content: its title, chapter information, and body text.
3. Inside `get_text`, first fetch the novel page's HTML for the URL, then use XPath to parse the title and the chapter list out of it.
4. Use a regular expression to extract the number of the latest chapter, then iterate over the chapter numbers to fetch each chapter's body text (see the small example after this list).
5. Write the chapters' text to a `.txt` file named after the novel's title.
6. In the main program, loop over the URL list and call `get_text` for each novel to fetch its content and save it to a file.
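To illustrate step 4, here is a small sketch (with a made-up `href`) of how the regex pulls the latest chapter number out of the last link in the chapter list:

```python
import re

last_href = '/book/1/523.html'   # hypothetical href of the last <a> in the chapter list
last_chapter_num = re.search(r'(\d*)\.html$', last_href).group(1)
print(last_chapter_num)          # '523' -> chapters 1..523 are then fetched one by one
```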