```
import requests
from bs4 import BeautifulSoup

def getHTMLText(url):
    try:
        r = requests.get(url,timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return "产生异常"

def jiexi(html_doc):
    soup = BeautifulSoup(html_doc,"html.parser")
    links = soup.find_all('a')
    for link in links:
        print(link.name,link["nref"],link.get.next())

if __name__ == 'main':
    url = "https://wallhaven.cc/toplist?page=7"
    print(getHTMLText(url))
```
Improve this code.
This code can be improved in the following ways:
1. The function name jiexi is not very descriptive; rename it to the clearer parse_links.
2. When parsing links, add a check so that only links whose href starts with http or https are printed; other links are ignored.
3. In the main block, call parse_links instead of printing the raw HTML page content.
4. The original code also contains several typos that the revised version corrects: link["nref"] should be link["href"], link.get.next() should be link.get_text(), and 'main' should be '__main__'.
The improved code is as follows:
```
import requests
from bs4 import BeautifulSoup


def get_html_text(url):
    """Fetch the page and return its text, or an error message on failure."""
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        return "产生异常"


def parse_links(html_doc):
    """Print the tag name, href and link text of every absolute <a> link."""
    soup = BeautifulSoup(html_doc, "html.parser")
    links = soup.find_all('a')
    for link in links:
        # Only print links whose href starts with http (this also covers https)
        if link.has_attr('href') and link['href'].startswith('http'):
            print(link.name, link['href'], link.get_text())


if __name__ == '__main__':
    url = "https://wallhaven.cc/toplist?page=7"
    html_text = get_html_text(url)
    parse_links(html_text)
```
With these changes, the code is clearer and easier to understand, and its functionality is more complete.
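One further refinement worth considering, sketched here rather than folded into the code above: when the request fails, get_html_text returns the string "产生异常", and parse_links then tries to parse that error message as HTML. Returning None on failure and checking for it in the main block makes the failure explicit:
```
import requests
from bs4 import BeautifulSoup


def get_html_text(url):
    """Return the page text, or None if the request fails."""
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        return None


def parse_links(html_doc):
    soup = BeautifulSoup(html_doc, "html.parser")
    for link in soup.find_all('a'):
        if link.has_attr('href') and link['href'].startswith('http'):
            print(link.name, link['href'], link.get_text())


if __name__ == '__main__':
    url = "https://wallhaven.cc/toplist?page=7"
    html_text = get_html_text(url)
    if html_text is None:
        print("产生异常")  # request failed; nothing to parse
    else:
        parse_links(html_text)
```
This keeps the error handling in one place and avoids feeding an error string into BeautifulSoup.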