Python Web Scraping, Second Edition (Packt, 2017)

*Packt.Python.Web.Scraping.2nd.Edition.2017.5.pdf* is the second edition of an in-depth tutorial on Python web scraping, written by Katharine Jarmul and Richard Lawson and published by Packt Publishing. The book focuses on gathering data from the Internet and teaches readers how to use Python for web scraping.

Core topics covered:

1. **Introduction to Python web scraping**: the fundamentals of scraping with Python and why Python is the tool of choice — concise syntax, rich library support (BeautifulSoup, Scrapy, and others), and extensive community resources.
2. **Analyzing page structure**: parsing HTML and XML documents, understanding the DOM tree, and using XPath and CSS selectors — techniques no scraper can do without.
3. **Scraping strategies**: anti-scraping mechanisms such as the robots.txt protocol, configuring proxies, throttling requests, and handling JavaScript-rendered content, to keep scraping both compliant and efficient.
4. **The Scrapy framework in detail**: installing, configuring, and using Python's popular crawling framework, including writing middleware, downloaders, spiders, and pipelines to build a complete scraping project.
5. **Storing and processing scraped data**: saving results to databases (SQLite, MySQL, MongoDB, and others), then cleaning, organizing, and analyzing them for downstream data mining or machine learning.
6. **Copyright and ethics**: the legal side of web scraping — respecting each site's copyright policy, using scraped data only within legal bounds, and complying with the relevant regulations.
7. **Currency and maintenance**: as a 2017 edition, the book reflects the then-current Python 3.x releases and library features, along with best-practice recommendations for web scraping.
8. **Copyright notice**: all rights reserved; no part of the book may be reproduced, stored, or transmitted in any form without written permission from the publisher.

Reading this book gives you a well-rounded set of Python web-scraping skills for a constantly changing web and for complex scraping scenarios. Whether you are a data analyst, a developer, or simply looking to broaden your knowledge, there is plenty here to benefit from.
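The selector techniques in point 2 can be sketched with BeautifulSoup. This is a minimal illustration, assuming the `bs4` package is installed; the HTML snippet and its `countries` table are invented for the example:

```python
from bs4 import BeautifulSoup

# A small inline document standing in for a fetched page (hypothetical data).
html = """
<html><body>
  <table id="countries">
    <tr><td class="name">Afghanistan</td><td class="area">647,500</td></tr>
    <tr><td class="name">Albania</td><td class="area">28,748</td></tr>
  </table>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS selectors pick out the cells; the same nodes could also be
# reached with XPath via lxml, as the book discusses.
names = [td.get_text() for td in soup.select("#countries td.name")]
print(names)  # ['Afghanistan', 'Albania']
```

In a real scraper the `html` string would come from an HTTP response rather than a literal.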
Python Web Scraping - Second Edition by Katharine Jarmul
English | 30 May 2017 | ASIN: B0725BCPT1 | 220 Pages | AZW3 | 3.52 MB

**Key Features**

- A hands-on guide to web scraping using Python with solutions to real-world problems
- Create a number of different web scrapers in Python to extract information
- This book includes practical examples on using the popular and well-maintained libraries in Python for your web scraping needs

**Book Description**

The Internet contains the most useful set of data ever assembled, most of which is publicly accessible for free. However, this data is not easily usable. It is embedded within the structure and style of websites and needs to be carefully extracted. Web scraping is becoming increasingly useful as a means to gather and make sense of the wealth of information available online.

This book is the ultimate guide to using the latest features of Python 3.x to scrape data from websites. In the early chapters, you'll see how to extract data from static web pages. You'll learn to use caching with databases and files to save time and manage the load on servers. After covering the basics, you'll get hands-on practice building a more sophisticated crawler using browsers, crawlers, and concurrent scrapers.

You'll determine when and how to scrape data from a JavaScript-dependent website using PyQt and Selenium. You'll get a better understanding of how to submit forms on complex websites protected by CAPTCHA. You'll find out how to automate these actions with Python packages such as mechanize. You'll also learn how to create class-based scrapers with Scrapy libraries and implement your learning on real websites.

By the end of the book, you will have explored testing websites with scrapers, remote scraping, best practices, working with images, and many other relevant topics.
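The caching idea mentioned above — saving downloaded HTML so repeat runs don't re-request the same pages — can be sketched as a simple disk cache. The function name and cache directory are invented for this illustration:

```python
import hashlib
import os
import urllib.request

CACHE_DIR = "cache"  # hypothetical location for saved pages


def cached_download(url):
    """Return the page at url, reusing an on-disk copy when one exists."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    # Hash the URL so it becomes a safe, fixed-length filename.
    key = hashlib.sha1(url.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".html")
    if os.path.exists(path):  # cache hit: no network request needed
        with open(path, "rb") as f:
            return f.read()
    html = urllib.request.urlopen(url).read()  # cache miss: fetch and store
    with open(path, "wb") as f:
        f.write(html)
    return html
```

The book stores cached pages in databases as well as files; this file-based version shows only the core idea of keying cached responses by URL.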
**What you will learn**

- Extract data from web pages with simple Python programming
- Build a concurrent crawler to process web pages in parallel
- Follow links to crawl a website
- Extract features from the HTML
- Cache downloaded HTML for reuse
- Compare concurrent models to determine the fastest crawler
- Find out how to parse JavaScript-dependent websites
- Interact with forms and sessions

**About the Author**

Katharine Jarmul is a data scientist and Pythonista based in Berlin, Germany. She runs a data science consulting company, Kjamistan, that provides services such as data extraction, acquisition, and modelling for small and large companies. She has been writing Python since 2008 and scraping the web with Python since 2010, and has worked at both small and large start-ups that use web scraping for data analysis and machine learning. When she's not scraping the web, you can follow her thoughts and activities via Twitter (@kjam).

Richard Lawson is from Australia and studied Computer Science at the University of Melbourne. Since graduating, he built a business specializing in web scraping while travelling the world, working remotely from over 50 countries. He is a fluent Esperanto speaker, conversational in Mandarin and Korean, and active in contributing to and translating open source software. He is currently undertaking postgraduate studies at Oxford University and in his spare time enjoys developing autonomous drones.

**Table of Contents**

1. Introduction
2. Scraping the data
3. Caching downloads
4. Concurrent downloading
5. Dynamic content
6. Interacting with forms
7. Solving CAPTCHA
8. Scrapy
9. Putting it All Together
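The "concurrent crawler" item above can be sketched with Python's standard `concurrent.futures` module. This is a minimal thread-pool version; the function names and the stand-in fetcher are invented for illustration, and a real crawler would pass a fetch function built on urllib or requests:

```python
from concurrent.futures import ThreadPoolExecutor


def crawl_concurrently(urls, fetch, max_workers=5):
    """Fetch many pages in parallel; threads overlap the I/O waits."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip pairs each URL with its page.
        return dict(zip(urls, pool.map(fetch, urls)))


# Demo with a stand-in fetcher instead of a real HTTP request.
def fake_fetch(url):
    return "<html>%s</html>" % url


pages = crawl_concurrently(["http://a/", "http://b/"], fake_fetch)
print(sorted(pages))  # ['http://a/', 'http://b/']
```

Threads suit downloading because it is I/O-bound; the book also compares other concurrency models (such as multiprocessing) to find the fastest crawler.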