PySpark: From Beginner to Expert

"Learning Apache Spark with Python, written by Wenqiang Feng, is a tutorial on Spark development with Python that aims to take readers from beginner to expert. It covers Spark fundamentals, platform configuration, Spark's core concepts and component architecture, programming with RDDs, and introductory statistics and linear algebra."

The book opens by explaining why Spark is a good choice for big-data processing, emphasizing its efficiency and flexibility. Spark is a parallel computing framework designed for large-scale data processing that offers a fast, general, and scalable way to work with data. For Python developers, PySpark provides a friendly interface, letting data scientists and engineers draw on Python's rich ecosystem of libraries for data analysis.

The section on configuring a runtime platform explains in detail how to set up Spark in different environments: running on the Databricks community cloud, configuring Spark on Mac and Ubuntu, installing on Windows, and pairing PySpark with a text editor or an IDE such as PyCharm. It also covers configuring Sparkling Water (Spark combined with H2O) and deploying Spark in the cloud. In particular, the book shows how to use PySpark on Google Colaboratory, a free online Jupyter Notebook environment that is convenient for learning and experimentation.

Next, the author gives an accessible treatment of Apache Spark's core concepts, including the RDD (Resilient Distributed Dataset), Spark's main components, and its layered architecture. Spark's working principles are illustrated with simple examples that help the reader understand its parallel computation model.

In the chapter on programming with RDDs, readers learn how to create and operate on RDDs, including the basic transformations and actions. The chapter also contrasts RDDs with DataFrames: DataFrames provide a higher-level abstraction that is better suited to data modeling and SQL queries.

Statistics and linear algebra are the foundations of data analysis, so the book introduces the relevant notation, basic linear-algebra concepts, and the measures used to evaluate model performance, such as the confusion matrix. It also gives a brief overview of statistical tests, all of which are common tools in data science.

In short, the book is a comprehensive guide that covers everything from installation and configuration to hands-on programming, suitable both for beginners and for Spark developers with some experience. Working through it, readers can systematically learn and master big-data analysis with PySpark.
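As a taste of the model-evaluation measures mentioned above, the basic confusion-matrix formulas (accuracy, precision, recall) can be sketched in plain Python, with no Spark required. The label lists below are made-up illustration data, not an example from the book:

```python
# Accuracy, precision, and recall derived from confusion-matrix counts.
# Plain-Python sketch of the evaluation formulas; the labels are invented.

def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions (hypothetical)

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"accuracy={accuracy}, precision={precision}, recall={recall}")
```

The same counts generalize directly to the distributed setting: in PySpark one would aggregate them across partitions instead of over a local list.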
About This Book
- Learn why and how you can efficiently use Python to process data and build machine learning models in Apache Spark 2.0
- Develop and deploy efficient, scalable real-time Spark solutions
- Take your understanding of using Spark with Python to the next level with this jump start guide

Who This Book Is For
If you are a Python developer who wants to learn about the Apache Spark 2.0 ecosystem, this book is for you. A firm understanding of Python is expected to get the best out of the book. Familiarity with Spark would be useful, but is not mandatory.

What You Will Learn
- Learn about Apache Spark and the Spark 2.0 architecture
- Build and interact with Spark DataFrames using Spark SQL
- Learn how to solve graph and deep learning problems using GraphFrames and TensorFrames respectively
- Read, transform, and understand data and use it to train machine learning models
- Build machine learning models with MLlib and ML
- Learn how to submit your applications programmatically using spark-submit
- Deploy locally built applications to a cluster

In Detail
Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. This book will show you how to leverage the power of Python and put it to use in the Spark ecosystem. You will start by getting a firm understanding of the Spark 2.0 architecture and how to set up a Python environment for Spark.

You will get familiar with the modules available in PySpark. You will learn how to abstract data with RDDs and DataFrames and understand the streaming capabilities of PySpark. You will also get a thorough overview of the machine learning capabilities of PySpark using ML and MLlib, graph processing using GraphFrames, and polyglot persistence using Blaze. Finally, you will learn how to deploy your applications to the cloud using the spark-submit command.

By the end of this book, you will have established a firm understanding of the Spark Python API and how it can be used to build data-intensive applications.

Style and approach
This book takes a very comprehensive, step-by-step approach so you understand how the Spark ecosystem can be used with Python to develop efficient, scalable solutions. Every chapter is standalone and written in a very easy-to-understand manner, with a focus on both the hows and the whys of each concept.
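The spark-submit deployment step described above can be sketched as a shell invocation. Note that the application name, master setting, and resource sizes below are placeholders for illustration, not values taken from either book:

```shell
# Submit a PySpark application to a cluster with spark-submit.
# Placeholders: my_app.py, helpers.zip, and the resource sizes are
# illustrative; adjust them to your own application and cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 2g \
  --py-files helpers.zip \
  my_app.py /path/to/input
```

Running the same script with `--master local[*]` instead executes it on the local machine, which is a common way to test an application before deploying it to a cluster.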