7. Use Spark SQL to programmatically create the MySQL table people(ID varchar(10), Name varchar(30), sex Int, Address varchar(30), Telephone varchar(20)) and insert 10 rows of data.
OK, here are the steps to create a MySQL table and insert data with Spark SQL:
1. Import the required classes:
```scala
import org.apache.spark.sql.{SparkSession, SaveMode}
import java.util.Properties
```
2. Create a SparkSession:
```scala
val spark = SparkSession.builder()
  .appName("CreateMySQLTable")
  .master("local[*]")
  .getOrCreate()
```
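The MySQL JDBC driver also has to be on Spark's classpath. If it is not already there, one option (a sketch; the connector coordinates and version are assumptions, pick whatever matches your MySQL server) is to let Spark fetch it via `spark.jars.packages` when building the session:
```scala
// Pull the MySQL Connector/J driver from Maven when the session starts.
// The version 8.0.33 below is only an example.
val spark = SparkSession.builder()
  .appName("CreateMySQLTable")
  .master("local[*]")
  .config("spark.jars.packages", "mysql:mysql-connector-java:8.0.33")
  .getOrCreate()
```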
3. Define the MySQL connection settings:
```scala
val url = "jdbc:mysql://localhost:3306/test"
val table = "people"
val properties = new Properties()
properties.setProperty("user", "root")
properties.setProperty("password", "your_password")
```
Here, replace "localhost" with your MySQL server address, "test" with your database name, "root" with your MySQL user name, and "your_password" with your MySQL password.
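If Spark cannot locate the JDBC driver on its own, you can also name the driver class explicitly through the same `Properties` object (a minimal sketch; the class name below assumes MySQL Connector/J 8.x):
```scala
// Explicitly tell Spark which JDBC driver class to load.
// com.mysql.cj.jdbc.Driver is the Connector/J 8.x class;
// older 5.x connectors use com.mysql.jdbc.Driver instead.
properties.setProperty("driver", "com.mysql.cj.jdbc.Driver")
```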
4. Create the table and insert the data:
```scala
val data = Seq(
("1", "Tom", 1, "Beijing", "1234567890"),
("2", "Jerry", 0, "Shanghai", "2345678901"),
("3", "Alice", 1, "Guangzhou", "3456789012"),
("4", "Bob", 0, "Shenzhen", "4567890123"),
("5", "Charlie", 1, "Hangzhou", "5678901234"),
("6", "David", 0, "Chengdu", "6789012345"),
("7", "Emily", 1, "Xi'an", "7890123456"),
("8", "Frank", 0, "Wuhan", "8901234567"),
("9", "Grace", 1, "Chongqing", "9012345678"),
("10", "Henry", 0, "Nanjing", "0123456789")
)
val rdd = spark.sparkContext.parallelize(data)
val df = spark.createDataFrame(rdd).toDF("ID", "Name", "Sex", "Address", "Telephone")
df.write.mode(SaveMode.Overwrite).jdbc(url, table, properties)
```
This code creates a table named "people" in the target database and inserts the 10 rows of data.
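Note that with `SaveMode.Overwrite` Spark drops and recreates the table, and the column types come from Spark's default JDBC type mapping (string columns typically become TEXT in MySQL). If you want the table to match the exact column definitions in the question, you can supply them through the `createTableColumnTypes` option when writing (a sketch, assuming Spark 2.2 or later):
```scala
// Ask Spark to use these column types in the generated CREATE TABLE
// statement instead of its default JDBC type mapping.
df.write
  .mode(SaveMode.Overwrite)
  .option("createTableColumnTypes",
    "ID VARCHAR(10), Name VARCHAR(30), Sex INT, Address VARCHAR(30), Telephone VARCHAR(20)")
  .jdbc(url, table, properties)
```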
5. Stop the SparkSession:
```scala
spark.stop()
```
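As a quick sanity check, you can also read the table back through the same JDBC connection (run this before the `spark.stop()` call above) and confirm that the 10 rows were written; a minimal check reusing the url, table, and properties defined earlier:
```scala
// Read the table back and count the rows that were just inserted.
val check = spark.read.jdbc(url, table, properties)
println(s"Rows in $table: ${check.count()}")  // expected: 10
check.show()
```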
Hope this answer helps.