Extract the incremental data of user_info from the shtd_store database into the table user_info in the Hudi database ods_ds_hudi. Use operate_time or create_time in ods_ds_hudi.user_info as the incremental field (i.e., for each MySQL row take the later of the two times and compare it against the later of the same two fields already in ODS), and load only the newly added rows, keeping field names and types unchanged. Also add a partition: if operate_time is null, fill it with create_time; the partition field is etl_date, of type String, whose value is the day before the competition day (formatted as yyyyMMdd). Use id as the primaryKey and operate_time as the preCombineField. Finally, run show partitions ods_ds_hudi.user_info in spark-shell.
Below is the code to extract the incremental data into Hudi. The JDBC connection settings, the HDFS path, and the choice of the COPY_ON_WRITE table type with Hive sync enabled are assumptions about the competition environment and may need to be adjusted:
```scala
import org.apache.spark.sql.functions._

// Read the full user_info table from MySQL (connection settings are environment-specific)
val jdbcUrl = "jdbc:mysql://localhost:3306/shtd_store?useSSL=false&serverTimezone=UTC"
val dbProperties = new java.util.Properties()
dbProperties.setProperty("user", "root")
dbProperties.setProperty("password", "root")
val user_df = spark.read.jdbc(jdbcUrl, "user_info", dbProperties)

// Partition value: the day before the competition day, formatted as yyyyMMdd
val etl_date = java.time.LocalDate.now.minusDays(1).format(java.time.format.DateTimeFormatter.BASIC_ISO_DATE)

// Target Hudi table path (assumed HDFS location)
val hudiPath = "hdfs://localhost:9000/user/hive/warehouse/ods_ds_hudi.db/user_info"

// Incremental threshold: max of greatest(operate_time, create_time) already stored in ods_ds_hudi.user_info.
// If the ODS table does not exist yet (first load) or is empty, fall back to 0 so every row is loaded.
val maxTs: Long = try {
  spark.read.format("org.apache.hudi").load(hudiPath)
    .select(greatest(
      unix_timestamp(col("operate_time"), "yyyy-MM-dd HH:mm:ss"),
      unix_timestamp(col("create_time"), "yyyy-MM-dd HH:mm:ss")).as("ts"))
    .agg(max("ts")).collect()(0).getLong(0)
} catch { case _: Exception => 0L }

// Fill operate_time with create_time when it is null, keep only rows newer than the ODS-side
// threshold, keep all source columns unchanged, and add the etl_date partition column.
val hudi_df = user_df
  .withColumn("operate_time", coalesce(col("operate_time"), col("create_time")))
  .withColumn("increment_ts", greatest(
    unix_timestamp(col("operate_time"), "yyyy-MM-dd HH:mm:ss"),
    unix_timestamp(col("create_time"), "yyyy-MM-dd HH:mm:ss")))
  .filter(col("increment_ts") > maxTs)
  .drop("increment_ts")
  .withColumn("etl_date", lit(etl_date))
  .repartition(2)

// Write options: id is the primaryKey, operate_time the preCombineField, etl_date the partition path.
// Hive sync registers the partitions in the metastore so SHOW PARTITIONS works; depending on the
// environment you may also need to set hoodie.datasource.hive_sync.mode and the metastore URIs.
val hudi_options = Map[String, String](
  "hoodie.table.name" -> "user_info",
  "hoodie.datasource.write.recordkey.field" -> "id",
  "hoodie.datasource.write.precombine.field" -> "operate_time",
  "hoodie.datasource.write.partitionpath.field" -> "etl_date",
  "hoodie.datasource.write.operation" -> "upsert",
  "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE",
  "hoodie.datasource.write.hive_style_partitioning" -> "true",
  "hoodie.upsert.shuffle.parallelism" -> "2",
  "hoodie.insert.shuffle.parallelism" -> "2",
  "hoodie.datasource.hive_sync.enable" -> "true",
  "hoodie.datasource.hive_sync.database" -> "ods_ds_hudi",
  "hoodie.datasource.hive_sync.table" -> "user_info",
  "hoodie.datasource.hive_sync.partition_fields" -> "etl_date",
  "hoodie.datasource.hive_sync.partition_extractor_class" -> "org.apache.hudi.hive.MultiPartKeysValueExtractor"
)

hudi_df.write
  .format("org.apache.hudi")
  .options(hudi_options)
  .mode("append")
  .save(hudiPath)
```
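As an optional sanity check (not required by the task), the freshly written data can be read back from the same assumed HDFS path and counted per partition:
```scala
// Read the Hudi table back from the (assumed) path and count rows per etl_date partition
val check = spark.read.format("org.apache.hudi")
  .load("hdfs://localhost:9000/user/hive/warehouse/ods_ds_hudi.db/user_info")
check.groupBy("etl_date").count().show(false)
```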
After the job finishes, run `show partitions ods_ds_hudi.user_info` in spark-shell (or in Hive) to verify that the new etl_date partition is visible.
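For example, assuming the Hive sync above succeeded, the check required by the task can be issued directly from spark-shell:
```scala
// Verify the partition list from spark-shell via the Hive metastore
spark.sql("show partitions ods_ds_hudi.user_info").show(false)
```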