Flink demo: setting Kafka offsets
Sure, here is a simple example of setting Kafka offsets in Flink:
```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class FlinkKafkaOffsetDemo {

    public static void main(String[] args) throws Exception {
        // Read the Kafka topic and other settings from the command-line arguments
        final ParameterTool parameterTool = ParameterTool.fromArgs(args);
        String topic = parameterTool.get("topic");
        String brokers = parameterTool.get("brokers");
        String groupId = parameterTool.get("group-id");
        String offsetReset = parameterTool.get("offset-reset", "latest");

        // Set up the Flink execution environment with exactly-once checkpointing
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);

        // Configure the Kafka consumer; (de)serialization is handled by the
        // DeserializationSchema passed to the connector, not by consumer properties
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        kafkaProps.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetReset);

        // Read from Kafka; setStartFromEarliest() overrides both the committed
        // group offsets and the auto.offset.reset setting
        FlinkKafkaConsumer<String> kafkaConsumer =
                new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), kafkaProps);
        kafkaConsumer.setStartFromEarliest();
        DataStream<String> input = env.addSource(kafkaConsumer);

        // Process the records (a pass-through here; replace with real business logic)
        DataStream<String> result = input.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                out.collect(value);
            }
        });

        // Configure the Kafka producer and write the results back to Kafka
        Properties producerProps = new Properties();
        producerProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        FlinkKafkaProducer<String> kafkaProducer =
                new FlinkKafkaProducer<>(topic, new SimpleStringSchema(), producerProps);
        result.addSink(kafkaProducer);

        // Execute the Flink job
        env.execute("Flink Kafka Offset Demo");
    }
}
```
In the example above, we configure the Kafka consumer through a FlinkKafkaConsumer and read data from Kafka. The position to start consuming from can be chosen by calling `setStartFromEarliest()` or `setStartFromLatest()`; other start-position options are sketched below.
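For reference, the legacy `FlinkKafkaConsumer` also supports a few other start positions. A minimal sketch, continuing from the `kafkaConsumer` and `topic` variables of the demo above (the partition numbers, offsets, and timestamp are made-up values for illustration):
```java
// Requires: import java.util.HashMap; import java.util.Map;
// import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

// Resume from the offsets committed for the consumer group (the connector's default)
kafkaConsumer.setStartFromGroupOffsets();

// Start from the first record whose timestamp is >= the given epoch-millis value
kafkaConsumer.setStartFromTimestamp(1690000000000L);

// Start each partition from an explicitly chosen offset
Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
specificOffsets.put(new KafkaTopicPartition(topic, 0), 23L);  // partition 0 -> offset 23
specificOffsets.put(new KafkaTopicPartition(topic, 1), 31L);  // partition 1 -> offset 31
kafkaConsumer.setStartFromSpecificOffsets(specificOffsets);
```
Each `setStartFrom*()` call overrides the previous one, so in a real job you would pick exactly one of these.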
The records read from Kafka go through our custom `flatMap()` function, and the processed results are then written back to Kafka. For the write path, the demo uses a FlinkKafkaProducer configured with the producer properties and a `SimpleStringSchema` for serialization.
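Note that `FlinkKafkaConsumer`/`FlinkKafkaProducer` are the legacy connector classes and are deprecated in recent Flink releases. As a hedged sketch of the newer write path (assuming Flink 1.14+ with the `flink-connector-kafka` dependency on the classpath), the same result stream could instead be written with the `KafkaSink` builder API:
```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// Build a KafkaSink that writes String records to the target topic;
// `brokers` and `topic` are the same parameters used in the demo above.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers(brokers)
        .setRecordSerializer(
                KafkaRecordSerializationSchema.builder()
                        .setTopic(topic)
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .build();

// Attach the sink to the processed stream (replaces result.addSink(kafkaProducer))
result.sinkTo(sink);
```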
In practice, the consumer's starting offset can be chosen to fit the concrete business scenario, which allows much more flexible processing (or reprocessing) of the data.
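A closely related setting is whether the consumer commits its offsets back to Kafka. A short sketch, again assuming the legacy connector and the `kafkaConsumer`/`kafkaProps` variables from the demo (the property values are illustrative, not recommendations):
```java
// With checkpointing enabled, commit the offsets tracked in Flink's checkpoints
// back to Kafka on checkpoint completion, so external tooling can see progress.
// Flink's restart guarantees rely on the offsets in its own checkpointed state,
// not on the offsets committed to Kafka.
kafkaConsumer.setCommitOffsetsOnCheckpoints(true);

// Without checkpointing, offset committing falls back to the Kafka client's
// periodic auto-commit, controlled by these consumer properties:
kafkaProps.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
kafkaProps.setProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
```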