Spring Boot: syncing MySQL data to Elasticsearch with easy-es-boot-starter, Canal, and Kafka, with an endpoint for manually refreshing the index
Date: 2023-07-15 10:12:37
Implementation steps:
1. Add dependencies
Add the easy-es-boot-starter, Canal client, and Kafka dependencies to pom.xml:
```xml
<!-- easy-es Spring Boot starter (official groupId: cn.easy-es) -->
<dependency>
    <groupId>cn.easy-es</groupId>
    <artifactId>easy-es-boot-starter</artifactId>
    <version>${easy-es-boot-starter.version}</version>
</dependency>
<!-- Canal client for consuming MySQL binlog events -->
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>${canal.version}</version>
</dependency>
<!-- Spring for Apache Kafka (KafkaTemplate, @KafkaListener) -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>${spring-kafka.version}</version>
</dependency>
```
2. Configure Canal
Configure the Canal connection in application.yml:
```yaml
canal:
  host: 127.0.0.1
  port: 11111
  destination: example
  username: canal
  password: canal
```
3. Configure Kafka
Configure Kafka in application.yml:
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: group1
      auto-offset-reset: earliest
      enable-auto-commit: true
```
4. Create a data-sync listener
Create a listener that reacts to Canal change events and publishes the changed rows to Kafka:
```java
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.protobuf.InvalidProtocolBufferException;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.Map;

@Component
public class DataSyncListener implements CanalEventListener { // CanalEventListener comes from the Canal starter in use

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final ObjectMapper objectMapper;

    public DataSyncListener(KafkaTemplate<String, String> kafkaTemplate, ObjectMapper objectMapper) {
        this.kafkaTemplate = kafkaTemplate;
        this.objectMapper = objectMapper;
    }

    @Override
    public void onEvent(CanalEntry.Entry entry) {
        // Only row-level changes carry data worth syncing.
        if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) {
            return;
        }
        CanalEntry.RowChange rowChange;
        try {
            rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
        } catch (InvalidProtocolBufferException e) {
            throw new RuntimeException("parse event has an error, data: " + entry, e);
        }
        EventType eventType = rowChange.getEventType();
        // Queries and heartbeats do not modify rows; skip them.
        if (eventType == EventType.QUERY || eventType == EventType.HEARTBEAT) {
            return;
        }
        for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
            String tableName = entry.getHeader().getTableName();
            // Flatten the row's after-image into a column -> value map.
            Map<String, Object> data = new HashMap<>();
            for (CanalEntry.Column column : rowData.getAfterColumnsList()) {
                data.put(column.getName(), column.getValue());
            }
            String json;
            try {
                json = objectMapper.writeValueAsString(data);
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
            // One Kafka topic per table keeps consumers simple.
            kafkaTemplate.send(tableName, json);
        }
    }
}
```
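The core transformation inside the listener, flattening a row's after-image columns into a JSON document, can be sketched in isolation. The `RowJson` class and its `rowToJson` helper below are hypothetical names using only the JDK (no Canal or Jackson classes), so the logic can be tried without a running binlog pipeline:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RowJson {

    // Builds a flat JSON object from column name/value pairs,
    // mirroring what the listener hands to KafkaTemplate.
    static String rowToJson(Map<String, String> columns) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : columns.entrySet()) {
            if (!first) sb.append(',');
            first = false;
            sb.append('"').append(escape(e.getKey())).append("\":\"")
              .append(escape(e.getValue())).append('"');
        }
        return sb.append('}').toString();
    }

    // Minimal JSON string escaping (backslashes and quotes only).
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("id", "1");
        row.put("name", "alice");
        System.out.println(rowToJson(row)); // {"id":"1","name":"alice"}
    }
}
```

In the real listener, Jackson's ObjectMapper does this serialization (with full escaping and type handling); this sketch only makes the shape of the payload concrete.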
5. Create an index-refresh endpoint
Create a controller that exposes the manual refresh as an HTTP endpoint:
```java
@RestController
@RequestMapping("/index")
public class IndexController {

    private final ElasticSearchService elasticSearchService;

    public IndexController(ElasticSearchService elasticSearchService) {
        this.elasticSearchService = elasticSearchService;
    }

    // GET /index/refresh triggers a manual refresh of the index.
    @GetMapping("/refresh")
    public void refresh() {
        elasticSearchService.refreshIndex();
    }
}
```
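As a usage sketch, the endpoint can be called with any HTTP client. Below the request is only constructed, not sent, so it runs without the application; the `localhost:8080` base URL is an assumption, not something the article specifies:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RefreshCall {

    // Builds the GET request for the manual refresh endpoint.
    static HttpRequest refreshRequest(String baseUrl) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/index/refresh"))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = refreshRequest("http://localhost:8080");
        System.out.println(req.method() + " " + req.uri()); // GET http://localhost:8080/index/refresh
    }
}
```

To actually fire it, pass the request to `java.net.http.HttpClient.newHttpClient().send(...)` against a running instance.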
6. Implement the index refresh
Implement an ElasticSearchService that refreshes the index:
```java
@Service
public class ElasticSearchService {

    private final RestHighLevelClient restHighLevelClient;
    private final ElasticsearchProperties elasticsearchProperties;

    public ElasticSearchService(RestHighLevelClient restHighLevelClient,
                                ElasticsearchProperties elasticsearchProperties) {
        this.restHighLevelClient = restHighLevelClient;
        this.elasticsearchProperties = elasticsearchProperties;
    }

    // Forces a refresh so documents written since the last automatic
    // refresh become visible to search immediately.
    public void refreshIndex() {
        RefreshRequest request = new RefreshRequest(elasticsearchProperties.getIndex());
        try {
            restHighLevelClient.indices().refresh(request, RequestOptions.DEFAULT);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```
This gives you a manual index-refresh endpoint, while Canal watches MySQL for data changes and publishes them to Kafka; a Kafka consumer (not shown here) then reads those messages and writes the changes into the Elasticsearch index.
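One design point the pipeline leaves implicit: since the listener publishes to a topic named after each MySQL table, the consumer side needs a deterministic table-to-index mapping. A minimal sketch, where the `es_` prefix and lower-casing are assumptions of this example rather than anything the article prescribes:

```java
import java.util.Locale;

public class IndexNames {

    // Maps a Kafka topic (= MySQL table name) to an Elasticsearch index name.
    // Index names must be lowercase; the prefix keeps them namespaced.
    static String indexNameFor(String tableName) {
        return "es_" + tableName.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(indexNameFor("UserOrder")); // es_userorder
    }
}
```

Whatever convention is chosen, the same function must be used when creating the index and when the consumer writes documents, so the two sides never diverge.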