Long.parseLong(output.get(6))/(1024*1024): keep the result to two significant figures
You can use Java's `DecimalFormat` class to format the result. Note that the original expression performs integer division, so divide by a floating-point value first to keep the fractional part:
```
double result = Long.parseLong(output.get(6)) / (1024.0 * 1024.0);
DecimalFormat decimalFormat = new DecimalFormat("#0.00");
String formattedResult = decimalFormat.format(result);
```
Here `#0.00` is a pattern that keeps two digits after the decimal point (that is, two decimal places, which is usually what is meant, rather than two significant figures). The `format` method returns the formatted value as the string `formattedResult`.
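If two significant figures are literally what is required, `BigDecimal` with a `MathContext` can round to a number of significant digits instead. A minimal sketch, assuming `result` holds the value computed above:
```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

// Round to two significant digits, e.g. 123.456 -> 120, 0.012345 -> 0.012
BigDecimal rounded = new BigDecimal(result, new MathContext(2, RoundingMode.HALF_UP));
String formatted = rounded.toPlainString();
```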
Related questions
Read data from Kafka with Flink and save the offsets to MySQL
You can read data from Kafka with Flink's Kafka consumer and write the offsets to MySQL with Flink's `JDBCOutputFormat`. Here is a simple example:
```
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.types.Row;

import java.util.Properties;

public class FlinkKafkaToMysql {

    public static void main(String[] args) throws Exception {
        // Read job parameters
        final ParameterTool parameterTool = ParameterTool.fromArgs(args);

        // Set up the execution environment with exactly-once checkpointing
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);

        // Configure the Kafka consumer
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", parameterTool.get("bootstrap.servers"));
        properties.setProperty("group.id", parameterTool.get("group.id"));
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>(parameterTool.get("input.topic"), new SimpleStringSchema(), properties);

        // Configure the JDBC output format that writes offsets to MySQL
        final JDBCOutputFormat jdbcOutputFormat = JDBCOutputFormat.buildJDBCOutputFormat()
                .setDrivername(parameterTool.get("jdbc.driver"))
                .setDBUrl(parameterTool.get("jdbc.url"))
                .setUsername(parameterTool.get("jdbc.username"))
                .setPassword(parameterTool.get("jdbc.password"))
                .setQuery("INSERT INTO " + parameterTool.get("mysql.table")
                        + " (`topic`, `partition`, `offset`) VALUES (?, ?, ?)")
                .setSqlTypes(new int[] {java.sql.Types.VARCHAR, java.sql.Types.INTEGER, java.sql.Types.BIGINT})
                .finish();

        // Read from Kafka, parse "topic,partition,offset" records, and write them to MySQL
        DataStream<String> inputStream = env.addSource(consumer);
        inputStream.map(new MapFunction<String, Row>() {
            @Override
            public Row map(String value) throws Exception {
                String[] parts = value.split(",");
                return Row.of(parts[0], Integer.parseInt(parts[1]), Long.parseLong(parts[2]));
            }
        }).addSink(new RichSinkFunction<Row>() {
            @Override
            public void open(Configuration parameters) throws Exception {
                // The output format must be opened before writeRecord() can be called
                jdbcOutputFormat.open(getRuntimeContext().getIndexOfThisSubtask(),
                        getRuntimeContext().getNumberOfParallelSubtasks());
            }

            @Override
            public void invoke(Row value, Context context) throws Exception {
                jdbcOutputFormat.writeRecord(value);
            }

            @Override
            public void close() throws Exception {
                jdbcOutputFormat.close();
            }
        });

        // Execute the job
        env.execute("Flink Kafka to MySQL");
    }
}
```
In the code above, we first read the job parameters with `ParameterTool`, then set up the Flink execution environment and the Kafka consumer. Next, we build a `JDBCOutputFormat` that writes rows to MySQL. Finally, `DataStream.map()` converts each Kafka record into a `Row`, and a `RichSinkFunction` wrapping the `JDBCOutputFormat` writes the offsets to MySQL.
Note that `JDBCOutputFormat.writeRecord()` does not manage the database connection by itself: the format has to be opened before the first write and closed afterwards, which is why the sink calls `jdbcOutputFormat.open()` and `jdbcOutputFormat.close()` in its lifecycle methods.
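The example assumes each Kafka message already carries `topic,partition,offset` as a CSV payload. If you want the actual consumer offsets instead, one option (not part of the original answer) is a `KafkaDeserializationSchema` that reads the metadata from each `ConsumerRecord`; the class name `OffsetMetadataSchema` below is just an illustrative choice:
```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Emits "topic,partition,offset" for every consumed record, so the
// map/sink chain from the example above can stay unchanged.
public class OffsetMetadataSchema implements KafkaDeserializationSchema<String> {
    @Override
    public boolean isEndOfStream(String nextElement) {
        return false;
    }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return record.topic() + "," + record.partition() + "," + record.offset();
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
```
It can then be plugged in via `new FlinkKafkaConsumer<>(parameterTool.get("input.topic"), new OffsetMetadataSchema(), properties)`.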
This project contains some timestamp data. Write a MapReduce program that sorts the timestamps in ascending order and converts them to the corresponding date/time.
This is a very common MapReduce task, and we can rely on the framework's built-in sorting to do it. Below is an example program, assuming each line of the input file is a single Unix timestamp (in seconds):
```java
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;
import org.apache.hadoop.util.GenericOptionsParser;

public class TimestampSort {

    // Emits each timestamp as the map output key so the framework sorts by it.
    public static class TimestampMapper extends Mapper<LongWritable, Text, LongWritable, LongWritable> {
        private final LongWritable timestamp = new LongWritable();
        private static final LongWritable ONE = new LongWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            long ts = Long.parseLong(value.toString().trim());
            timestamp.set(ts);
            context.write(timestamp, ONE);
        }
    }

    // Receives the timestamps in ascending key order and converts them to date strings.
    public static class TimestampReducer extends Reducer<LongWritable, LongWritable, Text, Text> {
        private final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        @Override
        public void reduce(LongWritable key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            // Input timestamps are in seconds; Date expects milliseconds.
            String date = sdf.format(new Date(key.get() * 1000L));
            // Write one line per occurrence so duplicate timestamps are preserved.
            for (LongWritable ignored : values) {
                context.write(new Text(date), new Text(""));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: timestamps <input> <output>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "Timestamp Sort");
        job.setJarByClass(TimestampSort.class);
        job.setMapperClass(TimestampMapper.class);
        job.setReducerClass(TimestampReducer.class);
        job.setPartitionerClass(HashPartitioner.class);
        // A single reducer guarantees one globally sorted output file.
        job.setNumReduceTasks(1);
        // Map output types differ from the final output types, so declare both.
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        TextInputFormat.addInputPath(job, new Path(otherArgs[0]));
        TextOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
The main idea is to use each timestamp as the map output key and let the MapReduce framework sort the keys in ascending order. The reducer then converts each timestamp to its date/time representation and writes it to the output. Because the job uses a single reducer, all timestamps end up in one task, which is what guarantees a single globally sorted output file; for larger inputs this trades parallelism for simplicity.
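As a quick sanity check of the conversion step in isolation (a standalone snippet, not part of the job above; the timestamp value is only an illustration), note that a Unix timestamp in seconds must be multiplied by 1000 before being passed to `Date`:
```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampDemo {
    public static void main(String[] args) {
        long ts = 1709344397L; // example Unix timestamp in seconds
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        // Prints the timestamp as a date/time string in the local time zone
        System.out.println(sdf.format(new Date(ts * 1000L)));
    }
}
```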