Java Network Programming in Depth: Comparing BIO, NIO, and AIO

Resource summary: This document is a set of reading notes on Java network programming, covering the three main I/O models in Java: BIO (blocking I/O), NIO (non-blocking I/O), and AIO (asynchronous I/O). The three models represent different technical approaches and design philosophies in Java network programming.

First, BIO (Blocking I/O) is the traditional model that early versions of Java relied on. In BIO, when a connection performs a read or write and the data is not yet available, the calling thread blocks until it arrives. The model is simple to implement, but its drawback is equally clear: under high concurrency it requires a thread per connection, consuming large amounts of thread resources and creating a performance bottleneck.

Next, NIO (Non-blocking I/O) was introduced in Java 1.4. Its core abstractions are the buffer (Buffer), the channel (Channel), and the selector (Selector), which together enable multiplexed I/O. With NIO, a single thread can handle many connections: the selector monitors the state of multiple channels, and as soon as a channel has data ready to read or write, it can be processed immediately. This greatly improves I/O throughput and is particularly well suited to applications with many connections, each of which interacts infrequently.

Finally, AIO (Asynchronous I/O, also known as NIO.2) was introduced in Java 7. AIO operations are fully asynchronous: once an operation has been initiated, the caller continues with other work rather than blocking on I/O. When the read or write completes, the caller is notified through a CompletionHandler callback, or can poll the returned Future. AIO lets developers make full use of system resources for efficient concurrent processing, and fits scenarios that demand low I/O latency and can be structured around callbacks.

These notes are more than a quick review of Java network programming: they provide a comparative analysis that helps developers choose the most suitable I/O model for a given business scenario and its performance requirements. The document may include case studies of BIO, NIO, and AIO in real applications, concrete advice on applying the models in projects, and a discussion of the underlying implementation of Java's I/O models, such as how Java wraps I/O streams, how buffers work, and how the selector is implemented internally.

In short, these reading notes offer a comprehensive reference for Java network programming: the key implementation details of BIO, NIO, and AIO, how to put the JDK's networking APIs into practice, and how to choose the most appropriate I/O model for each application scenario. They serve both as learning material and as a technical reference during project development, since they cover both the theory and the practical application of Java's I/O models.
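The thread-per-connection BIO model described above can be sketched with `ServerSocket` and `Socket`. This is a minimal sketch, not code from the notes; the class and method names (`BioEchoServer`, `echoOnce`) and the use of an ephemeral local port are illustrative assumptions:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch of the BIO (thread-per-connection) model: accept() and
// readLine() both block, so every connection ties up a thread.
public class BioEchoServer {
    // Start an echo server, send one line as a client, return the reply.
    static String echoOnce(String msg) throws IOException {
        ServerSocket server = new ServerSocket(0); // bind an ephemeral port
        int port = server.getLocalPort();

        new Thread(() -> {
            try (ServerSocket s = server;
                 Socket c = s.accept(); // blocks until a client connects
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) { // blocks until data arrives
                    out.println("echo: " + line);
                }
            } catch (IOException ignored) {
            }
        }).start();

        try (Socket sock = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            out.println(msg);
            return in.readLine(); // blocks until the echo comes back
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello")); // prints "echo: hello"
    }
}
```

Under high concurrency this design needs one thread per open connection, which is exactly the bottleneck the notes attribute to BIO.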
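The selector-based multiplexing described for NIO can be sketched as a single-threaded event loop. Again the names are illustrative; this sketch serves one echo round-trip and deliberately omits production concerns such as partial writes:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal sketch of the NIO model: one thread, one Selector, many channels.
public class NioEchoServer {
    static String echoOnce(String msg) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Event loop: the selector reports which channels are ready, so a
        // single thread can service every connection without blocking.
        new Thread(() -> {
            try {
                while (selector.isOpen()) {
                    selector.select(200);
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel c = server.accept();
                            c.configureBlocking(false);
                            c.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel c = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(256);
                            int n = c.read(buf); // non-blocking read
                            if (n > 0) {
                                buf.flip();
                                c.write(buf); // echo back (tiny payload, one write)
                            } else if (n < 0) {
                                key.cancel();
                                c.close();
                            }
                        }
                    }
                }
            } catch (IOException | ClosedSelectorException ignored) {
            }
        }).start();

        // Plain blocking client for the demo round-trip.
        try (Socket sock = new Socket("localhost", port)) {
            sock.getOutputStream().write(msg.getBytes());
            byte[] buf = new byte[256];
            int n = sock.getInputStream().read(buf);
            return new String(buf, 0, n);
        } finally {
            selector.close(); // stops the event loop
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello"));
    }
}
```

This illustrates why NIO suits many-connections, low-interaction workloads: the thread count no longer scales with the connection count.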
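The fully asynchronous AIO model can be sketched with `AsynchronousServerSocketChannel` and a `CompletionHandler` on the server side, while the client side shows the alternative Future-polling style. A minimal sketch with illustrative names, not the notes' own code:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the AIO model: calls return immediately and the JDK
// invokes CompletionHandler callbacks when the I/O actually finishes.
public class AioEchoServer {
    static String echoOnce(String msg) throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
                .open().bind(new InetSocketAddress(0)); // ephemeral port
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // accept() returns immediately; the handler runs on a pool thread
        // once a connection arrives.
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel conn, Void att) {
                ByteBuffer buf = ByteBuffer.allocate(256);
                conn.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override
                    public void completed(Integer n, ByteBuffer b) {
                        b.flip();
                        conn.write(b); // echo back; the returned Future is ignored
                    }
                    @Override
                    public void failed(Throwable exc, ByteBuffer b) { }
                });
            }
            @Override
            public void failed(Throwable exc, Void att) { }
        });

        // Client using the Future style: connect/write/read all return Futures,
        // so the caller decides when (or whether) to wait.
        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("localhost", port)).get();
        client.write(ByteBuffer.wrap(msg.getBytes())).get();
        ByteBuffer reply = ByteBuffer.allocate(256);
        int n = client.read(reply).get(5, TimeUnit.SECONDS);
        client.close();
        server.close();
        return new String(reply.array(), 0, n);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello"));
    }
}
```

This matches the notes' characterization of AIO: the initiating thread is never blocked while the I/O is in flight, and completion is delivered either via the `CompletionHandler` callback or by waiting on the returned `Future`.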
