Mastering Hadoop Administration: Tuning and Security

"Expert Hadoop Administration - 管理、调优与安全" 本书《Expert Hadoop Administration》是Sam R. Alapati所著的一本关于Hadoop集群管理的权威指南,面向那些需要在各种环境下创建、配置、保护、管理和优化生产级Hadoop集群的专业人士。作者基于其在大规模Hadoop管理中的丰富经验,提供了实用的建议和深入的技术解释,涵盖了广泛的主题,并提供了一系列实际案例。 在书中,Alapati揭示了复杂的Hadoop环境,帮助读者理解在集群管理背后发生的具体过程。通过从零开始构建集群并配置高可用性、性能、安全、加密等关键属性,读者将获得前所未有的洞察力。无论你使用哪种Hadoop发行版或运行何种Hadoop应用程序,书中所教授的高价值管理技巧都将不可或缺。 本书的主要知识点包括: 1. **Hadoop架构与管理员视角**:深入理解Hadoop架构,学习如何从管理员的角度出发理解Hadoop的工作原理。 2. **集群创建与配置**:构建简单和完全分布式集群的步骤和策略,包括规划和创建过程。 3. **MapReduce和Spark应用**:在Hadoop集群中运行MapReduce和Spark应用,掌握这些框架的使用。 4. **数据管理和高可用性**:了解NameNode的角色,学习HDFS命令、权限和存储管理,以及如何保护数据和实现高可用性。 5. **数据移动与YARN资源分配**:学习如何将数据导入和导出Hadoop,以及使用YARN来分配资源和调度作业。 6. **Oozie和Hue工作流管理**:通过Oozie和Hue管理作业流程,提高工作效率。 7. **Hadoop安全**:实施Hadoop的安全措施,保护集群不受攻击。 8. **监控、优化与故障排查**:监控集群性能,优化资源,调试Hadoop问题,以及如何进行基准测试。 此外,本书还覆盖了Hadoop的其他关键组件,如Hive和Pig,以及如何在YARN上配置和优化Apache Spark。通过这一系列章节,读者将能够掌握全面的Hadoop管理技能,无论是在实际操作还是解决问题上,都能游刃有余。 总结来说,《Expert Hadoop Administration》是Hadoop管理员和数据专业人士的理想参考资料,它不仅提供了理论知识,还包含了大量实践指导,旨在帮助读者成为真正的Hadoop专家。

ALTER DATABASE test1 RENAME TO test2;

NoViableAltException(254@[])
    at org.apache.hadoop.hive.ql.parse.HiveParser.alterDatabaseStatementSuffix(HiveParser.java:9043)
    at org.apache.hadoop.hive.ql.parse.HiveParser.alterStatement(HiveParser.java:7647)
    at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:4337)
    at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2494)
    at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1420)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220)
    at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74)
    at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:67)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:616)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
FAILED: ParseException line 1:21 cannot recognize input near 'test1' 'RENAME' 'TO' in alter database statement
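The ParseException above is expected rather than a bug: Hive's grammar has no `ALTER DATABASE ... RENAME TO` clause, so a database cannot be renamed directly. A common workaround is to create the target database, move the tables into it, and drop the old database. The following is a minimal sketch of that workaround over JDBC; the HiveServer2 URL and user are placeholders, and it assumes a Hive version where cross-database `ALTER TABLE ... RENAME TO` is supported (on older versions the tables may have to be copied instead, for example with `CREATE TABLE ... AS SELECT`).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class RenameHiveDatabase {
    public static void main(String[] args) throws Exception {
        // Assumed HiveServer2 endpoint and user; requires the hive-jdbc driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hiveserver2:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {

            // "Rename" test1 to test2 by creating the new database first.
            stmt.execute("CREATE DATABASE IF NOT EXISTS test2");

            // Collect the table names up front so the ResultSet is not invalidated
            // by the ALTER TABLE statements issued on the same Statement afterwards.
            List<String> tables = new ArrayList<>();
            try (ResultSet rs = stmt.executeQuery("SHOW TABLES IN test1")) {
                while (rs.next()) {
                    tables.add(rs.getString(1));
                }
            }

            // Move each table into the new database (assumes cross-database rename support).
            for (String table : tables) {
                stmt.execute("ALTER TABLE test1." + table + " RENAME TO test2." + table);
            }

            // Drop the now-empty source database.
            stmt.execute("DROP DATABASE test1");
        }
    }
}
```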


    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy29.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:563)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy30.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3014)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2984)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:751)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:674)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586)


A Java program operating on HDFS throws the following error. How can it be resolved? The error is:

Exception in thread "main" java.io.IOException: (null) entry in command string: null chmod 0700 I:\tmp\hadoop-22215\mapred\staging\222151620622033\.staging
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:491)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at com.sl.maxTemperature.main(maxTemperature.java:41)
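The trace shows the MapReduce client creating its local job staging directory under I:\tmp on Windows; "(null) entry in command string: null chmod" means Hadoop's Shell class could not locate winutils.exe, so it built the chmod command with a null executable. The usual fix is to install a winutils.exe matching the Hadoop version and point hadoop.home.dir (or the HADOOP_HOME environment variable) at the directory containing bin\winutils.exe before any Hadoop class is loaded. A minimal sketch under those assumptions (the C:\hadoop path and the NameNode address are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WindowsHdfsClient {
    public static void main(String[] args) throws Exception {
        // Must be set before the first Hadoop client class (FileSystem, Job, ...) is used,
        // because Shell resolves winutils.exe once in a static initializer.
        // C:\hadoop is assumed to contain bin\winutils.exe matching your Hadoop version.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // assumed NameNode address

        try (FileSystem fs = FileSystem.get(conf)) {
            fs.mkdirs(new Path("/tmp/example"));
            System.out.println("mkdirs on HDFS succeeded");
        }
    }
}
```

The same property (or HADOOP_HOME) also fixes MapReduce job submission such as the maxTemperature job in the trace, since the staging directory is created through the same local-filesystem code path.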


[2023-07-04 17:11:29.952] Exception when trying to cleanup container container_e10_1661450914423_18596_01_000003:
java.io.IOException: Problem signalling container 97181 with SIGTERM; output: null and exitCode: -1
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.signalContainer(LinuxContainerExecutor.java:750)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sendSignal(ContainerLaunch.java:908)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.signalProcess(ContainerLaunch.java:922)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java:774)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:173)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:62)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:221)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:143)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException: Signal container failed
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.signalContainer(DefaultLinuxContainerRuntime.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.signalContainer(DelegatingLinuxContainerRuntime.java:159)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.signalContainer(LinuxContainerExecutor.java:739)
    ... 8 more
| org.apache.flink.yarn.YarnResourceManager (ResourceManager.java:822)


INFO [upload-pool-47] c.e.d.j.DataUnitService.DataUnitService#tohiveWy[DataUnitService.java:172] /u01/tarsftp//2023070719592612007140001.txt.gz decompressed >>>>>> /u01/untarsftp/
2023-07-07 20:11:54,787 WARN [Thread-4655234] o.a.h.h.DFSClient.DFSOutputStream$DataStreamer#run[DFSOutputStream.java:558] DataStreamer Exception
org.apache.hadoop.ipc.RemoteException: File /dataunit/cu_access_log/10/2023070719592612007140001.txt could only be written to 0 of the 1 minReplication nodes. There are 11 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2706)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
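"could only be written to 0 of the 1 minReplication nodes" while datanodes are reported as running usually means the NameNode could not pick any datanode for the new block; common causes are datanode disks being full (or below the reserved threshold), the writing client being unable to reach the datanodes' data-transfer ports, or datanodes being overloaded. A small diagnostic sketch, assuming HDFS client access to the cluster (the fs.defaultFS value is a placeholder), that lists each live datanode with its remaining capacity so the disk-space cause can be checked first:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeCapacityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // assumed NameNode address

        try (FileSystem fs = FileSystem.get(conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            long gb = 1024L * 1024L * 1024L;
            // Print every live datanode with its remaining and total capacity.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.printf("%s (%s): %d GB remaining of %d GB%n",
                        dn.getHostName(), dn.getXferAddr(),
                        dn.getRemaining() / gb, dn.getCapacity() / gb);
            }
        }
    }
}
```

The same information is available from the command line via `hdfs dfsadmin -report`; if capacity looks healthy, the next thing to verify is network connectivity from the writing host to each datanode's data-transfer port (50010 by default on Hadoop 2, 9866 on Hadoop 3).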
