I have a 4-node CentOS Hadoop cluster with Cloudera Manager 5.5.1 installed.
I am unable to start the HBase Master; it fails with:
FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=hbase, access=WRITE,
inode="/":hdfs:supergroup:drwxr-xr-x
Any thoughts?
Thanks
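The log shows that the hbase user cannot write to the HDFS root (/), which typically means HBase's root directory does not exist yet. A minimal sketch of the usual fix, assuming the default hbase.rootdir of /hbase and that hdfs is the HDFS superuser:
# create HBase's root directory as the HDFS superuser and hand it to the hbase user
sudo -u hdfs hadoop fs -mkdir /hbase
sudo -u hdfs hadoop fs -chown hbase:hbase /hbase
After that, retry starting the HBase Master from Cloudera Manager.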
Related
I was using RabbitMQ on Windows about 6 months ago; now I am reinstalling RabbitMQ and Erlang, but I get this error:
C:\Program Files\new_RabbitMQ Server\rabbitmq_server-3.8.12\sbin>rabbitmq-server
Configuring logger redirection
12:49:39.059 [warning] Using RABBITMQ_ADVANCED_CONFIG_FILE: c:/Users/ALFA RAYAN/AppData/Roaming/RabbitMQ/advanced.config
12:49:39.810 [error]
BOOT FAILED
12:49:39.810 [error] BOOT FAILED
===========
12:49:39.810 [error] ===========
ERROR: distribution port 25672 in use by another node: rabbit#Ali
12:49:39.810 [error] ERROR: distribution port 25672 in use by another node: rabbit#Ali
12:49:39.810 [error]
12:49:40.826 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","Ali"} in context start_error
12:49:40.826 [error] CRASH REPORT Process <0.150.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","Ali"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"Ali\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","Ali"}}},{rabbit_prelau
Crash dump is being written to: erl_crash.dump...done
I am using otp_win64_23.2 and rabbitmq-server-3.8.12.
Go to Windows Services and look for RabbitMQ. Right-click it and stop the service.
Then restart the RabbitMQ server from a prompt running in administrator mode. This worked for me.
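If stopping the service is not enough, here is a sketch for finding whatever still holds the distribution port (assuming port 25672 from the log above), run from an administrator cmd:
REM list the PID bound to the Erlang distribution port
netstat -ano | findstr :25672
REM force-kill that PID; <pid> is a placeholder for the number netstat printed
taskkill /PID <pid> /F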
I try to run the 'rabbitmq-server' command in cmd, but it gives me this error:
Configuring logger redirection
13:44:01.865 [warning] Using RABBITMQ_ADVANCED_CONFIG_FILE: c:/Users/saikat/AppData/Roaming/RabbitMQ/advanced.config
13:44:02.838 [error]
13:44:02.838 [error] BOOT FAILED
BOOT FAILED
13:44:02.838 [error] ===========
===========
13:44:02.838 [error] ERROR: distribution port 25672 in use by another node: rabbit#DESKTOP-1I7H1RC
ERROR: distribution port 25672 in use by another node: rabbit#DESKTOP-1I7H1RC
13:44:02.838 [error]
13:44:03.839 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","DESKTOP-1I7H1RC"} in context start_error
13:44:03.840 [error] CRASH REPORT Process <0.152.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","DESKTOP-1I7H1RC"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"DESKTOP-1I7H1RC\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","DESKTOP-1I7H1RC"}}},{r
Crash dump is being written to erl_crash.dump...done
No doubt you have already resolved this, but in case it helps anyone else: I had the same issue on Windows and managed to resolve it by doing the following.
Open PowerShell as admin in the rabbitmq_server-3.8.9\sbin directory
Stop the service by running: .\rabbitmq-service.bat stop
Start the service by running: .\rabbitmq-server.bat
If you are on Windows, go to Services.
Search for RabbitMQ and right-click it.
Stop the service.
Open cmd as administrator.
Run cd C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.17\sbin
and then run rabbitmq-server
For me, what worked was killing the erl process from Task Manager and then running the command:
rabbitmq-server.bat
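The same can be done from an administrator cmd instead of Task Manager; a sketch assuming the stray node is running as erl.exe:
REM force-kill the leftover Erlang VM, then start the server again
taskkill /IM erl.exe /F
rabbitmq-server.bat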
I'm trying to start my RabbitMQ server on Ubuntu 20.04 using the command sudo rabbitmq-server, but it crashes, and I have no clue what I should do.
RabbitMQ version: 3.8.5, Erlang version: 23
17:41:07.587 [error]
17:41:07.591 [error] BOOT FAILED
BOOT FAILED
17:41:07.592 [error] ===========
===========
17:41:07.592 [error] ERROR: distribution port 25672 in use by rabbit#nadaanbaalak
ERROR: distribution port 25672 in use by rabbit#nadaanbaalak
17:41:07.592 [error]
17:41:08.594 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","nadaanbaalak"} in context start_error
17:41:08.594 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","nadaanbaalak"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"nadaanbaalak\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","nadaanbaalak"}}},{rabb
Crash dump is being written to: erl_crash.dump...done
Any help would be great.
It's pretty simple; I just needed to look at the logs. I killed the process holding port 25672 and restarted the RabbitMQ server.
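For anyone who wants the concrete commands, a sketch for Ubuntu, assuming lsof is installed and the packaged systemd unit is in use:
# find the PID holding the distribution port
sudo lsof -i :25672
# kill it; <pid> is a placeholder for the number lsof printed
sudo kill <pid>
# restart the server
sudo systemctl restart rabbitmq-server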
I am setting up Apache Storm in distributed mode. My ZooKeeper ensemble is working fine, but I cannot start Storm nimbus at all.
I am following: http://chennaihug.org/knowledgebase/storm-multinode-installation/
ZooKeeper config file:
tickTime=2000
dataDir=/data/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=scarlet:2888:3888
server.2=plum:2888:3888
server.3=peacock:2888:3888
server.4=green:2888:3888
server.5=mustard:2888:3888
server.6=white:2888:3888
storm.yaml:
storm.zookeeper.servers:
- "scarlet"
- "plum"
- "green"
- "white"
- "mustard"
- "peacock"
nimbus.host: "scarlet"
storm.zookeeper.port: 2181
java.library.path: "/usr/lib/jvm/java-8-oracle"
storm.local.dir: "/app/storm"
I verified ZooKeeper by connecting with the client:
/opt/zookeeper-3.4.10/bin/zkCli.sh -server scarlet:2181,plum:2181,peacock:2181,green:2181,mustard:2181,white:2181
I checked the status of ZooKeeper: 5 followers and 1 leader, all working fine.
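For reference, the leader/follower report comes from the per-node status command rather than the client shell; a sketch assuming the same install path on each node:
/opt/zookeeper-3.4.10/bin/zkServer.sh status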
I start Apache Storm using:
bin/storm nimbus
which gives the error:
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -client
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Running: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Ddaemon.name=nimbus -Dstorm.options= -Dstorm.home=/opt/apache-storm-1.2.2 -Dstorm.log.dir= -Djava.library.path= -Dstorm.conf.file= -cp /opt/apache-storm-1.2.2/*:/opt/apache-storm-1.2.2/lib/*:/opt/apache-storm-1.2.2/extlib/*:/opt/apache-storm-1.2.2/extlib-daemon/*:/opt/apache-storm-1.2.2/conf -Dlogfile.name=nimbus.log -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -Dlog4j.configurationFile=/opt/apache-storm-1.2.2/cluster.xml org.apache.storm.daemon.nimbus
2019-01-14 17:20:21,591 main ERROR Unable to create file /nimbus.log java.io.IOException: Permission denied
It turned out the problem was the Java installation. I purged OpenJDK completely and re-installed it:
apt purge default-jdk default-jdk-headless openjdk-8-jdk openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless
apt install openjdk-8-jdk
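To confirm the reinstall changed the JVM actually being picked up, a quick check (the resolved path will vary by machine):
# print the active JVM version and the binary it resolves to
java -version
readlink -f "$(which java)"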
While setting up a single-node cluster without Cygwin on Windows 10, I followed this document: Link for Hadoop installation in Windows 10
I am facing the below error while starting HDFS using D:\hadoop-2.6.2.tar\hadoop-2.6.2\hadoop-2.6.2\sbin>start-dfs.cmd
Error message stack trace:
17/01/12 12:25:42 FATAL datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:620)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
17/01/12 12:25:42 INFO util.ExitUtil: Exiting with status 1
There is also this error message about starting the namenode:
17/01/12 12:25:43 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1022)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
17/01/12 12:25:43 INFO util.ExitUtil: Exiting with status 1
[Problem analysis] Permissions on the /data directory are insufficient, so the NameNode cannot start.
[Solution]
(1) As root, assign ownership of the /data directory to the hadoop user;
(2) Empty the /data directory;
(3) Reformat the NameNode and restart the Hadoop cluster.
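A sketch of those steps on a Linux node, assuming the hadoop user runs the cluster and /data is the configured storage directory; adjust paths for a Windows install like the one above:
# as root: give the hadoop user ownership of /data
chown -R hadoop:hadoop /data
# empty it; this destroys any existing HDFS metadata
rm -rf /data/*
# reformat the NameNode, then restart the cluster
hdfs namenode -format
start-dfs.sh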