When starting up GraphDB, the log reports a few warnings about an illegal reflective access operation by org.springframework.cglib.core.ReflectUtils in lib/spring-core-5.0.4.RELEASE.jar and then pauses for a while at:
[INFO ] 2018-11-19 17:02:34,109 [main | c.o.g.Config] Using 'file:/home/ubuntu/graphdb-free-8.7.2/conf/logback.xml' as logback's configuration file for graphdb
[INFO ] 2018-11-19 17:02:34,427 [main | c.o.g.s.GraphDB] Starting GraphDB in workbench mode.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.springframework.cglib.core.ReflectUtils$1 (file:/home/ubuntu/graphdb-free-8.7.2/lib/spring-core-5.0.4.RELEASE.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of org.springframework.cglib.core.ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] 2018-11-19 17:02:39,572 [main | c.o.g.Config] GraphDB Home directory: /home/ubuntu/graphdb-free-8.7.2
[INFO ] 2018-11-19 17:02:39,572 [main | c.o.g.Config] GraphDB Config directory: /home/ubuntu/graphdb-free-8.7.2/conf
[INFO ] 2018-11-19 17:02:39,573 [main | c.o.g.Config] GraphDB Data directory: /home/ubuntu/graphdb-free-8.7.2/data
[INFO ] 2018-11-19 17:02:39,573 [main | c.o.g.Config] GraphDB Work directory: /home/ubuntu/graphdb-free-8.7.2/work
[INFO ] 2018-11-19 17:02:39,573 [main | c.o.g.Config] GraphDB Logs directory: /home/ubuntu/graphdb-free-8.7.2/logs
After approximately 8-13 minutes, the log reports that session ID generation has finished and the server is deployed:
[WARN ] 2018-11-19 16:38:41,843 [main | o.a.c.u.SessionIdGeneratorBase] Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [784,201] milliseconds.
Running:
graphdb-free-8.7.2
Ubuntu 18.04.1 LTS
openjdk version "10.0.2" 2018-07-17, OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3), OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3, mixed mode)
Is it necessary for it to take so much time? Or, can this be turned off?
Thanks!
You can safely ignore the first warning message; it is caused by running the database on Java 9 or later, whose new module encapsulation system produces this warning. See What is an illegal reflective access?
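If you would rather silence that warning than ignore it, one option on Java 9+ (a sketch; the --add-opens flag is standard JDK, but using it through GraphDB's JAVA_OPTS_ARRAY mechanism from the next paragraph is an assumption) is to open the package cglib reflects into:
# Assumed workaround: open java.lang to unnamed modules so the reflective
# ClassLoader.defineClass call no longer triggers the warning
JAVA_OPTS_ARRAY+=("--add-opens=java.base/java.lang=ALL-UNNAMED")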
For reasons unknown, the SecureRandom instance that the Apache Tomcat 9.0.4 code base uses for session ID generation blocks while gathering entropy from /dev/random. As suggested in Slow startup on Tomcat 7.0.57 because of SecureRandom, you should start the database with ./graphdb -Djava.security.egd=file:/dev/./urandom, or simply add to $GDB_HOME/bin/graphdb.in.sh the line: JAVA_OPTS_ARRAY+=("-Djava.security.egd=file:/dev/./urandom").
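Spelled out as shell commands (paths assume the default GraphDB Free layout from the question, with $GDB_HOME pointing at /home/ubuntu/graphdb-free-8.7.2):
# One-off: pass the property on the command line
./graphdb -Djava.security.egd=file:/dev/./urandom
# Permanent: append the option to the startup include script
echo 'JAVA_OPTS_ARRAY+=("-Djava.security.egd=file:/dev/./urandom")' >> $GDB_HOME/bin/graphdb.in.sh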
Related
After I finished installing ambari-server with an httpd local repository and ran Confirm Hosts in the web UI, I got some errors as follows:
INFO 2018-05-27 15:39:16,776 NetUtil.py:70 - Connecting to https://master:8440/ca
ERROR 2018-05-27 15:39:16,787 NetUtil.py:96 - [Errno 8] _ssl.c:493: EOF occurred in violation of protocol
ERROR 2018-05-27 15:39:16,788 NetUtil.py:97 - SSLError: Failed to connect.Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-05-27 15:39:16,789 NetUtil.py:124 - Server at https://master:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-27 15:39:26,793 NetUtil.py:70 - Connecting to https://master:8440/ca
ERROR 2018-05-27 15:39:26,799 NetUtil.py:96 - [Errno 8] _ssl.c:493: EOF occurred in violation of protocol
ERROR 2018-05-27 15:39:26,799 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-05-27 15:39:26,801 NetUtil.py:124 - Server at https://master:8440 is not reachable, sleeping for 10 seconds...
My environment is as follows:
CentOS Linux release 7.5.1804 (Core)
Python2.7.5
Java1.8.0_171
OpenSSL1.0.2k
Ambari2.6.2.0
HDP-2.6.5.0
On my other ambari-agent nodes, I can reach the master on port 8440, as follows:
[root@slave2 ~]# telnet master 8440
Trying 192.168.17.128...
Connected to master.
Escape character is '^]'.
Please give me some help, thanks a lot!
I was also getting the same issue, and this worked for me:
In /etc/ambari-agent/conf/ambari-agent.ini, add this line below the [security] section header:
force_https_protocol=PROTOCOL_TLSv1_2
In /etc/python/cert-verification.cfg, change verify from its default to disable:
[https]
verify=disable
Also check JAVA_HOME and the OpenSSL version in your setup.
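After editing both files, restart the agent so the changes take effect (a minimal sketch; the grep lines are just a sanity check of the edits above):
# Confirm the new settings are in place
grep -A1 '\[security\]' /etc/ambari-agent/conf/ambari-agent.ini
grep 'verify=' /etc/python/cert-verification.cfg
# Restart the agent to pick up the changes
ambari-agent restart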
HiveServer2 does not start after installing HDP 2.6.4.0-91 using Cloudbreak on AWS.
I start HiveServer2 in the Ambari UI and check the contents of /var/log/hive/hiveserver2.log.
Below is the error log.
Any help would be appreciated.
Contents of hiveserver2.log
2018-03-08 04:41:53,345 WARN [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(343)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2018-03-08 04:41:53,347 INFO [main]: zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x16203aad5af0040 closed
2018-03-08 04:41:53,347 INFO [main]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:stop(405)) - Shutting down HiveServer2
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2018-03-08 04:41:53,348 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1520480101488_0046 failed 2 times due to AM Container for appattempt_1520480101488_0046_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://ip-10-0-91-7.ap-northeast-2.compute.internal:8088/cluster/app/application_1520480101488_0046 Then click on links to logs of each attempt.
Diagnostics: ExitCodeException exitCode=2: tar: Removing leading `/' from member names
tar: Skipping to next header
gzip: /hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz: invalid compressed data--format violated
tar: Exiting with failure status due to previous errors
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
I had exactly the same issue with HDP on AWS. FYI, in my case the issue was with HDP version 2.6.4.5-2; I'm going to show how I fixed it using this version because it is the latest at this time.
As the error log shows, the problem is that tez.tar.gz is corrupted, so YARN is unable to decompress it in the YARN container.
This tez.tar.gz file is copied from hdfs:///hdp/apps/<hdp_version>/tez/tez.tar.gz.
To reproduce the error and confirm that this file is corrupted, you can run the following commands:
sudo su
su hdfs
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
tar -xvzf tez.tar.gz
You will get the following error:
gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
The fix is pretty simple: just replace the HDFS file with the one on your local file system by running the following commands:
hdfs dfs -rm /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
hdfs dfs -put /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
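Before restarting, you can optionally confirm that the replacement archive extracts cleanly (a quick sanity check, not part of the original fix; run it as the hdfs user):
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz /tmp/tez.tar.gz
tar -tzf /tmp/tez.tar.gz > /dev/null && echo "archive OK"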
Now restart the HiveServer2 service, and you're done!
NOTE: If something similar happens with other services, you can do the same thing. Please check the following link, which has more details: https://community.hortonworks.com/articles/30096/foxing-broken-targz-and-jar-files-in-hdp-24.html
Hope this helps!
My question is about the installation of an OpenShift environment using Minishift on VirtualBox.
minishift v1.4.1+0f658ea
VirtualBox-5.1.26-117224-Win.exe
The installation is incomplete due to the following error:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.1.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===========================================] 100.00% 0s
-- Starting Minishift VM ... | Unsupported driver: C:\Program
So, to solve this, I simply pointed it at the directory where all the drivers are located in the installation and ran it again:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Starting Minishift VM ... / FAIL E0825 11:20:43.830638 1260 start.go:342]
Error starting the VM: Error getting the state for host: machine does not exist.
Retrying.
| FAIL E0825 11:20:44.297638 1260 start.go:342] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
/ FAIL E0825 11:20:44.612638 1260 start.go:342] Error starting the VM: Error getting the state for host: . Retrying.
Error starting the VM: Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
It says "machine does not exist", shouldn't the machine be created by minishift itself (see te procedure here: blog.novatec-gmbh.de/getting-started-minishift-openshift-origin-one-vm/)
Not sure what is causing this. Please guide.
The main issue with the command, and what it's really complaining about, is that you're passing in an unquoted path:
minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
should have been
minishift.exe start --vm-driver="C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe"
But according to the MiniShift documentation, you should update to VirtualBox 5.1.12+ (which you have) and use the following syntax:
minishift.exe start --vm-driver=virtualbox
Seven months after this question was asked, and using VirtualBox v4.3.30, I can get MiniShift v1.15.1 running with the last command, but I can't get it to accept your previous syntax or even reproduce the same error from it.
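As a side note, if you don't want to pass --vm-driver on every start, newer Minishift releases can also persist the driver choice in their config (a sketch; minishift config set is assumed to be available in your version, so check minishift config --help first):
# Store the driver once, then start without the flag
minishift config set vm-driver virtualbox
minishift start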
While setting up a single-node cluster without Cygwin on Windows 10, I followed this document: Link for Hadoop installation in Windows 10
I am facing the error below while starting HDFS using D:\hadoop-2.6.2.tar\hadoop-2.6.2\hadoop-2.6.2\sbin>start-dfs.cmd
Error message stack trace:
17/01/12 12:25:42 FATAL datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:620)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
17/01/12 12:25:42 INFO util.ExitUtil: Exiting with status 1
Also this error message about starting the NameNode:
17/01/12 12:25:43 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1022)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
17/01/12 12:25:43 INFO util.ExitUtil: Exiting with status 1
[Problem analysis] The permissions on the /data directory are insufficient, so the NameNode cannot be started.
[Solution]
(1) As root, assign ownership of the /data directory to the hadoop user;
(2) empty the /data directory;
(3) reformat the NameNode and restart the Hadoop cluster.
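A rough sketch of those three steps on a Linux-style layout (the hadoop user and the /data path come from the answer above, and Hadoop's sbin is assumed to be on the hadoop user's PATH; on Windows the equivalent is granting the Hadoop user full control of the data directory via folder ACLs):
# (1) as root: give the hadoop user ownership of the data directory
chown -R hadoop:hadoop /data
# (2) empty it (warning: this destroys any existing HDFS data)
rm -rf /data/*
# (3) reformat the NameNode, then restart HDFS
su - hadoop -c "hdfs namenode -format"
su - hadoop -c "start-dfs.sh"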
I'm trying to create a cluster with two nodes using GlassFish 4.1.1 build 1.
One node is local, and the other one is an SSH node. The node is working: if I ping it, it responds OK (Successfully made SSH connection to node node2 (gfNode2)).
I have set up SSH, created the node, and created one instance (i2) on that node, but when I want to start the instance I get:
i2: Could not start instance i2 on node node2 (gfNode2). Command failed on node node2 (gfNode2): Previous synchronization failed at Sep 10, 2016 12:25:27 PM Will perform full synchronization. Removing all cached state for instance i2. CLI802 Synchronization failed for directory config, caused by: remote failure: SynchronizeFiles: Exception reading request Command start-local-instance failed. To complete this operation run the following command locally on host gfNode2 from the GlassFish install location /opt/glassfish4: lib/nadmin start-local-instance --node node2 --sync normal i2
If I run this command on the node 2 machine I get:
./nadmin start-local-instance --node node2 --sync normal i2
Previous synchronization failed at Sep 10, 2016 12:25:27 PM
Will perform full synchronization.
Removing all cached state for instance i2.
Enter admin user name> admin
Enter admin password for user "admin">
CLI802 Synchronization failed for directory config, caused by:
remote failure: SynchronizeFiles: Exception reading request
Command start-local-instance failed.
Any idea what to try next?
Update:
The DAS is reachable, and SSH is working properly (ping-node-ssh works from the DAS).
What I have noticed is that even after I installed with install-node-ssh and created the node with create-node-ssh, node 2 has no files inside.
At /glassfish4/glassfish/nodes/node2/i2 there is only one file, .syncstate, which is empty. The node2/i2 directories are there, but there is nothing in i2. Maybe that is due to: Removing all cached state for instance i2.
This is what I got in the DAS logs:
[2016-09-10T19:31:14.806+0000] [glassfish 4.1] [WARNING] [] [javax.enterprise.system.core] [tid: _ThreadID=106 _ThreadName=admin- listener(5)] [timeMillis: 1473535874806] [levelValue: 900] [[
Could not start instance i2 on node node2 (gfNode2).: Command ' /opt/glassfish4/glassfish/lib/nadmin --_auxinput - --interactive=false start-local-instance --node node2 --sync normal i2' failed on node node2 (gfNode2): Previous synchronization failed at Sep 10, 2016 12:25:27 PM
Will perform full synchronization.
Removing all cached state for instance i2.
Command start-local-instance failed.
CLI802 Synchronization failed for directory config, caused by:
remote failure: SynchronizeFiles: Exception reading request: To complete this operation run the following command locally on host gfNode2 from the GlassFish install location /opt/glassfish4:
lib/nadmin start-local-instance --node node2 --sync normal i2]]
[2016-09-10T19:31:14.818+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.admingui] [tid: _ThreadID=102 _ThreadName=admin-listener(1)] [timeMillis: 1473535874818] [levelValue: 1000] [[
RestResponse.getResponse() gives FAILURE. endpoint = 'https://localhost:4848/management/domain/servers/server/i2/start-instance'; attrs = '{}']]
[2016-09-10T19:31:14.820+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.admingui] [tid: _ThreadID=102 _ThreadName=admin-listener(1)] [timeMillis: 1473535874820] [levelValue: 1000] [[
Error in instanceAction ;
endpoint=https://localhost:4848/management/domain/servers/server/i2/start-instance;attrsMap=null]]
If I try to run the command from node2, I get what is shown in the first code block of the post...
The problem here is that the remote instance i2 can't communicate with the DAS to download its configuration.
You will need to verify:
Is the DAS online?
Is the server where the DAS is reachable by the remote node?
Is SSH communication working properly? (Use the asadmin command ping-node-ssh.)
If you open the server.log file for the instance and on the DAS, that should give you a more detailed error message and indicate whether or not the request is reaching the DAS.
The instance logs are located in:
$GLASSFISH_HOME/glassfish/nodes/node2/i2/logs/server.log
The domain logs are located in:
$GLASSFISH_HOME/glassfish/domains/domain1/logs/server.log
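For reference, the checks above can be run from the DAS host with asadmin (a sketch; the node and instance names are taken from the question):
# Verify SSH connectivity from the DAS to the remote node
asadmin ping-node-ssh node2
# Confirm the node and the instance are registered, and check the instance state
asadmin list-nodes
asadmin list-instances --long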