I am new to Aerospike and need your help to troubleshoot a restore issue. I have Aerospike running on my Mac and it seems to work fine, except that it does not let me restore from an .asb file. I took a backup from an Aerospike instance running on an Ubuntu machine using the asbackup utility, but when I try to restore the .asb file with the asrestore command on my Mac instance, it throws the following error:
asrestore -d ~
restoring: host 127.0.0.1 port 3000 bin_list (null) from directory /home/vagrant
2015-08-25 15:13:43 INFO Add node BB9A9EAAB270008 127.0.0.1:3000
Aug 25 2015 15:13:43 GMT: starting restore: filename: /home/vagrant/BB9A3F5AA1ED512_00000.asb FILE 0x7f63680008c0
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
restore: too many consecutive put failure
Aug 25 2015 15:13:44 GMT: expired 0 : skipped 0 : attempted 0 : [updated 0 not-updated (existed 0 gen-old 0)]
I tried using the -t option to restrict the thread count, but no luck.
Has anyone faced a similar issue?
Looking forward to your help.
Error 20 indicates a bad namespace parameter. Check your server error log for more details. It looks like the namespace recorded in the backup file is not defined in the configuration of the cluster you are trying to load into with asrestore.
Two options:
1. Create a namespace with the same name as the one in the backup file.
2. Write a script to change the namespace name in the backup files to a name that is valid in the cluster you are loading into (see the sketch below).
The backup file format is documented at http://www.aerospike.com/docs/tools/backup/file_format.html
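For option 2, a minimal sketch using sed, assuming the source namespace is called test and the target namespace is called prod (hypothetical names), and that the namespace appears on record lines of the form '+ n <namespace>' plus a '# namespace <name>' meta line, as described in the file-format doc above. Verify against your own .asb files before running.
# GNU sed syntax (on macOS use: sed -i '' ...); edits all backup files in place
sed -i -e 's/^+ n test$/+ n prod/' -e 's/^# namespace test$/# namespace prod/' /home/vagrant/*.asb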
Hadoop 3.3.5
Hive 3.1.3
Tez 0.10.2
I followed the instructions in this link to build Tez 0.10.2 for Hadoop 3.3.5: https://tez.apache.org/install.html
The database is stored in an S3 bucket, and I am able to run 'select count(*) from m1.t1' with hive.execution.engine=mr.
When I set hive.execution.engine=tez and run the same query, I get this error immediately:
2023-02-15T21:21:09,208 INFO [a6e2cd1a-b2c9-42d8-9568-8e0b64677f77 main] client.TezClient: App did not succeed. Diagnostics: Application application_1676506240754_0019 failed 2 times due to AM Container for appattempt_1676506240754_0019_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2023-02-15 21:21:08.730]Exception from container-launch.
Container id: container_1676506240754_0019_02_000001
Exit code: 1
[2023-02-15 21:21:08.732]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
[2023-02-15 21:21:08.733]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
If I set tez.use.cluster.hadoop-libs to true in tez-site.xml, the YARN application starts but fails with an AWS credential error, even though I have set the fs.s3a credentials in Hadoop's core-site.xml, in Hive's hive-site.xml, and as environment variables in .bashrc.
The keys below are masked and shown only as samples:
echo $AWS_ACCESS_KEY_ID
I9U996400005XXXXXXXX
echo $AWS_SECRET_KEY
mPY8GiU6NegNWoVnaODXXXXXXXXXXXXXXXXXXXX
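For reference, the core-site.xml entries mentioned above look roughly like this (a sketch with placeholder values; fs.s3a.access.key and fs.s3a.secret.key are the standard S3A property names):
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>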
hive> set hive.execution.engine=tez;
hive> select count(*) from m1.t1;
Query ID = hdp-user_20230215210146_62ed9fab-5d4a-42a9-bf54-5fb6f84a9048
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1676506240754_0015)
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container INITIALIZING -1 0 0 -1 0 0
Reducer 2 container INITED 1 0 0 1 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 2.03 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1676506240754_0015_3_00, diagnostics=[Vertex vertex_1676506240754_0015_3_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1676506240754_0015_3_00 [Map 1], java.nio.file.AccessDeniedException: s3a://hadoop-cluster/warehouse/tablespace/managed/hive/m1.db/t1: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
I tried adding all fs.s3a properties from core-site.xml to tez-site.xml, and setting fs.s3a.access.key and fs.s3a.secret.key inside the Hive session, but I still get the same error:
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
Question: according to the Tez install instructions,
"Ensure tez.use.cluster.hadoop-libs is not set in tez-site.xml, or if it is set, the value should be false."
But when it is set to false, Tez cannot run at all (the DAGAppMaster class is not found).
When it is set to true, I get the AWS credential error even though I have set the credentials in every possible location and environment variable.
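For reference, the toggle in question is a single property in tez-site.xml (a sketch, shown with the value the install guide recommends):
<property>
  <name>tez.use.cluster.hadoop-libs</name>
  <value>false</value>
</property>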
==========================================================
Update:
Not sure if this is the right answer to this problem, but I finally got it working by adding this property to hive-site.xml:
<property>
  <name>hive.conf.hidden.list</name>
  <value>javax.jdo.option.ConnectionPassword,hive.server2.keystore.password,fs.s3a.proxy.password,dfs.adls.oauth2.credential,fs.adl.oauth2.credential</value>
</property>
By default, all fs.s3a credential properties are treated as hidden config even if you don't set this property. I explicitly added the property and removed all fs.s3a credential-related entries from its value.
Now I can run select count(*) with Tez.
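If it helps, the effective hidden list can be printed from a Hive session to confirm the fs.s3a entries were removed (a quick sanity check, not part of the original fix):
hive> set hive.conf.hidden.list;
hive> select count(*) from m1.t1;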
My question is about the installation of an OpenShift environment using Minishift on VirtualBox.
minishift v1.4.1+0f658ea
VirtualBox-5.1.26-117224-Win.exe
The installation is incomplete due to the following error:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.1.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===========================================] 100.00% 0s
-- Starting Minishift VM ... | Unsupported driver: C:\Program
So, to solve this, I simply passed the directory where all the drivers from the installation are located and ran it again:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Starting Minishift VM ... / FAIL E0825 11:20:43.830638 1260 start.go:342]
Error starting the VM: Error getting the state for host: machine does not exist.
Retrying.
| FAIL E0825 11:20:44.297638 1260 start.go:342] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
/ FAIL E0825 11:20:44.612638 1260 start.go:342] Error starting the VM: Error getting the state for host: . Retrying.
Error starting the VM: Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
It says "machine does not exist", shouldn't the machine be created by minishift itself (see te procedure here: blog.novatec-gmbh.de/getting-started-minishift-openshift-origin-one-vm/)
Not sure what is causing this. Please guide.
The main issue with the command -- and what it's really complaining about -- is that you're passing in an unquoted path:
minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
should have been
minishift.exe start --vm-driver="C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe"
But according to the MiniShift documentation, you should update to VirtualBox 5.1.12+ (which you have) and use the following syntax:
minishift.exe start --vm-driver=virtualbox
7 months after this question was asked and using VirtualBox v4.3.30, I can get MiniShift v1.15.1 running with the last command, but can't get it to accept your previous syntax or even produce the same error from it.
While setting up a single-node cluster without Cygwin on Windows 10, I followed this specific document: Link for Hadoop installation in Windows 10
I am facing the below error while starting HDFS using D:\hadoop-2.6.2.tar\hadoop-2.6.2\hadoop-2.6.2\sbin>start-dfs.cmd
Error message stack trace:
17/01/12 12:25:42 FATAL datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:620)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
17/01/12 12:25:42 INFO util.ExitUtil: Exiting with status 1
Also this error message when starting the NameNode:
17/01/12 12:25:43 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1022)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
17/01/12 12:25:43 INFO util.ExitUtil: Exiting with status 1
[Problem analysis] The permissions on the /data directory are not sufficient, so the NameNode cannot be started.
[Solution]
(1) As root, assign ownership of the /data directory to the hadoop user;
(2) empty the files under the /data directory;
(3) reformat the NameNode and restart the Hadoop cluster.
A sketch of these steps is shown below.
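A minimal sketch of those steps on a Linux-style setup, assuming the storage directory is /data and the service user is hadoop (adjust to your actual dfs.namenode.name.dir / dfs.datanode.data.dir; on Windows, grant the equivalent permissions to the user running the services instead):
# as root: give the Hadoop user ownership of the data directory
chown -R hadoop:hadoop /data
# empty the directory (warning: this discards existing HDFS metadata and blocks)
rm -rf /data/*
# reformat the NameNode and restart the cluster
hdfs namenode -format
start-dfs.sh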
I am trying to run a Pig script in local mode on a single-node cluster, as given below.
hduser#ubuntu:~$ pig -x local -f "/home/hduser/ddsoft/pigscript/FirstUDF.pig"
But I am getting the error below.
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 101: file
'/home/hduser/ddsoft/hive-0.13.1-bin/hcatalog/share/hcatalog/hcatalog-core-0.13.1.jar'
does not exist.
How do I register the jar file mentioned in the error message? I tried updating .bashrc, but it didn't fix the error.
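For context, ERROR 101 ("file ... does not exist") usually means a path referenced by the script, typically in a REGISTER statement, does not exist on disk. A hedged sketch of such a statement in the Pig script; the path below is illustrative and must be changed to wherever hcatalog-core is actually installed on your machine:
-- in FirstUDF.pig: point REGISTER at the real location of the HCatalog jar
REGISTER '/path/to/hcatalog-core-0.13.1.jar';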
I'm using http://simplehtmldom.sourceforge.net/ and file_get_contents() in my web app. file_get_contents() works fine on localhost, but when I upload the web app to the server (Windows Server 2012 R2) I get this error. How do I fix it?
Warning: file_get_contents(): SSL operation failed with code 1. OpenSSL Error messages: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed in E:\cfnic.com\includes\class\PHP_Simple_HTML_DOM_Parser.php on line 75
Warning: file_get_contents(): Failed to enable crypto in E:\cfnic.com\includes\class\PHP_Simple_HTML_DOM_Parser.php on line 75
Warning: file_get_contents(https://www.markafoni.com/kadin/): failed to open stream: operation failed in E:\cfnic.com\includes\class\PHP_Simple_HTML_DOM_Parser.php on line 75
Fatal error: Call to a member function find() on boolean in E:\cfnic.com\includes\theme\category.php on line 159
You basically have to set the environment variable SSL_CERT_FILE to the path of the CA-bundle PEM file downloaded from the following link: http://curl.haxx.se/ca/cacert.pem
It took me a lot of time to figure this out.
For a detailed answer, you can have a look here: https://stackoverflow.com/questions/34590842/cannot-install-composer-on-mac-os-x/34617315
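A minimal sketch on Windows Server, assuming the PEM was saved to the hypothetical path E:\certs\cacert.pem; restart the web server (IIS application pool or Apache/PHP service) afterwards so the PHP process picks up the new variable:
rem run from an elevated command prompt; /M sets the variable system-wide
setx SSL_CERT_FILE "E:\certs\cacert.pem" /M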