I just installed a single-node Hadoop 2.2.0 cluster running on Ubuntu.
I tried a couple of basic example calculations and they work fine.
I then tried to set up Hive 0.12.0, which includes HCatalog.
I actually followed this tutorial.
And when I try to start HCatalog, I always get the following error:
bash $HIVE_HOME/hcatalog/sbin/hcat_server.sh start
dirname: missing operand
Try `dirname --help' for more information.
Started metastore server init, testing if initialized correctly...
/usr/local/hive/hcatalog/sbin/hcat_server.sh: line 91: /usr/local/hive-0.12.0/hcatalog/sbin/../var/log/hcat.out: No such file or directory
Metastore startup failed, see /usr/local/hive-0.12.0/hcatalog/sbin/../var/log/hcat.err
But there's no hcat.err file at all; I'm kind of blocked right now.
Any help would be much appreciated!
Thanks in advance,
Guillaume
I worked out that hcat was not executable in the Hive installation I downloaded.
So just sudo chmod a+x hcat and it works.
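For reference, a sketch of the full command, assuming the hcat script lives at $HIVE_HOME/hcatalog/bin/hcat (adjust the path to your layout):
sudo chmod a+x $HIVE_HOME/hcatalog/bin/hcat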
Related
I created an EMR Spark cluster with the following configuration:
Then I SSHed into the master node, typed the command s3-dist-cp, and got the following error:
s3-dist-cp: command not found
I searched the whole disk but found nothing:
sudo find / -name "*s3-dist-cp*"
Where is the s3-dist-cp command? Thanks!
It turns out I must select "Hadoop"; see the screenshot below:
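For anyone creating the cluster from the AWS CLI instead of the console, a minimal sketch that includes both applications (the name, release label, instance type, and count are placeholders; adjust to your setup):
aws emr create-cluster \
  --name "spark-cluster" \
  --release-label emr-5.30.0 \
  --applications Name=Hadoop Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles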
I am trying to install Hive on my Ubuntu 19.10 machine.
I am using this doc https://phoenixnap.com/kb/install-hive-on-ubuntu.
As mentioned in step 6, where I am trying to initiate the Derby database, I run the command from the right path: ~/apache-hive-3.1.2-bin/bin
schematool -initSchema -dbType derby
But I get this error:
schematool: command not found.
How can I resolve this, please?
I had the same question before.
It may be caused by a wrong configuration file, like hive-site.xml or hive-env.sh; in my case, a stray blank space in a configuration file caused this error.
The default path for schematool is $HIVE_HOME/bin/schematool (~/apache-hive-3.1.2-bin/bin/schematool in your case). Try adding HIVE_HOME to your .bashrc file; that worked for me.
# Hive
export HIVE_HOME=/<your hive path>
export PATH=$PATH:$HIVE_HOME/bin
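Then reload the shell configuration so the new PATH takes effect, and re-run the command (this assumes you are in the same shell session):
source ~/.bashrc
schematool -initSchema -dbType derby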
Try this. Using this command, I resolved the issue:
hive --service schematool -dbType mysql -password hive -username hive -validate
Run ./schematool -initSchema -dbType derby
Don't forget the ./
Background
Yesterday our machine crashed unexpectedly and our AOF file for Redis got corrupted.
Upon trying to start the service with sudo systemctl start redis-server we are greeted with the following logs:
Bad file format reading the append only file: make a backup of your
AOF file, then use ./redis-check-aof --fix
Research
Apparently this looks like a simple error to fix: just execute ./redis-check-aof --fix <filename>.
Except I don't have the smallest idea of where that file is.
I have searched the GitHub discussions for this issue, but unfortunately none of them provides the location of the file:
https://github.com/antirez/redis/issues/4931
The persistence documentation also doesn't mention the location of this file:
https://redis.io/topics/persistence
Specs
These are the specs of the system where I am running Redis:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
Question
Where is this file located?
You have two choices:
Find the configuration file for Redis; normally it's named redis.conf. The dir and appendfilename directives specify the directory and file name of the AOF file (see the example after the next point).
Connect to Redis with redis-cli, and use the CONFIG GET command to get the dir configuration, i.e. CONFIG GET dir. The AOF file should be located under this directory.
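For example, on a default apt install the configuration lives at /etc/redis/redis.conf, and the relevant directives look roughly like this (the paths below are the usual Ubuntu package defaults, not values taken from your system):
grep -E "^(dir|appendfilename)" /etc/redis/redis.conf
dir /var/lib/redis
appendfilename "appendonly.aof"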
The path is typically /var/lib/redis/appendonly.aof, so you will need to run sudo redis-check-aof --fix /var/lib/redis/appendonly.aof
In case you use Docker with a volume mounted at /data, the path to appendonly.aof will be /data/appendonly.aof.
In my case, I was using Docker. I started the Redis server without --appendonly yes, and it started without any issues. I then ran CONFIG GET dir as #for-stack said and got this output:
1) "dir"
2) "/data"
So I checked under the /data path and found the file appendonly.aof
Then I ran /usr/local/bin/redis-check-aof --fix /data/appendonly.aof to fix the issue.
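If the file is only reachable from inside the container, the same fix can be applied through docker exec (the container name my-redis is a placeholder):
docker exec -it my-redis redis-check-aof --fix /data/appendonly.aof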
I ran /path/redis-check-aof --fix /data/appendonly.aof to fix this.
Thanks all.
After setting the Hadoop home path and prefix path in .bashrc and /etc/profile, I am still getting the same error: Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path
If I run the script from crontab I get this error; from the hive> prompt it works fine.
Please help me figure out how to solve this.
Set $HADOOP_HOME in $HIVE_HOME/conf/hive-env.sh
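A minimal sketch of what that looks like (the Hadoop path is an assumption; use your own install location):
# in $HIVE_HOME/conf/hive-env.sh
export HADOOP_HOME=/usr/local/hadoop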
Try loading the user's bash profile in the script, as below:
. ~/.bash_profile
The user's bash_profile holds user-specific configuration (environment variables such as HADOOP_HOME and PATH) that cron jobs do not pick up by default.
See the similar question Hbase commands not working in script executed via crontab
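For reference, a crontab entry following this approach might look like this (the schedule and script path are illustrative only):
0 2 * * * . $HOME/.bash_profile; /home/hadoop/scripts/run_hive_job.sh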
I am setting up Apache Hadoop 2.6 for Pseudo-Distributed Operation by following the instructions provided in the link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I am facing an issue after I execute the command: $ bin/hdfs dfs -put etc/hadoop input
The error message is: put:'input': No such file or directory
How to resolve this?
Also, I have edited hadoop-env.sh with the statement export HADOOP_PREFIX=/usr/local/hadoop, but cannot understand why the shell prints out the warning: /usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 32: export:='/usr/local/hadoop': not a valid identifier
Thanks for the help.
I have fixed this problem.
I created the directory with $ bin/hdfs dfs -mkdir /user/root and the problem got solved, as I was logged in as root on Ubuntu. Earlier, I was using the wrong username, hence the issue.
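For reference, the relative destination input resolves to the HDFS home directory /user/<username>, so the full sequence looks roughly like this (the username root is taken from above; substitute your own login):
$ bin/hdfs dfs -mkdir -p /user/root
$ bin/hdfs dfs -put etc/hadoop input
$ bin/hdfs dfs -ls /user/root/input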