Impala Connection error - hive

I am trying to run the following Impala command in my Cloudera cluster:
impala-shell -i connect 10.223.121.11:21000 -d prod_db -f /home/cloudera/views/a.hql
but I get this error:
Error, could not parse arguments "10.223.121.11:21000"
Could someone help me with this?

The -i flag should be given as -i hostname or --impalad=hostname (without connect).
The connect command is meant to be used from within impala-shell; see "Connecting to impalad through impala-shell" in the Impala documentation.
The default port of 21000 is assumed unless you provide another value.
So this should work:
impala-shell --impalad=10.223.121.11 -d prod_db -f /home/cloudera/views/a.hql
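Alternatively, per the same documentation, connect can be issued from inside the shell (using the question's host and port):
impala-shell
[Not connected] > connect 10.223.121.11:21000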

In my own scenario, I was connected with impala-shell when I suddenly got [Not connected] >. Trying to reconnect failed, and I didn't want to restart my machine (which is another option).
Trying this:
[Not connected] > CONNECT myhostname
did not help either.
Then I realized that my IP had changed.
Switching my IP from dynamic to static fixed it.

How to execute sql script for PostgreSQL database hosted on AWS?

I am trying to run an SQL file against a database hosted on AWS RDS. The command I am using is the following:
psql -v user=myusername -v dbname=postgres -v passwd=mypassword -f ./explorerpg.sql
After running it I get the following result:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
What am I missing? For those curious, I am trying to get Hyperledger Explorer to display the database contents of an AWS blockchain. The SQL script is from Hyperledger Explorer.
Any suggestions are greatly appreciated!
You're missing the -h (or --host) option, so the command is trying to connect to a PostgreSQL server on your localhost. Additionally, the -v flags set psql variables rather than connection options. You want something more like:
psql -U myusername -d postgres -h dbhostname.randomcharacters.us-west-2.rds.amazonaws.com -v passwd=mypassword -f ./explorerpg.sql
The password likely should not be passed this way; a file containing the password (~/.pgpass) is more common. But the key is to find the hostname of your RDS server and specify it with the -h option.
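For reference, a minimal ~/.pgpass sketch using the placeholder values from the command above (the format is hostname:port:database:username:password, and the file must not be world-readable):
# ~/.pgpass
dbhostname.randomcharacters.us-west-2.rds.amazonaws.com:5432:postgres:myusername:mypassword
Then run chmod 0600 ~/.pgpass and drop the password handling from the psql invocation.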

Docker: SSH freezes on login

I can successfully log in to the server using ssh 111.111.111.111 without a password. But after multiple ssh logins, I can't access the server for a while (it freezes when I try to log in).
To tell the whole story, I'm trying to create a generic docker machine using the following command:
docker-machine create \
  --driver generic \
  --generic-ip-address=111.111.111.111 \
  srv
All of the errors are SSH-related, and they occur randomly at different stages:
Error getting SSH command: Something went wrong running an SSH command!
command : cat /etc/os-release
err : exit status 255
output :
or:
if ! type docker; then curl -sSL https://get.docker.com | sh -; fi
SSH cmd err, output: exit status 255:
error installing docker:
After any of these errors I can't log in for a while. Please let me know if any logs or configs are needed.
Since docker-machine runs each provisioning step as a separate ssh command, my provider apparently detected me as a brute-force intruder. Changing the SSH port on the remote server solved the problem.
For more background, please see another question I asked about this:
SSH parallel command execution freeze.
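For anyone hitting the same issue, the change looks roughly like this; port 2222 is just an example, and --generic-ssh-port is, to the best of my knowledge, the generic driver's matching flag:
# on the remote server, in /etc/ssh/sshd_config:
Port 2222
# restart sshd, then point docker-machine at the new port:
docker-machine create \
  --driver generic \
  --generic-ip-address=111.111.111.111 \
  --generic-ssh-port=2222 \
  srv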

ssh -L forward multiple ports

I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the target of a secured machine, and PORT represents the ports I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After performing this, I can access through localhost:PORT.
The main problem occurred now that I actually have 4 of these ports to forward.
My solution is to open 4 shells and search my history backwards for exactly which ports need to be forwarded, then run one command in each shell (filling in passwords and so on).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make it easier to do this?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly what NaN answered: you specify multiple -L arguments. I do this all the time. Here is an example of multi-port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: this is the same as -L localhost:8822:REMOTE_IP_1:22; if you don't specify a bind address, localhost is assumed.
With this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this in a script or automating it if you have many different hosts/ports to forward, or certain specific ones.
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
  Hostname 10.122.0.3
  User username
  LocalForward PORT_1 IP:PORT_1
  LocalForward PORT_2 IP:PORT_2
  LocalForward PORT_3 IP:PORT_3
  LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  for i in ${@:2}
  do
    echo Forwarding port $i
    ssh -N -L $i:localhost:$i $1 &
  done
}
Usage example:
pfwd hostname {6000..6009}
jbchichoko and yuval have given viable solutions, but jbchichoko's answer isn't as flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with ctrl+c because they run in the background. My solution below fixes both flaws:
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap multiple ports
function fsshmap() {
  echo -n "-L 1$1:127.0.0.1:$1 " > $HOME/sh/sshports.txt
  for ((i=($1+1);i<$2;i++))
  do
    echo -n "-L 1$i:127.0.0.1:$i " >> $HOME/sh/sshports.txt
  done
  line=$(head -n 1 $HOME/sh/sshports.txt)
  cline="ssh "$3" "$line
  echo $cline
  eval $cline
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, both my team members and I need access to 3 ports of a non-reachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see params -f and -N) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used user root but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server with that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP) and ports 9006 and 9100 on the reachable machine correspond to the same ports of the target one (they host two web services in my case).
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is facilitating the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for you to refer to.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, and from there login to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by chaining two ssh -L commands with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, from there login to the remote work machine named Host-C, and finally login to the remote work machine Host-D. This is not usually the case but might happen sometimes. It's an extension of Situation 2, and the same logic can be applied to more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C, port_D can be arbitrary numbers except the common port numbers listed here. In Situation 1, port_A and port_B can be the same to simplify the procedure.
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
first it creates a single background process and not one per port
it generates the alias that allows you to kill your tunnels
it binds only to 127.0.0.1 which is a little more secure
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
  TUNNEL="ssh -N "
  echo Port forwarding for ports:
  for i in ${@:2}
  do
    echo " - $i"
    TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
  done
  TUNNEL="$TUNNEL $1"
  $TUNNEL &
  PID=$!
  alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you would be able to connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
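A minimal sketch (port 1080 and the host names are placeholders):
# open a SOCKS proxy on local port 1080
ssh -D 1080 -f -N user@remote-host
# point any SOCKS-aware client at it, e.g.:
curl --socks5-hostname localhost:1080 http://internal-service:8080/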
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
# start
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
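Here $SOCKET is just a file path for the control socket and $HOST is the remote host, e.g. (values are placeholders):
SOCKET=/tmp/fwd.sock
HOST=user@your.remote.com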
I've developed loco to help with ssh forwarding. It can be used to share remote ports 5000 and 7000 locally, at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done using parallel execution via xargs -P 0.
Create a file holding the port bindings, e.g.:
localhost:8080:localhost:8080
localhost:9090:localhost:8080
then run
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
or you can do a one-liner
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: the ssh port-forwards are independent == no single point of failure.
Cons: each ssh port-forward is forked separately, which is somewhat inefficient.
Second, it can be done using the curly-brace expansion feature in bash:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: only a single fork == efficient.
Cons: closing this one ssh process closes all the forwards == single point of failure.
You can use this zsh function (which probably works with bash, too); put it in ~/.zshrc:
ashL () {
  local a=() i
  for i in "${@[2,-1]}"
  do
    a+=(-L "${i}:localhost:${i}")
  done
  autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "${a[@]}"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db@114.39.161.24 {6000..6050} # Forwards the whole range. This is simply shell syntax sugar.

SSH to multiple hosts at once

I have a script which loops through a list of hosts, connecting to each of them with SSH using an RSA key, and then saving the output to a file on my local machine - this all works correctly. However, the commands to run on each server take a while (~30 minutes) and there are 10 servers. I would like to run the commands in parallel to save time, but can't seem to get it working. Here is the code as it is now (working):
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh"
done
How can I speed this up?
You should add & to the end of the ssh call so it runs in the background.
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
I tried using & to send the SSH commands to the background, but I abandoned this because after the SSH commands are completed, the script performs some more commands on the output files, which need to have been created.
Using & made the script skip directly to those commands, which failed because the output files were not there yet. But then I learned about the wait command, which waits for background commands to complete before continuing. Now this is my code, which works:
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
wait
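A small extension of the same idea, in case it helps: recording each background PID lets you wait on the jobs individually and notice failures (a sketch, assuming bash):
pids=()
for host in $HOSTS; do
  echo "Connecting to $host"
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
  pids+=($!)
done
for pid in "${pids[@]}"; do
  wait $pid || echo "remote command failed (pid $pid)"
done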
Try massh http://m.a.tt/er/massh/. This is a nice tool to run ssh across multiple hosts.
The Hypertable project has recently added a multi-host ssh tool. This tool is built with libssh and establishes connections and issues commands asynchronously and in parallel for maximum parallelism. See Multi-Host SSH Tool for complete documentation. To run a command on a set of hosts, you would run it as follows:
$ ht ssh host00,host01,host02 /data/reports/formatted_report.sh
You can also specify a host name or IP pattern, for example:
$ ht ssh 192.168.17.[1-99] /data/reports/formatted_report.sh
$ ht ssh host[00-99] /data/reports/formatted_report.sh
It also supports a --random-start-delay <millis> option that will delay the start of the command on each host by a random time interval between 0 and <millis> milliseconds. This option can be used to avoid thundering herd problems when the command being run accesses a central resource.
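For example, to spread the start of the report script across a five-second window (the delay value is illustrative):
$ ht ssh --random-start-delay 5000 host[00-99] /data/reports/formatted_report.sh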

Cygrunsrv & autossh: A way to embed remote commands in the command line?

I'm using cygrunsrv and autossh on Windows XP to create a service that builds a tunnel to a remote server, but I also want to create another tunnel from the remote server to another server.
I can achieve this with the following command line:
autossh -M 5432 serverA -t 'autossh -M 4321 serverB -N'
but when I set it up in Cygwin through cygrunsrv to make it work as a service:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_NTSERVICE=yes -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30
It's not fully working. The service creates the tunnel to ServerA correctly, but it's not sending the autossh command "autossh -M 4321 serverB -N" to ServerA.
I tried escaping the quotes, but none of my attempts made any difference, and I don't see any command being sent in the autossh logs.
I think the problem is related to the pseudo-terminal, which is not created through cygrunsrv.
I'd like to know if there's a way to fix my cygrunsrv command line to make it work, or should I consider a different approach?
Lionel, try removing the AUTOSSH_NTSERVICE=yes from the cygrunsrv invocation. As /usr/share/doc/autossh/README.Cygwin explains:
Setting AUTOSSH_NTSERVICE=yes in the calling environment ...
change[s] autossh's behavior in three useful
ways:
(1) Add an -N flag to each invocation of ssh, thus disabling shell
access. The idea is that if you're running autossh as a system
service, you're using it to forward ports; it wouldn't make sense to
run a shell session as a system service. (If you think this reasoning
is wrong, please send a bug report to the author or Cygwin maintainer,
and tell us what you're trying to do.)
Despite what the above says, it seems that you may have a good reason for not wanting -N (which suppresses command execution) in your service's ssh invocation. Removing AUTOSSH_NTSERVICE=yes should take care of it. It will have a couple of other minor disadvantages, but you can probably live with it. Read the rest of README.Cygwin for the details.
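Concretely, that would be the same invocation as above with the variable removed, i.e.:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30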